This is a machine learning problem that goes beyond AWS Ground Truth.
Usually you cannot measure how confident each annotation is unless you explicitly asked the annotators to report their confidence for each annotation.
Usually you have to provide some gold-standard annotations that you believe are correct: annotate a sample of the PDFs yourself, then check each annotator's performance against that set. You can also compute metrics such as Cohen's Kappa to assess agreement between annotators: https://en.wikipedia.org/wiki/Cohen%27s_kappa
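For example, here is a minimal sketch of scoring one annotator against a gold standard, assuming you have flattened the annotations into parallel lists of per-entity labels (the label values below are hypothetical). It uses scikit-learn's `cohen_kappa_score`:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical per-entity labels, aligned between your gold standard
# and one annotator's output for the same documents.
gold_labels = ["DATE", "NAME", "O", "ADDRESS", "O", "NAME"]
annotator_labels = ["DATE", "O", "O", "ADDRESS", "O", "NAME"]

# Raw agreement with the gold standard.
print("Accuracy:", accuracy_score(gold_labels, annotator_labels))

# Cohen's kappa corrects for agreement expected by chance, so it is
# more informative than raw accuracy when some classes dominate.
print("Cohen's kappa:", cohen_kappa_score(gold_labels, annotator_labels))
```

You can run the same comparison between pairs of annotators (rather than against the gold standard) to measure inter-annotator agreement.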