
Questions tagged with Text processing & Analytics



Ground Truth text labelling - hiding data columns, and methods of quality control

I have a CSV of sentences that I'd like labelled, and I've identified Ground Truth labelling jobs as a way to do this. Having spent some time exploring the service, I have some questions:

**1)** I can't find a way to display only particular columns to the labellers. For example, if the dataset has a column of IDs for each sentence, that ideally shouldn't be shown to labellers.

**2)** There is either single labelling or multi-labelling, but I would like a way to have two sets of single-selection labels, where the second captures the difficulty of assigning the first:

- Select one for binary classification: a) Yes, b) No
- Select one for difficulty of classification: c) Easy, d) Medium, e) Hard

Can this be done using custom HTML? Is there a guide to writing this? The template it gives you doesn't seem to render as-is. (A rough sketch of the layout I'm after is at the end of this post.)

**3)** There appears to be a maximum payment of $1.20 per task. Is this the case, and why?

**4)** Having not used Mechanical Turk before, are there ways of ensuring people take the work seriously and don't just select random answers? I can see there's an option to have x number of people answer the same question, but is there also a way to insert unambiguous questions for which we already have a 'pre_agreed_label' every nth question, and remove people from the task if they get them wrong?

Thanks!
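To make question 2 concrete, here is roughly the layout I have in mind, sketched with the crowd HTML elements. I haven't managed to get something like this rendering yet, and the Liquid variable and field names below (`task.input.source`, `binary_label`, `difficulty`) are just placeholders from my reading of the docs, not something I've verified:

```html
<!-- Rough sketch of a custom worker template with two single-selection
     groups: a Yes/No label plus a difficulty rating.
     The Liquid variable and the name/value attributes are placeholders. -->
<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>

<crowd-form>
  <p><strong>Sentence:</strong> {{ task.input.source }}</p>

  <h3>Binary classification</h3>
  <crowd-radio-group name="binary_label">
    <crowd-radio-button name="yes" value="yes">Yes</crowd-radio-button>
    <crowd-radio-button name="no" value="no">No</crowd-radio-button>
  </crowd-radio-group>

  <h3>Difficulty of classification</h3>
  <crowd-radio-group name="difficulty">
    <crowd-radio-button name="easy" value="easy">Easy</crowd-radio-button>
    <crowd-radio-button name="medium" value="medium">Medium</crowd-radio-button>
    <crowd-radio-button name="hard" value="hard">Hard</crowd-radio-button>
  </crowd-radio-group>
</crowd-form>
```

My understanding is that a custom task type like this also needs pre-annotation and annotation-consolidation Lambdas to shape the input and merge answers, and the pre-annotation step is where I'd expect to strip out the ID column so workers only ever see the sentence, but I'd appreciate confirmation.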
0 answers · 1 vote · 13 views · asked 6 months ago