Jump-start your machine learning project by using the crowd to build your training set. Before you can train your machine learning algorithm, you need to take your raw inputs and label, annotate, or tag them to build your ground truth. Learn how to use the Amazon Mechanical Turk marketplace to perform these tasks. We share Amazon's best practices, developed while training our own machine learning algorithms, and walk you through quickly getting affordable and high-quality training data.
2. What to expect from the session
• How to use Mechanical Turk to build ML datasets
• Best practices at smaller and larger scales
• Lessons learned building datasets for an AWS service
What not to expect
• Detailed tutorial on how to use Mechanical Turk
3. Machine learning requires large-scale data production pipelines
• Readily available: algorithms and compute clusters
• Not readily available: large-scale, high-quality datasets
• Training data is the key differentiator
• Success depends on the quality, scale, and throughput of your dataset production pipelines
Amazon Mechanical Turk can help you build your dataset
5. What is Amazon Mechanical Turk (MTurk)?
• A marketplace for getting simple tasks done in parallel by humans
• Launched in 2005, one of the first AWS services
• Basic unit of work is a HIT, a single, self-contained task
• Example: “How many wolves are in this photo?”
• Requesters use the website or APIs to publish HITs to workers and consume results
• One or more workers per HIT
• Rapid response times
• Simple workflow: HTML template, CSV in, CSV out
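As a rough illustration of the API side of this workflow, the sketch below publishes a single wolf-counting HIT with boto3. The sandbox endpoint, reward, timings, image URL, and the abbreviated HTMLQuestion form are illustrative assumptions, not values from the talk.

```python
# A minimal sketch of publishing one HIT through the MTurk API (boto3),
# using the requester sandbox. All concrete values are illustrative.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Minimal HTMLQuestion: one image, one numeric answer. A production template
# would also populate assignmentId from the worker's query-string parameters.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html><body>
      <form name="mturk_form" method="post" action="https://www.mturk.com/mturk/externalSubmit">
        <input type="hidden" name="assignmentId" value="" id="assignmentId"/>
        <img src="https://example.com/wolves.jpg" width="400"/>
        <p>How many wolves are in this photo?</p>
        <input type="number" name="wolf_count" min="0"/>
        <input type="submit"/>
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

response = mturk.create_hit(
    Title="Count the wolves in a photo",
    Description="Look at one photo and enter how many wolves you see.",
    Keywords="image, counting, animals",
    Reward="0.05",                    # dollars, passed as a string
    MaxAssignments=3,                 # three workers per HIT
    LifetimeInSeconds=24 * 3600,
    AssignmentDurationInSeconds=300,
    Question=question_xml,
)
print("Published HIT:", response["HIT"]["HITId"])
```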
6. What datasets can you build with MTurk?
Almost any domain
• Vision, NLP, psych, etc.
All key task types
• Open-ended questions
• Structured questions
• Binary verifications
ImageNet
• Prof. Fei-Fei Li, Stanford AI Lab
• 21,841 WordNet categories
• 14.1 MM total images
• 1 MM localized examples
• ImageNet Challenge 2010-2016
Search Google Scholar for “Mechanical Turk” + “machine learning”
Result: 6000+ citations
L. Fei-Fei, “ImageNet: Crowdsourcing, Benchmarking and Other Cool Things,” CMU VASC Seminar, March 2010.
7. What kind of quality can you expect?
1. How much do your categories intrinsically overlap?
2. How representative is your “golden set”?
3. How well can workers solve your specific HIT?
[Figure: wolf and dog examples plotted along Feature A and Feature B, illustrating intrinsic category overlap ("Wolves? Dogs?")]
8. What kind of quality can you expect?
1. How much do your categories intrinsically overlap?
2. How representative is your “golden set”?
3. How well can workers solve your specific HIT?
[Figure: the same feature space showing true wolves, golden-set wolves (question 2), and worker-labeled wolves (question 3) as overlapping regions]
10. Dataset construction is highly iterative, so MTurk supports rapid iterations
[Diagram: iteration loop: define objectives -> source data -> define HIT & golden set -> MTurk -> evaluate HIT results -> augment data -> train & test ML algorithm -> repeat]
11. Example: Build a wristwatch classifier
• ML objective: Label wristwatch “shape”
• Dataset objective: ~2000 training examples
• Data source: Amazon Catalog
14. Experiment A: Accuracy is 97.29%
Confusion matrix: rows are MTurk consensus labels, columns are golden set labels (700 items).

                 Circle   Rect   Other   Row total   Precision   Recall
MTurk: Circle       504      3      10         517         97%      99%
MTurk: Rect           0     64       2          66         97%      94%
MTurk: Other          3      1     113         117         97%      90%
Golden total        507     68     125         700
Share of items      72%    10%     18%

Correct answers: 681 / 700 = 97.29% accuracy
Can we do better?
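The precision, recall, and accuracy figures on this slide can be reproduced directly from the confusion matrix; the short sketch below does so in Python. The dictionary layout is just one convenient representation of the table above.

```python
# A minimal sketch that reproduces the Experiment A metrics from the
# confusion matrix above (rows: MTurk consensus labels, columns: golden set).
labels = ["Circle", "Rect", "Other"]
confusion = {                     # confusion[mturk_label][golden_label]
    "Circle": {"Circle": 504, "Rect": 3, "Other": 10},
    "Rect":   {"Circle": 0,   "Rect": 64, "Other": 2},
    "Other":  {"Circle": 3,   "Rect": 1,  "Other": 113},
}

total = sum(sum(row.values()) for row in confusion.values())      # 700
correct = sum(confusion[c][c] for c in labels)                    # 681
print(f"Accuracy: {correct / total:.2%}")                         # 97.29%

for c in labels:
    row_total = sum(confusion[c].values())                # items MTurk labeled c
    col_total = sum(confusion[r][c] for r in labels)      # golden items of class c
    print(f"{c}: precision {confusion[c][c] / row_total:.0%}, "
          f"recall {confusion[c][c] / col_total:.0%}")
```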
15. Experiment B: 2 shape features, 16 categories
• Feature 1: dial face shape (circle/oval, rectangle, tonneau, or other)
• Feature 2: casing shape (circle/oval, rectangle, tonneau, or other)
• 4 x 4 combinations yield 16 categories
19. Many possible experiments; we’ve reported three
Quality levers and the settings we experimented with:
• ML objectives: 1 feature / 3 categories vs. 2 features / 16 categories
• Data sources and segments: held constant
• Golden set: 100 vs. 700 examples; 1 vs. 3 annotators
• HIT design / instructions: picture only, text only, picture and text
• Worker selection: prequalified
• Workers per HIT: 3 vs. 5
• HIT aggregation rules: majority vote vs. majority of filtered workers
• Worker feedback: none
20. Quality control levers
Myth: Dataset quality is an intrinsic property of the MTurk marketplace.
In reality, MTurk provides you with control levers to optimize dataset quality and throughput.
Quality control levers
• ML objectives
• Data sources and segments
• Golden sets
• HIT design / instructions
• Worker selection
• Workers per HIT
• HIT aggregation rules
• Worker feedback
Throughput control levers
• HIT price
• HIT publication rate
21. Best practices: HIT design
• Simplicity of question vs. clarity of answers
  • Prefer questions with limited option sets to open-ended questions (see the sketch after this list)
  • Prefer mutually exclusive, collectively exhaustive option sets
  • Prefer smaller option sets to larger ones
• Ease of learning vs. time to complete HIT
  • Prefer more questions and simpler instructions to fewer questions and more complex instructions
  • Prefer that each possible answer set costs the same time to provide
• Workers optimize their behaviors to your design; don’t “tweak” too much
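One way to keep an option set limited, mutually exclusive, and collectively exhaustive is to render the HIT's answer choices from a single authoritative list, so the form and the downstream aggregation code cannot drift apart. The sketch below does this for the wristwatch shape categories; the field name and HTML rendering are illustrative assumptions.

```python
# A small sketch of a limited, MECE option set rendered as a required
# radio-button group from one authoritative list of categories.
SHAPE_OPTIONS = ["Circle/oval", "Rectangle", "Tonneau", "Other"]   # MECE set

def radio_group(field_name, options):
    """Return the HTML for one required radio-button group."""
    rows = [
        f'<label><input type="radio" name="{field_name}" value="{opt}" required> {opt}</label><br>'
        for opt in options
    ]
    return "\n".join(rows)

print(radio_group("dial_face_shape", SHAPE_OPTIONS))
```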
22. Best practices: Selecting workers
Using MTurk criteria
• Geography
• Worker approval rating
• Total # HITs approved
• Masters status
• Mobile device user
• Political affiliation
• High school graduate
• Bachelor’s degree
• Marital status
• Parenthood status
• Voted in 2012 presidential election
• Smoker
• Car owner
• Handedness
Using your own criteria
• Past performance on your HITs
• Custom tests of domain-specific knowledge
• Custom tests of decision-making ability
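A hedged sketch of how these criteria map onto the MTurk API: built-in qualifications such as approval rating and locale are referenced by their system qualification type IDs, and a custom domain-knowledge test becomes a requester-defined qualification type. The test and answer-key files, thresholds, and locale below are illustrative assumptions.

```python
# A sketch of worker selection via qualification requirements (boto3 MTurk API).
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# Built-in system qualifications: approval rating and locale.
builtin_requirements = [
    {   # Worker approval rating >= 95%
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
    {   # Located in the US (illustrative choice)
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
]

# A custom qualification backed by a domain-knowledge screening test.
# shape_test_questions.xml / shape_test_answers.xml are hypothetical files
# containing QuestionForm and AnswerKey XML.
qual = mturk.create_qualification_type(
    Name="Wristwatch shape screening",
    Description="Short test on labeling watch dial and casing shapes.",
    QualificationTypeStatus="Active",
    Test=open("shape_test_questions.xml").read(),
    AnswerKey=open("shape_test_answers.xml").read(),
    TestDurationInSeconds=600,
)

custom_requirement = {
    "QualificationTypeId": qual["QualificationType"]["QualificationTypeId"],
    "Comparator": "GreaterThanOrEqualTo",
    "IntegerValues": [80],    # require >= 80% on the screening test
}

# Pass builtin_requirements + [custom_requirement] as QualificationRequirements
# when calling create_hit.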
23. Best practices: Assessing results
Aggregating results
• When using multiple workers per HIT, aggregate results with a voting scheme (see the sketch below)
• Align voting with prevalence
  • Moderate prevalence => majority voting
  • Low prevalence => any yes vote
• Either drop split decisions, or force them into a category that can be split later
Worker feedback
• Approve and reject HITs carefully
• Automatic rejections require ironclad reasons
• Adjust selection criteria
• Monitor emails and forums
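The aggregation rules above translate into only a few lines of code. The sketch below applies majority voting for moderate-prevalence categories, an any-yes rule for low-prevalence ones, and routes split decisions into a holding category for a later pass; the function and label names are illustrative.

```python
# A minimal sketch of the voting schemes described on this slide.
from collections import Counter

def aggregate(answers, low_prevalence=False):
    """answers: list of labels from the workers assigned to one HIT."""
    if low_prevalence:
        # Rare positives: a single "yes" is enough to flag the item.
        return "yes" if "yes" in answers else "no"
    counts = Counter(answers)
    label, votes = counts.most_common(1)[0]
    if votes > len(answers) / 2:
        return label
    return "NEEDS_REVIEW"          # split decision; resolve in a later pass

print(aggregate(["Circle", "Circle", "Rect"]))               # Circle
print(aggregate(["Circle", "Rect", "Other"]))                # NEEDS_REVIEW
print(aggregate(["no", "no", "yes"], low_prevalence=True))   # yes
```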
24. Best practices: Boosting quality
• Separate your categories
• Scrutinize false positives and negatives
• Simplify and clarify instructions
• Optimize worker qualifications
• Experiment, experiment, experiment!
Dataset accuracy puts an upper bound on system performance
26. Challenge: Measuring quality over time
Example
• 1 MM HITs @ 100K / week
• 99% confidence level
• Confidence interval (CI) varies with sample size
Three strategies
• Scrutinize @ fixed CI
• Scrutinize @ decreasing CI
• Scrutinize and trust
Potential tactic
• Partition workers and interleave golden sets
[Chart: Credible region width vs. number of answers checked; the width axis runs 0-9% and the sample axis runs 0-6,000 answers]
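The chart's curve can be approximated by modeling the worker error rate with a Beta posterior and reporting the width of the 99% credible interval as the audited sample grows. The sketch below assumes a uniform prior and a roughly 3% observed error rate purely for illustration; neither is a value from the talk.

```python
# A hedged sketch of "credible region width vs. answers checked" using a
# Beta(1, 1) prior on the error rate, updated with the audited answers.
from scipy.stats import beta

def credible_width(n_checked, n_wrong, level=0.99):
    lo, hi = beta.interval(level, 1 + n_wrong, 1 + n_checked - n_wrong)
    return hi - lo

for n in (100, 500, 1000, 3000, 6000):
    wrong = int(0.03 * n)          # assume ~3% observed error rate
    print(f"{n:5d} answers checked -> 99% interval width {credible_width(n, wrong):.2%}")
```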
27. Myth: Workers always fatigue/satisfice with time
• Worker accuracy is stable and predictable
• Check forums for worker discussions of your HITs
Kenji Hata, Ranjay Krishna, Li Fei-Fei, and Michael S. Bernstein, “A Glimpse Far into the Future: Understanding Long-term Crowd Worker Accuracy,” arXiv preprint arXiv:1609.04855 (2016).
28. Challenge: Minimizing cost per training example
ML goal: recognize 20K categories. Dataset goal: 1K examples / category.
Top down (20 questions)
• Build classifier with 100 root categories, not 20K leaf categories
• HIT 1 labels candidate root examples (only 10-20K examples required)
• HIT 2 verifies root members
• Split root categories
• Repeat
Bottom up (data mining)
• Mine your source data for clusters
• HIT 1 assigns labels to clusters
• HIT 2 verifies members of clusters
• Delete cluster members from source data
• Repeat
Divide and conquer to maximize validation rates (a sketch of the bottom-up pass follows)
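As an illustration of the bottom-up pass, the sketch below clusters the source data and outlines the two-HIT flow per round. The choice of k-means and the HIT helper functions named in the comments are assumptions, not details from the talk.

```python
# A hedged sketch of the bottom-up pass: cluster the source data, then send
# one labeling HIT per cluster and a verification HIT per member.
from sklearn.cluster import KMeans

def bottom_up_round(features, item_ids, n_clusters=100):
    """features: (n_items, n_dims) array; returns {cluster_id: [item_id, ...]}."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    clusters = {}
    for item_id, cluster_id in zip(item_ids, km.labels_):
        clusters.setdefault(int(cluster_id), []).append(item_id)
    return clusters

# Two-HIT flow per round (publish_label_hit / publish_verify_hit would be
# hypothetical wrappers around create_hit):
#   clusters = bottom_up_round(features, item_ids)
#   for members in clusters.values():
#       name = publish_label_hit(members[:10])        # HIT 1: name the cluster
#       verified = publish_verify_hit(name, members)  # HIT 2: verify each member
#       dataset[name].extend(verified)                # keep verified examples,
#       remove_from_source(verified)                  # delete them, and repeat
```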
29. Challenge: Success
• Eventually, user input will differ from the data you trained on
• Assess your actual recognition rates
• Use errors to guide expansion of your test and training datasets
31. Amazon Rekognition
A deep learning-based image recognition service: search, verify, and organize millions of images
• Object and scene detection
• Facial analysis
• Face comparison
• Facial recognition
32. What do people see?
• People see a lot more than what is imaged on the retina
  • Vision involves a process called “unconscious inference” in neuroscience
  • The largely unconscious nature of these inferences is confirmed by the study of optical illusions
• In order for a human observer to recognize an image, two neuronal processes come together:
  • Sensory activation from the eyes (referential system)
  • Information from past experience that is stored in distributed regions across the brain (inferential system)
33. What do you see in the yellow bounding box (“region proposal”)?
“a hat”?
34. What do you see in the yellow bounding box (“region proposal”)?
“a baby”!
We “know” from correlation with other image crops and past experience that it’s a baby.
People don’t classify “region proposals” in isolation.
52. Group image verification example
[Screenshot of a group verification HIT: “Does each image contain a cat?” with Yes/No choices per image, sample images and descriptions for reference, a Back button, and a progress indicator (6/200)]
53. How to improve quality and consistency
• Make the HITs “bite-sized”
• Create clear and concise instructions
• Ask multiple people and build consensus
• Include control images to measure performance of workers (see the sketch after this list)
• Use qualifications and white/black lists to control your workforce
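A minimal sketch of the control-image idea: score each worker against items with known answers, then use those scores to maintain the lists that grant or block access to future HITs. The data structures and thresholds are illustrative assumptions.

```python
# Score workers against interleaved control images with known answers.
from collections import defaultdict

GOLDEN = {"img_017": "yes", "img_042": "no"}   # control image -> known answer

def score_workers(assignments):
    """assignments: iterable of (worker_id, image_id, answer) tuples."""
    correct = defaultdict(int)
    seen = defaultdict(int)
    for worker, image, answer in assignments:
        if image in GOLDEN:
            seen[worker] += 1
            correct[worker] += int(answer == GOLDEN[image])
    return {w: correct[w] / seen[w] for w in seen}

scores = score_workers([
    ("W1", "img_017", "yes"), ("W1", "img_042", "no"),
    ("W2", "img_017", "no"),  ("W2", "img_042", "no"),
])
allow_list = [w for w, s in scores.items() if s >= 0.9]   # keep assigning work
deny_list  = [w for w, s in scores.items() if s < 0.5]    # block from future HITs
print(scores, allow_list, deny_list)
```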