On Optimizing Human-Machine Task Assignments
Andreas Veit
Michael J. Wilber
Rajan Vaish
Serge J. Belongie
James Davis
Vishal Anand
Anshu Aviral
Prithvijit Chakrabarty
Yash Chandak
Sidharth Chaturvedi
Chinmaya Devaraj
Ankit Dhall
Utkarsh Dwivedi
S. Gupte
S. N. Sridhar
Karthik Paga
Anuj Pahuja
Aditya Raisinghani
Ayush Sharma
Shweta Sharma
Darpana Sinha
Nisarg Thakkar
K. B. Vignesh
Utkarsh Verma
Kanniganti Abhishek
A. Agrawal
A. Aishwarya
Aurgho Bhattacharjee
S. Dhanasekar
Venkata Karthik Gullapalli
Shuchita Gupta
G. Chandana
Kinjal Jain
Simran Kapur
Meghana Kasula
Shashi Kumar
Parth Kundaliya
U. Mathur
Alankrit Mishra
Aayush Mudgal
Aditya Nadimpalli
M. Nihit
Akanksha Periwal
Ayush Sagar
A. Shah
V. Sharma
Yashovardhan Sharma
F. Siddiqui
Virender Singh
S. Abhinav
Anurag D. Yadav

Abstract
When crowdsourcing systems are used in combination with machine inference systems in the real world, they benefit the most when the machine system is deeply integrated with the crowd workers. However, if researchers wish to integrate the crowd with "off-the-shelf" machine classifiers, this deep integration is not always possible. This work explores two strategies to increase accuracy and decrease cost in this setting. First, we show that reordering the tasks presented to the human can yield a significant accuracy improvement. Second, we show that greedily choosing parameters to maximize machine accuracy is sub-optimal, and that jointly optimizing the combined system improves performance.
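To make the second claim concrete, the sketch below contrasts the two tuning strategies on a toy binary task. It is illustrative only and not the paper's algorithm: the simulated classifier scores, the crowd accuracy (HUMAN_ACC), the routing budget (BUDGET), and the rule of sending the most ambiguous items to humans are all assumptions made for the example.

```python
# Hypothetical illustration (not the paper's actual method): a classifier
# threshold tuned for machine accuracy in isolation ("greedy") versus the
# same threshold tuned for the accuracy of the combined human-machine
# pipeline, where the most ambiguous items are routed to crowd workers.
import numpy as np

rng = np.random.default_rng(0)

# Simulated binary task with asymmetric score noise (all numbers assumed).
n = 20_000
y = rng.integers(0, 2, n)
score = np.where(y == 1,
                 rng.normal(0.70, 0.10, n),   # positives: tight scores
                 rng.normal(0.35, 0.25, n))   # negatives: noisy scores
score = np.clip(score, 0.0, 1.0)

HUMAN_ACC = 0.95   # assumed crowd-worker accuracy
BUDGET = 0.20      # fraction of items the budget lets humans answer

def machine_accuracy(theta):
    # Accuracy when the machine answers every task alone.
    return ((score >= theta) == (y == 1)).mean()

def combined_accuracy(theta):
    # Route the BUDGET fraction of items nearest the decision boundary
    # (the most ambiguous ones) to crowd workers; machine does the rest.
    ambiguity = np.abs(score - theta)
    k = int(BUDGET * n)
    human_idx = np.argpartition(ambiguity, k)[:k]
    correct = ((score >= theta) == (y == 1)).astype(float)
    correct[human_idx] = HUMAN_ACC  # expected correctness of crowd answers
    return correct.mean()

thetas = np.linspace(0.05, 0.95, 91)
theta_greedy = thetas[np.argmax([machine_accuracy(t) for t in thetas])]
theta_joint = thetas[np.argmax([combined_accuracy(t) for t in thetas])]

print(f"greedy theta={theta_greedy:.2f} -> combined {combined_accuracy(theta_greedy):.3f}")
print(f"joint  theta={theta_joint:.2f} -> combined {combined_accuracy(theta_joint):.3f}")
```

Because the crowd absorbs errors in the band around the decision boundary, the threshold that is best for the pipeline can differ from the one that is best for the machine alone, which is the intuition behind optimizing the combined system jointly rather than greedily.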