Example of human computation: VizWiz
A system that supports visually impaired users by building people into it: the user photographs a scene and records a question (e.g., "Which can is the corn?"), and people inside the system send back answers in nearly real time.
[Figure: VizWiz architecture — client, remote services, worker interface, database.]
J. Bigham et al.: VizWiz: Nearly real-time answers to visual questions, In UIST, 2010.
Estimating worker ability
Given each worker's answers (YES/NO) to a set of questions whose correct answers are unknown, estimate the workers' abilities and use them to infer the correct answers.
A. P. Dawid and A. M. Skene: Maximum likelihood estimation of observer error-rates using the EM algorithm, Journal of the Royal Statistical Society, Series C (Applied Statistics), 1979.
With $y_{ij}$ worker $j$'s answer to question $i$ and $t_i$ the unknown true answer, the E-step posterior that the true answer is YES is
$$ q_i = \Pr[t_i = 1 \mid \{y_{ij}\}] \propto p \prod_j \theta_j^{y_{ij}} (1 - \theta_j)^{1 - y_{ij}} $$
and the M-step updates are
$$ \theta_j = \frac{\sum_i q_i y_{ij}}{\sum_i q_i}, \qquad \psi_j = \frac{\sum_i (1 - q_i) y_{ij}}{\sum_i (1 - q_i)}, \qquad p = \Pr[t_i = 1]. $$
$\theta_j$ acts like worker $j$'s rate of answering YES on questions whose correct answer is YES (and $\psi_j$ the corresponding rate when the correct answer is NO).
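The E-step/M-step updates above can be sketched in a few lines of NumPy. This is a minimal sketch for the binary YES/NO case; the function and variable names are mine, not from the paper:

```python
import numpy as np

def dawid_skene_em(y, n_iter=50):
    """EM for the binary ability model above.
    y: (n_questions, n_workers) 0/1 matrix of NO/YES answers.
    Returns q (posterior that each true answer is YES), theta
    (P(YES | true YES) per worker), psi (P(YES | true NO) per worker)."""
    n, m = y.shape
    p = 0.5                  # prior Pr[t_i = 1]
    theta = np.full(m, 0.7)  # start workers slightly better than chance
    psi = np.full(m, 0.3)
    eps = 1e-9
    for _ in range(n_iter):
        # E-step: q_i = Pr[t_i = 1 | answers], via the two branch log-likelihoods
        log1 = np.log(p) + (y * np.log(theta + eps)
                            + (1 - y) * np.log(1 - theta + eps)).sum(axis=1)
        log0 = np.log(1 - p) + (y * np.log(psi + eps)
                                + (1 - y) * np.log(1 - psi + eps)).sum(axis=1)
        q = 1.0 / (1.0 + np.exp(log0 - log1))
        # M-step: the closed-form updates from the slide
        theta = (q @ y) / (q.sum() + eps)
        psi = ((1 - q) @ y) / ((1 - q).sum() + eps)
        p = q.mean()
    return q, theta, psi
```

On a toy matrix with two reliable workers and one unreliable one, the posterior `q` recovers the reliable workers' answers and `theta` rises toward 1 for the reliable workers.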
Estimating worker ability and question difficulty
Estimate both each worker's ability and each question's difficulty, and use them to infer the correct answers.
J. Whitehill et al.: Whose vote should count more: Optimal integration of labels from labelers of unknown expertise, In NIPS, 2009.
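For reference, the core assumption of the model cited above is that the probability of a correct answer grows with the product of worker ability and question easiness (notation adapted):

```latex
\Pr\left[\, y_{ij} = t_i \mid \alpha_j, \beta_i \,\right] = \frac{1}{1 + e^{-\alpha_j \beta_i}}
```

where $\alpha_j \in (-\infty, \infty)$ is worker $j$'s ability, $1/\beta_i$ (with $\beta_i > 0$) is question $i$'s difficulty, and both are estimated by EM together with the unknown true answers $t_i$.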
Crowdproof (Find–Fix–Verify): detect proofreading errors and have other workers fix and verify the corrections; the results of each stage are combined into the final result.
Figure 2. Crowdproof is a human-augmented proofreader.
M. Bernstein et al.: Soylent: A word processor with a crowd inside, In UIST, 2010.
Self-correction: show a worker another worker's answer and prompt them to revise their own.
Example: after answering a question (e.g., one about San Francisco), the worker sees "another worker's answer" next to "your answer" and is told, "If you think your answer is wrong, please correct it."
N. Shah and D. Zhou: No oops, you won't do it again: Mechanisms for self-correction in crowdsourcing, In ICML, 2016.
Decision-theoretic control: alternate improvement and evaluation under algorithmic control: decide whether a revision is needed, have a worker revise, then have workers vote (YES/NO) on whether to adopt the revised version, and repeat until the controller decides to stop.
P. Dai et al.: Decision-theoretic control of crowd-sourced workflows, In AAAI, 2010.
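A much-simplified caricature of such a controller: keep paying for improve-and-vote rounds while the estimated utility gain of another round exceeds its cost. All callbacks and values here are hypothetical stand-ins for the paper's decision-theoretic machinery:

```python
def control_workflow(quality, improve, estimate_gain, cost, budget):
    """Toy controller: keep buying improve-and-vote rounds while the
    estimated utility gain of another round exceeds its cost, then stop.
    `improve` and `estimate_gain` are hypothetical stand-ins for posting
    crowd jobs and for the model's utility estimate."""
    spent = 0.0
    while spent + cost <= budget and estimate_gain(quality) > cost:
        quality = improve(quality)  # one improvement job + ballot
        spent += cost
    return quality, spent
```

The real controller estimates the gain from a probabilistic model of worker behavior rather than a fixed formula, but the stop-when-marginal-gain-falls-below-cost structure is the same.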
Quality estimation for general tasks: model creator parameters (bias, reliability) and evaluator parameters (ability, reliability), and estimate the quality of artifacts while accounting for the evaluators' abilities.
Y. Baba and H. Kashima: Statistical quality estimation for general crowdsourcing tasks, In KDD, 2013.
Pairwise HITS: estimate artifact quality from evaluators' pairwise comparisons of artifacts (artifact A vs. artifact B) in a creator–evaluator process.
Y. Baba and H. Kashima: Pairwise HITS: Quality estimation from pairwise comparisons in creator-evaluator crowdsourcing process, In AAAI, 2017.
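One way to picture the mutual dependence: artifact quality is estimated from ability-weighted wins, and evaluator ability from agreement with the current quality ranking. This HITS-flavoured sketch uses my own update rules, not the paper's:

```python
import numpy as np

def pairwise_quality(comparisons, n_items, n_evals, n_iter=20):
    """HITS-flavoured mutual estimation (illustrative update rules only).
    comparisons: list of (evaluator, winner, loser) index triples."""
    quality = np.full(n_items, 0.5)
    ability = np.full(n_evals, 0.5)
    for _ in range(n_iter):
        # items gain quality from ability-weighted wins
        wins = np.zeros(n_items)
        total = np.full(n_items, 1e-9)
        for e, w, l in comparisons:
            wins[w] += ability[e]
            total[w] += ability[e]
            total[l] += ability[e]
        quality = wins / total
        # evaluators gain ability by agreeing with the current ranking
        agree = np.zeros(n_evals)
        count = np.full(n_evals, 1e-9)
        for e, w, l in comparisons:
            agree[e] += float(quality[w] >= quality[l])
            count[e] += 1
        ability = agree / count
    return quality, ability
```

With two consistent evaluators and one who inverts the true order, the iteration both ranks the artifacts correctly and pushes the inconsistent evaluator's estimated ability toward zero.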
Quizz: targeted crowdsourcing through advertising: search queries that experts tend to use serve as targeting clues, and the rate of correct answers is fed back to the advertising system as its conversion rate, so the ad platform itself learns to optimize for contribution yield rather than clicks.
Figure 1: Screenshot of the Quizz system.
P. Ipeirotis and E. Gabrilovich: Quizz: Targeted crowdsourcing with a billion (potential) users, In WWW, 2014.
Finding experts from answers to questions whose correct answers are unknown
An extreme example where majority voting (MV) fails: questions 1–4 are multiple choice (options A–E) and A is the correct answer to every question; the few experts always answer correctly, while the many non-experts answer at random, so the majority vote can produce wrong answers.
J. Li, Y. Baba and H. Kashima: Hyper questions: Unsupervised targeting of a few experts in crowdsourcing, In CIKM, 2017.
Hyper questions: convert subsets of questions (e.g., {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}) into combined "hyper questions" and take the majority vote over the combined answers. The experts' answers to a hyper question coincide (e.g., AAA) and form the majority, while the non-experts' combined answers (ABD, EBC, ...) scatter.
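A sketch of the hyper-question idea (my own minimal implementation, assuming answers are single letters): vote over answer tuples rather than single answers, then read off per-question results:

```python
from collections import Counter
from itertools import combinations

def hyper_question_vote(answers, group_size=3):
    """Majority voting over hyper questions.
    answers[k] is worker k's string of single-letter answers."""
    n_q = len(answers[0])
    decided = {}
    for group in combinations(range(n_q), group_size):
        # a worker's answer to a hyper question is their tuple of answers
        tuples = [tuple(a[q] for q in group) for a in answers]
        winner = Counter(tuples).most_common(1)[0][0]
        for q, ans in zip(group, winner):
            decided.setdefault(q, []).append(ans)
    # each question occurs in several hyper questions; take the mode
    return {q: Counter(v).most_common(1)[0][0] for q, v in decided.items()}
```

On the slide's scenario (a few experts answering A everywhere, non-experts scattering), plain per-question majority voting can pick a wrong answer, while the tuple vote recovers A for every question because only the experts' tuples coincide.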
Discovery by word of mouth: find the targets through recommendations passed from person to person (the MIT team's recursive incentive structure rewarded both finders and the people who recruited them).
Figure 1. Locations in the DARPA Red Balloon Challenge. Figure 2. Example recursive incentive-structure process for the MIT team.
J. Tang et al.: Reflecting on the DARPA Red Balloon Challenge, In Communications of the ACM, 2011.
Modeling how people search for experts: in a collaborative network (e.g., an expert network for customer support, or bug triage in Bugzilla), a task is routed from expert to expert until it reaches one who can solve it. People tend to route tasks to someone whose expertise is neither too close to nor too far from their own; expertise is estimated from the words in a task and from past completion history. Compared against routing by current task load and random routing, the resulting generative model predicts routing decisions and completion times well.
Figure 1: A sample collaborative network.
H. Sun et al.: Analyzing expert behaviors in collaborative networks, In KDD, 2014.
Divide and conquer: split a problem up, work on the pieces collectively, and solve a large problem.
Final step: aggregate the solutions of the subproblems and output the final solution.
Example problem: write a short essay on whether the proverb "even a lie can be expedient" (uso mo hoben) is right; one subproblem produces the passage "lying is not good," another produces "some lies are necessary," and so on.
A. Kulkarni et al.: Collaboratively crowdsourcing workflows with Turkomatic, In CSCW, 2012.
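The divide-and-conquer recursion can be sketched generically. All four callbacks are hypothetical stand-ins for crowd jobs: workers either solve a task directly, or split it into subtasks whose solutions are later merged:

```python
def solve(task, is_simple, do_work, split, merge):
    """Recursive divide-and-conquer in the spirit of the workflow above.
    is_simple: is the task small enough to answer directly?
    do_work:   solve a simple task (one crowd job).
    split:     decompose a task into subtasks (one crowd job).
    merge:     combine subtask solutions into one (one crowd job)."""
    if is_simple(task):
        return do_work(task)
    subresults = [solve(t, is_simple, do_work, split, merge)
                  for t in split(task)]
    return merge(task, subresults)
```

Replacing the callbacks with posted crowd tasks for decomposing, solving, and merging gives the recursive workflow the slide describes.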
Collective decision-making in continuous spaces: following stochastic gradient descent, the group repeatedly moves the current solution; convergence is shown theoretically and experimentally for a variety of cases.
Example problem: deciding budget allocations (science promotion spending, social security spending).
N. Garg et al.: Collaborative optimization for collective decision-making in continuous spaces, In WWW, 2017.
Stimulating ideation by letting workers see one another's ideas.
P. Siangliulue et al.: Toward collaborative ideation at scale: Leveraging ideas from others to generate more creative and diverse ideas, In CSCW, 2015.
Judging idea similarity and relative quality collectively, and using the results for clustering, ranking, and visualization.
Example problem: "How can cheating on regular exams be prevented?" Idea A: "expel students who cheat"; idea B: "rearrange the questions"; another A: "patrol the room during the exam"; another B: "give each student a different version of the questions." Collected judgments: "A and B are similar" / "A and B are not similar," and "A is better" / "B is better."
J. Li, Y. Baba, and H. Kashima: Simultaneous clustering and ranking from pairwise comparisons, In IJCAI, 2018 (to appear).
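The ranking half of this setting can be illustrated with standard Bradley–Terry scores fitted by minorization–maximization from a win matrix. This is a ranking-only sketch; the paper's joint clustering-and-ranking model goes beyond it:

```python
import numpy as np

def bradley_terry(wins, n_iter=100):
    """MM updates for Bradley-Terry preference scores.
    wins[i, j] = number of times item i was preferred to item j."""
    n = wins.shape[0]
    s = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            n_wins = wins[i].sum()
            denom = sum((wins[i, j] + wins[j, i]) / (s[i] + s[j])
                        for j in range(n) if j != i)
            s[i] = n_wins / (denom + 1e-9)
        s /= s.sum()  # fix the scale; only ratios are identified
    return s
```

Given "A is better" / "B is better" judgments aggregated into `wins`, the scores `s` induce a global ranking of the ideas.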