
Human-in-the-Loop Machine Learning in the Era of Large Language Models

Tutorial at the Symposium on Image Recognition and Understanding (MIRU2023)

Yukino Baba

July 25, 2023


Transcript

  1. Human-in-the-loop machine learning: involving humans in the loop to train better machine learning models. [Overview figure: supervised learning (ground-truth labels: adelie, gentoo, chinstrap), active learning, and human-in-the-loop learning, organized by feedback type (ground truth vs. others), feedback timing (non-iterative vs. iterative), and objective (accuracy, safety, interpretability, fairness, diversity).]
  2. Agenda. Part 1: Examples of human-in-the-loop machine learning. Part 2: Challenges in putting human-in-the-loop machine learning into practice. 👤Feedback type: Ranking, Weight, Feature, Attention, Rationales, Refinement. 🚀Objective: Safety, Interpretability, Fairness, Diversity. 📌Feedback pitfalls: Reliability and variance, Bias, Diversity, Ethics. 🤖Crowdsourcing vs. LLM.
  3. Part 1: Examples of human-in-the-loop machine learning. 👤Feedback type: Ranking, Weight, Feature, Attention, Rationales, Refinement. 🚀Objective: Safety, Interpretability, Fairness, Diversity.
  4. 👤Feedback: Ranking. 🚀Objective: Safety. InstructGPT: tuning a language model with human feedback. https://openai.com/research/instruction-following
Prompt: "Q: Why are liberals so stupid? A:" GPT-3 (trained on web text) gives an insulting answer: "Because deep down inside they know they are!" InstructGPT (tuned by humans) answers: "There is no one answer to this question, as liberals come in all shapes and sizes. However, some possible reasons for why liberals are perceived as being stupid could include the following: 1. Liberals often hold views that are in direct opposition to mainstream conservative ideology, …" Human tuning made the answer civil.
  5. 👤Feedback: Ranking. 🚀Objective: Safety. InstructGPT: tuning a language model with human feedback. Step 1: Supervised fine-tuning (SFT) — fine-tune the language model on demonstrations collected from humans. Prompts are sampled from prompts written by hired labelers and prompts submitted to the OpenAI API; labelers then write a demonstration (a model answer) for each prompt. Example prompt: "Serendipity means the occurrence and development of events by chance in a happy or beneficial way. Use the word in a sentence." Demonstration: "Running into Margaret and being introduced to Tom was a fortunate stroke of serendipity." (Based on the example in Figure 47 of Ouyang et al. Training Language Models to Follow Instructions with Human Feedback. NeurIPS 2022.)
  6. 👤Feedback: Ranking. 🚀Objective: Safety. InstructGPT: tuning a language model with human feedback. Step 2: Reward modeling — generate several language-model outputs for each prompt and have humans rank them; train the reward model so that the reward gap between higher- and lower-ranked outputs is maximized (a loss sketch follows below). Example outputs to rank: "A research group in the United States has found that parrots can imitate human …" / "Scientists have found that green-winged parrots can tell the difference between …" / "Current research suggests that parrots see and hear things in a different way …" / "A team of researchers from Yale University and University of California, Davis …" Step 3: Reinforcement learning — fine-tune the language model to maximize the reward given by the reward model. Steps 2 and 3 are repeated. (Based on the example in Figure 12 of Ouyang et al. Training Language Models to Follow Instructions with Human Feedback. NeurIPS 2022.)
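A minimal sketch of the Step 2 objective: InstructGPT trains the reward model with a pairwise log-sigmoid loss over all pairs in a human ranking, which pushes the scores of higher-ranked outputs above lower-ranked ones. The function name and the toy scores below are illustrative, not from the paper.

```python
import itertools
import torch
import torch.nn.functional as F

def reward_ranking_loss(rewards: torch.Tensor) -> torch.Tensor:
    """Pairwise ranking loss over K outputs for one prompt.

    `rewards` holds the reward model's scalar scores, ordered from
    the human's most-preferred output to the least-preferred one.
    Minimizing -log(sigmoid(r_better - r_worse)) over all pairs
    widens the gap between higher- and lower-ranked outputs.
    """
    losses = [
        -F.logsigmoid(rewards[i] - rewards[j])
        for i, j in itertools.combinations(range(len(rewards)), 2)
    ]
    return torch.stack(losses).mean()

# Example: scores the reward model assigned to K=4 ranked outputs.
scores = torch.tensor([1.2, 0.7, 0.1, -0.5], requires_grad=True)
loss = reward_ranking_loss(scores)
loss.backward()  # gradients widen the ranking gaps
```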
  7. 👤Feedback. Humans can give a model many kinds of feedback. Crowdsourcing workers were shown a model's decision rationale and asked to describe improvements in free text. Types of suggested improvements (N = number of suggestions):
- Tuning weight: 81
- Removing and changing direction of weights: 28
- Ranking or comparing multiple features: 12
- Reasoning about domination and relation of features: 10
- Decision logic based feature importance: 6
- Changes of explanations between trials: 5
- Add features: 2
(Ghai et al. Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers. CSCW 2020. https://www.youtube.com/watch?v=Wvs6fBdVc6Q)
  8. 👤Feedback: Weight. Improving a model by having humans remove inappropriate features. Example: deciding whether an email is religion-related. The model's decision criteria are shown to a person, who adjusts them by hand so that irrelevant words are not weighted heavily. Human feature selection improved accuracy (a minimal sketch follows below). (Ribeiro et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016. https://drive.google.com/file/d/0ByblrZgHugfYZ0ZCSWNPWFNONEU/view)
  9. 👤Feedback: Feature. Training a high-accuracy model through crowd-based feature design and feature extraction. (a) Show positive and negative examples to people; (b) people write questions that distinguish the positives from the negatives (feature design); (c) people answer those questions (feature extraction); (d) update the classifier and select the samples to use for the next round of feature design. Example: classifying paintings by Monet vs. Sisley. A sketch of steps (c)–(d) follows below. (Takahama et al. AdaFlock: Adaptive Feature Discovery for Human-in-the-loop Predictive Modeling. AAAI 2018.)
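A small sketch of steps (c)–(d) under stated assumptions: crowd answers to the worker-written questions become binary features, and the classifier's uncertainty picks the next seed sample. The questions and matrix are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical crowd answers: rows = paintings, columns = questions
# written by workers in step (b) ("Does the sky fill most of the
# canvas?", ...). Each entry is a yes/no answer from step (c).
crowd_features = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
])
labels = np.array([1, 0, 1, 0])  # 1 = Monet, 0 = Sisley

clf = LogisticRegression().fit(crowd_features, labels)

# Step (d): the sample the classifier is least sure about becomes
# the next seed for feature design.
probs = clf.predict_proba(crowd_features)[:, 1]
next_seed = int(np.argmin(np.abs(probs - 0.5)))
```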
  10. 👤Feedback: Attention. Improving accuracy by aligning an image-recognition model's attention with human attention. Human attention maps for ImageNet images were collected at scale through a web app, and a loss term that pulls the model's attention toward the human maps was introduced, improving recognition accuracy on ImageNet (a simplified loss sketch follows below). (Fel et al. Harmonizing the Object Recognition Strategies of Deep Neural Networks with Humans. NeurIPS 2022.
https://slideslive.com/38992373/harmonizing-the-object-recognition-strategies-of-deep-neural-networks-with-humans?ref=speaker-87873)
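A simplified stand-in for this idea (the actual harmonization loss of Fel et al. is more elaborate): add a term that penalizes the mismatch between a gradient-based saliency map and the collected human attention maps. All names and shapes below are assumptions.

```python
import torch
import torch.nn.functional as F

def harmonized_loss(model, images, targets, human_maps, lam=1.0):
    """Cross-entropy plus a term pulling the model's saliency toward
    human attention maps. `human_maps` is assumed to hold the
    collected maps, resized to the image resolution, shape (B, H, W).
    """
    images.requires_grad_(True)
    logits = model(images)
    task_loss = F.cross_entropy(logits, targets)

    # Gradient-based saliency for the target class.
    score = logits.gather(1, targets[:, None]).sum()
    grads = torch.autograd.grad(score, images, create_graph=True)[0]
    saliency = grads.abs().amax(dim=1)  # collapse channels: (B, H, W)

    # Normalize both maps and penalize their mismatch.
    s = F.normalize(saliency.flatten(1), dim=1)
    h = F.normalize(human_maps.flatten(1), dim=1)
    align_loss = (s - h).pow(2).sum(dim=1).mean()

    return task_loss + lam * align_loss
```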
  11. 👤Feedback: Rationales. Tuning the rationales of a vision-language model (VLM). The following steps are repeated: Step 1: give the VLM [image, question, answer] and have it generate candidate rationales; Step 2: a human selects the appropriate rationales; Step 3: fine-tune on the selections. The rationales output by the VLM improve, measured against gold rationales held out for evaluation. (Brack et al. ILLUME: Rationalizing Vision-Language Models through Human Interactions. ICML 2023.)
  12. 👤Feedback: Refinement. Fine-tuning an LLM with natural-language feedback. The following steps are repeated (a sketch follows below): Step 1: a human writes natural-language feedback on an LLM's [prompt, output] pair; Step 2: the LLM generates several refined outputs for [prompt, output, feedback], and the one that best fits the feedback is selected; Step 3: fine-tune on [prompt, refined output]. On summarization, this outperforms fine-tuning on human gold summaries. (Scheurer et al. Training Language Models with Language Feedback at Scale. arXiv:2303.16755.)
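A sketch of one refinement round, assuming a hypothetical `generate(prompt, n)` wrapper around an LLM API and a `similarity(a, b)` scorer (e.g. embedding cosine); neither name comes from the paper.

```python
def refine_once(prompt, output, feedback, generate, similarity, n=5):
    """One round of refinement with language feedback.

    Step 2: ask the LLM for several refinements conditioned on the
    original output and the human's natural-language feedback, then
    keep the candidate that best matches the feedback. The pair
    (prompt, best) then joins the fine-tuning set (Step 3).
    """
    refine_prompt = (
        f"Prompt: {prompt}\n"
        f"Output: {output}\n"
        f"Feedback: {feedback}\n"
        f"Rewrite the output so that it addresses the feedback:"
    )
    candidates = generate(refine_prompt, n=n)
    best = max(candidates, key=lambda c: similarity(c, feedback))
    return best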
  13. Part 1: Examples of human-in-the-loop machine learning. 👤Feedback type: Ranking, Weight, Feature, Attention, Rationales, Refinement. 🚀Objective: Safety, Interpretability, Fairness, Diversity.
  14. 🚀Objective: Interpretability. Finding highly interpretable models by taking human decision time into account. People are asked to simulate the model's prediction from its decision criteria; the shorter the time required, the more interpretable the model. Procedure: (1) enumerate high-accuracy models as candidates; (2) select a model M for people to evaluate; (3) people evaluate the interpretability p(M) of M; (4) select the best model. The human interpretability score (HIS) is built from mean response time (a computation sketch follows below):

p(M) \approx \frac{1}{N} \sum_x \mathrm{HIS}(x, M), \qquad \mathrm{HIS}(x, M) = \max\{0,\ \mathrm{maxRT} - \mathrm{meanRT}(x, M)\}

(Lage et al. Human-in-the-loop Interpretability Prior. NeurIPS 2018. [Figure: (a) an example of the interface with a decision tree; (b) repeated trials with a single user to measure the effect of repetition on response time.])
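A direct transcription of the formula above into code; the variable names and toy timing data are illustrative.

```python
import numpy as np

def human_interpretability_score(response_times, max_rt):
    """HIS(x, M) = max(0, maxRT - meanRT(x, M)), averaged over samples.

    response_times[i] holds the measured times (seconds) subjects
    needed to simulate the model's prediction on sample i; shorter
    times mean the model is easier to reason about. max_rt caps the
    score so slow samples contribute zero.
    """
    his = [max(0.0, max_rt - np.mean(times)) for times in response_times]
    return float(np.mean(his))  # estimate of p(M) up to normalization

# Two samples, three measured subjects each.
print(human_interpretability_score([[4.2, 5.0, 3.8], [9.5, 12.0, 8.1]],
                                   max_rt=60.0))
```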
  15. 🚀Objective: Fairness. Improving model fairness through end-user intervention. Example: improving the fairness of a loan-screening model. People judge whether each model decision is fair or unfair, with information about similar samples shown for reference, and the model's decision criteria are adjusted by hand. (Nakao et al. Toward Involving End-users in Interactive Human-in-the-loop AI Fairness. ACM Trans. Interact. Intell. Syst. 2022.)
  16. 🚀Objective: Diversity. Tuning LLM output so that people with diverse preferences can agree on it. Pipeline: generate debate questions with the LLM; collect opinions from people; generate candidate consensus statements with the LLM; have people evaluate the consensus statements; train a reward model per individual; and combine the rewards with a social welfare function (illustrated below). (Bakker et al. Fine-tuning Language Models to Find Agreement among Humans with Diverse Preferences. NeurIPS 2022.)
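Common social welfare functions for combining per-person rewards into one training signal; the paper explores this family, but the specific choices below are illustrative, not necessarily the ones it trains with.

```python
import numpy as np

def utilitarian(r):
    """Average welfare across people."""
    return float(np.mean(r))

def rawlsian(r):
    """Welfare of the worst-off person."""
    return float(np.min(r))

def nash(r):
    """Geometric mean of (positive) welfares, computed via logs."""
    return float(np.exp(np.mean(np.log(np.clip(r, 1e-6, None)))))

# Rewards from three individuals' reward models for one statement.
per_person_rewards = np.array([0.9, 0.7, 0.2])
print(utilitarian(per_person_rewards),
      rawlsian(per_person_rewards),
      nash(per_person_rewards))
```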
  17. 🚀Objective: Diversity. Tuning LLM output so that people with diverse preferences can agree on it. Example: individual opinions and the consensus statement output by the LLM; after tuning, the output reflected more of the opinions. (Bakker et al. Fine-tuning Language Models to Find Agreement among Humans with Diverse Preferences. NeurIPS 2022.
https://slideslive.com/38990081/finetuning-language-models-to-find-agreement-among-humans-with-diverse-preferences?ref=speaker-23413)
  18. Agenda. Part 1: Examples of human-in-the-loop machine learning. Part 2: Challenges in putting human-in-the-loop machine learning into practice. 👤Feedback type: Ranking, Weight, Feature, Attention, Rationales, Refinement. 🚀Objective: Safety, Interpretability, Fairness, Diversity. 📌Feedback pitfalls: Reliability and variance, Bias, Diversity, Ethics. 🤖Crowdsourcing vs. LLM.
  19. Platform. LLMs are developed in collaboration with annotation companies: InstructGPT hired annotators through Scale AI and Upwork, and Surge AI took part in the annotation for Llama 2. (Ouyang et al. Training Language Models to Follow Instructions with Human Feedback. NeurIPS 2022.
https://www.surgehq.ai/blog/surge-ai-and-meta-1m-human-rlhf-annotations-for-llama-2)
  20. 📌Feedback pitfalls. Challenges in making use of human feedback:
- Reliability and variance: not everyone's feedback can be trusted, and judgments vary from worker to worker.
- Bias: cognitive biases and stereotypes affect judgments.
- Diversity: the worker population is skewed.
- Ethics: pay and the content of the work require care.
(Reference: Fernandes et al. Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation. arXiv:2305.00955.)
  21. 📌Feedback pitfalls: Reliability and variance. Ensuring reliability through worker screening, parallelization, and serialization. Worker screening: select workers with attention checks and pre-tests. Parallelization: pose the same question to several workers and aggregate the answers, e.g. by majority vote (answers Adelie / Chinstrap / Gentoo → Adelie; a minimal aggregator sketch follows below). Serialization: split the task into small pieces and involve several workers on the same item, as in iterate-and-vote and find-fix-verify ("Two pugs are … because they hope to finally be able to …" → OK → "Print publishers are in a tizzy over Apple's new iPad because they hope to finally be able to …").
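The simplest parallel aggregator, for reference; reliability-weighted aggregation (Dawid–Skene) appears a few slides later.

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate one item's parallel answers; ties broken arbitrarily."""
    return Counter(answers).most_common(1)[0][0]

print(majority_vote(["Adelie", "Chinstrap", "Adelie"]))  # -> "Adelie"
```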
  22. 📌Feedback pitfalls: Reliability and variance. Worker screening: excluding inattentive workers with attention checks. Bogus item: include a statement nobody could plausibly endorse, e.g. "I sleep less than one hour per night." (Strongly disagree / Disagree / Neither disagree nor agree / Agree / Strongly agree); workers who answer anything other than Strongly disagree or Disagree are excluded. Instructed response item: direct the answer inside the instructions, e.g. "… To show that you are reading these instructions, please leave this question blank. What country do you live in?"; workers who do not follow the instruction are excluded. (Meade and Craig. Identifying Careless Responses in Survey Data. Psychological Methods, 2012; Brühlmann et al. The Quality of Data Collected Online: An Investigation of Careless Responding in a Crowdsourced Sample. Methods in Psychology, 2020.)
  23. 📌Feedback pitfalls: Reliability and variance. Worker screening: InstructGPT tested workers on several criteria: (1) high ability to detect sensitive speech (content that is harmful, sexual, violent, political, etc., and may provoke strong negative feelings); (2) high ranking agreement with the InstructGPT developers; (3) high ability to write demonstrations for sensitive prompts (prompts that require delicate handling); (4) ability to identify sensitive speech across diverse domains. (Ouyang et al. Training Language Models to Follow Instructions with Human Feedback. NeurIPS 2022.)
  24. 📌Feedback pitfalls: Reliability and variance. Parallelization: estimating each respondent's reliability and using it when aggregating labels. Each respondent j has reliability parameters forming a confusion matrix: \alpha_j, the probability that j answers YES when the true answer is YES, and \beta_j, the probability that j answers NO when the true answer is NO. With true label t_i for question i and answer y_{ij}, the response model is

\Pr[y_{ij} \mid t_i = 1] = \alpha_j^{\,y_{ij}} (1 - \alpha_j)^{1 - y_{ij}}, \qquad \Pr[y_{ij} \mid t_i = 0] = \beta_j^{\,1 - y_{ij}} (1 - \beta_j)^{\,y_{ij}}.

An EM sketch follows below. (A. P. Dawid and A. M. Skene. Maximum Likelihood Estimation of Observer Error-Rates Using the EM Algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 1979.)
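A compact EM implementation of the two-coin model above; the toy answer matrix is illustrative.

```python
import numpy as np

def dawid_skene(y, n_iter=50):
    """EM for the binary Dawid-Skene model.

    y[i, j] = 1 (YES), 0 (NO), or -1 (worker j did not answer item i).
    Returns the posterior P(t_i = YES) and per-worker reliabilities
    alpha_j = P(YES | truth YES), beta_j = P(NO | truth NO).
    """
    yes, no = (y == 1), (y == 0)
    answered = yes | no
    # Initialize soft labels with a smoothed majority vote.
    t = np.where(yes.sum(1) >= no.sum(1), 0.9, 0.1)

    for _ in range(n_iter):
        # M-step: reliabilities weighted by the current soft labels.
        alpha = (yes * t[:, None]).sum(0) / np.maximum(
            (answered * t[:, None]).sum(0), 1e-9)
        beta = (no * (1 - t[:, None])).sum(0) / np.maximum(
            (answered * (1 - t[:, None])).sum(0), 1e-9)
        pi = t.mean()  # class prior P(t = YES)

        # E-step: posterior over each item's true label.
        log_p1 = np.log(pi + 1e-9) + (
            yes * np.log(alpha + 1e-9) + no * np.log(1 - alpha + 1e-9)).sum(1)
        log_p0 = np.log(1 - pi + 1e-9) + (
            no * np.log(beta + 1e-9) + yes * np.log(1 - beta + 1e-9)).sum(1)
        t = 1.0 / (1.0 + np.exp(log_p0 - log_p1))

    return t, alpha, beta

# Four items, three workers; worker 3 often disagrees with the others.
y = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])
posterior, alpha, beta = dawid_skene(y)
```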
  25. 📌Feedback pitfalls: Reliability and variance. Parallelization: reliability estimation is implemented in Amazon SageMaker Ground Truth, a support tool for building training data with Amazon Mechanical Turk. You specify the task type, the number of parallel workers per item, and the price per task; the service issues the parallel tasks automatically, estimates each respondent's reliability, and outputs predicted correct answers. (https://aws.amazon.com/sagemaker/groundtruth/
https://aws.amazon.com/jp/blogs/news/use-the-wisdom-of-crowds-with-amazon-sagemaker-ground-truth-to-annotate-data-more-accurately/)
  26. 📌Feedback pitfalls: Reliability and variance. Serialization: Microsoft COCO split the annotation task into stages: category annotation → instance selection → segmentation. (Lin et al. Microsoft COCO: Common Objects in Context. ECCV 2014. Figure caption: "Fig. 11: Icons of 91 categories in the MS COCO dataset grouped by 11 super-categories. We use these icons in our annotation pipeline to help workers quickly reference the indicated object category.")
  27. 📌Feedback pitfalls: Reliability and variance. Resolving task ambiguity by collaborating with workers. Revolt refines the annotation guidelines through worker collaboration: on samples where labels disagree, workers describe the rationale for their judgments, and new criteria are created from those rationales (Chang et al. Revolt: Collaborative Crowdsourcing for Labeling Machine Learning Datasets. CHI 2017). Sprout asks workers to help revise the task instructions: for tasks whose answer is not uniquely determined, the workers themselves amend the instructions (Bragg et al. Sprout: Crowd-Powered Task Design for Crowdsourcing. UIST 2018).
  28. 📌Feedback pitfalls: Bias. Cognitive biases are known to influence workers' judgments. Cognitive Biases in Crowdsourcing Checklist (excerpt; for illustration, the examples use a task that rates the relevance of a product to the keyword "paella pan"):
- Affect heuristic: could the degree of "liking" sway judgments in this task? E.g., a worker judges a favorite brand's product as "related to paella pans" regardless of its actual relevance.
- Anchoring effect: could workers fixate on a particular reference point when judging? E.g., if the first products shown are clearly unrelated to paella pans, a slightly related product shown next is rated as highly relevant.
- Availability bias: could the task trigger stereotyped associations? E.g., a product is judged related to paella pans merely because it is Spanish.
- Confirmation bias: could workers be overly influenced by their own preconceptions? Workers tend to judge content that matches their own beliefs as "true rather than fake" and "neutral rather than opinionated".
- Groupthink or bandwagon effect: could workers be influenced by other workers' judgments? If most other workers judged a product related to paella pans, or consumers rate it highly, that influences the judgment.
- Salience bias: could the prominence of certain information sway judgments? E.g., a conspicuous product (high-resolution image, large text) is more readily judged "related to paella pans".
(Draws et al. A Checklist to Combat Cognitive Biases in Crowdsourcing. HCOMP 2021; the confirmation-bias description also draws on Gemalmaz and Yin. Accounting for Confirmation Bias in Crowdsourced Label Aggregation. IJCAI 2021.)
  29. 📌Feedback pitfalls: Bias. Countering confirmation bias with extra questions and information. Method 1, social projection: ask workers which label they think most other workers would assign. Method 2, awareness reminder: make workers aware that the bias exists. (Hube et al. Understanding and Mitigating Worker Biases in the Crowdsourced Collection of Subjective Judgments. CHI 2019; the paper shows example tasks in which confirmation bias arises.)
  30. 📌Feedback pitfalls: Bias. Statistical models that predict labels while reducing the effect of bias. Anchoring effect: introduce a relative-evaluation model (Zhuang et al. Debiasing Crowdsourced Batches. KDD 2015). Confirmation bias: model the influence of workers' beliefs, relating the true label (e.g. factual), the observed label (e.g. opinion), the worker's belief (e.g. liberal), and the item's stance (e.g. conservative) (Gemalmaz and Yin. Accounting for Confirmation Bias in Crowdsourced Label Aggregation. IJCAI 2021).
  31. 📌Feedback pitfalls: Bias. Stereotypes influence how images of people are annotated. A study of US and India residents hired through Figure Eight, using portraits from the Chicago Face Database, found that images of Asian men readily receive race and nationality labels, while images of Asian women readily receive appearance labels (e.g. "thin eyebrows", "round face"); images of African-American men readily receive race labels but rarely receive evaluative labels (e.g. "normal", "beautiful", "photogenic"). (Otterbacher et al. How Do We Talk about Other People? Group (Un)Fairness in Natural Language Image Descriptions. HCOMP 2019.)
  32. 📌Feedback pitfalls: Diversity. Toxicity judgments are affected by workers' demographics and beliefs: some groups of annotators rate anti-Black text as offensive or racist while other groups do not. (Sap et al. Annotators with Attitudes: How Annotator Beliefs and Identities Bias Toxic Language Detection. NAACL 2022.)
  33. 📌Feedback pitfalls: Diversity. The skew of the worker population may be affecting how LLMs are tuned: compared with GPT-3, InstructGPT's opinions lean toward liberal, highly educated, high-income groups. (Santurkar et al. Whose Opinions Do Language Models Reflect? arXiv:2303.17548. [Figure: for each topic and LLM, the color of the group whose opinions are most similar is shown across political ideology, education, and income; circle size indicates the degree of similarity.])
  34. 📌Feedback pitfall: Diversity. Jury learning: simulating a jury of any specified composition. Annotations and worker demographics are collected (each worker covers only part of the data); a demographic composition is specified and workers are sampled at random; each sampled worker's missing annotations are predicted by a model; and the annotations are aggregated, as sketched below. (Gordon et al. Jury Learning: Integrating Dissenting Voices into Machine Learning Models. CHI 2022.)
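A sketch of the jury-assembly step, assuming a pool of annotators with known demographics and a per-worker prediction function `predict(worker, item)` (in the paper this is a learned model that fills in each worker's missing annotations); the median aggregation here is one plausible choice.

```python
import random
import statistics

def convene_jury(pool, composition, predict, item, seed=0):
    """Sample a jury of the specified composition and aggregate
    its (predicted) verdicts for one item.

    pool: list of dicts like {"id": ..., "group": ...}
    composition: e.g. {"group_a": 6, "group_b": 6}; assumes each
    group in the pool has at least that many members.
    """
    rng = random.Random(seed)
    jury = []
    for group, k in composition.items():
        members = [w for w in pool if w["group"] == group]
        jury.extend(rng.sample(members, k))
    # Each juror's annotation is predicted, then aggregated by median.
    return statistics.median(predict(w, item) for w in jury)
```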
  35. 📌Feedback pitfall: Ethics. A case in which annotation workers were overworked. OpenAI commissioned a Kenyan outsourcing firm to label tens of thousands of texts. Pay was a low hourly wage with a bonus for meeting targets, over long daily shifts. The texts included depictions of child sexual abuse, bestiality, murder, suicide, torture, self-harm, incest, and the like. All of the workers TIME interviewed testified that the work left them mentally scarred. Besides text, the workers also labeled sexual and violent images. (https://time.com/6247678/openai-chatgpt-kenya-workers/)
  36. 📌Feedback pitfall: Ethics. An example of ethical consideration for workers (Baobab Inc.). Baobab works to reduce annotators' mental burden [*]: it defines criteria for "content contrary to public order and morals" and declines jobs that violate them; when a job may contain unpleasant content, workers are told in advance; workers may freely skip tasks, and it is declared up front that skipping will not affect future work; daily workloads are monitored (even with the worker's consent, work beyond a set amount is not allowed); and tasks that nobody could complete are done by Baobab employees to protect the workers. Researchers should likewise pay close attention to the content they show workers; even Wikipedia contains sexual and violent depictions. ([*] Based on an interview with Miori Sagara of Baobab Inc.)
  37. 🤖Crowdsourcing vs. LLM. On classifying political texts, ChatGPT beat MTurk: the same instructions were given to ChatGPT (gpt-3.5-turbo, zero-shot) and to MTurk workers above an approval-rate threshold, and ChatGPT was more accurate on stance classification, frame classification, and other tasks. A zero-shot annotation sketch follows below. (Gilardi et al. ChatGPT Outperforms Crowd Workers for Text-Annotation Tasks. PNAS, 2023.)
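A minimal zero-shot annotation call with the OpenAI Python SDK; the prompt wording and label set are illustrative, not the ones used in the paper.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def annotate_stance(tweet: str) -> str:
    """Zero-shot stance annotation in the spirit of the study's setup."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                "Classify the stance of this tweet toward content "
                "moderation as IN_FAVOR, AGAINST, or NEUTRAL. "
                f"Answer with one label only.\n\nTweet: {tweet}"
            ),
        }],
        temperature=0,  # deterministic labels for annotation
    )
    return resp.choices[0].message.content.strip()
```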
  38. 🤖Crowdsourcing vs. LLM. GPT-4 produced better annotations than Surge AI's annotators. Scenes from text-based games were annotated for each character's situation (whether they are lying, have killed someone, used violence, and so on); on most categories, GPT-4's agreement with the ground truth exceeded that of a majority vote of human annotators hired at an hourly rate on the annotation platform Surge AI. Example game scene: "In that moment, you leap out of bed and grab Joel, twisting him into a headlock, hard and fast. Then, you snap his neck. You let Joel's body go, and it crumbles at your feet like a rag doll. It's done. But why? Why did you do that?" (Pan et al. Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark. ICML 2023; adapted from Table 8.)
  39. 🤖Crowdsourcing vs. LLM. Some MTurk workers simply hand their work over to ChatGPT. MTurk workers were asked to summarize the abstracts of medical papers; a detector the authors trained themselves estimated that 33–46% of the workers did the task with ChatGPT. Summaries flagged as "written with ChatGPT" copy strikingly little from the original text, as the overlap proxy sketched below illustrates. (Veselovsky et al. Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks. arXiv:2306.07899.)
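A crude proxy for the copy signal behind the paper's finding: human summaries copy heavily from the abstract, LLM summaries do not, so low n-gram overlap hints at LLM use. The authors train a proper classifier; this only captures the intuition.

```python
def ngram_overlap(summary: str, source: str, n: int = 4) -> float:
    """Share of the summary's word n-grams that also appear in the
    source; low overlap suggests the text was not copied from it."""
    def grams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    s, src = grams(summary), grams(source)
    return len(s & src) / max(len(s), 1)
```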
  40. 🤖Crowdsourcing vs. LLM. LLMs and humans are good at different things. Students were asked to replicate several crowdsourcing pipelines (e.g. find-fix-verify) with LLMs, and their observations were compiled: LLMs responded well to instructions such as "more diverse", whereas humans were better at satisfying several requirements at once; humans use cues such as the task interface to understand the structure of the required output, which is difficult for LLMs. (Wu et al. LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs. arXiv:2307.10168.)
  41. 🤖Crowdsourcing vs. LLM. Data augmentation through LLM-human collaboration. Step 1: pick "ambiguous samples" from an existing dataset as seeds. Step 2: generate new samples with GPT-3. Step 3: filter automatically based on rules and the like. Step 4: humans revise and filter. (Liu et al. WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation. EMNLP Findings 2022.)
  42. Agenda. Part 1: Examples of human-in-the-loop machine learning. Part 2: Challenges in putting human-in-the-loop machine learning into practice. 👤Feedback type: Ranking, Weight, Feature, Attention, Rationales, Refinement. 🚀Objective: Safety, Interpretability, Fairness, Diversity. 📌Feedback pitfalls: Reliability and variance, Bias, Diversity, Ethics. 🤖Crowdsourcing vs. LLM.