Slide 1

Slide 1 text

Introduction to Multilingual Language Models: The Story So Far and What Comes Next. 2023/9/1, IPSJ 257th SIG-NL meeting. Ryokan Ri (李 凌寒), LINE Corporation

Slide 2

Slide 2 text

Self-introduction ‣ 2018 - 2023: University of Tokyo, Tsuruoka Lab (master's and PhD) ‣ 2023 -: LINE Corporation (NLP engineer). Recently, I wrote a book together with the people who mentored me during my student-days internship.

Slide 3

Slide 3 text

My childhood… (illustration: 私 "me", 中国 "China", 算数 "arithmetic")

Slide 4

Slide 4 text

At school: arithmetic (算数)!

Slide 5

Slide 5 text

Studying in Chinese, then taking the test in Japanese: cross-lingual transfer learning.

Slide 6

Slide 6 text

Cross-lingual transfer learning: building a model that can also handle data in other languages (e.g., Japanese) when training data is available only in one specific language (e.g., Chinese). Q. Why do this? ‣ Because the world has many languages for which labeled data is scarce.

Slide 7

Slide 7 text

Languages of the world and their data volume: 7 languages (Japanese, English, etc.) have abundant data, 222 languages have some, and 2,191 languages have almost none. From Figure 1 of The State and Fate of Linguistic Diversity and Inclusion in the NLP World (Joshi et al., ACL 2020).

Slide 8

Slide 8 text

Is ChatGPT also doing cross-lingual transfer learning?

Slide 9

Slide 9 text

Is ChatGPT also doing cross-lingual transfer learning? According to the experiments in the InstructGPT paper… ‣ 96% of the instruction-tuning data is English ‣ yet the fine-tuned GPT-3 generalizes well to other languages. Q. Why is this possible? ‣ Because the base language model is trained on multiple languages and already has the capacity for cross-lingual transfer.

Slide 10

Slide 10 text

Today's topics ‣ How do we build multilingual language models? ‣ Unsolved mysteries of multilingual language models ‣ What comes next for multilingual language models

Slide 11

Slide 11 text

Today's topics ‣ How do we build multilingual language models? ‣ Unsolved mysteries of multilingual language models ‣ What comes next for multilingual language models

Slide 12

Slide 12 text

Monolingual word vectors: representing the closeness of word meanings in a vector space. Vectors of similar words end up close together: 犬 (dog), 猫 (cat), 虎 (tiger); コンピュータ and 計算機 (both "computer"); function words と, も, ない.
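As a minimal illustration of the idea, here is a sketch with hypothetical 3-dimensional vectors standing in for trained embeddings:

```python
# Word meanings as vectors, similarity as cosine.
# The toy 3-d vectors below are illustrative, not trained embeddings.
import numpy as np

vectors = {
    "犬 (dog)":          np.array([0.9, 0.8, 0.1]),
    "猫 (cat)":          np.array([0.8, 0.9, 0.1]),
    "計算機 (computer)":  np.array([0.1, 0.2, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["犬 (dog)"], vectors["猫 (cat)"]))           # high: similar words
print(cosine(vectors["犬 (dog)"], vectors["計算機 (computer)"]))   # low: unrelated words
```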

Slide 13

Slide 13 text

Multilingual word vectors: representing the closeness of word meanings in one vector space regardless of language: 猫 / cat, 犬 / dog, 計算機・コンピュータ / computer, ない / not, と / to, も / and.

Slide 14

Slide 14 text

How to build multilingual word vectors. ① Learn the vectors with parallel data: in addition to predicting the words surrounding each word in the text, also predict its translation counterparts. From Figure 1 of Bilingual Word Representations with Monolingual Quality in Mind (Luong et al., LatentVar 2015).
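A conceptual sketch of the joint objective (illustrative, not Luong et al.'s implementation): a skip-gram-style loss over monolingual context words, plus the same loss applied to the aligned translation word. All indices and sizes below are made up.

```python
# Joint bilingual skip-gram sketch: monolingual context prediction
# plus a cross-lingual term for the word-aligned translation.
import numpy as np

rng = np.random.default_rng(0)
V, D = 100, 16                      # toy joint vocabulary size and embedding dim
emb_in = rng.normal(size=(V, D))    # input embeddings (one table for both languages)
emb_out = rng.normal(size=(V, D))   # output (context) embeddings

def skipgram_loss(center: int, context: int) -> float:
    """Negative log-probability of a context word given a center word."""
    logits = emb_out @ emb_in[center]
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[context]

# One training example: a Chinese center word, its Chinese context words,
# and its Japanese counterpart from a word-aligned parallel sentence.
center, contexts, translation = 3, [5, 7], 42
loss = sum(skipgram_loss(center, c) for c in contexts)  # monolingual term
loss += skipgram_loss(center, translation)              # cross-lingual term
print(loss)
```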

Slide 15

Slide 15 text

Q. Can multilingual word vectors be built without parallel data? ‣ Yes, by exploiting the structural commonality of languages.

Slide 16

Slide 16 text

The isomorphism assumption for word vector spaces: word embedding spaces trained separately for different languages end up with similar structure. English / Spanish example from Figure 1 of Exploiting Similarities among Languages for Machine Translation (Mikolov et al., arXiv 2013).

Slide 17

Slide 17 text

Obtaining multilingual word vectors via mapping: first, learn monolingual word vectors.

Slide 18

Slide 18 text

Obtaining multilingual word vectors via mapping: learn monolingual word vectors, then map them into a shared space with a linear transformation.
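To make the mapping step concrete, here is a minimal sketch assuming a small seed set of translation pairs: with the linear map constrained to be orthogonal, the best least-squares fit is the classic orthogonal Procrustes solution obtained from an SVD. The toy data stands in for real monolingual embeddings.

```python
# Orthogonal Procrustes: solve min_W ||X W - Y||_F with W orthogonal.
import numpy as np

rng = np.random.default_rng(0)
D, N = 50, 200
X = rng.normal(size=(N, D))                        # source vectors for seed pairs
R, _ = np.linalg.qr(rng.normal(size=(D, D)))       # a hidden "true" rotation
Y = X @ R + rng.normal(scale=0.01, size=(N, D))    # target vectors: rotated + noise

U, _, Vt = np.linalg.svd(X.T @ Y)                  # SVD of the cross-covariance
W = U @ Vt                                         # optimal orthogonal mapping

print(np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))  # tiny residual: spaces align
```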

Slide 19

Slide 19 text

Obtaining multilingual word vectors via mapping.
With parallel data:
‣ Prepare a bilingual dictionary, and learn a mapping that makes the vectors of translation pairs coincide.
Without parallel data (alternating between dictionary induction and mapping learning):
‣ Use adversarial training to learn a mapping under which source- and target-language vectors become indistinguishable.
‣ Induce a bilingual dictionary from the similarity of word-to-word similarity vectors.
Exploiting Similarities among Languages for Machine Translation (Mikolov et al., arXiv 2013); Word translation without parallel data (Lample et al., ICLR 2018); A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings (Artetxe et al., ACL 2018)

Slide 20

Slide 20 text

Learning, via adversarial training, a mapping that makes source- and target-language vectors indistinguishable: because the two vector spaces have roughly the same shape, they largely come into alignment. (This is followed by a step that iteratively refines the mapping.) From Figure 1 of Word translation without parallel data (Lample et al., ICLR 2018).
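The dictionary-induction step that alternates with the mapping can be sketched as follows: once source vectors are mapped into the target space, each source word is paired with its nearest target neighbor. Note that Lample et al. actually use the CSLS criterion to counter hubness; plain cosine nearest neighbors are shown here only for brevity.

```python
# Nearest-neighbor dictionary induction after mapping (simplified).
import numpy as np

def induce_dictionary(src_mapped: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    """Return, for each source word index, the nearest target word index."""
    src_n = src_mapped / np.linalg.norm(src_mapped, axis=1, keepdims=True)
    tgt_n = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    return (src_n @ tgt_n.T).argmax(axis=1)   # cosine nearest neighbor

rng = np.random.default_rng(0)
tgt = rng.normal(size=(30, 8))
src_mapped = tgt + rng.normal(scale=0.01, size=tgt.shape)  # nearly aligned spaces
print(induce_dictionary(src_mapped, tgt))  # recovers the identity pairing
```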

Slide 21

Slide 21 text

Limits of the isomorphism assumption. That said, acquiring multilingual word vectors without any parallel data does not always go well… 😢 For the shapes of two word vector spaces to match:
‣ the training corpora must come from reasonably similar domains;
‣ the languages must be typologically reasonably close.
On the Limitations of Unsupervised Bilingual Dictionary Induction (Søgaard et al., ACL 2018)

Slide 22

Slide 22 text

By the way… is "same meaning in a different language ➡︎ same vector" really right? Even words regarded as translations of each other do not necessarily mean exactly the same thing. Take the proper noun 富士山 / Mt. Fuji: the contexts in which it is mentioned differ from language to language (大和魂 "Japanese spirit", 日本で一番高い山 "the highest mountain in Japan", 観光地 "tourist spot"), and once such cultural connotations are considered, the meanings cannot be called identical. How much semantic equivalence to demand is, in the end, task-dependent.

Slide 23

Slide 23 text

Q. What do you do when building a multilingual model with BERT? ‣ It just works, without having to think very hard about it. From Figure 1 of BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., NAACL 2019).

Slide 24

Slide 24 text

How to build a multilingual BERT: train the encoder on data from multiple languages. Pretraining: Corpus 🇯🇵🇺🇸🇨🇳🇬🇧🇰🇷🇫🇷🇮🇳🇩🇪… → Encoder. Collect monolingual corpora in multiple languages and train the encoder with masked language modeling.
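As a rough illustration of this recipe (not mBERT's actual preprocessing code), the sketch below pools sentences from several monolingual corpora into one training stream and applies BERT-style random masking; the whitespace "tokenizer" and the flat 15% rate are simplifications of a real subword tokenizer and masking scheme.

```python
# Pool monolingual corpora from many languages and apply masked-LM masking.
import random

corpora = {
    "ja": ["私 は 犬 が 好き です"],
    "en": ["i like dogs very much"],
    "de": ["ich mag hunde sehr"],
}

def mask_tokens(tokens: list[str], rate: float = 0.15) -> tuple[list[str], dict[int, str]]:
    """Replace ~rate of the tokens with [MASK]; return masked tokens and targets."""
    masked, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if random.random() < rate:
            masked[i] = "[MASK]"
            targets[i] = tok
    return masked, targets

random.seed(0)
stream = [s.split() for sents in corpora.values() for s in sents]
random.shuffle(stream)  # languages are simply mixed; no alignment signal is used
for tokens in stream:
    print(mask_tokens(tokens))
```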

Slide 25

Slide 25 text

The cross-lingual transfer learning pipeline. Multilingual pretraining: 🇯🇵🇬🇧 → Encoder.

Slide 26

Slide 26 text

The cross-lingual transfer learning pipeline. Multilingual pretraining: 🇯🇵🇬🇧 → Encoder. Fine-tuning: 🇬🇧 → Encoder → Task-specific Module → Output.

Slide 27

Slide 27 text

The cross-lingual transfer learning pipeline. Multilingual pretraining: 🇯🇵🇬🇧 → Encoder. Fine-tuning: 🇬🇧 → Encoder → Task-specific Module → Output. Evaluation: 🇯🇵 → Encoder → Task-specific Module → Output.
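The whole flow can be mimicked end to end with a deliberately tiny stand-in for the encoder: a shared cross-lingual embedding space, a classifier "fine-tuned" on English labels only, and zero-shot evaluation on Japanese. Every vector and word below is made up for illustration.

```python
# Toy cross-lingual transfer: train on English only, evaluate on Japanese.
import numpy as np

# Shared embedding space: translation pairs get (nearly) the same vector.
emb = {
    "good": np.array([1.0, 0.1]),  "良い": np.array([0.95, 0.12]),
    "bad":  np.array([-1.0, 0.2]), "悪い": np.array([-0.9, 0.15]),
}

# "Fine-tuning": logistic regression on the English examples only.
X = np.stack([emb["good"], emb["bad"]])
y = np.array([1.0, 0.0])
w, b = np.zeros(2), 0.0
for _ in range(200):  # plain gradient descent on the logistic loss
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * float(np.mean(p - y))

# Zero-shot evaluation on Japanese: the shared space carries the task over.
for word in ["良い", "悪い"]:
    p = 1 / (1 + np.exp(-(emb[word] @ w + b)))
    print(word, "positive" if p > 0.5 else "negative")
```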

Slide 28

Slide 28 text

Performance of multilingual BERT.
‣ mBERT is pretrained on Wikipedia articles in more than 100 languages.
‣ Across a broad range of tasks, evaluation on languages unseen during fine-tuning scores far above chance rate.
From Table 2 of XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization (Hu et al., ICML 2020).

Slide 29

Slide 29 text

Internal representations of multilingual language models. Analyzing the components of the representation vectors, one can find…
‣ components that separate by language
‣ components where languages mix, regardless of language
From Figure 1 of The Geometry of Multilingual Language Model Representations (Chang et al., EMNLP 2022).
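The flavor of this analysis can be sketched with synthetic data: the difference between per-language mean vectors spans a language-specific direction, and centering each language's representations removes it, leaving a more language-neutral component. This mirrors the spirit, not the exact method, of Chang et al.

```python
# Language-specific vs. language-neutral components (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
D = 16
lang_offset = {"ja": rng.normal(size=D), "en": rng.normal(size=D)}
reps = {lang: rng.normal(size=(100, D)) + off for lang, off in lang_offset.items()}

means = {lang: X.mean(axis=0) for lang, X in reps.items()}
neutral = {lang: X - means[lang] for lang, X in reps.items()}  # per-language centering

# Before centering, language is easy to read off the means; after, that signal is gone.
print(np.linalg.norm(means["ja"] - means["en"]))                      # large: language-specific
print(np.linalg.norm(neutral["ja"].mean(0) - neutral["en"].mean(0)))  # ~0: language-neutral
```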

Slide 30

Slide 30 text

Today's topics ‣ How do we build multilingual language models? ‣ Unsolved mysteries of multilingual language models ‣ What comes next for multilingual language models

Slide 31

Slide 31 text

Q. Why does mBERT acquire multilingual ability without any parallel data?

Slide 32

Slide 32 text

How to build a multilingual BERT: collect monolingual corpora in multiple languages and train the encoder with masked language modeling (pretraining: Corpus 🇯🇵🇺🇸🇨🇳🇬🇧🇰🇷🇫🇷🇮🇳🇩🇪… → Encoder). But why does this learn the cross-language correspondences that cross-lingual transfer requires?

Slide 33

Slide 33 text

A rumor whispered early on: the shared-string hypothesis. Languages of the same family contain many similar strings, and thanks to borrowings, many words even match in spelling: EN sing / DE singen 🎶, EN banana / FR banane 🍌. Because text is split into subwords, the same subword then carries the same meaning across different languages.
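This is easy to eyeball with the public mBERT tokenizer, assuming the transformers library is installed and the model files can be downloaded:

```python
# Inspect subword overlap between cognates with the mBERT tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
for word in ["sing", "singen", "banana", "banane"]:
    print(word, tok.tokenize(word))
# e.g. "singen" may decompose into subwords overlapping with English "sing",
# so a single embedding row ends up serving both languages.
```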

Slide 34

Slide 34 text

A rumor whispered early on: the shared-string hypothesis (also shared across languages: proper nouns, numerals).

Slide 35

Slide 35 text

Testing the shared-string hypothesis. If shared strings are what matters ➡︎ what happens if we train a multilingual BERT in a setting where shared strings never occur at all? It suffices that token IDs from the two languages never overlap. How the experimental settings are built:
‣ shift the Unicode code points of one language's characters so they become different (nonsense) characters, or
‣ shift the token IDs.
Emerging Cross-lingual Structure in Pretrained Language Models (Conneau et al., ACL 2020); Cross-Lingual Ability of Multilingual BERT: An Empirical Study (K et al., ICLR 2020)
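The token-ID manipulation itself is trivial to express; the sketch below (with made-up IDs) shifts one language's IDs into a disjoint range, so no token is shared even when the surface strings coincide:

```python
# Shift one language's token IDs so the two languages share no IDs at all.
VOCAB_SIZE = 30000  # illustrative vocabulary size

def shift_ids(token_ids: list[int], offset: int = VOCAB_SIZE) -> list[int]:
    """Map one language's token IDs into a disjoint ID range."""
    return [i + offset for i in token_ids]

en_ids = [101, 2023, 102]        # toy IDs as produced by a shared tokenizer
fake_en_ids = shift_ids(en_ids)  # now guaranteed disjoint from the other language
print(en_ids, fake_en_ids)
```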

Slide 36

Slide 36 text

Refuting the shared-string hypothesis: cross-lingual transfer performance barely changes between
‣ the unmodified setting, and
‣ the setting adjusted so that no strings are shared.
From Table 1 of Cross-Lingual Ability of Multilingual BERT: An Empirical Study (K et al., ICLR 2020).

Slide 37

Slide 37 text

So where does the cross-lingual ability come from? Surface-level commonalities between languages, such as shared strings, turned out not to be essential for acquiring it. ➡︎ Is the structural commonality of languages what matters? The conditions under which cross-lingual ability emerges have been investigated, but no factor has been conclusively identified as "the important one". Towards a Common Understanding of Contributing Factors for Cross-Lingual Transfer in Multilingual Language Models: A Review (Philippy et al., ACL 2023)

Slide 38

Slide 38 text

So where does the cross-lingual ability come from? What counts as structural commonality between languages? Even when languages differ…
‣ sentences are built by combining words of distinct categories such as nouns and verbs;
‣ people talk about similar kinds of topics.
My feeling is that the Transformer picks up on commonalities like these, but I would like to find a good way to demonstrate it.

Slide 39

Slide 39 text

Today's topics ‣ How do we build multilingual language models? ‣ Unsolved mysteries of multilingual language models ‣ What comes next for multilingual language models

Slide 40

Slide 40 text

Is ChatGPT a general-purpose multilingual model? On a four-choice benchmark, GPT-4 achieves over 70% accuracy in many languages. From https://openai.com/research/gpt-4

Slide 41

Slide 41 text

Is ChatGPT a general-purpose multilingual model? It appears to understand many languages reasonably well.

Slide 42

Slide 42 text

Are multilingual models a solved problem now that we have ChatGPT? The major practical challenge for multilingual models: the trade-off between efficiency and performance.

Slide 43

Slide 43 text

Are multilingual models a solved problem now that we have ChatGPT? The inefficiency of tokenization: even for text with the same content, Japanese consumes roughly twice as many tokens as English. From Figure 5 of Do All Languages Cost the Same? Tokenization in the Era of Commercial Language Models (Ahia et al., NAACL 2023).
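The gap is easy to probe yourself, assuming the tiktoken library is installed (cl100k_base is the encoding used by the GPT-3.5/GPT-4 family); the sentences below are arbitrary examples:

```python
# Compare token counts for English vs. Japanese text of similar content.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
en = "The weather is nice today."
ja = "今日はいい天気ですね。"
print(len(enc.encode(en)), len(enc.encode(ja)))  # Japanese typically costs more tokens
```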

Slide 44

Slide 44 text

Tokenization inefficiency in multilingual models: to shorten the token sequences of a given language, that language needs a larger share of the vocabulary. But enlarging one language's vocabulary makes the other languages pay: computation in those languages becomes needlessly heavy, and their performance may suffer as well.

Slide 45

Slide 45 text

The curse of multilinguality. A phenomenon known as the curse of multilinguality: ‣ as you add languages, cross-lingual task performance improves… until, past a certain number of languages, it starts to drop. (Sketch: performance vs. number of languages.) 😱

Slide 46

Slide 46 text

The curse of multilinguality, in other words, can be understood as multiple languages competing for a single resource: the model's parameters. Scaling up the parameter count alleviates it. From Figure 4 of Unsupervised Cross-lingual Representation Learning at Scale (Conneau et al., ACL 2020).

Slide 47

Slide 47 text

Breaking the curse of multilinguality: there is a limit to how far the parameter count can grow, and inference-time cost becomes a burden too. One solution:
‣ To reduce competition between languages, give each language its own dedicated module.
Lifting the Curse of Multilinguality by Pre-training Modular Transformers (Pfeiffer et al., NAACL 2022)
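In the spirit of this modular approach (a sketch, not a reproduction of Pfeiffer et al.'s architecture), a layer can pair shared weights with a small per-language bottleneck adapter selected by language ID, so languages stop competing for the same parameters:

```python
# Shared layer + per-language residual adapters (illustrative sketch).
import torch
import torch.nn as nn

class LayerWithLanguageAdapters(nn.Module):
    def __init__(self, d_model: int, languages: list[str], bottleneck: int = 32):
        super().__init__()
        self.shared = nn.Linear(d_model, d_model)   # parameters shared by all languages
        self.adapters = nn.ModuleDict({             # parameters private to each language
            lang: nn.Sequential(
                nn.Linear(d_model, bottleneck), nn.ReLU(), nn.Linear(bottleneck, d_model)
            )
            for lang in languages
        })

    def forward(self, x: torch.Tensor, lang: str) -> torch.Tensor:
        h = self.shared(x)
        return h + self.adapters[lang](h)  # residual adapter for this language only

layer = LayerWithLanguageAdapters(d_model=64, languages=["ja", "en", "zh"])
print(layer(torch.randn(2, 64), lang="ja").shape)  # torch.Size([2, 64])
```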

Slide 48

Slide 48 text

Can monolingual models be put to use? Around the world, teams are hard at work building high-performance monolingual LLMs. Could existing monolingual models be combined (Ja LLM + En LLM + Zh LLM + … → multilingual model)? That could save a large share of the resources training requires.

Slide 49

Slide 49 text

Directions toward future multilingual models:
‣ Architectures that optimally arrange language-shared and language-specific modules.
‣ Methods that build a multilingual model by combining existing monolingual models.