
Applications of Information Geometry and Recent Trends in Machine Learning

Oshita Noriaki
November 03, 2018


Slides presented at a study group.


Transcript

1. Applications of Information Geometry and Recent Trends in Machine Learning / Oshita Noriaki

2. Outline of this talk • Mathematics that is sometimes used in machine learning • Information geometry and example applications • Kernel methods and example applications • Basics of Gaussian processes and example applications ※ Since I am a beginner in mathematics, please point out any mistakes, even in the middle of the talk.

3. (Mathematics that may come up.) There is no need to understand all of it.

4. (Mathematics that may come up.)

5. What is information geometry? • The founder of information geometry • He is said to have worked out the framework during his master's and doctoral studies. Prof. Shun-ichi Amari

6. What is information geometry? • The differential geometry of dual affine connections • The geometry of parameter space (≠ the geometry of data space)

7. What is differential geometry? • In a word, geometry that uses derivatives.

8. Curvature and torsion • Curvature. Let

p(s) = (x(s), y(s), z(s)) (a ≤ s ≤ b)

be a curve in 3-dimensional Euclidean space. (The parameter is chosen so that the speed of the motion described by the curve p(s) is constant and equal to 1.) That is, the velocity vector e₁(s) = p′(s) = (x′(s), y′(s), z′(s)) has length 1. Concretely,

e₁(s) ⋅ e₁(s) = x′(s)² + y′(s)² + z′(s)² = 1.
9. Curvature and torsion. Now consider the acceleration vector e₁′(s). Since

0 = d(e₁(s) ⋅ e₁(s))/ds = 2 e₁′(s) ⋅ e₁(s),

e₁′(s) is orthogonal to e₁(s). Writing κ(s) for the length of e₁′(s), we call κ(s) the curvature of the curve p(s). That is,

κ(s) = √(e₁′(s) ⋅ e₁′(s)) = √(x″(s)² + y″(s)² + z″(s)²).
10. Curvature and torsion • Torsion. The Frenet-Serret formulas:

(e₁′; e₂′; e₃′) = [ 0 κ 0 ; −κ 0 τ ; 0 −τ 0 ] (e₁; e₂; e₃)

The derivation is omitted.
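To make the definitions above concrete, here is a small numerical check (my own addition, not from the slides) on a unit-speed helix: the velocity has length 1, the acceleration e₁′(s) is orthogonal to e₁(s), and the length of e₁′(s) is the curvature, analytically 1/2 for this curve. The curve, step size, and evaluation point are illustrative choices; the torsion could be checked the same way from the Frenet-Serret system.

```python
import numpy as np

# Unit-speed helix p(s) = (cos(s/c), sin(s/c), s/c) with c = sqrt(2):
# |p'(s)| = 1, and the curvature is 1/2 analytically.
c = np.sqrt(2.0)
p = lambda s: np.array([np.cos(s / c), np.sin(s / c), s / c])

def deriv(f, s, h=1e-5):
    """Central finite difference approximation of f'(s)."""
    return (f(s + h) - f(s - h)) / (2 * h)

s = 0.7
e1 = deriv(p, s)                        # velocity e1(s) = p'(s)
e1p = deriv(lambda t: deriv(p, t), s)   # acceleration e1'(s)

print(np.linalg.norm(e1))   # ~1.0: the unit-speed assumption holds
print(e1 @ e1p)             # ~0.0: e1'(s) is orthogonal to e1(s)
print(np.linalg.norm(e1p))  # ~0.5: the curvature kappa(s)
```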
11. Dual affine connections. On a Riemannian manifold (M, g) with an affine connection ∇, the affine connection ∇* defined by

X g(Y, Z) = g(∇_X Y, Z) + g(Y, ∇*_X Z), (X, Y, Z ∈ 𝔛(M))

is called the dual affine connection of ∇ with respect to the metric g.
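Two standard examples may help anchor this definition (my addition; both are textbook facts of information geometry): a connection is self-dual, ∇* = ∇, exactly when it is metric-compatible, so the Levi-Civita connection is its own dual; and on a statistical manifold the exponential (e-) and mixture (m-) connections are dual with respect to the Fisher metric.

```latex
% Self-duality of a metric connection: setting \nabla^* = \nabla in the
% duality equation recovers metric compatibility,
%   X\,g(Y,Z) = g(\nabla_X Y, Z) + g(Y, \nabla_X Z).
% Duality of the e- and m-connections under the Fisher metric g:
\[
  X\,g(Y,Z) \;=\; g\bigl(\nabla^{(e)}_X Y,\, Z\bigr)
           \;+\; g\bigl(Y,\, \nabla^{(m)}_X Z\bigr),
  \qquad X, Y, Z \in \mathfrak{X}(M).
\]
```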
12. Applications of information geometry • The natural gradient method • Support vector machines • Boosting • Principal component analysis • ⋮ • Applied to just about everything • For details, see 情報幾何学の新展開 (New Developments in Information Geometry)
13. Applications of information geometry • Bayesian shrinkage prediction for the regression problem (construction of prior distributions for Bayesian prediction under the normal distribution) https://www.sciencedirect.com/science/article/pii/S… • Statistical Inference with Unnormalized Discrete Models and Localized Homogeneous Divergences (estimation theory for unnormalized models) http://jmlr.org/papers/v…
14. Construction of priors for Bayesian prediction under the normal distribution • The setting of this paper. Suppose we observe the following multivariate normal:

y ∼ N_d(y; μ, Σ)

Here N_d is the density function of the d-dimensional multivariate normal distribution with mean μ and covariance Σ.
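A minimal sketch of this observation model in code (my addition; μ, Σ, and d = 2 are arbitrary illustrative values):

```python
import numpy as np
from scipy.stats import multivariate_normal

# y ~ N_d(y; mu, Sigma) with d = 2.
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
dist = multivariate_normal(mean=mu, cov=Sigma)

y = dist.rvs(random_state=0)  # one observation from the model
print(y, dist.pdf(y))         # the density N_d(y; mu, Sigma) at y
```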
15. A brief explanation of kernel methods

16. A key theorem for kernel methods. Let k be a positive definite kernel on a set Ω. Then there exists a unique Hilbert space H_k of functions on Ω satisfying the following three properties:
• k(·, x) ∈ H_k (for any fixed x ∈ Ω);
• the elements of finite-sum form f = Σᵢ₌₁ⁿ cᵢ k(·, xᵢ) are dense in H_k;
• the reproducing property: ⟨f, k(·, x)⟩_{H_k} = f(x) (∀x ∈ Ω, ∀f ∈ H_k).
(The Moore-Aronszajn theorem.)
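A small numerical illustration of the last two properties (my addition; the Gaussian kernel and the expansion points are arbitrary): for a finite expansion f = Σᵢ cᵢ k(·, xᵢ), the RKHS inner product with k(·, x) is Σᵢ cᵢ k(xᵢ, x), which is exactly f(x).

```python
import numpy as np

def k(x, y, sigma=1.0):
    """Gaussian kernel on the real line."""
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

# f = sum_i c_i k(., x_i): a finite expansion in H_k (second property).
xs = np.array([-1.0, 0.5, 2.0])
cs = np.array([0.3, -1.2, 0.7])
f = lambda t: cs @ k(xs, t)

# Since <k(., x_i), k(., x)> = k(x_i, x), the inner product of f with
# k(., x) is sum_i c_i k(x_i, x): the reproducing property.
x = 0.9
inner = cs @ k(xs, x)
print(np.isclose(inner, f(x)))  # True: <f, k(., x)> = f(x)
```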
17. Positive definite kernels. 1.1 Definition and basic properties of positive definite kernels. We first define real-valued positive definite kernels. Let Ω be a set. A kernel k : Ω × Ω → ℝ (on Ω) satisfying the following conditions is called a positive definite kernel:
• Symmetry: for any x, y ∈ Ω, k(x, y) = k(y, x).
• Positivity: for any n ∈ ℕ, x₁, …, xₙ ∈ Ω, and c₁, …, cₙ ∈ ℝ,

Σᵢ,ⱼ₌₁ⁿ cᵢ cⱼ k(xᵢ, xⱼ) ≥ 0.
18. The Gram matrix. Given symmetry, the positivity condition means that the symmetric matrix (k(xᵢ, xⱼ))ᵢⱼ is positive semi-definite. This symmetric matrix is called the Gram matrix.

19. Positive definite vs. positive semi-definite. Let z be a nonzero column vector and M an n × n real symmetric matrix. M is positive definite when zᵀMz > 0 always holds; M is positive semi-definite when zᵀMz ≥ 0 holds, where

zᵀMz = [z₁, z₂, ⋯, zₙ] [ m₁₁ m₁₂ ⋯ m₁ₙ ; m₂₁ m₂₂ ⋯ m₂ₙ ; ⋮ ⋱ ; mₙ₁ mₙ₂ ⋯ mₙₙ ] [z₁; z₂; ⋮; zₙ].
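A quick sketch tying slides 17-19 together (my addition; the Gaussian kernel, its bandwidth, and the random data are arbitrary): build the Gram matrix of a positive definite kernel and confirm it is positive semi-definite.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))  # 20 sample points in R^3

# Gram matrix K with K[i, j] = k(x_i, x_j); symmetric by construction.
K = np.array([[gaussian_kernel(xi, xj) for xj in X] for xi in X])

# Positive semi-definite <=> all eigenvalues >= 0 (up to round-off),
# equivalently z^T K z >= 0 for every z.
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)  # True

z = rng.normal(size=20)
print(z @ K @ z >= 0)           # True for this (and any) z
```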
20. A Linear-Time Kernel Goodness-of-Fit Test • Best paper at NIPS 2017!! • It estimates where a distribution fails to fit. • For example, it judges whether given data can be regarded as coming from the standard normal distribution. • Using the method of "A Kernelized Stein Discrepancy for Goodness-of-fit Tests" (Qiang Liu, Jason D. Lee, Michael I. Jordan), it measures in linear time whether two probability distributions are similar (I plan to write a summary on Qiita later). https://arxiv.org/abs/…
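As a rough sketch of the ingredient named here (my own illustration of the linear-time kernel Stein discrepancy of Liu et al. 2016, not the authors' code, and not the full FSSD with optimized test locations): for p = N(0, 1) in one dimension with a Gaussian kernel, the statistic averages a Stein kernel h_p over disjoint sample pairs, so it runs in O(n) and is near zero exactly when the sample really comes from p.

```python
import numpy as np

def h_p(x, y, sigma=1.0):
    """Stein kernel h_p(x, y) for p = N(0, 1) in 1-D with a Gaussian kernel.
    The score of the standard normal is d/dx log p(x) = -x."""
    sx, sy = -x, -y
    k = np.exp(-(x - y) ** 2 / (2 * sigma ** 2))
    dkx = -(x - y) / sigma ** 2 * k                          # dk/dx
    dky = (x - y) / sigma ** 2 * k                           # dk/dy
    dkxy = (1 / sigma ** 2 - (x - y) ** 2 / sigma ** 4) * k  # d2k/dxdy
    return sx * sy * k + dkxy + sy * dkx + sx * dky

def linear_time_ksd(x):
    """O(n) estimate of KSD^2: average h_p over disjoint sample pairs."""
    x = x[: len(x) // 2 * 2]
    return np.mean(h_p(x[0::2], x[1::2]))

rng = np.random.default_rng(0)
print(linear_time_ksd(rng.normal(size=10000)))            # ~0: H0 plausible
print(linear_time_ksd(rng.normal(1.0, 1.0, size=10000)))  # clearly > 0
```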
21. [Poster] A Linear-Time Kernel Goodness-of-Fit Test. Wittawat Jitkrittum¹, Wenkai Xu¹, Zoltán Szabó², Kenji Fukumizu³, Arthur Gretton¹ (¹Gatsby Unit, University College London; ²CMAP, École Polytechnique; ³The Institute of Statistical Mathematics)

Summary. Given {xᵢ}ᵢ₌₁ⁿ ∼ q (unknown) and a density p, test H₀: p = q vs. H₁: p ≠ q quickly. The new multivariate goodness-of-fit test (FSSD) is: 1. Nonparametric: arbitrary, unnormalized p, x ∈ ℝᵈ. 2. Linear-time: O(n) runtime complexity, fast. 3. Interpretable: it tells where p does not fit the data.

Previous: Kernel Stein Discrepancy (KSD). Let ξ(x, v) := (1/p(x)) ∇ₓ[k(x, v) p(x)] ∈ ℝᵈ and define the Stein witness function g(v) = E_{x∼q}[ξ(x, v)], where g = (g₁, …, g_d) and each gᵢ lies in the RKHS of kernel k. Known: under some conditions, ‖g‖ = 0 ⇔ p = q [Chwialkowski et al., 2016; Liu et al., 2016]. The statistic KSD² = ‖g‖² = E_{x∼q}E_{y∼q} h_p(x, y) is estimated by the double sum (2/(n(n−1))) Σ_{i<j} h_p(xᵢ, xⱼ), where

h_p(x, y) := ∇ₓ log p(x)ᵀ k(x, y) ∇_y log p(y) + ∇ₓᵀ∇_y k(x, y) + ∇_y log p(y)ᵀ ∇ₓ k(x, y) + ∇ₓ log p(x)ᵀ ∇_y k(x, y).

KSD is nonparametric, applicable to a wide range of p, and needs no normalizer of p, but its runtime is O(n²): computationally expensive. The linear-time KSD (LKS) test [Liu et al., 2016], ‖g‖² ≈ (2/n) Σᵢ₌₁^{n/2} h_p(x₂ᵢ₋₁, x₂ᵢ), runs in O(n) but has high variance and low test power.

The Finite Set Stein Discrepancy (FSSD). Idea: evaluate the witness g at J locations {v₁, …, v_J} (fast): FSSD² = (1/(dJ)) Σⱼ₌₁ᴶ ‖g(vⱼ)‖². Proposition (FSSD is a discrepancy measure), main conditions: 1. (Nice kernel) k is C₀-universal and real analytic (its Taylor series converges at any point), e.g., a Gaussian kernel. 2. (Vanishing boundary) lim_{‖x‖→∞} p(x)g(x) = 0. 3. (Avoid "blind spots") the locations {v₁, …, v_J} are drawn from a distribution with a density. Then, for any J ≥ 1, almost surely FSSD² = 0 ⇔ p = q. FSSD is nonparametric, needs no normalizer of p, runs in O(n), and has higher test power than LKS.

Model criticism with FSSD. Proposal: optimize the locations {v₁, …, v_J} and the kernel bandwidth by arg max FSSD²/σ_{H₁} (runtime O(n)). Proposition: this procedure maximizes the true positive rate P(detect difference | p ≠ q). Example: 12K robbery events in Chicago in 2016 with a 10-component Gaussian-mixture model p; the optimized location v* marks where the model does not fit well and is highly interpretable.

Bahadur slope and Bahadur efficiency. The Bahadur slope measures the rate at which the p-value of a statistic goes to 0 under H₁ (high is good); the Bahadur efficiency is the ratio of two tests' slopes (> 1 means the first test is better). Proposition: for p = N(0, 1) and q = N(μ_q, 1), with the FSSD bandwidth fixed to 1, for every μ_q ≠ 0 there exists v ∈ ℝ such that for every LKS bandwidth, slope(FSSD)/slope(LKS) > 2: FSSD is statistically more efficient than LKS.

Experiment: Restricted Boltzmann Machine with 40 binary hidden units and d = 50 visible units, significance level α = 0.05; q is obtained by perturbing one weight of the model p. FSSD-opt and FSSD-rand (proposed, J = 5 optimized/random locations) are compared with MMD-opt [Gretton et al., 2012] (state-of-the-art quadratic-time two-sample test), ME-opt [Jitkrittum et al., 2016] (linear-time two-sample test with optimized locations), KSD, and LKS. Key result: FSSD (O(n)) and KSD (O(n²)) have comparable power; FSSD is much faster.

Contact: wittawat@gatsby.ucl.ac.uk Code: github.com/wittawatj/kernel-gof
22. Basics of Gaussian processes • First, let us consider an example that is not a Gaussian process. For a one-dimensional input x, define the feature vector ϕ(x) = (1, x, x², x³)ᵀ. Then the cubic function of x,

y = w₀ + w₁x + w₂x² + w₃x³,

can be expressed as a linear regression model y = wᵀϕ(x) with the corresponding weights w = (w₀, w₁, w₂, w₃)ᵀ.
23. Basics of Gaussian processes. The regression function above can be viewed as being represented by the four basis functions ϕ₀(x) = 1, ϕ₁(x) = x, ϕ₂(x) = x², ϕ₃(x) = x³. We can also define the basis functions ϕ(x) as follows and consider representing arbitrary functions:

ϕ(x) = exp(−(x − μ)²/σ²)
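A small sketch (my addition; the toy data, bandwidth, and least-squares fit are illustrative choices) showing the same linear-in-w model y = wᵀϕ(x) under both choices of basis:

```python
import numpy as np

def poly_features(x):
    """phi(x) = (1, x, x^2, x^3)^T for each scalar input x."""
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)

def rbf_features(x, centers, sigma=0.5):
    """Gaussian basis phi_m(x) = exp(-(x - mu_m)^2 / sigma^2)."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / sigma ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)  # toy targets

# y = Phi w: least-squares weights for each choice of basis.
for Phi in (poly_features(x), rbf_features(x, np.linspace(-3, 3, 13))):
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    print(Phi.shape[1], "basis functions, train MSE:",
          np.mean((Phi @ w - y) ** 2))
```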
24. Basics of Gaussian processes

25. Basics of Gaussian processes

26. Basics of Gaussian processes. However, this approach is only usable when the dimension of x is small. When x is one-dimensional, suppose we line up the centers μₘ of the basis functions from −5 to 5 at intervals of 0.5. The corresponding weight vector is then 21-dimensional. That is, in

y = wᵀϕ(x),

the weight vector w has 21 components.
27. Basics of Gaussian processes. Then what happens when x is two-dimensional, y = wᵀϕ(x₁, x₂)? The answer:

2 dim: 21² = 441
3 dim: 21³ = 9,261
⋮
10 dim: 21¹⁰ = 16,679,880,978,201
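The blow-up above is just 21ᵈ grid points; a one-line check (my addition):

```python
# 21 basis centers per axis => 21**d basis functions on a d-dimensional grid.
for d in (2, 3, 10):
    print(f"{d} dim: {21 ** d:,}")  # 441 / 9,261 / 16,679,880,978,201
```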
28. Basics of Gaussian processes. How can we obtain a regression model as flexible as in the earlier figures even when x is high-dimensional? The solution is to take the expectation over the parameters w of the linear regression model, integrating them out of the model. The linear regression model can also be written as follows:

(ŷ₁, ŷ₂, ⋯, ŷ_N)ᵀ = [ ϕ₀(x₁) ϕ₁(x₁) ⋯ ϕ_M(x₁) ; ⋮ ⋱ ⋮ ; ϕ₀(x_N) ϕ₁(x_N) ⋯ ϕ_M(x_N) ] (w₀, w₁, ⋯, w_M)ᵀ
29. Basics of Gaussian processes. Writing ŷ for the output vector, Φ for the matrix of basis-function values, and w for the weight vector, the equation above becomes ŷ = Φw. Next, assuming for simplicity that y is regressed from x exactly, without error, we take

y = Φw

to hold.
30. Basics of Gaussian processes. Now suppose the weights are generated as w ∼ N(0, λ²I) (I the identity matrix). The covariance matrix of y is then

E[yyᵀ] − E[y]E[y]ᵀ = E[(Φw)(Φw)ᵀ] = Φ E[wwᵀ] Φᵀ = λ²ΦΦᵀ.

As a result, the distribution of y follows the multivariate Gaussian

y ∼ N(0, λ²ΦΦᵀ).

w has disappeared!! (Taking the expectation is the same as integrating w out.)
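A quick Monte Carlo check of this derivation (my addition; the cubic basis, λ, grid, and sample count are arbitrary): draw many w ∼ N(0, λ²I), form y = Φw, and compare the empirical covariance of y with λ²ΦΦᵀ.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.7
x = np.linspace(-1, 1, 5)
Phi = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)  # N=5, M+1=4

# Draw many weight vectors w ~ N(0, lam^2 I) and form y = Phi w.
W = lam * rng.normal(size=(4, 100000))
Y = Phi @ W                                  # each column is one draw of y

emp_cov = np.cov(Y)                          # empirical covariance of y
print(np.max(np.abs(emp_cov - lam**2 * Phi @ Phi.T)))  # small, ~1e-2
```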
31. Gaussian processes • Definition. When, for any set of N inputs (x₁, x₂, ⋯, x_N), the joint distribution p(y) of the corresponding outputs y = (y₁, y₂, ⋯, y_N) follows a multivariate Gaussian distribution, we say that y follows a Gaussian process.
32. Gaussian processes: summary • Stochastic processes were originally born as a theory of time series, but they do not actually require a time-series structure. • The theory holds just the same for non-time-series data. • It holds no matter how large the number of inputs N (i.e., the dimension N of the output) becomes. • A Gaussian process is, in fact, an infinite-dimensional Gaussian distribution.
33. What is nice about Gaussian processes? Writing the covariance matrix from the earlier formula as K = λ²ΦΦᵀ, its (n, m) element is expressed, as in the figure, with ϕ(x) = (ϕ₀(x), ⋯, ϕ_M(x))ᵀ the feature vector of x, as

K_nm = λ² ϕ(x_n)ᵀ ϕ(x_m).
34. What is nice about Gaussian processes? In the covariance matrix K = λ²ΦΦᵀ, a constant multiple of the inner product of the feature vectors ϕ(x_n) and ϕ(x_m) forms the (n, m) element K_nm. So if x_n and x_m are similar in feature-vector space, the corresponding y_n and y_m also take similar values: if K_nm is large, then y_n and y_m are similar, and if the inputs x are similar, the outputs y are similar too.
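This is the payoff of the kernel trick: K only needs inner products of feature vectors, so we can replace λ²ϕ(x_n)ᵀϕ(x_m) with any positive definite kernel and never build Φ at all. A sketch (my addition; the RBF kernel, its parameters, and the jitter term are illustrative) that draws functions directly from y ∼ N(0, K):

```python
import numpy as np

def rbf(a, b, lam=1.0, ell=0.5):
    """K[n, m] = lam^2 exp(-(a_n - b_m)^2 / (2 ell^2)): inner products of
    (implicit) feature vectors, without ever constructing Phi."""
    return lam**2 * np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell**2))

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)
K = rbf(x, x) + 1e-8 * np.eye(x.size)  # jitter for numerical stability

# Draw three functions from the Gaussian process prior y ~ N(0, K):
# nearby inputs get similar outputs because K_nm is large for similar x.
L = np.linalg.cholesky(K)
samples = L @ rng.normal(size=(x.size, 3))
print(samples.shape)  # (100, 3): three smooth random functions on the grid
```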
35. Applications of Gaussian processes. Variational Learning on Aggregate Outputs with Gaussian Processes https://arxiv.org/abs/… • Supervised learning assumes that inputs and outputs are observed at the same precision, but in common supervised-learning settings the outputs are observed more coarsely than the inputs. The paper proposes a variational-learning approach with Gaussian processes for this problem.

36. References • 情報幾何学の新展開 (New Developments in Information Geometry): Shun-ichi Amari • 情報幾何学の基礎 (Foundations of Information Geometry): Akio Fujiwara • 曲線と曲面の微分幾何 (Differential Geometry of Curves and Surfaces): Shoshichi Kobayashi • カーネル法入門 (Introduction to Kernel Methods): Kenji Fukumizu • Gaussian Processes for Machine Learning (http://www.gaussianprocess.org/gpml/) • ガウス過程の基礎と教師なし学習 (Basics of Gaussian Processes and Unsupervised Learning): http://www.ism.ac.jp/~daichi/lectures/H…-GaussianProcess/gp-lecture…-daichi.pdf • 『ガウス過程と機械学習』(Gaussian Processes and Machine Learning) support page: http://chasen.org/~daiti/gpbook/