
Attention Mechanisms in Recent Deep Learning - Focusing on CV and NLP -

Slides presented at the Kanto CV study group on April 14, 2019.
The talk covers attention mechanisms.

Hiroshi Fukui

April 14, 2019

Transcript

  1. Self-introduction
     - Name: Hiroshi Fukui
     - Affiliation: NEC Data Science Research Laboratories (currently in training)
     - Former organizer of the Nagoya CV/PRML study group
     - SNS: personal page: https://sites.google.com/site/fhiroresearch/home / Twitter: https://twitter.com/Catechine / Facebook: https://www.facebook.com/greentea

  2. What is the attention mechanism?
     - Improves feature extraction by weighting the features
       - A technique that brings the human attention mechanism into machine learning
     - The attention weights differ for each sample (each specific element)
       - The network's parameter values are the same for all samples
       -> attention weights vary per sample, whereas parameters are fixed for all samples after training
     - f'(x) = M(x) * f(x), where f(x) is a feature vector or feature map and M(x) is the attention weight
     (figure: the weights mark which elements to attend to and which to ignore; a minimal code sketch follows below)

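To make the f'(x) = M(x) * f(x) view concrete, here is a minimal PyTorch sketch (my own illustration, not code from the slides): the fixed, learned parameters live in the linear layer, while the mask M(x) is predicted from the input itself and therefore changes per sample. The module name and the sigmoid gating are assumptions made for illustration.

```python
# Minimal sketch of attention as sample-dependent feature reweighting.
import torch
import torch.nn as nn

class SimpleFeatureAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)      # parameters: fixed for all samples after training

    def forward(self, f):                  # f: (batch, dim) feature vectors
        m = torch.sigmoid(self.fc(f))      # M(x) in [0, 1], different for every input sample
        return m * f                       # f'(x) = M(x) * f(x)

x = torch.randn(4, 16)
print(SimpleFeatureAttention(16)(x).shape)  # torch.Size([4, 16])
```
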
  3. How did the attention mechanism become popular?
     - NLP: "Neural Machine Translation by Jointly Learning to Align and Translate" (alignment maps from RNNsearch) and "Effective Approaches to Attention-based Neural Machine Translation" (global/local attention)
     - CV: "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention" and "Residual Attention Network for Image Classification"
     (slide shows figures excerpted from these papers)

  4. How did the attention mechanism become popular?
     - According to the papers themselves, the idea spread across both fields:
       - NLP: "Neural Machine Translation by Jointly Learning to Align and Translate", "Effective Approaches to Attention-based Neural Machine Translation"
       - CV: "Show, Attend and Tell", "Residual Attention Network for Image Classification", "Stacked Hourglass Networks for Human Pose Estimation", "Highway Networks"
     (slide shows figures and text excerpted from these papers)

  5. How attention is viewed in NLP
     - The attention mechanism acts like a dictionary object
       - Encoder hidden states: Key and Value; decoder hidden state: Query
       - Compute a similarity (weight) between the Query and each Key, then pick up the Values at positions with high similarity
     (diagram: encoder over the source "A B C D E", decoder over the target "W X Y Z <EOS>"; weights are computed from Query and Key, then used to pick up Values; a code sketch follows below)
     References:
       A. Miller, A. Fisch, J. Dodge, A.-H. Karimi, A. Bordes, and J. Weston, "Key-Value Memory Networks for Directly Reading Documents", EMNLP 2016.
       A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is All You Need", NIPS 2017.

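A small sketch of the dictionary analogy (my own toy example; the vectors and dimensions are made up): keys and values come from encoder states, the query from a decoder state, and the output is a weighted sum of the values.

```python
# Toy sketch of the Query/Key/Value "dictionary lookup" view of attention.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

keys   = np.random.randn(5, 8)   # encoder hidden states, one per source token
values = np.random.randn(5, 8)   # values (often the same states as the keys)
query  = np.random.randn(8)      # decoder hidden state at the current step

weights = softmax(keys @ query)  # similarity between the query and every key
context = weights @ values       # "pick up" values in proportion to the weights
print(weights.round(2), context.shape)
```
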
  6. Neural Machine Translation (NMT)
     - Translation with an encoder-decoder (seq2seq) model
       - Encoder (source): takes the source sentence as input
       - Decoder (target): outputs the translated sentence
     - The work that established the concept of the attention mechanism
     (slide shows Figure 1 of the paper: annotations h_1..h_T are combined with weights alpha_{t,1}..alpha_{t,T} to generate the t-th target word)
     Reference: D. Bahdanau, K. Cho, and Y. Bengio, "Neural Machine Translation by Jointly Learning to Align and Translate", ICLR 2015.

  7. Global attention and local attention
     - Global attention
       - Computes the weights from the features of all encoder hidden states
       - Attention that considers every word in the input sentence
     - Local attention
       - Computes the weights from the features of a specific part of the encoder
       - Attention that considers the relations between specific words (a window [p_t - D, p_t + D] around a predicted aligned position p_t)
     (slide shows Figures 2 and 3 from the paper: the global attentional model and the local attention model; a sketch contrasting the two follows below)
     Reference: M.-T. Luong, H. Pham, and C. D. Manning, "Effective Approaches to Attention-based Neural Machine Translation", ACL 2015.

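To contrast the two, a minimal sketch (my own illustration; the dot-product score, the window half-width D, and the aligned position are assumptions): global attention weights every source state, local attention only a window around p_t.

```python
# Sketch contrasting global and local (windowed) attention weights.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

src = np.random.randn(12, 8)         # encoder hidden states h_s for 12 source words
h_t = np.random.randn(8)             # current decoder state

# Global attention: weights over every source position
a_global = softmax(src @ h_t)
c_global = a_global @ src

# Local attention: weights only inside a window around an aligned position p_t
p_t, D = 6, 2                        # aligned position and window half-width
window  = src[p_t - D : p_t + D + 1]
a_local = softmax(window @ h_t)
c_local = a_local @ window
print(c_global.shape, c_local.shape)
```
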
  8. Transformer
     - NMT that translates with attention mechanisms only
       - A network architecture that uses neither CNNs nor RNNs
       - The network is built from multi-head attention
     - A combination of self-attention and source-target attention
     (slide shows "Figure 1: The Transformer - model architecture" from the paper)
     Reference: A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is All You Need", NIPS 2017.

  9. Self-attention
     - The attention weights are computed from the input's own features only
       - Characterized by having no separate source and target
     (diagram: source-target attention computes weights between a target sentence and a source sentence, while self-attention computes weights among the words of a single sentence, e.g. relating "彼" (he) to "福井さん" within "彼 は 福井さん です ．")

  10. The Transformer's attention mechanisms
      - Scaled dot-product attention (used for both self-attention and source-target attention):
        Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
      - Multi-head attention:
        MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O,  where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
      (a code sketch of both follows below)

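A minimal sketch of the two formulas above (my own illustration; the shapes and dimensions are assumptions, and a production implementation would use something like torch.nn.MultiheadAttention instead).

```python
# Scaled dot-product attention and a naive multi-head wrapper around it.
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)     # QK^T / sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ V              # weighted sum of values

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o, h):
    heads = []
    for i in range(h):                                    # head_i = Attention(QW_i^Q, KW_i^K, VW_i^V)
        heads.append(scaled_dot_product_attention(Q @ W_q[i], K @ W_k[i], V @ W_v[i]))
    return torch.cat(heads, dim=-1) @ W_o                 # Concat(head_1..head_h) W^O

d_model, d_k, h, n = 16, 4, 4, 5
x = torch.randn(n, d_model)                               # self-attention: Q = K = V = x
W_q = [torch.randn(d_model, d_k) for _ in range(h)]
W_k = [torch.randn(d_model, d_k) for _ in range(h)]
W_v = [torch.randn(d_model, d_k) for _ in range(h)]
W_o = torch.randn(h * d_k, d_model)
print(multi_head_attention(x, x, x, W_q, W_k, W_v, W_o, h).shape)  # torch.Size([5, 16])
```
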
  11. Memory Network
      - A network that introduces external memory
        - Composed of several modules
        - Memorizing long passages enables high-accuracy document summarization
      (slide shows Figure 1 of the paper: the single-layer and three-layer versions of the end-to-end model, with embedding matrices A, B, C, softmax weights over memories, and the predicted answer)
      Reference: S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus, "End-To-End Memory Networks", NIPS 2015.

  12. The role of attention in MemNet
      - Attention selects which memory slot to access
        - The Query/Key/Value formulation actually originated with Memory Networks
      (slide shows "Figure 1: The Key-Value Memory Network model for question answering" from the paper)
      Reference: A. Miller, A. Fisch, J. Dodge, A.-H. Karimi, A. Bordes, and J. Weston, "Key-Value Memory Networks for Directly Reading Documents", EMNLP 2016.

  13. The role of attention in MemNet
      - Attention selects which memory slot to access
        - The Query/Key/Value formulation actually originated with Memory Networks
      (diagram: the query is matched against the memory keys to produce weights, which are then used to read out the memory values; a code sketch follows below)
      Reference: A. Miller, A. Fisch, J. Dodge, A.-H. Karimi, A. Bordes, and J. Weston, "Key-Value Memory Networks for Directly Reading Documents", EMNLP 2016.

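A sketch of a single key-value memory read in this spirit (my own toy example; the memory contents and sizes are made up): addressing with the keys, reading out with the values.

```python
# Toy key-value memory read: attention decides which slots to access.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

memory_keys   = np.random.randn(10, 32)   # e.g. embedded document windows
memory_values = np.random.randn(10, 32)   # e.g. embedded window centers / candidate answers
query         = np.random.randn(32)       # embedded question

p = softmax(memory_keys @ query)          # addressing: how strongly to read each slot
o = p @ memory_values                     # read-out: weighted sum of the values
print(o.shape)                            # (32,) -- combined with the query to predict the answer
```
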
  14. Attention mechanisms in CV
      - In CV, the structure of the attention mechanism differs depending on whether it handles a sequence
        - Attention that uses a sequence:
          - Show, Attend and Tell
        - Attention that does not use a sequence:
          - Residual Attention Network
          - Squeeze-and-Excitation Network
          - Non-local Network
          - Attention Branch Network

  15. How attention differs with and without a sequence
      - Attention with a sequence (captioning, VQA, ...):
        - Attention weights are computed for each element (word), e.g. for each word of "That is an owl."
      - Attention without a sequence (image classification, detection, ...):
        - The only element is a single image
        -> introduce several attention mechanisms into one network

  16. Show, Attend and Tell
      - A captioning model that incorporates two attention mechanisms
        - Deterministic soft attention: softmax-based attention
        - Stochastic hard attention: attention trained with reinforcement learning (Monte Carlo based)
      (slide shows Figures 2 and 3 from the paper: "soft" vs "hard" attention over time, and examples of attending to the correct object)
      Reference: K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ICML 2015.

  17. Deterministic soft attention
      - Spatial weights over the image are computed for the features of each word
        - Difference from source-target attention: Key and Value come from the convolution layers, the Query from the LSTM
      (slide shows the model diagram: CNN features of size L x D, a distribution over the L locations, and a weighted combination of features fed to the LSTM at each word)

  18. Deterministic soft attention
      - Spatial weights over the image are computed for the features of each word
        - Difference from source-target attention: Key and Value come from the convolution layers, the Query from the LSTM
      (same diagram, annotated with Query, Key, and Value; a code sketch follows below)

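A sketch of one soft-attention step over CNN features (my own simplification; the feature sizes, the additive score function, and the variable names are assumptions, not the exact formulation from the paper).

```python
# One step of soft attention: the LSTM state (Query) weights the L spatial
# locations of the conv feature map (Key/Value) to form a context vector.
import torch

L, D, H = 14 * 14, 512, 256                  # L spatial locations, D-dim features, H-dim LSTM state
features = torch.randn(L, D)                 # Key/Value: flattened conv feature map
h_prev   = torch.randn(H)                    # Query: previous LSTM hidden state

W_f, W_h = torch.randn(D, 1), torch.randn(H, 1)
scores = features @ W_f + (h_prev @ W_h)     # one score per location
alpha  = torch.softmax(scores.squeeze(-1), dim=0)   # distribution over the L locations
z      = alpha @ features                    # weighted combination of features (context)
print(alpha.shape, z.shape)                  # torch.Size([196]) torch.Size([512])
```
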
  19. Residual Attention Network
      - A model that adds attention mechanisms to a ResNet backbone
        - Two techniques are introduced so that adding attention actually improves accuracy:
          - Stacked network structure and attention residual learning
      (slide shows Figure 1 from the paper: the interaction between features and attention masks, e.g. a sky mask on low-level color features and a balloon instance mask on high-level part features)
      Reference: F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, and X. Tang, "Residual Attention Network for Image Classification", CVPR 2017.

  20. Stacked network structure
      - A single attention mechanism alone is not enough to make the network more accurate
        - The reason is that the feature representation becomes too sparse
        - So an attention module is inserted after each block of the ResNet
      (slide shows Figure 2 from the paper: the ImageNet architecture with attention modules at stages 1-3, each containing a soft mask branch built from down-sampling/up-sampling residual units, 1x1 convolutions, and a sigmoid)

  21. Attention residual learning
      - Residual learning is added on top of the attention scaling
        - Even if every attention weight became 0, the feature map would not vanish
      - This technique is in fact widely used in recent attention mechanisms
        - Transformer, SENet (ResNet type), Non-local NN, ABN, ...
      - Ordinary attention:           f'(x) = M(x) * f(x)
        Attention residual learning:  f'(x) = (1 + M(x)) * f(x)
        where f(x) is the feature map and M(x) the attention weight
      (slide shows Figure 3 from the paper comparing the receptive fields of the mask and trunk branches; a code sketch follows below)

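A tiny sketch of why the residual form is safer (my own illustration; the all-zero mask is a deliberately extreme case): with plain scaling the features are wiped out, with the residual form they pass through unchanged.

```python
# Attention residual learning vs. plain attention scaling under an all-zero mask.
import torch

f = torch.randn(1, 64, 8, 8)          # f(x): feature map
M = torch.zeros_like(f)               # worst case: every attention weight is 0

plain    = M * f                      # f'(x) = M(x) * f(x)       -> features are wiped out
residual = (1 + M) * f                # f'(x) = (1 + M(x)) * f(x) -> features survive
print(plain.abs().sum().item(), torch.allclose(residual, f))   # 0.0 True
```
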
  22. Squeeze-and-Excitation Network
      - Applies an attention mechanism over the channels of the feature map
        - Image-recognition performance can be improved with only a small increase in parameters
        - Because it is simple and easy to plug in, it has been adopted by many other methods
      (slide shows "Figure 1: A Squeeze-and-Excitation block" from the paper)
      Reference: J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, "Squeeze-and-Excitation Networks", CVPR 2018.

  23. Squeeze-and-Excitation module
      - Two variants are proposed: an Inception type and a ResNet type
        - Inception type: AlexNet, VGGNet, etc.; ResNet type: ResNet, ResNeXt, etc.
      - The attention mechanism is built from global average pooling and two fc layers
        - Squeeze: global average pooling
        - Excitation: two fc layers (followed by a sigmoid and channel-wise scaling)
      (slide shows Figures 2 and 3 from the paper: the SE-Inception and SE-ResNet modules; a code sketch follows below)

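A minimal SE block following the squeeze (GAP) + excitation (two fc layers) description above (my own simplification; the reduction ratio r=16 matches the paper, the rest of the wiring is an assumption for brevity).

```python
# Minimal Squeeze-and-Excitation block: channel-wise attention weights.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)   # excitation, layer 1
        self.fc2 = nn.Linear(channels // r, channels)   # excitation, layer 2

    def forward(self, x):                               # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                          # squeeze: global average pooling -> (N, C)
        s = torch.relu(self.fc1(s))
        s = torch.sigmoid(self.fc2(s))                  # channel weights in [0, 1]
        return x * s[:, :, None, None]                  # scale: reweight each channel

x = torch.randn(2, 64, 16, 16)
print(SEBlock(64)(x).shape)                             # torch.Size([2, 64, 16, 16])
```
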
  24. Non-local Neural Network
      - A network that brings the Transformer's self-attention into CNNs
        - Input features can be referenced over a longer span than with CNNs or RNNs
          - CNN: the reference region is limited to the kernel size
          - RNN: the reference region covers the sequence up to the current time step
      (slide contrasts CNN receptive fields and RNN unrolling with the non-local operation sum_j f(x_i, x_j) g(x_j))
      Reference: X. Wang, R. Girshick, A. Gupta, and K. He, "Non-local Neural Networks", CVPR 2018.

  25. The key observation behind the Non-local Neural Network
      - Both the non-local filter and scaled dot-product attention multiply the input by a probability distribution
        - Non-local filter: computes a similarity w(i, j) between a target region and its neighborhood,
          y(i) = sum_j w(i, j) * x(j)
        - Scaled dot-product attention: computes the similarity from Q and K,
          Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, so w(., .) corresponds to softmax(QK^T / sqrt(d_k))
      -> by reflecting the non-local filter's algorithm, self-attention can be introduced into a CNN

  26. Structure of the non-local block
      - The output is computed by multiplying the input features by a probability distribution
        - The final response z is obtained with residual learning
      - Non-local block (embedded Gaussian):
        - Self-attention is built from the embeddings theta and phi
        - Using a softmax makes normalization easy
        - The structure closest to the Transformer's self-attention:
          y = (1 / C(x)) * sum f(x) * g(x) = (1 / C(x)) * softmax(x^T W_theta^T W_phi x) * g(x)
      (slide shows the block diagram: 1x1x1 convolutions theta, phi, g on a T x H x W x 1024 input, a THW x THW softmax similarity map, and the output added back to x; a code sketch follows below)

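A sketch of an embedded-Gaussian non-local block (my own simplification: a 2D image variant rather than the T x H x W video version, and the channel sizes are assumptions).

```python
# Embedded-Gaussian non-local block (2D variant) with a residual connection.
import torch
import torch.nn as nn

class NonLocalBlock2D(nn.Module):
    def __init__(self, c, c_inner=None):
        super().__init__()
        c_inner = c_inner or c // 2
        self.theta = nn.Conv2d(c, c_inner, 1)   # embedding theta
        self.phi   = nn.Conv2d(c, c_inner, 1)   # embedding phi
        self.g     = nn.Conv2d(c, c_inner, 1)   # value transform g
        self.w_z   = nn.Conv2d(c_inner, c, 1)   # maps y back to c channels

    def forward(self, x):                                 # x: (N, C, H, W)
        n, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)      # (N, HW, C')
        k = self.phi(x).flatten(2)                        # (N, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)          # (N, HW, C')
        attn = torch.softmax(q @ k, dim=-1)               # (N, HW, HW) pairwise similarity f(.)
        y = (attn @ v).transpose(1, 2).reshape(n, -1, h, w)
        return x + self.w_z(y)                            # residual: z = W_z y + x

x = torch.randn(2, 64, 14, 14)
print(NonLocalBlock2D(64)(x).shape)                       # torch.Size([2, 64, 14, 14])
```
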
  27. Comparison with CNNs and RNNs
      - On action recognition it outperforms RNN (LSTM) based methods
        - Because the non-local (self-attention) operation can reference a wider region of the input features
        - It also outperforms methods that use optical-flow information
      - Adding non-local blocks also stabilizes training
      (slide shows the Kinetics results and ablations from the paper: NL I3D with a ResNet-101 backbone reaches 77.7 / 93.3 top-1/top-5, non-local blocks are complementary to 3D convolutions, and the training curves show lower error for NL C2D than the C2D baseline)

  28. Visual explanation
      - Heat maps that show which regions a deep network attended to at inference time
        - Class Activation Mapping (CAM), Grad-CAM, etc.
      (slide shows example heat maps for an input image with ResNet + Grad-CAM and ResNet + CAM; predictions such as 'Miniature_schnauzer', 'Standard_schnauzer', 'Giant_schnauzer', and 'Irish_terrier' against the ground truth 'Miniature_schnauzer')

  29. Attention Branch Network
      - Applies the attention map used for visual explanation as attention weights
        - The attention weights are computed from the attention map obtained via Class Activation Mapping
        - Achieves higher accuracy through attention and visual explanation at the same time
      (slide shows "(a) Overview of Attention Branch Network": a feature extractor, an attention branch producing the attention map M(x_i) and loss L_att(x_i), and a perception branch trained with L_per(x_i); a code sketch follows below)
      Reference: H. Fukui, T. Hirakawa, T. Yamashita, and H. Fujiyoshi, "Attention Branch Network: Learning of Attention Mechanism for Visual Explanation", CVPR 2019.

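A rough sketch of the ABN idea (my own, heavily simplified illustration; the layer sizes, the single-conv "feature extractor", and the tiny perception branch are assumptions): a CAM-style attention branch produces a one-channel attention map that reweights the features fed to the perception branch, while both branches produce classification scores for their losses.

```python
# Toy Attention Branch Network: CAM-like attention map reused as attention weights.
import torch
import torch.nn as nn

class TinyABN(nn.Module):
    def __init__(self, c, num_classes):
        super().__init__()
        self.extractor = nn.Conv2d(3, c, 3, padding=1)     # stand-in feature extractor
        self.att_conv  = nn.Conv2d(c, num_classes, 1)       # K class response maps (CAM-like)
        self.att_map   = nn.Conv2d(num_classes, 1, 1)       # 1x1 conv -> attention map M(x)
        self.percep    = nn.Linear(c, num_classes)           # stand-in perception branch

    def forward(self, x):
        g = self.extractor(x)                                # g(x): feature map
        k = self.att_conv(g)                                 # class response maps
        att_logits = k.mean(dim=(2, 3))                      # GAP -> attention-branch scores (for L_att)
        M = torch.sigmoid(self.att_map(k))                   # attention map M(x) in [0, 1]
        g_att = g * (1 + M)                                  # attention mechanism in residual form
        per_logits = self.percep(g_att.mean(dim=(2, 3)))     # perception-branch scores (for L_per)
        return att_logits, per_logits, M

x = torch.randn(2, 3, 32, 32)
att, per, M = TinyABN(64, 10)(x)
print(att.shape, per.shape, M.shape)   # torch.Size([2, 10]) torch.Size([2, 10]) torch.Size([2, 1, 32, 32])
```
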
  30. Differences from other attention mechanisms
      - Unlike the other attention mechanisms, ABN uses only one attention weight (map) at a single place
        - Other attention mechanisms (Residual Attention Network, SENet, Non-local Network): one attention mechanism uses several weights, and attention is inserted into several blocks or modules (e.g. SENet adds a module to every residual unit)
      (slide shows the SE-Inception/SE-ResNet figures from the SENet paper to illustrate attention being added at each residual unit)

  31. Properties of ABN's attention mechanism
      - The recognition result can be adjusted by manually editing the attention map
        - e.g. deleting an unnecessary region or adding a region of interest changes the prediction confidence
      (slide shows the ABN architecture again, with before/after attention maps and the soccer-ball and dalmatian confidences for an image whose ground truth is 'dalmatian')

  32. Summary
      - A survey of attention mechanisms in NLP and CV
        - NLP: attention introduced mainly around encoder-decoder models
          - Source-target attention
          - Self-attention
          - Memory Network
        - CV: attention introduced mainly around CNNs
          - Deterministic soft attention
          - Squeeze-and-Excitation Network
          - Non-local Network
          - Attention Branch Network
      Thank you for your "Attention"