
Graph Signal Processing: From Basics to Applications

Graph signal processing is a branch of signal processing, actively studied as a way to sparsely represent signals whose domain is the vertex set of a graph, i.e., data on networks. It has been attracting attention not only in signal and image processing but also in control, machine learning, and biomedical information processing. Moreover, techniques such as the graph convolution used in adaptive image processing and graph deep learning can be naturally understood as extensions of conventional time- and spatial-domain signal processing by considering their behavior in the graph frequency domain. This tutorial explains the basics of the graph Fourier transform and graph convolution, and also surveys recent research trends in related fields.

Yuichi Tanaka

July 29, 2019


Transcript

  1. Background
- Ubiquity of large-scale data with complex structure
- Graph signal processing: analysis techniques for data on networks
- Examples: transportation networks, high-dimensional features, smart grids, social networks, molecular structure (sucrose), 3-D meshes, video, functional brain regions, LiDAR
(Image credits: martingrandjean.ch, ses.jrc.ec.europa.eu, dribbble.com/holypix, commons.wikimedia.org, Verge Genomics, http://gph.is/2cybPuU) [Monti+ CVPR 2017] [Cheung+ PIEEE 2018]
  2. Background
- Machine learning / data science
  - Relations among users and among items + scores
  - Citation networks + features
  - Web-site in/out links + contents
- Transportation
  - Road networks + passenger counts
  - Service routes + passenger counts at stations and airports
- Point clouds / video
  - Object shapes + 3-D coordinates
  - Contours and textures + pixel values
- Ubiquity of large-scale data with complex structure
- Examples shown: a paper citation network, Manhattan taxi pickup/drop-off data, and human body shapes in various poses [Monti+ CVPR 2017] [Alonso-Mora+ PNAS 2017] [Bronstein+ IEEE SPM 2018]
(The slide includes figure excerpts from the cited papers.)
  3. Background
- Sensor networks
  - Relative sensor positions + measurements
  - Power grids + PMU readings
- Biological information
  - Functional brain networks + EEG / fMRI
  - Gene regulatory networks + transcription-factor signs
- Control
  - Inter-individual networks + states of susceptible individuals
  - Robot swarm networks + positions and measurements
- Examples shown: indoor environmental sensing (http://db.csail.mit.edu/labdata/labdata.html), brain-network estimation from MRI signals [Hagmann+ PLoS Biol 2008], and modeling of epidemic spreading with the SIS model [小倉, 計測と制御 2016]
- Ubiquity of large-scale data with complex structure
(The slide includes figure excerpts from the cited papers.)
  4. Background
- Particle physics
  - Particle coordinates + momentum and spin
  - 3-D structure of a neutrino detector + DOM (digital optical module) sensor readings
- Materials design / quantum chemistry
  - Chemical bonds between atoms + fingerprints
  - Atomic interactions + energies
- Examples shown: the IceCube Neutrino Observatory [Choma+ ICMLA 2018] and the computational graph of molecular fingerprints [Duvenaud+ NIPS 2015]
- Ubiquity of large-scale data with complex structure
(The slide includes figure excerpts from the cited papers.)
  5. Background
- What is the essence of data analysis?
- Sparse representation of time- and spatial-domain data:
  - FFT, wavelets: frequency domain
  - Principal component analysis (PCA, KLT): the subspace spanned by the eigenvectors of the covariance matrix
  - Compressed sensing: recovery of signals that are sparse in some basis
  - Dictionary learning / deep learning: learning bases (= filter coefficients) from data
- Change the domain of the data so that the data are represented sparsely
- Exploit the sparse representation to analyze the data
(DFT / IDFT: spatial domain ↔ frequency domain)
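The domain-change idea above can be sketched in a few lines of NumPy. This is a minimal illustration with a hypothetical two-tone signal (the signal, length, and frequencies are chosen for illustration and do not come from the slides):

```python
import numpy as np

# A smooth (slowly varying) signal of length 64: the sum of two low-frequency sines.
N = 64
n = np.arange(N)
x = np.sin(2 * np.pi * 2 * n / N) + 0.5 * np.sin(2 * np.pi * 5 * n / N)

X = np.fft.fft(x)            # change of domain: spatial -> frequency
energy = np.abs(X) ** 2
total = energy.sum()

# Only 4 of the 64 DFT coefficients (bins +-2 and +-5) carry essentially all
# of the energy: the signal is sparse in the frequency domain.
top4 = np.sort(energy)[-4:].sum()
print(top4 / total)          # ~1.0
```

In the spatial domain all 64 samples are nonzero; after the change of domain, 4 coefficients suffice, which is exactly the sparsity that compression and denoising exploit.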
  6. Background — problem statement
- Similar studies are carried out independently in various fields
  - A unified signal processing theory is needed
  - The sparsity of the data is under-exploited
  - Application-oriented / ad hoc
  - The same mountain climbed from different trailheads?
- Data on a sensor network cannot be transformed as-is, so the signal is ordered by sensor index and the DFT is applied
- Frequency domain: the result is not sparse!
(The slide shows the resulting DFT spectrum: frequency index 0–500, magnitude up to about 25, with energy spread across all frequencies.)
  7. Goals
- What does graph signal processing enable?
  - Sparse representation of data on networks
  - Estimation of networks from data
  - Analysis of complex data in many fields using both
- Signal processing for data on non-Euclidean domains → graph signal processing
- Graph frequency domain: the sensor-network data can be made sparse! (graph Fourier transform / inverse graph Fourier transform)
(The slide shows the GFT spectrum of the same data over the graph frequencies, i.e., the eigenvalues, concentrated at low graph frequencies.)
  8. Graphs
- A set of vertices (nodes) and edges
- A mathematical representation of pairwise relations between vertices
- Examples of graphs:
  - Transportation networks (air, rail, road, …)
  - Hyperlinks
  - Electrical circuits
  - Social relations (Facebook, Twitter, Instagram, …)
  - Neural networks
  - Power grids
  - Protein and genome structures
  - 3-D meshes
  9. Graphs
- Matrix representations of graphs
  - Needed to compute the graph spectrum
- Representative matrices:
  - Adjacency matrix
  - Degree matrix
  - Graph Laplacian: the main variation operator of GSP, used for the graph Fourier transform (described later)
  10. Graph Laplacian
- Combinatorial graph Laplacian: L = D − A
  - Symmetric, positive semidefinite matrix
  - Laplacian quadratic form: x⊤Lx = Σ_{(i,j)∈E} w_{ij} (x[i] − x[j])²
- Symmetric normalized graph Laplacian: L_sym = D^{−1/2} L D^{−1/2} = I − D^{−1/2} A D^{−1/2}
  - All diagonal entries equal 1
- Random-walk graph Laplacian: L_rw = D^{−1} L = I − D^{−1} A
  - All diagonal entries equal 1; generally asymmetric
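The three Laplacian variants and the quadratic form can be sketched in NumPy. A minimal example, assuming a hypothetical 4-vertex path graph with unit edge weights (chosen only for illustration):

```python
import numpy as np

# Adjacency matrix of a small undirected graph (hypothetical: a 4-vertex path).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))           # degree matrix
L = D - A                            # combinatorial Laplacian
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_sym = D_inv_sqrt @ L @ D_inv_sqrt  # symmetric normalized Laplacian
L_rw = np.linalg.inv(D) @ L          # random-walk Laplacian

# Laplacian quadratic form: x^T L x = sum over edges of w_ij (x[i]-x[j])^2.
x = np.array([1.0, 2.0, 4.0, 8.0])
quad = x @ L @ x
edges = [(0, 1), (1, 2), (2, 3)]
quad_by_edges = sum((x[i] - x[j]) ** 2 for i, j in edges)
assert np.isclose(quad, quad_by_edges)   # both equal 21.0 here

print(np.diag(L_sym))  # all ones, as stated above (no isolated vertices)
```

The quadratic form is the workhorse quantity: it sums squared signal differences across edges, so a small value means the signal is smooth on this graph.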
  11. Graph Laplacian
- Eigendecomposition of the graph Laplacian
- Properties:
  - Real symmetric matrix ⇒ diagonalizable by an orthonormal matrix, with real eigenvalues
  - Eigenvalues are nonnegative: L is positive semidefinite
  - Each row of L sums to 0 → L1 = 0 · 1
  - The multiplicity of the zero eigenvalue equals the number of connected components
  - Upper bound on the largest eigenvalue: λ_max ≤ 2 d_max
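All of the listed properties can be checked numerically. A sketch, assuming a hypothetical disconnected graph (a triangle plus a separate edge, so the zero eigenvalue should appear twice):

```python
import numpy as np

# Combinatorial Laplacian of a hypothetical graph with two connected
# components: a triangle {0,1,2} and a single edge {3,4}.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Real symmetric matrix: orthonormal eigenvectors, real eigenvalues (ascending).
eigvals, U = np.linalg.eigh(L)

assert np.all(eigvals >= -1e-10)                 # positive semidefinite
assert np.allclose(L @ np.ones(5), 0.0)          # row sums are 0, so L1 = 0*1
assert np.allclose(U @ np.diag(eigvals) @ U.T, L)

# Multiplicity of eigenvalue 0 = number of connected components.
n_components = int(np.sum(np.isclose(eigvals, 0.0)))
print(n_components)          # 2

# Upper bound on the largest eigenvalue: lambda_max <= 2 * d_max.
d_max = A.sum(axis=1).max()
assert eigvals[-1] <= 2 * d_max
```

For this graph the spectrum is {0, 0, 2, 3, 3}: the two zeros count the components, and λ_max = 3 ≤ 2 d_max = 4.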
  12. Graph signals
- Graph + signal: G = (V, E), with vertex set V and edge set E
- Signal values (what we want to analyze) + signal structure
- Element: a scalar assigned to vertex i of the graph, i.e., f : V ⟶ ℝ
- (Digital) signal processing + graph theory / network analysis
  → characterizing a graph signal requires both the graph and the signal values
- Smooth vs. non-smooth graph signals (e.g., smoothness): even with identical signal values, the smoothness differs depending on the structure
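The last point, that identical signal values can be smooth on one graph and rough on another, can be made concrete with the Laplacian quadratic form. A minimal sketch, assuming two hypothetical 4-vertex graphs (a path and a star) carrying the same values:

```python
import numpy as np

def laplacian(edges, n):
    """Combinatorial Laplacian from an edge list (unit weights)."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

x = np.array([0.0, 1.0, 2.0, 3.0])                # same signal values in both cases

L_path = laplacian([(0, 1), (1, 2), (2, 3)], 4)   # path: 0-1-2-3
L_star = laplacian([(0, 1), (0, 2), (0, 3)], 4)   # star centered at vertex 0

# Smoothness measured by x^T L x (sum of squared differences across edges):
print(x @ L_path @ x)   # 3.0  -> smooth: every edge difference is 1
print(x @ L_star @ x)   # 14.0 -> rough: edge differences are 1, 2, 3
```

On the path the values increase gradually along the edges, while on the star the center disagrees strongly with two of its neighbors, so the same vector is far less smooth.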
  13. Graph Fourier transform
- Consider a Fourier transform for graph signals
  - We want to base it on the variation of signal values across adjacent vertices
  - How should the smoothness of a graph signal be measured?
- cf. the Fourier transform: expansion of a signal in the eigenfunctions of the 1-D Laplace operator
  - Eigenvalues: frequencies — gently oscillating eigenfunctions ⇒ low frequency, rapidly oscillating ⇒ high frequency
- ⇒ Frequency analysis based on the signal's structure: the graph Fourier transform
  - Small eigenvalue (frequency): gently oscillating eigenfunctions; large eigenvalue: rapidly oscillating
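The analogy above suggests the standard construction: project the signal onto the Laplacian eigenvectors and treat the eigenvalues as graph frequencies. A sketch, assuming a hypothetical 8-vertex path graph and illustrative expansion coefficients:

```python
import numpy as np

# Laplacian of a hypothetical 8-vertex path graph.
N = 8
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

lam, U = np.linalg.eigh(L)   # lam: graph frequencies (ascending eigenvalues)

# A smooth graph signal: a combination of the three lowest-frequency
# eigenvectors (coefficients 1.0, 0.5, 0.25 are assumed, for illustration).
x = U[:, :3] @ np.array([1.0, 0.5, 0.25])

x_hat = U.T @ x              # forward GFT: project onto the eigenvectors
x_rec = U @ x_hat            # inverse GFT

assert np.allclose(x_rec, x)                 # perfect reconstruction
assert np.allclose(np.abs(x_hat[3:]), 0.0)   # spectrum is sparse: only the
                                             # three low-frequency bins are nonzero
```

Because the eigenvectors form an orthonormal basis, the inverse transform is simply multiplication by U, mirroring the DFT/IDFT pair; a signal that varies slowly across edges concentrates its GFT spectrum at small eigenvalues.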