
Research Trends in Trajectory Prediction Using Deep Learning

himidev
October 09, 2020


A survey of trajectory prediction methods based on Deep Learning


Transcript

  1. Categories of trajectory prediction
     ● Bayesian-based (e.g., Kalman Filter): updates the internal state from noisy observations and estimates future values recursively.
     ● Deep Learning-based (e.g., LSTM, CNN): learns future behavior from the target's past trajectory (input: past track, output: predicted track).
     ● Planning-based (e.g., IRL, RRT*): optimizes a reward from the start to the goal.
     ● Scope of this survey: the Deep Learning-based category.
  2. Elements required for Deep Learning-based trajectory prediction
     ● View point: first-person view, in-vehicle camera view, bird's-eye view.
     ● Context: target class, interactions between targets, static environment information.
     ● Model: Long Short-Term Memory, Convolutional Neural Network, Gated Recurrent Unit, Temporal Convolutional Network.
     (Slide figures: example multi-future prediction heatmaps, PIE/JAAD bounding-box and speed error tables (MSE / CMSE / CFMSE over 0.5-1.5 s), and an LSTM cell diagram with input, forget, and output gates.)
  3. Trends and taxonomy of interaction-aware prediction methods (2016-2020)
     ● Interaction-aware methods (further divided into pooling models, attention models, and other models):
       Social-LSTM [1] (A. Alahi+, CVPR 2016), DESIRE [2] (N. Lee+, CVPR 2017), Conv. Social-Pooling [3] (N. Deo+, CVPRW 2018), MX-LSTM [4] (I. Hasan+, CVPR 2018), Group-LSTM [5] (N. Bisagno+, ECCVW 2018), Social-GAN [6] (A. Gupta+, CVPR 2018), MATF [7] (T. Zhao+, CVPR 2019), Reciprocal Network [8] (S. Hao+, CVPR 2020), PECNet [9] (K. Mangalam+, ECCV 2020), Social-Attention [10] (A. Vemula+, ICRA 2018), CIDNN [11] (Y. Xu+, CVPR 2018), GD-GAN [12] (T. Fernando+, ACCV 2018), SR-LSTM [13] (P. Zhang+, CVPR 2019), Next [14] (J. Liang+, CVPR 2019), SoPhie [15] (A. Sadeghian+, CVPR 2019), STGAT [16] (Y. Huang+, ICCV 2019), Social-BiGAT [17] (V. Kosaraju+, NeurIPS 2019), Trajectron [18] (B. Ivanovic+, ICCV 2019), Trajectron++ [19] (T. Salzmann+, ECCV 2020), EvolveGraph [20] (J. Li+, NeurIPS 2020), STAR [21] (C. Yu+, ECCV 2020), FvTraj [22] (H. Bi+, ECCV 2020), RGM [23] (C. Choi+, ICCV 2019), PRECOG [24] (N. Rhinehart+, ICCV 2019), DSCMP [25] (C. Tao+, ECCV 2020), RSBG [26] (J. Sun+, CVPR 2020), Social-STGCNN [27] (A. Mohamed+, CVPR 2020).
     ● Methods addressing problems other than interaction:
       Behavior CNN [28] (S. Yi+, ECCV 2016), Visual Path Prediction [29] (S. Huang+, TIP 2016), FPL [30] (T. Yagi+, CVPR 2018), OPPU [31] (A. Bhattacharyya+, CVPR 2018), Fast and Furious [32] (W. Luo+, CVPR 2018), OASE [33] (H. Minoura+, VISAPP 2019), Rule of the Road [34] (J. Hong+, CVPR 2019), Future Vehicle Localization [35] (Y. Yao+, ICRA 2019), DTP [36] (O. Styles+, IV 2019), TPNet [37] (L. Fang+, CVPR 2020), Multiverse [38] (J. Liang+, CVPR 2020), SimAug [39] (J. Liang+, ECCV 2020).
     Figure adapted from Minoura+, "Research Trends in Trajectory Prediction Considering Interactions Between Moving Objects Using Deep Learning" (in Japanese), IEICE, 2022.
  4. Base models of interaction-aware trajectory prediction
     ● Pooling model: builds a grid that represents the space around the prediction target and embeds the time-series features of each surrounding agent into the corresponding cells.
     ● Attention model: expresses the relation between the prediction target and each other agent as continuous weights that sum to 1 through a Softmax function, and multiplies each weight with that agent's feature vector.
     (A minimal code sketch of both mechanisms follows below.)
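To make the contrast concrete, here is a minimal sketch of the two aggregation schemes, written in PyTorch with illustrative grid and feature sizes; the function names and dimensions are assumptions, not taken from any of the surveyed papers.

```python
import torch
import torch.nn.functional as F

def social_pooling(target_pos, nbr_pos, nbr_hidden, grid=4, cell=0.5):
    """Pooling model: embed each neighbor's hidden state into a grid of
    cells centered on the prediction target (features are summed per cell)."""
    hdim = nbr_hidden.size(-1)
    pooled = torch.zeros(grid, grid, hdim)
    rel = (nbr_pos - target_pos) / cell + grid / 2      # continuous grid coords
    for r, h in zip(rel, nbr_hidden):
        gx, gy = r.floor().long().tolist()
        if 0 <= gx < grid and 0 <= gy < grid:
            pooled[gx, gy] += h
    return pooled.flatten()                             # (grid * grid * hdim,)

def attention_weighting(target_feat, nbr_feat):
    """Attention model: softmax weights over the neighbors (summing to 1),
    multiplied with each neighbor's features and summed."""
    weights = F.softmax(nbr_feat @ target_feat, dim=0)  # (N,)
    return (weights.unsqueeze(1) * nbr_feat).sum(dim=0)

# toy usage with 5 neighbors and 16-dimensional features
target_pos, nbr_pos = torch.zeros(2), torch.randn(5, 2)
nbr_hidden = torch.randn(5, 16)
grid_feature = social_pooling(target_pos, nbr_pos, nbr_hidden)
attn_feature = attention_weighting(nbr_hidden.mean(dim=0), nbr_hidden)
```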
  5. Purpose of this survey: a survey of trends in Deep Learning-based trajectory prediction methods.
     ● Summarize the characteristics of the prediction methods in each category:
       (1) interaction-aware: pooling models, attention models, other models; (2) non-interaction.
     ● Introduce the datasets and evaluation metrics used for quantitative evaluation.
     ● Use representative models to discuss the accuracy and prediction results of each model.
  6. Purpose of this survey (the agenda above, shown again).
  7. Social-LSTM [A. Alahi+, CVPR 2016]
     ● Predicts the trajectories of multiple pedestrians simultaneously.
     ● Proposes a Social Pooling layer (S-Pooling) to avoid collisions between pedestrians:
       - takes as input the positions of surrounding pedestrians and their hidden-layer outputs;
       - the spatial relations among pedestrians are carried into each LSTM's internal state at the next time step;
       - this enables collision-avoiding trajectory prediction. (A hedged sketch of the per-step update follows below.)
     (Qualitative figures compare Linear, Social-LSTM, SF [73], and ground truth.)
     A. Alahi, et al., “Social LSTM: Human Trajectory Prediction in Crowded Spaces,” CVPR, 2016.
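As a rough illustration of how S-Pooling enters the recurrence, the sketch below embeds the target's position and a pooled social tensor and feeds them to an LSTM cell; layer sizes, the `grid_cells` parameter, and the module name are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SocialLSTMStep(nn.Module):
    def __init__(self, hidden=64, embed=32, grid_cells=16):
        super().__init__()
        self.pos_embed = nn.Linear(2, embed)
        self.social_embed = nn.Linear(grid_cells * hidden, embed)
        self.cell = nn.LSTMCell(2 * embed, hidden)
        self.out = nn.Linear(hidden, 2)   # next (x, y) offset

    def forward(self, pos, social_tensor, state):
        """pos: (N, 2); social_tensor: (N, grid_cells*hidden); state: (h, c)."""
        e = torch.relu(self.pos_embed(pos))
        a = torch.relu(self.social_embed(social_tensor))
        h, c = self.cell(torch.cat([e, a], dim=-1), state)
        return self.out(h), (h, c)

# toy usage: 3 pedestrians, one prediction step
step = SocialLSTMStep()
N = 3
state = (torch.zeros(N, 64), torch.zeros(N, 64))
pred, state = step(torch.randn(N, 2), torch.randn(N, 16 * 64), state)
```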
  8. DESIRE [N. Lee+, CVPR 2017]
     ● Considers the surrounding scene context in addition to inter-agent interaction:
       - realizes predictions that avoid obstacle regions such as intersections and road edges.
     ● Encoding with a CVAE makes it possible to predict multiple trajectories; a Ranking & Refinement Module ranks the predicted trajectories.
     ● Iteratively refining the trajectories improves prediction accuracy. (A sampling-and-ranking sketch follows below.)
     (Architecture: a CVAE-based RNN encoder-decoder Sample Generation Module followed by an IOC-style Ranking & Refinement Module with iterative feedback; KITTI examples compare Top-1 / Top-10 predictions with and without interaction against the ground truth.)
     N. Lee, et al., “DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents,” CVPR, 2017.
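The sample-then-rank idea can be sketched as follows; the module below is a loose stand-in for DESIRE's Sample Generation and Ranking modules, with assumed sizes and a plain linear scorer instead of the paper's IOC-based reward.

```python
import torch
import torch.nn as nn

class SampleAndRank(nn.Module):
    def __init__(self, hidden=48, z_dim=16, horizon=12, K=20):
        super().__init__()
        self.K, self.horizon, self.z_dim = K, horizon, z_dim
        self.past_enc = nn.GRU(2, hidden, batch_first=True)
        self.dec_cell = nn.GRUCell(z_dim, hidden)
        self.to_xy = nn.Linear(hidden, 2)
        self.score = nn.Linear(hidden, 1)          # stand-in for the IOC-style scorer

    def forward(self, past):                       # past: (1, T_obs, 2)
        _, h = self.past_enc(past)                 # (1, 1, hidden)
        h = h.squeeze(0).repeat(self.K, 1)         # one state per candidate: (K, hidden)
        z = torch.randn(self.K, self.z_dim)        # CVAE-style latent samples at test time
        steps = []
        for _ in range(self.horizon):
            h = self.dec_cell(z, h)
            steps.append(self.to_xy(h))
        traj = torch.stack(steps, dim=1)           # (K, horizon, 2) candidate futures
        scores = self.score(h).squeeze(-1)         # rank candidates by final state
        return traj, scores

model = SampleAndRank()
cands, scores = model(torch.randn(1, 8, 2))
best = cands[scores.argmax()]                      # top-ranked future to refine further
```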
  9. Convolutional Social Pooling [N. Deo+, CVPRW 2018]
     ● Prediction method that considers interactions between adjacent vehicles on a highway.
     ● Proposes Convolutional Social Pooling, which gives spatial meaning to the interaction features:
       - trajectory features from the LSTM encoder are stored in a fixed-size Social Tensor;
       - a CNN extracts interaction features from the tensor;
       - these are concatenated with the predicted vehicle's features, and an LSTM decoder outputs a maneuver-based multi-modal predictive distribution. (A sketch follows below.)
     N. Deo, et al., “Convolutional Social Pooling for Vehicle Trajectory Prediction,” CVPRW, 2018.
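A hedged sketch of the social-tensor construction and the convolutional pooling on top of it; the grid shape, channel counts, and pooling sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

hidden = 64
grid_h, grid_w = 13, 3                        # cells around the target vehicle

social_tensor = torch.zeros(1, hidden, grid_h, grid_w)
# place each neighbor's LSTM-encoder state into its grid cell (toy example)
neighbors = {(2, 0): torch.randn(hidden), (7, 2): torch.randn(hidden)}
for (r, c), state in neighbors.items():
    social_tensor[0, :, r, c] = state

conv_pool = nn.Sequential(
    nn.Conv2d(hidden, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveMaxPool2d((3, 1)), nn.Flatten(),
)
interaction = conv_pool(social_tensor)        # (1, 16*3*1) interaction feature
target_feat = torch.randn(1, hidden)          # encoder state of the predicted vehicle
decoder_in = torch.cat([interaction, target_feat], dim=-1)  # fed to the LSTM decoder
```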
  10. MX-LSTM [I. Hasan+, CVPR 2018]
     ● Uses the pedestrian's gaze (head orientation) for trajectory prediction.
     ● Pools only the agents inside the view frustum centered on the head:
       - the agents to pool are selected from the target's head direction and the distance to each agent.
     ● Trajectory, head orientation, and interaction features are fed to an LSTM:
       - enables predictions that avoid collisions with agents inside the view frustum;
       - changing the gaze information arbitrarily allows predicting trajectories heading in an arbitrary direction.
     I. Hasan, et al., “MX-LSTM: mixing tracklets and vislets to jointly forecast trajectories and head poses,” CVPR, 2018.
  11. Group-LSTM [N. Bisagno+, ECCVW 2018]
     ● Trajectory prediction that considers group-level interactions.
     ● Pedestrians with similar motion tendencies are regarded as one group (identified with coherent filtering).
     ● Only individuals outside the target's own group are pooled:
       - predicts trajectories that avoid collisions with other groups.
     N. Bisagno, et al., “Group LSTM: Group Trajectory Prediction in Crowded Scenarios,” ECCVW, 2018.
  12. Social GAN [A. Gupta+, CVPR 2018]
     ● Predicts multiple trajectories with a GAN.
     ● Generator: samples multiple predicted trajectories.
       - A Pooling Module computes interaction features from the LSTM-encoder features.
       - Max pooling over the relative distances between agents and the LSTM outputs extracts the agents that matter for the prediction.
       - The pooled features are concatenated with a noise vector, and an LSTM decoder outputs multiple future trajectories. (A sketch of this pooling step follows below.)
     ● Discriminator: classifies predicted versus real trajectories.
       - Adversarial training is expected to make the generator produce predictions that pass as real trajectories.
     A. Gupta, et al., “Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks,” CVPR, 2018.
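The pooling step can be sketched as below: relative positions are embedded, combined with the neighbors' encoder states, and max-pooled before being concatenated with a noise vector. All sizes and module names are assumptions.

```python
import torch
import torch.nn as nn

hidden, embed, noise_dim = 32, 16, 8
pos_mlp = nn.Sequential(nn.Linear(2, embed), nn.ReLU())
mix_mlp = nn.Sequential(nn.Linear(embed + hidden, hidden), nn.ReLU())

target_pos = torch.tensor([1.0, 2.0])
nbr_pos = torch.randn(4, 2)                    # 4 other pedestrians
nbr_hidden = torch.randn(4, hidden)            # their LSTM-encoder states

rel = nbr_pos - target_pos                     # relative positions to the target
pooled = mix_mlp(torch.cat([pos_mlp(rel), nbr_hidden], dim=-1)).max(dim=0).values

z = torch.randn(noise_dim)                     # noise vector for multimodality
decoder_input = torch.cat([pooled, z])         # input to the LSTM decoder
```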
  13. Multi-Agent Tensor Fusion (MATF) [T. Zhao+, CVPR 2019]
     ● Considers interactions with heterogeneous agents such as pedestrians and vehicles.
     ● Jointly models the scene context in addition to the interactions:
       - can predict trajectories that avoid both dynamic and static objects.
     ● Multi-Agent Tensor Fusion:
       - a CNN extracts scene context features;
       - each agent's LSTM output is placed into a spatial grid according to its position;
       - the context features and the spatial grid are concatenated along the channel axis and fused by a CNN;
       - an LSTM decoder predicts trajectories from the fused features.
     (Stanford Drone ablation figure legend: input, ground truth, prediction.)
     T. Zhao, et al., “Multi-Agent Tensor Fusion for Contextual Trajectory Prediction,” CVPR, 2019.
  14. Reciprocal Network [S. Hao+, CVPR 2020]
     ● Trajectory prediction based on reciprocal learning of two coupled networks.
       - Forward Prediction Network: the usual direction (observation → prediction).
       - Backward Prediction Network: the reverse direction (prediction → observation).
     ● Builds a model based on reciprocal constraints and the concept of adversarial attacks:
       - the input trajectory is modified iteratively;
       - forcing consistency with the model output yields a new concept the authors call a reciprocal attack.
     S. Hao, et al., “Reciprocal Learning Networks for Human Trajectory Prediction,” CVPR, 2020.
  15. Purpose of this survey (the agenda, shown again).
  16. Social Attention [A. Vemula+, ICRA 2018]
     ● Trajectory prediction that extends a graph structure along the spatio-temporal axes.
     ● Node: each agent's position. Edge: the spatial relation between agents, and the agent's own information propagated along the temporal direction.
     ● Attention computed from the nodes and edges identifies the agents to attend to:
       - enables predictions that avoid the attended agents;
       - the attention weights also provide a visual explanation.
     A. Vemula, et al., “Social Attention: Modeling Attention in Human Crowds,” ICRA, 2018.
  17. CIDNN [Y. Xu+, CVPR 2018]
     ● Estimates, with attention, the degree of risk posed by each agent's motion and weights the motion features accordingly.
     ● A Motion Encoder Module encodes each agent's motion; a Location Encoder Module encodes each agent's location.
       - The inner product between the target and every other agent, passed through a Softmax, weights the other agents' features.
     ● The two modules' outputs are combined to predict the displacement from the next time step onward. (A sketch of this weighting follows below.)
     (Qualitative figure legend: history trajectory, ground truth, prediction.)
     Y. Xu, et al., “Encoding Crowd Interaction with Deep Neural Network for Pedestrian Trajectory Prediction,” CVPR, 2018.
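A minimal sketch of this inner-product-plus-softmax weighting, with assumed embedding sizes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

loc_enc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 32))
N = 5
positions = torch.randn(N, 2)                 # current positions, index 0 = target
motion_feat = torch.randn(N, 64)              # per-pedestrian LSTM motion features

loc = loc_enc(positions)                      # (N, 32) location embeddings
affinity = loc[1:] @ loc[0]                   # inner product with the target: (N-1,)
weights = F.softmax(affinity, dim=0)          # attention over the other pedestrians
crowd_feat = (weights.unsqueeze(1) * motion_feat[1:]).sum(dim=0)  # (64,)
# crowd_feat is combined with the target's own features to predict displacement
```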
  18. GD-GAN [T. Fernando+, ACCV 2018]
     ● Pedestrian trajectory prediction and group detection with Social-GAN as the base model.
     ● Trajectory prediction:
       - the GAN is essentially the same model as Social-GAN;
       - interactions are modeled with soft attention.
     ● Group detection:
       - the predicted trajectories are reduced in dimension with t-SNE;
       - DBSCAN clusters nearby pedestrians into groups.
     T. Fernando, et al., “GD-GAN: Generative Adversarial Networks for Trajectory Prediction and Group Detection in Crowds,” ACCV, 2018.
  19. SR-LSTM [P. Zhang+, CVPR 2019]
     ● Refines the target's future prediction using the interaction information at the current time step.
     ● Two mechanisms inside the States Refinement module yield accurate predictions:
       - Pedestrian-aware attention (PA): prevents collisions with other agents by attending to the agents near the prediction target;
       - Motion gate (MG): lets the target select trajectory features based on other agents' motions, e.g., agents it is likely to collide with.
     P. Zhang, et al., “SR-LSTM: State Refinement for LSTM towards Pedestrian Trajectory Prediction,” CVPR, 2019.
  20. Next [J. Liang+, CVPR 2019]
     ● Proposes a model that predicts future trajectories and activities jointly.
     ● Person Behavior Module: encodes the pedestrian's appearance and skeleton information.
     ● Person Interaction Module: encodes the surrounding static scene and object (e.g., vehicle) information.
     ● Visual Feature Tensor Q: encodes the two features above together with the past trajectory.
     ● Trajectory Generator: predicts the future trajectory; Activity Prediction: predicts the activity at the final prediction step.
     J. Liang, et al., “Peeking into the Future: Predicting Future Person Activities and Locations in Videos,” CVPR, 2019.
  21. SoPhie [A. Sadeghian+, CVPR 2019]
     ● Considers static scene information in addition to pedestrian-pedestrian interaction.
     ● Physical Attention: estimates attention over the static scene; Social Attention: estimates attention over the dynamic agents.
     ● Future trajectories are predicted by an LSTM-based GAN module from both attentions and the LSTM encoder outputs.
     A. Sadeghian, et al., “SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints,” CVPR, 2019.
  22. STGAT [Y. Huang+, ICCV 2019]
     ● Propagates interaction features along the temporal direction.
     ● Applies a Graph Attention Network (GAT) to model the interactions:
       - GAT: attention-based graph convolutional networks that operate on the graph structure;
       - the attention mechanism learns the importance of the relations with the other agents in the scene.
     ● Propagating the GAT features along time captures spatio-temporal interactions:
       - information about potentially colliding agents can be derived from the past trajectories. (A single-head graph-attention sketch follows below.)
     Y. Huang, et al., “STGAT: Modeling Spatial-Temporal Interactions for Human Trajectory Prediction,” ICCV, 2019.
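For reference, a single-head graph-attention layer in the spirit of GAT over a fully connected scene graph might look like the sketch below; feature sizes are assumptions, and STGAT additionally runs LSTMs over the resulting features along time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    def __init__(self, in_dim=32, out_dim=32):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h):                       # h: (N, in_dim), N agents
        Wh = self.W(h)                          # (N, out_dim)
        N = Wh.size(0)
        pairs = torch.cat([Wh.unsqueeze(1).expand(N, N, -1),
                           Wh.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1), 0.2)   # (N, N) pair scores
        alpha = F.softmax(e, dim=1)             # attention over neighbors
        return alpha @ Wh                       # (N, out_dim) aggregated features

gat = GraphAttentionLayer()
agent_feats = torch.randn(6, 32)                # per-agent trajectory features
spatial_feats = gat(agent_feats)                # then propagated along time in STGAT
```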
  23. PECNet [K. Mangalam+, ECCV 2020]
     ● Predicted Endpoint Conditioned Network (PECNet): trajectory prediction that emphasizes learning the endpoint (final destination).
       - An endpoint VAE predicts the endpoint, which is concatenated with the Past Encoding output (concat encoding).
       - The concatenated features produce the parameter features used inside Social Pooling.
       - A pedestrian × pedestrian Social Mask computes the pedestrian interactions.
       - The future trajectory is predicted by Pfuture from the concat encoding and the interaction features. (An endpoint-conditioning sketch follows below.)
     (Qualitative figure legend: input, ground truth, prediction.)
     K. Mangalam, et al., “It is Not the Journey but the Destination: Endpoint Conditioned Trajectory Prediction,” ECCV, 2020.
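A loose sketch of endpoint conditioning with an assumed toy architecture; it omits the social mask and keeps only the idea of decoding the future conditioned on a VAE-sampled endpoint.

```python
import torch
import torch.nn as nn

class EndpointConditioned(nn.Module):
    def __init__(self, hid=64, z=16, horizon=12):
        super().__init__()
        self.past = nn.Sequential(nn.Linear(8 * 2, hid), nn.ReLU())   # 8 observed steps
        self.to_latent = nn.Linear(hid, 2 * z)                        # mu, logvar
        self.endpoint_dec = nn.Linear(hid + z, 2)                     # endpoint (x, y)
        self.future_dec = nn.Linear(hid + 2, horizon * 2)

    def forward(self, obs):                     # obs: (B, 8, 2)
        p = self.past(obs.flatten(1))
        mu, logvar = self.to_latent(p).chunk(2, dim=-1)
        zs = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        endpoint = self.endpoint_dec(torch.cat([p, zs], dim=-1))
        future = self.future_dec(torch.cat([p, endpoint], dim=-1))
        return endpoint, future.view(obs.size(0), -1, 2)

model = EndpointConditioned()
goal, traj = model(torch.randn(4, 8, 2))        # repeated calls give multiple futures
```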
  24. Trajectron [B. Ivanovic+, ICCV 2019]
     ● Efficiently models multiple agents with a dynamic graph structure.
     ● NHE (Node History Encoder): feeds each node's observed features to an LSTM.
     ● NFE (Node Future Encoder): during training, a Bi-LSTM encodes the node's ground-truth future trajectory.
     ● EE (Edge Encoder): computes attention over all agents within a certain range to obtain the important edge information; the edge information changes at every time step.
     ● A decoder predicts trajectories from these features:
       - an internal CVAE produces multimodal trajectories;
       - a Gaussian Mixture Model refines the predicted trajectories.
     ● Trajectron++ [T. Salzmann+, ECCV 2020], which adds environment information, has since been released.
     T. Salzmann, et al., “Trajectron++: Dynamically-Feasible Trajectory Forecasting with Heterogeneous Data,” ECCV, 2020.
     B. Ivanovic, et al., “The Trajectron: Probabilistic Multi-Agent Trajectory Modeling with Dynamic Spatiotemporal Graphs,” ICCV, 2019.
  25. STAR [C. Yu+, ECCV 2020]
     ● The LSTMs used for trajectory prediction have two problems:
       - LSTMs have difficulty modeling complex temporal dependencies;
       - attention-based prediction methods do not fully model the interactions.
     ● Extends the Transformer to spatio-temporal attention and applies it to trajectory prediction:
       - a Temporal Transformer encodes the trajectory features;
       - a Spatial Transformer extracts the interactions independently at each time step.
     ● Using the two Transformers greatly improves accuracy over LSTM-based prediction methods. (A sketch with standard Transformer encoders follows below.)
     C. Yu, et al., “Spatio-Temporal Graph Transformer Networks for Pedestrian Trajectory Prediction,” ECCV, 2020.
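Using off-the-shelf PyTorch Transformer encoders, the temporal/spatial split can be sketched as follows; the sizes and the fusion layer are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

d_model, T, N = 32, 8, 5                         # feature size, time steps, pedestrians
embed = nn.Linear(2, d_model)
temporal = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model, nhead=4), num_layers=1)
spatial = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model, nhead=4), num_layers=1)
fuse = nn.Linear(2 * d_model, d_model)

traj = torch.randn(T, N, 2)                      # observed positions
x = embed(traj)                                  # (T, N, d_model)

h_time = temporal(x)                             # attends over T for each pedestrian
h_space = spatial(x.transpose(0, 1)).transpose(0, 1)   # attends over N at each step

h = fuse(torch.cat([h_time, h_space], dim=-1))   # (T, N, d_model) spatio-temporal feature
```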
  26. Social-BiGAT [V. Kosaraju+, NeurIPS 2019]
     ● Simply injecting a noise vector produces predictions with excessively high variance:
       - existing methods do not learn a truly multimodal distribution.
     ● Learns a latent mapping between predicted trajectories and noise vectors:
       - a trajectory generated from a noise vector is fed back into an LSTM encoder;
       - it is trained to map back to (resemble) the original noise vector;
       - this makes it possible to generate genuinely multimodal trajectories.
     V. Kosaraju, et al., “Social-BiGAT: Multimodal Trajectory Forecasting using Bicycle-GAN and Graph Attention Networks,” NeurIPS, 2019.
  27. DSCMP [C. Tao+, ECCV 2020]
     ● Explicitly models both the relations among moving agents and the relation to the surrounding static scene.
     ● Static scene: a segmentation map produced by a pre-trained U-Net.
     ● Agent-agent relations: captured by the Non-local block inside the Social-aware Context Module (SCM):
       - the Non-local block is a self-attention structure originally proposed for video recognition;
       - it captures interaction features from the tensor features of a spatial region. (A non-local block sketch follows below.)
     C. Tao, et al., “Dynamic and Static Context-Aware LSTM for Multi-agent Motion Prediction,” ECCV, 2020.
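A hedged sketch of a non-local (self-attention) block of the kind the SCM builds on; the channel sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)   # query
        self.phi = nn.Conv2d(channels, inter, 1)     # key
        self.g = nn.Conv2d(channels, inter, 1)       # value
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):                            # x: (B, C, H, W)
        B, C, H, W = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = F.softmax(q @ k, dim=-1)                # (B, HW, HW) pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(B, -1, H, W)
        return x + self.out(y)                         # residual connection

block = NonLocalBlock()
feat = torch.randn(2, 64, 8, 8)                  # spatial feature tensor of a region
out = block(feat)
```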
  28. FvTraj [H. Bi+, ECCV 2020]
     ● Trajectory prediction that uses bird's-eye-view pedestrian trajectories together with first-person-view images.
     ● The bird's-eye-view data are rendered into first-person views with a simulator (FvSim).
     ● Two attention mechanisms, both built on self-attention:
       - Social-aware Attention Module: models the pedestrian-pedestrian interactions;
       - View-aware Attention Module: captures the relation between the pedestrian's trajectory and the visual features of the first-person view.
     H. Bi, et al., “How Can I See My Future? FvTraj: Using First-person View for Pedestrian Trajectory Prediction,” ECCV, 2020.
  29. Purpose of this survey (the agenda, shown again).
  30. RGM [C. Choi+, ICCV 2019]
     ● Proposes a Relation Gate Module (RGM) inspired by the gate structure of the LSTM.
     ● From spatio-temporal features it captures both the relations among moving agents and the relations to the surrounding static scene:
       - the agent-agent interactions are derived from the objects visible in the image, and the static scene is handled in the same way.
     ● The spatio-temporal features of each relation are obtained with two encoders:
       - Spatial Behavior Encoder: 2D convolutions compute the per-time-step interactions of each relation;
       - Temporal Interaction Encoder: 3D convolutions capture the spatio-temporal features.
     (Figure captions: overview of the RGM; extraction of the spatio-temporal features.)
     C. Choi, et al., “Looking to relations for future trajectory forecast,” ICCV, 2019.
  31. Social-STGCNN [A. Mohamed, CVPR, 2020] (p. 34)
    ● Models pedestrians with a spatial-temporal graph
    ● A Graph Convolutional Network (GCN) extracts interaction features
      - Interaction information is obtained from the adjacency matrix
    ● A Temporal Convolutional Network (TCN) takes the GCN features and outputs the predictive distribution
      - An LSTM emits the predicted path step by step, whereas the TCN emits all future steps in parallel
      - This improves inference speed substantially (a minimal sketch of the GCN-plus-TCN layout follows)
    [Figure 2 of the paper: given T frames, a spatio-temporal graph G = (V, A) is built, passed through ST-GCNNs to create a spatio-temporal embedding, and TXP-CNNs predict the future trajectories.]
    A. Mohamed, et al., "Social-STGCNN: A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction," CVPR, 2020.
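A rough PyTorch sketch of the idea described above (not the authors' implementation; the normalized adjacency, hidden size, and horizon lengths are assumptions):

import torch
import torch.nn as nn

class GraphTemporalPredictor(nn.Module):
    """Minimal sketch: a graph convolution aggregates neighboring pedestrians,
    then a temporal convolution emits all future steps in one shot."""
    def __init__(self, in_dim=2, hid_dim=32, t_obs=8, t_pred=12):
        super().__init__()
        self.w_gcn = nn.Linear(in_dim, hid_dim)                        # shared node transform
        self.tcn = nn.Conv1d(hid_dim, 2 * t_pred, kernel_size=t_obs)   # collapses the observed time axis
        self.t_pred = t_pred

    def forward(self, x, adj):
        # x:   (N, T_obs, 2)  observed xy offsets per pedestrian
        # adj: (T_obs, N, N)  normalized adjacency per time step (e.g., inverse-distance weights)
        h = self.w_gcn(x)                             # (N, T_obs, H)
        h = torch.einsum('tnm,mth->nth', adj, h)      # per-time-step neighborhood aggregation
        h = torch.relu(h).permute(0, 2, 1)            # (N, H, T_obs)
        out = self.tcn(h).squeeze(-1)                 # (N, 2*T_pred): all future steps in parallel
        return out.view(-1, self.t_pred, 2)           # (N, T_pred, 2)

# usage sketch: 5 pedestrians, 8 observed steps
# x = torch.randn(5, 8, 2); adj = torch.softmax(torch.randn(8, 5, 5), dim=-1)
# future = GraphTemporalPredictor()(x, adj)          # (5, 12, 2)

Because the temporal convolution produces every future step in a single forward pass, there is no sequential unrolling as in an LSTM decoder, which is the source of the speed-up noted on the slide.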
  32. RSBG [J. Sun, CVPR, 2020] (p. 35)
    ● Models group-based interactions by examining the relations between pedestrians
      - Groups: pedestrians heading to the same destination, distant pedestrians moving in the same direction, etc.
      - Group membership is annotated by humans so that groups can be discriminated
    ● A Relational Social Representation captures the group interactions
      - It is concatenated with the surrounding static environment information and the past trajectories to predict future paths
    [Architecture labels: coordinates and image patches → BiLSTM / CNN → RSBG Generator (GCN) → Relational Social Representation → LSTM decoder]
    J. Sun, et al., "Recursive Social Behavior Graph for Trajectory Prediction," CVPR, 2020.
  33. PRECOG [N. Rhinehart, ICCV, 2019] (p. 36)
    ● Proposes PREdiction Conditioned On Goals (PRECOG), which conditions the prediction on the target agent's goal at inference time
      - Goals: go ahead, stop, etc.
      - Conditioning on a goal changes the predicted paths of the other agents
    ● Conditioning is realized with Estimating Social-forecast Probabilities (ESP), an RNN-based likelihood model
    [Figures: the factorized latent-variable model for forecasting and planning; on nuScenes, conditioning Car 1 on "ahead" vs. "stop" changes the forecast for Car 3]
    N. Rhinehart, et al., "PRECOG: Prediction conditioned on goals in visual multi-agent settings," ICCV, 2019.
  34. Purpose of this survey (p. 37)
    Survey of trends in trajectory prediction methods based on Deep Learning
    ● Summarize the characteristics of the prediction methods in each category
      - With interaction
        - Pooling models
        - Attention models
        - Other models
      - Without interaction
    ● Introduce the datasets and evaluation metrics used for quantitative evaluation
    ● Discuss the accuracy and prediction results of representative models
  35. Behavior-CNN [S. Yi, ECCV, 2016] (p. 38)
    ● A CNN-based prediction method
    ● Past trajectory information is encoded and stored in a sparse voxel (displacement volume)
    ● Several convolution and max-pooling layers are applied, and a deconvolution outputs the predicted paths
    ● A learnable location bias map is combined element-wise with the latent feature representation, injecting scene-specific object information
      - This accounts for pedestrian behavior that changes from scene to scene (a sketch of the input encoding follows)
    [Figure: system flow - past walking paths → encoded displacement volume → Behavior-CNN → predicted displacement volume → decoded future paths]
    S. Yi, et al., "Pedestrian Behavior Understanding and Prediction with Deep Neural Networks," ECCV, 2016.
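A rough sketch of the displacement-volume input encoding described above (the grid resolution, scene size, and lack of value normalization are assumptions; the original work additionally normalizes the encoded displacements):

import numpy as np

def encode_displacement_volume(trajs, grid_hw=(64, 64), scene_hw=(480, 640)):
    """Sketch: place each pedestrian's stacked past displacements at the grid
    cell of its current location, giving a sparse (H, W, 2*M) input volume.
    trajs: (num_pedestrians, M+1, 2) past pixel positions, oldest first."""
    H, W = grid_hw
    M = trajs.shape[1] - 1                          # number of past displacement steps
    vol = np.zeros((H, W, 2 * M), dtype=np.float32)
    for traj in trajs:
        u = int(traj[-1, 0] / scene_hw[1] * W)      # column of the current position
        v = int(traj[-1, 1] / scene_hw[0] * H)      # row of the current position
        disp = (traj[1:] - traj[:-1]).reshape(-1)   # (2*M,) stacked dx, dy over time
        vol[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)] = disp
    return vol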
  36. Visual Path Prediction [S. Huang, TIP, 2016] (p. 39)
    ● Expresses where the target may move with a reward map over the local regions around the target
      - Spatial Matching Network: estimates the reward map from the similarity between patch regions
      - Orientation Network: estimates the facing direction of the target
    [Figures: framework overview and qualitative results - reward maps mark roads and sidewalks as high reward and grass, trees, and buildings as low reward; the predicted top-10 paths follow the road and avoid obstacles]
    S. Huang, et al., "Deep learning driven visual path prediction from a single image," TIP, 2016.
  37. Future localization in first-person videos [T. Yagi, CVPR, 2018] (p. 40)
    ● Location prediction for oncoming pedestrians in first-person view
    ● Exploits cues specific to first-person video:
      - Ego-motion of the camera wearer, which affects the observed position of the oncoming pedestrian
      - Scale of the oncoming pedestrian
      - Pose of the oncoming pedestrian
    ● A multi-stream model built on these three cues predicts the future locations (a sketch of the stream layout follows)
    [Figures: given T_prev observed frames, the model predicts the target's locations over the next T_future frames; qualitative comparison against Social LSTM and a nearest-neighbor baseline]
    T. Yagi, et al., "Future Person Localization in First-Person Videos," CVPR, 2018.
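A minimal sketch of the three-stream layout (channel sizes and the use of plain 1-D convolutions are assumptions; the actual network is a convolution-deconvolution architecture and the horizons may differ):

import torch
import torch.nn as nn

class MultiStreamLocalizer(nn.Module):
    """Sketch: encode the location-scale, ego-motion and pose streams separately,
    concatenate them channel-wise, and decode future location offsets."""
    def __init__(self, hid=64):
        super().__init__()
        self.enc_loc = nn.Conv1d(3, hid, kernel_size=3, padding=1)    # (x, y, scale)
        self.enc_ego = nn.Conv1d(6, hid, kernel_size=3, padding=1)    # camera ego-motion
        self.enc_pose = nn.Conv1d(36, hid, kernel_size=3, padding=1)  # 18 joints * (x, y)
        self.dec = nn.Sequential(
            nn.Conv1d(3 * hid, hid, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hid, 2, kernel_size=1),                         # per-step (dx, dy)
        )

    def forward(self, loc, ego, pose):
        # loc: (B, 3, T), ego: (B, 6, T), pose: (B, 36, T); T observed steps
        h = torch.cat([self.enc_loc(loc), self.enc_ego(ego), self.enc_pose(pose)], dim=1)
        out = self.dec(torch.relu(h))    # (B, 2, T); here T future steps are assumed equal to T observed
        return out.permute(0, 2, 1)      # offsets relative to the last observed location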
  38. OPPU [A. Bhattacharyya, CVPR, 2018] (p. 41)
    ● Predicts the future positions of pedestrians seen from an on-board camera
      - Inputs: the pedestrian's bounding box, the ego-vehicle odometry, and the on-board camera image
      - Output: the pedestrian's future bounding boxes
    [Figures: two-stream architecture for future bounding-box prediction; predictive distributions visualized as heat maps]
    A. Bhattacharyya, et al., "Long-Term On-Board Prediction of People in Traffic Scenes under Uncertainty," CVPR, 2018.
  39. Fast and Furious [W. Luo, CVPR, 2018] (p. 42)
    ● Proposes a model that jointly infers vehicle detection, tracking, and trajectory prediction
    ● Uses 3D point-cloud data as input
      - Sparse feature representation in 3D space
      - Keeps the computational cost of the network low
      - The three tasks can be computed simultaneously in real time
    W. Luo, et al., "Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting with a Single Convolutional Net," CVPR, 2018.
  40. OASE [H. Minoura, VISAPP, 2019] (p. 43)
    ● Treats the different classes of moving objects as attributes and predicts attribute-specific paths
    ● Takes into account the latent characteristics of each class of moving object
      - Pedestrians walk on sidewalks or roadways
      - Cars drive on roadways
    ● Attributes are represented as one-hot vectors, so no per-attribute model is needed
      - This keeps the computational cost down
    ● Scene labels are used to predict the characteristic path of each attribute
    [Figure: qualitative results for bicycle, pedestrian, and car, comparing trajectory-only, +attribute, +environment, and +attribute+environment variants against a Kalman filter]
    H. Minoura, et al., "Path predictions using object attributes and semantic environment," VISAPP, 2019.
  41. Rules of the Road [J. Hong, CVPR, 2019] (p. 44)
    ● Trajectory prediction for vehicles on public roads
    ● Proposes a CNN + GRU model that accounts for different scene contexts
      - The positions of the target vehicle and the other vehicles, the context, and the road information are concatenated along the channel dimension
      - The concatenated tensor is encoded by a CNN at every time step
      - The encoded features are propagated through time by a GRU to predict the future path (a minimal sketch follows)
    [Figures: entity and world context representation (example scene, target position, other vehicles, context, road information); Gaussian-regression and GMM-CVAE decoding examples]
    J. Hong, et al., "Rules of the Road: Predicting Driving Behavior with a Convolutional Model of Semantic Interactions," CVPR, 2019.
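A minimal sketch of the per-time-step CNN encoding followed by a GRU, as described above (channel layout, raster size, and the regression head are assumptions):

import torch
import torch.nn as nn

class RasterGRUPredictor(nn.Module):
    """Sketch: each time step's rasterized world tensor (target, other agents,
    context, road channels stacked) is encoded by a small CNN; a GRU carries
    the encodings through time and a head regresses future waypoints."""
    def __init__(self, in_ch=4, feat=128, t_pred=12):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # (B*T, 64, 1, 1)
        )
        self.gru = nn.GRU(64, feat, batch_first=True)
        self.head = nn.Linear(feat, 2 * t_pred)
        self.t_pred = t_pred

    def forward(self, rasters):
        # rasters: (B, T_obs, C, H, W) channel-stacked world tensors, one per time step
        B, T, C, H, W = rasters.shape
        z = self.cnn(rasters.reshape(B * T, C, H, W)).reshape(B, T, 64)
        _, h_n = self.gru(z)                             # h_n: (1, B, feat), last hidden state
        return self.head(h_n[-1]).view(B, self.t_pred, 2)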
  42. Future Vehicle Localization [Y. Yao, ICRA, 2019] (p. 45)
    ● A multi-stream GRU encoder-decoder for on-board camera views
      - Location-Scale Encoding: takes the location information (bounding box) of another vehicle
      - Motion-Appearance Encoding: takes only the bounding-box region cropped from optical-flow images
      - Future Ego-Motion Cue: takes the ego-motion of the own vehicle
      - Future Location-Scale Decoding: predicts the future location and scale from the three features above
    Y. Yao, et al., "Egocentric vision-based future vehicle localization for intelligent driving assistance systems," ICRA, 2019.
  43. DTP [O. Styles, IV, 2019] (p. 46)
    ● Predicts pedestrian paths from optical-flow images
      - Uses ResNet as the backbone model
    ● To reduce annotation cost, training data are augmented with existing pedestrian detection and tracking methods
      - Pre-training is performed on the unlabeled data
      - Fine-tuning on the target dataset then improves the prediction accuracy
    [Tables from the paper: input-modality comparison, model comparison with and without the constant-velocity correction term, and the impact of pre-training on BDD-10K with YOLOv3]
    O. Styles, et al., "Forecasting pedestrian trajectory with machine-annotated training data," IV, 2019.
  44. TPNet [L. Fang, CVPR, 2020] (p. 47)
    ● A two-stage trajectory prediction method for vehicles on public roads
    ● Stage 1: generate candidate future paths (proposals)
      - A CNN encoder-decoder regresses the end point
      - Proposals are generated by curve fitting from the end point and the past path
    ● Stage 2: classify and refine the best path among the candidates
      - Each proposal is scored by a classification model
      - The displacement of the path is regressed to refine it (a sketch of the proposal generation follows)
    [Figures: framework of TPNet; proposals generated around the regressed end point, using prior knowledge such as lane reference lines]
    L. Fang, et al., "TPNet: Trajectory Proposal Network for Motion Prediction," CVPR, 2020.
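A small sketch of the stage-1 proposal generation (cubic fitting of the past points plus a perturbed candidate end point; the grid of perturbations and the time parameterization are assumptions):

import numpy as np

def cubic_proposals(past_xy, end_point, t_obs, t_pred, offsets=(-1.0, 0.0, 1.0)):
    """Sketch: fit one cubic polynomial per perturbed end point and sample it at
    the future time steps to obtain candidate trajectories.
    past_xy: (T_obs, 2) observed positions; end_point: the regressed end point."""
    proposals = []
    t_past = np.arange(-t_obs + 1, 1)                 # ..., -2, -1, 0 (now)
    t_fut = np.arange(1, t_pred + 1)
    for dx in offsets:
        for dy in offsets:
            end = end_point + np.array([dx, dy])      # perturbation around the regressed end point
            ts = np.concatenate([t_past, [t_pred]])
            xs = np.concatenate([past_xy[:, 0], [end[0]]])
            ys = np.concatenate([past_xy[:, 1], [end[1]]])
            cx = np.polyfit(ts, xs, 3)                # cubic fit for x(t)
            cy = np.polyfit(ts, ys, 3)                # cubic fit for y(t)
            proposals.append(np.stack([np.polyval(cx, t_fut), np.polyval(cy, t_fut)], axis=1))
    return np.stack(proposals)                        # (num_proposals, T_pred, 2)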
  45. Multiverse [J. Liang, CVPR, 2020] (p. 48)
    ● Proposes Multiverse, which predicts multiple plausible paths
    ● Past semantic-segmentation labels and a fixed grid are fed to the History Encoder (HE)
      - A convolutional recurrent neural network encodes the spatio-temporal features
      - Using semantic labels as input makes the model robust to domain shift
    ● The HE output and the semantic labels of the last observed frame are fed to the Coarse Location Decoder (CLD)
      - A graph attention network (GAT) weights each grid cell, producing a weight map (heatmap) over the grid
    ● The Fine Location Decoder (FLD) stores an offset vector in each grid cell
      - As in the CLD, a GAT weights each cell
    ● Multiple future paths are obtained from the CLD and FLD outputs (a sketch of the coarse-plus-fine decoding follows)
    ● SimAug [J. Liang, ECCV, 2020], which transfers models trained on simulated data to real data, was presented as a follow-up
    [Figure 2 of the paper: history encoder, coarse location decoder over an H x W grid, and fine location decoder producing per-cell offsets, combined into a multimodal distribution over future locations]
    J. Liang, et al., "The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction," CVPR, 2020.
    J. Liang, et al., "SimAug: Learning Robust Representations from Simulation for Trajectory Prediction," ECCV, 2020.
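A tiny sketch of how a coarse per-cell heatmap and fine per-cell offsets can be combined into coordinates (the grid geometry and the top-k selection rule are assumptions):

import numpy as np

def decode_coarse_fine(heatmap, offsets, cell_size, top_k=3):
    """Sketch: pick the top-k grid cells from the coarse heatmap and refine each
    cell centre with the fine offset vector stored in that cell.
    heatmap: (H, W) scores; offsets: (H, W, 2) per-cell offset vectors."""
    H, W = heatmap.shape
    flat = heatmap.ravel()
    idx = np.argsort(flat)[::-1][:top_k]              # most likely cells first
    preds = []
    for i in idx:
        r, c = divmod(i, W)
        centre = np.array([(c + 0.5) * cell_size, (r + 0.5) * cell_size])
        preds.append(centre + offsets[r, c])          # coarse cell centre + fine offset
    return np.stack(preds), flat[idx]                 # candidate locations and their scores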
  46. Purpose of this survey (p. 49)
    Survey of trends in trajectory prediction methods based on Deep Learning
    ● Summarize the characteristics of the prediction methods in each category
      - With interaction
        - Pooling models
        - Attention models
      - Other than interaction
    ● Introduce the datasets and evaluation metrics used for quantitative evaluation
    ● Discuss the accuracy and prediction results of representative models
  47. Datasets most used in trajectory prediction (p. 50)
    ETH Dataset - pedestrians filmed in an urban area
    ● Samples: 750; scenes: 2; target classes: pedestrian
    UCY Dataset - pedestrians filmed in an urban area
    ● Samples: 786; scenes: 3; target classes: pedestrian
    Stanford Drone Dataset - filmed on the Stanford University campus
    ● Samples: 10,300; scenes: 8; target classes: pedestrian, car, cyclist, bus, skater, cart
    S. Pellegrini, et al., "You'll Never Walk Alone: Modeling Social Behavior for Multi-target Tracking," ICCV, 2009.
    A. Lerner, et al., "Crowds by Example," CGF, 2007.
    A. Robicquet, et al., "Learning Social Etiquette: Human Trajectory Understanding in Crowded Scenes," ECCV, 2016.
  48. Bird's-eye-view datasets (p. 51)
    Filmed with fixed cameras or drones
    ● Very large-scale datasets, because abundant data can be collected from this viewpoint
    Argoverse Dataset - filmed on public roads
    ● Samples: 300K; scenes: 113; target classes: car
    ● Additional information: lane information, map data, sensor data
    inD Dataset - filmed at intersections
    ● Samples: 13K; scenes: 4; target classes: pedestrian, car, cyclist
    Lyft Level 5 Dataset - filmed on public roads
    ● Samples: 3B; scenes: 170,000; target classes: pedestrian, car, cyclist
    ● Additional information: aerial imagery, semantic labels
    The Forking Paths Dataset - created in a simulator
    ● Samples: 0.7K; scenes: 7; target classes: pedestrian
    ● Additional information: multi-future path annotations, semantic labels
    M.F. Chang, et al., "Argoverse: 3D Tracking and Forecasting with Rich Maps," CVPR, 2019.
    J. Bock, et al., "The inD Dataset: A Drone Dataset of Naturalistic Road User Trajectories at German Intersections," CoRR, 2019.
    J. Houston, et al., "One Thousand and One Hours: Self-driving Motion Prediction Dataset," CoRR, 2020.
    J. Liang, et al., "The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction," CVPR, 2020.
  49. On-board camera datasets (p. 52)
    Aimed at predicting the paths of moving objects in front of the vehicle
    PIE Dataset - filmed on public roads
    ● Samples: 1.8K; scenes: 53; target classes: pedestrian
    ● Additional information: ego-vehicle information, infrastructure elements
    Apolloscape Dataset - filmed on public roads
    ● Samples: 81K; scenes: 100,000; target classes: pedestrian, car, cyclist
    TITAN Dataset - filmed on public roads
    ● Samples: 645K; scenes: 700; target classes: pedestrian, car, cyclist
    ● Additional information: action labels, pedestrian age
    nuScenes Dataset - filmed on public roads
    ● Samples: 645K; scenes: 700; target classes: truck, bicycle, car, etc.
    ● Additional information: sensor data, map data, point clouds, ego-motion
    Y. Ma, et al., "TrafficPredict: Trajectory Prediction for Heterogeneous Traffic-Agents," AAAI, 2019.
    A. Rasouli, et al., "PIE: A Large-Scale Dataset and Models for Pedestrian Intention Estimation and Trajectory Prediction," ICCV, 2019.
    S. Malla, et al., "TITAN: Future Forecast using Action Priors," CVPR, 2020.
    H. Caesar, et al., "nuScenes: A multimodal dataset for autonomous driving," CVPR, 2020.
  50. First-person-view dataset (p. 53)
    Aimed at predicting the path of the pedestrian ahead
    ● A wearable camera is mounted on the subject
    First-Person Locomotion Dataset - filmed on sidewalks
    ● Samples: 5K; scenes: 87; target classes: pedestrian
    ● Additional information: pose, ego-motion
    T. Yagi, et al., "Future Person Localization in First-Person Videos," CVPR, 2018.
  51. Evaluation metrics (p. 54)
    Displacement Error
    ● Euclidean distance error between the ground truth and the prediction
      - Average Displacement Error (ADE): average error over all predicted time steps
      - Final Displacement Error (FDE): error at the final predicted time step
    Negative log-likelihood
    ● Expected log-likelihood of the ground truth under the estimated distribution
      - Evaluating multiple predicted paths with ADE/FDE ignores their multimodality
      - Negative log-likelihood is therefore used as the metric for multi-modal prediction
    Mean Square Error
    ● Evaluates the centre coordinates of the predicted and ground-truth bounding boxes
      - Used for trajectory prediction in on-board camera videos
      - The overlap between the boxes can also be evaluated with an F-measure
    Collision rate
    ● Evaluates whether the predicted path collides with each kind of object
      - Dynamic objects: the other agents in the video
      - Static objects: obstacles such as buildings and trees
      - Displacement Error is averaged over all samples, so it cannot show for which predictions the interaction information is effective; the collision rate complements it
    Conventional evaluation: the two Displacement Errors. Additional evaluation used here: Displacement Error on non-linear paths and the collision rates with the two kinds of objects.
    (A compact sketch of ADE, FDE, and the collision checks follows.)
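A compact sketch of the metrics above (array shapes, the grid-based obstacle test, and the thresholds are assumptions):

import numpy as np

def ade_fde(pred, gt):
    """pred, gt: (T_pred, 2). Returns the average and final displacement errors."""
    err = np.linalg.norm(pred - gt, axis=1)
    return err.mean(), err[-1]

def dynamic_collision(pred, others, thresh):
    """others: (N_agents, T_pred, 2) ground-truth paths of the other agents.
    A collision is counted if any other agent comes closer than `thresh`."""
    d = np.linalg.norm(others - pred[None], axis=2)      # (N_agents, T_pred)
    return bool((d < thresh).any())

def static_collision(pred, obstacle_mask, cell_size):
    """obstacle_mask: (H, W) boolean map of buildings, trees, and other obstacles."""
    cells = np.floor(pred / cell_size).astype(int)
    rows = np.clip(cells[:, 1], 0, obstacle_mask.shape[0] - 1)
    cols = np.clip(cells[:, 0], 0, obstacle_mask.shape[1] - 1)
    return bool(obstacle_mask[rows, cols].any())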
  52. Purpose of this survey (p. 55)
    Survey of trends in trajectory prediction methods based on Deep Learning
    ● Summarize the characteristics of the prediction methods in each category
      - With interaction
        - Pooling models
        - Attention models
      - Other than interaction
    ● Introduce the datasets and evaluation metrics used for quantitative evaluation
    ● Discuss the accuracy and prediction results of representative models
  53. Evaluation experiments (p. 56)
    Accuracy is verified using representative models
    Experimental conditions
    ● Prediction targets: pedestrians
    ● Observation and prediction horizons: fixed lengths in seconds
    ● Evaluation metrics: Displacement Error and the collision rates
      - Methods that predict multiple paths use the best of the 20 sampled predictions
      - The collision threshold for dynamic objects is set in metres on ETH/UCY and in pixels on SDD
      - For the collision rate with static objects, the obstacles are buildings, roundabouts, and trees
    Evaluated models (interaction / environment / dataset):
    ● LSTM - no interaction - ETH/UCY, SDD
    ● RED - no interaction - ETH/UCY, SDD
    ● Social-LSTM (pooling model) - interaction - ETH/UCY, SDD
    ● Social-GAN (pooling model) - interaction - ETH/UCY, SDD
    ● STGAT (attention model) - interaction - ETH/UCY, SDD
    ● Trajectron (attention model) - interaction - ETH/UCY, SDD
    ● Env-LSTM - environment - SDD
    (A sketch of the best-of-N evaluation protocol follows.)
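A sketch of the best-of-N protocol used for the stochastic models (the sampler interface is an assumption; any generative predictor, e.g. a GAN- or CVAE-based one, could play the role of sample_fn):

import numpy as np

def best_of_n(sample_fn, obs, gt, n=20):
    """Sketch: draw n future trajectories from a stochastic model and report the
    ADE/FDE of the sample closest to the ground truth (the usual best-of-N rule)."""
    best_ade, best_fde = np.inf, np.inf
    for _ in range(n):
        pred = sample_fn(obs)                            # (T_pred, 2) sampled future path
        err = np.linalg.norm(pred - gt, axis=1)          # per-step displacement error
        if err.mean() < best_ade:
            best_ade, best_fde = err.mean(), err[-1]
    return best_ade, best_fde

# usage sketch: obs is the observed trajectory; gt is the (T_pred, 2) ground truth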
  54. Displacement Error on ETH/UCY (p. 58)
    [Table: ADE / FDE in metres per scene (C-ETH, ETH, HOTEL, UCY, ZARA1, ZARA2) and the average, for LSTM, RED, Social-LSTM, Social-GAN, STGAT, and Trajectron, in the single-model setting and the 20-outputs setting.]
    ● Single model: RED and Trajectron give the lowest ADE; RED and STGAT give the lowest FDE
    ● 20 outputs: STGAT gives the lowest ADE; STGAT and Trajectron give the lowest FDE
  55. Displacement Error on ETH/UCY (p. 59)
    [Same table as the previous slide: ADE / FDE in metres per scene, single-model and 20-outputs settings.]
    ● The attention models are more accurate than the pooling models
  56. Collision rates on ETH/UCY (p. 60)
    [Tables: collision rate with dynamic objects [%] and with static objects [%] for LSTM, RED, Social-LSTM, Social-GAN, STGAT, and Trajectron, in the single-model and 20-outputs settings.]
    ● STGAT has the lowest collision rates
      - STGAT captures the complex interactions among pedestrians spatio-temporally
      - Its attention captures temporal dynamics such as other agents approaching or moving away
  57. Example prediction results on ETH/UCY (p. 64)
    [Qualitative results for LSTM, RED, Social-LSTM, Social-GAN, STGAT, and Trajectron; legend: predicted path, observed path, ground truth.]
    ● Because much of the data consists of pedestrians moving almost linearly, nearly every method achieves good prediction accuracy here
  58. Displacement Error on SDD (p. 67)
    [Table: ADE / FDE in pixels per scene (bookstore, coupa, deathCircle, gates, hyang, little, nexus, quad) and the average, for LSTM, RED, Social-LSTM, Env-LSTM, Social-GAN, STGAT, and Trajectron, in the single-model and 20-outputs settings.]
    ● Single model: RED gives the lowest ADE; RED and STGAT give the lowest FDE
    ● 20 outputs: STGAT gives the lowest errors
  59. Why is the prediction error of RED low on SDD? (p. 68)
    The results are affected by where the scenes are filmed
    ● ETH/UCY is filmed from a low viewpoint
      - Pedestrian paths appear sensitive (strongly curved by interactions)
    ● SDD is filmed from a high viewpoint
      - Pedestrian paths appear insensitive
    ● Pedestrian motion then looks nearly linear, so RED, which does not consider interactions, attains a low prediction error
    [Figure: example frames from ETH/UCY and SDD]
  60. Collision rates on SDD (p. 69)
    [Tables: collision rate with dynamic objects [%] and with static objects [%] for LSTM, RED, Social-LSTM, Env-LSTM, Social-GAN, STGAT, and Trajectron, in the single-model and 20-outputs settings.]
    ● STGAT has the lowest collision rate with dynamic objects
      - It captures the complex interactions among pedestrians spatio-temporally
    ● Env-LSTM has the lowest collision rate with static objects
      - Considering the environment information lets it avoid collisions with obstacles
  61. Example prediction results on SDD (p. 71)
    [Qualitative results for LSTM, RED, Social-LSTM, Social-GAN, STGAT, Trajectron, and Env-LSTM; legend: predicted path, observed path, ground truth.]
    ● Almost every method predicts a path close to the ground truth
  62. Example prediction results on SDD (p. 73)
    [Qualitative results for LSTM, RED, Social-LSTM, Social-GAN, STGAT, Trajectron, and Env-LSTM; legend: predicted path, observed path, ground truth.]
    ● Considering the static environment information yields trajectory predictions that avoid collisions with obstacles
  63. What is next for trajectory prediction? (p. 74)
    Larger-scale datasets driven by the progress of Deep Learning
    Rethinking the evaluation metrics for trajectory prediction
    ● Good prediction accuracy ≠ the best prediction method
    ● The community as a whole needs to reconsider the metrics
    More approaches that predict multiple paths
    ● Will methods that emphasize multiple futures, led by Multiverse, keep increasing?
    [Timeline figure, 2016-2020, grouped into methods with interaction (pooling, attention, other models) and without interaction:]
    Social-LSTM [18] [A. Alahi+, CVPR, 2016], DESIRE [19] [N. Lee+, CVPR, 2017], Conv.Social-Pooling [20] [N. Deo+, CVPRW, 2018], MX-LSTM [21] [I. Hasan+, CVPR, 2018], Group-LSTM [22] [N. Bisagno+, CVPR, 2018], Social-GAN [23] [A. Gupta+, CVPR, 2018], MATF [24] [T. Zhao+, CVPR, 2019], Reciprocal network [25] [S. Hao+, CVPR, 2020], PECNet [26] [K. Mangalam+, ECCV, 2020], Social-Attention [27] [A. Vemula+, ICRA, 2018], CIDNN [28] [Y. Xu+, CVPR, 2018], GD GAN [29] [T. Fernando+, ACCV, 2018], SR-LSTM [30] [P. Zhang+, CVPR, 2019], Next [31] [J. Liang+, CVPR, 2019], SoPhie [32] [A. Sadeghian+, CVPR, 2019], STGAT [33] [Y. Huang+, ICCV, 2019], Social-BiGAT [34] [V. Kosaraju+, NeurIPS, 2019], Trajectron [35] [B. Ivanovic+, ICCV, 2019], Trajectron++ [36] [T. Salzmann+, ECCV, 2020], EvolveGraph [37] [J. Li+, NeurIPS, 2020], RSBG [38] [J. Sun+, CVPR, 2020], STAR [39] [C. Yu+, ECCV, 2020], FVTraj [40] [T. Salzmann+, ECCV, 2020], RGM [41] [C. Choi+, ICCV, 2019], PRECOG [42] [N. Rhinehart+, ICCV, 2019], DSCMP [43] [C. Tao+, ECCV, 2020], Social-STGCNN [44] [A. Mohamed+, CVPR, 2020], Behavior CNN [45] [S. Yi+, ECCV, 2016], Visual Path Prediction [46] [S. Huang+, TIP, 2016], FPL [47] [T. Yagi+, CVPR, 2018], OPPU [48] [A. Bhattacharyya+, CVPR, 2018], Fast and Furious [49] [W. Luo+, CVPR, 2018], OASE [50] [H. Minoura+, VISAPP, 2019], Rule of the Road [51] [J. Hong+, CVPR, 2019], Future Vehicle Localization [52] [Y. Yao+, ICRA, 2019], DTP [53] [O. Styles+, IV, 2019], TPNet [54] [L. Fang+, CVPR, 2020], Multiverse [55] [J. Liang+, CVPR, 2020], SimAug [56] [J. Liang+, ECCV, 2020]
  64. Summary (p. 75)
    Survey of trends in trajectory prediction methods based on Deep Learning
    ● Surveyed the characteristics of the prediction methods in each category
      - With interaction: pooling models, attention models, other models
      - Other than interaction
    ● Introduced the datasets and evaluation metrics for quantitative evaluation
      - Large-scale datasets keep increasing with the progress of Deep Learning
    ● Discussed the accuracy and prediction results of representative models
      - The model with the lowest collision rate ≠ the model with the lowest prediction error
      - Considering the environment information yields predictions that avoid collisions with obstacles
      - Deep Learning-based prediction methods are effective on sensitive datasets