
The 62nd Nagoya CV/PRML Study Group: CVPR 2025 Paper Introduction (MambaOut)

Naoki Okamoto
August 02, 2025


Slides from a CVPR 2025 paper introduction given at the 62nd Nagoya CV/PRML Study Group on August 9, 2025. The talk presents MambaOut: Do We Really Need Mamba for Vision?, which analyzes whether Mamba (an SSM architecture proposed in NLP) is really necessary for computer vision tasks.


Transcript

1. Self-introduction

Naoki Okamoto — Fujiyoshi Lab, Chubu University (doctoral student)
- Research theme: automatic design of training methods via hyperparameter search
- Research areas: knowledge distillation, semi-supervised learning, self-supervised learning
- Selected work: knowledge distillation for ensemble learning (Ensemble KTG [Okamoto+, ECCV'24], https://www.ecva.net/papers/...) and a tutorial on self-supervised pre-training (https://speakerdeck.com/naok/...)
- Personal page / X

(Slide figure: the SSII anniversary technology map of knowledge distillation, https://confit.atlas.jp/guide/event/ssii.../static/special_project_tech_map. It traces the field from Model Compression [Buciluǎ+, SIGKDD'06] and Dark Knowledge / Knowledge Distillation [Hinton+, NIPS WS'14] through offline, online, and self-distillation; improved intermediate- and output-layer transfer (FitNets, Attention Transfer, FSP, RKD, VID, CRD, AFD, Knowledge Review, MGD, DIST, Function Matching); multi-teacher and multi-student ensembles (Multiple Teacher, FEED, DML, ONE, Collaborative Learning); cross-architecture and cross-task transfer (DeiT, OneForAll, CLIP-KD, SEED, Amalgamating Knowledge); and automatic design of distillation (KTG, Ensemble KTG, AutoKD, KD-Zero), plus dataset distillation.)
2. Mamba [Gu and Dao, arXiv'23]

- A new model architecture in the NLP field based on state space models (SSMs)
- Introduces three techniques on top of the structured state space model S4 [Gu+, ICLR'22]:
  1. Selective SSMs: the model adjusts its parameters dynamically according to the input
  2. A hardware-aware algorithm: enables efficient training and inference
  3. A simple block design: H3 × Gated MLP → the Mamba block

(Slide figure: the Mamba block as a simplification of the H3 and Gated MLP blocks — linear projections, convolution, SSM, and nonlinearity (activation or multiplication) — together with the selective state space model with hardware-aware state expansion across GPU SRAM/HBM. Adapted from [Gu and Dao, arXiv'23].)
3. State Space Models (SSMs)

- A mathematical framework for describing the dynamic behavior of a system over time [Kalman, 1960]
  - Models the relation between the input x(t) ∈ ℝ and the output y(t) ∈ ℝ at time t through a state h(t) ∈ ℝᴺ
  - Long used in the field of control engineering

  State equation:  h′(t) = A h(t) + B x(t)   (h′(t) is the time derivative of the state h(t))
  Output equation: y(t) = C h(t) + D x(t)

  - State transition matrix A ∈ ℝᴺˣᴺ: how the state evolves over time
  - Input matrix B ∈ ℝᴺˣ¹: controls how the input affects the state
  - Output matrix C ∈ ℝ¹ˣᴺ: generates the output from the current state
  - Coefficient D ∈ ℝ: the direct effect of the input on the output

- In general, most SSMs omit the second term of the output equation (D x(t) = 0) → it can then be interpreted as a skip connection in a deep model
4. Discretization of SSMs

- Handling sequence data (e.g., word sequences) requires discretizing the continuous formulation
  - Split continuous time into K discrete intervals with equal integration regions (Zero-Order Hold; ZOH)
  - Assume the function value is constant within each interval Δ = [t_{k−1}, t_k]

  SSMs after ZOH discretization (k: discrete time step):
  State equation:  h_k = Ā h_{k−1} + B̄ x_k
  Output equation: y_k = C̄ h_k
  where Ā = exp(ΔA) and B̄ = (ΔA)⁻¹ (exp(ΔA) − I) · ΔB

- The discretized SSM is a recurrence, structurally similar to an RNN
  → more efficient inference than Transformer-based models, which compute attention over all inputs (see the sketch below)
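As a concrete illustration, here is a minimal NumPy/SciPy sketch of the ZOH discretization and the RNN-like scan above. The matrices, step size, and sequence are toy values chosen for the example, not taken from the slides.

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, delta):
    """Zero-Order Hold: A_bar = exp(dA), B_bar = (dA)^-1 (exp(dA) - I) dB."""
    dA = delta * A
    A_bar = expm(dA)
    B_bar = np.linalg.inv(dA) @ (A_bar - np.eye(A.shape[0])) @ (delta * B)
    return A_bar, B_bar

def ssm_scan(A_bar, B_bar, C, xs):
    """Recurrent evaluation: h_k = A_bar h_{k-1} + B_bar x_k, y_k = C h_k."""
    h = np.zeros((A_bar.shape[0], 1))
    ys = []
    for x in xs:                   # one step per token -> O(L) inference, like an RNN
        h = A_bar @ h + B_bar * x  # B_bar is N x 1, x is a scalar input
        ys.append((C @ h).item())  # C is 1 x N
    return np.array(ys)

# Toy example: N = 4 latent dimensions, one scalar input/output channel.
rng = np.random.default_rng(0)
A = -np.eye(4)                     # stable toy dynamics (an assumption for the demo)
B = rng.standard_normal((4, 1))
C = rng.standard_normal((1, 4))
A_bar, B_bar = discretize_zoh(A, B, delta=0.1)
print(ssm_scan(A_bar, B_bar, C, xs=rng.standard_normal(8)))
```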
5. Selective SSMs

- Introduces a selection mechanism that adjusts the SSM parameters dynamically according to the input
  - The outputs of linear layers applied to the input x_t are used as the SSM parameters B_t, C_t, and Δ_t
  → controls which information is retained depending on the input x_t (a sketch follows below)

(Slide figure: the selective state space model with hardware-aware state expansion — input of dimension D, latent state with N = 4, B ∈ ℝᴺˣᴰ, C ∈ ℝᴰˣᴺ, A ∈ ℝᴺˣᴺ, with Δ_t projected and discretized per step and states held in GPU SRAM/HBM. Adapted from [Gu and Dao, arXiv'23].)

- Special case: assuming N = 1, A = −1, B = 1, and g_t = σ(Linear(x_t)), the update reduces to a gate:
  h_t = (1 − g_t) h_{t−1} + g_t x_t
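A minimal sketch of the selection mechanism in the gated special case above (N = 1, A = −1, B = 1). The affine weights and the input sequence are toy values, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def selective_scan_1d(xs, w, b):
    """Special case of a selective SSM (N = 1, A = -1, B = 1):
    g_t = sigmoid(w * x_t + b) is the input-dependent selection signal,
    giving the gated update h_t = (1 - g_t) h_{t-1} + g_t x_t."""
    h, hs = 0.0, []
    for x in xs:
        g = sigmoid(w * x + b)     # the slide's "Linear(x_t)", here a 1-D affine map
        h = (1.0 - g) * h + g * x  # large g: overwrite the memory; small g: retain it
        hs.append(h)
    return np.array(hs)

xs = np.array([1.0, 0.0, 0.0, 5.0, 0.0])
print(selective_scan_1d(xs, w=2.0, b=-1.0))
```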
6. What Mamba achieves

Comparison of computational cost and speed (quoted from [Qu+, arXiv'24]):

"...capability allows SSMs to achieve not only efficient inference but also parallel training. However, it should be noted that most conventional SSMs are time-invariant, meaning that their A, B, C, and Δ are unrelated to the model input x. This limits context-aware modeling, which leads to inferior performance of SSMs in certain tasks such as selective copying."

Pros and cons of three primary architectures — RNNs, Transformers, and SSMs — in auto-regressive sequential modeling tasks (L: sequence length, D: model dimension):

| Comparison | RNNs | Transformers | SSMs |
| --- | --- | --- | --- |
| Training speed | Slow (recurrent) | Fast (parallel) | Fast (convolutional) |
| Inference speed | Fast (recurrent) | Slow (quadratic-time) | Fast (recurrent) |
| Complexity | O(LD²) | O(L²D) | O(LD²) |
| Modeling capability | Limited (hidden state) | Strong (attention) | Limited (time-invariance) |

→ SSMs (the basis of Mamba) process tokens sequentially → inference speed and compute comparable to RNNs

Comparison of zero-shot performance (adapted from Table 3 of [Gu and Dao, arXiv'23]). Mamba is compared against well-known open-source LMs, most importantly Pythia and RWKV, which were trained with the same tokenizer, dataset, and training length (300B tokens); "Pile" is the validation split, compared only against models trained on the same dataset and tokenizer (GPT-NeoX-20B). Mamba and Pythia were trained with context length 2048, RWKV with 1024.

| Model | Token. | Pile ppl ↓ | LAMBADA ppl ↓ | LAMBADA acc ↑ | HellaSwag acc ↑ | PIQA acc ↑ | Arc-E acc ↑ | Arc-C acc ↑ | WinoGrande acc ↑ | Average acc ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hybrid H3-130M | GPT2 | — | 89.48 | 25.77 | 31.7 | 64.2 | 44.4 | 24.2 | 50.6 | 40.1 |
| Pythia-160M | NeoX | 29.64 | 38.10 | 33.0 | 30.2 | 61.4 | 43.2 | 24.1 | 51.9 | 40.6 |
| Mamba-130M | NeoX | 10.56 | 16.07 | 44.3 | 35.3 | 64.5 | 48.0 | 24.3 | 51.9 | 44.7 |
| Hybrid H3-360M | GPT2 | — | 12.58 | 48.0 | 41.5 | 68.1 | 51.4 | 24.7 | 54.1 | 48.0 |
| Pythia-410M | NeoX | 9.95 | 10.84 | 51.4 | 40.6 | 66.9 | 52.1 | 24.6 | 53.8 | 48.2 |
| Mamba-370M | NeoX | 8.28 | 8.14 | 55.6 | 46.5 | 69.5 | 55.1 | 28.0 | 55.3 | 50.0 |
| Pythia-1B | NeoX | 7.82 | 7.92 | 56.1 | 47.2 | 70.7 | 57.0 | 27.1 | 53.5 | 51.9 |
| Mamba-790M | NeoX | 7.33 | 6.02 | 62.7 | 55.1 | 72.1 | 61.2 | 29.5 | 56.1 | 57.1 |
| GPT-Neo 1.3B | GPT2 | — | 7.50 | 57.2 | 48.9 | 71.1 | 56.2 | 25.9 | 54.9 | 52.4 |
| Hybrid H3-1.3B | GPT2 | — | 11.25 | 49.6 | 52.6 | 71.3 | 59.2 | 28.1 | 56.9 | 53.0 |
| OPT-1.3B | OPT | — | 6.64 | 58.0 | 53.7 | 72.4 | 56.7 | 29.6 | 59.5 | 55.0 |
| Pythia-1.4B | NeoX | 7.51 | 6.08 | 61.7 | 52.1 | 71.0 | 60.5 | 28.5 | 57.2 | 55.2 |
| RWKV-1.5B | NeoX | 7.70 | 7.04 | 56.4 | 52.5 | 72.4 | 60.5 | 29.4 | 54.6 | 54.3 |
| Mamba-1.4B | NeoX | 6.80 | 5.04 | 64.9 | 59.1 | 74.2 | 65.5 | 32.8 | 61.5 | 59.7 |
| GPT-Neo 2.7B | GPT2 | — | 5.63 | 62.2 | 55.8 | 72.1 | 61.1 | 30.2 | 57.6 | 56.5 |
| Hybrid H3-2.7B | GPT2 | — | 7.92 | 55.7 | 59.7 | 73.3 | 65.6 | 32.3 | 61.4 | 58.0 |
| OPT-2.7B | OPT | — | 5.12 | 63.6 | 60.6 | 74.8 | 60.8 | 31.3 | 61.0 | 58.7 |
| Pythia-2.8B | NeoX | 6.73 | 5.04 | 64.7 | 59.3 | 74.0 | 64.1 | 32.9 | 59.7 | 59.1 |
| RWKV-3B | NeoX | 7.00 | 5.24 | 63.9 | 59.6 | 73.7 | 67.8 | 33.1 | 59.6 | 59.6 |
| Mamba-2.8B | NeoX | 6.22 | 4.23 | 69.2 | 66.1 | 75.2 | 69.7 | 36.3 | 63.5 | 63.3 |
| GPT-J-6B | GPT2 | — | 4.10 | 68.3 | 66.3 | 75.4 | 67.0 | 36.6 | 64.1 | 63.0 |
| OPT-6.7B | OPT | — | 4.25 | 67.7 | 67.2 | 76.3 | 65.6 | 34.9 | 65.5 | 62.9 |
| Pythia-6.9B | NeoX | 6.51 | 4.45 | 67.1 | 64.0 | 75.2 | 67.3 | 35.5 | 61.3 | 61.7 |
| RWKV-7.4B | NeoX | 6.31 | 4.38 | 67.2 | 65.5 | 76.1 | 67.8 | 37.5 | 61.0 | 62.5 |

→ Mamba is best-in-class at every model size and generally matches Transformer baselines at twice its size, i.e., higher performance at roughly half the model size
7. Where Mamba papers stand at CVPR 2025

- Mamba's arXiv release: December 1, 2023
- ECCV 2024 paper deadline: March 2024
  - Research within roughly three months of Mamba's debut
  - Number of Mamba papers: —
  → Okamoto's impression: the focus was on how to apply Mamba to image recognition tasks (e.g., MambaND)
- CVPR 2025 paper deadline: November 2024
  - Mamba papers now appear in earnest at top CV conferences
  - Number of Mamba papers: —
→ This talk introduces MambaOut, a paper analyzing Mamba × image recognition tasks!

- MambaOut: Do We Really Need Mamba for Vision? [Yu and Wang, CVPR'25]
- Slides explaining MambaND: https://speakerdeck.com/naok/...
8. MambaOut [Yu and Wang, CVPR'25]

- Many models applying Mamba to image recognition tasks have appeared
  - However, they often fall short of SOTA convolution- and attention-based models in accuracy
- The authors' question: is Mamba (SSM) really necessary for image recognition tasks?
  - Hypotheses are derived from the characteristics of Mamba and of image recognition tasks
  - MambaOut models, built by removing the SSM from Mamba, are used to test the hypotheses through their performance

(Slide figure: the title block of "MambaOut: Do We Really Need Mamba for Vision?" — Weihao Yu and Xinchao Wang, National University of Singapore; code: https://github.com/yuweihao/MambaOut — with Figure 1 contrasting the Gated CNN block (MambaOut) and the Mamba block (e.g., Vision Mamba).)
9. Characteristics of Mamba

- Attention: each token aggregates information by referencing all tokens
  - ⭕: since every token is referenced, no information is lost
  - ✖: high computational cost, quadratic in the sequence length (number of tokens)
- SSM: each token aggregates information from the previous tokens (via the state) and the current token
  - ⭕: low computational cost thanks to the recurrent formulation
  - ✖: the hidden state acts as memory, but being fixed-size, part of the information is lost

→ Mamba's characteristic: suited to processing long sequences
  - Short sequences: Attention > SSM in accuracy
  - Long sequences: SSM > Attention in computational cost
10. Characteristics of image recognition tasks: do they require long-sequence processing?

- Examine whether image recognition tasks require long-sequence modeling
  - Mamba aims to improve the computational cost of Transformers → take the Transformer's cost as the reference
- FLOPs of a Transformer block for an input with token length L and dimension D:

  FLOPs = 24D²L + 4DL²

  (24D²L from the linear layers, 4DL² from attention)

  r_L = 4DL² / 24D²L = L / 6D

  → when L > 6D, the attention cost exceeds the linear-layer cost
  → used as an indicator of whether a task involves long sequences (evaluate whether the token length L exceeds 6D); a small computation follows below
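A minimal sketch of this threshold computation. The helper names are ours; the FLOPs formula is the one from the slide.

```python
def transformer_block_flops(L, D):
    """FLOPs of one Transformer block per the slide: linear layers + attention."""
    linear = 24 * D**2 * L
    attention = 4 * D * L**2
    return linear, attention

def is_long_sequence(L, D):
    """MambaOut's indicator: attention dominates the linear layers when L > 6D."""
    return L > 6 * D

linear, attn = transformer_block_flops(L=196, D=384)
print(attn / linear)               # = L / (6D) ~= 0.085 for ImageNet-style inputs
print(is_long_sequence(196, 384))  # False
```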
11. Characteristics of image recognition tasks: do they require long-sequence processing?

- Evaluate whether the token length L exceeds 6D
  - ViT-S: τ_small = 6 × 384 = 2304; ViT-B: τ_base = 6 × 768 = 4608
- Image classification
  - ImageNet: with input size 224² and patch size 16², the token length is 14² = 196
  → 196 < τ_small and 196 < τ_base, so this is not a long-sequence task
- Object detection, instance segmentation, semantic segmentation
  - COCO: with input size 800 × 1280 and patch size 16², the token length is 4000
  - ADE20K: with input size 512 × 2048 and patch size 16², the token length is 4000
  → 4000 > τ_small and 4000 ≈ τ_base, so these are long-sequence tasks
  (the sketch below reproduces these numbers)
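A small sketch reproducing the token counts above (the slide rounds the ADE20K count of 4096 to 4000).

```python
def num_tokens(height, width, patch=16):
    """Token length of a ViT-style model: one token per patch."""
    return (height // patch) * (width // patch)

thresholds = {"ViT-S": 6 * 384, "ViT-B": 6 * 768}  # tau = 6D
inputs = {"ImageNet": (224, 224), "COCO": (800, 1280), "ADE20K": (512, 2048)}

for task, (h, w) in inputs.items():
    L = num_tokens(h, w)
    flags = {name: L > tau for name, tau in thresholds.items()}
    print(task, L, flags)
# ImageNet 196  {'ViT-S': False, 'ViT-B': False}
# COCO     4000 {'ViT-S': True,  'ViT-B': False}  # 4000 is close to tau_base = 4608
# ADE20K   4096 {'ViT-S': True,  'ViT-B': False}
```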
12. Hypotheses

The slide considers two characteristics of state space models (SSMs) — long-sequence processing and autoregressive modeling — and notes that none of these vision tasks is autoregressive; only the sequence length differs.

| Dataset | Task | Long sequence | Hypothesis |
| --- | --- | --- | --- |
| ImageNet | Image classification | | SSM unnecessary |
| COCO | Object detection | ✔︎ | SSM useful (potentially) |
| COCO | Instance segmentation | ✔︎ | SSM useful (potentially) |
| ADE20K | Semantic segmentation | ✔︎ | SSM useful (potentially) |
13. MambaOut

- To verify the hypotheses, MambaOut models are built by removing the SSM from Mamba
  → the Mamba block reduces to the Gated CNN block [20]
- Verdict rule: if a Mamba model outperforms MambaOut, the SSM is judged useful

(Slide figure, from Figure 1 of the paper: the Gated CNN block (MambaOut) and the Mamba block (e.g., Vision Mamba), omitting normalization and shortcut. The Mamba block extends the Gated CNN block with an additional state space model (SSM); MambaOut stacks Gated CNN blocks and outperforms visual Mamba models such as Vision Mamba, VMamba, and PlainMamba on ImageNet classification. A block sketch follows below.)
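A minimal PyTorch sketch of a Gated CNN (MambaOut-style) block as described above: token mixing by depthwise convolution inside a gated MLP, with the SSM branch simply absent. The layer sizes, 1-D convolution, and residual/normalization placement are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GatedCNNBlock(nn.Module):
    """Gated CNN block: the Mamba block without the SSM (cf. MambaOut, Fig. 1a).
    Input/output shape: (batch, tokens, dim). The slide omits normalization and
    shortcut; a residual connection is included here for completeness."""
    def __init__(self, dim, expansion=2, kernel_size=7):
        super().__init__()
        hidden = dim * expansion
        self.norm = nn.LayerNorm(dim)
        self.fc_in = nn.Linear(dim, hidden * 2)  # value branch + gate branch
        self.conv = nn.Conv1d(hidden, hidden, kernel_size,
                              padding=kernel_size // 2, groups=hidden)  # depthwise token mixing
        self.act = nn.SiLU()
        self.fc_out = nn.Linear(hidden, dim)

    def forward(self, x):
        shortcut = x
        x = self.norm(x)
        v, g = self.fc_in(x).chunk(2, dim=-1)
        v = self.conv(v.transpose(1, 2)).transpose(1, 2)  # mix along the token axis
        x = self.fc_out(v * self.act(g))                  # gating in place of an SSM
        return x + shortcut

x = torch.randn(2, 196, 64)        # e.g., ImageNet: 14 x 14 = 196 tokens
print(GatedCNNBlock(64)(x).shape)  # torch.Size([2, 196, 64])
```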
14. Experiments: image classification

- Dataset: ImageNet; evaluation: classification accuracy at resolution 224²
- Table 1 of the paper. MambaOut employs the Gated CNN block [20]; the Mamba block [28], derived from the Gated CNN block, incorporates an additional SSM

| Model | Token mixing type | Param (M) | MAC (G) | Acc (%) |
| --- | --- | --- | --- | --- |
| VAN-B0 [31] | Conv | 4 | 0.9 | 75.4 |
| MogaNet-T [50] | Conv | 5 | 1.1 | 79.0 |
| FasterNet-T1 [9] | Conv | 8 | 0.9 | 76.2 |
| InceptionNeXt-A [101] | Conv | 4 | 0.5 | 75.3 |
| DeiT-Ti [79] | Attn | 6 | 1.3 | 72.2 |
| T2T-ViT-7 [102] | Attn | 4 | 1.1 | 71.7 |
| PVTv2-B0 [85] | Conv + Attn | 3 | 0.6 | 70.5 |
| MobileViTv3-XS [83] | Conv + Attn | 3 | 0.9 | 76.7 |
| EMO-6M [109] | Conv + Attn | 6 | 1.0 | 79.0 |
| Vim-Ti [112] | Conv + SSM | 7 | 1.5 | 76.1 |
| LocalVim-T [41] | Conv + SSM | 8 | 1.5 | 76.2 |
| EfficientVMamba-T [64] | Conv + SSM | 6 | 0.8 | 76.5 |
| EfficientVMamba-S [64] | Conv + SSM | 11 | 1.3 | 78.7 |
| MambaOut-Femto | Conv | 7 | 1.2 | 78.9 |
| PoolFormer-S24 [99] | Pool | 21 | 3.4 | 80.3 |
| ConvNeXt-T [58] | Conv | 29 | 4.5 | 82.1 |
| VAN-B2 [31] | Conv | 27 | 5.0 | 82.8 |
| ConvFormer-S18 [100] | Conv | 27 | 3.9 | 83.0 |
| MogaNet-S [50] | Conv | 25 | 5.0 | 83.4 |
| InternImage-T [86] | Conv | 30 | 5 | 83.5 |
| InceptionNeXt-T [101] | Conv | 28 | 4.2 | 82.3 |
| DeiT-S [79] | Attn | 22 | 4.6 | 79.8 |
| T2T-ViT-14 [102] | Attn | 22 | 4.8 | 81.5 |
| Swin-T [57] | Attn | 29 | 4.5 | 81.3 |
| Focal-Tiny [97] | Attn | 29 | 4.9 | 82.2 |
| CSWin-T [24] | Attn | 23 | 4.3 | 82.7 |
| CoAtNet-0 [19] | Conv + Attn | 25 | 4.2 | 81.6 |
| iFormer-S [76] | Conv + Attn | 20 | 4.8 | 83.4 |
| MOAT-0 [95] | Conv + Attn | 28 | 5.7 | 83.3 |
| CAFormer-S18 [100] | Conv + Attn | 26 | 4.1 | 83.6 |
| SG-Former-S [71] | Conv + Attn | 23 | 4.8 | 83.2 |
| TransNeXt-Tiny [75] | Conv + Attn | 28 | 5.7 | 84.0 |
| Vim-S [112] | Conv + SSM | 26 | 5.1 | 80.5 |
| VMamba-T [56] | Conv + SSM | 22 | 5.6 | 82.2 |
| Mamba-2D-S [49] | Conv + SSM | 24 | – | 81.7 |
| LocalVim-S [41] | Conv + SSM | 28 | 4.8 | 81.2 |
| LocalVMamba-T [41] | Conv + SSM | 26 | 5.7 | 82.7 |
| EfficientVMamba-B [64] | Conv + SSM | 33 | 4.0 | 81.8 |
| PlainMamba-L1 [96] | Conv + SSM | 7 | 3.0 | 77.9 |
| VMambaV3-T* [55] | Conv + SSM | 31 | 4.9 | 82.6 |
| MambaOut-Tiny | Conv | 27 | 4.5 | 82.7 |
| ConvNeXt-S [58] | Conv | 50 | 8.7 | 83.1 |
| VAN-B3 [31] | Conv | 45 | 9.0 | 83.9 |
| ConvFormer-S36 [100] | Conv | 40 | 7.6 | 84.1 |
| InternImage-S [86] | Conv | 50 | 8 | 84.2 |
| MogaNet-B [50] | Conv | 44 | 9.9 | 84.3 |
| T2T-ViT-19 [102] | Attn | 39 | 8.5 | 81.9 |
| Swin-S [57] | Attn | 50 | 8.7 | 83.0 |
| Focal-Small [97] | Attn | 51 | 9.1 | 83.5 |
| CSWin-S [24] | Attn | 35 | 6.9 | 83.6 |
| MViTv2-S [51] | Attn | 35 | 7.0 | 83.6 |
| CoAtNet-1 [19] | Conv + Attn | 42 | 8.4 | 83.3 |
| UniFormer-B [48] | Conv + Attn | 50 | 8.3 | 83.9 |
| CAFormer-S36 [100] | Conv + Attn | 39 | 8.0 | 84.5 |
| SG-Former-M [71] | Conv + Attn | 39 | 7.5 | 84.1 |
| TransNeXt-Small [75] | Conv + Attn | 50 | 10.3 | 84.7 |
| VMamba-S [56] | Conv + SSM | 44 | 11.2 | 83.5 |
| LocalVMamba-S [41] | Conv + SSM | 50 | 11.4 | 83.7 |
| PlainMamba-L2 [96] | Conv + SSM | 25 | 8.1 | 81.6 |
| VMambaV3-S* [55] | Conv + SSM | 50 | 8.7 | 83.6 |
| MambaOut-Small | Conv | 48 | 9.0 | 84.1 |
| ConvNeXt-B [58] | Conv | 89 | 15.4 | 83.8 |
| RepLKNet-31B [23] | Conv | 79 | 15.3 | 83.5 |
| ConvFormer-M36 [100] | Conv | 57 | 12.8 | 84.5 |
| HorNet-B [70] | Conv | 88 | 15.5 | 84.3 |
| MogaNet-L [50] | Conv | 83 | 15.9 | 84.7 |
| InternImage-B [86] | Conv | 97 | 16 | 84.9 |
| DeiT-B [79] | Attn | 86 | 17.5 | 81.8 |
| T2T-ViT-24 [102] | Attn | 64 | 13.8 | 82.3 |
| Swin-B [57] | Attn | 88 | 15.4 | 83.5 |
| CSwin-B [24] | Attn | 78 | 15.0 | 84.2 |
| MViTv2-B [51] | Attn | 52 | 10.2 | 84.4 |
| CoAtNet-2 [19] | Conv + Attn | 75 | 15.7 | 84.1 |
| iFormer-L [76] | Conv + Attn | 87 | 14.0 | 84.8 |
| MOAT-2 [95] | Conv + Attn | 73 | 17.2 | 84.7 |
| CAFormer-M36 [100] | Conv + Attn | 56 | 13.2 | 85.2 |
| TransNeXt-Base [75] | Conv + Attn | 90 | 18.4 | 84.8 |
| VMamba-B [56] | Conv + SSM | 75 | 18.0 | 83.7 |
| Mamba-2D-B [49] | Conv + SSM | 92 | – | 83.0 |
| PlainMamba-L3 [96] | Conv + SSM | 50 | 14.4 | 82.3 |
| VMambaV3-B* [55] | Conv + SSM | 89 | 15.4 | 83.9 |
| MambaOut-Base | Conv | 85 | 15.8 | 84.2 |

- MambaOut models consistently outperform the visual Mamba models
  → introducing an SSM is unnecessary for image classification
- Conv + Attn models outperform Mamba models of the same size
  → to push the accuracy of Mamba models further on classification, a combination of convolution and SSM is needed
15. Experiments: object detection and instance segmentation

- Dataset: COCO; evaluation: object detection and instance segmentation with Mask R-CNN (1× schedule)
- Table 2 of the paper; MACs measured at input size 800 × 1280

| Backbone | Token mixing type | Param (M) | MAC (G) | AP^b | AP^b_50 | AP^b_75 | AP^m | AP^m_50 | AP^m_75 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ConvNeXt-T [54] | Conv | 48 | 262 | 44.2 | 66.6 | 48.3 | 40.1 | 63.3 | 42.8 |
| FocalNet-T [98] | Conv | 49 | 268 | 46.1 | 68.2 | 50.6 | 41.5 | 65.1 | 44.5 |
| Swin-T [57] | Attn | 48 | 267 | 42.7 | 65.2 | 46.8 | 39.3 | 62.2 | 42.2 |
| ViT-Adapter-S [12] | Attn | 48 | 403 | 44.7 | 65.8 | 48.3 | 39.9 | 62.5 | 42.8 |
| CSWin-T [24] | Attn | 42 | 279 | 46.7 | 68.6 | 51.3 | 42.2 | 65.6 | 45.4 |
| PVTv2-B2 [85] | Conv + Attn | 45 | 309 | 45.3 | 67.1 | 49.6 | 41.2 | 64.2 | 44.4 |
| SG-Former-S [71] | Conv + Attn | 41 | – | 47.4 | 69.0 | 52.0 | 42.6 | 65.9 | 46.0 |
| TransNeXt-Tiny [75] | Conv + Attn | 48 | 356 | 49.9 | 71.5 | 54.9 | 44.6 | 68.6 | 48.1 |
| VMamba-T [56] | Conv + SSM | 42 | 286 | 46.5 | 68.5 | 50.7 | 42.1 | 65.5 | 45.3 |
| LocalVMamba-T [41] | Conv + SSM | 45 | 291 | 46.7 | 68.7 | 50.8 | 42.2 | 65.7 | 45.5 |
| EfficientVMamba-B [64] | Conv + SSM | 53 | 252 | 43.7 | 66.2 | 47.9 | 40.2 | 63.3 | 42.9 |
| VMambaV3-T [55] | Conv + SSM | 50 | 270 | 47.4 | 69.5 | 52.0 | 42.7 | 66.3 | 46.0 |
| PlainMamba-L1 [96] | Conv + SSM | 31 | 388 | 44.1 | 64.8 | 47.9 | 39.1 | 61.6 | 41.9 |
| MambaOut-Tiny | Conv | 43 | 262 | 45.1 | 67.3 | 49.6 | 41.0 | 64.1 | 44.1 |
| ConvNeXt-S [54] | Conv | 70 | 348 | 45.4 | 67.9 | 50.0 | 41.8 | 65.2 | 45.1 |
| FocalNet-S [98] | Conv | 72 | 365 | 48.3 | 70.5 | 53.1 | 43.1 | 67.4 | 46.2 |
| Swin-S [57] | Attn | 69 | 354 | 44.8 | 66.6 | 48.9 | 40.9 | 63.2 | 44.2 |
| CSWin-S [24] | Attn | 54 | 342 | 47.9 | 70.1 | 52.6 | 43.2 | 67.1 | 46.2 |
| PVTv2-B3 [85] | Conv + Attn | 65 | 397 | 47.0 | 68.1 | 51.7 | 42.5 | 65.7 | 45.7 |
| SG-Former-M [71] | Conv + Attn | 51 | – | 48.2 | 70.3 | 53.1 | 43.6 | 66.9 | 47.0 |
| TransNeXt-Small [75] | Conv + Attn | 69 | 516 | 51.1 | 72.6 | 56.2 | 45.5 | 69.8 | 49.1 |
| VMamba-S [56] | Conv + SSM | 64 | 400 | 48.2 | 69.7 | 52.5 | 43.0 | 66.6 | 46.4 |
| LocalVMamba-S [41] | Conv + SSM | 69 | 414 | 48.4 | 69.9 | 52.7 | 43.2 | 66.7 | 46.5 |
| VMambaV3-S [55] | Conv + SSM | 64 | 357 | 48.7 | 70.0 | 53.4 | 43.7 | 67.3 | 47.0 |
| MambaOut-Small | Conv | 65 | 354 | 47.4 | 69.1 | 52.4 | 42.7 | 66.1 | 46.2 |
| ConvNeXt-B [54] | Conv | 108 | 486 | 47.0 | 69.4 | 51.7 | 42.7 | 66.3 | 46.0 |
| FocalNet-B [98] | Conv | 111 | 507 | 49.0 | 70.9 | 53.9 | 43.5 | 67.9 | 46.7 |
| Swin-B [57] | Attn | 107 | 496 | 46.9 | – | – | 42.3 | – | – |
| ViT-Adapter-B [12] | Attn | 102 | 557 | 47.0 | 68.2 | 51.4 | 41.8 | 65.1 | 44.9 |
| CSWin-B [24] | Attn | 97 | 526 | 48.7 | 70.4 | 53.9 | 43.9 | 67.8 | 47.3 |
| PVTv2-B5 [85] | Conv + Attn | 102 | 557 | 47.4 | 68.6 | 51.9 | 42.5 | 65.7 | 46.0 |
| TransNeXt-Base [75] | Conv + Attn | 109 | 728 | 51.7 | 73.2 | 56.9 | 45.9 | 70.5 | 49.7 |
| VMamba-B [56] | Conv + SSM | 96 | 540 | 48.5 | 69.6 | 53.0 | 43.1 | 67.0 | 46.4 |
| PlainMamba-L2 [96] | Conv + SSM | 53 | 542 | 46.0 | 66.9 | 50.1 | 40.6 | 63.8 | 43.6 |
| VMambaV3-B [55] | Conv + SSM | 108 | 485 | 49.2 | 70.9 | 53.9 | 43.9 | 67.7 | 47.6 |
| MambaOut-Base | Conv | 100 | 495 | 47.4 | 69.3 | 52.2 | 43.0 | 66.4 | 46.3 |

- The strongest Mamba models outperform the MambaOut models
  → SSMs are useful for long-sequence tasks
- A large gap remains between the Mamba models and the SOTA Conv + Attn model (TransNeXt)
  → to demonstrate Mamba's effectiveness for vision, Mamba models need to surpass these SOTA models
16. Experiments: semantic segmentation

- Dataset: ADE20K (150 semantic categories; 20,000 training and 2,000 validation images); evaluation: semantic segmentation with UperNet [93], initialized from ImageNet pre-trained weights
- Table 3 of the paper; MACs measured at input size 512 × 2048; SS/MS denote single-/multi-scale testing

| Backbone | Token mixing type | Param (M) | MAC (G) | mIoU (SS) | mIoU (MS) |
| --- | --- | --- | --- | --- | --- |
| ConvNeXt-T [54] | Conv | 60 | 939 | 46.0 | 46.7 |
| HorNet-T [70] | Conv | 55 | 924 | 49.2 | 49.3 |
| ConvFormer-S18 [100] | Conv | 54 | 925 | 47.5 | 48.6 |
| InternImage-T [86] | Conv | 59 | 944 | 47.9 | 48.1 |
| Swin-T [57] | Attn | 60 | 945 | 44.4 | 45.8 |
| Twins-S [15] | Attn | 54 | 901 | 46.2 | 47.1 |
| Focal-T [97] | Attn | 62 | 998 | 45.8 | 47.0 |
| CSWin-T [24] | Attn | 60 | 959 | 49.3 | 50.7 |
| UniFormer-S [48] | Conv + Attn | 52 | 955 | 47.0 | 48.5 |
| CAFormer-S18 [100] | Conv + Attn | 54 | 1024 | 48.1 | 48.9 |
| SG-Former-S [71] | Conv + Attn | 53 | 989 | 49.9 | 51.5 |
| TransNeXt-Tiny [75] | Conv + Attn | 59 | 978 | 51.1 | 51.2 |
| VMamba-T [56] | Conv + SSM | 55 | 964 | 47.3 | 48.3 |
| LocalVMamba-T [41] | Conv + SSM | 57 | 970 | 47.9 | 49.1 |
| EfficientVMamba-B [64] | Conv + SSM | 65 | 930 | 46.5 | 47.3 |
| PlainMamba-L2 [96] | Conv + SSM | 55 | 285 | – | 46.8 |
| PlainMamba-L3 [96] | Conv + SSM | 81 | 419 | – | 49.1 |
| VMambaV3-T [56] | Conv + SSM | 62 | 949 | 47.9 | 48.8 |
| MambaOut-Tiny | Conv | 54 | 938 | 47.4 | 48.6 |
| ConvNeXt-S [54] | Conv | 82 | 1027 | 48.7 | 49.6 |
| HorNet-S [70] | Conv | 85 | 1027 | 50.0 | 50.5 |
| ConvFormer-S36 [100] | Conv | 67 | 1003 | 49.6 | 50.7 |
| InternImage-S [86] | Conv | 80 | 1017 | 50.1 | 50.9 |
| Swin-S [57] | Attn | 81 | 1038 | 47.6 | 49.5 |
| Twins-B [15] | Attn | 89 | 1020 | 47.7 | 48.9 |
| Focal-S [97] | Attn | 85 | 1130 | 48.0 | 50.0 |
| CSWin-S [24] | Attn | 65 | 1027 | 50.4 | 51.5 |
| CAFormer-S36 [100] | Conv + Attn | 67 | 1197 | 50.6 | 50.8 |
| SG-Former-M [71] | Conv + Attn | 68 | 1114 | 51.2 | 52.1 |
| TransNeXt-Small [75] | Conv + Attn | 80 | 1089 | 52.2 | 52.3 |
| VMamba-S [56] | Conv + SSM | 76 | 1081 | 49.5 | 50.5 |
| LocalVMamba-S [41] | Conv + SSM | 81 | 1095 | 50.0 | 51.0 |
| VMambaV3-S [55] | Conv + SSM | 82 | 1028 | 50.6 | 51.2 |
| MambaOut-Small | Conv | 76 | 1032 | 49.5 | 50.6 |
| ConvNeXt-B [54] | Conv | 122 | 1170 | 49.1 | 49.9 |
| FocalNet-B [98] | Conv | 126 | 1192 | 50.5 | 51.4 |
| HorNet-B [70] | Conv | 126 | 1171 | 50.5 | 50.9 |
| ConvFormer-M36 [100] | Conv | 85 | 1113 | 50.4 | 51.3 |
| InternImage-B [86] | Conv | 128 | 1185 | 50.8 | 51.3 |
| Swin-B [57] | Attn | 121 | 1188 | 48.1 | 49.7 |
| Twins-L [15] | Attn | 133 | 1164 | 48.8 | 50.2 |
| Focal-B [97] | Attn | 126 | 1354 | 49.0 | 50.5 |
| CSWin-B [24] | Attn | 110 | 1222 | 51.1 | 52.2 |
| UniFormer-B [48] | Conv + Attn | 80 | 1106 | 49.5 | 50.7 |
| CAFormer-M36 [100] | Conv + Attn | 84 | 1346 | 51.7 | 51.7 |
| SG-Former-B [71] | Conv + Attn | 109 | 1304 | 52.0 | 52.7 |
| TransNeXt-Base [75] | Conv + Attn | 121 | 1268 | 53.0 | 53.4 |
| VMamba-B [56] | Conv + SSM | 110 | 1226 | 50.0 | 51.3 |
| VMambaV3-B [55] | Conv + SSM | 122 | 1170 | 51.0 | 51.6 |
| MambaOut-Base | Conv | 112 | 1178 | 49.6 | 51.0 |

- Same trend as on the COCO dataset
17. Summary

- Mamba
  - A new model architecture from the NLP field based on state space models
  - Lower compute than Transformers; comparable or better performance at roughly half the parameter count
- MambaOut
  - Empirically analyzes whether Mamba (SSM) is really necessary for image recognition tasks
  - On short-sequence tasks such as image classification, models without the SSM perform better
  - On long-sequence tasks such as object detection and segmentation, introducing the SSM yields higher performance
  - However, compared with the SOTA Conv + Attn models, the performance of Mamba models remains lower