rights reserved. Amazon Confidential and Trademark.

Fine-tuning a VLA with π0
https://github.com/Physical-Intelligence/openpi/blob/main/scripts/train_pytorch.py

model = Pi0Config(
    paligemma_variant="gemma_2b_lora",        # VLM backbone side
    action_expert_variant="gemma_300m_lora",  # Action Expert side
)
freeze_filter = Pi0Config(...).get_freeze_filter()  # controls which weights are frozen vs. updated

Mapped onto the π0 architecture:

+----------------------------------------------+
| SigLIP (Vision Encoder)                      |  frozen
+----------------------------------------------+
| PaliGemma (VLM backbone, 2B params)          |
|   "gemma_2b_lora":                          |  LoRA adapters only
|   low-rank matrices injected into           |  (Attention + MLP)
|   Attention + MLP layers                    |
+----------------------------------------------+
| Action Expert (300M params)                 |
|   "gemma_300m_lora":                        |  LoRA adapters only
|   same approach as above                    |  (Attention + MLP)
+----------------------------------------------+
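The "gemma_2b_lora" / "gemma_300m_lora" variants above keep the base weights frozen and train only low-rank adapters. A minimal, library-independent sketch of the LoRA idea (toy pure-Python code under the standard formulation y = (W + α·B·A)·x; this is not openpi's implementation, which injects the adapters into Gemma's Attention/MLP layers via its config machinery):

```python
def matmul(a, b):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_forward(W, A, B, x, alpha=1.0):
    """y = (W + alpha * B @ A) @ x  -- W is frozen; only A and B train."""
    delta = matmul(B, A)  # low-rank update; rank = number of rows in A
    W_eff = [[W[i][j] + alpha * delta[i][j] for j in range(len(W[0]))]
             for i in range(len(W))]
    return matmul(W_eff, x)

# 2x2 frozen weight, rank-1 adapter (A: 1x2, B: 2x1) -- hypothetical toy values
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]    # trainable
B = [[0.5], [0.5]]  # trainable
x = [[2.0], [4.0]]
print(lora_forward(W, A, B, x))  # → [[5.0], [7.0]]
```

Because only A and B receive gradients, the trainable parameter count drops from O(d²) to O(d·r), which is why LoRA variants of the 2B backbone and the 300M Action Expert can be fine-tuned cheaply.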