

Introducing Research Units of Matsuo-Iwasawa Laboratory

Matsuo Lab

September 06, 2024


Transcript

  1. Message from Yusuke Iwasawa, Associate Professor & Fundamental Research Team Leader

    The Matsuo-Iwasawa Lab conducts research under the mission of "Creating Intelligence". Deep learning has made breakthroughs in many areas in just over a decade. Technological progress has revealed a great deal, and interdisciplinary research centered on deep learning is advancing. Although many fundamental problems remain unsolved, I believe we are entering an interesting era, with more tools than ever for creating intelligence. It goes without saying that solving the great mystery of intelligence also has great industrial significance.

    Creating intelligence requires a variety of approaches that keep pace with technological progress. The Matsuo-Iwasawa Lab conducts a wide range of research, from basic research ([1] World Models, [2] Next Generation Neural Networks, [3] Brain-Inspired Intelligence) to applications ([4] Robotics, [5] Large Language Models, [6] Empirical research dealing with real data and real applications). Combining these broad research areas enables an integrated approach that is not limited to individual technologies. By exploring the frontier areas of application, we aim to establish a cycle in which missing technologies drive basic research forward, which in turn expands the application areas again. As technology advances, further approaches will become necessary to create intelligence, and we plan to keep expanding our domain to achieve our goals. The current research themes are listed below for reference, and we sincerely welcome the participation of those who share our commitment to these efforts.

    Reference: Interview with Prof. Iwasawa — "It broadens my view and allows me to devote myself to long-term research." Nine years at the Matsuo Laboratory, where change constantly happens
  2. [1] World Models

    Research Mission: Acquire representations that capture various aspects of the environment from diverse information sources, and construct world models that can adapt to unseen environments.

    Theme 1 — Multimodality: Integrate different types of information sources, such as different sensor/actuator data and large language models, to achieve highly accurate understanding of the environment.

    Theme 2 — Spatio-temporal representation learning: Acquire abstract representations of external objects and long-term behavior at various spatio-temporal levels, and learn and predict complex environments efficiently.

    Theme 3 — Adaptation to unseen environments: Through the training of LLMs and the integration and transfer of multiple world models trained in different environments, build models that can respond flexibly and accurately in unknown environments.

    Key words:
    • Multimodal Learning
    • Vision Language Models
    • Language-guided World Models
    • Model Merge
    • Sequential Generative Models
    • State Representation Learning
    • Hierarchical Reinforcement Learning
    • Object-centric representation learning
    • Action abstraction (Option, Skill, Action Primitive)
    • Model Generalization
    • Meta-learning
    • Transfer Learning
    • Scaling / Video Pre-training
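At its core, a world model learns to predict how the environment's state evolves under an action. The sketch below is a deliberately minimal, hypothetical illustration of that idea: a linear latent transition model z' ≈ Az + Ba fitted by least squares on interaction data, then rolled out to predict an unseen trajectory. The toy dynamics and all variable names are assumptions for illustration, not the lab's actual models.

```python
import numpy as np

# Minimal world-model sketch (illustrative assumption): learn a one-step
# latent transition z' ≈ A z + B a from interaction data, then roll it
# out to predict future states without touching the real environment.

rng = np.random.default_rng(0)

# Ground-truth dynamics, unknown to the learner
A_true = np.array([[1.0, 0.1], [0.0, 0.9]])
B_true = np.array([[0.0], [0.5]])

# Collect transitions with random actions
Z, U, Z_next = [], [], []
z = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1, 1, size=1)
    z_new = A_true @ z + B_true @ a
    Z.append(z); U.append(a); Z_next.append(z_new)
    z = z_new

X = np.hstack([np.array(Z), np.array(U)])   # (500, 3): [state, action]
Y = np.array(Z_next)                        # (500, 2): next state

# Fit [A | B] by least squares -- the simplest possible learned dynamics
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat, B_hat = W[:2].T, W[2:].T

def rollout(z0, actions):
    """Predict a trajectory purely inside the learned model."""
    zs = [z0]
    for a in actions:
        zs.append(A_hat @ zs[-1] + B_hat @ a)
    return np.array(zs)

pred = rollout(np.array([1.0, 0.0]), [np.array([0.3])] * 5)
err = np.abs(A_hat - A_true).max()
print(f"max |A_hat - A_true| = {err:.2e}, predicted trajectory shape = {pred.shape}")
```

Real world models replace the linear map with deep sequential generative models and learn the latent space itself, but the train-then-imagine loop shown here is the same.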
  3. [2] Next Generation Neural Networks

    Research Mission: Develop new models and learning algorithms that push the limits of deep learning.

    Theme 1 — Structure Search: Automatically discover network structures that reflect the nature of the data, using self-supervised learning.
    Key words:
    • (Strong) Lottery Ticket Hypothesis
    • Self-supervised learning
    • Neural Architecture Search
    • Dynamic Sparse Training
    • Graph Representation
    • Grokking

    Theme 2 — New Learning Algorithms: Develop new learning algorithms beyond backpropagation.
    Key words:
    • Backpropagation-Free Training
    • Energy-Based Models
    • Deep Equilibrium Models
    • Predictive Coding
    • Reservoir Computing

    Theme 3 — Modular NN: Prevent catastrophic forgetting via local weight updates (modular NNs).
    Key words:
    • Disentanglement
    • Modular NN
    • Catastrophic Forgetting / Continual Learning
    • Mixture of Experts
    • Circuit Discovery
    • Curriculum Learning
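The lottery ticket hypothesis listed under Theme 1 says a trained dense network contains a small subnetwork that performs comparably on its own. A hedged, toy illustration of the underlying operation (magnitude pruning) is sketched below on a linear model; the data, sparsity level, and use of least squares in place of SGD are all illustrative assumptions.

```python
import numpy as np

# Magnitude-pruning sketch in the spirit of the lottery ticket hypothesis:
# train a dense linear model, keep only the largest-magnitude weights
# (a binary mask), and check that the sparse "ticket" still fits the data.

rng = np.random.default_rng(0)
n, d = 200, 50

# Ground truth uses only 5 of the 50 features
w_true = np.zeros(d)
w_true[:5] = rng.normal(size=5) * 3.0
X = rng.normal(size=(n, d))
y = X @ w_true

# "Train" the dense model (least squares stands in for gradient descent)
w_dense, *_ = np.linalg.lstsq(X, y, rcond=None)

# Prune: keep only the top 10% of weights by magnitude
k = d // 10
mask = np.zeros(d, dtype=bool)
mask[np.argsort(np.abs(w_dense))[-k:]] = True
w_sparse = np.where(mask, w_dense, 0.0)

dense_err = float(np.mean((X @ w_dense - y) ** 2))
sparse_err = float(np.mean((X @ w_sparse - y) ** 2))
print(f"dense MSE = {dense_err:.2e}, pruned MSE = {sparse_err:.2e}, kept {mask.sum()}/{d} weights")
```

In deep networks the interesting part is that such masks found after training also work when rewound to the original initialization; the toy above only shows the pruning step itself.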
  4. [3] Brain-Inspired Intelligence

    Research Mission: Unravel the mechanisms of the brain through model development and analysis based on Brain Reference Architecture (BRA) data generation.

    Theme 1 — Whole BRA Construction: Based on neuroscientific findings, construct and evaluate BRA data across the entire brain. In parallel, build hypotheses about partial computational functions and proceed to implement computational models.
    Key words:
    • Brain Reference Architecture (BRA)
    • Brain Information Flow (BIF)
    • Hypothetical Component Diagram (HCD)
    • Functional Realization Graph (FRG)
    • Structure-constrained Interface Decomposition (SCID) method

    Theme 2 — Brain-Inspired AGI: Using BRA, implement brain models and analyze brain data. Explore the analysis of dysfunctions and the interpretation of brain-like functions and states (e.g., intentions, deception).
    Key words:
    • AI Alignment (including brain-based interpretability)
    • Brain simulation and brain analysis
    • Modeling of brain dysfunction
    • Human-compatible communication

    Theme 3 — Automated BRA Data Generation: Build a pipeline for creating and evaluating BRA data, and automate it using LLMs. The goal is to build the first whole-brain BRA by 2027, and thereafter to keep it updated automatically.
    Key words:
    • Automated BRA Data Evaluation and Construction
    • WBA Technology Roadmap
    • Large language models
    • BRA Editorial System (BRAES)
    • Bibliographic Database for BRA (BDBRA)
  5. [4] Robotics

    Research Mission: Create intelligent behavior as an embodied system through efficient data collection and system implementation in the real world.

    Theme 1 — Robotic Foundation Models: Develop robotic foundation models through efficient data collection, and establish methods for generalization and adaptation to diverse tasks, environments, and robot morphologies.

    Theme 2 — Dexterous Manipulation: Develop robot recognition and control models beyond vision and language to perform dexterous object manipulation in the real world.

    Theme 3 — Whole-body Control: Develop robot systems that continuously recognize the environment and generate motion in the real world, in real time and without stopping.

    Key words:
    • Remote data collection and teleoperation
    • Scalable reinforcement and imitation learning
    • Adaptation to new environments
    • State representation learning and world models
    • Modular robots
    • Integration of recognition and control
    • Service robotics, especially in household environments
    • Mobile Manipulation
    • Locomanipulation and humanoids
    • Integration of foundation models into asynchronous and distributed robotic systems
    • Task and Motion Planning (TAMP)
    • Hardware design optimization
    • Learning object manipulation using haptics
    • Dual- and multi-arm object manipulation
    • Manipulation and modeling of flexible objects
    • Cooking robotics
    • Laboratory automation
    • Soft robots
  6. [5] Large Language Models

    Research Mission: Understand and control the behavior of large language models (LLMs) to study the next generation of LLMs.

    Theme 1 — Understanding and control of operating principles: Understand the sources of LLM performance and success, and use that knowledge to control model behavior.
    Key words:
    • Analysis of internal behavior (Logit Lens, Circuits, Induction Heads, Task Vectors)
    • In-context learning
    • AI Safety (Hallucination, Bias, Watermarking, Prompt Attacks, Unlearning, Copyright)
    • Science of training data, pseudo-data generation
    • Integration with computational linguistics

    Theme 2 — Beyond Transformer: The challenge is to break out of the current LLM paradigm based on the Transformer architecture, and to research the next generation of LLMs that are more efficient and perform better.
    Key words:
    • State Space Models
    • Lightweight models
    • Multimodal / VLM
    • Multi-token Prediction
    • Model fusion
    • LLM agents (tool use)
    • LLM self-evolution

    Theme 3 — Domain specialization: Conduct domain-specific research that is important for the social implementation of LLMs. This includes research on specialization in specific domains such as medicine and finance, as well as research on specialization methods themselves.
    Key words:
    • Domain-specific fine-tuning
    • Continual pre-training
    • PEFT (LoRA, Prompt Tuning)
    • Integration with knowledge bases
    • Compliance with rules
    • Medical LLMs
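LoRA, listed under PEFT in Theme 3, adapts a frozen weight matrix W0 by training only a low-rank correction B·A. The numpy sketch below is a minimal, hypothetical illustration of that parameter arithmetic (dimensions and rank are assumed for illustration), not any particular library's implementation.

```python
import numpy as np

# Minimal LoRA (low-rank adaptation) sketch: instead of updating a frozen
# weight W0 (d_out x d_in), train a low-rank correction B @ A with rank
# r << min(d_out, d_in). Dimensions and rank are illustrative assumptions.

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8

W0 = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (init to 0)

def lora_forward(x):
    # y = W0 x + B (A x); because B = 0 at initialization, the adapted
    # model starts out exactly equal to the pretrained one.
    return W0 @ x + B @ (A @ x)

x = rng.normal(size=d_in)
y_pretrained = W0 @ x
y_adapted = lora_forward(x)

full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"trainable params: {lora_params} (LoRA) vs {full_params} (full fine-tuning)")
print("identical at init:", np.allclose(y_pretrained, y_adapted))
```

With rank 8 on a 512x512 layer, the trainable parameter count drops from 262,144 to 8,192, which is why LoRA makes domain specialization of large models affordable.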
  7. [6] Social Implementation

    Research Mission: Advance real-world solutions through the application of foundational technologies, and drive fundamental research by providing insights into their applicability and limitations.

    Theme 1 — Data-Efficient Training: Given the difficulty of preparing large annotated datasets for training and deploying machine learning models in real-world applications, develop methodologies that address this challenge.
    Key words:
    • Data Augmentation
    • Active Learning
    • Meta-learning
    • Few-label / no-label evaluation
    • Gaussian processes
    • Structured data, spatio-temporal data
    • Design of Experiments / Black-Box Optimization
    • Causal inference / counterfactual machine learning
    • Human-in-the-loop
    • Interpretability / Explainability

    Theme 2 — Domain Knowledge Integration: Not only apply existing methods to real-world data, but also leverage domain knowledge and advanced technologies to discover new solutions to critical issues.

    Theme 3 — AI for Social Good: Contribute to a better society by tackling diverse challenges in technological domains that enhance people's lives, such as medical AI and autonomous driving.
    Key words:
    • Autonomous Driving
    • Medical domain
    • Local Government
    • User Interface
    • Education
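Active learning, one of the data-efficient training techniques listed above, spends a small labeling budget on the examples the current model is least certain about. The sketch below is a toy, hypothetical illustration of pool-based uncertainty sampling with a 1-D logistic regression; the task, query budget, and learning rate are assumptions for illustration.

```python
import numpy as np

# Pool-based active learning sketch (uncertainty sampling): a tiny
# logistic regression repeatedly queries the label of the pool point
# whose predicted probability is closest to 0.5.

rng = np.random.default_rng(0)
pool_x = np.sort(rng.uniform(-3, 3, size=200))
pool_y = (pool_x > 0.5).astype(float)            # true labels, hidden a priori

def fit(x, y, steps=500, lr=0.5):
    """Fit 1-D logistic regression by plain gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

labeled = [0, 199]                               # seed set: the two extremes
for _ in range(10):                              # query budget: 10 labels
    w, b = fit(pool_x[labeled], pool_y[labeled])
    p = 1.0 / (1.0 + np.exp(-(w * pool_x + b)))
    uncertainty = np.abs(p - 0.5)                # small = most uncertain
    uncertainty[labeled] = np.inf                # never re-query a point
    labeled.append(int(np.argmin(uncertainty)))

w, b = fit(pool_x[labeled], pool_y[labeled])
boundary = -b / w                                # learned decision threshold
pred = (1.0 / (1.0 + np.exp(-(w * pool_x + b)))) > 0.5
acc = float(np.mean(pred == pool_y))
print(f"{len(labeled)} labels queried, boundary ~ {boundary:.2f}, pool accuracy = {acc:.2f}")
```

Because each query lands near the current decision boundary, the loop behaves like a binary search for the true threshold at 0.5, reaching high accuracy with only a dozen labels instead of two hundred.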