

20231115 - New Markets Opened by Creative AI and AIDX - Beyond the Metaverse, Broadcasting, and Media Art

DCEXPO Session: New Markets Opened by Creative AI and AIDX - Beyond the Metaverse, Broadcasting, and Media Art
In this talk, "New Markets Opened by Creative AI and AIDX - Beyond the Metaverse, Broadcasting, and Media Art," the speaker, who has more than 30 years of experience in game engines, museums, media art, SNS, metaverse R&D, and international educational institutions, focuses on how Creative AI and AIDX (the creation of user experiences and communication through AI) open up new markets and shape a future that goes beyond fields such as the metaverse, broadcasting, and media art.
https://www.inter-bee.com/ja/forvisitors/conference/session/?conference_id=2383

Along with an introduction to the activities of AICU, an AI startup operating in both directions between Japan and the U.S., the talk also presents concrete, up-to-date case studies from partner companies in the live entertainment and broadcasting industries.

It also covers the challenges and ethical aspects of deploying generative AI in society, together with concrete approaches to solving them, and discusses the outlook ahead.
Finally, a Q&A session answers questions from participants and offers deeper insights.
Through this talk, you will gain a deeper understanding of how Creative AI and AIDX open up new market possibilities.

Announcement of this talk
https://corp.aicu.ai/ja/dcexpo2023
Download the PDF version here
https://hubs.ly/Q028Z-Sn0
The book introduced in the slides: "A Book for Cherishing Your Own LoRA" (自分のLoRAを愛でる本)
https://dhgs.shirai.as/blog/techbook15-lora

Akihiko SHIRAI - 白井暁彦

November 15, 2023



Transcript

  1. Akihiko SHIRAI - 白井暁彦 Digital Hollywood University - Graduate School

    - Professor AICU, Inc. CEO [email protected] X@o_ob New Markets Opened by Creative AI and AIDX - Beyond the Metaverse, Broadcasting, and Media Art
  2. Akihiko SHIRAI, Ph.D in a Manga Manga from my book

    ”The future of game design - Science in Entertainment systems” (2013) 2010-2018 [Teacher] Associate Professor In Information Media in KAIT, Japan. “Creating people who creates!” 2018- Visiting Professor in Digital Hollywood University Graduate School
  3. GREE VR Studio Laboratory, Director Joined the GREE Group in 2018 as Director of GREE VR Studio Laboratory, the R&D division of REALITY, Inc., which develops and operates "REALITY", a

    smartphone metaverse that has reached 10 million downloads worldwide. Developed and published numerous metaverse-related IP for the global market. In parallel, leads the "Creative AI Lab" at Digital Hollywood University Graduate School. Wrote "AIとコラボして神絵師になる" (Collaborating with AI to Become a Godhand Illustrator) within two months of the release of the AI image generator "Stable Diffusion".
  4. シンギュラリティ singularity The grand hypothesis of the "Singularity": the real threat lies in its "power of verification" (2015/11/02, 日塔史, Dentsu Inc., https://dentsu-ho.com/articles/3260). "Singularity" means a singular point, originally a term from physics and mathematics

    denoting a point that deviates from the normal range; for example, the center of a black hole is said to contain a singularity whose apparent volume is zero but whose mass is infinite. Kurzweil proposed "The Law of Accelerating Returns": technology evolves not linearly but exponentially. Exponential progress feels slow at first and the change is hard to notice, but once a certain point (the singularity) is passed, it suddenly goes beyond human understanding. He predicts the year 2045. Technology evolves exponentially.
  5. Founders: Akihiko SHIRAI, Ph.D - 30 years in Entertainment Technologies

    From my book ”The future of game design - Science in Entertainment systems” (2013) ▶ To be Continued!!
  6. Founders: Akihiko SHIRAI, Ph.D 30 years experience in Entertainment Technologies

    Manga from my book ”The future of game design - Science in Entertainment systems” (2013) 1973 [born] in Yokohama, Japan. 1983 [10 y.o.] Learned programming from magazines; made a 3D flight simulator with character graphics on a PC-6001mkII. 1988 [15 y.o.] Played the NES so much I nearly forgot my high-school entrance exams; encountered cameras, publishing (newspaper editing), and the sciences. 1990 [18 y.o.] Worked as an arcade game operator and a paperboy (financially, it was difficult to go straight to university). From 1992 to 1998 I studied at Tokyo Polytechnic University. "Photo engineering" covers a lot: optics, functional chemistry, electronics, laser holography, printing... and CG. As a club activity, I immersed myself in art photography. Fortunately, I met Photoshop and a ray-tracing program in 1994. In my first laboratory, Prof. Yuichiro Kume gave me the opportunity to study virtual reality, and I became excited about real-time graphics, the newest technology since video games. In 1998, I was offered a job at Namco as a planner (rare), but instead joined Canon, which offered me a job on the recommendation of my professor. I was then assigned to the office administration section at the Fukushima Plant.
  7. Founders: Akihiko SHIRAI, Ph.D 30 years experience in Entertainment Technologies

    Manga from my book ”The future of game design - Science in Entertainment systems” (2013) In 2000, I moved to Criterion, Inc. in the UK, the developer of the game engine "RenderWare", which was adopted as an official game engine for PlayStation 2. So I got the chance to become a game development consultant within the Canon family. It was a very good opportunity to learn about game development and its industry. After the launch fever of PlayStation 2, I contributed to some hit titles thanks to the middleware, multi-platform technologies, and shaders in the early GPU environment. In this era the common sense was "once graphics become greater, the game becomes more fun," but in my mind I also held a different hypothesis: "exploring the VR experience makes games more interesting and meaningful." In 2001, after years of practice in the Japanese mass-production industry and its IT division, I moved to Tokyo Institute of Technology as a full-time PhD student (Prof. Sato's lab, well known for the SPIDAR interface).
  8. Fantastic Phantom Slipper (1997) The another key technology is the

    “Phantom Sensation”, a special psychophysical phenomenon on human skin. When two mechanical stimuli of the same intensity are applied to different locations on the skin surface with appropriate spacing, the two stimuli are fused and a single sensation is perceived. When the intensity of one stimulus increases, the location of the fused sensation shifts toward the stronger stimulus. This psychophysical phenomenon, known as phantom sensation, was discovered by Békésy, a Nobel Prize winner, in 1961. This project was realized in 1997, before the rumble pads (vibrotactile feedback) of game controllers. The system was built with a real-time optical motion capture system using a PSD and infrared LEDs. Two LEDs are fixed on each slipper, and the locations and directions of the slippers on the floor are measured in real time. Since feet are usually on the floor, two-dimensional measurement is sufficient for this application. A similar technique was later used in Nintendo's WiiRemote, with its image sensor and sensor bar.
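To make the phenomenon concrete, here is a minimal sketch of one commonly used simplified model of phantom sensation between two vibrotactile actuators: the fused sensation is placed at the amplitude-weighted position of the two stimuli. The function name, positions, and interpolation rule are my own illustrative assumptions, not the project's implementation.

```python
# Minimal illustrative model of phantom sensation ("funneling") between two
# actuators: the fused sensation appears at the amplitude-weighted position.
# All names and values here are assumptions for illustration only.

def phantom_sensation(pos_a, pos_b, amp_a, amp_b):
    """Return the perceived 1D location and total intensity of the fused stimulus."""
    total = amp_a + amp_b
    if total == 0:
        return None, 0.0
    # The fused sensation shifts toward the stronger stimulus.
    perceived_pos = (amp_a * pos_a + amp_b * pos_b) / total
    return perceived_pos, total

# Equal amplitudes: the sensation appears midway between the actuators.
print(phantom_sensation(0.0, 10.0, 1.0, 1.0))   # (5.0, 2.0)
# A stronger second actuator pulls the sensation toward it.
print(phantom_sensation(0.0, 10.0, 0.5, 1.5))   # (7.5, 2.0)
```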
  9. Fantastic Phantom Slipper (1997) 400fps mocap (by own) Dynamic Phantom

    Sensation (by own) DirectX5 graphics (by own) Hemisphere dome screen (by own) Vibrator implemented slipper (by own) by 300MHz CPU + VGA projector Prototype, handmade, in Lab
  10. Labyrinth Walker (2002) Demo at SIGGRAPH 2002 Emerging Technologies https://doi.org/10.1145/1242073.1242098

    https://www.youtube.com/watch?v=jclB9u_R4gA Demo of “A new step-in-place locomotion interface for virtual environment with large display system” The project presents a new locomotion interface for virtual environments with a large display system. Users can direct and control travel in the VE by in-place stepping and turning actions. Using turntable technology, visual feedback is continuously provided on a screen of limited size.
  11. Tangible Playroom: Penguin Hockey (2001-2003) Demo at SIGGRAPH KIDS 2003,

    Best Multilingual and/or Omnilingual award. "Tangible Playroom" was designed as a future computer entertainment system for children. It provides Virtual Reality content with safe force feedback and a large floor image. Players interact with the content's world using their full body. To feel force feedback, the player grasps a tangible (graspable, perceptible by touch) grip like a cork ball. The grip is linked to encoder motors by strings, so it can input its 3D location to the system. When the player touches an object, the system calculates the correct force feedback and delivers it to the player via the strings and the tangible grip. The system provides high-quality force representation more safely than a metal linked-arm system. "Penguin Hockey" is a typical demonstration content: a simple 3D hockey game played with autonomous penguins. The pucks are snowmen; to score, players throw snowmen into the enemy's goal hole. The software has a real-time dynamics simulator, so children can interact with computer-generated characters that show realistic reactions.
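For readers unfamiliar with string-driven force displays, here is a minimal sketch of how such a device can render a target force: each motor can only pull the grip along its string, so the controller looks for non-negative string tensions whose combined pull approximates the desired force. The anchor layout, the non-negative least-squares solver, and all numbers are my own illustrative assumptions, not the Tangible Playroom implementation.

```python
# Illustrative sketch of string-tension force rendering (not the project's code).
# Each string can only pull toward its motor anchor, so we solve for
# non-negative tensions whose combined pull approximates the desired force.
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

# Hypothetical anchor positions of four string motors above the play area (meters).
anchors = np.array([
    [0.0, 0.0, 2.0],
    [3.0, 0.0, 2.0],
    [0.0, 3.0, 2.0],
    [3.0, 3.0, 2.0],
])

def string_tensions(grip_pos, desired_force):
    """Solve for tensions t >= 0 so that sum_i t_i * u_i ~= desired_force,
    where u_i is the unit vector from the grip toward anchor i."""
    dirs = anchors - grip_pos
    units = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    tensions, residual = nnls(units.T, desired_force)  # columns are pull directions
    return tensions, residual

# Example: push the grip upward with 5 N while it sits near the center.
t, r = string_tensions(np.array([1.5, 1.5, 1.0]), np.array([0.0, 0.0, 5.0]))
print(t, r)
```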
  12. Founders: Akihiko SHIRAI, Ph.D 30 years experience in Entertainment Technologies

    Manga from my book ”The future of game design - Science in Entertainment systems” (2013) 2003-2005 [Post-doc in NHK-ES] Axi-Vision & Projection Lighting for next-generation digital TV broadcasting and production environments. 2005-2007 [Post-doc in Laval, France] ENSAM Presence & Innovation Lab in Laval, France. Studied theme park attraction development and social deployment through the “Laval Virtual ReVolution” exposition. 2008-2010 [Science Communicator in the museum “Miraikan”] At a national science museum in the Tokyo Bay area, he learned the specialized skills of communicators, interpreters, and service engineering through the science communicator training program, and developed exhibits that bring entertainment ideas into the world of science.
  13. [Post-doc] Axi-Vision & Projection Lighting • A new archiving system

    for TV studio sets using depth camera and global illumination (Nico2004) • Entertainment Applications of Human-Scale Virtual Reality Systems (PCM2004) https://doi.org/10.1007/978-3-540-30543-9_5 PoC demo development in cooperation with NHK Science & Technology Research Laboratories (STRL), for next-generation digital TV broadcasting and production environments.
  14. “Entertainment Systems” •Definition (in post-doc era) ◦ Computer systems that

    were designed to affect human amusement. ◦ Video games, media art, and real-time interactive and entertainment virtual reality can be included. ◦ Cinemas and DVDs may also be included, but the focus should be on "computer system"-like interactivity.
  15. WiiMedia (2007) “WiiMedia: motion analysis methods and applications using a

    consumer video game controller” https://doi.org/10.1145/1274940.1274966 "WiiMedia" is a study that uses the WiiRemote, the new consumer video game controller from Nintendo, for media art, pedagogical applications, scientific research, and innovative, unprecedented entertainment systems. Normally, consumer hardware such as the standard controllers of new video game platforms is closed to public developers. The Nintendo WiiRemote, however, can easily be connected to an ordinary PC through a Bluetooth adapter, so public developers can access the WiiRemote's acceleration and IR sensors over this wireless connection. We think this may enlarge the non-professional game development environment with an innovative new game controller. However, when we tried to develop our projects with the WiiRemote, we ran into many difficulties, because the only data that can be captured are basic sensor data, not the player's full motion. Through the WiiMedia project, while developing a few applications, we developed several motion analysis methods using the WiiRemote. This paper describes case studies that include the state of the art and several motion analysis methods.
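As one generic example of the kind of motion analysis that is possible with only a 3-axis accelerometer like the WiiRemote's, the sketch below estimates static tilt (pitch and roll) from gravity. This is my own illustration under an assumed axis convention, not code from the WiiMedia paper.

```python
# Illustrative only: estimating static tilt from a 3-axis acceleration sample,
# one of the basic analyses possible when only accelerometer data is available.
# The axis convention (x right, y forward, z up, values in g) is an assumption.
import math

def tilt_from_acceleration(ax, ay, az):
    """Estimate pitch and roll in degrees, assuming the controller is roughly
    at rest so the accelerometer mainly measures gravity."""
    pitch = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    roll = math.degrees(math.atan2(-ax, az))
    return pitch, roll

# Lying flat and face-up: roughly (0, 0, +1 g) -> no tilt.
print(tilt_from_acceleration(0.0, 0.0, 1.0))    # (0.0, 0.0)
# Tilted forward: part of gravity appears on the y axis -> pitch of about 30 degrees.
print(tilt_from_acceleration(0.0, 0.5, 0.87))
```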
  16. [Book] WiiRemote Programming https://www.ohmsha.co.jp/book/9784274067501/ A Programming Study Book on Interactive

    Technology Using WiiRemote. Programming on a PC with the "WiiRemote," the distinctive controller of the popular home video game console "Wii," has been attracting attention. With the introduction of the WiiRemote, such hardware has become inexpensive and readily available, and many people are interested in it. This book explains how to program the WiiRemote from a PC for beginning programmers who are interested in applying it. As a hardware-oriented introduction to game programming, it explains the source code step by step, so you can learn interaction techniques by yourself while developing concrete samples. Supported languages include C/C++, C#, ActionScript 3, and Processing.
  17. [Museum] Songs of ANAGURA (2008-2010) In Miraikan (The National Museum

    of Emerging Science and Innovation in Japan), a permanent exhibition from 2010 until August 31, 2023. https://www.miraikan.jst.go.jp/sp/anagura/ Concept design and foundational technical research. - 10x16 m projection clusters - Laser tracker clusters - Players do not wear any tracking devices - “Wall of Wisdom” - observation console - Unity-based content creation
  18. Laval Virtual “ReVolution” in France Akihiko SHIRAI as Session chair

    of ReVolution. • The role of ReVolution ◦ Bringing experiences of new technologies, or of the future, to Laval Virtual since 2006. ◦ For ReVo exhibitors: a chance to get feedback from VR industries and the general public. ◦ The art festival “RectoVRso” has been held since 2017, so ReVolution can now focus on technology revolutions from all over the world. • The themes: [2019] VR5.0 [2018] 1+1=∞ (one plus one equals unlimited) [2017] TransHumanism++ [2016] REAL VIRTUALITY [2015] Kiddy Dream in Virtual Reality [2014] Frontier village in Virtual Reality [2013] The NEXT BIG STEP [2012] Virtual Reality That Moves You [2011] Converging [2010] Diverseness [2009] ReVolution Causes Revolutions [2008] World Performance of VR Applications
  19. Akihiko SHIRAI, Ph.D in a Manga - 30 years in

    Entertainment R&D Manga from my book ”The future of game design - Science in Entertainment systems” (2013) 2010-2018 [Teacher] Associate Professor In Information Media in KAIT, Japan. “Creating people who creates!” 2018- Visiting Professor in Digital Hollywood University Graduate School
  20. [Labo Project] Real Baby - Real Family “Real Baby -

    Real Family: Holdable tangible baby VR” (SIGGRAPH 2017) "Real Baby - Real Family" is a project aimed at expressing love and family ties utilizing a Virtual Reality (VR) system. In this paper, we describe the version presented at the International collegiate Virtual Reality Contest (IVRC 2016). It reports the system design and implementation, user evaluation during the exhibition, and future possibilities. The single-player version of "Real Baby" exhibited at the IVRC 2016 preview was made up of three main elements. The first is a holdable baby mock-up that does not require an HMD. The second is a generator of a baby face based on a user's 2D facial image. The third consists of visual, vocal, and haptic feedback as well as event generation. In the future, we hope to examine alternative expressions of "love in family ties" with this project.
  21. [Display] 2x3D & ExPixel(2011-2016) “2x3D: Real time shader for simultaneous

    2D/3D hybrid theater” (SIGGRAPH ASIA 2012) “ExPixel: PixelShader for multiplex-image hiding in consumer 3D flat panels” (SIGGRAPH 2014) [Award] Received the Innovative Technologies 2013 award from the Ministry of Economy, Trade and Industry (METI) at DCEXPO 2013.
  22. GREE VR Studio Laboratory, Director Joined the GREE Group in

    2018, working in R&D management of GREE VR Studio Laboratory, the R&D division of REALITY, Inc., which develops and operates REALITY, a metaverse for smartphones that has achieved 10 million downloads worldwide, while also serving as a professor at Digital Hollywood University. Outside of his day job, he has written technical books, including "Collaborate with AI to become a god painter", written within two months of the release of "Stable Diffusion," the AI image generator that had become a hot topic.
  23. “REALITY”, metaverse via your smartphone 3D Avatar Driven Ecosystem for

    5 years (Director, GREE VR Studio Laboratory): Avatar Creation, Live Viewing, Gifting & Collaborating, Live Streaming, Games
  24. GREE Techbookfest Club (技術書典部): past lineup. Shirai's past writing (basically outside of work!) • R&D required for VTuber live entertainment technology • Development of an international bidirectional avatar haptic live performance using Virtual Cast and Hapbeat

    • Techniques for turning online events into WebVR using Mozilla Hubs https://techbookfest.org/organization/276320001 Stable Diffusion released (Aug 26) ↓ Contributed 50 pages to the "GREE Techbookfest Club Journal, Autumn 2022" within one month for Techbookfest 13 (Sep 25) ↓ Received a commercial publication offer from an editor the next day ↓ (added nearly 100 more pages...) On sale from Oct 28!
  25. Published a book: “Collaborating with AI to become a godhand

    illustrator - How to comprehend Stable Diffusion” Kindle version https://ivtv.page.link/ak print version https://ivtv.page.link/ap Launched on Oct 28th 2022 = 2 months after the release of Stable Diffusion! Only speed can keep up with this stream…!
  26. Table of contents ["AIとコラボして神絵師になる" (Collaborating with AI to Become a Godhand Illustrator)] (published 2022/10/28): Introduction / Chapter 1: The adventure begins - collaborating with AI to become a godhand illustrator / Chapter 2: Trying out Midjourney / Chapter 3: Becoming an AI godhand illustrator with DreamStudio / Chapter 4: Running Stable Diffusion on Google Colab

    / Chapter 5: Understanding Stable Diffusion through its papers / Chapter 6: What I learned by becoming a popular AI illustrator / Chapter 7: Japanese law and the legality of AI illustration / Chapter 8: How does drawing together with artificial intelligence affect humanity? / Chapter 9: Using AI in professional work / Chapter 10: Sample works and a creator interview (852話-san) / Appendix A: References and URLs / Afterword: The end of the adventure
  27. Table of contents ["AIとコラボして神絵師になる" (Collaborating with AI to Become a Godhand Illustrator)] (published 2022/10/28): Introduction / Chapter 1: The adventure begins - collaborating with AI to become a godhand illustrator / Chapter 2: Trying out Midjourney / Chapter 3: Becoming an AI godhand illustrator with DreamStudio / Chapter 4: Running Stable Diffusion on Google Colab

    / Chapter 5: Understanding Stable Diffusion through its papers / Chapter 6: What I learned by becoming a popular AI illustrator / Chapter 7: Japanese law and the legality of AI illustration / Chapter 8: How does drawing together with artificial intelligence affect humanity? / Chapter 9: Using AI in professional work / Chapter 10: Sample works and a creator interview (852話-san) / Appendix A: References and URLs / Afterword: The end of the adventure
  28. [AI Fusion] Technology that realizes avatar karaoke. "GREE VR Studio Laboratory stimulates the creator economy built on avatars in the metaverse era - releases the virtual performance technology video 'AI Fusion'"

    GREE, Inc., October 19, 2022 https://prtimes.jp/main/html/rd/p/000000239.000021973.html "A place where fledgling metaverse developers discover who they are [GREE VR Studio Laboratory research internship]" CGWORLD, 2023/2/16 https://cgworld.jp/article/202302-greevr.html Here too, "creating people who create" matters: fusion with human expression rather than [AI automation]; stimulating the UGC/creator economy; stimulating the existing karaoke industry through metaverse R&D.
  29. [Metaverse Mode Maker] Image generation with SD (Stable Diffusion) and motion synthesis while wearing an HMD. Paper accepted at "SIGGRAPH", the international CG conference in the U.S.

    Exhibits accepted at "Laval Virtual" (VR, France) and "CEDEC", the Japanese game developers conference. Also open source: https://github.com/gree/MuscleCompressor
  30. [Metaverse Mode Maker] Work in GREE VR Studio Lab Word2Textile

    + Motion UGC In HMD Metaverse [Accepted] Laval Virtual SIGGRAPH CEDEC2023 Opensource https://github.com/gree/MuscleCompressor
  31. From Amazon reviews (Bad 😢): a negative piece of writing that appeals to human emotion. But looking closely... haven't they bought the wrong book...? "Artificial

    Images: Midjourney / Stable Diffusion AI art collection" (852話-san). Sorry... I do explain it, but apparently not enough... How many people are using NovelAI right now (2023/2/10)? The hate toward AI image generation is intense! 😢😢😢
  32. Scorecard of AI image generation services in Japan (as recorded around February 2023). Columns: smartphone support / works in Japanese / fixed seed (RNG control) / img2img / subscription fee (monthly).

    - Meitu: ◎ (no prompt needed) / ◎ / 〇 / 〇 / 800 yen/month, no ads
    - お絵描きばりぐっどくん: ◎ (LINE chat) / 〇 / × / × / Premium 550 yen/month
    - AI Picasso (AIピカソ): ◎ (app) / 〇 / × / 〇 / watch ads to generate, 600 yen/week
    - Midjourney / niji journey: 〇 (Discord) / 〇 (niji journey) / × / △ (remix mode) / $10/month, ~200 generations/month
    - Lexica: △ / △ / × / × / from $10/month, 1,000 generations
    - MemePlex: ◎ / ◎ / ◎ / 〇 / free (paid plans from 1,200 yen)
    - NovelAI: △ / △ / 〇 / 〇 / from $10/month, ~200 generations
    - Dream Studio: △ / △ / 〇 / 〇 / $10/month, ~5,000 generations
    - AIのべりすと: 〇 / 〇 / ◎ / ◎ / from 970 yen + tax/month, ~300 generations
  33. Other presence related GenAI - Web Media: Series “Generative AI

    stream” (by Impress) - Television: https://www.fnn.jp/articles/-/578546 (FujiTV) - LINE Chatbot - AIDX Consulting - “CreativeAI Lab” in Digital Hollywood Graduate School
  34. Will generative AI take jobs? → It turned out to be true. Here’s what we know about

    generative AI’s impact on white-collar work (FT) https://www.ft.com/content/b2928076-5c52-43e9-8872-08fda2aa2fcf
  35. [動的ペルソナ] Dynamic Persona “The future of game design - science

    of entertainment systems” by A. Shirai in 2013. Born in 2000; her dream is to become a princess. In 2014, in junior high, she loves playing “Pokemon X/Y”. In 2016, in high school, she loves singing Hatsune Miku songs at karaoke; her dream is to become an idol; “Pokemon GO“ taught her what AR is. In 2018, in college, she is studying fashion, still loves singing at karaoke, but is a bit bored with RPGs. The content you experience affects your life. In 2020, she wants to watch the Olympic Games in her cosplay costume. The “COVID-19” pandemic!
  36. Beyond the metaverse, broadcasting, and media art ... 1990-1995 VideoGame2D 1995-2000 Media Art 2000-2005 VideoGame3D 2005-2010

    Museum 2010-2015 SNS 2015-2020 VR4.0 2020-2025 CreativeAI. We have kept doing this R&D properly, step by step.
  37. References: VibeShare and Directional Haptics 1. Yuichiro Kume. 1998. Foot

    interface: fantastic phantom slipper. In ACM SIGGRAPH 98 Conference abstracts and applications (SIGGRAPH '98). Association for Computing Machinery, New York, NY, USA, 114. DOI: https://doi.org/10.1145/280953.284801 2. Yusuke Yamazaki, Shoichi Hasegawa, Hironori Mitake, and Akihiko Shirai. 2019. NeckStrap Haptics: An Algorithm for Non-Visible VR Information Using Haptic Perception on the Neck. In ACM SIGGRAPH 2019 Posters (Los Angeles, California) (SIGGRAPH ’19). Association for Computing Machinery, New York, NY, USA, Article 60, 2 pages. https://doi.org/10.1145/3306214.333 3. Yusuke Yamazaki and Akihiko Shirai. 2021. Pseudo Real-Time Live Event: Virtualization for Nonverbal Live Entertainment and Sharing. In 2021: Laval Virtual VRIC, ConVRgence Proceedings 2021 (Laval, France) (VRIC ’21). https://doi.org/10.20870/IJVR.2021.1.1.479
  38. References: Map and Walking interactions 1. Makiko Suzuki Harada, Hidenori

    Watanave and Shuuichi Endou: "Tuvalu Visualization Project - Net Art on Digital Globe: Telling the Realities of Remote Places", pages 559-572, 2011. 2. Akihiko Shirai, Kiichi Kobayashi, Masahiro Kawakita, Shoichi Hasegawa, Masayuki Nakajima, and Makoto Sato. 2004. Entertainment applications of human-scale virtual reality systems. In Proceedings of the 5th Pacific Rim conference on Advances in Multimedia Information Processing - Volume Part III (PCM'04). Springer-Verlag, Berlin, Heidelberg, 31–38. DOI: https://doi.org/10.1007/978-3-540-30543-9_5 3. Akihiko Shirai, Yuki Kose, Kumiko Minobe, and Tomoyuki Kimura. 2015. Gamification and construction of virtual field museum by using augmented reality game "Ingress". In Proceedings of the 2015 Virtual Reality International Conference (VRIC '15). Association for Computing Machinery, New York, NY, USA, Article 4, 1–4. DOI: https://doi.org/10.1145/2806173.2806182
  39. References: Avatar and metaverse in Education 1. Rex Hsieh, Akihiko

    Shirai, and Hisashi Sato. 2019. Evaluation of Avatar and Voice Transform in Programming E-Learning Lectures. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (IVA '19). Association for Computing Machinery, New York, NY, USA, 197–199. DOI: https://doi.org/10.1145/3308532.3329430 2. Rex Hsieh, Akihiko Shirai, and Hisashi Sato. 2019. Effectiveness of facial animated avatar and voice transformer in elearning programming course. In ACM SIGGRAPH 2019 Posters (SIGGRAPH '19). Association for Computing Machinery, New York, NY, USA, Article 82, 1–2. DOI: https://doi.org/10.1145/3306214.3338540 3. Liudmila Bredikhina, Toya Sakaguchi, and Akihiko Shirai. 2020. Web3D Distance Live Workshop for Children in Mozilla Hubs. In The 25th International Conference on 3D Web Technology (Virtual Event, Republic of Korea) (Web3D ’20). Association for Computing Machinery, New York, NY, USA, Article 27, 2 pages. https://doi.org/10.1145/3424616.342472 4. Stewart Culin. 1920. THE JAPANESE GAME OF SUGOROKU. The Brooklyn Museum Quarterly 7, 4 (1920), 213–233. http://www.jstor.org/stable/2645
  40. References: XR Live Entertainment 1. Akihiko Shirai. 2019. REALITY: broadcast

    your virtual beings from everywhere. In ACM SIGGRAPH 2019 Appy Hour (SIGGRAPH '19). Association for Computing Machinery, New York, NY, USA, Article 5, 1–2. DOI: https://doi.org/10.1145/3305365.3329727 2. Bredikhina, Liudmila et al. “Avatar Driven VR Society Trends in Japan.” 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (2020): 497-503. 3. Akihiko Shirai, et al. 2019. Global Bidirectional Remote Haptic Live Entertainment by Virtual Beings. ACM SIGGRAPH ASIA 2019 Real-Time Live! SIGGRAPH ASIA 2018
  41. Collaboration with “Hapbeat” through internship Product Technical Info Hapbeat Introduction

    movie https://www.youtube.com/watch?v=DAXH5MMpBDY Paper link to the origin of Hapbeat: "Tension-Based Wearable Vibroacoustic Device for Music Appreciation" https://bit.ly/3kicJif Hapbeat-Duo is an easy-to-wear necklace-type device that can present powerful, impactful vibration. Hapbeat-Duo has two core-less DC motors linked by a neck strap. The dynamic range of frequency and amplitude is wide enough for the modulation to be perceived. Vibrations on either side of the neck do not resonate with each other, because the connector is not a rigid body but a flexible neck strap made of satin ribbon. Therefore, the device can render various distinguishable vibration waves on both sides of the neck independently. Hapbeat-Duo
  42. Description of HapticMineSweeper Neck Strap Haptics: Algorithm for Non-visible VR

    Information Using Perceptual Characteristics on the Neck (SIGGRAPH 2019). Please watch a video here: https://youtu.be/AgmFWBu5ZnM A haptic rendering algorithm that dynamically modulates wave parameters to convey distance, direction, and object type, utilizing perception on the neck and the Hapbeat-Duo, a haptic device composed of two actuators linked by a neck strap.
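To illustrate the kind of mapping such a rendering algorithm performs, here is a minimal sketch that encodes an object's direction into the left/right amplitude balance, its distance into pulse rate and gain, and its type into the carrier frequency. The specific formulas, ranges, and object labels are my own assumptions for illustration, not the published algorithm.

```python
# Illustrative sketch only (not the published algorithm): one plausible way to
# encode direction, distance, and object type into the two neck actuators of a
# Hapbeat-Duo-style device. All parameter choices here are assumptions.
import math

def neck_haptic_params(rel_angle_deg, distance_m, obj_type):
    """Map object direction, distance, and type to left/right amplitudes,
    a pulse rate, and a carrier frequency."""
    # Direction: pan the amplitude between the left and right actuators.
    pan = math.sin(math.radians(rel_angle_deg))      # -1 (left) .. +1 (right)
    left_amp = 0.5 * (1.0 - pan)
    right_amp = 0.5 * (1.0 + pan)
    # Distance: nearer objects pulse faster and stronger.
    closeness = max(0.0, 1.0 - min(distance_m, 10.0) / 10.0)
    pulse_hz = 1.0 + 9.0 * closeness                 # 1..10 pulses per second
    gain = 0.2 + 0.8 * closeness
    # Object type: distinguish by the carrier frequency of the vibration.
    carrier_hz = {"mine": 220.0, "goal": 110.0}.get(obj_type, 160.0)
    return left_amp * gain, right_amp * gain, pulse_hz, carrier_hz

# Example: an object 45 degrees to the right, 2 m away.
print(neck_haptic_params(45.0, 2.0, "mine"))
```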
  43. Amount of applause from audiences Amount of cheering from audiences

    The performer's (real) LEDs glow in sync with the left and right bars. The performers all wear Hapbeat devices in real life, and the vibrations convey how excited the audience is. The audience can visually see that their cheers are reaching the performers, which helps enhance the excitement. Vibeshare::Performer at SIGGRAPH ASIA 2019 Real-Time Live! Please watch a video here: https://www.youtube.com/watch?v=yTrDRKazksM&t=1s
  44. Today’s following talk “Continuous tap” emoji, button-tap type “continuous tap”

    Continuous tap detection: the emoji button is tapped twice (or more) within 1 second. • Solves practical problems such as a large number of server requests. • Expresses the magnitude of the viewers’ emotion.
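A minimal sketch of the detection rule described above: a tap joins a continuous run when it follows another tap within 1 second, so runs can be reported in aggregate instead of as individual server requests. The class name and reporting strategy are my own illustrative assumptions, not the production implementation.

```python
# Minimal sketch of "continuous tap" detection: two or more taps on the same
# emoji button within 1 second form a continuous run. Aggregating runs locally
# reduces server requests while still conveying the intensity of the reaction.

WINDOW_S = 1.0  # taps closer together than this belong to the same run

class ContinuousTapDetector:
    def __init__(self):
        self.last_tap_time = float("-inf")
        self.run_length = 0          # taps in the current continuous run

    def on_tap(self, now: float) -> int:
        """Register a tap at time `now` (seconds); return the current run length."""
        if now - self.last_tap_time <= WINDOW_S:
            self.run_length += 1     # still inside the 1 s window
        else:
            self.run_length = 1      # window expired: start a new run
        self.last_tap_time = now
        return self.run_length

det = ContinuousTapDetector()
for t in [0.0, 0.4, 0.9, 2.5, 2.9]:
    print(t, det.on_tap(t))
# Taps at 0.0/0.4/0.9 form one continuous tap (length 3);
# taps at 2.5/2.9 start another (length 2).
```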
  45. Result chart: Taps and Requests. Bottom axis: video playback time [s]. Left axis:

    the cumulative count of taps or requests within 5 s. Right axis: the single moving average (5 s window with a resolution of 1 s, red line).
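As a small illustration of the two quantities plotted on this slide, the sketch below computes 5 s bin counts and a moving average over a 5 s window evaluated every 1 s from a list of tap timestamps. The timestamps and helper names are assumptions for illustration, not the original analysis code.

```python
# Illustrative only: compute per-5-second tap counts and a moving average
# (5 s window, 1 s step) from tap timestamps, as described on the slide.

def taps_per_bin(tap_times, bin_s=5.0, duration_s=30.0):
    """Count taps in consecutive bins of `bin_s` seconds."""
    n_bins = int(duration_s // bin_s)
    counts = [0] * n_bins
    for t in tap_times:
        idx = int(t // bin_s)
        if 0 <= idx < n_bins:
            counts[idx] += 1
    return counts

def moving_average(tap_times, window_s=5.0, step_s=1.0, duration_s=30.0):
    """Taps per second averaged over a sliding window, evaluated every step."""
    points = []
    t = window_s
    while t <= duration_s:
        in_window = sum(1 for x in tap_times if t - window_s < x <= t)
        points.append((t, in_window / window_s))
        t += step_s
    return points

taps = [0.2, 0.8, 1.1, 4.9, 6.0, 6.2, 6.4, 12.0]   # hypothetical timestamps [s]
print(taps_per_bin(taps))
print(moving_average(taps)[:5])
```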
  46. [UXDev]New user experience in Metaverse R&D results by a Music

    Video “AI Fusion” technology. Everyone can play in a band at the same time, like karaoke!
  47. [VTech Challenge] VR students go to Metaverse Industry GREE group

    supports IVRC, the “Interverse VR challenge”. (Prof. Inami will give a talk.) Some graduated students work with us. Experienced VR engineers are highly valuable in the metaverse era. Some work on cloud infrastructure.
  48. The future of the generative AI market. From a recent blog post by the major U.S. VC Andreessen Horowitz (a16z), "Who Owns the Generative AI Platform?". [End-to-end] Midjourney, RunwayML [Apps] GitHub Copilot, Jasper [Closed-source / via API] GPT-3 [Model hubs] Hugging Face, Replicate

    [Open-source models] Stable Diffusion [Cloud] AWS, GCP, Azure, CoreWeave [Compute hardware] NVIDIA, Google (TPU). Much of the money in the generative AI market ultimately flows to infrastructure companies. As rough numbers, app companies spend about 20-40% of their revenue on inference and tuning, paid either directly for cloud instances or to model providers; model providers in turn spend about half of their revenue on cloud. So it is reasonable to estimate that 10-20% of total generative AI revenue currently flows to the cloud. ★ How revenue structures will form at the other layers is still largely unknown. Who Owns the Generative AI Platform? (Andreessen Horowitz) https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
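A small worked example of the rough arithmetic behind the 10-20% figure quoted above (the percentages are the a16z estimates cited on the slide, not new data):

```python
# Rough arithmetic behind the 10-20% estimate quoted on this slide.
infra_share_low, infra_share_high = 0.20, 0.40   # app revenue spent on inference/tuning
cloud_share = 0.50                               # portion of model providers' revenue spent on cloud

# If that infrastructure spend ultimately reaches cloud providers at roughly
# the model providers' rate, the share of total generative AI revenue landing
# at the cloud layer is about:
low = infra_share_low * cloud_share
high = infra_share_high * cloud_share
print(f"{low:.0%} to {high:.0%}")   # -> 10% to 20%
```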
  49. Let's learn about Transformers. "Natural Language Processing with Transformers: Building Language Applications with

    Hugging Face" [Japanese edition] "機械学習エンジニアのためのTransformers - 最先端自然言語処理ライブラリによるモデル開発", by Lewis Tunstall, Leandro von Werra, and Thomas Wolf, translated by 中山光樹, published August 2022. ★ Perhaps because it came out before Stable Diffusion, the subtitle reads "with Hugging Face!"; it is a book written by people at Hugging Face. Note: the "T" in GPT stands for Transformer.
  50. OpenAI's CEO - Sam Altman: I wanted a peek inside the CEO's head. A long interview (2023/01/18) of OpenAI CEO Sam Altman by Connie Loizos

    (@Cookie, StrictlyVC/TechCrunch) for StrictlyVC was interesting, so I covered it on my personal blog. https://note.com/o_ob/n/n9493438e24fb
  51. OpenAI's CEO - Sam Altman: a peek inside the CEO's head. • Monetization was not planned from the start. • Until AGI (artificial general intelligence)

    arrives, market principles work effectively, and OpenAI welcomes that. • A wide range of opinions from society was expected; gaining that understanding is meaningful in itself. • What AGI must not do should be defined by law. • Agents playing with each other (that is, AI vs. AI) is already anticipated. • It is important to learn "what people want to allow AI to do." • But investigating this requires an edgy system that is creative and exploratory, and some people will "not like it" or "find it uncomfortable." • AI itself needs to be personalized (far better than each tech company building one system under one rule). • (Regardless of the partnership with Microsoft and the like) OpenAI should build things people find worth paying tokens for. • Even a child can see that "reading through ChatGPT is easier than reading books → writing papers with it becomes easier." • The use of GPT in education is anticipated; schools will need technology to detect it, and that makes for a healthier society. • → Watermarking technology becomes necessary. • National policy and what individual schools rely on will be decided by government policy, but making this perfect is impossible. • Free language models and policy settings for individuals and groups differ from those for companies and governments. • Video generation is definitely coming (he won't say when).
  52. OpenAI's CEO - Sam Altman: from the Q&A (excerpts). Q: What are the good and bad cases for human evolution (on a one-year scale, not a 70,000-year scale)? • (The good case is almost unbelievably

    good, so he sets it aside.) On the bad case, ordinary people also want to understand deeply and in detail what AGI will become; to make the system safe, knowing the opposite case is extremely important. Q: What is the timeline for AGI? • Expressed on a 2x2 matrix of "fast/slow" takeoff and a "long/short" timeline toward a completely safe world, his answer is "short timeline, slow takeoff". Meanwhile, people will have many different reactions to any "declaration of AI victory" over AGI. Q: On the hype and scale of ChatGPT. • It is impressive, but use it a hundred times and you will understand; it is the same as people coming to understand that fake news exists. Q: On Google and contributing to society. • The idea that big companies are finished is basically wrong. Search will change, though not in the shocking way people imagine. • It is only a matter of time before AGI contributes to the economy. • People will never stop doing creative things. • Scientific progress is how we (humanity) as a whole advance, and that is the kind of social contribution we make.
  53. OpenAI's CEO - Sam Altman: from the Q&A (excerpts). Q: On where tech workers should work. • Personally, hybrid. But will people still want that 40 years from now? Those "not in the room" seem likely to be left

    behind. Q: On safety. • New technologies, especially narrow AI, come with many problems. Over the past decades, 70-80 years, we have studied how to build safety systems and safety processes. No doubt about it. • AGI safety engineering deserves to be studied as its own category. It needs sufficient investment (the stakes are high); irreversible situations are easy to imagine, so this must be done, and safety processes and standards that treat it differently also need to be studied. • During the mega-bubble that ran through the end of 2012-2021, it was hard to hire and hard to differentiate from competitors; capital was cheap, and creating lasting value was difficult. Now capital is tough, and it has become much easier. • What matters: talented people, and people who know their users, whether or not those users are customers. • Raising money is hard, but everything else has become easier. He wants to pursue vertical AI. • AI startups probably succeed by building deep, customer-focused networks to differentiate themselves from other companies. OpenAI will keep getting better year by year, but wants to remain a platform with high value at the foundation layer. It struck me how much his way of speaking sounds exactly like ChatGPT...
  54. OpenAI's CEO - Sam Altman: at the end of the Q&A. Q: What is coming next? • As before,

    "the way you create value is itself the value" (way, way more new value created); I feel this will become a golden age. • "The way you create value is itself the value." "This will become a golden age."
  55. AICU: Vision "Create people who creates" (創る人を作る) - Sharing

    the human potential of collaboration with AI with all people through creativity. An era in which anyone can create magical media…
  56. AICU Inc. - Business Units Augmented Media Technology Business Unit

    [Augmented Media Technology Business] AI Driven User Experience Laboratory [AIDX Lab] Media Communication Business Unit [Media Communication Business]
  57. Founders: Koji Tokuda - CFO in Silicon Valley International Business

    Creator / Serial entrepreneur Specialist of i-CEO, COO, CFO, Venture Capitalist, Technology & Financial Consultant, Focused on Data Science, Artificial Intelligence, AI business. Builder, Hard learner of AI/Machine Learning
  58. AICU: Art Inspired Creative Unit. AICU is an

    AI-AdTech-Communication technology development company that develops generative AI and creative communication. A technology development company that implements AIDX with Creative AI for broadcast, animation, media, web, and other professionals. We develop business in Japan and the U.S. at the same time, with our unique B2B2C platform technology at the core. We give AI creation a content engineering approach, certify the skills of creators, and provide the community with business that sees the light of day.
  59. AICU Inc. the name [Art Intelligence Creative Universe] [AI Creative

    Unit] [AI Creator Union] [AI Character Cookie Utility] … And “I see you”!
  60. AICU Inc. - Business Units Augmented Media Technology Business Unit

    [Augmented Media Technology Business] AI Driven User Experience Laboratory [AIDX Lab] Media Communication Business Unit [Media Communication Business] [Hidden Pixel Technology Inc.] ・Deep Tech R&D for Large/Public Displays ・AI Driven Interaction and Communication ・PoC, Patent, Licensing ・US market to global ・Consulting PoC ・AICuty ・AITuber/VTuber ・Chatbot for Pro ・Blog media ・Visual/Publication media ・Creative Workshop ・Creative Events / Hackathon ・Advocating/DevRel ・Technical Writing and Research
  61. [Kanagawa Prefecture] Metaverse Manga Creative workshop for challenged person Person

    with disabilities can create their own character trading cards and their original manga stories using REALITY and IbisPaint on an iPad. Organized by Kanagawa Prefecture, executed by REALITY xr cloud Inc. [AIDX] Inclusion Workshop: “Communication” - a 10-year-old girl on a stretcher - facial expression collaboration - 8 people max in a 2-hour workshop
  62. Partners & Creators
    - Akihiko SHIRAI (@o_ob): Founder, Director, AIDX, Gamification, VTuber R&D, 30 years in UX Dev, Multi-Language Ja/En/Fr/Zh
    - Masahiro Nishimi (@mah_lab): LLM UX, Word2Vec, Backend
    - Sho Wagatsuma (@flymywife): QA, AWS, ChatBot Dev
    - Kokushin (@kokushing): Front-End, React, TypeScript, Web3D, Node
    - Kotone (@9shokugakari): Art Dev, Web Dev, Live2D, Illustration
    - Koji TOKUDA: Founder, CFO
  63. AICuty: new innovation for websites. Character-IP communication technology for B2B aimed at the global market. Navigation that keeps the structure of existing websites as-is. Multilingual support. Compliance with each country's privacy rules.

    Easy to implement on game or anime promotion sites and personal information sites. Natural ad placement via AI with legal compliance for personal data. Revenue is shared back with AICUty bot creators. [Patent pending]
  64. Media Communication Business Unit Writing as a Production Team: “Stable

    Diffusion Super Illustration Bible” (tentative title) Release in 2024Q1 TBA
  65. [Inclusion Workshop] Metaverse Manga Creative workshop for challenged person Person

    with disabilities can create their own character trading cards and their original manga stories using REALITY and IbisPaint on an iPad. Organized by Kanagawa Prefecture, executed by REALITY xr cloud Inc. Media Communication Business Unit: “Communication” - a 10-year-old girl on a stretcher - facial expression collaboration - 8 people max in a 2-hour workshop
  66. Presented by GREE VR Studio Laboratory (C)GREE, Inc. / (C)

    REALITY, Inc. In collaboration with REALITY xr cloud, Inc.