Generic AI Talk

This is a set of slides about the history of AI, the ARPANET, FOSS, and where we are today.

Harish Pillay

April 20, 2024
Transcript

  2. 1950 • Alan Turing proposes the Turing Test (1950)
     a. In the 1950 issue of “MIND - A QUARTERLY REVIEW OF PSYCHOLOGY AND PHILOSOPHY”, Alan Turing asks “Can Machines Think?”
     b. He rephrases that question as “The Imitation Game”. [0]
  5. 1956 • The Dartmouth Workshop (1956), held at Dartmouth College
     a. It was led by Prof. John McCarthy, who invited Marvin Minsky, Claude Shannon, and Nathaniel Rochester
     b. By the end of the workshop, which lasted 8 weeks, they coined the term "artificial intelligence" [1]
     • Early AI programs: the Logic Theorist and a checkers-playing program
  9. 1960s • The “perceptron”, an algorithm invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the United States Office of Naval Research, was first implemented in software on an IBM 704.
     • It was subsequently implemented in custom-built hardware known as the “Mark 1 Perceptron”.
     • It was one of the first artificial neural networks to be produced.
     • The first AI Winter set in as these systems showed severe limitations and research funding began to dwindle.
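The perceptron's learning rule is simple enough to sketch in a few lines. Below is a minimal pure-Python version, trained on the logical AND function as a made-up example; the data, function names, and hyperparameters are illustrative, not from the slides:

```python
# A minimal sketch of Rosenblatt's perceptron learning rule in pure Python.
# The dataset (logical AND) and all names are illustrative.

def train_perceptron(samples, labels, epochs=10, lr=1.0):
    """Learn weights w and bias b so sign(w.x + b) matches each label (+1/-1)."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # misclassified: nudge the weights toward the example
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Linearly separable toy data: logical AND, encoded as +1/-1 labels.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
```

Because AND is linearly separable, the rule converges to a perfect separator within a few epochs; on data that is not linearly separable (such as XOR) it never converges, which is exactly the limitation that fed the first AI Winter.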
  12. 1970s - 1980s • Research focus moves to Symbolic AI and Knowledge Representation
      • Expert systems - solving problems in specific domains
      • Limitations of knowledge engineering and reasoning hampered adoption
      • The 2nd AI Winter follows
  15. 1980s - 1990s • Statistical learning algorithms gain prominence
      • Support Vector Machines (SVMs - linear classifiers related to the perceptron) and decision trees gain popularity
      • Increased focus on data-driven approaches
      • Groundwork laid for future advances
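As a small illustration of the data-driven turn, a one-level decision tree (a "decision stump") can be fit by exhaustively searching split thresholds. This is a sketch under invented data; the feature ("message length") and labels are made up for illustration:

```python
# Fitting a one-level decision tree ("decision stump") by exhaustive
# threshold search -- a toy example of a data-driven learner.
# The feature and labels are invented for illustration.

def fit_stump(values, labels):
    """Find the threshold on a single feature that minimises training
    errors when predicting 1 for values above it and 0 otherwise."""
    best = (None, len(labels) + 1)
    for t in sorted(set(values)):
        errors = sum((v > t) != bool(y) for v, y in zip(values, labels))
        if errors < best[1]:
            best = (t, errors)
    return best  # (threshold, training errors)

# e.g. classify "spam" (1) by a made-up message-length feature
lengths = [3, 5, 8, 20, 25, 30]
labels  = [0, 0, 0, 1, 1, 1]
threshold, errors = fit_stump(lengths, labels)
```

Real decision-tree learners recurse on such splits and score them with measures like Gini impurity, but the core idea - let the data choose the rule - is the same.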
  18. 1990s • Linux - an open source operating system (runs this laptop)
      • Apache - web servers
      • Python, Perl, PHP
      • Dot-com boom
  20. 2000s • Increased computational power and availability of large datasets
      • Revival of neural networks - deep learning architectures
      • Breakthroughs in image recognition, speech recognition, and natural language processing
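One way to see why the revival of multi-layer networks mattered: a single perceptron cannot compute XOR, but a two-layer network can. The sketch below hand-wires a standard textbook construction (the weights are chosen by hand, not learned, and are not from the slides):

```python
# A hand-wired two-layer network computing XOR -- something no single
# perceptron can do. Weights are chosen by hand (a standard textbook
# construction), not learned.

def step(z):
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    # Hidden layer: h1 fires for "at least one input on",
    # h2 fires only for "both inputs on".
    h1 = step(x1 + x2 - 0.5)
    h2 = step(x1 + x2 - 1.5)
    # Output: "at least one, but not both".
    return step(h1 - h2 - 0.5)
```

Deep learning's contribution was not this construction but the ability to *learn* such intermediate features automatically, many layers deep, once compute and data allowed it.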
  22. 2010s • Big Data revolution - explosion of data volume and variety
      • Cloud computing platforms - scalable and accessible resources
      • Democratization of AI - increased accessibility for businesses and researchers
  23. 2010s - current • Specialization of deep learning architectures for specific tasks
      • Deep learning applications in various domains: self-driving cars, healthcare, finance
      • Ethical considerations of AI - bias, fairness, transparency, and 8 other metrics
  24. 2010s - current - Open Source & Collaboration
      • Rise of open source AI frameworks and tools (TensorFlow, PyTorch)
      • Collaborative research and development efforts
      • Fostering innovation and accelerating progress
  25. 2020s - current • Foundational Models
      • Large/Small Language Models capable of generating text, translating languages, and writing different kinds of creative content
      • Generative models for creating realistic images and other types of data
      • Multimodal models that can process and understand different types of data (text, images, audio)
  26. 2020s - current • Mixture Of Experts (MOE) - an architecture that composes a large model from multiple smaller expert networks, routing each input to only a few of them
      • Improves scalability and reduces training costs
      • Potential for wider adoption of complex AI models
      • OpenMOE [2] - an open source implementation to drive innovation
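The routing idea behind MoE can be sketched in miniature: a gate scores every expert, but only the top-k actually run, so compute stays roughly constant as the number of experts grows. Everything below - the experts, gate weights, and input - is invented for illustration and is far simpler than real MoE layers such as OpenMOE's:

```python
import math

# A toy sketch of Mixture-of-Experts routing: score every expert with
# a gate, run only the top-k, return the softmax-weighted sum of their
# outputs. All experts, gate weights, and inputs here are made up.

def moe_forward(x, experts, gate_weights, k=2):
    scores = [sum(w * xi for w, xi in zip(gw, x)) for gw in gate_weights]
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    weights = [math.exp(scores[i]) for i in top]  # softmax over the top-k only
    z = sum(weights)
    return sum((wi / z) * experts[i](x) for wi, i in zip(weights, top))

# Made-up "experts" and gate weights for a 2-feature input.
experts = [
    lambda x: 2 * x[0],     # specialised to feature 0
    lambda x: 3 * x[1],     # specialised to feature 1
    lambda x: x[0] + x[1],  # generalist
]
gate = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
out = moe_forward((1.0, 0.0), experts, gate, k=2)
```

Only two of the three experts ever execute for a given input, which is the source of the scalability and training-cost claims on the slide.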
  27. 2020s - current • RAG - Retrieval Augmented Generation
      • Pairing open source (as per the Open Source Initiative’s definition [3]) foundational models with retrieval over a corpus of private data, so subsequent enquiries are answered from that data
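The retrieval half of RAG can be sketched with plain word overlap standing in for the vector-embedding search real systems use. The corpus, query, and prompt template below are invented for illustration:

```python
# A toy sketch of RAG's retrieval step: score private documents against
# a query by word overlap, then build the augmented prompt a foundational
# model would receive. Real systems use vector embeddings; the corpus
# and prompt template here are invented.

def retrieve(query, docs, k=1):
    """Return the k docs sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
]
prompt = build_prompt("refund policy for returns", corpus)
```

The key property is that the private data never goes into model weights: it is fetched at query time and placed in the prompt, which is why RAG works with an off-the-shelf open source model.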
  28. 2020s - current • Testing of AI solutions for fairness, bias, etc., via open source testing tools
      • AI Verify’s toolkit [4] is a community-driven global effort of the AI Verify Foundation to create a commonly agreed test framework
      • The guiding principles of the AVF are on AIVerifyFoundation.sg
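One metric such fairness toolkits commonly report is the demographic parity difference: the gap in positive-prediction rates between two groups. The sketch below is illustrative only; the data is invented and this is not AI Verify's actual API or metric set:

```python
# A toy fairness check: demographic parity difference, i.e. the gap in
# positive-prediction rates between two groups. Data is invented; this
# does not reflect AI Verify's actual API.

def demographic_parity_diff(predictions, groups):
    """|P(pred=1 | group a) - P(pred=1 | group b)| for two group labels."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    a, b = sorted(rates)  # deterministic order over the two group labels
    return abs(rates[a] - rates[b])

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_diff(preds, groups)  # 0.75 vs 0.25 positive rate
```

A gap near 0 suggests the model treats the groups similarly on this axis; real toolkits report many such metrics side by side because no single number captures fairness.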
  29. The Future from Today
      • Sovereign AI - nation states taking charge
      • Personal AI - kwaai.ai [5]
      • AI Governance Frameworks v2 - from Singapore’s Personal Data Protection Commission [6]
  30. References
      [0] https://academic.oup.com/mind/article/LIX/236/433/986238
      [1] https://spectrum.ieee.org/dartmouth-ai-workshop
      [2] https://arxiv.org/abs/2402.01739
      [3] https://opensource.org/deepdive
      [4] https://github.com/IMDA-BTG/aiverify
      [5] https://www.kwaai.ai
      [6] https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf