A (short) History of AI

Talk given at FOSSAsia 2024 in Hanoi, Vietnam on 9 April 2024

Harish Pillay

Transcript

  1. A (really) Short History of Artificial Intelligence: From Early Dreams

    to Open Source Collaboration. Harish Pillay, @harishpillay:matrix.org, [email protected], AI Verify Foundation. Slides at: https://tinyurl.com/8czuu59w
  2. 1950 • Alan Turing proposes the Turing Test (1950) a.

    In the 1950 issue of Mind: A Quarterly Review of Psychology and Philosophy, Alan Turing asks “Can machines think?” b. He reframes that question as “The Imitation Game”. [0]
  3. 1956 • The Dartmouth Workshop (1956) a. Led by

    Prof John McCarthy, who invited Marvin Minsky, Claude Shannon, and Nathaniel Rochester; together they coined the term "artificial intelligence" b. Held over an eight-week period; between 11 and 47 people participated [1] • Early AI programs: the Logic Theorist and a checkers-playing program
  4. 1960s • The “perceptron”, an algorithm invented in 1957 at

    the Cornell Aeronautical Laboratory by Frank Rosenblatt and funded by the United States Office of Naval Research, was first implemented in software on an IBM 704. • It was subsequently implemented in custom-built hardware known as the “Mark 1 Perceptron”. • It was one of the first artificial neural networks to be produced (a minimal sketch of the learning rule follows this slide). • The first AI Winter set in as these systems showed severe limitations and research funding began to dwindle.
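
    The learning rule itself fits in a few lines. Below is a minimal, illustrative Python sketch (not the original IBM 704 program or the Mark 1 hardware; the AND-gate data and all names are invented for this example): each misclassified example nudges the weights toward the correct answer.

        import numpy as np

        def train_perceptron(X, y, epochs=10, lr=0.1):
            """Rosenblatt's rule: nudge weights toward each misclassified example."""
            w = np.zeros(X.shape[1])  # one weight per input feature
            b = 0.0                   # bias term
            for _ in range(epochs):
                for xi, target in zip(X, y):
                    pred = 1 if np.dot(w, xi) + b > 0 else 0  # step activation
                    error = target - pred                     # -1, 0, or +1
                    w += lr * error * xi
                    b += lr * error
            return w, b

        # Toy demonstration: learn the linearly separable AND function.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([0, 0, 0, 1])
        w, b = train_perceptron(X, y)
        print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]

    A single perceptron can only separate linearly separable classes, which is exactly the kind of severe limitation that fed the funding decline described above.
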
  5. 1970s - 1980s • Research focus moves to Symbolic AI

    and Knowledge Representation • Expert systems - solving problems in specific domains • Limitations of knowledge engineering and reasoning hampered adoption
  6. 1980s - 1990s • Statistical learning algorithms gain prominence •

    Support Vector Machines (SVMs - descendants of the perceptron) and decision trees gain popularity • Increased focus on data-driven approaches • Groundwork laid for future advances
  7. 2000s • Increased computational power and availability of large datasets

    • Revival of neural networks - deep learning architectures • Breakthroughs in image recognition, speech recognition, and natural language processing
  8. 2010s • Big Data revolution - explosion of data volume

    and variety • Cloud computing platforms - scalable and accessible resources • Democratization of AI - increased accessibility for businesses and researchers
  9. 2010s - current • Specialization of deep learning architectures for

    specific tasks • Deep learning applications in various domains: self-driving cars, healthcare, finance • Ethical considerations of AI - bias, fairness, and transparency, along with 8 other metrics
  10. 2010s - current - Open Source & Collaboration • Rise

    of open source AI frameworks and tools (TensorFlow, PyTorch) • Collaborative research and development efforts • Fostering innovation and accelerating progress
  11. 2020s - current • Foundational Models - Large/Small Language Models

    capable of generating text, translating languages, and writing different kinds of creative content • Generative models for creating realistic images and other types of data • Multimodal models that can process and understand different types of data (text, images, audio)
  12. 2020s - current • Mixture of Experts (MoE) - an

    architecture that splits a large model into multiple expert sub-models and routes each input to only a few of them (sketched after this slide) • Improves scalability and reduces training costs • Potential for wider adoption of complex AI models • OpenMOE [2] - an open source implementation to drive innovation
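
    To make the routing idea concrete, here is a minimal, illustrative Python sketch (the tiny linear "experts", the dimensions, and the random weights are invented for this example; real MoE layers, such as OpenMOE's, use full feed-forward expert blocks inside a transformer): a gating network scores all experts, only the top-k actually run, and their outputs are mixed.

        import numpy as np

        rng = np.random.default_rng(0)
        DIM, N_EXPERTS, TOP_K = 8, 4, 2

        # Each "expert" is a small linear map here, standing in for a full FFN block.
        experts = [rng.normal(size=(DIM, DIM)) for _ in range(N_EXPERTS)]
        gate = rng.normal(size=(DIM, N_EXPERTS))  # router weights

        def moe_forward(x):
            """Score experts, run only the top-k, and mix their outputs."""
            logits = x @ gate                  # one score per expert
            top = np.argsort(logits)[-TOP_K:]  # indices of the k best-scoring experts
            weights = np.exp(logits[top])
            weights /= weights.sum()           # softmax over the chosen experts only
            # Only k of the n experts execute, which is where the compute saving comes from.
            return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

        print(moe_forward(rng.normal(size=DIM)).shape)  # -> (8,)
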
  13. 2020s - current • RAG - Retrieval Augmented Generation •

    Pairing open source foundational models (open source as per the Open Source Initiative’s definitions [3]) with retrieval over a corpus of private data, so that answers to subsequent enquiries are grounded in that data rather than requiring retraining (a minimal sketch follows this slide)
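
    The pattern is compact enough to sketch. Below is a minimal, illustrative Python version (the documents, the word-overlap retriever, and the prompt template are all invented for this example; production systems use vector embeddings and a real language model): relevant private documents are retrieved at query time and prepended to the prompt, rather than the model being retrained on the corpus.

        import re

        # Stand-in for a private corpus; in practice these would be chunked documents.
        private_docs = [
            "Our refund policy allows returns within 30 days.",
            "The office is closed on public holidays.",
            "Support tickets are answered within one business day.",
        ]

        def tokens(text):
            """Lower-case word set, ignoring punctuation (toy tokenizer)."""
            return set(re.findall(r"[a-z0-9]+", text.lower()))

        def retrieve(query, docs, k=1):
            """Rank documents by how many words they share with the query."""
            q = tokens(query)
            return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

        def build_prompt(query):
            """Ground the model's answer in retrieved context instead of retraining."""
            context = "\n".join(retrieve(query, private_docs))
            return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

        # The assembled prompt would then be sent to the foundational model.
        print(build_prompt("How many days do I have for a refund?"))
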
  14. 2020s - current • Testing of AI solutions for fairness,

    absence of bias, etc. via open source testing tools • AI Verify’s toolkit [4] is a community-driven global effort of the AI Verify Foundation to create a commonly agreed test framework • Guiding principles of the AVF are on AIVerifyFoundation.sg
  15. The Future from Today • Sovereign AI - nation states

    taking charge • Personal AI - kwaai.ai [5] • AI Governance Framework v2 - from Singapore’s Personal Data Protection Commission [6]
  16. References

    [0] https://academic.oup.com/mind/article/LIX/236/433/986238
    [1] https://spectrum.ieee.org/dartmouth-ai-workshop
    [2] https://arxiv.org/abs/2402.01739
    [3] https://opensource.org/deepdive
    [4] https://github.com/IMDA-BTG/aiverify
    [5] https://www.kwaai.ai
    [6] https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf