
Modern software development and fundamentals of AI

Jussi Pohjolainen

November 29, 2024

Transcript

  1. What is Modern Software Development? The process of creating applications

    using contemporary tools, technologies, and methodologies. Faster development cycles. Higher quality and maintainability. Enhanced user experiences. Agile and DevOps adoption. Cloud-first and mobile-first approaches. Continuous integration/continuous deployment (CI/CD).
  2. Lifecycle • Requirement Gathering and Analysis • System Design •

    Implementation (Coding) • Testing • Deployment • Maintenance and Support
  3. Cons • Inflexibility • Late Testing • High Risk and

    Uncertainty • Customer Involvement
  4. Modern Core Principles • Agile Development: • Iterative and incremental development. • Collaboration between teams and stakeholders. • DevOps: Integration of development and operations. • CI: The practice of merging code changes from all developers on a team into a shared repository multiple times a day, running tests on each merge. • CD: Automation of the software delivery process. • Test-Driven Development (TDD): • Writing tests before code. • Ensuring functionality and reducing bugs.
  5. Agile • Agile promotes iterative and incremental development, allowing teams

    to adapt to changes and deliver value more frequently. • The primary measure of progress is the delivery of functional software, with documentation serving to support development rather than dictating it. • Agile encourages continuous involvement and feedback from customers to ensure the product meets their needs. • Agile methodologies are flexible and can quickly adapt to changes in requirements, priorities, or market conditions.
  6. Agile • Daily Stand-up • What did I accomplish yesterday

    • What will I work on today • Any blockers • Sprint planning • What features to implement (priority) • Sprint, 2 – 4 weeks • Sprint Review • Feedback from the customers (Stakeholders)
  7. Best Practices • Write clean, modular code. • Emphasize code

    reviews. • Automate repetitive tasks. • Use meaningful version control commit messages. • Prioritize security at every step. • Keep documentation up-to-date.
  8. Tools • Version Control: Git, GitHub, GitLab. • IDEs: VS

    Code, IntelliJ IDEA, Xcode. • Build Tools: Maven, Gradle. • Containerization: Docker • Cloud Platforms: AWS, Azure, Google Cloud • Testing Frameworks: Selenium, JUnit, Cypress
  9. AI IDEs • Visual Studio Code with GitHub Copilot •

    Cursor • ChatGPT app with connection to VS Code and terminal • JetBrains IDEs with AI Assistant • Replit with Ghostwriter
  10. GitHub Copilot • VS Code GitHub Copilot extension • Sign in with a GitHub account • Usage: • Free Access: Verified students and teachers • Paid Subscription: Other users can start with a one-time 30-day free trial. Post-trial, a subscription is required: $10 USD per month or $100 USD per year. • Context-aware code completions as you type, suggesting entire lines or blocks of code. • Also includes a chatbot
  11. Cursor • AI-driven by default, with a focus on simplifying

    development workflows. • Fork of VS Code: Similar interface and extensions • Limited customization compared to VS Code. • Free with a focus on AI tools, potentially requiring a subscription for advanced AI capabilities. • GPT and Sonnet
  12. Challenges • Rapidly evolving technologies. • Cross-platform compatibility. • Security

    vulnerabilities. • Balancing speed with quality. • Managing distributed teams.
  13. Future Trends • Artificial Intelligence in development. • Ethical considerations

    in AI and automation. • Low-code/no-code platforms.
  14. AI • Revolution for all Fields? • Marketing, Doctors ... • Revolution for Developers? • Or just a fancy way of googling stuff? • Usually we • Overestimate the impact in the short term. • Underestimate it in the long term.
  15. Introduction to Artificial Intelligence • AI is the simulation of

    human intelligence in machines programmed to think, reason, and learn. • Perception, reasoning, learning, and problem-solving • Chatbots • Recommendation systems • Image recognition • Self-driving cars
  16. Types of AI • Narrow AI: • Performs specific tasks (e.g., virtual assistants). • The current state of AI today. • Goals: • General AI: • Hypothetical; can perform any intellectual task a human can do. • Super AI: • Theoretical; surpasses human intelligence. • Alan Turing (1950), John McCarthy and Marvin Minsky (1956)
  17. AI: thinking, reasoning, and learning like humans • Machine Learning: learning from data • Supervised Learning: trains on labeled data. Example: spam filters • Unsupervised Learning: works with input data only, without any target labels or categories. Example: categorizing customers • Reinforcement Learning: an agent learns to make decisions by interacting with an environment. Example: self-driving car • Deep Learning: neural networks with multiple layers to model complex patterns and relationships in data. Example: ChatGPT
  18. Supervised Learning • In supervised learning, the algorithm is trained

    on labeled data, which means that for each input (features), the correct output (target/label) is already provided. • The goal is to learn a mapping from input to output, so that the model can predict the output for new, unseen data. • The model learns from a training set containing input-output pairs (labeled data) • It then uses this knowledge to predict the correct label for new inputs • Predicting a category (e.g., spam detection in emails).
  19. Example Dataset • Email 1: “Congratulations! You have won a

    free iPhone!” • → Label: spam • Email 2: “Meeting tomorrow at 10 AM” • → Label: not spam • Email 3: “Get your free gift card now” • → Label: spam • Email 4: “Let’s catch up over coffee this weekend” • → Label: not spam • Model might learn that certain words (e.g., “free”, “winner”, “congratulations”) are more likely to appear in spam emails. • Demo: Ai6
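
As a concrete illustration of the spam example above, here is a minimal supervised-learning sketch in scikit-learn; the four emails and labels come from the slide, while the model choice (CountVectorizer plus Multinomial Naive Bayes) and the test email are assumptions for demonstration only.

    # Minimal supervised-learning sketch: spam detection with scikit-learn.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = [
        "Congratulations! You have won a free iPhone!",
        "Meeting tomorrow at 10 AM",
        "Get your free gift card now",
        "Let's catch up over coffee this weekend",
    ]
    labels = ["spam", "not spam", "spam", "not spam"]

    # Turn each email into a vector of word counts (the input features).
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(emails)

    # Train on the labeled input-output pairs.
    model = MultinomialNB()
    model.fit(X, labels)

    # Predict the label for a new, unseen email.
    new_email = ["You are a winner, claim your free prize"]
    print(model.predict(vectorizer.transform(new_email)))  # e.g. ['spam']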
  20. Unsupervised Learning • In unsupervised learning, the algorithm is trained

    on data that does not have labeled outputs. The algorithm tries to find hidden patterns or structure in the data • The model works with input data only, without any target labels
  21. Example • The algorithm might group customers into three clusters:

    • Cluster 1: Customers who buy high-end electronics frequently. • Cluster 2: Customers who buy occasional home items. • Cluster 3: Customers who mainly buy grocery products. • The business can then target each cluster with customized marketing strategies.
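
A minimal sketch of this clustering idea with scikit-learn's k-means; the spending features and all numbers below are invented purely for illustration.

    # Unsupervised-learning sketch: group customers by spending behaviour.
    # There are no labels; k-means finds the structure on its own.
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical yearly spend per customer: [electronics, home items, groceries]
    customers = np.array([
        [2500,  100,  300],   # buys high-end electronics frequently
        [2300,  150,  250],
        [ 100,  900,  200],   # buys occasional home items
        [ 120,  850,  300],
        [  50,  100, 2000],   # mainly buys groceries
        [  80,  120, 2200],
    ])

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
    cluster_ids = kmeans.fit_predict(customers)

    # Each customer is assigned to one of the three clusters,
    # which the business can then target with customized marketing.
    print(cluster_ids)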
  22. Example • A company has a large collection of customer

    feedback, but they don’t have predefined categories for each piece of feedback. They want to identify the topics being discussed in this feedback. • Topic 1: “Product Quality” • Topic 2: “Customer Service” • Topic 3: “Shipping and Delivery” • Example: Ai5
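
The feedback-topics example could be sketched with one common topic-modelling technique, Latent Dirichlet Allocation from scikit-learn; the feedback sentences below are invented, and a real dataset would be much larger.

    # Unsupervised topic discovery on unlabeled customer feedback.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    feedback = [
        "The product broke after two days, very poor quality",
        "Build quality is excellent, the product feels solid",
        "Customer service was friendly and solved my issue quickly",
        "I waited 30 minutes on the phone before anyone answered",
        "Shipping was fast and the delivery arrived on time",
        "The package was delayed and delivery took two weeks",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(feedback)

    # Ask for three topics (e.g. product quality, customer service, shipping).
    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    lda.fit(X)

    # Print the most important words per discovered topic.
    words = vectorizer.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top = [words[j] for j in topic.argsort()[-4:]]
        print(f"Topic {i + 1}: {top}")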
  23. Reinforcement Learning • Reinforcement learning involves an agent that learns

    to make decisions by interacting with an environment. • The agent takes actions and receives feedback in the form of rewards or penalties. • It uses this feedback to improve its future actions and decisions, iteratively learning the best strategy.
  24. Key Concepts • Agent: The entity making decisions. • Environment:

    The external system with which the agent interacts. • State: A snapshot of the environment at any given time. • Action: The decision or move made by the agent. • Reward: The feedback received after an action is taken.
  25. Example • Agent (the entity making decisions): A robot trying to navigate a maze. • Action (the decision or move made by the agent): Move forward, turn left, turn right. • Reward (the feedback received after an action is taken): Positive reward for reaching the goal, negative reward for hitting a wall. A toy Q-learning sketch of this example follows below.
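
A toy Q-learning sketch of the maze example, under simplifying assumptions: a one-dimensional corridor of five cells with the goal at the right end and a wall at the left end; all constants are illustrative.

    # Tabular Q-learning for a tiny corridor "maze":
    # states 0..4, the agent starts in the middle, state 4 is the goal (+1 reward)
    # and bumping into the left wall at state 0 gives a penalty (-1 reward).
    import random

    n_states, actions = 5, [-1, +1]            # actions: move left, move right
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action index]
    alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

    for episode in range(500):
        state = 2                              # start in the middle of the corridor
        while state != 4:
            # Epsilon-greedy action selection.
            a = random.randrange(2) if random.random() < epsilon else Q[state].index(max(Q[state]))
            nxt = max(0, min(4, state + actions[a]))
            if nxt == 4:
                reward = 1.0                   # reached the goal
            elif nxt == 0:
                reward = -1.0                  # hit the left wall
            else:
                reward = 0.0
            # Q-learning update: move the estimate toward reward + discounted future value.
            Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
            state = nxt

    # After training, the policy for the non-goal states should prefer moving right.
    print([("left", "right")[row.index(max(row))] for row in Q[:4]])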
  26. Example • Agent: The self-driving car's AI system. • Environment:

    The road, traffic, pedestrians, and rules of driving. • Actions: The car can accelerate, brake, turn, or change lanes. • Reward System: • Positive rewards: Staying in the lane, avoiding collisions, and reaching the destination efficiently. • Negative rewards: Collisions, sudden braking, veering out of the lane, or violating traffic rules.
  27. Summary • Supervised Learning: trained on labeled data, learns a mapping from inputs to outputs. Example algorithms: Linear Regression, Logistic Regression, Decision Trees. Applications: classification (spam detection), regression (predicting house prices). • Unsupervised Learning: trained on unlabeled data, aims to find hidden patterns in the data. Example algorithms: k-Means, PCA, Hierarchical Clustering, Autoencoders. Applications: clustering (customer segmentation), anomaly detection (fraud). • Reinforcement Learning: an agent learns by interacting with an environment, receiving rewards/penalties. Example algorithms: Q-Learning, DQN, Policy Gradient Methods. Applications: game playing (AlphaGo), robotics (robot navigation), autonomous vehicles.
  28. Deep Learning • Deep Learning (DL) and Machine Learning (ML)

    • Both are subsets of Artificial Intelligence (AI), but they differ in their methods, complexity, and capabilities. • DL focuses on using deep neural networks to automatically learn features from raw data (e.g., images, audio, text) and make predictions or classifications
  29. How Does It Work? • Data: DL models typically work with

    large amounts of data (e.g., millions of labeled images for image recognition). • Neural Networks: DL uses neural networks with many layers of neurons (hence “deep”). Each layer learns progressively more complex features. • Training: The model learns by adjusting weights using techniques like backpropagation and gradient descent. The deeper the network, the more complex features it can learn. • Prediction: After training, the model can be used for tasks like image classification, speech recognition, natural language processing (NLP), and game playing.
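
A minimal PyTorch sketch of the training step described above (forward pass, loss, backpropagation, gradient descent); the data is random noise, purely to show the mechanics.

    # One tiny training loop showing backpropagation and gradient descent in PyTorch.
    import torch
    import torch.nn as nn

    # Fake data: 64 samples with 10 input features and a binary label.
    X = torch.randn(64, 10)
    y = torch.randint(0, 2, (64,))

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # gradient descent

    for epoch in range(20):
        optimizer.zero_grad()        # reset gradients from the previous step
        logits = model(X)            # forward pass
        loss = loss_fn(logits, y)    # how wrong are we?
        loss.backward()              # backpropagation: compute gradients
        optimizer.step()             # adjust the weights
        print(f"epoch {epoch}: loss = {loss.item():.3f}")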
  30. Deep Learning in AI (Simplified) • Think of deep learning

    as teaching a computer to solve problems by mimicking how our brains work. • It's like training a child to recognize something, for example, identifying a cat.
  31. What You Do for a Child • Show the child

    lots of pictures of cats (examples). • Tell them, "This is a cat" for each one. • Over time, they learn to identify cats by recognizing patterns like fur, whiskers, and pointy ears.
  32. What a Computer Does • Instead of a brain, the

    computer uses a structure called a neural network. • You feed it lots of examples (data), and it learns patterns by adjusting its "knowledge" (numbers inside the network). • After training, it can tell if a new picture is a cat or not.
  33. What is a Deep Neural Network? • A neural network

    is like a giant Lego tower, where each layer of Lego bricks represents a step in figuring something out. • Input Layer (The Start): • Imagine you're trying to recognize a handwritten number (like "5"). • The first layer gets the raw information (pixels of the image). • Hidden Layers (The Brains): • Layer 1: Looks for basic shapes (lines, edges). • Layer 2: Combines shapes into letters or digits. • Layer 3: Decides, "This looks like a 5." • The deeper the network (more layers), the more complex patterns it can recognize. • Output Layer (The Answer): • This layer gives the final result, e.g., "This is a 5."
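
Assuming 28×28-pixel grayscale digit images (as in MNIST), the layered structure described above might be sketched in PyTorch like this; the layer sizes are arbitrary, and the comments mirror the slide's intuition rather than what an untrained network actually detects.

    # A deep neural network for recognizing handwritten digits (e.g. a "5"):
    # input layer = raw pixels, hidden layers = increasingly abstract patterns,
    # output layer = one score per digit 0-9.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Flatten(),            # input layer: 28x28 pixels -> 784 values
        nn.Linear(784, 128),     # hidden layer 1: basic shapes (lines, edges)
        nn.ReLU(),
        nn.Linear(128, 64),      # hidden layer 2: combinations of shapes
        nn.ReLU(),
        nn.Linear(64, 10),       # output layer: scores for digits 0-9
    )

    fake_image = torch.rand(1, 28, 28)   # stand-in for a real digit image
    scores = model(fake_image)
    print("Predicted digit:", scores.argmax(dim=1).item())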
  34. Analogy: Making a Sandwich • Input Layer: You show the

    ingredients (bread, cheese, etc.). • Hidden Layers: • Layer 1: Understands what each ingredient is. • Layer 2: Learns the steps (bread first, cheese second). • Layer 3: Figures out what a complete sandwich looks like. • Output Layer: The robot makes a perfect sandwich.
  35. Applications • Image Recognition (e.g., identifying objects in photos, facial

    recognition). • Speech Recognition (e.g., voice assistants like Siri or Alexa). • Natural Language Processing (e.g., chatbots, translation services, text generation). • Autonomous Vehicles (e.g., self-driving cars using image recognition for navigation).
  36. Basic ML vs DL • Basic ML • You have

    a smaller dataset (typically <10,000 examples). • You don’t have access to large computational resources (GPUs/TPUs). • You require a model that is more interpretable or explainable. • DL • You have large amounts of data (millions of examples). • Your task involves complex data types like images, audio, or text. • You have access to significant computational power (e.g., GPUs or TPUs).
  37. ChatGPT • ChatGPT is powered by GPT (Generative Pre-trained Transformer),

    which is a type of deep learning model that processes and generates human-like text. Let’s break it down: • Pre-training: • The model is trained on massive amounts of text data from the internet. • It learns patterns, relationships between words, grammar, context, and even nuanced meanings. • This stage teaches the model general language understanding. • Fine-tuning • After pre-training, it is fine-tuned on more specific datasets with human feedback to make its responses helpful, safe, and aligned with user needs. • Transformer Architecture: • ChatGPT uses the Transformer neural network, which excels at handling sequential data (like text).
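
To make "sequential data" concrete, here is a small sketch using the Hugging Face transformers library (introduced later in this deck) that shows how a GPT-style model first turns text into a sequence of tokens; the example sentence is arbitrary.

    # How a Transformer-based model sees text: as a sequence of token IDs.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    text = "Transformers handle sequential data like text."
    tokens = tokenizer.tokenize(text)   # subword pieces
    ids = tokenizer.encode(text)        # the numeric IDs the model actually reads

    print(tokens)
    print(ids)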
  38. Programming Language: Python • Python: It’s widely adopted in the

    AI community, has excellent libraries for data science, and integrates well with frameworks for machine learning and AI tasks. • Other languages: JavaScript, Java, and C++ can also be used for AI, but Python remains the primary choice due to its extensive ecosystem.
  39. APIs • OpenAI's GPT-4 API: • Generates text, answers questions,

    and performs other language-based tasks. • Google Cloud AI/ML APIs: • Provides prebuilt AI services like speech recognition and image classification. • Microsoft Azure AI APIs: • Offers tools for vision, speech, and language processing.
  40. Libraries • scikit-learn • Category: Machine Learning (ML) • Primary use: classical ML algorithms (e.g., regression, classification, clustering) for supervised and unsupervised learning • Strengths: easy to use, excellent for beginners • Use cases: predicting house prices, customer segmentation • PyTorch • Category: Deep Learning (DL) • Primary use: general-purpose deep learning framework for building and training custom machine learning models; you train your own model and define everything • Strengths: research-friendly, flexible debugging • Use cases: research in NLP, computer vision, generative models • Hugging Face • Category: Deep Learning (DL) • Primary use: pre-trained transformer models for NLP tasks; simplifies PyTorch usage • Strengths: plug-and-play models such as BERT, GPT, RoBERTa, and T5 • Use cases: text summarization, chatbots, translation
  41. PyTorch vs Hugging Face • Model implementation: PyTorch requires coding the entire model architecture manually; Hugging Face provides pre-trained models with a one-line command. • Pre-trained models: few are directly available for PyTorch, and downloading and integrating them requires effort; Hugging Face offers thousands of pre-trained models via the Model Hub. • Tokenization: with PyTorch you must handle tokenization and preprocessing manually; Hugging Face tokenization is built in and integrates seamlessly with the models. • Inference (prediction): PyTorch requires defining input/output logic and loading weights; Hugging Face pre-built pipelines make inference easy and intuitive. • Training & fine-tuning: PyTorch requires manually written training loops and optimization code; Hugging Face offers tools like the Trainer API for streamlined training. • Ease of use: PyTorch suits custom solutions but requires more setup; Hugging Face is ideal for quick prototyping and pre-built solutions.
  42. Hugging Face Pretrained models: NLP • BERT (Bidirectional Encoder Representations

    from Transformers): • Trained on a large corpus of text (e.g., Wikipedia). • Used for tasks like text classification, question answering, etc. • GPT (Generative Pre-trained Transformer): • Trained on large text datasets for language modeling and text generation. • T5 (Text-to-Text Transfer Transformer): • Converts every NLP task into a text-to-text format (e.g., "Translate English to French: Hello" → "Bonjour").
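
The T5 translation example above ("Translate English to French: Hello" → "Bonjour") can be tried through the Hugging Face pipeline API; the choice of the t5-small checkpoint is only an assumption to keep the download small.

    # Text-to-text translation with a pre-trained T5 model via Hugging Face.
    from transformers import pipeline

    translator = pipeline("translation_en_to_fr", model="t5-small")
    result = translator("Hello")
    print(result[0]["translation_text"])  # expected output along the lines of "Bonjour"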
  43. Hugging Face • Hugging Face is an excellent starting point

    for building a ChatGPT-like app • Different models also for vision and audio • Requires PyTorch • Hugging Face is an open-source library for Natural Language Processing (NLP) • Provides pre-trained transformer models (e.g., GPT, BERT) • Supports NLP tasks: text generation, translation, question answering, etc.
  44. Basic Usage • Install the library: pip install transformers. •

    Import a pre-trained model: from transformers import pipeline. • Use a pipeline for tasks like text generation or sentiment analysis: • generator = pipeline('text-generation', model='gpt2') • Fine-tune models on your dataset if needed.
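
Besides the text-generation chatbot on the next slide, the same pipeline API covers the sentiment-analysis task mentioned above; with no model argument, transformers falls back to a default sentiment model, which is enough for a quick test.

    # Sentiment analysis with a Hugging Face pipeline.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    print(classifier("I love how easy this library is to use!"))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]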
  45. Example

    from transformers import pipeline

    # Load a pre-trained text generation model
    chatbot = pipeline("text-generation", model="gpt2")

    # Start a conversation
    print("Chatbot: Hello! I'm here to chat with you. Type 'exit' to end the conversation.")

    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            print("Chatbot: Goodbye! Have a great day!")
            break
        # Generate the chatbot's response
        response = chatbot(user_input, max_length=50, num_return_sequences=1)
        print(f"Chatbot: {response[0]['generated_text']}")
  46. GPT4All • GPT4All is an open-source ecosystem that enables users

    to run large language models (LLMs) locally on their personal devices • Local Execution: Operates entirely on your hardware, ensuring data privacy and eliminating dependency on external servers • User-Friendly Interface: Provides an intuitive desktop application for seamless interaction with LLMs
  47. GPT4All vs Hugging Face • Core purpose: GPT4All focuses on enabling local use of LLMs; Hugging Face provides a platform for pre-trained models and fine-tuning pipelines. • Model hub: GPT4All is limited to specific GPT-based LLMs; Hugging Face offers a vast collection of models (transformers, vision, audio, etc.). • Framework dependency: GPT4All is independent and runs models directly on CPUs; Hugging Face requires PyTorch, TensorFlow, or similar frameworks. • Hardware requirements: GPT4All is optimized for consumer hardware (e.g., CPUs); Hugging Face models are often optimized for GPUs and cloud infrastructure.
  48. GPT4All • Run everything locally: This ensures data privacy and

    avoids dependency on cloud-based APIs. • Easy Python SDK: Simplifies development and integration into custom projects. • Good macOS app: A user-friendly GUI is essential for non-developer interactions. • ChatGPT-like UI (GUI and CLI): Provides both visual and command-line interfaces for flexibility. • Local document training: Ability to train or fine-tune models on specific local data/documents.
  49. Usage • Use GPT4All if: • You need a fully

    offline, privacy-focused solution. • You want lightweight models that work efficiently on CPUs without GPUs. • Use Hugging Face if: • You need access to a wider variety of models (e.g., vision, audio, multilingual NLP). • You want a framework for fine-tuning, experimentation, and cloud deployment.
  50. Avoid GPT4All • You need cutting-edge model capabilities. • Your

    hardware is insufficient. • Fast inference and high scalability are critical. • Training from scratch is a key requirement. • Long-term updates or regulatory compliance is a concern. • For these scenarios, cloud-based or larger frameworks like OpenAI, Hugging Face, or LangChain may be better suited.
  51. GPT4All

    from gpt4all import GPT4All

    # Download or load the GPT4All model
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

    print("Chatbot: Hello! I'm here to chat with you. Type 'exit' to end the conversation.")

    with model.chat_session():  # Start a chat session
        while True:
            user_input = input("You: ")
            if user_input.lower() == "exit":
                print("Chatbot: Goodbye! Have a great day!")
                break
            # Generate a response from the model
            bot_response = model.generate(user_input, max_tokens=150)
            print(f"Chatbot: {bot_response}")
  52. Challenges • Resource-intensive: Requires GPUs for efficient inference. • Limited

    to pre-trained models unless fine-tuned. • Some models may have licensing restrictions.
  53. Sub-dependencies • These libraries depend on • torch • torchvision • If these are not installed automatically, install them manually: • pip install torch torchvision transformers sentence-transformers langchain llama-cpp-python flask
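
A quick way to confirm the installation is to import the main libraries and print their versions (a minimal check, assuming the pip command above has been run).

    # Quick sanity check that the main dependencies are importable.
    import torch
    import torchvision
    import transformers

    print("torch:", torch.__version__)
    print("torchvision:", torchvision.__version__)
    print("transformers:", transformers.__version__)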