using contemporary tools, technologies, and methodologies. • Faster development cycles. • Higher quality and maintainability. • Enhanced user experiences. • Agile and DevOps adoption. • Cloud-first and mobile-first approaches. • Continuous integration/continuous deployment (CI/CD).
development. • Collaboration between teams and stakeholders. • DevOps: Integration of development and operations. • CI: The practice of merging code changes from all developers on a team into a shared repository multiple times a day, with automated tests run on each merge. • CD: Automation of the software delivery process. • Test-Driven Development (TDD): • Writing tests before code (see the sketch below). • Ensuring functionality and reducing bugs.
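To make TDD concrete, here is a minimal sketch in Python using pytest conventions; the Cart class and its test are invented for illustration. In TDD order, the test is written first and fails until the implementation below it is added.

def test_add_item_increases_total():
    cart = Cart()
    cart.add_item("book", price=10.0)
    assert cart.total() == 10.0

# The minimal implementation that makes the test pass
class Cart:
    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)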
Agile enables teams to adapt to changes and deliver value more frequently. • The primary measure of progress is the delivery of functional software, with documentation serving to support development rather than dictating it. • Agile encourages continuous involvement and feedback from customers to ensure the product meets their needs. • Agile methodologies are flexible and can quickly adapt to changes in requirements, priorities, or market conditions.
• What will I work on today? • Any blockers? • Sprint planning: • What features to implement (by priority). • Sprint: 2–4 weeks. • Sprint Review: • Feedback from the customers (stakeholders).
reviews. • Automate repetitive tasks. • Use meaningful version control commit messages. • Prioritize security at every step. • Keep documentation up-to-date.
in with a GitHub account. • Use: • Free access: verified students and teachers. • Paid subscription: other users can start with a one-time 30-day free trial. Post-trial, a subscription is required: $10 USD per month or $100 USD per year. • Context-aware code completions as you type, suggesting entire lines or blocks of code. • Also includes a chat assistant.
development workflows. • Fork of VS Code: Similar interface and extensions. • Limited customization compared to VS Code. • Free with a focus on AI tools, potentially requiring a subscription for advanced AI capabilities. • Uses models such as GPT and Claude Sonnet.
• Revolution for Developers? • Or just a fancy way of googling stuff? • Usually we: • Overestimate the impact in the short term. • Underestimate it in the long term.
(e.g., virtual assistants). • This is the current state of AI today. • Goals: • General AI: • Hypothetical; can perform any intellectual task a human can. • Super AI: • Theoretical; surpasses human intelligence. • Pioneers: Alan Turing (1950), John McCarthy and Marvin Minsky (1956).
from Data. • Supervised Learning: Trains on labeled data. Example: Spam filters. • Unsupervised Learning: Works with input data only, without any target labels or categories. Example: Categorizing customers. • Reinforcement Learning: An agent learns to make decisions by interacting with an environment. Example: Self-driving car. • Deep Learning: Neural networks with multiple layers to model complex patterns and relationships in data. Example: ChatGPT.
on labeled data, which means that for each input (features), the correct output (target/label) is already provided. • The goal is to learn a mapping from input to output, so that the model can predict the output for new, unseen data. • The model learns from a training set containing input-output pairs (labeled data). • It then uses this knowledge to predict the correct label for new inputs. • Predicting a category (e.g., spam detection in emails).
free iPhone!” • → Label: spam • Email 2: “Meeting tomorrow at 10 AM” • → Label: not spam • Email 3: “Get your free gift card now” • → Label: spam • Email 4: “Let’s catch up over coffee this weekend” • → Label: not spam • Model might learn that certain words (e.g., “free”, “winner”, “congratulations”) are more likely to appear in spam emails. • Demo: Ai6
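As a hands-on illustration of this idea (separate from the Ai6 demo), here is a minimal spam classifier sketch using scikit-learn's CountVectorizer and Multinomial Naive Bayes, assuming scikit-learn is installed; the four emails above serve as the labeled training set.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Congratulations, you won a free iPhone!",
    "Meeting tomorrow at 10 AM",
    "Get your free gift card now",
    "Let's catch up over coffee this weekend",
]
labels = ["spam", "not spam", "spam", "not spam"]

# Learn which words are associated with each label
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Predict the label for a new, unseen email
print(model.predict(["Claim your free prize today"]))  # expected: ['spam']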
Unsupervised learning trains on data that does not have labeled outputs. The algorithm tries to find hidden patterns or structure in the data. • The model works with input data only, without any target labels.
• Cluster 1: Customers who buy high-end electronics frequently. • Cluster 2: Customers who buy occasional home items. • Cluster 3: Customers who mainly buy grocery products. • The business can then target each cluster with customized marketing strategies.
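A minimal sketch of how such clusters could be found with scikit-learn's k-Means; the customer spending figures below are invented for illustration.

import numpy as np
from sklearn.cluster import KMeans

# Each row is a customer: [annual electronics spend, annual grocery spend]
customers = np.array([
    [2500, 200],   # buys high-end electronics frequently
    [2700, 150],
    [300, 1200],   # mainly buys grocery products
    [250, 1400],
    [400, 500],    # occasional home/mixed purchases
    [450, 450],
])

# No labels are given; k-Means groups customers by similarity alone
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # cluster assignment for each customer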
Another example: a company has collected lots of customer feedback, but does not have predefined categories for it. They want to identify the topics being discussed in this feedback. • Topic 1: “Product Quality” • Topic 2: “Customer Service” • Topic 3: “Shipping and Delivery” • Example: Ai5
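One common way to extract such topics is TF-IDF followed by matrix factorization. Below is a minimal sketch using scikit-learn's NMF; the feedback texts are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

feedback = [
    "The product broke after two days, poor quality",
    "Great build quality, the product feels solid",
    "Customer service was rude and unhelpful",
    "Support resolved my issue quickly, great service",
    "Shipping took three weeks, delivery was late",
    "Fast delivery and careful packaging",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(feedback)

# Factorize into 3 topics; each topic is a weighted set of words
nmf = NMF(n_components=3, random_state=0).fit(X)
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top = [words[j] for j in topic.argsort()[-3:]]
    print(f"Topic {i + 1}: {top}")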
In reinforcement learning, an agent learns to make decisions by interacting with an environment. • The agent takes actions and receives feedback in the form of rewards or penalties. • It uses this feedback to improve its future actions and decisions, iteratively learning the best strategy.
• Environment: The external system with which the agent interacts. • State: A snapshot of the environment at any given time. • Action: The decision or move made by the agent. • Reward: The feedback received after an action is taken.
Example: a robot trying to navigate a maze. • Actions: Move forward, turn left, turn right. • Reward: Positive reward for reaching the goal, negative reward for hitting a wall.
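A minimal Q-learning sketch in plain Python, simplifying the maze to a one-dimensional corridor of five cells with the goal at the right end (an assumption for brevity, not a full maze). Actions: 0 = move left, 1 = move right.

import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != n_states - 1:          # until the goal is reached
        # epsilon-greedy: usually exploit the best known action, sometimes explore
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = Q[state].index(max(Q[state]))
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 10 if next_state == n_states - 1 else -1  # small penalty per step
        # Q-learning update rule
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print([row.index(max(row)) for row in Q[:-1]])  # learned policy: always move right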
Example: a self-driving car. • Environment: The road, traffic, pedestrians, and rules of driving. • Actions: The car can accelerate, brake, turn, or change lanes. • Reward System: • Positive rewards: Staying in the lane, avoiding collisions, and reaching the destination efficiently. • Negative rewards: Collisions, sudden braking, veering out of the lane, or violating traffic rules.
Trained on labeled data; learns a mapping from inputs to outputs. Algorithms: Linear Regression, Logistic Regression, Decision Trees. Applications: Classification (spam detection), Regression (predicting house prices). • Unsupervised Learning: Trained on unlabeled data; aims to find hidden patterns in the data. Algorithms: k-Means, PCA, Hierarchical Clustering, Autoencoders. Applications: Clustering (customer segmentation), Anomaly Detection (fraud). • Reinforcement Learning: An agent learns by interacting with an environment, receiving rewards/penalties. Algorithms: Q-Learning, DQN, Policy Gradient Methods. Applications: Game playing (AlphaGo), Robotics (robot navigation), Autonomous vehicles.
• ML and DL are both subsets of Artificial Intelligence (AI), but they differ in their methods, complexity, and capabilities. • DL focuses on using deep neural networks to automatically learn features from raw data (e.g., images, audio, text) and make predictions or classifications.
• Data: DL requires large amounts of data (e.g., millions of labeled images for image recognition). • Neural Networks: DL uses neural networks with many layers of neurons (hence “deep”). Each layer learns progressively more complex features. • Training: The model learns by adjusting weights using techniques like backpropagation and gradient descent. The deeper the network, the more complex features it can learn. • Prediction: After training, the model can be used for tasks like image classification, speech recognition, natural language processing (NLP), and game playing.
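The training loop described above can be sketched in a few lines of PyTorch; the network shape, data, and labels here are illustrative placeholders, not a real task.

import torch
import torch.nn as nn

model = nn.Sequential(              # a small "deep" network: two hidden layers
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2),               # two output classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 10)             # 64 samples, 10 features each (random data)
y = torch.randint(0, 2, (64,))      # random labels, purely for illustration

for step in range(100):
    logits = model(x)               # forward pass
    loss = loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()                 # backpropagation computes the gradients
    optimizer.step()                # gradient descent adjusts the weights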
Think of deep learning as teaching a computer to solve problems by mimicking how our brains work. • It's like training a child to recognize something, for example, identifying a cat.
Show them lots of pictures of cats (examples). • Tell them, "This is a cat" for each one. • Over time, they learn to identify cats by recognizing patterns like fur, whiskers, and pointy ears.
The computer uses a structure called a neural network. • You feed it lots of examples (data), and it learns patterns by adjusting its "knowledge" (numbers inside the network). • After training, it can tell if a new picture is a cat or not.
A neural network is like a giant Lego tower, where each layer of Lego bricks represents a step in figuring something out. • Input Layer (The Start): • Imagine you're trying to recognize a handwritten number (like "5"). • The first layer gets the raw information (pixels of the image). • Hidden Layers (The Brains): • Layer 1: Looks for basic shapes (lines, edges). • Layer 2: Combines shapes into letters or digits. • Layer 3: Decides, "This looks like a 5." • The deeper the network (more layers), the more complex patterns it can recognize. • Output Layer (The Answer): • This layer gives the final result, e.g., "This is a 5."
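In PyTorch, the layer structure just described might look like the following sketch for a 28x28 handwritten digit image; the layer sizes are illustrative, not a tuned model.

import torch.nn as nn

digit_net = nn.Sequential(
    nn.Flatten(),                     # input layer: 28*28 = 784 raw pixels
    nn.Linear(784, 128), nn.ReLU(),   # hidden layer 1: basic shapes (lines, edges)
    nn.Linear(128, 64), nn.ReLU(),    # hidden layer 2: combinations of shapes
    nn.Linear(64, 10),                # output layer: one score per digit 0-9
)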
Another example: a robot learning to make a sandwich. • Input Layer: The ingredients (bread, cheese, etc.). • Hidden Layers: • Layer 1: Understands what each ingredient is. • Layer 2: Learns the steps (bread first, cheese second). • Layer 3: Figures out what a complete sandwich looks like. • Output Layer: The robot makes a perfect sandwich.
recognition). • Speech Recognition (e.g., voice assistants like Siri or Alexa). • Natural Language Processing (e.g., chatbots, translation services, text generation). • Autonomous Vehicles (e.g., self-driving cars using image recognition for navigation).
You have a smaller dataset (typically <10,000 examples). • You don’t have access to large computational resources (GPUs/TPUs). • You require a model that is more interpretable or explainable. • Use DL if: • You have large amounts of data (millions of examples). • Your task involves complex data types like images, audio, or text. • You have access to significant computational power (e.g., GPUs or TPUs).
which is a type of deep learning model that processes and generates human-like text. Let’s break it down: • Pre-training: • The model is trained on massive amounts of text data from the internet. • It learns patterns, relationships between words, grammar, context, and even nuanced meanings. • This stage teaches the model general language understanding. • Fine-tuning • After pre-training, it is fine-tuned on more specific datasets with human feedback to make its responses helpful, safe, and aligned with user needs. • Transformer Architecture: • ChatGPT uses the Transformer neural network, which excels at handling sequential data (like text).
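The core of the Transformer is the attention mechanism. Below is a simplified, single-head self-attention sketch in PyTorch; real models add multiple heads, masking, and learned projections, so this is only the central computation.

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    # Each token "queries" every other token and mixes in their values,
    # which is how Transformers relate words across a whole sequence.
    scores = Q @ K.transpose(-2, -1) / (K.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ V

x = torch.randn(1, 6, 16)   # batch of 1, sequence of 6 tokens, 16 dims each
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)            # torch.Size([1, 6, 16])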
Python has a large AI community, excellent libraries for data science, and integrates well with frameworks for machine learning and AI tasks. • Other languages: JavaScript, Java, and C++ can also be used for AI, but Python remains the primary choice due to its extensive ecosystem.
and performs other language-based tasks. • Google Cloud AI/ML APIs: • Provides prebuilt AI services like speech recognition and image classification. • Microsoft Azure AI APIs: • Offers tools for vision, speech, and language processing.
Machine Learning (ML): Classical ML algorithms (e.g., regression, classification, clustering). Easy to use, excellent for beginners. Applications: predicting house prices (supervised learning), customer segmentation (unsupervised learning). • PyTorch (Deep Learning, DL): A general-purpose deep learning framework for building and training custom machine learning models; you train your own model and define everything. Research-friendly, with flexible debugging. Applications: research in NLP, computer vision, generative models. • Hugging Face (Deep Learning, DL): Pre-trained transformer models for NLP tasks; simplifies PyTorch usage. Plug-and-play models: BERT, GPT, RoBERTa, and T5. Applications: text summarization, chatbots, translation.
Face: • Model Implementation: PyTorch requires coding the entire model architecture manually; Hugging Face provides pre-trained models with a one-line command. • Pre-Trained Models: In PyTorch few are available, and downloading and integrating them requires effort; Hugging Face offers thousands of pre-trained models via the Model Hub. • Tokenization: In PyTorch you must handle tokenization and preprocessing manually; in Hugging Face tokenization is built in and integrates seamlessly with the models. • Inference (Prediction): PyTorch requires defining input/output logic and loading weights; Hugging Face's pre-built pipelines make inference easy and intuitive. • Training & Fine-Tuning: In PyTorch you must manually write training loops and optimization code; Hugging Face offers tools like the Trainer API for streamlined training. • Ease of Use: PyTorch is suitable for custom solutions but requires more setup; Hugging Face is ideal for quick prototyping and pre-built solutions.
from Transformers): • Trained on a large corpus of text (e.g., Wikipedia). • Used for tasks like text classification, question answering, etc. • GPT (Generative Pre-trained Transformer): • Trained on large text datasets for language modeling and text generation. • T5 (Text-to-Text Transfer Transformer): • Converts every NLP task into a text-to-text format (e.g., "Translate English to French: Hello" → "Bonjour").
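For example, the T5 translation task above maps to a one-line Hugging Face pipeline. This sketch assumes the transformers library is installed and downloads the small t5-small checkpoint on first run.

from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")
print(translator("Hello")[0]["translation_text"])  # e.g. "Bonjour"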
for building a ChatGPT-like app. • Models are also available for vision and audio. • Requires PyTorch. • Hugging Face is an open-source library for Natural Language Processing (NLP). • Provides pre-trained transformer models (e.g., GPT, BERT). • Supports NLP tasks: text generation, translation, question answering, etc.
Import a pre-trained model: from transformers import pipeline. • Use a pipeline for tasks like text generation or sentiment analysis: • generator = pipeline('text-generation', model='gpt2') • Fine-tune models on your dataset if needed.
from transformers import pipeline

# Load a text-generation model
chatbot = pipeline("text-generation", model="gpt2")

# Start a conversation
print("Chatbot: Hello! I'm here to chat with you. Type 'exit' to end the conversation.")
while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        print("Chatbot: Goodbye! Have a great day!")
        break
    # Generate the chatbot's response
    response = chatbot(user_input, max_length=50, num_return_sequences=1)
    print(f"Chatbot: {response[0]['generated_text']}")
GPT4All allows users to run large language models (LLMs) locally on their personal devices. • Local Execution: Operates entirely on your hardware, ensuring data privacy and eliminating dependency on external servers. • User-Friendly Interface: Provides an intuitive desktop application for seamless interaction with LLMs.
local use of LLMs; Hugging Face provides a platform for pre-trained models and fine-tuning pipelines. • Model Hub: GPT4All is limited to specific GPT-based LLMs; Hugging Face hosts a vast collection of models (transformers, vision, audio, etc.). • Framework Dependency: GPT4All is independent and runs models directly on CPUs; Hugging Face requires PyTorch, TensorFlow, or similar frameworks. • Hardware Requirements: GPT4All is optimized for consumer hardware (e.g., CPUs); Hugging Face models are often optimized for GPUs and cloud infrastructure.
avoids dependency on cloud-based APIs. • Easy Python SDK: Simplifies development and integration into custom projects. • Good macOS app: A user-friendly GUI is essential for non-developer interactions. • ChatGPT-like UI (GUI and CLI): Provides both visual and command-line interfaces for flexibility. • Local document training: Ability to train or fine-tune models on specific local data/documents.
Use GPT4All if: • You need an offline, privacy-focused solution. • You want lightweight models that work efficiently on CPUs without GPUs. • Use Hugging Face if: • You need access to a wider variety of models (e.g., vision, audio, multilingual NLP). • You want a framework for fine-tuning, experimentation, and cloud deployment.
GPT4All may not be the best fit when: • Local hardware is insufficient. • Fast inference and high scalability are critical. • Training from scratch is a key requirement. • Long-term updates or regulatory compliance is a concern. • For these scenarios, cloud-based or larger frameworks like OpenAI, Hugging Face, or LangChain may be better suited.
from gpt4all import GPT4All

# Download or load the model
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

print("Chatbot: Hello! I'm here to chat with you. Type 'exit' to end the conversation.")

# Start a chat session
with model.chat_session():
    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            print("Chatbot: Goodbye! Have a great day!")
            break
        # Generate a response from the model
        bot_response = model.generate(user_input, max_tokens=150)
        print(f"Chatbot: {bot_response}")
• torchvision • If these are not installed automatically, then: • pip install torch torchvision transformers sentence-transformers langchain llama-cpp-python flask