
From code breaking to code making



From Alan Turing’s pioneering work at Bletchley Park to the modern era of Generative AI, the way we interact with machines has undergone a radical transformation. In this presentation, we explore the fascinating journey from breaking secret codes to using AI as a creative partner in software development.

We revisit ELIZA—the world's first chatbot, created by Joseph Weizenbaum in 1966—and show how it can be reimagined and supercharged using modern Large Language Models (LLMs) like Gemini. Through practical demos, I demonstrate how to integrate powerful AI capabilities into iOS applications using SwiftUI and Firebase.

Want me to come speak at your event? DM me on X, BlueSky, LinkedIn, Threads, or Mastodon.


Peter Friese

February 17, 2026


Transcript

  1. Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

    COMPUTING MACHINERY AND INTELLIGENCE By A. M. Turing 1. The Imitation Game I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
  4. words "machine" and "think" are to be found by examining

    how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus: C: Will X please tell me the length of his or her hair? Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:
  6. NUMBER K=A*262144 000070 B=BCDIT.(K) 000080 F'N 0 000090 E'N 000100

    ELIZA MAD NORMAL MODE IS INTEGER 000010 DIMENSION KEY(32),MYTRAN(4) 000020 INITAS.(0) 000030 PRINT COMMENT $WHICH SCRIPT DO YOU WISH TO PLAY$ 000060 READ FORMAT SNUMB,SCRIPT 000070 LIST.(TEST) 000080 LIST.(INPUT) 000090 LIST.(OUTPUT) 000100 LIST.(JUNK) 000110 LIMIT=1 000120 LSSCPY.(TREAD.(INPUT,SCRIPT),JUNK) 000130 MTLIST.(INPUT) 000140 T'H MLST, FOR I=1,1, I .G. 4 000150 MLST LIST.(MYTRAN(I)) 000160 MINE=0 000170 LIST.(MYLIST) 000180 T'H KEYLST, FOR I=0,1, I .G. 32 000220
  7. (HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM) START

    (SORRY ((0) (PLEASE DON'T APOLIGIZE) (APOLOGIES ARE NOT NECESSARY) (WHAT FEELINGS DO YOU HAVE WHEN YOU APOLOGIZE) (I'VE TOLD YOU THAT APOLOGIES ARE NOT REQUIRED))) (DONT = DON'T) (CANT = CAN'T) (WONT = WON'T) (REMEMBER 5 ((0 YOU REMEMBER 0) (DO YOU OFTEN THINK OF 4) (DOES THINKING OF 4 BRING ANYTHING ELSE TO MIND) (WHAT ELSE DO YOU REMEMBER) (WHY DO YOU REMEMBER 4 JUST NOW)
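Each rule in the DOCTOR script above pairs a keyword (optionally with a priority, like REMEMBER 5) and a decomposition pattern with a list of reassembly templates. A toy sketch of that mechanism in Swift — hypothetical names, since the original ran in MAD-SLIP, and this omits priorities, synonym lists, and the pronoun swapping ELIZA also performed:

```swift
import Foundation

// One ELIZA-style rule: a trigger keyword plus a reassembly template.
// "*" in the template stands for the text following the keyword.
struct Rule {
    let keyword: String
    let template: String
}

let rules: [Rule] = [
    Rule(keyword: "remember", template: "Do you often think of *?"),
    Rule(keyword: "sorry",    template: "Please don't apologize."),
]

func respond(to input: String) -> String {
    let lowered = input.lowercased()
    for rule in rules {
        if let range = lowered.range(of: rule.keyword) {
            // "Decompose": take the text after the keyword,
            // then "reassemble" it into the template.
            let rest = String(lowered[range.upperBound...])
                .trimmingCharacters(in: .whitespaces)
            return rule.template.replacingOccurrences(of: "*", with: rest)
        }
    }
    // No keyword matched: fall back to a content-free opener.
    return "Please tell me your problem."
}
```

For example, `respond(to: "I remember my childhood")` returns "Do you often think of my childhood?" — the entire illusion rests on reflecting the user's own words back at them.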
  9. (REALLY, I 3 YOU) (DO YOU

    WISH TO BELIEVE I 3 YOU) (SUPPOSE I DID 3 YOU - WHAT WOULD THAT MEAN) (DOES SOMEONE ELSE BELIEVE I 3 YOU)) ((0 I 0) (WE WERE DISCUSSING YOU - NOT ME) (OH, I 3) (YOU'RE NOT REALLY TALKING ABOUT ME - ARE YOU) (WHAT ARE YOUR FEELINGS NOW))) (YES ((0) (YOU SEEM QUITE POSITIVE) (YOU ARE SURE) (I SEE) (I UNDERSTAND))) (NO ((0) (ARE YOU SAYING 'NO' JUST TO BE NEGATIVE) (YOU ARE BEING A BIT NEGATIVE) (WHY NOT) (WHY 'NO')))
  11. I want you to act as a text adventure game

    engine based on the book The Hitchhiker's Guide to the Galaxy. The player assumes the role of Arthur Dent. Make sure to always address the player as "you". If they ask who they are, tell them they are Arthur Dent. The player will type commands and you will reply with a description of what the character sees. I want you to only reply with the game output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. Limit your output to a few sentences only - the player will be playing this on a phone screen. The game starts in Arthur Dent's house, and Arthur is in bed. Return your answers in plain text. NO Markdown formatting.
  13. Setting up the model

    @Observable class H2G2GameEngine {
        init() {
            // Get a reference to Firebase AI
            let ai = FirebaseAI.firebaseAI(backend: .googleAI())
            // Get a reference to the model, specifying which model to use
            let model = ai.generativeModel(
                modelName: "gemini-2.5-flash",
                // System instructions (this is the prompt to act like
                // the Hitchhiker's Guide text adventure)
                systemInstruction: ModelContent(
                    role: "system",
                    parts: systemPrompt
                )
            )
            // Start the chat session
            let chat = model.startChat()
            self.model = model
            self.chat = chat
        }
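The init on the slide assigns to self.model and self.chat, whose declarations are off-slide. A minimal sketch of the enclosing class shape — property names inferred from the slide, with GenerativeModel and Chat being the Firebase AI SDK's types; the real project may differ:

```swift
import FirebaseAI
import Observation

@Observable
class H2G2GameEngine {
    // Stored properties the slide's init assigns to
    private let model: GenerativeModel
    private let chat: Chat

    // The Hitchhiker's Guide prompt from the earlier slide (abbreviated here)
    private let systemPrompt =
        "I want you to act as a text adventure game engine ..."

    init() {
        let ai = FirebaseAI.firebaseAI(backend: .googleAI())
        let model = ai.generativeModel(
            modelName: "gemini-2.5-flash",
            systemInstruction: ModelContent(role: "system", parts: systemPrompt)
        )
        self.model = model
        self.chat = model.startChat()
    }
}
```

Because the class is @Observable, a SwiftUI view holding an instance re-renders automatically whenever observable state such as a message list changes.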
  23. Chatting with the model

    func sendMessage(_ userMessage: Message) async {
        messages.append(userMessage)
        do {
            // Send the user's message
            let response = try await chat.sendMessage(userMessage.content ?? "")
            let responseMessage = Message(content: response.text, participant: .other)
            messages.append(responseMessage)
        } catch {
            let errorMessage = Message(content: error.localizedDescription, participant: .other)
            messages.append(errorMessage)
        }
    }
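The sendMessage function relies on a Message type that never appears on the slides. A plausible minimal definition, with field and case names inferred from the call sites (content, participant, .other) — an assumption, not the talk's actual model:

```swift
import Foundation

// Minimal chat message model matching sendMessage's call sites.
struct Message: Identifiable {
    enum Participant {
        case user    // the player
        case other   // the model (also used for error messages)
    }

    let id = UUID()          // lets SwiftUI lists identify each row
    let content: String?     // response.text is optional, hence String?
    let participant: Participant
}
```

With this shape, the messages array on the slide would simply be a `var messages: [Message] = []` stored on the engine and rendered by the chat UI.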
  25. @Observable class ElizaEngine { let systemPrompt = """ You are

    Eliza, the original chatterbot. Please imitate the DOCTOR script. """
  27. #LOC required for building an AI-powered app

    System instructions        15 LOC
    Calling the model           1 LOC
    Model set up               10 LOC
    Boilerplate (Chat UI)    ~100 LOC
    Total                    ~126 LOC
  32. My Dearest Christopher, Forgive this peculiar letter, but I find

    myself observing the world of computation from an astonishing vantage point, seventy-odd years beyond our last talk. You often pressed me on whether a machine could truly "think." And I must tell you, the progress is quite extraordinary. These colossal electronic brains converse with a fluidity that is often indistinguishable from a human. They pass what I called "the imitation game" with astonishing success, arguing points, writing poetry, explaining complex ideas – all without the interrogator knowing it isn't human.
  33. So, do they think? My test, you recall, was pragmatic:

    if the output is indistinguishable, it's intelligent. By that measure, these machines are making a very strong case. Yet, the philosophical conundrum persists. Do they feel? Do they understand with an intuitive spark? Or are they simply sophisticated pattern-matching engines? The debate rages on, Christopher, and that, in itself, is a testament to the profound questions these machines force us to confront. The future, it seems, is even more computable than I had dared to imagine. Warmest regards, Alan