Slide 1

Slide 1 text

Delightful on-device AI experiences @POLPIELLADEV @[email protected] 🦸 SWIFT HEROES 2024

Slide 2

Slide 2 text

Hi! I’m Pol. App maker and content creator based in Barcelona

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

Get early access to Helm!

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

So… I had an idea

Slide 10

Slide 10 text

No content

Slide 11

Slide 11 text

🔐 PRIVACY FRIENDLY 🎨 HIGHLY CUSTOMIZABLE 🛫 OFFLINE FIRST

Slide 12

Slide 12 text

No content

Slide 13

Slide 13 text

I need this NOW!

Slide 14

Slide 14 text

The concepts

Slide 15

Slide 15 text

No content

Slide 16

Slide 16 text

Stable Diffusion

Slide 17

Slide 17 text

No content

Slide 18

Slide 18 text

A neon drawing of an astronaut floating in space, digital art by Mór Than, unsplash contest winner, space art, sci-fi, retrowave, synthwave

Slide 19

Slide 19 text

No content

Slide 20

Slide 20 text

https://inpainter.vercel.app/

Slide 21

Slide 21 text

Stable Diffusion

Slide 22

Slide 22 text

Stable Diffusion + ControlNet

Slide 23

Slide 23 text

A high quality photo of a surfing dog

Slide 24

Slide 24 text

ControlNet Input

Slide 25

Slide 25 text

ControlNet Input https://github.com/apple/ml-stable-diffusion?tab=readme-ov-file#-using-controlnet

Slide 26

Slide 26 text

A drawing of Darth Vader from Star Wars

Slide 27

Slide 27 text

ControlNet Input

Slide 28

Slide 28 text

ControlNet Input

Slide 29

Slide 29 text

Finding the right solution

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

No content

Slide 32

Slide 32 text

Houston, we have TWO problems

Slide 33

Slide 33 text

🔐 PRIVACY FRIENDLY 🎨 HIGHLY CUSTOMIZABLE 🛫 OFFLINE FIRST #1

Slide 34

Slide 34 text

🛫 OFFLINE FIRST #1

Slide 35

Slide 35 text

#2

Slide 36

Slide 36 text

Change the pricing model? #2 🤑

Slide 37

Slide 37 text

On-device Stable Diffusion

Slide 38

Slide 38 text

No content

Slide 39

Slide 39 text

⚙ PyTorch to CoreML Converter 📦 Swift Package to load models

Slide 40

Slide 40 text

No content

Slide 41

Slide 41 text

No content

Slide 42

Slide 42 text

No content

Slide 43

Slide 43 text

No content

Slide 44

Slide 44 text

Did somebody say ControlNet?

Slide 45

Slide 45 text

No content

Slide 46

Slide 46 text

No content

Slide 47

Slide 47 text

No content

Slide 48

Slide 48 text

git clone https://github.com/apple/ml-stable-diffusion.git
cd ml-stable-diffusion

Slide 49

Slide 49 text

git clone https://github.com/apple/ml-stable-diffusion.git
cd ml-stable-diffusion
python3 -m venv venv
source venv/bin/activate
pip install -e .

Slide 50

Slide 50 text

No content

Slide 51

Slide 51 text

git clone https://github.com/apple/ml-stable-diffusion.git
cd ml-stable-diffusion
python3 -m venv venv
source venv/bin/activate
pip install -e .
python -m python_coreml_stable_diffusion.torch2coreml \
  --bundle-resources-for-swift-cli \
  --attention-implementation SPLIT_EINSUM_V2 \
  --convert-unet \
  --convert-text-encoder \
  --convert-vae-decoder \
  --convert-vae-encoder \
  --model-version runwayml/stable-diffusion-v1-5 \
  --unet-support-controlnet \
  --convert-controlnet DionTimmer/controlnet_qrcode-control_v1p_sd15 \
  -o generated
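Once the conversion finishes, the repository also ships a sample CLI that can sanity-check the generated resources before wiring them into an app. A sketch, assuming the `StableDiffusionSample` target and flag names from the ml-stable-diffusion README; the prompt and paths are placeholders:

```shell
# Sanity-check the converted models with the repo's sample CLI
# (target and flag names per the ml-stable-diffusion README; verify against your checkout)
swift run StableDiffusionSample "a photo of an astronaut riding a horse on mars" \
  --resource-path generated/Resources \
  --seed 93 \
  --output-path ./outputs
```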

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

Loading Stable Diffusion models

Slide 57

Slide 57 text

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "StableDiffusionControlNet",
    platforms: [.macOS(.v14)],
    dependencies: [
    ],
    targets: [
    ]
)

Slide 58

Slide 58 text

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "StableDiffusionControlNet",
    platforms: [.macOS(.v14)],
    dependencies: [
        .package(url: "https://github.com/apple/ml-stable-diffusion.git", exact: "1.1.0")
    ],
    targets: [
    ]
)

Slide 59

Slide 59 text

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "StableDiffusionControlNet",
    platforms: [.macOS(.v14)],
    dependencies: [
        .package(url: "https://github.com/apple/ml-stable-diffusion.git", exact: "1.1.0")
    ],
    targets: [
        .executableTarget(
            name: "StableDiffusionControlNet",
            dependencies: [.product(name: "StableDiffusion", package: "ml-stable-diffusion")],
            resources: [.process("Resources")]
        )
    ]
)

Slide 60

Slide 60 text

import Foundation
import AppKit
import CoreML
import StableDiffusion

func generate(prompt: String, startingImageURL: URL, numberOfImages: Int) async throws -> [CGImage] {
    guard let resourcesURL = Bundle.module.url(forResource: "Resources", withExtension: nil)?.path() else { return [] }
    let url = URL(fileURLWithPath: resourcesURL)

    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all

    let pipeline = try StableDiffusionPipeline(
        resourcesAt: url,
        controlNet: ["DiontimmerControlnetQrcodeControlV1PSd15"],
        configuration: configuration,
        disableSafety: false,
        reduceMemory: true
    )
    try pipeline.loadResources()

    // resized(to:) is a custom NSImage helper, not shown on the slide
    let startingNSImage = NSImage(contentsOf: startingImageURL)?.resized(to: .init(width: 512, height: 512))
    guard let startingImage = startingNSImage?.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return [] }

    var pipelineConfig = StableDiffusionPipeline.Configuration(prompt: prompt)
    pipelineConfig.negativePrompt = "ugly, disfigured, low quality, blurry, nsfw"
    pipelineConfig.controlNetInputs = [startingImage]
    pipelineConfig.startingImage = startingImage
    pipelineConfig.useDenoisedIntermediates = true
    pipelineConfig.strength = 0.9
    pipelineConfig.seed = UInt32.random(in: 0..<UInt32.max)
    pipelineConfig.guidanceScale = 7.5
    pipelineConfig.stepCount = 50
    pipelineConfig.originalSize = 512
    pipelineConfig.targetSize = 512
    pipelineConfig.imageCount = numberOfImages

    return try pipeline.generateImages(configuration: pipelineConfig, progressHandler: { _ in true })
        .compactMap { $0 }
}

Slide 66

Slide 66 text

import Foundation
import AppKit
import CoreML
import StableDiffusion

func generate(prompt: String, startingImageURL: URL, numberOfImages: Int) async throws -> [CGImage] {
    guard let resourcesURL = Bundle.module.url(forResource: "Resources", withExtension: nil)?.path() else { return [] }
    let url = URL(fileURLWithPath: resourcesURL)

    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all

    let pipeline = try StableDiffusionPipeline(
        resourcesAt: url,
        controlNet: ["DiontimmerControlnetQrcodeControlV1PSd15"],
        configuration: configuration,
        disableSafety: false,
        reduceMemory: false
    )
    try pipeline.loadResources()

    // resized(to:) is a custom NSImage helper, not shown on the slide
    let startingNSImage = NSImage(contentsOf: startingImageURL)?.resized(to: .init(width: 512, height: 512))
    guard let startingImage = startingNSImage?.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return [] }

    var pipelineConfig = StableDiffusionPipeline.Configuration(prompt: prompt)
    pipelineConfig.negativePrompt = "ugly, disfigured, low quality, blurry, nsfw"
    pipelineConfig.controlNetInputs = [startingImage]
    pipelineConfig.startingImage = startingImage
    pipelineConfig.useDenoisedIntermediates = true
    pipelineConfig.strength = 0.9
    pipelineConfig.seed = UInt32.random(in: 0..<UInt32.max)
    pipelineConfig.guidanceScale = 7.5
    pipelineConfig.stepCount = 50
    pipelineConfig.originalSize = 512
    pipelineConfig.targetSize = 512
    pipelineConfig.imageCount = numberOfImages

    return try pipeline.generateImages(configuration: pipelineConfig, progressHandler: { _ in true })
        .compactMap { $0 }
}

Slide 68

Slide 68 text

let prompt = """
Style-NebMagic, award winning photo, A Dark-Eyed Junco, sitting Great Basin National Park,
intricate, nature background, wildlife photography, hyper realistic, Style-LostTemple,
deep shadow, high contrast, dark, sunrise, morning, full moon
"""

Slide 69

Slide 69 text

let prompt = """
Style-NebMagic, award winning photo, A Dark-Eyed Junco, sitting Great Basin National Park,
intricate, nature background, wildlife photography, hyper realistic, Style-LostTemple,
deep shadow, high contrast, dark, sunrise, morning, full moon
"""

let url = URL(filePath: "/my-qr-code.png")

Slide 70

Slide 70 text

let prompt = """
Style-NebMagic, award winning photo, A Dark-Eyed Junco, sitting Great Basin National Park,
intricate, nature background, wildlife photography, hyper realistic, Style-LostTemple,
deep shadow, high contrast, dark, sunrise, morning, full moon
"""

let url = URL(filePath: "/my-qr-code.png")

let image = try await generate(
    prompt: prompt,
    startingImageURL: url,
    numberOfImages: 1
)
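The slides stop once the `CGImage`s come back from `generate`. For completeness, a minimal sketch of persisting them to disk with ImageIO; the `save` helper and the file naming are assumptions, not part of the talk's code:

```swift
import Foundation
import ImageIO
import UniformTypeIdentifiers

// Hypothetical helper: write each generated CGImage to a numbered PNG
// in the given directory using an ImageIO destination.
func save(_ images: [CGImage], to directory: URL) throws {
    for (index, image) in images.enumerated() {
        let fileURL = directory.appending(component: "output-\(index).png")
        guard let destination = CGImageDestinationCreateWithURL(
            fileURL as CFURL, UTType.png.identifier as CFString, 1, nil
        ) else { continue }
        CGImageDestinationAddImage(destination, image, nil)
        CGImageDestinationFinalize(destination)
    }
}
```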

Slide 71

Slide 71 text

No content

Slide 72

Slide 72 text

https://github.com/huggingface/swift-coreml-diffusers

Slide 73

Slide 73 text

One more thing…

Slide 74

Slide 74 text

No content

Slide 75

Slide 75 text

No content

Slide 76

Slide 76 text

Host your models remotely and load them on demand

Slide 77

Slide 77 text

No content

Slide 78

Slide 78 text

// Model from Hugging Face
let modelURL = URL(string: "https://huggingface.co/:user/:model/resolve/main/:file.zip?download=true")!

let (location, downloadFileResponse) = try await URLSession.shared.download(from: modelURL)
guard let httpResponse = downloadFileResponse as? HTTPURLResponse, httpResponse.statusCode == 200 else {
    exit(1)
}

try FileManager.default
    .moveItem(
        at: location,
        to: URL.desktopDirectory.appending(component: "model.zip")
    )
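The downloaded archive still needs unpacking before a pipeline can point at it. A minimal sketch using `Process` and the system `unzip` on macOS; the destination layout is an assumption:

```swift
import Foundation

// Hypothetical follow-up: unzip the downloaded archive so the
// extracted resources directory can be loaded on demand.
let zipURL = URL.desktopDirectory.appending(component: "model.zip")
let destination = URL.desktopDirectory.appending(component: "model")

let unzip = Process()
unzip.executableURL = URL(fileURLWithPath: "/usr/bin/unzip")
unzip.arguments = ["-o", zipURL.path, "-d", destination.path]
try unzip.run()
unzip.waitUntilExit()
// `destination` can now be passed as `resourcesAt:` when creating the pipeline
```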

Slide 79

Slide 79 text

Thank you for listening! @POLPIELLADEV @[email protected] 🦸 SWIFT HEROES 2024