Slide 1

Slide 1 text

DIGITAL SIGNAL PROCESSING WITH SWIFT (デジタル信号処理: "Digital Signal Processing") @DAISYR317

Slide 2

Slide 2 text

AREAS OF DSP ("Areas of Digital Signal Processing")
WHAT IS DSP?
▸ Digital cameras -> Image compression | Enhancements
▸ Audio -> Speech generation | Voice recognition
▸ Space -> Data aggregation | Transmission

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

DIGITAL MUSIC COULD NOT EXIST WITHOUT THE FOURIER TRANSFORM

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
HOW DO WE USE DSP IN IOS APPS?
▸ Accelerate framework / vDSP (for computing the Fast Fourier Transform, or FFT)
▸ AVFoundation / AVAudioEngine / AudioUnit (for complex audio processing)
▸ The sampling theorem, also known as the Shannon (or Nyquist) sampling theorem
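The sampling theorem mentioned above says a signal can only be reconstructed faithfully when it is sampled at more than twice its highest frequency; any component above that Nyquist limit folds back as an alias. A minimal pure-Swift sketch (not from the talk; the frequencies are illustrative):

```swift
import Foundation

/// Sample a sine wave of frequency `hz` at `sampleRate` for `count` samples.
func sampledSine(hz: Double, sampleRate: Double, count: Int) -> [Double] {
    return (0..<count).map { sin(2.0 * .pi * hz * Double($0) / sampleRate) }
}

let sampleRate = 1000.0                    // 1 kHz sampling -> Nyquist limit is 500 Hz
let below = sampledSine(hz: 100.0, sampleRate: sampleRate, count: 16)
let alias = sampledSine(hz: 1100.0, sampleRate: sampleRate, count: 16)

// Above Nyquist, the 1100 Hz tone is indistinguishable from a 100 Hz tone:
// the sampled values coincide, so the higher frequency is lost.
let maxDiff = zip(below, alias).map { abs($0.0 - $0.1) }.max()!
print(maxDiff < 1e-9)  // prints "true"
```

This is why 44.1 kHz is a common audio rate: it comfortably covers the roughly 20 kHz upper limit of human hearing.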

Slide 8

Slide 8 text

Taken from "What's New in Audio": https://developer.apple.com/videos/play/wwdc2017/501/

Slide 9

Slide 9 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
NOTABLE SWIFT/OBJ-C FRAMEWORKS BUILT FOR AUDIO PROCESSING
▸ AudioKit
▸ EZAudio (deprecated)
▸ The Amazing Audio Engine (retired)

Slide 10

Slide 10 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
PROCESSING AUDIO WITH PURE SWIFT
▸ Capture audio input
▸ Process it
▸ Display some unique points

Slide 11

Slide 11 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
PRACTICAL APPROACH: USING AVAUDIOENGINE

    import AVFoundation

    class AudioListener: NSObject {
        var audioEngine: AVAudioEngine!
        var audioInputNode: AVAudioInputNode!
        var audioBuffer: AVAudioPCMBuffer!
        var sessionActive = false

        override init() {
            super.init()
            startAudioSession()
            if sessionActive {
                installTap()
            }
        }

Slide 12

Slide 12 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
PRACTICAL APPROACH: USING AVAUDIOENGINE

        private func startAudioSession() {
            let audioSession = AVAudioSession.sharedInstance()
            let preferredSampleRate = 44100.0      /// Targeted default hardware rate
            let preferredIOBufferDuration = 0.02   /// ≈ 1024 / 44100
            do {
                try audioSession.setCategory(AVAudioSessionCategoryRecord, mode: AVAudioSessionModeMeasurement, options: [])
                try audioSession.setPreferredSampleRate(preferredSampleRate)
                try audioSession.setPreferredIOBufferDuration(preferredIOBufferDuration)
                try audioSession.setActive(true)
                sessionActive = true
            } catch let error as NSError {
                print("Audio session error: \(error)")
            }
        }
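The preferred IO buffer duration above is simply frames-per-buffer divided by the sample rate. A quick check (values taken from the slide) shows the 0.02 figure is a rounding:

```swift
let framesPerBuffer = 1024.0
let sampleRate = 44100.0                        // values from the slide
let ioBufferDuration = framesPerBuffer / sampleRate
print(ioBufferDuration)                         // ≈ 0.02322 s, rounded to 0.02 on the slide
```

The session treats this as a preference, not a guarantee: the hardware picks the nearest duration it actually supports.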

Slide 13

Slide 13 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
PRACTICAL APPROACH: USING AVAUDIOENGINE

        private func installTap() {
            audioEngine = AVAudioEngine()
            audioInputNode = audioEngine.inputNode
            /// Buffer size in frames
            let frameLength = UInt32(2048)
            audioBuffer = AVAudioPCMBuffer(pcmFormat: audioInputNode.outputFormat(forBus: 0), frameCapacity: frameLength)
            audioBuffer.frameLength = frameLength
            audioInputNode.installTap(onBus: 0, bufferSize: frameLength, format: audioInputNode.outputFormat(forBus: 0), block: { (buffer, time) in
                /// We're only given 1 channel, so we need to extract that data into a normalized format
                let channels = UnsafeBufferPointer(start: buffer.floatChannelData, count: Int(buffer.format.channelCount))
                let floats = [Float](UnsafeBufferPointer(start: channels[0], count: Int(buffer.frameLength)))
                for i in 0..<floats.count {
                    /// … (the loop body is cut off on the original slide)
                }
            })
        }

Slide 14

Slide 14 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
PRACTICAL APPROACH: USING AVAUDIOENGINE

        func stopAudioEngine() {
            audioEngine.inputNode.removeTap(onBus: 0)
            audioEngine.stop()
        }

Slide 15

Slide 15 text

No content

Slide 16

Slide 16 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
PRACTICAL APPROACH: COMPUTING THE FFT

        /// - Parameter buffer: Audio data in PCM format
        func fftForward(_ buffer: AVAudioPCMBuffer) -> [Float] {
            let size: Int = Int(buffer.frameLength)
            /// Set up the transform
            let log2n = UInt(round(log2(Double(size))))
            let bufferSize = Int(1 << log2n)
            let inputCount = size / 2
            let fftSetup = vDSP_create_fftsetup(log2n, Int32(kFFTRadix2))

Slide 17

Slide 17 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
PRACTICAL APPROACH: COMPUTING THE FFT

            /// Create the complex split value to hold the output of the transform
            var realp = [Float](repeating: 0, count: inputCount)
            var imagp = [Float](repeating: 0, count: inputCount)
            var output = DSPSplitComplex(realp: &realp, imagp: &imagp)
            var transferBuffer = [Float](repeating: 0, count: bufferSize)
            /// Applying a Hann window smooths the edges of the incoming waveform
            /// and reduces spectral leakage in the FFT output
            vDSP_hann_window(&transferBuffer, vDSP_Length(bufferSize), Int32(vDSP_HANN_NORM))
            vDSP_vmul((buffer.floatChannelData?.pointee)!, 1, transferBuffer, 1, &transferBuffer, 1, vDSP_Length(bufferSize))
            let temp = UnsafePointer(transferBuffer)
            temp.withMemoryRebound(to: DSPComplex.self, capacity: transferBuffer.count) { (typeConvertedTransferBuffer) -> Void in
                /// Copies the contents of the interleaved complex vector from our
                /// buffer to a split complex vector, the output value
                vDSP_ctoz(typeConvertedTransferBuffer, 2, &output, 1, vDSP_Length(inputCount))
            }
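The Hann window applied above tapers the buffer's edges toward zero, so the FFT does not see an artificial discontinuity where the analysis buffer begins and ends. A pure-Swift sketch of the standard (denormalized) window; note that vDSP_HANN_NORM applies an additional scale factor, so its values differ slightly:

```swift
import Foundation

/// Standard Hann window: w[n] = 0.5 * (1 - cos(2πn / (N - 1)))
func hannWindow(count: Int) -> [Double] {
    return (0..<count).map { 0.5 * (1.0 - cos(2.0 * .pi * Double($0) / Double(count - 1))) }
}

let window = hannWindow(count: 1024)
print(window.first!, window.last!)  // both endpoints taper to ~0
print(window[512])                  // the center is ~1
```

Multiplying the signal by this bell shape (as vDSP_vmul does on the slide) trades a little frequency resolution for far cleaner magnitude peaks.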

Slide 18

Slide 18 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
PRACTICAL APPROACH: COMPUTING THE FFT

            /// Do the fast Fourier forward transform
            vDSP_fft_zrip(fftSetup!, &output, 1, log2n, Int32(FFT_FORWARD))
            /// Convert the complex output to magnitude
            var magnitudes = [Float](repeating: 0.0, count: inputCount)
            vDSP_zvmags(&output, 1, &magnitudes, 1, vDSP_Length(inputCount))
            /// Scale by 2.0 / Float(inputCount); `sqrtq` is a small helper that takes the
            /// element-wise square root (e.g. via vvsqrtf), since zvmags returns squared magnitudes
            var normalizedMagnitudes = [Float](repeating: 0.0, count: inputCount)
            vDSP_vsmul(sqrtq(magnitudes), 1, [2.0 / Float(inputCount)], &normalizedMagnitudes, 1, vDSP_Length(inputCount))
            /// Release the setup
            vDSP_destroy_fftsetup(fftSetup)
            return normalizedMagnitudes
        }
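One way to sanity-check magnitude scaling like the above is a naive O(N²) DFT in pure Swift: for a unit-amplitude sine landing exactly on a bin, the raw magnitude in that bin is N/2, so scaling by 2/N recovers the amplitude 1.0. (This sketch is not from the talk, and for vDSP_fft_zrip specifically the constant must also absorb that transform's extra factor of 2.)

```swift
import Foundation

/// Naive O(N²) DFT magnitude spectrum: slow, but handy for verifying FFT scaling.
func dftMagnitudes(_ x: [Double]) -> [Double] {
    let n = x.count
    return (0..<n / 2).map { k in
        var re = 0.0, im = 0.0
        for i in 0..<n {
            let phase = -2.0 * .pi * Double(k) * Double(i) / Double(n)
            re += x[i] * cos(phase)
            im += x[i] * sin(phase)
        }
        return sqrt(re * re + im * im)
    }
}

let n = 256
let bin = 10                                 // 10 full cycles across the buffer
let signal = (0..<n).map { sin(2.0 * .pi * Double(bin) * Double($0) / Double(n)) }
let normalized = dftMagnitudes(signal).map { $0 * 2.0 / Double(n) }

// The 2/N scale recovers the original unit amplitude in the matching bin.
print(normalized[bin])  // ≈ 1.0
```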

Slide 19

Slide 19 text

DEMO

Slide 20

Slide 20 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
ANOTHER APPROACH: BUILDING AN AUDIO UNIT

    private func setupAudioUnit() {
        var desc: AudioComponentDescription = AudioComponentDescription()
        desc.componentType = kAudioUnitType_Output
        desc.componentSubType = kAudioUnitSubType_RemoteIO
        desc.componentFlags = 0
        desc.componentFlagsMask = 0
        desc.componentManufacturer = kAudioUnitManufacturer_Apple

        var status: OSStatus = noErr
        let inputComponent: AudioComponent = AudioComponentFindNext(nil, &desc)!
        var tempAudioUnit: AudioUnit?
        status = AudioComponentInstanceNew(inputComponent, &tempAudioUnit)
        self.audioUnit = tempAudioUnit
        guard let au = self.audioUnit else { return }

        var one: UInt32 = 1
        status = AudioUnitSetProperty(au, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, kInputBus, &one, UInt32(MemoryLayout<UInt32>.size))

        // Set format to 32-bit floats, linear PCM
        let numChannels = 2 // 2-channel stereo
        var audioFormat: AudioStreamBasicDescription! = AudioStreamBasicDescription()
        audioFormat.mSampleRate = 1024
        audioFormat.mFormatID = kAudioFormatLinearPCM
        audioFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked
        audioFormat.mFramesPerPacket = 1
        audioFormat.mChannelsPerFrame = 2
        audioFormat.mBitsPerChannel = UInt32(8 * MemoryLayout<Float>.size)
        audioFormat.mBytesPerPacket = UInt32(numChannels * MemoryLayout<Float>.size)
        audioFormat.mBytesPerFrame = UInt32(numChannels * MemoryLayout<Float>.size)
        audioFormat.mReserved = 0
        status = AudioUnitSetProperty(au, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, kOutputBus, &audioFormat, UInt32(MemoryLayout<AudioStreamBasicDescription>.size))
        status = AudioUnitSetProperty(au, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &audioFormat, UInt32(MemoryLayout<AudioStreamBasicDescription>.size))

        // Set input/recording callback
        var inputCallbackStruct = AURenderCallbackStruct(inputProc: recordingCallback, inputProcRefCon: UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()))
        status = AudioUnitSetProperty(au, AudioUnitPropertyID(kAudioOutputUnitProperty_SetInputCallback), AudioUnitScope(kAudioUnitScope_Global), kInputBus, &inputCallbackStruct, UInt32(MemoryLayout<AURenderCallbackStruct>.size))

        // Ask Core Audio to allocate buffers for us on render.
        status = AudioUnitSetProperty(au, AudioUnitPropertyID(kAudioUnitProperty_ShouldAllocateBuffer), AudioUnitScope(kAudioUnitScope_Output), kInputBus, &one, UInt32(MemoryLayout<UInt32>.size))
        flag = Int(status)
    }

    let recordingCallback: AURenderCallback = { (inRefCon, ioActionFlags, inTimeStamp, inBusNumber, frameCount, ioData) -> OSStatus in
        let audioObject = unsafeBitCast(inRefCon, to: AudioUnitListener.self)
        var err: OSStatus = noErr
        // Set mData to nil; AudioUnitRender() should be allocating buffers
        var bufferList = AudioBufferList(
            mNumberBuffers: 1,
            mBuffers: AudioBuffer(
                mNumberChannels: UInt32(2),
                mDataByteSize: 16,
                mData: nil))
        if let au = audioObject.audioUnit {
            err = AudioUnitRender(au, ioActionFlags, inTimeStamp, inBusNumber, frameCount, &bufferList)
            // … (the rest of the callback is cut off on the original slide)
        }

Slide 21

Slide 21 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
ANOTHER APPROACH: BUILDING AN AUDIO UNIT
▸ Set up the AudioComponent
▸ Handle OSStatus
▸ Set up the audio stream format
▸ Handle the input/recording callback

Slide 22

Slide 22 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
RIGHT APPROACH: ?
1. Don't hold locks on the audio thread, e.g. pthread_mutex_lock or @synchronized.
2. Don't use Objective-C/Swift messaging on the audio thread, e.g. [myInstance doAThing] or myInstance.something.
3. Don't allocate memory on the audio thread, e.g. malloc(), new Abcd, or [MyClass alloc].
4. Don't do file or network IO on the audio thread, e.g. read, write, or sendto.

http://atastypixel.com/blog/four-common-mistakes-in-audio-development/
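A common way to satisfy rules 1 and 3 at once is a preallocated single-producer/single-consumer ring buffer: the audio thread writes samples without locking or allocating, and a reader thread drains them later. The sketch below (not from the talk) shows only the allocation-free read/write logic; real cross-thread use additionally needs atomic loads and stores of the indices, e.g. via the swift-atomics package.

```swift
/// Preallocated SPSC ring buffer sketch.
/// NOTE: for actual audio-thread use, `readIndex`/`writeIndex` must be
/// accessed atomically; this single-threaded sketch omits that.
struct RingBuffer {
    private var storage: [Float]
    private var readIndex = 0
    private var writeIndex = 0

    init(capacity: Int) {
        // Allocate everything up front, never on the audio thread.
        storage = [Float](repeating: 0, count: capacity)
    }

    /// Returns false (instead of blocking or allocating) when full.
    mutating func write(_ value: Float) -> Bool {
        let next = (writeIndex + 1) % storage.count
        if next == readIndex { return false }   // full: drop rather than lock
        storage[writeIndex] = value
        writeIndex = next
        return true
    }

    mutating func read() -> Float? {
        if readIndex == writeIndex { return nil }  // empty
        let value = storage[readIndex]
        readIndex = (readIndex + 1) % storage.count
        return value
    }
}

var buffer = RingBuffer(capacity: 4)
_ = buffer.write(1.0)
_ = buffer.write(2.0)
print(buffer.read()!, buffer.read()!, buffer.read() as Any)  // 1.0 2.0 nil
```

Dropping samples when the buffer is full is deliberate: on the audio thread, an occasional dropout is preferable to blocking the render deadline.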

Slide 23

Slide 23 text

DIGITAL SIGNAL PROCESSING WITH SWIFT

"Accelerate exposes SIMD instructions available in modern CPUs to significantly improve performance of certain calculations. Because of its relative obscurity and inconvenient APIs, Accelerate is not commonly used by developers, which is a shame, since many applications could benefit from these performance optimizations."
(Mattt Thompson, https://github.com/mattt/Surge)

Slide 24

Slide 24 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
WHY SHOULD WE USE FRAMEWORKS LIKE ACCELERATE?
▸ 2,800 APIs
▸ Less code to maintain
▸ Faster
▸ Energy efficient
▸ Runs on all architectures

Slide 25

Slide 25 text

DIGITAL SIGNAL PROCESSING WITH SWIFT INSPIRATION FOR THIS TALK

Slide 26

Slide 26 text

DIGITAL SIGNAL PROCESSING WITH SWIFT
INTERESTED IN LEARNING MORE?
▸ The Scientist and Engineer's Guide to Digital Signal Processing by Steven W. Smith, Ph.D.: http://www.dspguide.com
▸ Ronald Nicholson: https://dsp.stackexchange.com/users/154/hotpaw2
▸ Apple's vDSP Programming Guide: https://developer.apple.com/library/content/documentation/Performance/Conceptual/vDSP_Programming_Guide/UsingFourierTransforms/UsingFourierTransforms.html
▸ "Fourier Transforms and FFTs" by Mike Ash: https://www.mikeash.com/pyblog/friday-qa-2012-10-26-fourier-transforms-and-ffts.html
▸ GitHub link: https://github.com/daisyramos317/DSPSwift

Slide 27

Slide 27 text

Thank you! (ありがとうございました!) @daisyr317