IN IOS APPS?
▸ Accelerate framework - vDSP (for computing the Fast Fourier Transform, or FFT)
▸ AVFoundation / AVAudioEngine / AudioUnit (for complex audio processing)
▸ The sampling theorem, also known as the Nyquist-Shannon sampling theorem
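A quick numeric illustration of the sampling theorem (the 44.1 kHz rate is an assumed example, not from the deck):

let sampleRate = 44_100.0              // a common iOS recording rate
let nyquistFrequency = sampleRate / 2  // 22,050 Hz, the highest frequency this rate can represent
// To capture a tone at f Hz without aliasing, you must sample at more than 2 * f.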
AVFoundation

import AVFoundation
import Accelerate

class AudioListener: NSObject {
    var audioEngine: AVAudioEngine!
    var audioInputNode: AVAudioInputNode!
    var audioBuffer: AVAudioPCMBuffer!
    var sessionActive = false

    override init() {
        super.init()
        startAudioSession()
        if sessionActive {
            installTap()
        }
    }
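The init above calls startAudioSession(), which this excerpt doesn't show. A minimal sketch of what it might look like, assuming an AVAudioSession configured for recording (the category choice and error handling are illustrative, not from the talk):

    func startAudioSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            /// Recording requires the .record (or .playAndRecord) category
            try session.setCategory(.record, mode: .default, options: [])
            try session.setActive(true)
            sessionActive = true
        } catch {
            /// Leave sessionActive false so init skips installing the tap
            print("Failed to start audio session: \(error)")
        }
    }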
    func installTap() {
        audioEngine = AVAudioEngine()
        audioInputNode = audioEngine.inputNode

        /// Frames per buffer
        let frameLength = UInt32(2048)
        audioBuffer = AVAudioPCMBuffer(pcmFormat: audioInputNode.outputFormat(forBus: 0),
                                       frameCapacity: frameLength)
        audioBuffer.frameLength = frameLength

        audioInputNode.installTap(onBus: 0,
                                  bufferSize: frameLength,
                                  format: audioInputNode.outputFormat(forBus: 0)) { buffer, time in
            /// We're only given one channel, so we need to extract that data into a normalized format
            let channels = UnsafeBufferPointer(start: buffer.floatChannelData,
                                               count: Int(buffer.format.channelCount))
            let floats = [Float](UnsafeBufferPointer(start: channels[0],
                                                     count: Int(buffer.frameLength)))
            for i in 0..<Int(self.audioBuffer.frameLength) {
                self.audioBuffer.floatChannelData?.pointee[i] = floats[i]
            }
        }
        try! audioEngine.start()
    }
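A usage sketch (hypothetical, not from the deck): creating the listener starts the session and installs the tap, and fftForward, defined next, can then analyze the most recent buffer:

let listener = AudioListener()
/// Later, e.g. on a timer, analyze the latest captured samples
/// (in production, access to audioBuffer should be synchronized with the tap callback)
let spectrum = listener.fftForward(listener.audioBuffer)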
    /// - Parameter buffer: Audio data in PCM format
    func fftForward(_ buffer: AVAudioPCMBuffer) -> [Float] {
        let size = Int(buffer.frameLength)

        /// Set up the transform
        let log2n = UInt(round(log2(Double(size))))
        let bufferSize = Int(1 << log2n)
        let inputCount = size / 2
        let fftSetup = vDSP_create_fftsetup(log2n, Int32(kFFTRadix2))
        /// Create the complex split value to hold the output of the transform
        var realp = [Float](repeating: 0, count: inputCount)
        var imagp = [Float](repeating: 0, count: inputCount)
        var output = DSPSplitComplex(realp: &realp, imagp: &imagp)

        /// Supplying a Hann window smooths the edges of the incoming waveform and reduces output errors from the FFT
        var window = [Float](repeating: 0, count: bufferSize)
        vDSP_hann_window(&window, vDSP_Length(bufferSize), Int32(vDSP_HANN_NORM))
        var transferBuffer = [Float](repeating: 0, count: bufferSize)
        vDSP_vmul((buffer.floatChannelData?.pointee)!, 1, window, 1,
                  &transferBuffer, 1, vDSP_Length(bufferSize))

        /// Copy the contents of the interleaved complex vector in our buffer to a split complex vector, the output value
        transferBuffer.withUnsafeBufferPointer { pointer in
            pointer.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: inputCount) { typeConvertedTransferBuffer in
                vDSP_ctoz(typeConvertedTransferBuffer, 2, &output, 1, vDSP_Length(inputCount))
            }
        }
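The excerpt ends before the transform is actually executed. The standard vDSP continuation is to run the in-place real FFT, convert the split-complex output to magnitudes, and tear down the setup. A minimal sketch of those remaining steps (the 2 / inputCount amplitude scaling is one common convention, not confirmed by the deck):

        /// Perform the forward FFT in place on the split-complex data
        vDSP_fft_zrip(fftSetup!, &output, 1, log2n, FFTDirection(FFT_FORWARD))

        /// Squared magnitude of each complex bin, then square roots to get amplitudes
        var magnitudes = [Float](repeating: 0, count: inputCount)
        vDSP_zvmags(&output, 1, &magnitudes, 1, vDSP_Length(inputCount))
        var amplitudes = [Float](repeating: 0, count: inputCount)
        var binCount = Int32(inputCount)
        vvsqrtf(&amplitudes, magnitudes, &binCount)

        /// Scale, clean up, and return the spectrum
        var scale = 2.0 / Float(inputCount)
        var normalized = [Float](repeating: 0, count: inputCount)
        vDSP_vsmul(amplitudes, 1, &scale, &normalized, 1, vDSP_Length(inputCount))
        vDSP_destroy_fftsetup(fftSetup)
        return normalized
    }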
1. Don't hold locks on the audio thread, like pthread_mutex_lock or @synchronized.
2. Don't use Objective-C/Swift on the audio thread, like [myInstance doAThing] or myInstance.something.
3. Don't allocate memory on the audio thread, like malloc(), new Abcd, or [MyClass alloc].
4. Don't do file or network IO on the audio thread, like read, write, or sendto.
http://atastypixel.com/blog/four-common-mistakes-in-audio-development/
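A sketch of what these rules look like in practice for a tap-based setup like this deck's (TapHandler is a hypothetical helper, not from the talk; AVAudioEngine taps already run off the realtime render thread, so the rules bite hardest in Audio Unit render callbacks):

import AVFoundation

final class TapHandler {
    private let analysisQueue = DispatchQueue(label: "dsp.analysis")

    func handle(buffer: AVAudioPCMBuffer) {
        guard let channel = buffer.floatChannelData?.pointee else { return }
        /// Copy the samples out. Creating an Array still allocates, so in a true
        /// render callback you would write into a preallocated ring buffer instead.
        let samples = [Float](UnsafeBufferPointer(start: channel,
                                                  count: Int(buffer.frameLength)))
        analysisQueue.async {
            /// Heavy work (FFT, file IO, logging) belongs here, not in the callback
            _ = samples
        }
    }
}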
DIGITAL SIGNAL PROCESSING WITH SWIFT
"Accelerate exposes SIMD instructions available in modern CPUs to significantly improve performance of certain calculations. Because of its relative obscurity and inconvenient APIs, Accelerate is not commonly used by developers, which is a shame, since many applications could benefit from these performance optimizations."
Mattt Thompson, https://github.com/mattt/Surge
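For a taste of the kind of win the quote describes, an element-wise multiply over 1,024 samples is a single vDSP call rather than a hand-written loop (the input arrays here are placeholder data):

import Accelerate

let a = [Float](repeating: 0.5, count: 1024)
let b = [Float](repeating: 2.0, count: 1024)
var product = [Float](repeating: 0.0, count: 1024)
/// product[i] = a[i] * b[i], vectorized with SIMD under the hood
vDSP_vmul(a, 1, b, 1, &product, 1, vDSP_Length(1024))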
INTERESTED IN LEARNING MORE?
▸ The Scientist and Engineer's Guide to Digital Signal Processing by Steven W. Smith, Ph.D. http://www.dspguide.com
▸ Ronald Nicholson (hotpaw2) https://dsp.stackexchange.com/users/154/hotpaw2
▸ Apple's vDSP Programming Guide https://developer.apple.com/library/content/documentation/Performance/Conceptual/vDSP_Programming_Guide/UsingFourierTransforms/UsingFourierTransforms.html
▸ Fourier Transforms and FFTs by Mike Ash https://www.mikeash.com/pyblog/friday-qa-2012-10-26-fourier-transforms-and-ffts.html
▸ GitHub link: https://github.com/daisyramos317/DSPSwift