
Audio APIs on Mac OS X and iOS

Weizhong Yang

August 15, 2013

Transcript

  1. Audio-related APIs
     • AVFoundation
     • Audio Services
     • OpenAL
     • Audio Queue
     • Audio Unit
     • Audio Session
     • MPNowPlayingInfoCenter
     • Background Task
     • Remote Control Events
     • …

  2. AVAudioPlayer
     • High-level audio player
     • Available since iOS 2.2
     • Supports many formats
     • A player can be created from a file URL or from NSData, but it can only play local files
     • Well suited for game background music (a minimal usage sketch follows)

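A minimal Objective-C sketch of the AVAudioPlayer route; the file name background.mp3 and the looping setting are just examples, not from the deck:

     // #import <AVFoundation/AVFoundation.h>
     NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"background" withExtension:@"mp3"];
     NSError *error = nil;
     AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:&error];
     player.numberOfLoops = -1;   // loop forever, e.g. for game background music
     [player prepareToPlay];
     [player play];
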
  3. AVPlayer
     • High-level audio player
     • Available since iOS 4.0
     • Supports both local and remote files
     • Well suited for internet radio
     • There is no way to know the total length of the file (a minimal usage sketch follows)

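A minimal AVPlayer sketch for a remote stream; the URL is a placeholder:

     // #import <AVFoundation/AVFoundation.h>
     NSURL *streamURL = [NSURL URLWithString:@"http://example.com/radio.mp3"];  // placeholder URL
     AVPlayer *player = [AVPlayer playerWithURL:streamURL];
     [player play];
     // for a live stream there is no meaningful total duration to read back
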
  4. Audio Session
     • Declares which kind of audio your app is currently producing
     • Set the Audio Session category
     • Ambient / ordinary background, Media Playback… etc.
     • Then start the audio session
     • Both C and Objective-C APIs are available (an Objective-C sketch follows)

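A minimal sketch of the Objective-C side, assuming the playback category is the one wanted; the C Audio Session API achieves the same thing:

     // #import <AVFoundation/AVFoundation.h>
     NSError *error = nil;
     AVAudioSession *session = [AVAudioSession sharedInstance];
     [session setCategory:AVAudioSessionCategoryPlayback error:&error];  // media playback
     [session setActive:YES error:&error];                               // "start" the session
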
  5. Audio Session
     • You also have to handle interruption and resume
     • Implement the delegate and update the UI (a sketch follows)
     • Interruptions happen when:
       • another app starts playing music
       • a phone call comes in
       • an alarm goes off…

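A sketch of the delegate-based handling the deck refers to (AVAudioSessionDelegate, which was later superseded by interruption notifications); the method bodies are placeholders:

     - (void)beginInterruption
     {
         // another app, a phone call or an alarm took over the audio hardware:
         // pause playback and update the UI here
     }

     - (void)endInterruptionWithFlags:(NSUInteger)flags
     {
         if (flags & AVAudioSessionInterruptionFlags_ShouldResume) {
             // reactivate the session, resume playback, update the UI
         }
     }
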
  6. Basic concepts
     • Audio data is a continuous stream of binary data
     • It consists of consecutive samples/frames; at 44.1 kHz there are 44100 samples per second
     • Common compressed formats such as MP3/AAC group a number of frames into one packet
     • In a VBR (variable bit rate) file, the amount of data in each packet differs

  7. Some terminology
     • Sample rate: how many samples per second
     • Packet size: how many samples per packet
     • Bit rate: how many bits per second

  8. The complete flow
     • Open a network connection and read data in the connection callback
     • Decrypt, and put the decrypted data into memory
     • Create an Audio Queue or an Audio Unit Graph, and handle the callbacks in which the system asks for the next chunk of data
     • Supply the data

  9. We will focus on
     • Reading the data into memory
     • Parsing out the packets and keeping them
     • Feeding the data to the Audio API
     • Registering callbacks
     • Returning the next chunk of data inside the callback
     • The data has to be converted into Linear PCM format

  10. What is the difference between the Audio Unit and Audio Queue APIs?
     • In the flow on the previous slide, the biggest difference is that with the Audio Queue API you do not have to convert the data into Linear PCM yourself
     • Audio Unit is harder to set up
     • But it lets you manipulate the Linear PCM data directly…
     • With Audio Unit you can also add mixer and EQ effect nodes to the Audio Graph

  11. [Diagram: layout of an MP3 file — ID3 data at the front, followed by repeating packets, each made of an MP3 header plus MP3 data]

  12. Recognizing the MP3 header
     • An MP3 header is 4 bytes long
     • Its first 11 bits are the sync word (all 1s); when you see the sync word you know you are at the start of a packet
     • The bits after the sync word describe the format of this packet
     • http://www.mp3-tech.org/programmer/frame_header.html

  13. Sample packet parser

     def parse(content):
         foundFirstFrame = False
         i = 0
         while i + 2 < len(content):
             frameSync = (content[i] << 8) | (content[i + 1] & (0x80 | 0x40 | 0x20))
             if frameSync != 0xffe0:
                 if foundFirstFrame:
                     pass
                 i += 1
                 continue
             if not foundFirstFrame:
                 foundFirstFrame = True
             audioVersion = (content[i + 1] >> 3) & 0x03
             layer = (content[i + 1] >> 1) & 0x03
             hasCRC = not (content[i + 1] & 0x01)
             bitrateIndex = content[i + 2] >> 4
             sampleRateIndex = (content[i + 2] >> 2) & 0x03
             bitrate = [0, 32000, 40000, 48000, 56000, 64000, 80000, 96000,
                        112000, 128000, 160000, 192000, 224000, 256000, 320000, 0][bitrateIndex]
             hasPadding = not (not ((content[i + 2] >> 1) & 0x01))
             # frame length in bytes for MPEG-1 Layer III at 44100 Hz
             frameLength = 144 * bitrate // 44100 + \
                 (1 if hasPadding else 0) + \
                 (2 if hasCRC else 0)
             i += frameLength

  14. CoreAudio already gives us an audio parser
     • Its job is to find the packets and work out the file format
     • C API
     • Available since iOS 2 / Mac OS X 10.5
     • For local files, call AudioFileOpenURL
     • For streamed data, call AudioFileStreamOpen

  15. AudioFileStreamOpen

     AudioFileStreamOpen(self,
         ZBAudioFileStreamPropertyListener,
         ZBAudioFileStreamPacketsCallback,
         kAudioFileMP3Type,
         &audioFileStreamID);

     • self: an object the callbacks can use (passed back as the client data)
     • ZBAudioFileStreamPropertyListener: the callback for the file format
     • ZBAudioFileStreamPacketsCallback: the callback that receives the parsed packets
     • kAudioFileMP3Type: a hint for the parser
     • audioFileStreamID: receives the newly created audio file stream ID

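Once the stream is open, every chunk arriving from the network is handed to the parser. A sketch, assuming the bytes arrive in an NSURLConnection delegate callback and that audioFileStreamID is kept as an instance variable:

     - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
     {
         // feed the raw bytes to the parser; it invokes the two callbacks
         // registered in AudioFileStreamOpen as it discovers the format and the packets
         OSStatus status = AudioFileStreamParseBytes(audioFileStreamID,
             (UInt32)[data length], [data bytes], 0);
         assert(status == noErr);
     }
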
  16. Keeping the packets in memory?
     • A very simple structure is all we need:

     typedef struct {
         size_t length;   // length of the packet
         void *data;      // pointer to the packet data
     }

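The slides do not show how the packet array itself is allocated; one plausible approach (purely an assumption, with a hypothetical ZBPacketData name for the struct above and a packetCapacity counter) is to grow it with realloc as packets come in:

     typedef struct {
         size_t length;
         void *data;
     } ZBPacketData;   // hypothetical name, not on the slide

     // before storing a new packet, grow the array in chunks (sketch only)
     if (self->packetCount >= self->packetCapacity) {
         self->packetCapacity += 2048;
         self->packetData = realloc(self->packetData,
             self->packetCapacity * sizeof(ZBPacketData));
     }
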
  17. Keeping the file format

     void ZBAudioFileStreamPropertyListener(void *inClientData,
         AudioFileStreamID inAudioFileStream,
         AudioFileStreamPropertyID inPropertyID,
         UInt32 *ioFlags)
     {
         ZBSimplePlayer *self = (ZBSimplePlayer *)inClientData;
         if (inPropertyID == kAudioFileStreamProperty_DataFormat) {
             UInt32 dataSize = 0;
             OSStatus status = 0;
             AudioStreamBasicDescription audioStreamDescription;
             Boolean writable = false;
             status = AudioFileStreamGetPropertyInfo(inAudioFileStream,
                 kAudioFileStreamProperty_DataFormat, &dataSize, &writable);
             status = AudioFileStreamGetProperty(inAudioFileStream,
                 kAudioFileStreamProperty_DataFormat, &dataSize, &audioStreamDescription);
             // then keep audioStreamDescription around for later use
         }
     }

  18. Keeping the packets

     static void ZBAudioFileStreamPacketsCallback(void *inClientData,
         UInt32 inNumberBytes,
         UInt32 inNumberPackets,
         const void *inInputData,
         AudioStreamPacketDescription *inPacketDescriptions)
     {
         ZBSimplePlayer *self = (ZBSimplePlayer *)inClientData;
         for (int i = 0; i < inNumberPackets; ++i) {
             SInt64 packetStart = inPacketDescriptions[i].mStartOffset;
             UInt32 packetSize = inPacketDescriptions[i].mDataByteSize;
             assert(packetSize > 0);
             self->packetData[self->packetCount].length = (size_t)packetSize;
             self->packetData[self->packetCount].data = malloc(packetSize);
             memcpy(self->packetData[self->packetCount].data,
                 (const char *)inInputData + packetStart, packetSize);
             self->packetCount++;
         }
     }

  19. Next step
     • To get smooth playback, we only start calling the Audio API once a certain number of packets have been collected; with too little data you get audible glitches
     • Waiting for packets is called buffering
     • Converting data into time: packet count * frames per packet / sample rate
     • 100 * 1152 / 44100 = 2.61… seconds (a small helper is sketched below)

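The same calculation written out as a tiny helper; 1152 frames per packet is the MPEG-1 Layer III value the deck uses, and the method name is made up:

     - (double)bufferedSeconds
     {
         // packet count * frames per packet / sample rate
         return (double)self->packetCount * 1152.0 / 44100.0;
     }
     // e.g. 100 packets -> 100 * 1152 / 44100 = 2.61 seconds of audio
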
  20. Playback with Audio Queue
     • On every callback, supply a new buffer struct
     • The buffer contains:
       • the number of packets
       • a pointer to the packet data
       • the packet format
     • Enqueue the buffer into the queue

  21. Playback with Audio Unit
     • The Audio Unit API hands you a pointer to a struct called ioData, together with a fixed number of frames
     • You fill ioData with the requested amount of Linear PCM data

  22. Creating the Audio Queue

     OSStatus status = AudioQueueNewOutput(&audioStreamBasicDescription,
         ZBAudioQueueOutputCallback, self,
         CFRunLoopGetCurrent(), kCFRunLoopCommonModes,
         0, &outputQueue);
     assert(status == noErr);

  23. Enqueueing data

     AudioQueueBufferRef buffer;
     status = AudioQueueAllocateBuffer(outputQueue, totalSize, &buffer);
     assert(status == noErr);
     buffer->mAudioDataByteSize = totalSize;
     buffer->mUserData = self;

     AudioStreamPacketDescription *packetDescs =
         calloc(inPacketCount, sizeof(AudioStreamPacketDescription));
     totalSize = 0;
     for (index = 0; index < inPacketCount; index++) {
         size_t readIndex = index + readHead;
         memcpy((char *)buffer->mAudioData + totalSize,
             packetData[readIndex].data, packetData[readIndex].length);
         AudioStreamPacketDescription description;
         description.mStartOffset = totalSize;
         description.mDataByteSize = (UInt32)packetData[readIndex].length;
         description.mVariableFramesInPacket = 0;
         totalSize += packetData[readIndex].length;
         memcpy(&(packetDescs[index]), &description, sizeof(AudioStreamPacketDescription));
     }
     status = AudioQueueEnqueueBuffer(outputQueue, buffer, (UInt32)inPacketCount, packetDescs);
     free(packetDescs);

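Slide 23 only enqueues the buffer; once enough buffers are queued (the buffering threshold from slide 19), playback would typically be started with AudioQueueStart, roughly like this:

     status = AudioQueueStart(outputQueue, NULL);   // NULL start time = start as soon as possible
     assert(status == noErr);
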
  24. [Diagram: an Audio Unit Graph — a render callback feeds a mixer node, then an effect node, then the output node; "Get Info" on each node yields the underlying mixer unit, effect unit and output unit]

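Before the converter used on the next slide can be called, it has to be created, from the compressed format found by the file-stream parser to the Linear PCM format the output unit expects. A minimal sketch; the variable names are assumptions:

     AudioConverterRef converter = NULL;
     // audioStreamDescription: the MP3 format saved in the property listener (slide 17)
     // destFormat: the Linear PCM format set on the output unit (slide 27)
     OSStatus status = AudioConverterNew(&audioStreamDescription, &destFormat, &converter);
     assert(status == noErr);
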
  25. Using the audio converter

     AudioBufferList *list;      // typically the ioData buffer list handed to the render callback
     UInt32 packetSize = 1024;
     AudioConverterFillComplexBuffer(converter,
         ZBPlayerConverterFiller, self,
         &packetSize, list, NULL);

  26. The audio converter callback

     OSStatus ZBPlayerConverterFiller(AudioConverterRef inAudioConverter,
         UInt32 *ioNumberDataPackets,
         AudioBufferList *ioData,
         AudioStreamPacketDescription **outDataPacketDescription,
         void *inUserData)
     {
         ZBSimpleAUPlayer *self = (ZBSimpleAUPlayer *)inUserData;
         static AudioStreamPacketDescription aspdesc;

         *ioNumberDataPackets = 1;
         ioData->mNumberBuffers = 1;
         void *data = self->packetData[self->readHead].data;
         UInt32 length = (UInt32)self->packetData[self->readHead].length;
         ioData->mBuffers[0].mData = data;
         ioData->mBuffers[0].mDataByteSize = length;

         *outDataPacketDescription = &aspdesc;
         aspdesc.mDataByteSize = length;
         aspdesc.mStartOffset = 0;
         aspdesc.mVariableFramesInPacket = 1;

         self->readHead++;
         return noErr;
     }

  27. NewAUGraph(&audioGraph);                             // create the audio graph

     AudioComponentDescription cdesc;
     bzero(&cdesc, sizeof(AudioComponentDescription));
     cdesc.componentType = kAudioUnitType_Output;
     cdesc.componentSubType = kAudioUnitSubType_DefaultOutput;
     cdesc.componentManufacturer = kAudioUnitManufacturer_Apple;
     cdesc.componentFlags = 0;
     cdesc.componentFlagsMask = 0;
     AUGraphAddNode(audioGraph, &cdesc, &outputNode);      // create the output node
     AUGraphOpen(audioGraph);
     AUGraphNodeInfo(audioGraph, outputNode, &cdesc, &outputUnit);  // get the output unit

     AudioStreamBasicDescription destFormat = LFPCMStreamDescription();  // the Linear PCM output format
     AudioUnitSetProperty(outputUnit, kAudioUnitProperty_StreamFormat,
         kAudioUnitScope_Input, 0, &destFormat, sizeof(destFormat));

     AURenderCallbackStruct callbackStruct;
     callbackStruct.inputProc = ZBPlayerAURenderCallback;
     callbackStruct.inputProcRefCon = self;
     AudioUnitSetProperty(outputUnit, kAudioUnitProperty_SetRenderCallback,
         kAudioUnitScope_Input, 0, &callbackStruct, sizeof(callbackStruct));  // install the render callback

     AUGraphInitialize(audioGraph);                        // initialize the audio graph
     CAShow(audioGraph);                                   // print the graph for debugging
     AUGraphStart(audioGraph);                             // start the audio graph

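ZBPlayerAURenderCallback, registered above, is not shown in the deck. A minimal sketch of what such a render callback might look like, assuming the converter is kept in an instance variable and pulls packets through ZBPlayerConverterFiller from slide 26:

     OSStatus ZBPlayerAURenderCallback(void *inRefCon,
         AudioUnitRenderActionFlags *ioActionFlags,
         const AudioTimeStamp *inTimeStamp,
         UInt32 inBusNumber,
         UInt32 inNumberFrames,
         AudioBufferList *ioData)
     {
         ZBSimpleAUPlayer *self = (ZBSimpleAUPlayer *)inRefCon;
         // for a Linear PCM destination, one packet equals one frame,
         // so ask the converter for inNumberFrames packets of output
         UInt32 packetCount = inNumberFrames;
         OSStatus status = AudioConverterFillComplexBuffer(self->converter,
             ZBPlayerConverterFiller, self, &packetCount, ioData, NULL);
         return status;
     }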