[Deprecated] Swift and Current Limitations of Audio Unit Implementations

June 16, 2014

UPDATE (07/31/2014): Apple has since modified how pointers are handled in Swift, which might allow the use of user-provided callbacks in Core Audio. I have been up to my neck in dissertation work, so I have not been able to test this. Good luck!

As of this writing (June 16, 2014), Swift is less capable than Objective-C for real-time audio.

Generally, a real-time audio API relies on user-provided (that is, programmer-provided) callbacks. The programmer writes a function that generates audio, and a reference to that function is then passed to an initialization function of the audio API. Something like this usually occurs:

// the callback function: fills the output buffer with audio samples
void generate_audio(int n_frames, float *output_samples){
    for(int i = 0; i < n_frames; i++){
        // next_sample() stands in for whatever computes a single sample
        output_samples[i] = next_sample(i);
    }
}

int main(){
    // initialize the (hypothetical) audio API with the callback above
    realtime_audio_api_init_with_callback(generate_audio);
    return 0;
}

The same thing happens with Core Audio: you create an audio unit, attach it to a processing graph, and provide a callback using the function AUGraphSetNodeInputCallback. Instead of passing the callback directly to that function, you pass an AURenderCallbackStruct, which contains a reference to the audio-generating callback.
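In C or Objective-C, that plumbing looks roughly like the sketch below. Creating the graph and its nodes is omitted, and the names MyRenderCallback, graph, and outputNode are placeholders of mine, not part of the API:

#include <AudioToolbox/AudioToolbox.h>

// a render callback with the signature Core Audio expects (AURenderCallback)
static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData){
    // fill ioData->mBuffers[...] with inNumberFrames worth of samples here
    return noErr;
}

// ... later, after the graph and its output node have been created ...
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc       = MyRenderCallback; // the callback reference
callbackStruct.inputProcRefCon = NULL;             // optional user data pointer
AUGraphSetNodeInputCallback(graph, outputNode, 0, &callbackStruct);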

Comparing the Swift and Objective-C documentation for AURenderCallbackStruct, one notices that in Swift there is no inputProc field, which is the field that holds the reference to the callback. This is particularly odd, given that an AURenderCallbackStruct is only useful if one is going to provide a callback.
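For reference, the struct as declared in the C headers carries both fields, and it is the first one that matters here:

typedef struct AURenderCallbackStruct {
    AURenderCallback    inputProc;        // the audio-generating callback
    void                *inputProcRefCon; // user data handed back to the callback
} AURenderCallbackStruct;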

One can only speculate as to why the implementation is incomplete. Swift is still in beta, so there are sure to be some bugs and kinks. Ultimately, for real-time audio, Swift is not (as of June 16, 2014) an Objective-C killer. Correct me if I am wrong.