As Swift was released only a few days ago, regard this code with skepticism. For more authoritative information, there is a PDF of the original WWDC 2014 presentation, whose code examples are written in Objective-C but should be easy to translate. Prerelease documentation is available on Apple's website. All of the information here comes from those sources.
Playing Audio Programmatically in Swift
AVAudioEngine manages a graph of AVAudioNodes, which in turn generate or process sound. It is instantiated this way:
// import the library
// that has AVAudioEngine
import AVFoundation
// instantiate AVAudioEngine()
var ae = AVAudioEngine()
// we need to access AVAudioEngine's main mixer,
// an AVAudioNode subclass (AVAudioMixerNode) that
// feeds the engine's output node
var mixer = ae.mainMixerNode
// sample rate needed for sound generation
var sr:Float = Float(mixer.outputFormatForBus(0).sampleRate)
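The engine also exposes an outputNode for the audio hardware; as far as I can tell, accessing mainMixerNode implicitly creates the mixer and connects it to that output node, so no extra wiring is needed on the output side. A one-line sketch (my own addition):
// (sketch) the hardware output node; the main mixer feeds into this
var output = ae.outputNode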
In order to play audio, an AVAudioPlayerNode needs to be instantiated and then connected to the AVAudioEngine's mixer node.
var player = AVAudioPlayerNode()
// attach the player to the audio engine
ae.attachNode(player)
// connect the player to the mixer
// using the player's default output format
ae.connect(player, to: mixer, format: player.outputFormatForBus(0))
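Since the mixer sums all of its inputs, more than one player node can feed it. As a sketch (my own addition, following the same pattern as above), a second player would be attached and connected the same way:
// (sketch) a second player connected to the same mixer;
// the mixer mixes both players into one output signal
var player2 = AVAudioPlayerNode()
ae.attachNode(player2)
ae.connect(player2, to: mixer, format: player2.outputFormatForBus(0))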
The generated audio is placed in an AVAudioPCMBuffer.
// create a PCMBuffer with same format as
// player's default output bus
// the buffer's capacity will be 100 frames
var buffer = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0), frameCapacity: 100)
// importantly, the buffer's length is distinct
// from its capacity and must be set separately
buffer.frameLength = 100
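Note that 100 frames at a 44.1 kHz sample rate is exactly one period of a 441 Hz sine wave (44100 / 441 = 100), which is presumably why those numbers appear below; looping the buffer then produces a continuous tone. For a buffer sized by duration rather than by a magic number, the capacity is just duration times sample rate. A sketch, assuming a one-second buffer (the variable names here are my own):
// (sketch) one second of audio: frames = seconds * sample rate
var seconds: Float = 1.0
var capacity = AVAudioFrameCount(seconds * sr)
var oneSecondBuffer = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0), frameCapacity: capacity)
oneSecondBuffer.frameLength = capacity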
I filled the buffer in this way, which might not be correct. I am hazy about dealing with the UnsafePointer<UnsafePointer<T>> type:
// number of output channels, used as the loop's stride
var n_channels = mixer.outputFormatForBus(0).channelCount
for var i = 0; i < Int(buffer.frameLength); i += Int(n_channels) {
    var val = sinf(441.0*Float(i)*2*Float(M_PI)/sr)
    buffer.floatChannelData.memory[i] = val * 0.5
}
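For what it's worth, floatChannelData points to one sample array per channel (the standard AVAudioEngine format is deinterleaved floats), so an alternative fill, which I have not verified, would write the sine wave into every channel rather than striding through channel 0:
// (sketch, unverified) write the same sine wave into each channel;
// floatChannelData[ch] is the sample array for channel ch
for var i = 0; i < Int(buffer.frameLength); i++ {
    var val = sinf(441.0*Float(i)*2*Float(M_PI)/sr)
    for var ch = 0; ch < Int(n_channels); ch++ {
        buffer.floatChannelData[ch][i] = val * 0.5
    }
}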
Finally, one starts the AVAudioEngine, starts the player, and then schedules the AVAudioPlayerNode to play the buffer:
// you should, of course, be prepared
// to handle the error in production code
ae.startAndReturnError(nil)
player.play()
// schedule the buffer
player.scheduleBuffer(buffer, atTime: nil, options: .Loops, completionHandler: nil)
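The .Loops option makes the player repeat the buffer indefinitely. For one-shot playback with a callback instead, I believe the same method accepts nil options and a completion handler; a sketch I have not verified against the current beta:
// (sketch, unverified) play the buffer once and get notified
// when the player has consumed it
player.scheduleBuffer(buffer, atTime: nil, options: nil, completionHandler: {
    println("buffer finished playing")
})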
I made the above into a class.
import AVFoundation

class SinePlayer {
    // store persistent objects
    var ae:AVAudioEngine
    var player:AVAudioPlayerNode
    var mixer:AVAudioMixerNode
    var buffer:AVAudioPCMBuffer

    init() {
        // initialize objects
        ae = AVAudioEngine()
        player = AVAudioPlayerNode()
        mixer = ae.mainMixerNode
        buffer = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0), frameCapacity: 100)
        buffer.frameLength = 100

        // generate sine wave
        var sr:Float = Float(mixer.outputFormatForBus(0).sampleRate)
        var n_channels = mixer.outputFormatForBus(0).channelCount
        for var i = 0; i < Int(buffer.frameLength); i += Int(n_channels) {
            var val = sinf(441.0*Float(i)*2*Float(M_PI)/sr)
            buffer.floatChannelData.memory[i] = val * 0.5
        }

        // set up the audio engine
        ae.attachNode(player)
        ae.connect(player, to: mixer, format: player.outputFormatForBus(0))
        ae.startAndReturnError(nil)

        // start the player and schedule the buffer
        player.play()
        player.scheduleBuffer(buffer, atTime: nil, options: .Loops, completionHandler: nil)
    }
}
var sp = SinePlayer()
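To stop the tone, both AVAudioPlayerNode and AVAudioEngine have a stop() method, so a method like the following could be added to the class (my own, untested, addition):
// (sketch) possible addition to SinePlayer:
// stop the player, then the engine
func stop() {
    player.stop()
    ae.stop()
}
With that in place, sp.stop() would silence everything.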
Note that in this ecosystem, generating sound in real time involves using Audio Units, a topic I am still exploring.
Good luck, and sorry if there are bugs. I cannot say whether this works consistently in Xcode, but it could be helpful for getting up to speed with audio programming in Swift.