As part of a project to create a GPU-based reaction diffusion simulation, I started looking at using Metal in Swift this weekend.
I've done similar work in the past targeting the Flash Player and using AGAL. Metal is a far higher-level language than AGAL: it's based on C++ with a richer syntax, and it includes compute functions. Whereas in AGAL, to run cellular automata, I'd create a rectangle out of two triangles with a vertex shader and execute the reaction diffusion functions in a separate fragment shader, a compute shader is more direct: it can get and set textures and operate on individual pixels of those textures without the need for a vertex shader.
The Swift code I discuss in this article is based heavily on two articles at Metal By Example: Introduction to Compute Programming in Metal and Fundamentals of Image Processing in Metal. Both include Objective-C source code, so hopefully my Swift implementation will help some readers.
My application has four main steps: initialise Metal, create a Metal texture from a UIImage, apply a kernel function to that texture, then convert the newly generated texture back into a UIImage and display it. I'm using a simple example shader that changes the saturation of the input image, so I've also added a slider that changes the saturation value.
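The slider is just a UISlider wired to an action that updates the saturation value and re-runs the filter; a minimal sketch (sliderChange and applyFilter are hypothetical names for this illustration):
@IBAction func sliderChange(slider: UISlider)
{
    saturationFactor = slider.value
    applyFilter() // hypothetical helper that re-encodes and dispatches the kernel function
}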
Let's look at each step one by one:
Initialising Metal
Initialising Metal is pretty simple: inside my view controller's overridden viewDidLoad(), I create a pointer to the default Metal device:
var device: MTLDevice! = nil
[...]
device = MTLCreateSystemDefaultDevice()
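MTLCreateSystemDefaultDevice() returns nil on hardware that doesn't support Metal, so it's worth a quick check before going any further; a minimal sketch:
if device == nil
{
    // Metal isn't available, e.g. on the simulator or pre-A7 hardware
    return
}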
I also need to create a library and command queue:
defaultLibrary = device.newDefaultLibrary()
commandQueue = device.newCommandQueue()
Finally, I get a reference to my Metal function from the library and synchronously create and compile a compute pipeline state:
let kernelFunction = defaultLibrary.newFunctionWithName("kernelShader")
pipelineState = device.newComputePipelineStateWithFunction(kernelFunction!, error: nil)
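Here I'm passing nil for the error parameter; a sketch of how you might surface shader compilation problems instead:
var pipelineError: NSError?
pipelineState = device.newComputePipelineStateWithFunction(kernelFunction!, error: &pipelineError)

if pipelineState == nil
{
    println("Failed to create pipeline state: \(pipelineError?.localizedDescription)")
}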
The kernelShader points to the saturation image processing function, written in Metal, that lives in my Shaders.metal file:
kernel void kernelShader(texture2d<float, access::read> inTexture [[texture(0)]],
                         texture2d<float, access::write> outTexture [[texture(1)]],
                         constant AdjustSaturationUniforms &uniforms [[buffer(0)]],
                         uint2 gid [[thread_position_in_grid]])
{
    float4 inColor = inTexture.read(gid);

    // the pixel's luminance, using Rec. 601 weights
    float value = dot(inColor.rgb, float3(0.299, 0.587, 0.114));
    float4 grayColor(value, value, value, 1.0);

    // blend between grayscale and the original colour by the saturation factor
    float4 outColor = mix(grayColor, inColor, uniforms.saturationFactor);
    outTexture.write(outColor, gid);
}
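One thing to note: the uniforms struct referenced in the argument list also needs a Metal-side definition in Shaders.metal, mirroring the Swift struct we'll meet below; something like:
struct AdjustSaturationUniforms
{
    float saturationFactor;
};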
Creating a Metal Texture from a UIImage
There are a few steps in converting a UIImage into an MTLTexture instance. I create an array of UInt8 to hold the raw pixel data, build a CGBitmapInfo, then use CGContextDrawImage() to copy the image into a bitmap context backed by that array:
let image = UIImage(named: "grand_canyon.jpg")
let imageRef = image.CGImage
let imageWidth = CGImageGetWidth(imageRef)
let imageHeight = CGImageGetHeight(imageRef)
let bytesPerRow = bytesPerPixel * imageWidth
var rawData = [UInt8](count: Int(imageWidth * imageHeight * 4), repeatedValue: 0)
let bitmapInfo = CGBitmapInfo(CGBitmapInfo.ByteOrder32Big.toRaw() | CGImageAlphaInfo.PremultipliedLast.toRaw())
let context = CGBitmapContextCreate(&rawData, imageWidth, imageHeight, bitsPerComponent, bytesPerRow, rgbColorSpace, bitmapInfo)
CGContextDrawImage(context, CGRectMake(0, 0, CGFloat(imageWidth), CGFloat(imageHeight)), imageRef)
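For reference, the snippet above (and the code later in this article) relies on a few constants defined elsewhere in my class; assuming the RGBA8 format used throughout, they would look something like:
let bytesPerPixel = UInt(4)
let bitsPerComponent = UInt(8)
let bitsPerPixel = UInt(32)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()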
Once all of those steps have executed, I can create a new texture and use its replaceRegion() method to write the image into it:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(MTLPixelFormat.RGBA8Unorm, width: Int(imageWidth), height: Int(imageHeight), mipmapped: true)
texture = device.newTextureWithDescriptor(textureDescriptor)
let region = MTLRegionMake2D(0, 0, Int(imageWidth), Int(imageHeight))
texture.replaceRegion(region, mipmapLevel: 0, withBytes: &rawData, bytesPerRow: Int(bytesPerRow))
I also create an empty texture which the kernel function will write into:
let outTextureDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(texture.pixelFormat, width: texture.width, height: texture.height, mipmapped: false)
outTexture = device.newTextureWithDescriptor(outTextureDescriptor)
Invoking the Kernel Function
The next block of work is to set the textures and an additional parameter on the kernel function and execute the shader. The first step is to instantiate a command buffer and a command encoder:
let commandBuffer = commandQueue.commandBuffer()
let commandEncoder = commandBuffer.computeCommandEncoder()
...then set the pipeline state (which we got from device.newComputePipelineStateWithFunction() earlier) and the textures on the command encoder:
commandEncoder.setComputePipelineState(pipelineState)
commandEncoder.setTexture(texture, atIndex: 0)
commandEncoder.setTexture(outTexture, atIndex: 1)
The filter requires an additional parameter that defines the saturation amount. This is passed into the shader via an MTLBuffer. To populate the buffer, I've created a small struct:
struct AdjustSaturationUniforms
{
    var saturationFactor: Float
}
Then I use newBufferWithBytes() to pass in my saturationFactor float value:
var saturationFactor = AdjustSaturationUniforms(saturationFactor: self.saturationFactor)
var buffer: MTLBuffer = device.newBufferWithBytes(&saturationFactor, length: sizeof(AdjustSaturationUniforms), options: nil)
commandEncoder.setBuffer(buffer, offset: 0, atIndex: 0)
This is now accessible inside the shader as an argument to its kernel function:
constant AdjustSaturationUniforms &uniforms [[buffer(0)]]
Now I'm ready to invoke the function itself. Metal kernel functions use threadgroups to break up their workload into chunks. In my example, I use threadgroups of 8 × 8 (64 threads each) and dispatch enough groups to cover the whole texture:
let threadGroupCount = MTLSizeMake(8, 8, 1)
let threadGroups = MTLSizeMake(texture.width / threadGroupCount.width, texture.height / threadGroupCount.height, 1)
commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupCount)
commandEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
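One caveat with this arithmetic: the integer division silently drops the right and bottom edges of textures whose dimensions aren't exact multiples of eight. A common fix, assuming the kernel also guards gid against out-of-bounds writes, is to round the group count up:
let threadGroups = MTLSizeMake(
    (texture.width + threadGroupCount.width - 1) / threadGroupCount.width,
    (texture.height + threadGroupCount.height - 1) / threadGroupCount.height,
    1)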
Converting the Texture to a UIImage
Finally, now that the kernel function has executed, we need to do the reverse of the steps above and get the image held in outTexture into a UIImage so it can be displayed. Again, I use a region to define the size and the texture's getBytes() to populate an array of UInt8:
let imageSize = CGSize(width: texture.width, height: texture.height)
let imageByteCount = Int(imageSize.width * imageSize.height * 4)
let bytesPerRow = bytesPerPixel * UInt(imageSize.width)
var imageBytes = [UInt8](count: imageByteCount, repeatedValue: 0)
let region = MTLRegionMake2D(0, 0, Int(imageSize.width), Int(imageSize.height))
outTexture.getBytes(&imageBytes, bytesPerRow: Int(bytesPerRow), fromRegion: region, mipmapLevel: 0)
Now that imageBytes holds the raw data, it's a few lines to create a CGImage:
let providerRef = CGDataProviderCreateWithCFData(
NSData(bytes: &imageBytes, length: imageBytes.count * sizeof(UInt8))
)
let bitmapInfo = CGBitmapInfo(CGBitmapInfo.ByteOrder32Big.toRaw() | CGImageAlphaInfo.PremultipliedLast.toRaw())
let renderingIntent = kCGRenderingIntentDefault
let imageRef = CGImageCreate(UInt(imageSize.width), UInt(imageSize.height), bitsPerComponent, bitsPerPixel, bytesPerRow, rgbColorSpace, bitmapInfo, providerRef, nil, false, renderingIntent)
imageView.image = UIImage(CGImage: imageRef)
...and we're done!
Metal requires an A7 or A8 processor and this code has been built and tested under Xcode 6. All the source code is available at my GitHub repository here.
Comments

Thanks for the nice tutorial. While running some practice leveraging your example, I ran into an issue related to memory alignment. I am practising zero-copy data transfer using newBufferWithBytesNoCopy, which seems to require the memory to be aligned to a certain size. Could you please give me some advice on how to align a pointer to an Obj-C structure in Swift for creating a Metal buffer object with newBufferWithBytesNoCopy?

Thanks for the article. Any thoughts on how to apply this to an SCNScene or SCNRenderer to get barrel distortion?

Actually, I have a barrel distortion CIKernel which you can apply as a CIFilter to an SCNScene. It's part of my CRT Core Image filter and available here: https://github.com/FlexMonkey/Filterpedia/tree/master/Filterpedia/customFilters

Hi Simon, is there any chance that you could migrate your code to Swift 3 or later? Especially for the Filterpedia app? I tried converting it myself, but am running into lots of async errors that I don't know how to address (and that don't get resolved by Xcode's code migration)... I'm following along with your excellent image processing book, but my Swift know-how is a bit lacking. Thank you for all these amazing resources...