My earlier post, A First Look at Metal Performance Shaders in iOS 9, is all very well and good for using Apple's new Metal Performance Shaders (MPS) framework to filter scenes generated in Metal, but how could you use MPS to filter ordinary images such as a PNG or JPG?
This is where the new MTKTextureLoader comes into play. This class hugely simplifies the effort required to create Metal textures from common image formats, and once an image is a Metal texture, we can use the code from that earlier post to apply MPS filters to it.
Let's take a simple example where we want to apply Sobel edge detection followed by a Gaussian blur to an image, flower.jpg. We're going to create a simple single view iOS 9 project and, in viewDidLoad, check that the device supports Metal:
guard let device = MTLCreateSystemDefaultDevice() else
{
    // device doesn't support Metal - handle appropriately
    return
}
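One caveat worth guarding against: not every Metal-capable device supports Metal Performance Shaders, so it's worth a second check. A minimal sketch using the framework's MPSSupportsMTLDevice() function:
// MPS isn't available on every device that runs Metal
guard MPSSupportsMTLDevice(device) else
{
    // device supports Metal but not Metal Performance Shaders
    return
}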
...if all is good, we can continue by loading an image and taking note of its size:
let image = UIImage(named: "flower.jpg")!
let imageSize = image.size
Now that we know the image size, we can create an MTKView and add it to the view hierarchy - this component will display our filtered image:
let metalView = MTKView(frame: CGRect(x: 30, y: 30, width: imageSize.width / 2, height: imageSize.height / 2))
// framebufferOnly must be false so that compute kernels such as MPS filters can write to the drawable's texture
metalView.framebufferOnly = false
view.addSubview(metalView)
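Since we're only ever going to render a single filtered frame, you may also want to switch off MTKView's continuous redraw loop. One possible approach, using the view's paused and enableSetNeedsDisplay properties:
// we draw once ourselves, so there's no need for a 60fps redraw loop
metalView.paused = true
metalView.enableSetNeedsDisplay = false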
Now for the magic: using the image's CGImage property, we create a texture loader and invoke textureWithCGImage() to create a Metal texture that represents the image:
let imageTexture: MTLTexture
let textureLoader = MTKTextureLoader(device: device)
do
{
    imageTexture = try textureLoader.textureWithCGImage(image.CGImage!, options: nil)
}
catch
{
    // unable to create texture from image
    return
}
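As an aside, MTKTextureLoader can also create a texture straight from a file URL, skipping the UIImage step entirely. A sketch of that alternative, assuming flower.jpg lives in the app bundle:
if let url = NSBundle.mainBundle().URLForResource("flower", withExtension: "jpg")
{
    // load the texture directly from the bundled file and use it in place of imageTexture
    let textureFromFile = try? textureLoader.textureWithContentsOfURL(url, options: nil)
}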
Because we're going to apply two filters, we also create an intermediate texture that acts as a bridge between the two:
let intermediateTextureDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(MTLPixelFormat.RGBA8Unorm, width: Int(imageSize.width), height: Int(imageSize.height), mipmapped: false)
let intermediateTexture = device.newTextureWithDescriptor(intermediateTextureDescriptor)
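If the Metal validation layer complains about texture usage, explicitly marking the descriptor as readable and writable by shaders may help - the default usage flags have varied between OS versions, so treat this as an optional safeguard:
// allow compute kernels to both read from and write to the texture
// (this assignment must come before the newTextureWithDescriptor call above)
intermediateTextureDescriptor.usage = [.ShaderRead, .ShaderWrite]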
We can now create our two filter objects:
// sigma is the standard deviation of the Gaussian, in pixels - 5 gives a fairly heavy blur
let blur = MPSImageGaussianBlur(device: device, sigma: 5)
let sobel = MPSImageSobel(device: device)
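The same pattern works for any of the other MPS image kernels - each one is configured at initialisation and encoded in exactly the same way. For example, a box blur or a median filter could be swapped in here:
// alternative kernels, configured at init time just like the blur and Sobel above
let box = MPSImageBox(device: device, kernelWidth: 9, kernelHeight: 9)
let median = MPSImageMedian(device: device, kernelDiameter: 3)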
Then some familiar Metal code to create a command queue and command buffer:
let commandQueue = device.newCommandQueue()
let commandBuffer = commandQueue.commandBuffer()
...and, using the intermediate texture, execute the two filters in sequence: the Sobel kernel writes its result into the intermediate texture, and the blur then writes into the metal view's currentDrawable!.texture as the final target:
sobel.encodeToCommandBuffer(commandBuffer, sourceTexture: imageTexture, destinationTexture: intermediateTexture)
blur.encodeToCommandBuffer(commandBuffer, sourceTexture: intermediateTexture, destinationTexture: metalView.currentDrawable!.texture)
Finally, we can display the image on the screen:
commandBuffer.presentDrawable(metalView.currentDrawable!)
commandBuffer.commit()
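One last point: commit() returns immediately and the GPU does its work asynchronously. That's fine when we're simply presenting the drawable, but if you wanted to read the filtered pixels back on the CPU you'd need to block until the work has finished:
// only necessary if you need the result on the CPU straight away
commandBuffer.waitUntilCompleted()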
Easy!
As a point to note: Core Image filters will actually be powered by Metal Performance Shaders in iOS 9, so it's unlikely you'll need to do this for kernels such as Gaussian blur. But this is useful knowledge if you want to cut out the Core Image layer and use MPS filters directly.