As part of a project to create a GPU-based reaction diffusion simulation, I started to look at using Metal in Swift this weekend.

I've done similar work in the past targeting the Flash Player and using AGAL. Metal is a far higher-level language than AGAL: it's based on C++ with a richer syntax and includes compute functions. Whereas in AGAL, to run cellular automata, I'd create a rectangle out of two triangles with a vertex shader and execute the reaction diffusion functions in a separate fragment shader, a compute shader is more direct: I can get and set textures, and it can operate on individual pixels of that texture without the need for a vertex shader.

The Swift code I discuss in this article is based heavily on two articles at Metal By Example: Introduction to Compute Programming in Metal and Fundamentals of Image Processing in Metal. Both include Objective-C source code, so hopefully my Swift implementation will help some readers.

My application has four main steps: initialise Metal; create a Metal texture from a UIImage; apply a kernel function to that texture; and convert the newly generated texture back into a UIImage and display it. I'm using a simple example shader that changes the saturation of the input image, so I've also added a slider that changes the saturation value.
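
The snippets below refer to a handful of properties on my view controller. Gathered together (the property names all come from the code in this article, but the declarations themselves are my reconstruction), they look something like this:

    import UIKit
    import Metal

    class ViewController: UIViewController
    {
        @IBOutlet var imageView: UIImageView!

        // Core Metal objects, created once at start up
        var device: MTLDevice! = nil
        var defaultLibrary: MTLLibrary! = nil
        var commandQueue: MTLCommandQueue! = nil
        var pipelineState: MTLComputePipelineState! = nil

        // Source and destination textures for the kernel function
        var texture: MTLTexture! = nil
        var outTexture: MTLTexture! = nil

        // Driven by the slider
        var saturationFactor: Float = 1.0
    }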

Let's look at each step one by one:

Initialising Metal

Initialising Metal is pretty simple: inside my view controller's overridden viewDidLoad(), I create a pointer to the default Metal device:

    var device: MTLDevice! = nil
    [...]
    device = MTLCreateSystemDefaultDevice()
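
Since Metal isn't supported everywhere, it may be worth guarding that call rather than assuming a device exists. This check is my own addition, not part of the original project:

    if let defaultDevice = MTLCreateSystemDefaultDevice()
    {
        device = defaultDevice
    }
    else
    {
        // Pre-A7 hardware (and, at the time of writing, the simulator)
        // has no Metal device, so bail out gracefully rather than crash.
        println("Metal is not supported on this device")
    }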

I also need to create a library and command queue:

    defaultLibrary = device.newDefaultLibrary()
    commandQueue = device.newCommandQueue()

Finally, I get a reference to my Metal function from the library and synchronously create and compile a compute pipeline state:

    let kernelFunction = defaultLibrary.newFunctionWithName("kernelShader")
    pipelineState = device.newComputePipelineStateWithFunction(kernelFunction!, error: nil)

The kernelShader points to the saturation image processing function, written in Metal, that lives in my Shaders.metal file:

    kernel void kernelShader(texture2d<float, access::read> inTexture [[texture(0)]],
                             texture2d<float, access::write> outTexture [[texture(1)]],
                             constant AdjustSaturationUniforms &uniforms [[buffer(0)]],
                             uint2 gid [[thread_position_in_grid]])
    {
        float4 inColor = inTexture.read(gid);
        
        // Rec. 601 luma weights give the grayscale value of this pixel...
        float value = dot(inColor.rgb, float3(0.299, 0.587, 0.114));
        float4 grayColor(value, value, value, 1.0);
        
        // ...then mix() blends from gray back to the original color
        // as the saturation factor rises from zero to one
        float4 outColor = mix(grayColor, inColor, uniforms.saturationFactor);
        outTexture.write(outColor, gid);
    }
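
Note that the shader references AdjustSaturationUniforms, so Shaders.metal also needs a Metal-side definition of that struct (as in the Metal By Example originals), mirroring the Swift struct shown later:

    struct AdjustSaturationUniforms
    {
        float saturationFactor;
    };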

Creating a Metal Texture from a UIImage

There are a few steps in converting a UIImage into an MTLTexture instance: I create an array of UInt8 to hold the raw pixel data, define a CGBitmapInfo describing the byte layout, then use CGContextDrawImage() to copy the image into a bitmap context backed by that array:

    let image = UIImage(named: "grand_canyon.jpg")
    let imageRef = image.CGImage
        
    let imageWidth = CGImageGetWidth(imageRef)
    let imageHeight = CGImageGetHeight(imageRef)

    // Four bytes (RGBA) per pixel, eight bits per component
    let bytesPerPixel = UInt(4)
    let bitsPerComponent = UInt(8)
    let bytesPerRow = bytesPerPixel * imageWidth
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        
    var rawData = [UInt8](count: Int(imageWidth * imageHeight * 4), repeatedValue: 0)
  
    let bitmapInfo = CGBitmapInfo(CGBitmapInfo.ByteOrder32Big.toRaw() | CGImageAlphaInfo.PremultipliedLast.toRaw())

    let context = CGBitmapContextCreate(&rawData, imageWidth, imageHeight, bitsPerComponent, bytesPerRow, rgbColorSpace, bitmapInfo)
        
    CGContextDrawImage(context, CGRectMake(0, 0, CGFloat(imageWidth), CGFloat(imageHeight)), imageRef)

Once all of those steps have executed, I can create a new texture and use its replaceRegion() method to write the image into it:

    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(MTLPixelFormat.RGBA8Unorm, width: Int(imageWidth), height: Int(imageHeight), mipmapped: true)
        
    texture = device.newTextureWithDescriptor(textureDescriptor)

    let region = MTLRegionMake2D(0, 0, Int(imageWidth), Int(imageHeight))
    texture.replaceRegion(region, mipmapLevel: 0, withBytes: &rawData, bytesPerRow: Int(bytesPerRow))

I also create an empty texture which the kernel function will write into:

    let outTextureDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(texture.pixelFormat, width: texture.width, height: texture.height, mipmapped: false)
    outTexture = device.newTextureWithDescriptor(outTextureDescriptor)

Invoking the Kernel Function

The next block of work is to set the textures and an additional parameter on the kernel function and execute the shader. The first step is to instantiate a command buffer and a command encoder:

    let commandBuffer = commandQueue.commandBuffer()
    let commandEncoder = commandBuffer.computeCommandEncoder()

...then set the pipeline state (which we got from device.newComputePipelineStateWithFunction() earlier) and the textures on the command encoder:

    commandEncoder.setComputePipelineState(pipelineState)
    commandEncoder.setTexture(texture, atIndex: 0)
    commandEncoder.setTexture(outTexture, atIndex: 1)

The filter requires an additional parameter that defines the saturation amount. This is passed into the shader via an MTLBuffer. To populate the buffer, I've created a small struct:

    struct AdjustSaturationUniforms 
    {
        var saturationFactor: Float
    }

Then I use newBufferWithBytes() to pass in my saturationFactor float value:

    var saturationFactor = AdjustSaturationUniforms(saturationFactor: self.saturationFactor)
    var buffer: MTLBuffer = device.newBufferWithBytes(&saturationFactor, length: sizeof(AdjustSaturationUniforms), options: nil)
    commandEncoder.setBuffer(buffer, offset: 0, atIndex: 0)

This is now accessible inside the shader as an argument to its kernel function:

    constant AdjustSaturationUniforms &uniforms [[buffer(0)]]

Now I'm ready to invoke the function itself. Metal kernel functions use thread groups to break up their workload into chunks. In my example, each thread group covers an 8 × 8 block of 64 pixels, and I create enough groups to tile the whole texture before sending them off to the GPU:

    let threadGroupCount = MTLSizeMake(8, 8, 1)
    let threadGroups = MTLSizeMake(texture.width / threadGroupCount.width, texture.height / threadGroupCount.height, 1)
        
    commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupCount)
    commandEncoder.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
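
One caveat: that integer division only covers the whole texture when its dimensions are exact multiples of 8. A small tweak of my own (not from the original articles) rounds the group count up instead:

    let threadGroupCount = MTLSizeMake(8, 8, 1)
    let threadGroups = MTLSizeMake(
        (texture.width + threadGroupCount.width - 1) / threadGroupCount.width,
        (texture.height + threadGroupCount.height - 1) / threadGroupCount.height,
        1)

With this version, the kernel function would also need an early return for any thread whose gid lands outside the texture bounds.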

Converting the Texture to a UIImage

Finally, now that the kernel function has executed, we need to do the reverse of the above and get the image held in outTexture into a UIImage so it can be displayed. Again, I use a region to define the size and the texture's getBytes() to populate an array of UInt8:

    let imageSize = CGSize(width: texture.width, height: texture.height)
    let imageByteCount = Int(imageSize.width * imageSize.height * 4)
        
    let bytesPerRow = bytesPerPixel * UInt(imageSize.width)
    var imageBytes = [UInt8](count: imageByteCount, repeatedValue: 0)
    let region = MTLRegionMake2D(0, 0, Int(imageSize.width), Int(imageSize.height))
        
    outTexture.getBytes(&imageBytes, bytesPerRow: Int(bytesPerRow), fromRegion: region, mipmapLevel: 0)

Now that imageBytes holds the raw data, it's a few lines to create a CGImage:

    let providerRef = CGDataProviderCreateWithCFData(
            NSData(bytes: &imageBytes, length: imageBytes.count * sizeof(UInt8))
        )
        
    let bitmapInfo = CGBitmapInfo(CGBitmapInfo.ByteOrder32Big.toRaw() | CGImageAlphaInfo.PremultipliedLast.toRaw())
    let bitsPerPixel = bytesPerPixel * 8
    let renderingIntent = kCGRenderingIntentDefault
        
    let imageRef = CGImageCreate(UInt(imageSize.width), UInt(imageSize.height), bitsPerComponent, bitsPerPixel, bytesPerRow, rgbColorSpace, bitmapInfo, providerRef, nil, false, renderingIntent)
        
    imageView.image = UIImage(CGImage: imageRef)

...and we're done! 

Metal requires an A7 or A8 processor and this code has been built and tested under Xcode 6. All the source code is available at my GitHub repository here.


Comments

  1. Anonymous: Thanks for the nice tutorial. While running some practice leveraging your example, I ran into an issue related to memory alignment. I am practicing zero-copy data transfer by using newBufferWithBytesNoCopy, which seems to require the memory to be aligned to a certain size. Could you please give me some advice on how to align a pointer to an Obj-C structure in Swift for creating a Metal buffer object with newBufferWithBytesNoCopy?

  2. Anonymous: Thanks for the article. Any thoughts on how to apply this to an SCNScene or SCNRenderer to get barrel distortion?

  3. Actually, I have a barrel distortion CIKernel which you can apply as a CIFilter to an SCNScene. It's part of my CRT Core Image filter and available here: https://github.com/FlexMonkey/Filterpedia/tree/master/Filterpedia/customFilters

  4. Hi Simon, is there any chance that you could migrate your code to Swift 3 or later? Especially for the Filterpedia app? I tried converting it myself, but am running into lots of async errors that I don't know how to address (and that don't get resolved by Xcode's code migration). I'm following along with your excellent image processing book, but my Swift know-how is a bit lacking. Thank you for all these amazing resources.


It's been a fairly busy few months at my "proper" job, so my recreational Houdini tinkering has taken a bit of a back seat. However, when I saw my Swarm Chemistry hero, Hiroki Sayama, tweeting a link to How a life-like system emerges from a simple particle motion law, I thought I'd dust off Houdini to see if I could implement this model in VEX.

The paper discusses a simple particle system, named Primordial Particle Systems (PPS), that leads to life-like structures through morphogenesis. Each particle in the system is defined by its position and heading and, with each step in the simulation, alters its heading based on the PPS rule and moves forward at a defined speed. The heading is updated based on the number of neighbors to the particle's left and right. 
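
Expressed as a formula, each particle's heading changes per timestep by:

    Δφ = α + β · N · sign(R - L)

where L and R are the neighbor counts on the particle's left and right, N = L + R, and α and β are fixed parameters of the model. This is exactly the update that appears in the wrangle code below.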

The project setup is super simple:

Inside a geometry node, I create a grid and randomly scatter 19,000 points across it. An attribute wrangle node assigns a random value to @angle:

    @angle = PI * 2 * rand(@ptnum);
The real magic happens in another attribute wrangle, inside the solver node.

In a nutshell, my VEX code iterates over each point's neighbors and counts how many lie to its left and how many to its right. To figure out the chirality, I use some simple trigonometry: I rotate the vector from the current particle to the neighbor by the current particle's angle, then calculate the angle of the rotated vector.

    // L and R count the neighbors to the particle's left and right.
    // The point cloud setup here is assumed (not shown in the original
    // post): 'radius' and 'maxParticles' would be parameters on the wrangle.
    int L = 0;
    int R = 0;
    int pointCloud = pcopen(0, "P", @P, radius, maxParticles);

    while (pciterate(pointCloud)) {

        vector otherPosition;
        pcimport(pointCloud, "P", otherPosition);

        // Offset to the neighbor, rotated by -@angle so the particle's
        // heading lies along the positive x axis
        vector2 offsetPosition = set(otherPosition.x - @P.x, otherPosition.z - @P.z);
        float xx = offsetPosition.x * cos(-@angle) - offsetPosition.y * sin(-@angle);
        float yy = offsetPosition.x * sin(-@angle) + offsetPosition.y * cos(-@angle);
        
        float otherAngle = atan2(yy, xx);

        // Positive rotated angles lie to the left, negative to the right
        if (otherAngle >= 0) {
            L++;
        } 
        else {
            R++;
        }   
    }

After iterating over the nearby particles, I update the angle based on the PPS rule:

    float N = float(L + R);
    @angle += alpha + beta * N * sign(R - L);

...and, finally, I can update the particle's position based on its angle and speed:

    vector velocity = set(cos(@angle) * @speed, 0.0, sin(@angle) * @speed);
    @P += velocity;

Not quite finally, because to make things pretty, I update the color using the number of neighbors to control hue:

    @Cd = hsvtorgb(N / maxParticles, 1.0, 1.0);

Easy!

Solitons Emerging from Tweaked Model

I couldn't help tinkering with the published PPS math by making the speed a function of the number of local neighbors:

    @speed = 1.5 * (N / maxParticles);

In the video above, alpha is 182° and beta is -13°.

References

Schmickl, T. et al. How a life-like system emerges from a simple particle motion law. Sci. Rep. 6, 37969; doi: 10.1038/srep37969 (2016).


Comments

  1. ok. I've got to finish current job, then crash course in programming, and ... this is very inspirational!
