Hidden away in Core Image's Geometry Adjustment category are a set of perspective-related filters that change the geometry of flat images to simulate them being viewed in 3D space. If you work in architecture or out-of-home advertising, these filters, used in conjunction with Core Image's rectangle detector, are perfect for mapping images onto 3D surfaces. Alternatively, the filters can synthesise the effects of a perspective control lens.


Project Assets

This post comes with a companion Swift playground which is available here. The two assets we'll use are this picture of a billboard:



...and this picture of The Mona Lisa:



The assets are declared as:

    let monaLisa = CIImage(image: UIImage(named: "monalisa.jpg")!)!
    let backgroundImage = CIImage(image: UIImage(named: "background.jpg")!)!

Detecting the Target Rectangle

Our first task is to find the coordinates of the corners of the white rectangle, and for that we'll use a CIDetector. The detector needs a Core Image context and will return a CIRectangleFeature. In real life, there's no guarantee that it will not return nil; in the playground, with known assets, we can live life on the edge and unwrap it with a !.


    let ciContext = CIContext()

    let detector = CIDetector(ofType: CIDetectorTypeRectangle,
        context: ciContext,
        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

    let rect = detector.featuresInImage(backgroundImage).first as! CIRectangleFeature

Performing the Perspective Transform

Now we have the four points that define the corners of the white billboard, we can apply those, along with the background input image, to a perspective transform filter. The perspective transform moves an image's original corners to a new set of coordinates and maps the pixels of the image accordingly: 


    let perspectiveTransform = CIFilter(name: "CIPerspectiveTransform")!


    perspectiveTransform.setValue(CIVector(CGPoint:rect.topLeft),
        forKey: "inputTopLeft")
    perspectiveTransform.setValue(CIVector(CGPoint:rect.topRight),
        forKey: "inputTopRight")
    perspectiveTransform.setValue(CIVector(CGPoint:rect.bottomRight),
        forKey: "inputBottomRight")
    perspectiveTransform.setValue(CIVector(CGPoint:rect.bottomLeft),
        forKey: "inputBottomLeft")
    perspectiveTransform.setValue(monaLisa,
        forKey: kCIInputImageKey)

The output image of the perspective transform filter now looks like this:
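To actually see this intermediate result in the playground, the filter's output needs rendering through the Core Image context. A minimal sketch, assuming a UIKit environment and reusing `ciContext` from above (`transformedPreview` is a name introduced here for illustration):

    // Render the filter's output to a CGImage via the existing context...
    let cgImage = ciContext.createCGImage(perspectiveTransform.outputImage!,
        fromRect: perspectiveTransform.outputImage!.extent)

    // ...and wrap it in a UIImage for display in the playground timeline.
    let transformedPreview = UIImage(CGImage: cgImage)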




We can now use a source-atop compositing filter to composite the perspective-transformed Mona Lisa over the background:


    let composite = CIFilter(name: "CISourceAtopCompositing")!

    composite.setValue(backgroundImage,
        forKey: kCIInputBackgroundImageKey)
    composite.setValue(perspectiveTransform.outputImage!,
        forKey: kCIInputImageKey)

The result is OK, but the aspect ratio of the transformed image is wrong, and the Mona Lisa is stretched:




Fixing Aspect Ratio with Perspective Correction

To fix the aspect ratio, we'll use Core Image's perspective correction filter. This filter works in the opposite way to a perspective transform: it takes four points (which typically map to the corners of an image subject to perspective distortion) and maps them to a flat, two-dimensional rectangle. 

We'll pass the corner coordinates of the white billboard to a perspective correction filter, which will return a version of the Mona Lisa cropped to the aspect ratio the billboard would have if we were looking at it head on:


    let perspectiveCorrection = CIFilter(name: "CIPerspectiveCorrection")!

    perspectiveCorrection.setValue(CIVector(CGPoint:rect.topLeft),
        forKey: "inputTopLeft")
    perspectiveCorrection.setValue(CIVector(CGPoint:rect.topRight),
        forKey: "inputTopRight")
    perspectiveCorrection.setValue(CIVector(CGPoint:rect.bottomRight),
        forKey: "inputBottomRight")
    perspectiveCorrection.setValue(CIVector(CGPoint:rect.bottomLeft),
        forKey: "inputBottomLeft")
    perspectiveCorrection.setValue(monaLisa,
        forKey: kCIInputImageKey)



A little bit of tweaking centres the crop rectangle on the middle of the Mona Lisa before cropping:


    let perspectiveCorrectionRect = perspectiveCorrection.outputImage!.extent
    let cropRect = perspectiveCorrectionRect.offsetBy(
        dx: monaLisa.extent.midX - perspectiveCorrectionRect.midX,
        dy: monaLisa.extent.midY - perspectiveCorrectionRect.midY)


    let croppedMonaLisa = monaLisa.imageByCroppingToRect(cropRect)

...and we now have an output image of a cropped Mona Lisa at the correct aspect ratio:



Finally, using the original perspective transform filter, we pass in the new cropped version rather than the original version to get a composite with the correct aspect ratio:


    perspectiveTransform.setValue(croppedMonaLisa,
        forKey: kCIInputImageKey)

    composite.setValue(perspectiveTransform.outputImage!,
        forKey: kCIInputImageKey)

Which gives the result we're probably after:
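The whole chain - detect, correct, crop, transform, composite - can be wrapped in a single function. Here's a sketch under the same Swift 2-era API as the code above; the function name and its nil-returning behaviour when no rectangle is found are my own choices, not part of the original playground:

    // Hypothetical helper: maps `image` onto the first rectangle detected in `background`.
    func mapImage(image: CIImage, ontoFirstRectangleIn background: CIImage) -> CIImage?
    {
        let context = CIContext()
        let detector = CIDetector(ofType: CIDetectorTypeRectangle,
            context: context,
            options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

        guard let rect = detector.featuresInImage(background).first as? CIRectangleFeature else
        {
            return nil
        }

        // The same four corners drive both the correction and the transform.
        let corners = [
            "inputTopLeft": CIVector(CGPoint: rect.topLeft),
            "inputTopRight": CIVector(CGPoint: rect.topRight),
            "inputBottomRight": CIVector(CGPoint: rect.bottomRight),
            "inputBottomLeft": CIVector(CGPoint: rect.bottomLeft)]

        // Correct to a flat rectangle to find the target aspect ratio...
        let correction = CIFilter(name: "CIPerspectiveCorrection",
            withInputParameters: corners)!
        correction.setValue(image, forKey: kCIInputImageKey)

        // ...centre a crop of that size on the source image...
        let correctedExtent = correction.outputImage!.extent
        let cropRect = correctedExtent.offsetBy(
            dx: image.extent.midX - correctedExtent.midX,
            dy: image.extent.midY - correctedExtent.midY)

        // ...then transform the cropped image into the detected rectangle.
        let transform = CIFilter(name: "CIPerspectiveTransform",
            withInputParameters: corners)!
        transform.setValue(image.imageByCroppingToRect(cropRect),
            forKey: kCIInputImageKey)

        let composite = CIFilter(name: "CISourceAtopCompositing")!
        composite.setValue(background, forKey: kCIInputBackgroundImageKey)
        composite.setValue(transform.outputImage!, forKey: kCIInputImageKey)

        return composite.outputImage
    }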




Core Image for Swift

Although my book doesn't actually cover detectors or perspective correction, Core Image for Swift does take a detailed look at almost every aspect of still image processing with Core Image.

Core Image for Swift is available from Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.



