Simple Dynamic Image Lighting with CoreImage

With the kind of apps I usually make, I often end up doing a lot of gamey looking things right inside of UIKit. The addition of UIDynamics made one of those jobs, gravity, super easy. I wanted the same kind of simplicity for lights.

Animated figure being dynamically lit by 3 moving coloured lights

Using The Code

It only works on image views for now, but it works well and frame rates are good (much better than the gif lets on) for all but very large images on older devices. You can get all the code on github and using it should be pretty simple.

You just create a lighting controller, add some light fixtures and image views you want to be lit to the controller, and let it know when you need to update the lighting (when we’re moving the lights in the example above). Here’s the interface for the MBLightingController:

@interface MBLightingController : NSObject

@property(nonatomic) BOOL lightsConstantlyUpdating;

-(void)addLightFixture:(id<MBLightFixture>)light;
-(void)addLitView:(MBLitAnimationView *)litView;
-(void)setNeedsLightingUpdate;

@end

Only set lightsConstantlyUpdating if the lighting is always changing. (This came about because I was playing around with adding a light to a rope with UIDynamics, which you can see in the project on github.)

So, there are a couple of things in there you won't know yet: the MBLightFixture protocol and MBLitAnimationView.

Anything can be a light, so long as it implements the protocol, which means it needs a position, intensity, range and color. I’ve just been using a UIView subclass but maybe your light will be a CAEmitterLayer or something.
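For reference, the protocol might look something like this; a sketch pieced together from the properties used elsewhere in this post, so the exact declarations may differ from the header on github:

//a sketch of MBLightFixture based on how lights are used in this post
@protocol MBLightFixture <NSObject>

@property(nonatomic) CGPoint center;                  //the light's position
@property(nonatomic, strong) NSNumber *intensity;     //0.0 - 1.0
@property(nonatomic, strong) NSNumber *range;         //falloff distance in points
@property(nonatomic, strong) UIColor *tintColor;      //the light's colour
@property(nonatomic) BOOL constantIntensityOverRange; //YES for no falloff within range

@end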

MBLitAnimationView can be used everywhere you’d use a UIImageView, it just adds the ability to be lit, and makes working with animation easier.

Your view controller’s viewDidLoad might include something like this:

//create the lighting controller
self.lightingController = [[MBLightingController alloc] init];
    
//add an image to be lit
MBLitAnimationView *bg = [[MBLitAnimationView alloc] initWithFrame:self.view.bounds];
bg.ambientLightLevel = 0.1; // very dark
[bg setImage:[UIImage imageNamed:@"wall"]];
[self.view addSubview:bg];
[_lightingController addLitView:bg];
    
//add a light
SimpleLightView *lightView = [[SimpleLightView alloc] initWithFrame:CGRectMake(200, 200, 25, 25)];
lightView.intensity = @0.8;
lightView.tintColor = [UIColor whiteColor];
lightView.range = @250.0;
    
[self.view addSubview:lightView];
[_lightingController addLightFixture:lightView];

How It Works

The light effect is achieved using CoreImage filters and everything happens in the applyLights method of MBLitAnimationView.

I experimented with a bunch of different filters trying to get the right effect, and several of them worked, so try switching out the filters if you want something a little different.

Multiple filters are chained together. First up, we darken the image using CIColorControls:

//brightness ends up as ambientLightLevel - 1, so an ambient level of 0 is fully dark
CIFilter *darkenFilter = [CIFilter filterWithName:@"CIColorControls"
                                    keysAndValues:
                          @"inputImage", currentFrameStartImage,
                          @"inputSaturation", @1.0,
                          @"inputContrast", @1.0,
                          @"inputBrightness", @(_ambientLightLevel - 1.0), nil];

Then, for every light that we have, we create a CIRadialGradient:

CIFilter *gradientFilter = [CIFilter filterWithName:@"CIRadialGradient"
                                              keysAndValues:
                                    @"inputRadius0", [light constantIntensityOverRange] ? [light range] : @0.0,
                                    @"inputRadius1", [light range],
                                    @"inputCenter", [CIVector vectorWithCGPoint:inputPoint0],
                                    @"inputColor0", color0,
                                    @"inputColor1", color1, nil];

Then we composite the gradients with the darkened image using CIAdditionCompositing:

lightFilter = [CIFilter filterWithName:@"CIAdditionCompositing"
                                     keysAndValues:
                           @"inputImage", gradients[i],
                           @"inputBackgroundImage",[lightFilter outputImage],nil];

Finally, we mask the image to the shape of the original image:

CIFilter *maskFilter = [CIFilter filterWithName:@"CISourceInCompositing"
                                      keysAndValues:
                            @"inputImage", [lightFilter outputImage],
                            @"inputBackgroundImage",currentFrameStartImage,nil];

Just set the image view’s image property to a UIImage created from the final filter’s output and we’re done!

CGImageRef cgimg = [coreImageContext createCGImage:[maskFilter outputImage]
                                          fromRect:[currentFrameStartImage extent]];
        
UIImage *newImage = [UIImage imageWithCGImage:cgimg];
imageView.image = newImage;
        
CGImageRelease(cgimg);

What’s Next?

Playing with CoreImage was fun, so I think I'll revisit the code at some point. I'd like to try it out with SpriteKit's SKEffectNode, where it makes more sense for games. Or I might keep working with UIKit and get it working for any view; shiny, shadowy interfaces might be interesting.

UIImageView Animation, But Less Crashy

Animation with UIImageView is super simple, and for basic animations it's just what you need. Throw an array of images at your image view and tell it to go, and it will go. For animations of more than a few frames, though, its simplicity is also its failing: an array of UIImages is handy to put together, but if you want large images or a reasonable number of frames then that array can take up a serious chunk of memory. If you've tried any large animations with UIImageView you'll know things get crashy very quickly.

There are also a few features you regularly find yourself wanting when dealing with animations, like knowing which frame is currently being displayed and setting a completion block, so I've created MBAnimationView to provide those, and to overcome the crash-inducing memory problems.

My work was informed by the excellent Mo DeJong; you should check out his PNGAnimatorDemo, which I've borrowed from for my class.

How It Works

The premise for the memory improvements is the fact that image data is compressed, and loading it into a UIImage decompresses it. So, instead of having an array of UIImage objects (the decompressed image data), we’re going to work with an array of NSData objects (the compressed image data). Of course, in order to ever see the image, it will have to be decompressed at some point, but what we’re going to do is create a UIImage on demand for the frame we want to display next, and let it go away when we’re done displaying it.

So MBAnimationView has a UIImageView; it creates an array of NSData objects, then on a timer it creates each frame image from the data and sets the image view's image. It's that simple.
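Boiled down to its essentials, the approach looks something like this (a sketch with made-up names, not the actual MBAnimationView internals):

//load the compressed bytes for each frame once, up front
- (void)loadFrameData:(NSString *)baseName count:(NSUInteger)count
{
    self.frameData = [NSMutableArray arrayWithCapacity:count];
    for(NSUInteger i = 0; i < count; i++)
    {
        NSString *name = [NSString stringWithFormat:@"%@%02lu", baseName, (unsigned long)i];
        NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        [self.frameData addObject:[NSData dataWithContentsOfFile:path]];
    }
}

//called by the timer; decompresses a single frame at a time
- (void)showNextFrame
{
    self.imageView.image = [UIImage imageWithData:self.frameData[self.currentFrame]];
    self.currentFrame = (self.currentFrame + 1) % self.frameData.count;
}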

Comparison

As expected, crashes using the animationImages approach disappeared with MBAnimationView, but to understand why, I tested the following two pieces of code for different numbers of frames, recording memory usage, CPU utilisation and load time.

MBAnimationView *av = [[MBAnimationView alloc] initWithFrame:CGRectMake(0, 0, 350, 285)];

[av playAnimation:@"animationFrame"
        withRange:NSMakeRange(0, 80)
    numberPadding:2
           ofType:@"png"
              fps:25
           repeat:kMBAnimationViewOptionRepeatForever
       completion:nil];

[self.view addSubview:av];

UIImageView *iv = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 350, 285)];
iv.animationImages = @[[UIImage imageNamed:@"animationFrame00"],
                       [UIImage imageNamed:@"animationFrame01"],

                       ...

                       [UIImage imageNamed:@"animationFrame79"]];

[self.view addSubview:iv];
[iv startAnimating];

Results

Starting off with small numbers of frames, it's not looking too good for our new class: UIImageView is using less memory and significantly less CPU.

10 Frames         Memory Average / Peak   CPU Average / Peak
UIImageView       4.1MB / 4.1MB           0% / 1%
MBAnimationView   4.6MB / 4.6MB           11% / 11%

20 Frames         Memory Average / Peak   CPU Average / Peak
UIImageView       4.4MB / 4.4MB           0% / 1%
MBAnimationView   4.9MB / 4.9MB           11% / 11%

But things start looking up for us as more frames are added. MBAnimationView continues to use the same amount of CPU, and its memory usage creeps up without any spikes. UIImageView, however, sees some very large spikes during setup (the peak figures below).

40 Frames         Memory Average / Peak   CPU Average / Peak
UIImageView       4.1MB / 65MB            0% / 8%
MBAnimationView   5.7MB / 5.7MB           11% / 11%

80 Frames         Memory Average / Peak   CPU Average / Peak
UIImageView       4.5MB / 119MB           0% / 72%
MBAnimationView   8.4MB / 8.4MB           11% / 11%

Those peak memory numbers are big enough to start crashing in a lot of situations, and remember this is for a single animation.

The Trade Off

There has to be one, of course, but it turns out not to be a deal breaker. Decompressing the image data takes time, and we're doing it during the animation rather than up front, but it doesn't prevent us playing animations at 30 fps and even higher. On the lower-end devices I've tested (iPad 2, iPhone 4) there doesn't seem to be any negative impact; in light of that, I'm surprised the default animation mechanism provided by UIImageView doesn't take the same approach as MBAnimationView.

MBAnimationView on github

Creating a Rope with UIDynamics

I’ve made rope simulations for games with Box2D before but I wanted to see if I could make a rope that could be used easily with UIKit elements, and without having to use Box2D directly. Below is the result, a highly practical user interface, I’m sure you’ll agree!

A UIButton dangling on the end of a swaying rope

There are 2 distinct problems to consider when creating a rope:

  1. The physics joint that connects two elements together as though they were connected with a rope
  2. Drawing the rope

Box2D has a bunch of different joints for connecting physics bodies, and b2RopeJoint is just what you need to solve problem 1. UIDynamics, though, exposes only one joint for joining dynamic bodies: UIAttachmentBehavior. Fortunately the joint it appears to be using under the hood is b2DistanceJoint, which, with the right amount of parameter fiddling, can be made to behave like a b2RopeJoint.

So that's problem 1 sorted, right? Just draw the rope with the help of some verlet integration and you're done? Well, I could be done, but I wanted to try something different and a bit more lightweight, something that didn't involve Wikipedia pages full of equations to understand fully.

More Chain Than Rope

By simply connecting a series of small views end to end with UIAttachmentBehavior you get a chain; with enough links in that chain, and with the right attachment parameters, you can get something that behaves pretty rope-like. You can attach one view to another like so:

UIAttachmentBehavior *chainAttachment = [[UIAttachmentBehavior alloc] initWithItem:view1 attachedToItem:view2];
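On its own that attachment behaves like a rigid strut; the parameter fiddling mentioned earlier is what loosens things up. Something like this, with values that are guesses to illustrate (segmentLength and animator are assumed from context):

chainAttachment.length = segmentLength; //resting distance between the two links
chainAttachment.frequency = 5.0;        //lower = springier, higher = stiffer
chainAttachment.damping = 0.5;          //how quickly any oscillation dies off
[animator addBehavior:chainAttachment];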

I just do this in a loop, joining together a whole bunch of views. It ends up looking like this not-very-ropey-looking thing.

Connected boxes make up the rope segments

But there’s no reason why, just because we’re using the views to create the rope like joint, that we have to look at the views. Instead I draw a path connecting their centres

[path moveToPoint:[links[0] center]];
for(int i = 1; i < links.count; i++)
{
    [path addLineToPoint:[links[i] center]];
}

and we end up with what you see up top.

The Code

All the code is on github. It still needs a bit of work but you can get a simple rope up and running with just a couple of lines of code. Import the header and do something like this:

MBRope *rope = [[MBRope alloc] initWithFrame:CGRectMake(350, 180, 5, 200) numSegments:15];
[self.view addSubview:rope];
[rope addRopeToAnimator:animator];

To attach something, like the button in the example, you can get the last view by calling attachmentView on the rope and attach your other view with your own UIAttachmentBehavior. The top of your rope will be fixed to the origin of the rect supplied when you init the rope, but it wouldn’t take much to change it so you can attach your own stuff to both ends.
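That might look something like this (a sketch; myButton stands in for whatever you want to dangle):

UIAttachmentBehavior *buttonAttachment =
    [[UIAttachmentBehavior alloc] initWithItem:[rope attachmentView]
                                attachedToItem:myButton];
[animator addBehavior:buttonAttachment];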

Storyboards, Multiple Developers and Git.

An updated version of this article is available on my employer’s blog

Storyboards are great. You can get the flow of your app set up in a few minutes without writing a line of code, you can initialise your navigation controllers and tabs ridiculously easily and zoom out a bit and you get a lovely picture of your entire app on a single screen, with lines and boxes and everything.

But storyboards can be cruel if you’re not careful. Git pulls become nervy affairs, a slip merging by hand can render your storyboard unreadable by Xcode and not knowing when to stop using them can turn your lovely lines and boxes into a maintenance nightmare.

John McClane, crawling through some ducting, wishing he still used nibs

“Come to the coast, we’ll get together, have a few segues”

We’ve done a bunch of apps of all shapes and sizes using storyboards over the last year or so and we’re working on perfecting our use of them. I’ve been investigating and testing storyboard best practice and this is what I’ve learned so far.

The Precap

(Wiktionary says it’s a word)

  • Everyone on the same build of Xcode
  • Multiple storyboards
  • Use Nibs for custom views
  • One person owns the storyboard setup / decides granularity
  • Think about which storyboards are involved when assigning tasks
  • Merge storyboards often
  • Xcode is your git client

Xcode

We’ve had hassle sharing storyboards across even minor versions of Xcode, storyboards created in one will do crazy stuff in another, or just plain won’t work. Don’t let anyone sneak ahead to the latest developer preview unless they’re doing a separate installation.

Multiple storyboards

But what about the lovely whole-app view, all those lovely lines and boxes perfectly arranged? That was never what storyboards were about, and you've still got your whiteboard for that. Divide and conquer is your mantra for everything else you do, so it should be for storyboards too. It's easier to reason about storyboards with a single purpose, and devs are less likely to trip each other up if they're not working on the same storyboards at the same time.

So how do you break it up?

Per user story is a decent approach, but that can be too granular at times. A separate storyboard for login and one for viewing an account makes sense, but maybe you should keep the lost-password flow in with your login. If all you've got in each storyboard is a single view controller you might as well be using Nibs; the beauty of storyboards is making connections between view controllers.

But you said to keep using Nibs?

Yep, for custom views, table view cells and the like. A view can't exist outside a view controller in a storyboard, so if you don't need a view controller for it you really shouldn't be adding it to a storyboard.

So what about single view controllers in Nibs then?

We’ve a project in which we’ve used some Nibs from an existing project in conjunction with a storyboard and it hasn’t been a problem, but if I was making those components again now they’d be in a storyboard. Having a single view controller in a storyboard will make sense at times, and when you end up wanting to add additional screens to your account details say and all you have to do is drag a new view controller in beside the existing one and hook up a segue you’ll be happy.

Multiple devs

Even with all your concerns perfectly separated, and with everyone on the same page, you're going to end up working on the same storyboard as someone else at the same time. Having to wait for someone to give you their changes before you can do something is no fun, so I wanted to see just how careful you really have to be.

I created a simple Xcode project with a single storyboard and set out with my new git friend to put together a few screens.

Test 1: Adding to the same empty view controller

I started off simply adding a label and having Testy add an empty Image View.

We’re working on the same view controller right away so I expected there to be a conflict to sort out and indeed there was.

The XML is clearly not intended to be parsed by human eyes, but this looks straightforward enough: I can see two separate additions, so accepting one followed by the other should work fine. This looks just like the kind of conflict that comes up in .xcodeproj files when two devs are adding files.

It worked: I had to look at some XML but nothing blew up. For extremely simple changes to the same view controller we don't have to worry too much.

Test 2: Editing different view controllers

I added a lovely purple view to one view controller and had Testy add a view to a different view controller. We really shouldn't have any problems here.

And we don’t. It seems editing the same storyboard is fine so long as we keep to different view controllers. But, sometimes we might edit another view controller without meaning to, so I looked at some more scenarios …

Test 3: Rearranging parts of the storyboard that someone else changed

Here Testy has changed one of my purple boxes to green, and I've just been fiddling with the layout a bit, swapping the order of the two view controllers on the right.

This auto-merged and left us looking good; it chose my layout.

Test 4: Adding modified views to a Navigation Controller

When you’re inferring screen elements such as the nav bar adding a navigation controller can affect a bunch of view controllers that someone else might have been working on. Here I’ve added a navigation controller while Testy’s been changing some colours.

To my surprise this auto-merged just fine, and the views Testy was working on got the nav bar. It makes sense if you take a look at the XML: no nav bar is added as a child of the view controllers; the inferred setting is in there, and Xcode knows what to do with that.

Test 5: Making changes to a slightly more filled out view controller

Things have gone OK so far, so let's revisit editing the same view controller, this time making it a bit more realistic.

We both started off with this:

All I did was enclose the label in a scroll view. Testy had a few more bits and pieces to do: he changed that label from an attributed label to a regular label, moved it, changed its text and changed the background colour of the view for good measure. We know we're going to be looking at XML here, but that wasn't a problem before.

And some of the XML here isn’t so bad either.

But it’s clear that as soon as you make more than 1 simple change you’ve got a problem, and you could easily waste a lot of time dealing with it.

The order of the XML has changed significantly between the versions, and Xcode doesn't seem to be too smart at highlighting which parts are the same. For example, I didn't touch the table view in either revision, but Xcode highlighted it being removed on one side and added back in the middle of a bunch of other stuff later on.

It ended up taking only a few minutes to figure this example out and get to a version that made sense, including both sets of changes, but I was hand-editing the XML and that is dangerous. It's clear that if you make more than a few changes and keep them to yourself for too long, you could end up in a bad way pretty quickly.

Xcode as git client you say?

This might be more of a personal preference, and if you want to rebase rather than merge it's not an option (for now at least), but it seems screwing up the storyboard XML is likely to happen less frequently if you don't let anything other than Xcode touch it.

A conclusion, for now

After experimenting a little with this test project I'm happy that we can edit our storyboards simultaneously when we absolutely have to, but that shouldn't stop us planning things out so that it doesn't happen. We'll be sticking to this list, the same one as above:

  • Everyone on the same build of Xcode
  • Multiple storyboards
  • Use Nibs for custom views
  • One person owns the storyboard setup / decides granularity
  • Think about which storyboards are involved when assigning tasks
  • Merge storyboards often

Who has two thumbs and loves storyboards now? John McClane

Drawing Physics with SpriteKit

There are plenty of games out there with this basic mechanic already but I wanted to see if it could be done easily using SpriteKit, spoiler alert: it can.

Shapes being drawn and then becoming part of a physics simulation

The Code

It’s on github, knock yourself out

I make use of some handy dandy categories on UIBezierPath made by other people; they're all included in the project.

How it Works

We’re combining UIKit and SpriteKit here so we’re layering a transparent UIView on top of an SKView.

The SKView presents a single scene; it will contain our shapes, and it has a static bounding physics body to stop them escaping. The view controller sets up the scene in the standard fashion.

- (void)viewWillLayoutSubviews
{
    //viewWillLayoutSubviews can fire more than once, so only create
    //and present the scene the first time through
    if(!scene)
    {
        scene = [[DropShapeScene alloc] initWithSize:self.view.bounds.size];
        scene.scaleMode = SKSceneScaleModeAspectFill;
        SKView *spriteView = (SKView *) self.view;
        [spriteView presentScene:scene];
    }
}
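Inside the scene, the bounding body is the usual one-liner. A sketch of DropShapeScene's initializer, assuming it follows the standard pattern:

-(instancetype)initWithSize:(CGSize)size
{
    if(self = [super initWithSize:size])
    {
        //a static edge loop around the scene so shapes can't escape
        self.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:CGRectMake(0, 0, size.width, size.height)];
    }
    return self;
}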

We have a very simple UIView subclass that sits on top providing basic drawing functionality: it handles drawing a single path, and once the drawing ends it passes the path to its delegate and forgets about it. The drawing is done similarly to my previous post; here's the delegate protocol.

@protocol SimplePathDrawingDelegate <NSObject>
-(void)drawingViewCreatedPath:(UIBezierPath *)path;
@end

We’ll let the view controller be the delegate, and thats where we do the interesting stuff, once it gets the drawn path.

-(void)drawingViewCreatedPath:(UIBezierPath *)path
{
    CGRect pathBounds = CGPathGetPathBoundingBox(path.CGPath);
    
    UIImage *image = [path strokeImageWithColor:[UIColor greenColor]];
    SKTexture *shapeTexture = [SKTexture textureWithImage:image];
    SKSpriteNode *shapeSprite = [SKSpriteNode spriteNodeWithTexture:shapeTexture size:pathBounds.size];
    
    //SpriteKit's y axis points up while UIKit's points down, so flip y
    //when converting the path's bounds to a scene position
    shapeSprite.position = CGPointMake(pathBounds.origin.x + (pathBounds.size.width/2.0), scene.frame.size.height - pathBounds.origin.y - (pathBounds.size.height/2.0));
    
    shapeSprite.physicsBody = [SKPhysicsBody bodyWithConvexHullFromPath:path];
    shapeSprite.physicsBody.dynamic = YES;
    [scene addChild:shapeSprite];
}

We take the drawn line on a journey from path, to image, to a texture that is applied to a sprite. That part is pretty straightforward, more tricky is using that path to create a physics body.

SKPhysicsBody gives us a number of options for creating physics bodies, they are:

+ bodyWithCircleOfRadius:
+ bodyWithRectangleOfSize:
+ bodyWithPolygonFromPath:
+ bodyWithEdgeLoopFromRect:
+ bodyWithEdgeFromPoint:toPoint:
+ bodyWithEdgeLoopFromPath:
+ bodyWithEdgeChainFromPath:

There are a few there that will take a path and give us a body. Perfect, right? Except on closer inspection only one of them creates a body that can be dynamic, and that one, bodyWithPolygonFromPath:, has the caveat

A convex polygonal path with counterclockwise winding and no self intersections.

Sadly, no realistic user is going to enjoy drawing nothing but convex, counterclockwise polygonal paths with no intersections.

Additionally, SpriteKit only lets us have bodies with 12 or fewer sides!

There are a few approaches we could take to get past these restrictions, such as multiple joined physics bodies, or using Box2D directly to get around the limit on body vertices. Instead, we'll build a convex hull from the points that make up the path, and make an SKPhysicsBody category to do it for us.

I won’t list the code here, you can download the project to have a look but here’s what it does. (I use some existing categories on UIBezierPath to help out here and got a convex hull implementation online too, they’re all included in the project.)

  • Get the points from the path
  • Order the points for the convex hull algorithm
  • Get the convex hull
  • While there are too many points in the hull, smooth it using increasing tolerance (removing points that make the smallest angles).

And that’s all there is to it. The results are pretty nice for most shapes, if you wanted to get started on a physics drawing game you wouldn’t need much more than the SKPhysicsBody (ConvexHull) category.

Fun with UIBezierPath and CAShapeLayer

This is a quick prototype for a fun drawing tool - as you drag your finger across the canvas the line grows branches which sprout leaves. The branches are randomly generated within certain parameters and animate on while you draw the main line.

A line is drawn, with branches automatically being added along its path

Yes, the leaves are very realistic looking, thank you.

The Code

It’s all on on GitHub, feel free to use and improve!

It’s not about the line drawing

The line drawing is very basic - simply adding points to a UIBezierPath. I keep an array of the curves and draw them all in drawRect:. I don’t care about smooth curves or different textures or performance but I’m sure this will work with more sophisticated drawing code too. Most of the drawing code I’ve shipped has been OpenGL based, so it was nice to see how good the results are when keeping things super simple with UIKit / CoreGraphics.

How it Works

Let’s start with the basic line and layer on the other bits. It starts with a pan gesture recogniser in our UIView subclass.

UIPanGestureRecognizer *pgr = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];

[self addGestureRecognizer:pgr];

Now in the action selector we create new paths when a pan begins, and add to the current path when a pan changes.

-(void)handlePan:(UIPanGestureRecognizer *)gestureRecognizer
{
    if(gestureRecognizer.state == UIGestureRecognizerStateBegan)
    {
        UIBezierPath *newVineLine = [[UIBezierPath alloc] init];
        [newVineLine moveToPoint:[gestureRecognizer locationInView:self]];
        [vineLines addObject:newVineLine];
    }
    else if(gestureRecognizer.state == UIGestureRecognizerStateChanged)
    {
        UIBezierPath *currentLine = [vineLines lastObject];
        [currentLine addLineToPoint:[gestureRecognizer locationInView:self]];
    }

    [self setNeedsDisplay];
}

You can see we’ve a mutable array called vineLines that we’re holding our paths in. This means to draw our paths we can simply iterate over that like so:

- (void)drawRect:(CGRect)rect
{
    for(VineLine *vineLine in vineLines)
    {
        [vineLine stroke];
    }
}

That’s the basic line drawing done, now let’s add some branches. Again they’re UIBezierPaths. Every so often we want to generate a random path, the branch, and add it to the user drawn path, the vine. There are a couple of options for this, we could let the view / view controller keep track of the branches and when to draw them but let’s encapsulate all that in a VineLine, a subclass of UIBezierPath. (In the snippet above just swap out UIBezierPath with our new subclass.

@interface VineLine : UIBezierPath

@property(nonatomic, retain, readonly)NSMutableArray *branchLines;

@end

Rather than subclassing NSObject and having a property for our path, we're subclassing UIBezierPath and overriding addLineToPoint:, adding to the existing method the logic that decides when to create a branch and add it to the branchLines array. Note that VineBranch is just another UIBezierPath subclass, one that creates random paths with leaves on the end. All we're doing here is checking whether the point we're adding is far enough away from the last branch (or the beginning of the line), and if it is, creating a new random branch and storing it in the array of branches.

-(void)addLineToPoint:(CGPoint)point
{
    [super addLineToPoint:point];
    
    //firstPoint and lastBranchPosition are ivars that VineLine keeps up to date
    float distanceFromPrevious;
    
    if([_branchLines count] == 0)
    {
        distanceFromPrevious = hypotf(point.x - firstPoint.x, point.y - firstPoint.y);
    }
    else
    {
        distanceFromPrevious = hypotf(point.x - lastBranchPosition.x, point.y - lastBranchPosition.y);
    }
    
    if(distanceFromPrevious > _minBranchSeperation)
    {
        VineBranch *newBranch = [[VineBranch alloc] initWithRandomPathFromPoint:point maxLength:_maxBranchLength leafSize:_leafSize];
        newBranch.lineWidth = self.lineWidth / 2.0;
        
        [_branchLines addObject:newBranch];
        lastBranchPosition = point;
    }
}

If we modify our drawRect: from before we can now draw the branches and leaves as well as the main line.

- (void)drawRect:(CGRect)rect
{
    [vineColor setStroke];
    
    for(VineLine *vineLine in vineLines)
    {
        [vineLine stroke];

        for(UIBezierPath *branchLine in vineLine.branchLines)
        {
            [branchLine stroke];
        }
    }
}

And we’re done!

Animating The Branches

That’s where CAShapeLayer comes in. CAShapeLayer has a number of animatable properties, and animating strokeEnd is great for drawing a path to the screen. So we can remove the code to iterate through the list of branches and stroke them instead, every time a branch is created we create a layer for it and animate the stroke.

-(void)vineLineDidCreateBranch:(VineBranch *)branchPath
{
    CAShapeLayer *branchShape = [CAShapeLayer layer];
    branchShape.path = branchPath.CGPath;
    branchShape.fillColor = [UIColor clearColor].CGColor;
    branchShape.strokeColor = vineColor.CGColor;
    branchShape.lineWidth = branchPath.lineWidth;
    
    [self.layer addSublayer:branchShape];
    
    CABasicAnimation *branchGrowAnimation = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
    branchGrowAnimation.duration = 1.0;
    branchGrowAnimation.fromValue = [NSNumber numberWithFloat:0.0];
    branchGrowAnimation.toValue = [NSNumber numberWithFloat:1.0];
    [branchShape addAnimation:branchGrowAnimation forKey:@"strokeEnd"];
}

We can make our view the VineLine’s delegate and add a call to the delegate notifying it of a new branch in our addLineToPoint: method from above.

Random Paths

Initially I tried to be clever and see which way the line was curving, attaching curves that seemed natural, but that wasn't looking too good. Eventually I just started throwing random numbers at it and things started looking better (this probably should have been obvious to me). So what we're doing here is getting a random point close to the main line (as defined by _maxLength) and adding a curve to that point; control points are picked near that end point so we don't end up with curves that are too crazy. Finally, we add the leaf, which for now is just a circle.

-(id)initWithRandomPathFromPoint:(CGPoint)startPoint maxLength:(float)maxLength leafSize:(float)leafSize
{
    self = [super init];
    if(self)
    {
        [self moveToPoint:startPoint];
        
        CGPoint branchEnd = CGPointMake(startPoint.x + arc4random_uniform(maxLength * 2) - maxLength,
                                        startPoint.y + arc4random_uniform(maxLength * 2) - maxLength);
        CGPoint branchControl1 = CGPointMake(branchEnd.x + arc4random_uniform(maxLength) - maxLength / 2,
                                             branchEnd.y + arc4random_uniform(maxLength) - maxLength / 2);
        CGPoint branchControl2 = CGPointMake(branchEnd.x + arc4random_uniform(maxLength / 2) - maxLength / 4,
                                             branchEnd.y + arc4random_uniform(maxLength / 2) - maxLength / 4);
        
        [self addCurveToPoint:branchEnd controlPoint1:branchControl1 controlPoint2:branchControl2];
        
        UIBezierPath* leafPath = [UIBezierPath bezierPathWithOvalInRect: CGRectMake(branchEnd.x - leafSize/2.0, branchEnd.y - leafSize/2.0, leafSize, leafSize)];
        
        [self appendPath:leafPath];
    }
    return self;
}