Twitter Phone Number Creepiness

I created a Twitter account to test setting up 2FA using the new authenticator app I’m working on. After the initial setup, turning on 2FA (my app worked!) and sending a tweet, Twitter told me I might be a robot and had me complete a captcha and enter my phone number so it could send a verification text.

I didn’t see any indication that the phone number would be used for anything besides this verification but I immediately noticed that Twitter was recommending people to follow based on that phone number (I’m certain the email address I signed up with couldn’t be associated with those people in any way.)

I’m not surprised that Twitter wants phone numbers and uses them to build their social graph but requiring your phone number and not making it clear what it’s going to do with it strikes me as especially creepy, Facebook-level creepy.

Incorporating SwiftUI Views in a UIKit Layout

Mixing some SwiftUI in with your UIKit app is a great way to start using it for real without having to commit to fundamentally changing how you build your app. Doing so requires some boilerplate and when what you want to start with is a single view that only makes up a small part of a screen it might seem overly onerous.

This almost put me off, but I saw it through and turned the boilerplate into an extension on UIViewController which will add your SwiftUI View on top of a placeholder UIView that has been laid out with constraints (in my use case, from a storyboard).

extension UIViewController {
    func addSwiftUIView<Content>(_ swiftUIView: Content, for placeholder: UIView) where Content: View {
        let swiftUIController = UIHostingController(rootView: swiftUIView)
        addChild(swiftUIController)

        guard let hostedView = swiftUIController.view else { return }
        hostedView.translatesAutoresizingMaskIntoConstraints = false
        placeholder.addSubview(hostedView)
        hostedView.constrain(to: placeholder) // helper that pins the view's edges to the placeholder
        swiftUIController.didMove(toParent: self)
    }
}

My Swift Extension (Anti?) Pattern

I’ve become aware of how frequently I create extensions in Swift. I previously viewed them as something to use sparingly, like when there was something missing from a built-in type that was needed frequently - but now I regularly write them for once-off use. I even find myself wishing I could make extensions private so I could confidently write them in a non-generic fashion.

This is the example that got me thinking about it today:

extension UInt8 {
    static func entireRange() -> ClosedRange<UInt8> {
        return UInt8.min...UInt8.max
    }
}
Which exists only to be used in this other extension:

extension Data {
    static func randomBytes(size: Int) -> Data {
        var byteArray = [UInt8]()

        for _ in 0..<size {
            byteArray.append(UInt8.random(in: UInt8.entireRange()))
        }

        return Data(byteArray)
    }
}

Which exists to be called once in some test code that will eventually be replaced when working with the actual data.

I leave the extensions in context, in the files where they’re being used, rather than in a specific file for the extension or a collection of helpers, and I find the resulting code reads wonderfully.

Every authenticator app I’ve used has annoyed me in some way so I’ve started making my own.

The first annoyance I’m working on removing is having to wait around to get the next code when the timer’s running down.

Started a new project with SwiftUI about half an hour ago but I just backed out of it and started again with UIKit. There’s a lot I really like about SwiftUI but my head is in a making things place and not a learning things place today.

Moving hosting to Had fun hosting it on my own server for the last while but looking forward to being able to post more frequently with this new setup - including shorter posts, like this one.

You'd Be Surprised What People Don't Know

Have you ever thought about trying to explain something, but it’s so obvious that you assume everyone already knows?

Have you ever thought that maybe there are lots of things you’ve never even thought about explaining because, well, they’re even more obvious than that?

I’ve been thinking about this lately and I know for sure I can answer yes to the first question, and almost certainly even more so to the second. I’ve avoided unnecessary explanations - success! Except I’ve come to realise that a great many of the explanations I avoided, code comments I didn’t leave, articles I didn’t write, were missed opportunities.

As Daniel Jalkut put it in a recent episode of Core Intuition, sometimes we know Rome too well. In his metaphor a lifelong resident of a city is less suited to writing a guide for visitors than a more recent arrival whose freshly acquired knowledge is less ingrained. He was talking about marketing his product, but it immediately resonated with me for a slightly different reason: bringing new developers on to an old codebase. The one or two people who’ve been working on the app for the last 5 years won’t know what a new developer is going to struggle with; they may not spot that having to do x, y and z before you call some particular function is non-obvious.

With recent events there has been lots of sharing and explaining about a couple of things that I thought were pretty obvious: hand washing, and exponential growth. It turns out a lot of people don’t know that soap is the thing that cleans your hands (not hot water or antibacterials), or that you don’t have to double things very many times before the numbers get huge. I’m really glad that people have been sharing, because it wasn’t until I really thought about it that it made sense that not everyone knows this already - I just happened to know these things too well. It turns out my obsession with computers, and taking chemistry in secondary school with a teacher who (now that I think about it) was far more excited about soap than is normal, made me uncommonly equipped for current events.

I’ve learned that it’s important to examine how we’ve come to know something, and to consider if it’s reasonable to assume others will have had similar opportunity to come by this knowledge when deciding if something is worth explaining, commenting on or sharing. I’ve resolved to document earlier, comment better, and in general share more. I’m also planning to make newcomers central to my efforts to fill in documentation deficits and improve developer onboarding on older apps.

Hopefully at least one person reading this didn’t already know it all :)

Parenting Advice for New Programmers

I’d like to share some parenting advice if you’ll humour me.

My Mam shared this with me the day my first son was born and I think it applies to new programmers as much as it does to new parents.

These are both fields of endeavour filled with experts, some actual experts but more often self-imagined. 1

You will find yourself receiving advice from these experts, a lot. Some of it you may have actually solicited, more often though someone will have wandered into your conversation or overheard your child cry from a nearby restaurant table.

The advice is simply this: say thank you, and move on.

You’re free to ignore the advice if it is irrelevant or you know it to be wrong. The expert might be well meaning, so assume they are. The expert might be actual, but they almost definitely aren’t.

My Mam made clear I could apply this to her future advice, and every now and then I do. And you should apply it to mine because though I usually do mean well, I’m rarely an actual expert.

  1. Those people that believe the way that happened to work out well for them in their one specific situation is the one right way to do it for everyone in every situation.

Detecting When Your App Gets Backgrounded using Combine

The introduction of the Combine Framework provides a new (reactive) way to respond to system events such as your app entering the background.

In this example I have a CALayer subclass running some CABasicAnimations that need to be paused when the app gets backgrounded and resumed when the app becomes active again. In the layer’s init we attach a subscriber to the default notification centre’s publisher for the particular events that we’re interested in. When attaching the subscriber we provide a closure that will be executed every time a new event arrives.

let bgSubscriber = NotificationCenter.default
   .publisher(for: UIApplication.willResignActiveNotification)
   .sink { _ in
      self.pauseAnimations() // a method on our layer subclass
   }

let foregroundSubscriber = NotificationCenter.default
   .publisher(for: UIApplication.didBecomeActiveNotification)
   .sink { _ in
      self.resumeAnimations() // likewise
   }

Let’s dig in a little to what’s going on here because it may not be obvious if you’re not familiar with Combine or NotificationCenter.

NotificationCenter predates Combine as a mechanism for broadcasting information (notifications) to interested observers. Every app gets a default notification centre which is used to broadcast system events such as those in the example here. Without Combine you can register to observe notifications and provide a selector to define what function to call when a notification is received.
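For comparison, the pre-Combine version of this - inside the layer’s init, say - looks something like the following sketch (the selector name is illustrative):

```swift
import UIKit

// The traditional approach: register an observer and a selector.
NotificationCenter.default.addObserver(
    self,
    selector: #selector(appWillResignActive),
    name: UIApplication.willResignActiveNotification,
    object: nil
)

@objc func appWillResignActive(_ notification: Notification) {
    // pause animations here
}
```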

With the introduction of Combine however, NotificationCenter added the publisher(for:object:) method which returns a Publisher that will emit values when notifications are broadcast.

A Publisher is how Combine represents a sequence of values over time that can be subscribed to. By calling sink(receiveValue:) we create a subscriber that will receive every value from the publisher.

The true power and utility of Combine isn’t seen in this example, but even still I don’t think it’s overkill to use Combine to provide a modern Swifty replacement for old fashioned notification observers and selectors.

A Note on Memory Management

To complete this specific example there’s a little more work to do. In my app I create many of these custom CALayer instances and at various points they get removed and new ones created in their place. With the code above as it is we’re preventing the layers from being deallocated when they’re no longer being used (because we captured self in the closure passed in to the sink call). To fix this problem we can keep a reference to the subscribers

private var appEventSubscribers = [AnyCancellable]()

// at the end of init:
appEventSubscribers = [foregroundSubscriber, bgSubscriber]


so that we can cancel them when the layer gets removed from its parent.

override func removeFromSuperlayer() {
    appEventSubscribers.forEach { $0.cancel() }
    super.removeFromSuperlayer()
}

iPad OS or: Names Are Important

When I attended WWDC for the first time in 2010 the iPad was still brand new and not yet available in Ireland. Many had dismissed it as just a big iPhone, others (me, at least) had hyped it as A BIG IPHONE.

It ran iPhone OS version 3.2.

As we queued up before the Monday morning keynote we were all expecting the next iPhone and the next version of iPhone OS, and we speculated about what new features we’d see. There was one seemingly minor announcement though, that nobody was expecting: the next version of the operating system would be called iOS 4, dropping the ‘iPhone’ to accommodate iPad. It made sense at the time, and initially my only reaction was delight at how they managed to do it mid-presentation, with nobody slipping up and using the new name before the announcement and nobody using the old one afterwards. In the years that followed though, as the ‘just a big iPhone’ opinions persisted and many developers put forth minimal effort in making their iPhone apps work on iPad, I came to think that not creating a separate identity for the iPad’s operating system was a missed opportunity. It’s one they finally rectified this year by renaming iPad’s operating system to iPadOS.

Perhaps making it incredibly clear that iPhone developers could make iPad apps made the previous OS naming scheme worthwhile up to a point, but I believe making it clearer that iPad has its own OS, with its own interface and its own set of interactions and idioms distinct from iPhone, will be a bigger benefit to software on the platform. OS naming is now consistent across all of Apple’s mobile devices - they are all based on iOS, but Apple Watch runs watchOS, Apple TV runs tvOS and iPad runs iPadOS. These operating systems are not distinct in terms of technology, how we develop for them, or who can develop for them; the distinction rather is in how the user interacts with them. Hopefully (and I believe it will) for iPad, this will result in more differentiated iPad software that takes advantage of its unique and powerful features.

Passing a closure to a UIButton

I’m tired of @objc #selector (nonsense: ) muddying up my Swift code. This most commonly rears its ugly head when dealing with buttons. Why can’t we just provide a button with a closure to execute when someone taps it? 1

// 😩
button.addTarget(self, action: #selector(myButtonHandler), for: .touchUpInside)

// 😍

Well now you can! I made a small UIButton subclass that provides a swifty facade for adding target/actions for events and otherwise behaves exactly like a regular old UIButton (to be clear, selectors are still doing the business under the hood.)
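The real implementation is in the gist, but a minimal sketch of the idea might look like this (the method name here is illustrative, not necessarily what the gist uses):

```swift
import UIKit

// A UIButton that takes a closure for taps; a selector still does
// the business under the hood.
class ClosureButton: UIButton {
    private var tapHandler: (() -> Void)?

    func onTap(_ handler: @escaping () -> Void) {
        tapHandler = handler
        addTarget(self, action: #selector(didTap), for: .touchUpInside)
    }

    @objc private func didTap() {
        tapHandler?()
    }
}
```

A caller then just writes `button.onTap { ... }` - no selectors in sight.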

Full source on gist.

  1. The reasons are simple and boring: we're still working with Objective-C frameworks from the 1890s (which, for the record, I still love). But that wasn't the point of this post.

No, Apple Does Not Share Your FaceID Data

The notch full of sensors on iPhone X enables Face ID to capture

accurate face data by projecting and analyzing over 30,000 invisible dots to create a depth map of your face and also captures an infrared image of your face. A portion of the A11 Bionic chip’s neural engine — protected within the Secure Enclave — transforms the depth map and infrared image into a mathematical representation and compares that representation to the enrolled facial data.

Meanwhile, from the same notch, third party developers can access

a coarse 3D mesh geometry matching the size, shape, topology, and current facial expression of the user’s face.

These are 2 different things.

For more see Apple’s support article on Face ID and their developer documentation on ARKit Face Tracking.

The Apple Watch Platform

I’ve a phone in my pocket most of the time, Alexa is always waiting for me in the kitchen and I spend hours every day in front of an old fashioned PC, but my watch is with me all day long wherever I go. Sometimes it’s the only computer available, and almost always it’s the least intrusive.

Apple Watch, The Computer That's Always There

As good as Apple Watch is though, it has so far failed as an app platform. Apple Watch is built on the same technology that runs iPhone, and the same tools that developers use to make iPhone apps are used to make Apple Watch apps, so why are there so few Apple Watch apps, and why are so many really bad?

The original watch hardware was very limited, and app support even more so. Apps actually ran on your phone and were sort of beamed onto the watch’s screen. If you managed to find the apps on the terrible honeycomb grid they loaded really slowly and performed terribly. A lot of developers and users were instantly turned off of third party apps, but the watch got by with the excellent built in notifications and fitness tracking functionality.

Each release of watchOS and every hardware revision has seen incremental improvements to third party app support: apps actually running natively on the watch, custom watch face complications, new capabilities, better performance, and a better way of launching apps (a list!). But the great new app platform imagined when the watch was first announced has still to arrive and many apps on your Apple Watch today likely still date back to the original release.

I don’t know what it will take for the Apple Watch platform to become as successful as the Apple Watch but I don’t think the capability of the device or the OS is holding it back at this stage (though WatchKit does leave a lot to be desired.) I’d like to see watch apps completely decoupled from iPhone apps (they run on the watch now, but are still delivered as extensions of iPhone apps) and they need to have more ways to integrate with or at least appear on the watch face. At least then we might finally be able to rule out finding, installing and launching watch apps as the reason for there not being very many good ones.

Why Can't We Just Pay for Free Unlimited iCloud Storage?

Over the past few years Apple has proven that they’re willing to try charging higher prices for iPhone. Just a couple of years ago the 6S plus was priced from $749, a year later the 7 plus was available from $769 and now the 8 plus is on sale from $799. Meanwhile, the market has shown it’s happy to pay those prices and I suspect it will prove so once more with the impending $999 iPhone X.

What I’d like to see next year is for Apple to charge us even more money for phones that don’t cost them anything extra to produce, and here’s why:

The experience of figuring out that you might need an iCloud subscription, figuring out how much space you might need, paying for it, dealing with the inevitable failures to renew when your card expires or your balance is low, and getting warnings about backups failing is awful. I’d love to see Apple try to figure out the cost of providing all new iPhone users with unlimited (with an asterisk that says there’s actually some limits) iCloud storage and build it into the price of the phone.

I pay Apple $35.88 for iCloud storage each year, I’d happily pay $99 more for the phone instead.

Audio Degapinator - The Poor Dev’s Smart Speed

I’ve been listening to podcasts with Overcast’s Smart Speed feature turned on for long enough to have saved 55 hours of not listening to the silences between every podcast host’s thoughts.

I decided to spend 1 of those hours today making my own very simple, very limited, but surprisingly effective AVAudioPlayer version of that feature. I’ll explain below how it works, but you can check out the full Swift iOS source (there’s not much to it) on GitHub: Audio Degapinator on GitHub.

AVAudioPlayer offers features for audio level metering:

/* metering */
open var isMeteringEnabled: Bool /* turns level metering on or off. default is off. */

open func updateMeters() /* call to refresh meter values */

open func peakPower(forChannel channelNumber: Int) -> Float /* returns peak power in decibels for a given channel */

open func averagePower(forChannel channelNumber: Int) -> Float /* returns average power in decibels for a given channel */

And for adjusting playback, including:

open var rate: Float /* See enableRate. The playback rate for the sound. 1.0 is normal, 0.5 is half speed, 2.0 is double speed. */

My code then:

  • turns metering on
  • updates meters with a timer
  • checks if there is currently silence playing using averagePower
    • increases the playback rate 2x until the silence ends
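Put together, those steps might look something like this sketch (the timer interval and silence threshold here are my guesses, not tuned values from the project):

```swift
import AVFoundation

final class Degapinator {
    private let player: AVAudioPlayer
    private var timer: Timer?
    private let silenceThreshold: Float = -40.0 // dB; below this we treat playback as silence

    init(player: AVAudioPlayer) {
        self.player = player
        player.isMeteringEnabled = true // turn metering on
        player.enableRate = true        // must be set before play() to allow rate changes
    }

    func play() {
        player.play()
        // update meters with a timer
        timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            self.player.updateMeters()
            // check if there is currently silence playing using averagePower
            let level = self.player.averagePower(forChannel: 0)
            // 2x through the silence, back to normal speed when it ends
            self.player.rate = level < self.silenceThreshold ? 2.0 : 1.0
        }
    }
}
```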

I tested using the latest episode of ATP and Debug episode 49. In both cases the silences were noticeably reduced and, to my ear, sounded completely natural. I listened to the entire episode of Debug and it had shaved off a little over 3 minutes by the end.

This was a fun little project, it’s the first time I’ve looked at anything related to audio playback on iOS in quite a while and it was super interesting … I fear I may just have to write my own podcast app now.

Simulating Universal Gravitation with SpriteKit

Gravity in SpriteKit is a single planet sort of gravity. By that I mean that it applies a single force to all bodies in the simulation - basically everything falls down. But what if you wanted a multiple planet sort of gravity - can that be achieved in SpriteKit?

The answer is yes, and it turns out it doesn’t take a whole lot of code to get some fun and quite realistic results.

Screen capture. Small circular nodes orbit a larger central node like planets around a star.

In this SpriteKit app, dragging on the screen creates a new ‘planet’.

How it Works

First we turn off gravity as it normally applies in a SpriteKit scene and then on every tick we apply Newton’s law of universal gravitation to all the nodes in the physics simulation.

F = G * m1 * m2 / r^2

That is for every pair of nodes, apply a force to each one that is equal to the product of their masses (and the universal gravitational constant), divided by the distance between them, squared.

There are some tweaks to the above formula to make the numbers a bit easier to deal with (i.e. smaller) and to make creating stable systems a bit easier, but sticking exactly to the formula above and plugging in some realistic numbers things work pretty much as you’d expect.
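A sketch of that per-pair force application, sticking to the plain formula, might look like this (the constant and the clamping are illustrative, not the project’s actual tweaks):

```swift
import SpriteKit

class GravityScene: SKScene {
    // The real constant is 6.674e-11; in practice you'd scale it up
    // so forces are visible at SpriteKit-sized masses and distances.
    let G: CGFloat = 6.674e-11

    override func didMove(to view: SKView) {
        physicsWorld.gravity = .zero // turn off the usual 'everything falls down' gravity
    }

    override func update(_ currentTime: TimeInterval) {
        let bodies = children.compactMap { $0.physicsBody }
        // For every pair of nodes, apply F = G * m1 * m2 / r^2
        // along the line between them.
        for i in 0..<bodies.count {
            for j in (i + 1)..<bodies.count {
                guard let a = bodies[i].node, let b = bodies[j].node else { continue }
                let dx = b.position.x - a.position.x
                let dy = b.position.y - a.position.y
                let r2 = max(dx * dx + dy * dy, 1) // avoid dividing by ~zero
                let f = G * bodies[i].mass * bodies[j].mass / r2
                let r = sqrt(r2)
                let force = CGVector(dx: f * dx / r, dy: f * dy / r)
                bodies[i].applyForce(force)
                bodies[j].applyForce(CGVector(dx: -force.dx, dy: -force.dy))
            }
        }
    }
}
```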

In addition to simulating gravity, I’m also combining planets that pass close to each other, adding trails to trace their paths, and giving new planets a random colour. It all results in a surprisingly fun and addictive little toy, so even if you’re not interested in the code just build and run it on your iPhone (or watch, or Mac) and enjoy!

Get the code at

Detecting Which Complication Launched Your WatchKit App

One of the joys of working with watchOS, much like working with iPhone OS many years ago, is the enforced simplicity. Free from worrying about the unending device combinations and configurations and the countless features and extension points of modern iOS, the constraints of a limited SDK focus your creativity. Simple, robust, yet still delightful interfaces flow from your fingertips; designers’ designs are readily translated to working product.

Sadly though, we’re not content for long. Just like in the early days of iPhone OS, you soon find yourself wanting to do just a tiny bit more than Apple has made available, and so focus and delight make way for our more common friend, the ugly hack. Today’s feature that just couldn’t wait for a proper API is: detecting which watch face complication launched my app.

How It Works

When your app is launched in response to the user tapping a complication, the handleUserActivity method of your WKExtensionDelegate is called. You’re given a userInfo dictionary, and this is where we’d hope to find the details of which complication had launched us. Sadly there’s no CLKComplicationFamilyKey to let you know the user tapped the circular small rather than the utilitarian large to launch the app, but there is something we can use: the CLKLaunchedTimelineEntryDateKey. This gives us the exact date and time that the complication’s timeline entry was created. By remembering exactly when we created which complication, we can figure out which one resulted in the app being launched and act accordingly.

The Code

// 1.
class ComplicationTimeKeeper {
    static let shared = ComplicationTimeKeeper()
    var utilitarianLarge: Date?
    var utilitarianSmall: Date?
    var circularSmall: Date?
    var modularLarge: Date?
    var modularSmall: Date?
}

// 2. in your CLKComplicationDataSource
func getCurrentTimelineEntry(for complication: CLKComplication, withHandler handler: @escaping ((CLKComplicationTimelineEntry?) -> Void)) {
    // Call the handler with the current timeline entry
    switch {
    case .utilitarianLarge:
        let date = Date()
        ComplicationTimeKeeper.shared.utilitarianLarge = date
        let template = CLKComplicationTemplateUtilitarianLargeFlat()
        template.textProvider = CLKSimpleTextProvider(text: "Something")
        let timelineEntry = CLKComplicationTimelineEntry(date: date, complicationTemplate: template)
        handler(timelineEntry)
    default:
        handler(nil)
    }
}

// 3. in your WKExtensionDelegate
func handleUserActivity(_ userInfo: [AnyHashable : Any]?) {
    guard let userInfo = userInfo,
          let timelineDate = userInfo[CLKLaunchedTimelineEntryDateKey] as? Date else { return }

    if let utilLarge = ComplicationTimeKeeper.shared.utilitarianLarge,
       utilLarge.compare(timelineDate) == .orderedSame {
        WKExtension.shared().rootInterfaceController?.pushController(withName: "SomeController", context: nil)
    }
    // check the other stored complication dates in the same way
}
In 1, we create a singleton (no shameful hack is complete without one) to track when our various complications were made.

In 2, we set up the utilitarian large complication and store the creation date; just add more cases to the switch statement for the other complication families you support.

Finally, in 3, we check when the complication that launched the app was created, figure out which one it was, and launch the relevant interface controller.


The code above has a couple of limitations that you may need to work around. First, it doesn’t take Time Travel into account, so if your app supports that, each complication may have more than one corresponding datetime. Secondly (though in practice I haven’t seen this be an issue), I don’t see why two complications couldn’t have clashing datetimes; for that you could add a method to ComplicationTimeKeeper that returns the next unique date.
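That unique-date method could be as simple as the following sketch (a hypothetical addition, not part of the code above):

```swift
extension ComplicationTimeKeeper {
    // Returns the current date, nudged forward until it differs from
    // every date we've already handed out.
    func nextUniqueDate() -> Date {
        let used = [utilitarianLarge, utilitarianSmall, circularSmall,
                    modularLarge, modularSmall].compactMap { $0 }
        var candidate = Date()
        while used.contains(candidate) {
            candidate = candidate.addingTimeInterval(0.001)
        }
        return candidate
    }
}
```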

It’s Time For Complications

Apple made much of the value of complications at this year’s WWDC. Having originally not allowed you to make your own in watchOS 1, then allowing you but telling you it’s only if you really have something super important that gets updates throughout the day in watchOS 2, this year they told us we really need to have a complication even if it’s just an icon to launch your app. It seems they’ve noticed, as anyone who has worn Apple Watch for any reasonable amount of time will tell you, that complications are the best way to access the functionality of an app. But everything they talked about at WWDC was about having a complication, singular. You can support multiple complication families, but you can only have one of each, and they are treated as different views of a single feature - showing more data when you’ve the room, but not really doing anything different.

Ideally, we’d have the ability to provide multiple complications for each complication family. If that was the case you could have a watch face with each complication slot filled by the same application, each showing something else (the built in world clock complication can already do this, but nothing else) and crucially each performing a different function of your app when they’re tapped. I wouldn’t be surprised if this is something that is eventually supported in WatchKit, but for now at least we can ugly hack our way to using different complication families to provide different functionality.

Should Apple Deprecate UILongPressGestureRecognizer?

The answer is yes.

  • For anywhere you currently require a long press, move to 3D touch.
  • For anywhere you have different actions for both, make the long press action an option when 3D touching. (For example organising icons on the home screen.)
  • Make an accessibility preference that makes a long press behave as a progressively more forceful 3D touch.

Cheating on Swift Substrings

If you found yourself needing to get a substring of a String in Swift before you got around to the relevant chapter of the book, you were probably left scratching your head for a bit. How could something so conceptually simple be so awkward to perform?

Here’s a great article explaining how to find a substring in Swift, from Natasha The Robot.

It turns out that Swift Strings are much cooler than your old fashioned strings from other languages, and Swift Ranges are even cooler still. But unless you’re using them frequently, I find that

str.substringWithRange(Range(start: (advance(str.endIndex, -1)), end: str.endIndex))

doesn't exactly roll off the tongue.

So here's my cheat, which is to not use String at all. Arrays in Swift can be chopped up using plain integer ranges, and a String is essentially an `Array<Character>`. Swift even lets you iterate over the contents of a String and access each Character in turn, but it doesn't give you String subscripting.

So there's a couple of cheat options: implement subscript on String yourself, or, what I preferred, extend String to give you quick access to an Array representation of the String.

extension String {
    func asArray() -> [Character] {
        var array: [Character] = []
        for char in self {
            array.append(char)
        }
        return array
    }
}

You can then do fun stuff like this, which for me, reads very nicely.

let str = "Coma inducing corporate bollocks"
str.asArray().last // "s"
str.asArray()[10] // "i"
String(str.asArray()[2..<7]) // "ma in"

You don’t need to break out the big-O notation to see this isn’t going to perform well: you’re iterating over the entire string every time you want to get a piece of it, and then the array methods are going to do it again, so use with caution!

Death By Date Format String

Recently I learned that you probably always want “yyyy” and not “YYYY”.

let dateFormatter = NSDateFormatter()
dateFormatter.dateFormat = "YYYY-MM-dd"
let date = dateFormatter.dateFromString("2015-12-26")!
print(dateFormatter.stringFromDate(date))

This prints 2015-12-26. Obviously. So what about

let dateTwo = dateFormatter.dateFromString("2015-12-27")!
print(dateFormatter.stringFromDate(dateTwo))

It prints 2016-12-27.

Note that the year is 2016.

I was fortunate1 enough to get assigned a production crash bug this week that, after a long day of head scratching, turned out to be caused by this.

Interestingly, the NSDate created with the format is the date I expected, it represents 27 December 2015 and it’s only getting a string from the date with that format that gives you the ‘wrong’ year. Similarly an NSDate constructed in any other way that represents 27 December 2015 will behave the same.

The NSDateFormatter docs point you at Unicode Technical Standard Number 35 for the definitions of date format strings. I’ve looked at this before and I expect I’m not alone in having paid more attention to the day and month parts of the format. They’re usually what we’re interested in, because the year is always the year; at most we might prefer 2 or 4 digits, but that’s about as interesting as it gets. I suspect what happens fairly often (and what probably happened with our bug) is that the developer guessed at YYYY as the year format and, when it appeared to work just fine, assumed it was correct.

The relevant part of that standard states that y is the year, but Y is

Year (in “Week of Year” based calendars) … May not always be the same value as calendar year.

And the problem is that, as far as I can tell, it almost always is the same as the calendar year. The last few days of the year are the only ones I’ve seen causing problems. If it differed more often, it would be spotted more easily, and perhaps I would already have known that YYYY was wrong and spotted it as the error right away.

  1. If I consider it a learning opportunity and not an annoying time suck of a bug!

Optional Optionals

So here’s a confusing sentence.

With Swift functions you can have optional parameters, you can also have parameters that are optionals, and you can have optional parameters that are optionals.

A rather confused looking Donald Rumsfeld

Not taking the time to think about the 3 different levels of optionality in function parameters had me scratching my head for a few minutes today, but all the options (sorry) are useful and it’s not at all confusing once you remember them.

My scenario was that I created a function that does some stuff and then executes a closure supplied by the caller. Something like

func doSomeStuff(thenDoThis:()->())

Which a caller would call like

doSomeStuff {
	// and then do this stuff
}

But I want to let the caller decide whether they want to supply the closure or not, so if they like they could just call the function and be done.


So let’s make the closure optional. Easy, as with any type in Swift, we can mark it optional by including a ?

func doSomeStuff(thenDoThis:(()->())?)

So then if we go ahead and call

doSomeStuff() // Error: Missing argument for parameter #1 in call

But it was optional, so why the error? Well, it wasn’t optional in the sense that I could leave it out; it’s just that it was an optional type, which we are still expected to provide every time. As our type is an optional closure with no parameters and no return, we have to supply a closure with no parameters and no return, or nil. So we’d actually have to call

doSomeStuff(nil) // this works fine but isn't what we want

So how do you create an optional parameter, one that a caller can decide to leave out? To do that you provide a default value to be used for that parameter, right in the function declaration.

func doSomeStuff(thenDoThis:()->() = defaultClosure)

This means that if the caller doesn’t supply a value for thenDoThis we’ll use defaultClosure instead (assuming defaultClosure is defined elsewhere as a ()->().) We can now happily call the following if we don’t want to supply a closure.

doSomeStuff() // yay!

The behaviour I was interested in though was that if I didn’t supply a closure, that there would be no closure executed at all, not that some other one I had to define would be called instead. Well, I could just make defaultClosure do nothing, or just have the default value be {} like

func doSomeStuff(thenDoThis:()->() = {})

Which is fine, and maybe even the preferred way, but you can also have an optional optional parameter, and have its default value be nil.

func doSomeStuff(thenDoThis:(()->())? = nil)

Now if the caller omits the closure, thenDoThis will be nil, which makes more sense to me in this situation.
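Putting the pieces together, here’s a minimal sketch of the nil-default pattern (the function body and the `log` array are illustrative). Inside the function the optional closure is invoked with optional chaining, so nothing extra happens when it’s nil:

```swift
var log: [String] = []

// An optional closure parameter with a default of nil:
// callers can omit it entirely.
func doSomeStuff(thenDoThis: (() -> Void)? = nil) {
    log.append("stuff done")
    // Optional chaining: only runs if the caller supplied a closure.
    thenDoThis?()
}

doSomeStuff()                          // closure omitted, nothing extra runs
doSomeStuff { log.append("extra") }    // closure supplied and executed
```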

Basic ORM on top of FMDB and SQLite for iOS

Disclaimer: If you’re not sure whether you should be using SQLite for your iOS project then you probably shouldn’t be; CoreData is worth the learning curve.

When you do have cause to use SQLite, the FMDB wrapper makes using it from Objective-C a breeze. I won’t explain how to use FMDB; its API is very straightforward and you’ll find plenty of help elsewhere. The typical experience is that you execute a query, get back a lovely FMResultSet object, and extract values from it using your database column names–nice.

What would be slightly nicer is automatically mapping that result set onto a model object. So let’s make that a thing.

Example Time

We have a table in our database called People with the following fields:

  • personId
  • firstName
  • lastName
  • address
  • favouriteTellytubby

And it makes sense for us to have a Person class in our app, because maybe we’ll want to maintain a table of people and be able to view the detail of a person by passing a Person from the table view to the detail view. The Person class will be defined something like this:

@interface Person : NSObject

@property(nonatomic) NSInteger personId;
@property(nonatomic, copy) NSString *firstName;
@property(nonatomic, copy) NSString *lastName;
@property(nonatomic, copy) NSString *address;
@property(nonatomic, copy) NSString *favouriteTellytubby;

@end

So to create some Person objects we could alloc init a bunch of them and set their properties based on what we get back from the database; alternatively we could create a custom initialiser method that takes an FMResultSet and set them all that way. All of which is perfectly fine until you find yourself repeating it over and over again.

Homer Simpson making OJ the old fashioned way

For simple situations like this though, there is a better way (better as in less repetitive at least).

I’ve a simple class that I use as a base class for all my model objects. It provides an initialiser that takes a result set as a parameter and looks for columns in that result set with the same names as its properties.

@interface MBSimpleDBMappedObject : NSObject

-(instancetype)initWithDatabaseResultSet:(FMResultSet *)resultSet;

@end

#import <objc/runtime.h>

-(instancetype)initWithDatabaseResultSet:(FMResultSet *)resultSet
{
    self = [super init];
    if (self)
    {
        unsigned int propertyCount = 0;
        objc_property_t *properties = class_copyPropertyList([self class], &propertyCount);
        for (unsigned int i = 0; i < propertyCount; ++i)
        {
            objc_property_t property = properties[i];
            NSString *propertyName = [NSString stringWithUTF8String:property_getName(property)];
            [self setValue:[resultSet objectForColumnName:propertyName] forKey:propertyName];
        }
        free(properties);
    }
    return self;
}

What we’re doing here is quite simple, but it’s enabled by a couple of powerful Objective-C features. Firstly, at runtime we can dynamically retrieve the names of a loaded class’s properties; then we can simply set the values of those properties using key-value coding.

Those are the only 2 things happening here: get a list of the properties, then for each property set its value to the one from the result set with a matching column name.

This means all our model subclasses have to do is declare a bunch of properties, so all there is to those classes is the interface I described before, just subclassing MBSimpleDBMappedObject instead of NSObject like so.

@interface Person : MBSimpleDBMappedObject

@property(nonatomic, readonly) NSInteger personId;
@property(nonatomic, readonly, copy) NSString *firstName;
@property(nonatomic, readonly, copy) NSString *lastName;
@property(nonatomic, readonly, copy) NSString *address;
@property(nonatomic, readonly, copy) NSString *favouriteTellytubby;

@end

I’ve marked the properties read-only because all I’m interested in is a copy of what’s in the database; changing the values of those properties won’t update the database, though I do plan to add that functionality in the future. If this is all you need then you’re done–your Person implementation can be left blank.

A Note About Dates

If you’re familiar with SQLite and FMDB you’ll know they don’t really do dates, but you’ll probably find yourself wanting to keep track of some dates in the database. FMResultSet’s objectForColumnName will gladly give you a number or a string, but it doesn’t do NSDates. Here’s how I deal with that.

Better Example Time

Let’s change our People table a bit to make it a bit more useful, so our list of fields looks like:

  • personId
  • firstName
  • lastName
  • address
  • dateOfBirthTimestamp

and update our Person interface too

@interface Person : MBSimpleDBMappedObject

@property(nonatomic, readonly) NSInteger personId;
@property(nonatomic, readonly, copy) NSString *firstName;
@property(nonatomic, readonly, copy) NSString *lastName;
@property(nonatomic, readonly, copy) NSString *address;
@property(nonatomic, readonly) NSTimeInterval dateOfBirthTimestamp;
@property(nonatomic, readonly, strong) NSDate *dateOfBirth;

@end

With no other changes the dateOfBirthTimestamp property will be set correctly which may be enough, but you’d probably have to make an NSDate with it anytime you wanted to do anything useful with it. We’ve added an NSDate property, but as there is no corresponding column name, it will remain nil. That is until we override the initialiser as follows.

-(instancetype)initWithDatabaseResultSet:(FMResultSet *)resultSet
{
    self = [super initWithDatabaseResultSet:resultSet];
    if (self)
    {
        _dateOfBirth = [NSDate dateWithTimeIntervalSince1970:self.dateOfBirthTimestamp];
    }
    return self;
}


The base class will still map all the other properties, we just construct the NSDate.
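The conversion itself is a one-liner; here’s the same idea in Swift for reference (the timestamp value is illustrative):

```swift
import Foundation

// A Unix timestamp as it would come back from the database column.
let dateOfBirthTimestamp: TimeInterval = 662_688_000

// Construct the date exactly the way the overridden initialiser does.
let dateOfBirth = Date(timeIntervalSince1970: dateOfBirthTimestamp)
```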

Uploading Xcode Bot Builds to Testflight, with launchd

Continuous integration with Xcode is super easy to set up and does the basics of continuous integration really well. With almost no effort you’ll have nightly builds, test suites doing their thing, email alerts to committers, lovely graphs and even a cool dashboard thing for your big screen. I won’t go through setting that all up here; the Apple docs are excellent and there are plenty of other people who’ve already explained it better than I will.

Where things are less than straightforward is when you want to use the IPA file produced–to send it to your testers via TestFlight, to your remote teammates, your client or whoever.

The server executes an Xcode scheme, which defines your targets, build configuration, and tests. In the scheme there’s an opportunity to include custom scripts that run at various points, pre and post each of the scheme’s actions, so you can run a script pre-build or post-archive etc.

This post-archive step is the last place we can do some work, so it’s the obvious place to upload our build to TestFlight, right? Well, it would be, except the IPA file doesn’t exist yet at this point–it’s generated some time later. The process is:

  • Archive
  • Post archive scripts
  • ???
  • Generate IPA file

So if you want to upload to TestFlight what can you do? Well, the solution offered by everyone I’ve seen blogging about it is to make your own IPA using xcrun. That doesn’t sound so bad until you end up with code signing and keychain issues, and it’s all to do something that’s about to happen as soon as you’re done anyway.

My solution was to just wait until the IPA file was made. My initial naive attempts were to schedule the upload from the post-archive script using at or simply adding a delay for some amount of time while the IPA file didn’t exist. What I should have realised though is that the Bot will wait as long as I’m waiting and only when my script finishes will it continue and make the IPA file.

launchd to the rescue.

What I’ve ended up with, and which is working nicely for us, is a scheduled job on the build server which will notice any IPA files built by an Xcode bot, and upload them. I wasn’t familiar with launchd prior to this and was expecting to use cron, but it turns out launchd is the modern OS X way of scheduling jobs. There’s a great site showing you how to use launchd but I’ll show you what I have anyway.

What I have:

  1. A plist for launchd
  2. Plists for each project that explain where to send the build
  3. A shell script that looks for IPA files and sends them to TestFlight or FTP using the information from 2.

1. The launchd plist

This is placed in /Library/LaunchDaemons and simply tells launchd that we want to run our script every 600 seconds. You could schedule it to run once a day or any other interval, I left it at 10 minutes so any bots that are run on commit or are started manually will have their builds uploaded right away rather than at the end of the day.

<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.mbehan.upload-builds</string>
    <key>ProgramArguments</key>
    <array>
        <string>/ci_scripts/</string>
    </array>
    <key>StartInterval</key>
    <integer>600</integer>
    <key>StandardOutPath</key>
    <string>/tmp/build-uploads.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/build-uploads.log</string>
</dict>
</plist>

2. Per project plist

If we want a build to be uploaded automatically, it needs a plist telling it where to go. We share builds with one of our clients via FTP, so there is a Method key for that, and a different set of keys is required if its value is FTP rather than TestFlight. I keep these plists in the same directory as the script.

<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Method</key>
    <string>TestFlight</string>
    <key>ProductName</key>
    <string>Some App.ipa</string>
    <key>APIToken</key>
    <string>GET THIS FROM TESTFLIGHT</string>
    <key>TeamToken</key>
    <string>AND THIS</string>
</dict>
</plist>

3. Checking for IPA files, uploading

We’re using find with the -mtime option here to find recently created files with the name specified in the plist. If we find a file we then either use curl to upload to TestFlight or we send it via FTP depending on the method indicated in the plist.

You can remove the stuff for FTP if you only care about TestFlight, and you might want to add extra detail to the plist such as distribution lists.



#!/bin/bash
# Process every per-project plist kept alongside this script.
for f in /ci_scripts/*.plist
do
	echo Processing $f "..."
	productName="$(/usr/libexec/plistbuddy -c Print:ProductName: "$f")"
	echo $productName
	ipaPath=$(find /Library/Server/Xcode/Data/BotRuns/*/output/"$productName" -mtime -15m | head -1)
	if [ ${#ipaPath} -gt 0 ]; then
		echo "Have IPA FILE: " $ipaPath
		method="$(/usr/libexec/plistbuddy -c Print:Method: "$f")"
		if [ $method == "FTP" ]; then
			echo "Attempting FTP ..."
			host="$(/usr/libexec/plistbuddy -c Print:FTPHost: "$f")"
			user="$(/usr/libexec/plistbuddy -c Print:UserName: "$f")"
			pass="$(/usr/libexec/plistbuddy -c Print:Password: "$f")"
			filename="$(/usr/libexec/plistbuddy -c Print:FileNameOnServer: "$f")"
			hostDir="$(/usr/libexec/plistbuddy -c Print:DirectoryOnServer: "$f")"

			date=`date +%y-%m-%d`

			ftp -inv $host <<-ENDFTP
			user $user $pass
			cd $hostDir
			mkdir $date
			cd $date
			put "$ipaPath" "$filename"
			ENDFTP
		elif [ $method == "TestFlight" ]; then
			echo "Attempting TestFlight ..."
			apiToken="$(/usr/libexec/plistbuddy -c Print:APIToken: "$f")"
			teamToken="$(/usr/libexec/plistbuddy -c Print:TeamToken: "$f")"
			/usr/bin/curl "http://testflightapp.com/api/builds.json" \
			  -F file=@"$ipaPath" \
			  -F api_token="$apiToken" \
			  -F team_token="$teamToken" \
			  -F notes="Automated Build"
		fi
	fi
done
This all assumes you’ve set up your provisioning profile and code signing identity correctly for the build configuration used by your Xcode scheme. Make sure the configuration used in the archive step (Release by default) produces a build that the people you want to share builds with will be able to install.

Simple Dynamic Image Lighting with CoreImage

With the kind of apps I usually make, I often end up doing a lot of gamey looking things right inside of UIKit. The addition of UIDynamics made one of those jobs, gravity, super easy. I wanted the same kind of simplicity for lights.

Animated figure being dynamically lit by 3 moving coloured lights

Using The Code

It only works on image views for now, but it works well and frame rates are good (much better than the gif lets on) for all but very large images on older devices. You can get all the code on github and using it should be pretty simple.

You just create a lighting controller, add some light fixtures and image views you want to be lit to the controller, and let it know when you need to update the lighting (when we’re moving the lights in the example above). Here’s the interface for the MBLightingController:

@interface MBLightingController : NSObject

@property(nonatomic) BOOL lightsConstantlyUpdating;

-(void)addLightFixture:(id<MBLightFixture>)lightFixture;
-(void)addLitView:(MBLitAnimationView *)litView;

@end


Only set lightsConstantlyUpdating if the lighting is always changing (this came about because I was playing around with adding a light to a rope with UIDynamics, which you can see in the project on github.)

So, there are a couple of things there that you might not recognise: the MBLightFixture protocol, and MBLitAnimationView.

Anything can be a light, so long as it implements the protocol, which means it needs a position, intensity, range and color. I’ve just been using a UIView subclass but maybe your light will be a CAEmitterLayer or something.

MBLitAnimationView can be used everywhere you’d use a UIImageView, it just adds the ability to be lit, and makes working with animation easier.
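The actual protocol is Objective-C, but the shape the post describes (a position, intensity, range and colour) can be sketched in Swift. All names here are assumptions for illustration, not the real MBLightFixture declaration:

```swift
import Foundation

// Hypothetical Swift rendering of the light-fixture requirements.
protocol LightFixture {
    var position: CGPoint { get }
    var intensity: Double { get }   // 0.0 ... 1.0
    var range: Double { get }       // fall-off distance in points
    var colorRGBA: (Double, Double, Double, Double) { get }
}

// Anything can be a light, so long as it conforms -- even a plain struct.
struct SimpleLight: LightFixture {
    let position: CGPoint
    let intensity: Double
    let range: Double
    let colorRGBA: (Double, Double, Double, Double)
}
```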

Your view controller’s viewDidLoad might include something like this:

//create the lighting controller
self.lightingController = [[MBLightingController alloc] init];
//add an image to be lit
MBLitAnimationView *bg = [[MBLitAnimationView alloc] initWithFrame:self.view.bounds];
bg.ambientLightLevel = 0.1; // very dark
[bg setImage:[UIImage imageNamed:@"wall"]];
[self.view addSubview:bg];
[_lightingController addLitView:bg];
//add a light
SimpleLightView *lightView = [[SimpleLightView alloc] initWithFrame:CGRectMake(200, 200, 25, 25)];
lightView.intensity = @0.8;
lightView.tintColor = [UIColor whiteColor];
lightView.range = @250.0;
[self.view addSubview:lightView];
[_lightingController addLightFixture:lightView];

How It Works

The light effect is achieved using CoreImage filters and everything happens in the applyLights method of MBLitAnimationView.

I experimented with a bunch of different filters trying to get the right effect, and there were several that worked so just try switching out the filters if you want something a little different.

Multiple filters are chained together, first up we need to darken the image using CIColorControls:

CIFilter *darkenFilter = [CIFilter filterWithName:@"CIColorControls" keysAndValues:
                             @"inputImage", currentFrameStartImage,
                             @"inputSaturation", @1.0,
                             @"inputContrast", @1.0,
                             @"inputBrightness", @(0-(1-_ambientLightLevel)), nil];

Then, for every light that we have, we create a CIRadialGradient:

CIFilter *gradientFilter = [CIFilter filterWithName:@"CIRadialGradient" keysAndValues:
                               @"inputRadius0", [light constantIntensityOverRange] ? [light range] : @0.0,
                               @"inputRadius1", [light range],
                               @"inputCenter", [CIVector vectorWithCGPoint:inputPoint0],
                               @"inputColor0", color0,
                               @"inputColor1", color1, nil];

Then we composite the gradients with the darkened image using CIAdditionCompositing:

lightFilter = [CIFilter filterWithName:@"CIAdditionCompositing" keysAndValues:
                  @"inputImage", gradients[i],
                  @"inputBackgroundImage", [lightFilter outputImage], nil];

Finally, we mask the image to the shape of the original image:

CIFilter *maskFilter = [CIFilter filterWithName:@"CISourceInCompositing" keysAndValues:
                           @"inputImage", [lightFilter outputImage],
                           @"inputBackgroundImage", currentFrameStartImage, nil];

Just set the image view’s image property to a UIImage created from the final filter’s output and we’re done!

CGImageRef cgimg = [coreImageContext createCGImage:[maskFilter outputImage]
                                          fromRect:[currentFrameStartImage extent]];
UIImage *newImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
imageView.image = newImage;
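Numerically, the chain boils down to simple per-channel arithmetic: CIColorControls lowers each channel by (1 − ambient level), CIAdditionCompositing adds each light’s gradient contribution, and the result clamps to the displayable range. Here’s a Swift sketch of that maths (an approximation of what the filters do, not CoreImage itself):

```swift
// Per-channel approximation of the filter chain:
// darken by (1 - ambient), add the light gradients, clamp to 0...1.
func litValue(original: Double, ambientLightLevel: Double, lightContributions: [Double]) -> Double {
    let darkened = max(0, original - (1 - ambientLightLevel))
    let added = lightContributions.reduce(darkened, +)
    return min(1, added)
}
```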

What’s Next?

Playing with CoreImage was fun, so I think I’ll revisit the code at some point in the future. I’d like to try it out with SpriteKit’s SKEffectNode, where it makes more sense for games. Or I might keep working with UIKit and get it working for any view–shiny / shadowy interfaces might be interesting.

UIImageView Animation, But Less Crashy

Animation with UIImageView is super simple and for basic animations it is just what you need. Just throw an array of images at your image view and tell it to go, and it will go. For animations of more than a few frames though, its simplicity is also its failing–an array of UIImages is handy to put together, but if you want large images or a reasonable number of frames then that array can take up a serious chunk of memory. If you’ve tried any large animations with UIImageView you’ll know things get crashy very quickly.

There are also a few features–like knowing which frame is currently being displayed, or setting a completion block–that you regularly find yourself wanting when dealing with animations, so I’ve created MBAnimationView to provide those, and to overcome the crash-inducing memory problems.

My work was informed by the excellent Mo DeJong and you should check out his PNGAnimatorDemo which I’ve borrowed from for my class.

How It Works

The premise for the memory improvements is the fact that image data is compressed, and loading it into a UIImage decompresses it. So, instead of having an array of UIImage objects (the decompressed image data), we’re going to work with an array of NSData objects (the compressed image data). Of course, in order to ever see the image, it will have to be decompressed at some point, but what we’re going to do is create a UIImage on demand for the frame we want to display next, and let it go away when we’re done displaying it.

So MBAnimationView has a UIImageView; it creates an array of NSData objects, then on a timer it creates each frame image from the data and sets the image view’s image to it. It’s that simple.
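The decode-on-demand loop can be sketched in a few lines of Swift (the type and method names here are illustrative, not MBAnimationView’s actual API):

```swift
import Foundation

// Holds compressed frame data and hands out one frame at a time.
// In the real view, each Data would become a short-lived UIImage
// that goes away once the next frame replaces it.
final class FrameSequencer {
    private let frames: [Data]   // compressed image bytes, e.g. PNG data
    private var index = 0

    init(frames: [Data]) { self.frames = frames }

    // Returns the next frame's compressed data, wrapping to repeat.
    func nextFrame() -> Data {
        defer { index = (index + 1) % frames.count }
        return frames[index]
    }
}
```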


As expected, crashes using the animationImages approach disappeared with MBAnimationView, but to understand why, I tested the following 2 pieces of code for different numbers of frames, recording memory usage, CPU utilisation and load time.

MBAnimationView *av = [[MBAnimationView alloc] initWithFrame:CGRectMake(0, 0, 350, 285)];
[av playAnimation:@"animationFrame"
        withRange:NSMakeRange(0, 80)
    numberPadding:2
           ofType:@"png"
              fps:25
           repeat:kMBAnimationViewOptionRepeatForever
       completion:nil];
[self.view addSubview:av];
UIImageView *iv = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 350, 285)];
iv.animationImages = @[[UIImage imageNamed:@"animationFrame00"],
                       [UIImage imageNamed:@"animationFrame01"],
                       // ...
                       [UIImage imageNamed:@"animationFrame79"]];
[self.view addSubview:iv];
[iv startAnimating];


Starting off with small numbers of frames it’s not looking too good for our new class, UIImageView is using less memory and significantly less CPU.

10 Frames        Memory Average / Peak    CPU Average / Peak
UIImageView      4.1MB / 4.1MB            0% / 1%
MBAnimationView  4.6MB / 4.6MB            11% / 11%

20 Frames        Memory Average / Peak    CPU Average / Peak
UIImageView      4.4MB / 4.4MB            0% / 1%
MBAnimationView  4.9MB / 4.9MB            11% / 11%

But things start looking up for us as more frames are added. MBAnimationView continues to use the same amount of CPU–memory usage is creeping up, but there are no spikes. UIImageView, however, sees some very large spikes during setup (the peak values below).

40 Frames        Memory Average / Peak    CPU Average / Peak
UIImageView      4.1MB / 65MB             0% / 8%
MBAnimationView  5.7MB / 5.7MB            11% / 11%

80 Frames        Memory Average / Peak    CPU Average / Peak
UIImageView      4.5MB / 119MB            0% / 72%
MBAnimationView  8.4MB / 8.4MB            11% / 11%

Those peak memory numbers are big enough to cause crashes in a lot of situations, and remember this is for a single animation.

The Trade Off

There has to be one, of course, but it turns out not to be a deal breaker. Decompressing the image data takes time, and we’re doing it during the animation rather than up front, but it doesn’t prevent us playing animations at 30 fps and even higher. On the lower-end devices I’ve tested (iPad 2, iPhone 4) there doesn’t seem to be any negative impact; in light of that, I’m surprised the default animation mechanism provided by UIImageView doesn’t take the same approach as MBAnimationView.

MBAnimationView on github