Custom Transitions in iOS

The other night I had the opportunity to present on custom animations and transitions at the monthly San Antonio iOS Developer Meetup. I created a Playground that goes over the basics of animation in iOS, as well as a prototype duplicating the fancy card transition from the App Store.

In the same repo, there's also a partially complete project detailing how to implement your own transition from one view to another. Feel free to play around with the code.
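For reference, the core of a custom transition is an object conforming to UIViewControllerAnimatedTransitioning, which UIKit asks for a duration and an animation. A minimal sketch of the idea (the class name and 0.3-second duration are illustrative, not taken from the repo):

```swift
import UIKit

// A minimal fade transition — the names here (FadeAnimator, the 0.3s
// duration) are illustrative, not from the meetup project.
final class FadeAnimator: NSObject, UIViewControllerAnimatedTransitioning {

    func transitionDuration(using context: UIViewControllerContextTransitioning?) -> TimeInterval {
        return 0.3
    }

    func animateTransition(using context: UIViewControllerContextTransitioning) {
        guard let toView = context.view(forKey: .to) else {
            context.completeTransition(false)
            return
        }

        // Add the destination view invisibly, then fade it in.
        context.containerView.addSubview(toView)
        toView.alpha = 0

        UIView.animate(withDuration: transitionDuration(using: context), animations: {
            toView.alpha = 1
        }, completion: { _ in
            // Always report completion so UIKit can clean up.
            context.completeTransition(!context.transitionWasCancelled)
        })
    }
}
```

The animator gets returned from the presenting controller's transitioningDelegate; the App Store-style transition in the Playground is the same shape, just with a much fancier animation in the middle.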


I listen to a podcast about pens. A weekly podcast about pens. Yes, there really is enough pen-related news to have a weekly podcast. So much stationery-related content, in fact, that there are dozens (dozens!) of websites and blogs dedicated to the wonderful world of pens, pencils, paper and ink! Over time, I've slowly branched out from a few core pen blogs to more than I can name off the top of my head. I decided to make Nibbler, a free, little iOS app for aggregating pen and paper news.

Why make a pen news app? I could easily add all of these sites to an RSS reader, just like I do for all my tech news sites, but it just didn't feel right to me. A lot of the posts from these pen blogs feature well-composed images of the shades of a new ink, the body of a fancy pen, etc. Most of the RSS readers I've used on iOS don't do a great job of showing off these pictures while browsing through posts. They also strip out a lot of the stylized flair that gives each site its character. While browsing through the posts from dozens of pen blogs, Nibbler presents the leading image of each post. I've found some of the photos lead me to tap on posts I normally wouldn't have an interest in (I might have a budding pencil attraction).

So, how does it work? At launch, Nibbler pulls in RSS feeds from two dozen pen, paper, and pencil blogs. Each of these RSS subscriptions can be toggled on or off, so if you are just sick and tired of looking at Ed Jelley’s beautiful photography (I do not think this is humanly possible), you don’t have to.
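At its simplest, per-feed toggling can be modeled as a set of disabled feed identifiers that posts are filtered against. A sketch of the idea (the type and names are my own, not Nibbler's actual code):

```swift
import Foundation

// Hypothetical sketch of per-feed subscription toggles — the names
// here are illustrative, not Nibbler's actual implementation.
struct FeedSubscriptions {
    private(set) var disabled: Set<String> = []

    // Flip a feed on or off by its identifier (e.g. the blog's URL).
    mutating func toggle(_ feedID: String) {
        if disabled.contains(feedID) {
            disabled.remove(feedID)
        } else {
            disabled.insert(feedID)
        }
    }

    func isEnabled(_ feedID: String) -> Bool {
        return !disabled.contains(feedID)
    }

    // Keep only posts from feeds that are still switched on.
    func filter<Post>(_ posts: [Post], feedID: (Post) -> String) -> [Post] {
        return posts.filter { isEnabled(feedID($0)) }
    }
}
```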

With all of the RSS posts loaded, a nice visual feed shows off each post's leading image and title. Tapping on a post opens up a full web view of the article, showing off the complete website. If the user prefers, Safari's Reader Mode can be enabled, which simplifies the page, showing just the article text and allowing the user to set a preferred font and background color.
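One way to get that behavior on iOS is SFSafariViewController, whose configuration can enter Reader automatically when the page supports it. A sketch of the approach (the function and parameter names are mine, not Nibbler's actual code):

```swift
import SafariServices
import UIKit

// Present a post in an in-app Safari view, entering Reader
// automatically when the article supports it — an illustrative
// sketch, not Nibbler's actual implementation.
func openPost(url: URL, from viewController: UIViewController, preferReader: Bool) {
    let configuration = SFSafariViewController.Configuration()
    configuration.entersReaderIfAvailable = preferReader

    let safari = SFSafariViewController(url: url, configuration: configuration)
    viewController.present(safari, animated: true)
}
```

SFSafariViewController also brings along the user's own font and background preferences for Reader, which is how those tweaks come for free.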

Apart from Reader View and subscription preferences, there are also themes and alternate app icons! Colorful, ink-inspired themes and icons (a dark mode too)! I'm a big fan of apps with dark modes and the ability to make small tweaks. When making Nibbler, I knew there had to be a variety of themes and alternate icons to pick from. Also, as a pen person (dare I say addict?), I switch inks out of my pens depending on the weather, so of course I'm going to want to change the way my app looks. The themes are based on some of my favorite inks, and I will be adding more in the future. All current and future themes/icons are unlockable via Nibbler's only in-app purchase for $1.99 (I need a way to feed my habit, okay?).
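Alternate icons are a stock iOS feature: the icons get declared in the app's Info.plist under CFBundleAlternateIcons, then swapped at runtime. A sketch of the swap (the function name is mine):

```swift
import UIKit

// Switch to an alternate icon declared in Info.plist under
// CFBundleAlternateIcons — an illustrative sketch, not Nibbler's
// actual theme code.
func applyIcon(named name: String?) {
    guard UIApplication.shared.supportsAlternateIcons else { return }

    // Passing nil restores the primary icon.
    UIApplication.shared.setAlternateIconName(name) { error in
        if let error = error {
            print("Icon change failed: \(error.localizedDescription)")
        }
    }
}
```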

I’m fairly new to iOS development and Nibbler is my first real app in the App Store. I’m a little nervous putting it out there. It’s something I’ve put dozens of hours into. I hope that people will like it and find it useful, discovering new pen blogs and enjoying the way the app works. I’ve loved working on this app, and could continue pushing out the release forever, continually adding new features, telling myself that it’s not quite ready yet, but I think it’s at a good state for a 1.0. If you find any issues, have an idea for a feature, or would like an additional blog added to the feed, send me an email or reach out to me on Twitter.

You can find Nibbler on the App Store.

Core ML and Vision for iOS

Apple showed off some spectacular tech demos throughout this year's WWDC, particularly those related to ARKit and its underlying machine learning framework, Core ML. I've only dabbled with basic machine learning in the past (K-Nearest Neighbors barely counts), and was intrigued by the amount of abstraction Apple provides for implementing computer vision, natural language processing, and working with custom models. After watching the Introducing Core ML session, I decided to get my hands dirty and create a little image recognition app fueled by machine learning.

Starting off was easy enough: after setting up a basic single-page application with a UIImageView, a couple of labels, and a set of buttons, I picked a demo model (I went with Inception V3) and dragged it into Xcode 9. With the model in the project, all that was left was to reference it and have it make a prediction based on an image the user provides.

Once the model has been imported, clicking on the file in Xcode 9 will reveal details specific to the model, including its inputs and outputs. In the case of Inception V3, the model expects an image and will return a dictionary of labels and probabilities.
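Xcode also generates a Swift class for the imported model, which can be called directly without Vision if you already have a correctly sized CVPixelBuffer. A sketch, assuming the class Xcode generated for the Apple-provided Inception V3 model:

```swift
import CoreML

// Using the Xcode-generated Inceptionv3 class directly, without
// Vision. The model expects a 299×299 pixel buffer; converting a
// UIImage to a CVPixelBuffer of that size is left out for brevity.
func classify(pixelBuffer: CVPixelBuffer) throws -> (label: String, probability: Double) {
    let model = Inceptionv3()
    let output = try model.prediction(image: pixelBuffer)

    // classLabel is the top prediction; classLabelProbs maps every
    // label to its probability.
    let probability = output.classLabelProbs[output.classLabel] ?? 0
    return (output.classLabel, probability)
}
```

Vision, used below, is the easier path because it handles the image scaling and conversion for you.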

Using the model

// Wrap the generated Core ML model for use with Vision
let model = try VNCoreMLModel(for: Inceptionv3().model)
// Results will be delivered to the displayPredictions completion handler
let request = VNCoreMLRequest(model: model, completionHandler: displayPredictions)
// Kick off the request against the user's image
let handler = VNImageRequestHandler(cgImage: image.cgImage!)
try handler.perform([request])

These two code blocks are where the magic happens. Above, I reference the imported model, specify a completion handler, provide the image, and initiate the prediction.

Viewing predictions

func displayPredictions(request: VNRequest, error: Error?) {
    // Make sure we have a result
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("Bad prediction") }

    // Sort results by confidence, highest first
    let sorted = results.sorted(by: { $0.confidence > $1.confidence })

    // Show the top three predictions (confidence is reported as 0–1)
    print("\(sorted[0].identifier) - \(sorted[0].confidence * 100)%")
    print("\(sorted[1].identifier) - \(sorted[1].confidence * 100)%")
    print("\(sorted[2].identifier) - \(sorted[2].confidence * 100)%")
}
When the prediction request completes, the completion handler above is called, handling the results. I did a simple sort based on the confidence percentage provided by the model and displayed the top three results.

Success: 98.32451%

Pretty dang easy. If you'd like to take a look at my example project, head on over to GitHub.

Pastebot Filters

On a recent episode of The Talk Show, John Gruber mentioned Pastebot, a clipboard manager from the good folks at Tapbots. I decided to give the app a try and have been impressed by both its ease of use and expandability. Not only does Pastebot maintain a history of your Mac's clipboard, it also has a handy 'Filters' feature, allowing you to modify text on the fly.

The clipboard

For a simple example, I created a filter that converts a plaintext URL into a Markdown link. All I need to do is copy a URL, invoke Pastebot (Shift + Cmd + V), select the filter icon next to the clipboard item, apply the correct filter, and voilà: [Link Text](
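Under the hood, a filter like this is essentially a find-and-replace over the clipping. The same transform, sketched in Swift with an illustrative regex pattern (not Pastebot's actual filter definition):

```swift
import Foundation

// Wrap a plaintext URL in Markdown link syntax — the same idea as
// the filter above; the pattern and placeholder text are illustrative.
func markdownLink(from clipping: String) -> String {
    let pattern = "(https?://\\S+)"
    return clipping.replacingOccurrences(
        of: pattern,
        with: "[Link Text]($1)",
        options: .regularExpression
    )
}
```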

A preview of how the filter affects the clipping

If you’re feeling really dangerous, Pastebot also allows Shell scripts to be run as a part of filters, allowing for powerful operations to be performed on paste. A great example of this is a default filter which uses a script to convert Markdown to HTML.

If you're interested, I have made available a few simple Pastebot filters for creating Markdown content: lists, blockquotes, URLs and images. You can find them here.