Tutorial – Using DeepLearningKit with iOS for iPhone and iPad

1. Clone DeepLearningKit: git clone

(screenshot)

2. Clone the demo app: git clone

(screenshot)

3. Open DeepLearningKitForiOSDemoApp.xcodeproj in Xcode (e.g. from Finder)

(screenshot)

4. Have a look at ViewController.swift – notice that the import DeepLearningKitForiOS statement gives an error (shown in red)

(screenshots)

5. Open Finder and drag DeepLearningKitForiOS.xcodeproj over to the demo app in Xcode

(screenshot)

6. The highlighted line below shows the framework project DeepLearningKitForiOS.xcodeproj after it has been included

(screenshot)

7. Click on the app settings (highlighted line in the left part of Xcode) and go to the General tab on the right

(screenshot)

8. Scroll down to Embedded Binaries in the General tab and add DeepLearningKitForiOS.framework

(screenshot)

9. The result afterwards should look something like this – embedded binaries in the lower right

(screenshot)

10. Drag the Shaders.metal file from DeepLearningKitForiOS into the top-level project

(not quite sure why this needs to be done, but anyway)

(screenshots)

11. Connect an iPhone (e.g. an iPhone 6S) to your Mac, then compile and run – you should get something like this

(screenshot)

DeepLearningKit – Deep Learning for iOS (tested on iPhone 6S), tvOS and OS X developed in Metal and Swift

In early October we purchased the new iPhone 6S and had high expectations of its GPU performance. One of the reasons for our expectations was a blog post by Simon Gladman, where he wrote that the iPhone 6S had 3 times the GPU performance of the iPhone 6; this was also reported by TheNextWeb.

In our GPU programming case (developing Deep Learning algorithms with Metal), going from the iPhone 5S to the iPhone 6S gave one order of magnitude of improved performance! The calculation time to run through a 20-layer deep convolutional neural network model for image recognition went from approximately 2 seconds to less than 100 milliseconds. Note that 100 milliseconds, or in other words 0.1 seconds, is what Jakob Nielsen stated is one of the 3 important response-time limits – the point at which a user feels a system reacts instantaneously.
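For context on how such latency numbers can be collected, a forward pass can be timed with a monotonic clock in Swift. Here runForwardPass is a hypothetical stand-in for the actual network invocation (a dummy workload, not DeepLearningKit API):

```swift
import Foundation
import Dispatch

/// Hypothetical stand-in for a 20-layer CNN forward pass,
/// replaced here by a dummy numeric workload so the sketch runs anywhere.
func runForwardPass() {
    var acc = 0.0
    for i in 1...100_000 { acc += Double(i).squareRoot() }
    _ = acc
}

// Time the forward pass with a monotonic clock (unaffected by wall-clock changes).
let start = DispatchTime.now()
runForwardPass()
let end = DispatchTime.now()
let milliseconds = Double(end.uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000.0
print("forward pass took \(milliseconds) ms")
```

The same pattern, wrapped around a real network invocation, is how one would check whether a device stays under the 100 ms threshold.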

This blog post gives a brief overview of DeepLearningKit – a Deep Learning Kit for iOS, OS X and tvOS. It is developed in Metal, in order to make efficient use of the GPU, and in Swift, for setting up Metal as well as loading data and integrating with apps.

1. DeepLearningKit – GPU Accelerated Deep Learning for Apple’s iOS, tvOS and OS X with Metal and Swift

DeepLearningKit currently implements Convolutional Neural Networks in Metal (parallelized for the GPU); the deep learning layer operators include convolution, pooling and ReLU.
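As a rough sketch of what those three operators compute, here is a plain-Swift CPU reference (single channel, no padding, stride 1) – the Metal shaders run parallelized GPU versions of the same math:

```swift
/// ReLU: max(0, x) applied elementwise.
func relu(_ input: [Float]) -> [Float] {
    return input.map { max(0, $0) }
}

/// Valid (no-padding) 2-D convolution of a single channel with one kernel.
func convolve2D(input: [[Float]], kernel: [[Float]]) -> [[Float]] {
    let kh = kernel.count, kw = kernel[0].count
    let oh = input.count - kh + 1, ow = input[0].count - kw + 1
    var out = [[Float]](repeating: [Float](repeating: 0, count: ow), count: oh)
    for y in 0..<oh { for x in 0..<ow {
        var sum: Float = 0
        for ky in 0..<kh { for kx in 0..<kw {
            sum += input[y + ky][x + kx] * kernel[ky][kx]
        } }
        out[y][x] = sum
    } }
    return out
}

/// 2x2 max pooling with stride 2: keep the largest value in each 2x2 block.
func maxPool2x2(_ input: [[Float]]) -> [[Float]] {
    let oh = input.count / 2, ow = input[0].count / 2
    var out = [[Float]](repeating: [Float](repeating: 0, count: ow), count: oh)
    for y in 0..<oh { for x in 0..<ow {
        out[y][x] = max(input[2*y][2*x], input[2*y][2*x+1],
                        input[2*y+1][2*x], input[2*y+1][2*x+1])
    } }
    return out
}
```

For example, relu([-1, 2]) returns [0, 2], and maxPool2x2 on [[1, 2], [3, 4]] returns [[4]]. On the GPU each output element of these loops becomes one parallel thread.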

On OS X, DeepLearningKit can easily be adapted to utilize several GPUs if present, e.g. to run the same deep learning model on several GPUs to increase throughput, or to run different models in order to increase the number of classes to predict over.

let GPUs = MTLCopyAllDevices()

gave the following on a (2012) Retina MacBook Pro:

(screenshot of the returned list of GPU devices)

An interesting feature on iOS (and most likely on tvOS, but not yet tested in our case) is that the CPU and GPU can share memory (less copying of data).

2. App Store for Deep Learning Models

Given the immense asymmetry between the time taken to train a Deep Learning model and the time needed to use it (e.g. to do image recognition), it makes perfect sense to build a large repository of pre-trained models that can be (re)used several times. Since there are several popular tools used to train Deep Learning models (e.g. Caffe, Torch, Theano, DeepLearning4J, PyLearn and Nervana), we're working on supporting the import of pre-trained models from those tools into an “app store” for deep learning models (currently we have primarily been working with Caffe CNN models).

(screenshot of tweet)

The tweet above illustrates how much energy is required to train a Deep Network (per night); some Deep Learning models can take weeks of training on GPUs like the Nvidia Titan X – in other words, piles of wood worth of energy. Using a trained model is quite different, since it requires less energy than lighting a match.



Deep Learning models also typically have a (low) limit on the number of classes they can predict per model (e.g. in the ImageNet competition there are 1000 classes, in CIFAR-100 100 classes and in CIFAR-10 10 classes). This means that in order to create real-life applications one needs to intelligently switch between several Deep Learning models (loading them very rapidly from SSD into GPU-accessible RAM), or, if there is enough capacity, run several models in parallel on the same GPU. Selecting an appropriate Deep Learning model (i.e. the one most likely to work well in a given context) is to our knowledge not a well-studied field of research, and in some ways it resembles the meta or universal search problem found in web search (e.g. cross-model ranking), but latency plays an even bigger part in the mobile on-device case (there isn't time to run many models).

With state-of-the-art compression techniques for Convolutional Neural Networks, the (groundbreaking) AlexNet model from 2012 can be compressed from 240MB to 6.9MB. This means that one could theoretically fit more than eighteen thousand AlexNet models on a 128 GB mobile device like the iPhone 6!
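The eighteen-thousand figure is a quick back-of-the-envelope calculation:

```swift
// How many 6.9 MB compressed AlexNet models fit in 128 GB of flash storage?
let deviceBytes = 128.0 * 1_000_000_000.0  // 128 GB device
let modelBytes = 6.9 * 1_000_000.0         // 6.9 MB compressed AlexNet
let modelsThatFit = Int(deviceBytes / modelBytes)
print(modelsThatFit)  // prints 18550
```

So roughly 18,550 compressed models – comfortably "more than eighteen thousand" (ignoring the space the OS and apps actually take).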


Deep Learning on iOS, tvOS and OS X devices is still in its infancy, and the open source DeepLearningKit hopes to play a part in it. Check out our DeepLearningKit tutorial at



DeepLearningKit – Open Source Deep Learning Framework for Apple’s iOS, OS X and tvOS

Happy to announce that an (early) version of DeepLearningKit is available on:


0. What does DeepLearningKit do?

It currently allows using deep convolutional neural network models trained in Caffe on Apple's iOS, OS X and tvOS (transformed from protobuf into JSON with the tool at – a tutorial about this will come later).
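As a sketch of what consuming such a JSON network description could look like in Swift with Codable – note that the field names below are hypothetical, not DeepLearningKit's actual schema:

```swift
import Foundation

// Hypothetical JSON schema for a converted Caffe model;
// the real DeepLearningKit format may differ.
struct LayerSpec: Codable {
    let name: String
    let type: String        // e.g. "convolution", "pooling", "relu"
    let weights: [Float]?   // flattened weights, if the layer has any
}

struct NetworkSpec: Codable {
    let name: String
    let layers: [LayerSpec]
}

let json = """
{
  "name": "cifar10_nin",
  "layers": [
    { "name": "conv1", "type": "convolution", "weights": [0.1, -0.2] },
    { "name": "relu1", "type": "relu" }
  ]
}
""".data(using: .utf8)!

let network = try! JSONDecoder().decode(NetworkSpec.self, from: json)
print(network.layers.count)  // prints 2
```

Whatever the actual field names, the idea is the same: the protobuf-to-JSON conversion happens once offline, and the app only needs a lightweight decoder at load time.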

1. Open Source Licence?

Apache 2.0

2. How to get started?

Have a look at Tutorial – Using DeepLearningKit with iOS for iPhone and iPad – it is about using a pre-trained CIFAR-10 Network in Network example.

3. What is DeepLearningKit developed in?

It is developed in Metal (for GPU acceleration) and Swift (for app integration). I believe DeepLearningKit is the first (public) Deep Learning tool that uses the Metal compute API for GPUs (Metal is Apple's recommended way to program its GPUs).

4. More documentation

More tutorials and a paper describing DeepLearningKit will be made available on (+ for the paper)

5. I love developing for Apple’s [iOS,OS X or tvOS] and would like to contribute to this project, how?

Here are a few thoughts:

  1. Fork repo(s), play with it/them and provide feedback or fixes.
  2. Create apps that use DeepLearningKit (disclaimer: still very early version) and tell us about them.
  3. Try (and perhaps adapt) different types of deep neural networks to DeepLearningKit, e.g.
    1. Microsoft Research’s ImageNet 2015 winning approach described in the paper Deep Residual Learning for Image Recognition, or
    2. DeepMind’s (Google) AI for Atari games described in the papers Human-level control through deep reinforcement learning, Deep Reinforcement Learning with Double Q-Learning and Playing Atari with Deep Reinforcement Learning
    3. Other types of Deep Learning, check out http://DeepLearning.University for inspiration
  4. Performance Optimization wrt Metal (GPU): Metal is a very new API (in particular for non-graphical GPGPU processing), and there are probably ways to improve our usage of it.
  5. Performance Optimization wrt algorithms (e.g. shader functions for convolution): see our paper for a roadmap.
  6. Importers: develop model importers (in Swift) for convolutional neural networks from other tools than Caffe, e.g. Torch, TensorFlow, Theano, Nervana Systems, DeepLearning4J or Pylearn. HDF5 is an interesting format.
  7. Training Support: our goal was to primarily support using already trained Deep Learning models (since in the long run people will probably not train their own DL models but rather pick them from a Deep Learning Model store or similar, see our paper for why), but it would still be great to train convolutional neural networks in DeepLearningKit itself.
  8. Image Handling Support: DeepLearningKit is missing basic conversion from e.g. UIImage to RGB (the example network supports 32x32x3 CIFAR RGB Image Format, but has no conversion from UIImage to it). Check out e.g. Drawing Images From Pixel Data – In Swift and Image Processing in iOS Part 1: Raw Bitmap Modification for inspiration.
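The conversion mentioned in item 8 could be sketched in plain Swift as follows, assuming interleaved RGBA bytes as input (what one would typically extract from a UIImage bitmap context) and a planar, [0, 1]-normalized float layout on the output side – the exact layout the example network expects is an assumption to verify:

```swift
// Convert interleaved RGBA bytes (e.g. UIImage bitmap data) into a planar
// R-then-G-then-B Float array normalized to [0, 1], as a 32x32x3
// CIFAR-style network input might expect.
func rgbaToPlanarRGB(_ rgba: [UInt8], width: Int, height: Int) -> [Float] {
    precondition(rgba.count == width * height * 4)
    let pixelCount = width * height
    var planar = [Float](repeating: 0, count: pixelCount * 3)
    for i in 0..<pixelCount {
        planar[i] = Float(rgba[4 * i]) / 255.0                       // R plane
        planar[pixelCount + i] = Float(rgba[4 * i + 1]) / 255.0      // G plane
        planar[2 * pixelCount + i] = Float(rgba[4 * i + 2]) / 255.0  // B plane
        // alpha (rgba[4 * i + 3]) is dropped
    }
    return planar
}
```

For a single red pixel, rgbaToPlanarRGB([255, 0, 0, 255], width: 1, height: 1) returns [1.0, 0.0, 0.0]. The references in item 8 cover the remaining step of getting the raw bytes out of a UIImage.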

6. Is DeepLearningKit production ready for my mission critical app?

Most likely not, but that shouldn't stop you from testing it out.

7. DeepLearningKit reminds me more of CUDA/GPU libraries such as Nvidia's cuDNN or Facebook's fbcunn than of larger tools such as Torch, TensorFlow and Caffe – is that right?

You’re right – DeepLearningKit can roughly be seen as an early “metalDNN” with Swift packaging for loading and running models. (It currently doesn't support fast Fourier transform-based convolution like Facebook's fbcunn.)

8. Who developed and open sourced DeepLearningKit?

DeepLearningKit was developed and open sourced by the company Memkite – check out the About page for details.