Want to contribute to DeepLearningKit and wonder how?

  1. Fork the repo(s), play with it/them, and provide feedback or fixes.
  2. Create apps that use DeepLearningKit (disclaimer: still very early version) and tell us about them.
  3. Try running (and perhaps adapting) different types of deep neural networks with DeepLearningKit, e.g.
    1. Microsoft Research’s ImageNet 2015 winning approach described in the paper Deep Residual Learning for Image Recognition, or
    2. DeepMind’s (Google) AI for Atari games described in the papers Human-level control through deep reinforcement learning, Deep Reinforcement Learning with Double Q-Learning and Playing Atari with Deep Reinforcement Learning
    3. Other types of Deep Learning – check out http://DeepLearning.University for inspiration
  4. Performance Optimization wrt Metal (GPU): Metal is a very new API (in particular for GPGPU, i.e. non-graphical processing), and there are probably ways to improve how we use it.
  5. Performance Optimization wrt algorithms (e.g. shader functions for convolution): see our paper for a roadmap.
  6. Importers: develop model importers (in Swift) for convolutional neural networks from tools other than Caffe, e.g. Torch, TensorFlow, Theano, Nervana Systems, DeepLearning4J or Pylearn. HDF5 is an interesting format.
  7. Training Support: our primary goal was to support already trained Deep Learning models (in the long run people will probably not train their own DL models but rather pick them from a Deep Learning model store or similar; see our paper for why), but it would still be great to be able to train convolutional neural networks in DeepLearningKit itself.
  8. Image Handling Support: DeepLearningKit lacks basic conversion from e.g. UIImage to raw RGB (the example network expects the 32x32x3 CIFAR RGB image format, but there is no conversion from UIImage to it). Check out e.g. Drawing Images From Pixel Data – In Swift and Image Processing in iOS Part 1: Raw Bitmap Modification for inspiration.
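
One of the simpler Metal tuning knobs mentioned under item 4 is threadgroup sizing. Below is a minimal sketch of dispatching a compute kernel with the threadgroup width aligned to the pipeline's reported SIMD width; the kernel name `convolution` and the `dataWidth`/`dataHeight` parameters are hypothetical, not actual DeepLearningKit identifiers:

```swift
import Metal

// Sketch: size compute threadgroups from threadExecutionWidth so no
// execution lanes are wasted. The kernel name "convolution" is hypothetical.
func dispatchConvolution(device: MTLDevice, dataWidth: Int, dataHeight: Int) {
    guard let library = device.makeDefaultLibrary(),
          let function = library.makeFunction(name: "convolution"),
          let pipeline = try? device.makeComputePipelineState(function: function),
          let queue = device.makeCommandQueue(),
          let commandBuffer = queue.makeCommandBuffer(),
          let encoder = commandBuffer.makeComputeCommandEncoder() else { return }

    encoder.setComputePipelineState(pipeline)
    // Buffers/textures for input, weights and output would be bound here.

    // Align the threadgroup width to the SIMD width the pipeline reports.
    let w = pipeline.threadExecutionWidth
    let h = pipeline.maxTotalThreadsPerThreadgroup / w
    let threadsPerGroup = MTLSize(width: w, height: h, depth: 1)
    let groups = MTLSize(width: (dataWidth + w - 1) / w,
                         height: (dataHeight + h - 1) / h,
                         depth: 1)
    encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
    encoder.endEncoding()
    commandBuffer.commit()
}
```

Profiling with Instruments' Metal tools would show whether this actually helps for a given shader.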
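
For item 8, the missing UIImage-to-RGB conversion could look roughly like the following sketch. The helper `rgbFloats(from:)` is hypothetical (not part of DeepLearningKit), and it produces interleaved RGB normalized to [0, 1]; the channel ordering and normalization the example network actually expects (e.g. planar CIFAR layout, mean subtraction) should be checked against the model:

```swift
import UIKit

// Hypothetical helper: scale a UIImage to 32x32 and extract a flat
// [Float] of interleaved RGB values in [0, 1].
func rgbFloats(from image: UIImage, width: Int = 32, height: Int = 32) -> [Float]? {
    let bytesPerPixel = 4
    var raw = [UInt8](repeating: 0, count: width * height * bytesPerPixel)
    guard let cgImage = image.cgImage,
          let context = CGContext(data: &raw,
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * bytesPerPixel,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    // Drawing into the 32x32 bitmap context rescales the image.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    // Drop the alpha channel and normalize each byte to [0, 1].
    var floats = [Float]()
    floats.reserveCapacity(width * height * 3)
    for pixel in stride(from: 0, to: raw.count, by: bytesPerPixel) {
        floats.append(Float(raw[pixel])     / 255.0) // R
        floats.append(Float(raw[pixel + 1]) / 255.0) // G
        floats.append(Float(raw[pixel + 2]) / 255.0) // B
    }
    return floats
}
```
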