Easily train and deploy neural networks with TensorFlow Lite Model Maker
We have seen a lot of movement in the world of AI recently because the tools have finally caught up with our ambitions. Until relatively recently, if you wanted to train a neural network, you had to build the network yourself. Now, TensorFlow and many of its competitors relieve us of the heavy lifting. We create a dataset, such as a collection of labeled images, and then use it to train any of the neural network architectures that TensorFlow provides.
But the process is hardly as simple as plug and play. Training a model and deploying it to a mobile device takes a tremendous amount of skill. Model Maker is Google’s attempt to change that, and it comes in two distinct parts. The first simplifies the training process. The second is an Android Studio plug-in that uses code generation to deploy the trained model on a mobile device.
Getting started with Model Maker
The Android Studio Model Maker plug-in is currently only available in the early betas of Android Studio 4.x. So, at the time of writing, you’ll have to download the canary version of Studio.
The marginally bad news is that, again at the time of writing, only Image and Text Classification are supported; so far, there's no object detection in the plug-in. You can still do object detection the old way; see Week 2 of the following course.
Creating the tflite file
Borrowing heavily from the Model Maker tutorial, we can create and download our model.tflite file. The model we're using recognizes five types of flowers: daisies, dandelions, roses, sunflowers, and tulips. All the work is done in Google's Colab.
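The Colab notebook boils down to a few lines of Python. Here's a minimal sketch following the Model Maker image classification tutorial; the flower_photos folder and the 90/10 split are that tutorial's conventions, not anything specific to this project, and exact import paths have shifted between Model Maker releases:

```python
# Sketch of the Colab training steps, per the Model Maker tutorial
from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader

# flower_photos/ holds one subfolder of images per label (daisy, rose, ...)
data = DataLoader.from_folder('flower_photos/')
train_data, test_data = data.split(0.9)

# Train an image classifier with the default backbone
model = image_classifier.create(train_data)

# Check accuracy, then export model.tflite (metadata included)
loss, accuracy = model.evaluate(test_data)
model.export(export_dir='.')
```

The exported model.tflite is the file we'll import into Android Studio below.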
The example we’re using already has metadata that the Android Studio plug-in uses to generate its code. If you want to create your own metadata for your specific model, then you’ll need to follow the instructions available here: https://www.tensorflow.org/lite/guide/codegen.
Importing your tflite file with the Android Studio plug-in
To import your file, create a new 'Empty Activity' Android project, which you can call Model Maker, and go to New->Other->TensorFlow Lite Model.
Navigate to the point where you downloaded the model.tflite file.
When the model has been successfully imported, it will display the metadata as follows, showing, among other things, the required inputs and outputs.
You will also notice a few other changes. You have a new ml folder, as well as new TensorFlow dependencies. Changes have also been made to the build.gradle file.
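For reference, the import typically adds something along these lines to the module's build.gradle; treat the artifact versions as placeholders, since they vary by Studio release:

```groovy
android {
    buildFeatures {
        // Enables the ML Model Binding code generation
        mlModelBinding true
    }
}

dependencies {
    // Added by the TensorFlow Lite model import
    implementation 'org.tensorflow:tensorflow-lite-support:0.1.0'
    implementation 'org.tensorflow:tensorflow-lite-metadata:0.1.0'
}
```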
You might not notice it, but when you build the project, a Model.java file is generated at app/build/generated/ml_source_out/debug/out/com/riis/modelmaker/ml/Model.java. We'll use this class in the next section to interact with our tflite file.
Putting Image Classification into focus
In the following example, we take a random sunflower image from the web and run it through our model. First, load the sunflower image into a bitmap. Then resize it to 224×224 pixels so our model can consume it. Next, create an instance of the model, process the image, and finally read the outputs.
var tfImage = TensorImage(DataType.FLOAT32)
tfImage.load(bitmap) // the bitmap decoded from the sunflower image

val imageProcessor = ImageProcessor.Builder()
    .add(ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
    .build()
tfImage = imageProcessor.process(tfImage)

val model = Model.newInstance(this@MainActivity)
val outputs = model.process(tfImage)
val categories = outputs.probabilityAsCategoryList
model.close()
Run the app in debug mode, and you can see the model is more than 90% confident our image is a sunflower.
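The category list the model returns is just label/score pairs, so surfacing the top prediction is a one-liner. Here's a minimal, Android-free sketch; the Category data class below is a stand-in I've defined for the TFLite support library's org.tensorflow.lite.support.label.Category, so this runs anywhere Kotlin does:

```kotlin
// Stand-in for org.tensorflow.lite.support.label.Category
data class Category(val label: String, val score: Float)

// Return the category the model is most confident about, or null if empty
fun topCategory(categories: List<Category>): Category? =
    categories.maxByOrNull { it.score }

fun main() {
    val results = listOf(
        Category("daisy", 0.02f),
        Category("dandelion", 0.03f),
        Category("rose", 0.01f),
        Category("sunflower", 0.91f),
        Category("tulip", 0.03f)
    )
    val top = topCategory(results)
    println("${top?.label}: ${top?.score}")  // sunflower: 0.91
}
```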
In conclusion, the game has changed.
Google’s new Model Maker is a game changer for Android developers. It dramatically simplifies model training and deployment on a mobile device. In fact, deploying a model in an Android app can now be accomplished in a half dozen lines of code. Yes, the documentation is scarce and remains somewhat confusing. However, we’re still in the early beta stages and that will quickly change for the better.