How to build an image recognition API server with Labellio

June 30th, 2015 06:06

Once you have created your image recognition model on Labellio, you will probably want to try it in your app.  In this entry, I would like to share how to build an image classification API server around your model.


Expected use case

This example assumes that you want to add a new feature to your web application or smartphone app using the recognition model you built on Labellio.  Since your app already exists, it makes a lot of sense to implement the feature as a separate service in a microservice architecture, from a maintainability and scalability perspective.

Let’s say we want to implement a very basic Web API server.  It will serve “http://[your domain]/classify” as an endpoint that accepts images via HTTP POST, and responds with the most probable label and its score in JSON format.

The code here is very minimalistic for explanation purposes.  In real production development, you should consider many other things, such as image format validation and error handling, which we don’t cover here.


Runtime environment

It is best to use a GPU instance on AWS to implement this kind of server very quickly.  We showed how to set one up in the previous post, which you can find below.  Image classification is a relatively heavy workload, but at the same time it is easy to scale out by adding machines if you use AWS.

We use the same Alpaca AMI, ami-9fe512db, in the N. California region as in the previous post.

We use Python 2.7, since it is the easiest choice: both the Caffe interface and the Labellio library are implemented in it.


Sample code

Here you can find the sample code.

We save image files in the working directory specified by UPLOAD_FOLDER.  The Caffe model file downloaded from Labellio is stored in the MODEL_FOLDER directory.  The supported image file formats are JPEG and PNG; we only check the file extension of the file name.  The program receives images sent to “/classify” via POST, temporarily stores them in UPLOAD_FOLDER, then gets the label (answer) using labellio_cli, and returns the result in JSON format.
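The full sample code is linked above rather than reproduced here, but its structure can be sketched roughly as follows.  The Flask routing and the extension check match the description above; the actual labellio_cli call is only indicated in a comment, since that library's exact interface is not shown in this post, so treat this as an illustrative skeleton, not the real sample.

```python
import os

from flask import Flask, jsonify, request

UPLOAD_FOLDER = "tmp"    # working directory for uploaded images
MODEL_FOLDER = "model"   # untarred Caffe model downloaded from Labellio
ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png"}  # checked by extension only

app = Flask(__name__)


def allowed_file(filename):
    """Return True if the file name has a supported image extension."""
    ext = os.path.splitext(filename)[1].lower().lstrip(".")
    return ext in ALLOWED_EXTENSIONS


@app.route("/classify", methods=["POST"])
def classify():
    image = request.files["image"]
    if not allowed_file(image.filename):
        return jsonify({"error": "unsupported file type"}), 400
    path = os.path.join(UPLOAD_FOLDER, image.filename)
    image.save(path)  # store the upload temporarily, as described above
    # The real sample passes MODEL_FOLDER and `path` to labellio_cli here
    # to obtain the label and scores; that call is omitted because the
    # library's interface is not reproduced in this post.
    result = {}  # placeholder for the labellio_cli classification output
    return jsonify(result)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```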



1. Login to the GPU instance created in AWS using SSH.

2. Install Flask.

$ sudo pip install flask

3. Run the following command. This enables labellio_cli to be used.

$ source /opt/caffe/caffe.bashrc

4. Untar your Caffe model and rename the resulting directory to `model`.  Make a `tmp` directory to put images in.

$ tar xzf [model filename]
$ mv [untarred model directory] model
$ mkdir tmp

5. Save the example code above as ``, and run the following command.

$ python



This runs a Flask server listening on TCP port 5000.  To test it, save the image file you want to classify locally and point your HTTP client (such as curl) at port 5000 on the server.

$ curl -F "image=@test.png"  http://[your domain]:5000/classify

Make sure your Security Group opens port 5000 so that incoming connections can reach the server.

You will get the classification result, including scores and class label names, in JSON format.

{"0": {"score": [0.7100653052330017, 0.07795180380344391, 0.21198289096355438], "label": "alpaca"}, "label_name": {"sheep": 2, "gorilla": 1, "alpaca": 0}}

This result shows that `alpaca` is the most probable class label among sheep, gorilla and alpaca.
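Picking the winning label out of this response is straightforward on the client side.  A small sketch, using the exact JSON shown above (note that `label_name` maps each class label to its index in the `score` list):

```python
import json

# The JSON response shown above.
response = json.loads(
    '{"0": {"score": [0.7100653052330017, 0.07795180380344391, '
    '0.21198289096355438], "label": "alpaca"}, '
    '"label_name": {"sheep": 2, "gorilla": 1, "alpaca": 0}}'
)

scores = response["0"]["score"]
# Invert label_name so we can look up a label by its score index.
index_to_label = {idx: name for name, idx in response["label_name"].items()}
best_index = max(range(len(scores)), key=lambda i: scores[i])

print(index_to_label[best_index], scores[best_index])  # alpaca 0.710...
```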

Now you are ready!  Prepare your client app to perform the same HTTP POST and get the classification result.  In a real use case, you can also consider tweaks such as ignoring classification results whose scores fall below a threshold, depending on your application’s requirements.