How to deploy your object detection model in an App Inventor app in minutes?

Let’s imagine you are developing an app that detects household items (chairs, tables, shoes, etc.) in order to help people with visual impairments move around.

So you have just finished developing an object detection model with YOLOv8, training it in a Jupyter notebook running on Google Colab with a dataset you prepared and downloaded from Roboflow.

But how do you now deploy this trained model into an App Inventor app?

A simple solution is the Hosted Inference API provided by Roboflow: a REST API through which you can upload an image and retrieve predictions from your model.

So once you have finished training your YOLOv8 model, you will have a set of trained weights ready for use. These weights are stored in the file /runs/detect/train/weights/best.pt of your project. You can now upload your model weights to Roboflow Deploy to run your trained model on Roboflow’s infrastructure.

The .deploy() function in the Roboflow pip package supports uploading YOLOv8 weights. To upload your model weights, add the following code to your Jupyter notebook:

project.version(dataset.version).deploy(model_type="yolov8", model_path=f"{HOME}/runs/detect/train/")
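
For this call to work, project and dataset must already be defined earlier in the notebook. A minimal sketch of that setup, with placeholder API key and workspace/project identifiers:

import os
from roboflow import Roboflow

HOME = os.getcwd()  # the notebook's working directory, used in model_path above

# Connect to Roboflow with your private API key (placeholder value)
rf = Roboflow(api_key="YOUR_API_KEY")

# Replace with your own workspace and project identifiers
project = rf.workspace("your-workspace").project("objetos-dentro-de-casa")

# Download the dataset version used for training (version number is an example)
dataset = project.version(1).download("yolov8")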

Now check out your Roboflow project, where the model is being deployed:

There you will also find all the information you need to program the API call in your App Inventor app:

So let’s now call the API from an App Inventor app.
For this we use the Web component:

We then make the API call by setting the respective arguments as indicated in your Roboflow project:

  • Set Web.RequestHeaders to a dictionary with key "Content-Type" and value "application/x-www-form-urlencoded"
  • Set Web.Url to a string joining "https://detect.roboflow.com/objetos-dentro-de-casa/1" + "?api_key=" + (the API key, as obfuscated text) + "&confidence=40&overlap=30&format=json"

Then call Web.PostText, passing as parameter the picture taken, encoded as base64 (using the ImageToBase64 extension).
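
To illustrate what these blocks do on the wire, here is a minimal Python sketch of the same request against the Hosted Inference API (the file name and API key are placeholders; project id, version, and query parameters are the ones used above):

import base64
import requests

# Encode the picture as base64, as the ImageToBase64 extension does in the app
with open("picture.jpg", "rb") as f:
    img_base64 = base64.b64encode(f.read()).decode("utf-8")

url = ("https://detect.roboflow.com/objetos-dentro-de-casa/1"
       "?api_key=YOUR_API_KEY&confidence=40&overlap=30&format=json")

# POST the base64 string with the same header the Web component sets
response = requests.post(
    url,
    data=img_base64,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(response.json())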

Through the Web.GotText event we then receive the inference response from the model running on the Roboflow infrastructure. The hosted API inference route returns a JSON object containing an array of predictions. Each prediction has the following properties:

  • x = the horizontal center point of the detected object
  • y = the vertical center point of the detected object
  • width = the width of the bounding box
  • height = the height of the bounding box
  • class = the class label of the detected object
  • confidence = the model’s confidence that the detected object has the correct label and position coordinates

Here is an example response object from the REST API received as JSON:

[
  {"class": "Sapato", "confidence": 0.4265051782131195, "height": 88, "width": 93, "x": 431.5, "y": 239},
  {"class": "Armario", "confidence": 0.4174592196941376, "height": 212, "width": 302, "x": 153, "y": 237}
]

We can then use this JSON object, e.g., to display a list of detected objects.
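
As a sketch of this step (in the app itself this would be done with the Web component’s JsonTextDecode block and list blocks; here in Python for illustration, with a shortened example response):

import json

# The response text as received in Web.GotText (shortened example from above)
response_text = '[{"class":"Sapato","confidence":0.43,"height":88,"width":93,"x":431.5,"y":239}]'

predictions = json.loads(response_text)
for p in predictions:
    # e.g. "Sapato (43%)"
    print(f'{p["class"]} ({p["confidence"]:.0%})')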

Using a Canvas to show the image, we can also draw bounding boxes on top of it after transforming the received YOLO coordinates into canvas coordinates, using a different color for each object class.
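
The transformation itself is straightforward: the API returns the box center (x, y) plus width and height in the pixel coordinates of the uploaded image, while drawing on the canvas needs corner coordinates scaled to the canvas size. A minimal sketch of the arithmetic (function name, variable names, and the image/canvas dimensions below are illustrative):

def yolo_to_canvas(pred, img_w, img_h, canvas_w, canvas_h):
    # Scale factors from image pixels to canvas pixels
    sx = canvas_w / img_w
    sy = canvas_h / img_h
    # Convert center/size to top-left and bottom-right corners, then scale
    left = (pred["x"] - pred["width"] / 2) * sx
    top = (pred["y"] - pred["height"] / 2) * sy
    right = (pred["x"] + pred["width"] / 2) * sx
    bottom = (pred["y"] + pred["height"] / 2) * sy
    return left, top, right, bottom

# Example: the "Sapato" box from above, assuming a 640x480 photo shown in a 320x240 canvas
print(yolo_to_canvas({"x": 431.5, "y": 239, "width": 93, "height": 88}, 640, 480, 320, 240))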

Example .apk file
Example .aia file