
3. Integrating Azure Custom Vision


In this tutorial, you will learn how to use Azure Custom Vision. You will capture a set of photos, associate them with a Tracked Object, upload them to the Custom Vision service, and start the training process. Then you will use the service to detect the Tracked Object by capturing photos from the webcam feed.

Objectives

  • Learn the basics about Azure Custom Vision
  • Learn how to set up the scene to use Custom Vision in this project
  • Learn how to integrate image upload, training, and detection

Understanding Azure Custom Vision

Azure Custom Vision is part of the Cognitive Services family and is used to train image classifiers. An image classifier is an AI service that uses a trained model to apply matching tags to images. Our application will use this classification capability to detect Tracked Objects.

Learn more about Azure Custom Vision.

Preparing Azure Custom Vision

Before you can begin, you need to create a Custom Vision project. The fastest way to do this is via the web portal.

Follow this quickstart tutorial to set up your account and project, up to the section Upload and tag images.

warning

To train a model you need at least 2 tags and 5 images per tag. For this application, you should create at least one tag with 5 images so that the training process later won't fail.
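For reference, the project, tag, and image setup that the quickstart walks you through in the portal can also be done against the Custom Vision Training REST API. The following is a minimal C# sketch of creating a tag and uploading one training image; the endpoint, training key, project ID, and tag GUID are placeholders, and this code is not part of the MRTK tutorial assets.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Sketch only: endpoint, training key, project id and tag id below are placeholders.
class CustomVisionTrainingSetupSketch
{
    const string Endpoint    = "https://<your-resource>.cognitiveservices.azure.com";
    const string TrainingKey = "<training-key>";
    const string ProjectId   = "<project-guid>";

    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Training-Key", TrainingKey);

        // Create a tag for the object you want to recognize.
        var tagResponse = await http.PostAsync(
            $"{Endpoint}/customvision/v3.3/Training/projects/{ProjectId}/tags?name=MyTrackedObject", null);
        Console.WriteLine(await tagResponse.Content.ReadAsStringAsync()); // response contains the new tag's id

        // Upload one training photo and associate it with that tag (repeat for at least 5 photos).
        var image = new ByteArrayContent(File.ReadAllBytes("photo1.jpg"));
        image.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        var uploadResponse = await http.PostAsync(
            $"{Endpoint}/customvision/v3.3/Training/projects/{ProjectId}/images?tagIds=<tag-guid>", image);
        Console.WriteLine(uploadResponse.StatusCode);
    }
}
```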

Preparing the scene

In the Project window, navigate to the Assets > MRTK.Tutorials.AzureCloudServices > Prefabs > Manager folder.

From there, drag the ObjectDetectionManager prefab into the scene hierarchy.

In the Hierarchy window, find the ObjectDetectionManager object and select it. The ObjectDetectionManager prefab contains the ObjectDetectionManager (script) component, and as you can see in the Inspector window, it depends on Azure settings and project settings.

Retrieving Azure API resource credentials

The credentials required for setting up the ObjectDetectionManager (script) can be retrieved from the Azure portal and the Custom Vision portal.

Retrieving Azure Settings credentials

Find the Custom Vision resource of type Cognitive Services that you created in the Preparing Azure Custom Vision section of this tutorial (select the Custom Vision resource name followed by -Prediction). There, click on Overview or Keys and Endpoint to retrieve the necessary credentials.

Retrieving Project Settings credentials

In the Custom Vision dashboard, open the project you created for this tutorial and click the gear icon in the top right corner of the page to open the settings page. In the Resources section on the right-hand side, you will find the necessary credentials.
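As an orientation, the hypothetical container below summarizes which credential comes from which portal and where it is used. The field names are purely illustrative and do not mirror the actual ObjectDetectionManager (script) inspector fields.

```csharp
// Hypothetical summary of the credentials gathered in the two sections above.
public class CustomVisionCredentials
{
    // Azure settings - from the Azure portal, Keys and Endpoint of the "-Prediction" resource:
    public string PredictionEndpoint; // e.g. https://<resource>-prediction.cognitiveservices.azure.com
    public string PredictionKey;      // sent as the "Prediction-Key" header when detecting objects

    // Project settings - from the Custom Vision portal, gear icon > Resources section:
    public string ProjectId;          // GUID of the Custom Vision project
    public string TrainingKey;        // sent as the "Training-Key" header when uploading images and training
}
```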

Once the ObjectDetectionManager (script) is set up correctly, find the SceneController object in your scene hierarchy and select it.

You will see that the Object Detection Manager field in the SceneController component is empty. Drag the ObjectDetectionManager from the hierarchy into that field and save the scene.

Take and upload images

Run the scene, click Set Object, and type in the name for one of the Tracked Objects you created in the previous lesson. Now click the Computer Vision button at the bottom of the Object card.

A new window will open where you have to take six photos to train the model for image detection. Click the Camera button and perform an AirTap while looking at the object you want to track; do this six times.

tip

To improve the model training, try to take each image from a different angle and under different lighting conditions.

Once you have enough images, click the Train button to start the model training process in the cloud. Activating the training will upload all images and then start the training; this can take a minute or more. A message inside the menu indicates the current progress, and once it indicates completion you can stop the application.
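At the REST level, starting a training run boils down to two Training API calls: kick off an iteration and poll its status until it completes. The sketch below illustrates this with placeholder credentials; it is not taken from the MRTK scripts.

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Sketch only: endpoint, training key and project id below are placeholders.
class TrainModelSketch
{
    const string Endpoint    = "https://<your-resource>.cognitiveservices.azure.com";
    const string TrainingKey = "<training-key>";
    const string ProjectId   = "<project-guid>";

    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Training-Key", TrainingKey);

        // Kick off training; the response describes the new iteration.
        var trainResponse = await http.PostAsync(
            $"{Endpoint}/customvision/v3.3/Training/projects/{ProjectId}/train", null);
        using var doc = JsonDocument.Parse(await trainResponse.Content.ReadAsStringAsync());
        string iterationId = doc.RootElement.GetProperty("id").GetString();

        // Poll until the iteration is no longer training (this can take a minute or more).
        string status = "Training";
        while (status == "Training")
        {
            await Task.Delay(TimeSpan.FromSeconds(5));
            var iteration = await http.GetStringAsync(
                $"{Endpoint}/customvision/v3.3/Training/projects/{ProjectId}/iterations/{iterationId}");
            status = JsonDocument.Parse(iteration).RootElement.GetProperty("status").GetString();
            Console.WriteLine($"Training status: {status}");
        }
    }
}
```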

tip

The ObjectDetectionManager (script) uploads the captured images directly to the Custom Vision service. Alternatively, the Custom Vision API also accepts URLs to the images; as an exercise, you can modify the ObjectDetectionManager (script) to upload the images to a blob storage instead.
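A minimal sketch of that URL-based variant, assuming the images were first uploaded to a blob container; the storage URLs, keys, and GUIDs below are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Sketch only: all URLs, keys and ids below are placeholders.
class UploadFromUrlsSketch
{
    const string Endpoint    = "https://<your-resource>.cognitiveservices.azure.com";
    const string TrainingKey = "<training-key>";
    const string ProjectId   = "<project-guid>";

    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Training-Key", TrainingKey);

        // Batch of image URLs, each associated with the Tracked Object's tag id.
        string body = @"{
          ""images"": [
            { ""url"": ""https://<your-storage>.blob.core.windows.net/photos/photo1.jpg"", ""tagIds"": [""<tag-guid>""] },
            { ""url"": ""https://<your-storage>.blob.core.windows.net/photos/photo2.jpg"", ""tagIds"": [""<tag-guid>""] }
          ]
        }";

        var response = await http.PostAsync(
            $"{Endpoint}/customvision/v3.3/Training/projects/{ProjectId}/images/urls",
            new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```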

Detect objects

Before you can detect objects, you have to change the API key in the ObjectDetectionManager (script) under Project Settings, which is currently assigned the Custom Vision key.

Find the Custom Vision resource in the Azure portal. There, click on Keys and Endpoint to retrieve the API key and replace the old API key under Project Settings.

You can now put the trained model to the test. Run the application and, from the main menu, click on Search Object and type the name of the Tracked Object in question. The Object card will appear; click on the Custom Vision button. The ObjectDetectionManager will start capturing images from the camera in the background, and the progress will be indicated on the menu. Point the camera at the object you used to train the model, and after a short while you will see that it detects the object.
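For completeness, this is roughly what a single classification request looks like at the REST level: a captured photo is posted to the Prediction endpoint, and the response lists matching tags with probabilities. The sketch assumes placeholder credentials and a published training iteration; it is not the ObjectDetectionManager implementation.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Sketch only: endpoint, prediction key, project id and iteration name are placeholders.
class DetectObjectSketch
{
    const string Endpoint      = "https://<your-resource>-prediction.cognitiveservices.azure.com";
    const string PredictionKey = "<prediction-key>";
    const string ProjectId     = "<project-guid>";
    const string IterationName = "<published-iteration-name>";

    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Prediction-Key", PredictionKey);

        var image = new ByteArrayContent(File.ReadAllBytes("capture.jpg"));
        image.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        // Classify the captured photo against the published iteration of the trained model.
        var response = await http.PostAsync(
            $"{Endpoint}/customvision/v3.0/Prediction/{ProjectId}/classify/iterations/{IterationName}/image",
            image);

        // The JSON response contains a "predictions" array with tagName/probability pairs.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```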

Congratulations!

In this tutorial, you learned how Azure Custom Vision can be used to train images and how to use the classification service to detect images that match the associated Tracked Object.

In the next tutorial, you will learn how to use Azure Spatial Anchors to link a Tracked Object with a location in the physical world, and how to display an arrow that will guide the user back to the Tracked Object's linked location.