Using Area Learning in Unity

In this tutorial we will explain how to use area learning features in Unity. This tutorial assumes you’re already familiar with using the Delta Camera prefab to integrate motion tracking into a Unity project. See our tutorial on using motion tracking if you haven’t done this before. This tutorial also assumes you’re familiar with area learning concepts.

Prerequisites

  • Existing project integrated with motion tracking using a recent version of the Tango SDK (Ancha or newer). If you don’t have one, you can complete the motion tracking tutorial.
  • Familiarity with Unity.

Within the Tango Manager prefab, there are a few settings relevant to area learning. We’ll go over them here, but you don’t need to do anything yet.

In the Tango Application script, the Auto-connect to Service option controls Tango's startup flow. It should not be selected when using area learning features: area learning requires application-specific settings during startup, which prevents us from using the Tango Unity SDK's built-in auto-connect option. Instead, we will write our own code to handle startup and initialization using the Tango APIs.

Enable Area Descriptions requests the area learning permission, which is required to load an area description or get the list of area descriptions on a device.

Learning Mode (which becomes available after you select Enable Area Descriptions) requests the area learning permission and enables learning mode. With learning mode enabled, the area learning system will create new area descriptions based on what it sees. You can also save the area description, as described below, to create or extend an area description file.
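These settings can also be configured from code instead of the Inspector by setting the corresponding public fields on TangoApplication before connecting. The field names below are a sketch based on recent versions of the Tango Unity SDK; verify them against your SDK version.

using Tango;
using UnityEngine;

public class AreaLearningSettings : MonoBehaviour
{
    public void Awake()
    {
        TangoApplication tangoApplication = FindObjectOfType<TangoApplication>();

        // Matches "Auto-connect to Service" in the Inspector; must be off
        // because we handle startup ourselves.
        tangoApplication.m_autoConnectToService = false;

        // Matches "Enable Area Descriptions" in the Inspector.
        tangoApplication.m_enableAreaDescriptions = true;

        // Matches "Learning Mode"; leave false when only loading
        // existing area descriptions.
        tangoApplication.m_areaDescriptionLearningMode = false;
    }
}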

Using Enable Area Descriptions mode

In this section, we’ll show you how to use previously created area descriptions in your application. Specifically, you’ll be able to:

  • Get a list of area descriptions
  • Load a specific area description
  • Set the Tango Delta Camera to use the area description
  • Show the user instructions for relocalization

List and load area descriptions

To load an area description file, you must specify the area description using TangoApplication.Startup(AreaDescription) when connecting to the Tango Service. A common way to do this is by showing a list of all area descriptions on the device for the user to choose from. However, we can only query the list of area descriptions after we have requested the area learning permission. Therefore, the code must manually control the TangoApplication startup process by explicitly calling TangoApplication.RequestPermissions() and TangoApplication.Startup().

First, in the Tango Manager prefab, make sure that Auto-connect to Service is cleared and Enable Area Descriptions is selected. This will allow us to implement our own connection code and add the area learning permission to our permissions request.

Next, we will add a new script to request permissions, get the list of area descriptions, and connect to a specified area description. For simplicity, we will use the most recent area description.

First, create a new, empty GameObject. Next, create a new C# script, name it “AreaLearningStartup”, and drag it onto the GameObject you created. Finally, open the script and copy/paste this code into it:

using System.Collections;
using UnityEngine;
using Tango;

public class AreaLearningStartup : MonoBehaviour, ITangoLifecycle
{
    private TangoApplication m_tangoApplication;

    public void Start()
    {
        m_tangoApplication = FindObjectOfType<TangoApplication>();
        if (m_tangoApplication != null)
        {
            m_tangoApplication.Register(this);
            m_tangoApplication.RequestPermissions();
        }
    }

    public void OnTangoPermissions(bool permissionsGranted)
    {
        if (permissionsGranted)
        {
            AreaDescription[] list = AreaDescription.GetList();
            AreaDescription mostRecent = null;
            AreaDescription.Metadata mostRecentMetadata = null;
            if (list.Length > 0)
            {
                // Find and load the most recent Area Description
                mostRecent = list[0];
                mostRecentMetadata = mostRecent.GetMetadata();
                foreach (AreaDescription areaDescription in list)
                {
                    AreaDescription.Metadata metadata = areaDescription.GetMetadata();
                    if (metadata.m_dateTime > mostRecentMetadata.m_dateTime)
                    {
                        mostRecent = areaDescription;
                        mostRecentMetadata = metadata;
                    }
                }

                m_tangoApplication.Startup(mostRecent);
            }
            else
            {
                // No Area Descriptions available.
                Debug.Log("No area descriptions available.");
            }
        }
    }

    public void OnTangoServiceConnected()
    {
    }

    public void OnTangoServiceDisconnected()
    {
    }
}

For your application, you will need to implement more functionality in the OnTangoPermissions function. In this sample, we always pick the most recent area description and only write a log message if there are no area descriptions on the device. In your own application, implement your own logic for choosing an area description and for handling the case when none is available.
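For example, if your application saves area descriptions with meaningful names, you could select one by name instead of by date. The helper below is a hypothetical sketch using the same AreaDescription APIs as the startup script above; FindAreaDescriptionByName is a name we made up for illustration.

// Hypothetical helper: returns the first area description whose saved
// name matches, or null if no match is found.
private AreaDescription FindAreaDescriptionByName(string name)
{
    AreaDescription[] list = AreaDescription.GetList();
    foreach (AreaDescription areaDescription in list)
    {
        AreaDescription.Metadata metadata = areaDescription.GetMetadata();
        if (metadata.m_name == name)
        {
            return areaDescription;
        }
    }

    return null;
}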

Enable localized motion tracking in the Delta Controller

By default, the Tango Delta Camera does not use area descriptions. Enable the Use Area Description Pose option on the Tango Delta Camera prefab to use the localized coordinates from the area description.
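You can also toggle this setting from code. This sketch assumes the prefab's TangoDeltaPoseController exposes the checkbox as the public field m_useAreaDescriptionPose, as in recent SDK versions; verify the field name against yours.

// Enable area-description-based poses on the Delta Camera at runtime.
TangoDeltaPoseController poseController = FindObjectOfType<TangoDeltaPoseController>();
poseController.m_useAreaDescriptionPose = true;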

Improve the UX before localization

Before localization happens, the application gets no motion updates. To improve the user experience, we’ll show instructions to walk around until we have localized. To keep things simple, we’ll use the same image and script used in the area learning example.

If you don’t already have one, add a UI Canvas by right-clicking in the Hierarchy panel and selecting UI > Canvas.

Next, add a UI Image to the Canvas. Right click on the Canvas in your Hierarchy panel and select UI > Image.

Next, define the Source Image for the Image we just added. If you’re using the Tango SDK for Unity 5, it can be found under Assets > TangoSDK > Examples > Common > Textures > relocalize_screen in your Project panel. Otherwise, you can get it from our Unity examples. Drag the file from your Project panel onto the Source Image field of the Image in your Hierarchy, then click the Set Native Size button so it displays correctly.

Now we’ll add a script to listen for localization, and show the image as appropriate. In the Tango SDK for Unity 5 it is located at Assets > TangoSDK > Examples > AreaLearning > Scripts > RelocalizingOverlay, or get it from our Unity examples. Drag the script onto the Tango Manager in your Hierarchy to add it.

Finally, we’ll associate the image to the script. With the Tango Manager selected, drag the Image containing the relocalize_screen image onto the Relocalization Overlay field of the Relocalization Overlay script.

You can now Build & Run your application. If you don’t have an area description on your device or the most recent area description isn’t of the area you’re currently in, you’ll need to create one to localize against for the localization screen to go away. You can use Tango Explorer or any other application that can save area descriptions to do this. We’ll also describe saving area descriptions in your own application later in this tutorial.

Congratulations! If you’ve gotten to this point, your application should be able to load the most recent area description, show a screen with instructions until relocalization occurs, and use the area description’s coordinate frame while handling camera movement.

Using Area Learning mode

In this section, we’ll describe how to use Area Learning mode to create new area descriptions or extend existing ones. It builds on the “Using Enable Area Descriptions mode” section above, so make sure to go through that section first.

Remember that learning mode requires much more computational power than loading an area description. You should restrict learning mode to a separate setup mode in your application and use Enable Area Descriptions mode during your main experience.

Handling startup in Area Learning mode

First, in the Tango Manager prefab, make sure that Auto-connect to Service is cleared and Learning Mode is selected.

In Area Learning mode, we can use the same startup script as Enable Area Descriptions mode with one key difference in the behavior of m_tangoApplication.Startup(). In Area Learning mode, calling m_tangoApplication.Startup(null) is valid, and triggers the creation of a new area description.

In the AreaLearningStartup script, replace the line:

Debug.Log("No area descriptions available.");

with:

m_tangoApplication.Startup(null);

Now, our application will load the most recent area description if present, or create a new area description if none exist on the device.

Save an area description

In Area Learning mode, you can save the area description that was learned by calling AreaDescription.SaveCurrent() in a place that makes sense for your own application. Saving an area description can take a few minutes, so be sure to save in a background thread.
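A minimal sketch of saving on a background thread, assuming you call it from a “Save” button after the user has walked the space. We assume AreaDescription.SaveCurrent() returns the saved area description (null on failure); check the return value in your SDK version.

using System.Threading;

private Thread m_saveThread;

public void SaveCurrentAreaDescription()
{
    // SaveCurrent() can block for minutes, so run it off the main thread.
    m_saveThread = new Thread(() =>
    {
        AreaDescription savedAreaDescription = AreaDescription.SaveCurrent();
        if (savedAreaDescription == null)
        {
            Debug.Log("Saving the area description failed.");
        }
    });
    m_saveThread.Start();
}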

While the save is running, Tango Events will be sent that describe the save progress. You can listen for the Tango Events by implementing the ITangoEvent interface and listening to AreaDescriptionSaveProgress events. In the snippet below, we update a GUIText m_savingText with the save progress as a percentage. You’ll need to create your own implementation to match your UI.

public void OnTangoEventAvailableEventHandler(Tango.TangoEvent tangoEvent)
{
    if (tangoEvent.type == TangoEnums.TangoEventType.TANGO_EVENT_AREA_LEARNING
        && tangoEvent.event_key == "AreaDescriptionSaveProgress")
    {
        m_savingText.text = "Saving. " + (float.Parse(tangoEvent.event_value) * 100) + "%";
    }
}

You can also see saving progress implemented in our area learning sample.

Handling drift corrections

When a relocalization occurs, small errors that have been accumulated are corrected, including errors in the past. We can use this to correct not only the device’s location, but also objects that were placed relative to the device, even back in time.

Our augmented reality sample does this. The sample lets you drop virtual markers into the real world. Every time a marker is dropped, we also store the timestamp when we placed the marker and the transformation from the device to the placed marker. When we relocalize, we go through the list of objects, ask for the (now corrected) device pose for each timestamp, and update the object’s location using the new device pose and stored transformation.
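At placement time, that bookkeeping might look like the sketch below. The member names m_timestamp and m_deviceTMarker mirror the ARMarker fields used in the correction snippet that follows; newMarkerObject, poseData, and uwTDevice (the device pose as a Unity-world matrix at placement time) are hypothetical local names for illustration.

// Sketch: when the user drops a marker, record the timestamp of the
// current device pose and the device-to-marker transform so the marker
// can be corrected after a relocalization.
ARMarker marker = newMarkerObject.GetComponent<ARMarker>();
marker.m_timestamp = (float)poseData.timestamp;

// uwTMarker: where the marker was placed, in Unity world coordinates.
Matrix4x4 uwTMarker = Matrix4x4.TRS(newMarkerObject.transform.position,
                                    newMarkerObject.transform.rotation,
                                    Vector3.one);

// Store the marker's pose relative to the device, so it can be
// re-derived from a corrected device pose later.
marker.m_deviceTMarker = Matrix4x4.Inverse(uwTDevice) * uwTMarker;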

The code snippet below from our augmented reality sample shows one implementation. Note that it relies on our ARMarker class’s member variables to keep the timestamp and pose.

public void OnTangoPoseAvailable(Tango.TangoPoseData poseData)
{
    if (poseData.framePair.baseFrame ==
        TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION &&
        poseData.framePair.targetFrame ==
        TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE &&
        poseData.status_code == TangoEnums.TangoPoseStatusType.TANGO_POSE_VALID)
    {
        // Adjust mark's position each time we have a loop closure detected.
        foreach (GameObject obj in m_markerList)
        {
            ARMarker tempMarker = obj.GetComponent<ARMarker>();
            if (tempMarker.m_timestamp != -1.0f)
            {
                TangoCoordinateFramePair pair;
                TangoPoseData relocalizedPose = new TangoPoseData();

                pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
                pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
                PoseProvider.GetPoseAtTime(relocalizedPose, tempMarker.m_timestamp, pair);
                Vector3 pDevice = new Vector3((float)relocalizedPose.translation[0],
                                              (float)relocalizedPose.translation[1],
                                              (float)relocalizedPose.translation[2]);
                Quaternion qDevice = new Quaternion((float)relocalizedPose.orientation[0],
                                                    (float)relocalizedPose.orientation[1],
                                                    (float)relocalizedPose.orientation[2],
                                                    (float)relocalizedPose.orientation[3]);

                Matrix4x4 uwTDevice = m_uwTss * Matrix4x4.TRS(pDevice, qDevice, Vector3.one) * m_dTuc;
                Matrix4x4 uwTMarker = uwTDevice * tempMarker.m_deviceTMarker;

                obj.transform.position = uwTMarker.GetColumn(3);
                obj.transform.rotation = Quaternion.LookRotation(uwTMarker.GetColumn(2), uwTMarker.GetColumn(1));
            }
        }
    }
}

tango — Asked on November 2, 2016, in Virtual Reality (VR).