Hello everyone.

The Mixed Reality Forums here are no longer being used or maintained.

There are a few other places we would like to direct you to for support, both from Microsoft and from the community.

The first way we want to connect with you is through our Mixed Reality Developer Program, which you can sign up for at https://aka.ms/IWantMR.

For technical questions, please use Stack Overflow, and tag your questions using either hololens or windows-mixed-reality.

If you want to join in discussions, please do so in the HoloDevelopers Slack, which you can join by going to https://aka.ms/holodevelopers, or in our Microsoft Tech Communities forums at https://techcommunity.microsoft.com/t5/mixed-reality/ct-p/MicrosoftMixedReality.

And always feel free to hit us up on Twitter @MxdRealityDev.

Camera View discrepancy?

I have an app where I use spatial mapping and raycasting to place an annotation (a GameObject) onto the surface of the spatial map. This is done by taking the view from the video camera on the HoloLens and clicking on that view in a separate app on a PC. For some reason, the annotations show up in the HoloLens slightly to the left of where they should be placed. Is there a reason for this, and how would I calibrate it to be on target?

Best Answer

Answers

  • Can you provide a few more specifics about the code?
    Other than that, did you set your focus point?

    Dwight Goins
    CAO & Founder| Independent Architect | Trainer and Consultant | Sr. Enterprise Architect
    MVP | MCT | MCSD | MCPD | SharePoint TS | MS Virtual TS | Windows 8 App Store Developer | Linux Gentoo Geek | Raspberry Pi Owner | Micro .Net Developer | Kinect For Windows Device Developer
    http://dgoins.wordpress.com
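
    For what it's worth, here is a minimal sketch of setting the focus point from a Unity script. This assumes the Unity 5.x-era UnityEngine.VR.WSA.HolographicSettings API and a hypothetical focusTarget object (e.g. the most recently placed annotation); treat it as a sketch, not drop-in code.

    using UnityEngine;
    using UnityEngine.VR.WSA; // HolographicSettings (Unity 5.x namespace; later moved to UnityEngine.XR.WSA)

    public class FocusPointSetter : MonoBehaviour
    {
        // Hypothetical: the hologram the user is most likely looking at.
        public GameObject focusTarget;

        void LateUpdate()
        {
            if (focusTarget == null) return;

            // Tell the OS which plane to stabilize: it passes through the
            // target and faces the user. This must be set every frame.
            Vector3 position = focusTarget.transform.position;
            Vector3 normal = -Camera.main.transform.forward;
            HolographicSettings.SetFocusPointForFrame(position, normal);
        }
    }

    Without a focus point, the system's image stabilization can pick a plane at the wrong depth, which tends to show up as holograms appearing slightly offset from where they were placed.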

  • edited September 2016

    @Dwight_Goins_EE_MVP OK, here is the client-side code for creating the raycast and sending it to the HoloLens:

    if (Input.GetMouseButtonDown(0) && writing.Status == AsyncStatus.Completed)
    {
        Ray cast = Camera.main.ScreenPointToRay(Input.mousePosition);
        Vector3 origin = cast.origin;
        Vector3 dir = cast.direction;

        // RGB shape
        //write.WriteBytes(colors[currentColor]);
        write.WriteBytes(new byte[] { 255, 0, 0 });  // write red annotation
        currentColor = (currentColor + 1) % colors.Length;
        write.WriteByte(0);

        write.WriteSingle(origin.x);
        write.WriteSingle(origin.y);
        write.WriteSingle(origin.z);
        write.WriteSingle(dir.x);
        write.WriteSingle(dir.y);
        write.WriteSingle(dir.z);

        writing = write.StoreAsync();
        sending = false;
    }
    

    Then here is the code that reads that info on the HoloLens, performs the raycast, and passes the result back to the client:

    float x = read.ReadSingle();
    float y = read.ReadSingle();
    float z = read.ReadSingle();

    float xd = read.ReadSingle();
    float yd = read.ReadSingle();
    float zd = read.ReadSingle();

    Ray cast = new Ray(new Vector3(x, y, z), new Vector3(xd, yd, zd));
    RaycastHit hit = new RaycastHit();
    bool ifHit = Physics.Raycast(ray: cast, hitInfo: out hit, maxDistance: 30f, layerMask: SpatialMappingManager.Instance.LayerMask);
    rotation = rb.rotation;
    if (ifHit)
    {
        Debug.Log("Angle: " + Vector3.Angle(rb.up, hit.normal));
        if (Mathf.Abs(Vector3.Angle(rb.up, hit.normal)) > 90)
        {
            rotation = Quaternion.Euler(180, 0, 0) * rotation;
            Debug.Log("Flipped");
        }
        point = hit.point;
    }
    else
    {
        point = cast.GetPoint(2.5f);
    }

    PlaceAnnotation(new Color(255 / 255f, 0 / 255f, 0 / 255f), shape, point, rotation);

    reading = read.LoadAsync(CLIENT_MESSAGE_SIZE);
    

    The following block, which runs inside an if/else, writes the placement back to the client:

    write.WriteByte(1);
    write.WriteBytes(ab);
    write.WriteByte(shape);

    write.WriteSingle(point.x);
    write.WriteSingle(point.y);
    write.WriteSingle(point.z);

    Vector3 eulerRot = rotation.eulerAngles;
    write.WriteSingle(eulerRot.x);
    write.WriteSingle(eulerRot.y);
    write.WriteSingle(eulerRot.z);
    looper = null;
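
    For reference, the client would read this reply in the same order it was written. This is a sketch only, assuming the same DataReader (read) naming as the send code; reading the ab payload is skipped here because its length isn't shown in the posted code.

    byte messageType = read.ReadByte();  // 1 = annotation placement
    // ... read the 'ab' bytes here, using whatever framing the app defines ...
    byte shape = read.ReadByte();

    // Position and rotation floats, in the same order they were written
    Vector3 point = new Vector3(read.ReadSingle(), read.ReadSingle(), read.ReadSingle());
    Quaternion rotation = Quaternion.Euler(read.ReadSingle(), read.ReadSingle(), read.ReadSingle());

    PlaceAnnotation(new Color(1f, 0f, 0f), shape, point, rotation);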
    

    Here is the code that places the annotation in the Unity scene, on both the client and the HoloLens:

    private void PlaceAnnotation(Color color, byte shape, Vector3 position, Quaternion rotation)
    {
        GameObject dup = (GameObject)Instantiate(annotation, position, rotation);
        gameObjs.AddLast(dup);
        Queue<Transform> q = new Queue<Transform>();
        q.Enqueue(dup.GetComponent<Transform>());

        Material mat = Instantiate(def);
        mat.color = color;
        while (q.Count > 0)
        {
            Transform t = q.Dequeue();
            Renderer rend = t.GetComponent<Renderer>();
            if (rend != null)
                rend.material = mat;
            foreach (Transform current in t)
                q.Enqueue(current);
        }
    }
    

    And this is the XAML for overlaying the annotations on the video from the HoloLens:

    <SwapChainPanel x:Name="DXSwapChainPanel" CompositeMode="MinBlend">
        <MediaElement x:Name="RemoteVideo" RealTimePlayback="True" AudioCategory="Communications" Stretch="UniformToFill" Unloaded="MediaUnloaded" />
        <Grid x:Name="ExtendedSplashGrid" Background="White">
            <Image x:Name="ExtendedSplashImage" Source="Assets/SplashScreen.png" VerticalAlignment="Center" HorizontalAlignment="Center"/>
        </Grid>
    </SwapChainPanel>
    

    I did not set a focus point, which looks like it may be the issue, but I'm not using Mixed Reality Capture. It looks and feels like MRC, but it's really that both the client and the HoloLens know where the annotation should be and each renders it.
