I’ve Given Up on Raycasting, and That’s Okay

Blog Posts, Programming, Unity, Virtual Reality
March 16, 2016

I might be lying. Unity professionals, if you want to shoot me an email about all the reasons why what I’m doing is a Bad Idea, please do!

Now that that’s out of the way… onto the good stuff!

Raycasting, the act of projecting a line out from a camera to detect what it hits, is a big component of game development. It can also be surprisingly challenging to get right, especially when VR camera rigs use multiple camera elements within your 3D scene and you’re trying to establish a single center point. For me in particular, implementing raycasting to select kittens was a huge pain point in KittenVR, and this past weekend, when I decided to make a small VR game using Cardboard, my old nemesis CollisionDetection was back.
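For context, here’s roughly what the conventional approach looks like in Unity – a minimal sketch, with the range and names as illustrative placeholders rather than code from my project:

using UnityEngine;

// Sketch of the conventional approach: cast a ray from the camera
// along its forward vector and see what it hits within some distance.
public class GazeRaycaster : MonoBehaviour
{
  void Update()
  {
    RaycastHit hit;
    if (Physics.Raycast(transform.position, transform.forward, out hit, 5f))
    {
      Debug.Log("Looking at: " + hit.collider.gameObject.name);
    }
  }
}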

For those of you who are unfamiliar with Cardboard, the Unity plugin ships with a handy GazeInputModule that lets users interact with objects using the magnet trigger. Sounds easy enough, right? The challenge comes when you start looking at the way Unity built out their UI system, and how most of the Event Trigger options aren’t quite at a place where they can be applied to 3D interfaces.

Close enough to the center of my screen to be considered in focus, I guess?

After several instances of trying and failing to force my game objects into a world where they were easily focused on and consistently targetable, I decided to try something totally different and forget the gaze input altogether.

I decided, in hackathon spirit, to throw caution to the wind and make my own version of a “raycast” – by creating a physical object instead of relying on a ray itself.

If I am looking at an object in front of me, there is an invisible “ray” being drawn from my eyes to the object in question. I decided to make that a physical component instead – and thus the “I have an invisible cube sticking out of my face” solution was born.

Using Unity’s built-in OnCollisionEnter and OnCollisionExit messages, I was able to test for physical “raycast” collisions in my scene and fake focus with an invisible object attached to my camera that had some, but not all, physical properties. Essentially, the engine is aware that the cube exists and can detect collisions with it, but the viewer never sees anything, and the rest of the laws of physics don’t apply to the cube. I love VR – you get to decide which physics apply and which don’t! M-theory confirmed?

This approach to focusing on and selecting elements works amazingly well, and it’s simpler than raycasting since it uses the collision detection Unity already has built in, without shooting out rays from your own scripts. I added a boolean variable, isInFocus, that is set to true in OnCollisionEnter and false in OnCollisionExit, and wired the “on click” equivalent (in this case, the Cardboard magnet) to only perform the “raycast” action if the object was, in fact, in focus.

Behind the scenes: an invisible cube attached to the Main Camera within the CardboardMain prefab

I’ve seen some implementations of a similar mechanic where the box acts as a pointer, but I had issues with that approach unless the player was within a very small, specific distance of the target object. Treating the box as a ray instead let me detect collisions up to 5 units ahead of the player, regardless of where the target was in that space.
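If it helps to picture it, here’s roughly how the cube gets stretched to cover those 5 units – a sketch assuming the script sits on the camera, with the exact numbers as illustrations rather than values from my project:

using UnityEngine;

// Sketch: parent the invisible cube to the camera and stretch it 5 units
// along the camera's local Z (forward) axis. The cube's pivot is at its
// center, so it's offset by half its length to start at the camera.
public class GazeCubeSetup : MonoBehaviour
{
  public Transform cube; // assign the invisible cube in the Inspector

  void Start()
  {
    cube.SetParent(transform, false);
    cube.localPosition = new Vector3(0f, 0f, 2.5f);
    cube.localRotation = Quaternion.identity;
    cube.localScale = new Vector3(0.1f, 0.1f, 5f);
  }
}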

The cube also needed some specific properties to ignore the usual behaviors of gravity: it was locked into place with the “Freeze Position” and “Freeze Rotation” constraints on its Rigidbody, and gravity was turned off. We also wanted to hide the Mesh Renderer but keep both the Collider and the Rigidbody in place to register the collisions:

Behind the scenes: the cube’s Inspector settings – Mesh Renderer hidden, Collider and Rigidbody intact, gravity off, position and rotation frozen
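If you’d rather set those properties from a script than click through the Inspector, the equivalent looks roughly like this – the same settings as the screenshot above, sketched rather than copied from my project:

using UnityEngine;

// Sketch: configure the invisible cube so it registers collisions
// but is never seen and never moves.
public class InvisibleCubeConfig : MonoBehaviour
{
  void Awake()
  {
    // Hide the cube, but keep the Collider and Rigidbody for collisions
    GetComponent<MeshRenderer>().enabled = false;

    Rigidbody rb = GetComponent<Rigidbody>();
    rb.useGravity = false;                           // no gravity
    rb.constraints = RigidbodyConstraints.FreezeAll; // freeze position and rotation
  }
}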

The focus and selection code itself was incredibly straightforward:

using UnityEngine;

public class FocusTarget : MonoBehaviour
{
  // Colors and controller are assigned in the Inspector
  public Color inFocusColor;
  public Color outFocusColor;
  public GameObject _controller; // the object that keeps score

  private bool isInFocus = false;

  // Update is called once per frame
  void Update()
  {
    // The Cardboard magnet registers as a mouse button press
    if (Input.GetMouseButtonUp(0))
    {
      if (isInFocus) { CheckIfHit(); }
    }
  }

  // On Collision Enter: the invisible cube is touching this object
  void OnCollisionEnter(Collision col)
  {
    PlaceInFocus();
  }

  // On Collision Exit: the cube has moved off this object
  void OnCollisionExit(Collision col)
  {
    RemoveFocus();
  }

  // When the cube enters, the object is in focus
  public void PlaceInFocus()
  {
    isInFocus = true;
    gameObject.GetComponent<MeshRenderer>().material.color = inFocusColor;
  }

  // When the cube exits, the object is not in focus
  public void RemoveFocus()
  {
    isInFocus = false;
    gameObject.GetComponent<MeshRenderer>().material.color = outFocusColor;
  }

  // On magnet click, notify the controller
  public void CheckIfHit()
  {
    _controller.SendMessage("UpdateScore");
  }
}
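One caveat if you try this yourself: if I’m reading Unity’s collision action matrix right, OnCollision messages only fire when at least one of the objects involved has a non-kinematic Rigidbody, so a kinematic cube brushing past static scene geometry won’t register anything. If that bites you, checking “Is Trigger” on the cube’s collider and swapping to the trigger callbacks is a near drop-in change:

// Trigger variant of the same idea – requires "Is Trigger" on the
// cube's Collider, but otherwise slots into the class above.
void OnTriggerEnter(Collider other)
{
  PlaceInFocus();
}

void OnTriggerExit(Collider other)
{
  RemoveFocus();
}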

What this works well for:

  • Small, simple environments where there aren’t many conflicting elements to try to target at once
  • In focus / out of focus check based on center point
  • Fixed-distance “raycasting”

What this probably won’t work well for:

  • Variable-distance raycasting
  • Conditional raycasting

I *think* this method ended up being more computationally friendly than traditional raycasting – I didn’t have to send out rays to detect whether something was in focus, and then send out more collision-detecting rays once it was – and it worked really well for my use case, so I figured I’d share in case someone else is looking for a quick and dirty way to build out a few VR-friendly collision mechanisms. Working on my 2014 MBP with build settings for Android, I stayed around 60 FPS in the editor before any optimization attempts, so in my N=1 case, the performance works out okay too.

How do you handle raycasting and collisions from “focusing” on a VR headset? What tools are you using, and how can my strategy improve? Developers, I’d love to hear your thoughts! 
