At first glance, the cube may look fairly normal. There’s an element of red and blue to the cube, and it doesn’t seem out of the ordinary. The cube itself actually isn’t special at all – but if you take a look at the scene view on the left in the image above, you may start to notice that there’s more than meets the eye.
With virtual reality, now more than ever, new experiences are opening up that support creative tricks we can play on the human eye (and brain) in ways that would be impossible in the physical world. I spent a few days this week experimenting with a setup that uses multiple cameras and different culling masks in Unity to create different perspectives in a scene – a technique that allows for a healthy dose of “Wait, what am I seeing?”
This gif demonstrates the experience of having multiple cameras sharing different perspectives of a given scene. When the space bar is pressed, the main camera switches from seeing one of the two cubes to the other, while two additional cameras each display only their own layer on separate panels. While it may not look like much from this angle, it’s the combination of these techniques that can make for some interesting mechanics in a virtual world.
In virtual reality applications, it’s generally pretty likely that you’ll have a camera to represent the player’s head. But what if you want to capture other elements in the scene to display while the user is in an experience? Setting up additional cameras at different angles in the scene lets us use culling masks on those extra cameras to specify which layers each one should display.
Consider an experience in a video game where a particular spell allows you to see formerly invisible enemies wandering a space. When you cast that spell, you’re triggering a change in the game that makes a hidden layer (say, “EnemyLayer”) suddenly show up to your main camera. It was always there, you just changed what was filtered out of the scene to your main camera.
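As a rough sketch of that idea (the layer name ‘EnemyLayer’ and the spell key are hypothetical – they aren’t part of the demo that follows), turning a hidden layer back on for the main camera is a single bitwise operation on its culling mask:

```csharp
using UnityEngine;

// Hypothetical sketch: reveal a hidden "EnemyLayer" when a spell is cast.
public class RevealSpell : MonoBehaviour
{
    void Update()
    {
        // Assumed spell trigger: the E key.
        if (Input.GetKeyDown(KeyCode.E))
        {
            // Turn the EnemyLayer bit on in the main camera's culling mask.
            // The enemies were always in the scene; only the filter changes.
            Camera.main.cullingMask |= 1 << LayerMask.NameToLayer("EnemyLayer");
        }
    }
}
```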
We’re going to use Unity’s built-in camera objects and set up layers to display two different cubes with their own layers. The cubes will be visible one at a time, and we’ll have a simple script to tell our main camera which one to display. Two panels will be set to project our custom cameras, each of which will be displaying the output of one of the two layers. To create this effect, we’ll start with the following:
- Add two cameras into your scene. Name them according to the layers you’ll be adding – in this example, I named one of them ‘RedCamera’ and one of them ‘BlueCamera’.
- Add two cubes into your scene. Name one of them ‘RedCube’ and one of them ‘BlueCube’. Make associated materials for each of the cubes and place them onto the two cube game objects so that one is red and one is blue.
- Add two more cubes into your scene and scale them to be rectangular in shape. These are going to be our “monitors” for the cameras that we just created. As with the other objects, name one of them “RedCameraOutput” and one “BlueCameraOutput”.
By default, the main camera in Unity (which is tagged ‘MainCamera’) is the one whose output is shown in the main game window. We’re going to direct the output of our two color cameras to the output panels via a ‘RenderTexture’, which is applied to a material just like a standard texture and continually displays whatever the specified camera renders.
- In the Assets folder, right click and create new Render Textures for each of your two panels. Drag the Red and Blue Render Textures to their associated output game objects.
- Select your Red Camera. Under the Camera component in the Inspector window, assign your Red Render Texture to the Target Texture box.
- Select your Blue Camera. Under the Camera component in the Inspector window, assign your Blue Render Texture to the Target Texture box.
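The same wiring from the steps above can also be done from a script. Here’s a minimal sketch, assuming the camera and panel renderer references are assigned in the Inspector (the field names and texture size are my own choices, not from the original setup):

```csharp
using UnityEngine;

// Sketch: create a RenderTexture at runtime and wire it
// from a source camera to an output panel.
public class CameraPanelLink : MonoBehaviour
{
    public Camera sourceCamera;   // e.g. RedCamera, assigned in the Inspector
    public Renderer outputPanel;  // e.g. RedCameraOutput's renderer

    void Start()
    {
        // Width, height, and depth buffer bits for the texture.
        var rt = new RenderTexture(1024, 1024, 16);
        sourceCamera.targetTexture = rt;        // camera renders into the texture
        outputPanel.material.mainTexture = rt;  // panel displays it like any texture
    }
}
```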
Next, we want to set up our layers so that the red and blue objects only appear to their specific cameras. This is the base of the culling masks, and we’ll want to assign objects into either the red or blue layer, depending on what they should be displaying.
- Under the ‘Layers’ dropdown, select ‘Edit Layers’
- Add a new layer called ‘Blue’ in User Layer spot 8
- Add a new layer called ‘Red’ in User Layer spot 9
- Assign your Red Cube to the Red layer and your Blue Cube to the Blue layer
- Return to your Red Camera. Under the Culling Mask drop down, deselect the Blue Layer.
- On the Blue Camera, repeat the previous step, deselecting the Red Layer instead
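Those Culling Mask checkboxes map directly onto bits in each camera’s `cullingMask` integer. As a sketch of what the two deselections above look like in code (field names are assumptions):

```csharp
using UnityEngine;

// Sketch: the code equivalent of unchecking layers in the
// Culling Mask dropdown for each camera.
public class CullingSetup : MonoBehaviour
{
    public Camera redCamera;
    public Camera blueCamera;

    void Start()
    {
        int blueBit = 1 << LayerMask.NameToLayer("Blue"); // User Layer 8
        int redBit  = 1 << LayerMask.NameToLayer("Red");  // User Layer 9

        redCamera.cullingMask  &= ~blueBit; // RedCamera sees everything except Blue
        blueCamera.cullingMask &= ~redBit;  // BlueCamera sees everything except Red
    }
}
```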
What you should see now is the scene at the very top of the post: two panels, one displaying the red cube and one displaying the blue cube. The last step for the basic setup is going to let us switch between the two layers on our main camera.
Adding a Script to Swap the Main Camera Masks
I went ahead and implemented a basic swap on a space bar press in my script – it took a surprisingly small amount of code to change the camera mask. I made a ‘CameraDisplayScript.cs’ C# script with the following code in it:
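(The original listing isn’t reproduced here, but based on the description below – a ‘redVisible’ boolean and bit shifts against layers 8 and 9 – the script looks roughly like this; exact variable names beyond ‘redVisible’ are an assumption:)

```csharp
using UnityEngine;

// Sketch of CameraDisplayScript.cs: swap which cube layer
// the main camera displays when the space bar is pressed.
public class CameraDisplayScript : MonoBehaviour
{
    private Camera mainCamera;
    private bool redVisible = true; // which color layer is currently shown

    void Start()
    {
        mainCamera = GetComponent<Camera>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            if (redVisible)
            {
                // Hide Red (User Layer 9), show Blue (User Layer 8).
                mainCamera.cullingMask &= ~(1 << 9);
                mainCamera.cullingMask |= 1 << 8;
            }
            else
            {
                // Hide Blue, show Red.
                mainCamera.cullingMask &= ~(1 << 8);
                mainCamera.cullingMask |= 1 << 9;
            }
            redVisible = !redVisible;
        }
    }
}
```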
Basically, the script above has a boolean variable (‘redVisible’) that tracks whether the red mask is currently showing, which determines which way to shift our culling masks. Since we assigned our Blue layer to slot 8 and our Red layer to slot 9, we shift by the corresponding layer number in our bit-shift code. I assigned this script to the main camera object, which produced the gif output above when the game was put into play mode.
Bonus Step: Adding the Character Controller
The last thing that I did for fun was add in a plane and swap out the main camera for a player controller. This let me see how the player was represented on each of the different displays to show how perspective can change around a player depending on the masks and layers that you’re using.
So, why do all of this?
There are a number of different things that layers and culling masks allow us to do, particularly in VR. You can change what’s visible to some players and not others, dynamically creating a new element of perspective shifting and gameplay mechanics in an application. You can build out levels that grow and change with a user based on their behaviors in the virtual world, or perhaps even begin to look at integration with haptics and external tracking devices to provide new depth to applications. You can create user-controlled filters for personal bubbles. You can build in-game “security cameras” or experiment with multiplayer interactions that derive from finding common experiences among different environments while sharing the same networked space – there’s a lot of possibility here from an experimental standpoint, and it’s one that I haven’t seen many virtual reality games or experiences take advantage of yet.
You can get the full source for this demo on GitHub here: Camera Culling Demo