Experiments in WebVR – Part 2

Blog Posts, Programming, Virtual Reality
February 3, 2015

Back in January, I wrote about my first attempt at using the new WebVR “standards” (I’ll get more into this later) to make a mini JavaScript application using Three.js and the VRRenderer from wwwtyro. In this post, I’ll walk through the steps I followed to get a demo up and running using Tyro’s instructions – if you’ve got Firefox Nightly or a VR-enabled build of Chromium installed on your desktop, you can check out the demo at http://webvrdemo.azurewebsites.net, but most browsers will render a blank page until I get some better browser detection going.

Poor quality is from gif conversion, not the actual rendering

Setup

To begin working with browser-based virtual reality applications, you’ll need one (or both!) of the following browsers, since WebVR hasn’t been integrated into the main release versions of Chrome or Firefox:

  • Firefox Nightly
  • A VR-enabled build of Chromium

Additionally, you can clone the entire project from GitHub to get the following files, or copy & paste the code from the individual files:

  • Three.js – you’ll need the built version of this, so you can either copy it directly from my WebVR repository or download the source from Threejs.org and build it before copying the file into your WebVR project
  • VRRenderer.js file – this is included in my repository and is courtesy of WWWTyro

You should grab the Oculus SDK & runtime for your platform as well.

My environment

Because all of these components are in pre-release stages, your mileage may vary on getting things set up depending on your development environment configuration. I did the coding part of this project on the OS X partition of my MacBook Pro but tested on Windows as well, both with Chromium, so either platform should be fine.

  • MacBook Pro i5, 16 GB, Intel Iris 1536 MB (mid-2014) running Yosemite
  • Oculus 0.4.4 runtime / SDK for Mac
  • Atom editor
  • Chromium

The Project

I followed the tutorial that Tyro posted about using the VRRenderer with Three.js here, but being new to web development in general, there were a few parts that didn’t make a ton of sense to me, so this is my attempt at breaking it down a little further. I also made a few changes to the rendered shapes and ended up with a final result that’s pretty much guaranteed to make viewers dizzy.

First things first, you’re going to want to set up your file structure. This project has an index.html file to render the page, Three.js, VRRenderer.js, and a styles.css file. You can break out the main JavaScript into a separate file if you’d like, but for simplicity, I just kept it in <script> tags in my index.html.
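For reference, here’s roughly what the project directory looks like at this point (the folder name is whatever you like; the file names match the ones used below):

webvr-demo/
 ├── index.html
 ├── styles.css
 ├── three.js
 └── VRRenderer.js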

(Screenshot: the project open in Atom)

The awesome part about Tyro’s VRRenderer file is that it hides a lot of the rendering under the hood so you can focus on generating the environment instead of tweaking the eye positioning or tracking. Similar to how the Oculus SDK allows you to drop a full character controller & camera into a project, VRRenderer replaces the traditional Three.js camera renderer and uses the tracker to update the viewing angle as you move your head.
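In code terms, the difference from a plain Three.js project is tiny – you wrap the usual WebGLRenderer and call render on the wrapper instead (the variable names here match the full code further down):

// Traditional Three.js: one camera, one view
renderer.render(scene, camera);

// With VRRenderer: the wrapper drives the per-eye cameras and draws the
// side-by-side output the HMD expects
vrrenderer = new THREE.VRRenderer(renderer, vrHMD);
vrrenderer.render(scene, camera);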

The Code

If you’re like me (a relatively new web developer), then the hardest part of WWWTyro’s WebVR tutorial is figuring out how to get everything in order in the right files. Assuming that you already have your three.js and VRRenderer.js files in your project directory, the next step is to create the shell for index.html.

<html>
 <head>
 <meta charset="utf-8">
 <link rel="stylesheet" type="text/css" href="styles.css">
 <script src="three.js"></script>
 <script src="VRRenderer.js"></script>
 <script language="javascript">
 </script>
 </head>
 <body>
 <canvas id="render-canvas"></canvas>
 <div style="position: fixed; top: 8px; left: 8px; color: white">Hit the F key to engage VR rendering</div>
 </body>
</html>

Our HTML file is very simple: we set up the head, load the accompanying scripts (leaving an empty script block where the rendering code will go), and create a basic WebGL canvas named “render-canvas”. Over the canvas is a label in the top left that tells the viewer how to toggle between full screen and VR mode.

Next, we’ll add a very short styles.css file to the project that contains the following:

body {
 margin: 0;
}

#render-canvas {
 width: 100%;
 height: 100%;
}

This style sheet tells our rendering canvas to take up the full space available and removes the body’s default margin, so our VR canvas is the actual size of the screen and doesn’t have a border in the frame.

Once we’ve added both of those, we can go ahead and add our functions to the index.html file. Tyro does an awesome job explaining each of these in detail on his tutorial page, so I’ll keep my explanations to a minimum and just show you where to stick each of them.

You’ll want to start off by declaring the three variables we use for rendering at the top of your script block: the canvas, the HMD (head-mounted display), and the HMD’s position sensor.

<script language = "javascript">
 var renderCanvas = document.getElementById("render-canvas");
 var vrHMD;
 var devices;

After those three variables are declared, we’ll start adding in the functions specified in Tyro’s tutorial.

 /** Get the list of VR devices **/
 window.addEventListener("load", function() {
   if (navigator.getVRDevices) {
     navigator.getVRDevices().then(vrDeviceCallback);
   } else if (navigator.mozGetVRDevices) {
     navigator.mozGetVRDevices(vrDeviceCallback);
   }
 }, false);
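Incidentally, this listener is also the natural home for the “better browser detection” I mentioned at the top. Here’s a sketch of my own (not part of Tyro’s tutorial) that adds an else branch to tell unsupported browsers what’s going on instead of leaving the page blank:

 /** Variant with a fallback for unsupported browsers - my own sketch **/
 window.addEventListener("load", function() {
   if (navigator.getVRDevices) {
     navigator.getVRDevices().then(vrDeviceCallback);
   } else if (navigator.mozGetVRDevices) {
     navigator.mozGetVRDevices(vrDeviceCallback);
   } else {
     // Neither WebVR API exists, so explain the blank page
     document.body.innerHTML = "This demo needs Firefox Nightly or a " +
       "VR-enabled build of Chromium.";
   }
 }, false);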

 /** Assign objects to their correct VR device (HMD, sensor) **/
 /** Note that FF 38.0a1 consistently lists the HMD first **/
 function vrDeviceCallback(vrdevs) {
   for (var i = 0; i < vrdevs.length; ++i) {
     if (vrdevs[i] instanceof HMDVRDevice) {
       vrHMD = vrdevs[i];
       break;
     }
   }
   for (var i = 0; i < vrdevs.length; ++i) {
     if (vrdevs[i] instanceof PositionSensorVRDevice &&
         vrdevs[i].hardwareUnitId == vrHMD.hardwareUnitId) {
       vrHMDSensor = vrdevs[i];
       break;
     }
   }

   /** Log that the devices are properly seen **/
   for (var i = 0; i < vrdevs.length; i++) {
     console.log(vrdevs[i]);
   }

   initScene();
   initRenderer();
   render();
 }

Of course, in order for this code to actually display anything, we need to include the functions initScene(), initRenderer(), and render(), so we’ll add those next, still in our <script> block:

 /**
  * Initialize a basic Three.js scene. This is where you will draw the mesh
  * components on the canvas and set up the camera.
  **/
 function initScene() {
   // The 1280/800 aspect ratio matches the render size set in initRenderer
   camera = new THREE.PerspectiveCamera(60, 1280 / 800, 0.001, 10);
   camera.position.z = 2;
   scene = new THREE.Scene();
   var geometry = new THREE.TorusKnotGeometry(1, 4, 64, 8, 2, 3, 1);
   var material = new THREE.MeshNormalMaterial();
   mesh = new THREE.Mesh(geometry, material);
   scene.add(mesh);
   scene.add(camera);
 }

 /**
  * Initialize the VR renderer for the scene. This sets up the WebGL context
  * on our canvas and uses the VRRenderer to display the scene side by side.
  **/
 function initRenderer() {
   renderCanvas = document.getElementById("render-canvas");
   renderer = new THREE.WebGLRenderer({
     canvas: renderCanvas,
   });
   renderer.setClearColor(0x555555);
   renderer.setSize(1280, 800, false);
   vrrenderer = new THREE.VRRenderer(renderer, vrHMD);
 }

 /**
  * Render the VR scene and update the camera to reflect the HMD's tracked
  * orientation (camera.quaternion.set). The camera is then passed along with
  * the Three.js scene to the VRRenderer, which handles the redrawing as the
  * scene runs.
  **/
 function render() {
   requestAnimationFrame(render);
   mesh.rotation.y += 0.01;
   mesh.rotation.x += 0.01;
   var state = vrHMDSensor.getState();
   camera.quaternion.set(state.orientation.x,
                         state.orientation.y,
                         state.orientation.z,
                         state.orientation.w);
   vrrenderer.render(scene, camera);
 }

If you’re following along and comparing to the WWWTyro tutorial, you’ll notice a difference in what we render on the Three.js canvas: the original tutorial has a 3D shape floating in space, but I took it a little further by creating a shape around the camera, which results in a spiraling background all around us in the scene instead of a shape in one direction. This is done by changing the line

var geometry = new THREE.IcosahedronGeometry(1, 1);

to:

var geometry = new THREE.TorusKnotGeometry(1, 4, 64, 8, 2, 3, 1);
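If you’re curious what those numbers mean, here’s the same line annotated with the constructor’s parameters as I understand them from the Three.js docs of this era:

// THREE.TorusKnotGeometry(radius, tube, radialSegments, tubularSegments,
//                         p, q, heightScale)
// A tube radius (4) much larger than the knot radius (1) is what wraps
// the geometry around the camera instead of leaving it out in front
var geometry = new THREE.TorusKnotGeometry(1, 4, 64, 8, 2, 3, 1);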

After that, the only thing remaining is to initialize full screen mode! Chrome and Firefox are up to their usual trick of differing by a single letter’s capitalization (webkitRequestFullscreen vs. mozRequestFullScreen), so we need to try both. I added a try-catch around the request to keep weird exceptions from being thrown and unhandled, but it’s not much help yet since all I do is log the problem.

 /** Enter fullscreen mode **/
 window.addEventListener("keypress", function(e) {
   if (e.charCode == 'f'.charCodeAt(0)) {
     try {
       if (renderCanvas.mozRequestFullScreen) {
         renderCanvas.mozRequestFullScreen({
           vrDisplay: vrHMD
         });
       } else if (renderCanvas.webkitRequestFullscreen) {
         renderCanvas.webkitRequestFullscreen({
           vrDisplay: vrHMD
         });
       }
     } catch (err) {
       // Named err to avoid shadowing the keypress event; for now we only log
       console.log("Problem attempting fullscreen: ", err);
     }
   }
 }, false);
</script>

With that, you can close off your script block and you’re good to go! The current file in my GH repo includes the shell for a “cancel fullscreen mode” function, which has yet to be implemented, but that isn’t needed for a first go.
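If you want to fill that shell in yourself, a cancel handler would presumably just mirror the request above. Here’s a rough sketch (untested; the 'c' key is an arbitrary choice of mine, and note the exit methods hang off document rather than the canvas):

 /** Sketch of a "cancel fullscreen" handler - not implemented in the repo **/
 window.addEventListener("keypress", function(e) {
   if (e.charCode == 'c'.charCodeAt(0)) {
     if (document.mozCancelFullScreen) {
       document.mozCancelFullScreen();
     } else if (document.webkitExitFullscreen) {
       document.webkitExitFullscreen();
     }
   }
 }, false);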

After I pushed the codebase to GitHub, I hopped onto my Azure account and published the website using Azure’s new tooling for connecting directly to a GitHub repository for continuous deployment. It was probably the fastest part of the project, and being able to carry my Oculus over to a different computer to test on Windows was met with a lot of excitement from my colleagues. Performance was qualitatively good (though I was only rendering one object and didn’t take any benchmarks), and most people got a kick out of being inside a spinning rainbow. I’ve affectionately named it the motion sickness simulator.

Thoughts

WebVR is going to be awesome and fun – it already is – but after this experiment, I’ll probably be sticking with Unity for my immediate VR development needs, at least when it comes to world creation. There is definitely a need for open platform standards for virtual reality, and the web is an awesome way to get there, but at this point it’s still easier to hop into Unity and throw something together.

  • World creation: The Three.js library is powerful, but for a less experienced web developer like me, having to build everything in pure code, rather than in a visual world editor, is a definite pain point.
  • Browser compatibility: It’s early in the WebVR game, so I totally understand why things aren’t there yet, but troubleshooting errors is a pain right now – prerelease browsers and an early-stage rendering pipeline mean there’s little to no support when roadblocks pop up. Changes on the browser end can also be fatal to WebVR projects at this stage, with no indication of why – again, an early-stage problem rather than a long-term one.
  • Ramp-up time: I built my first Unity app (a cube rotating in space) in about 2 hours, and integrating the Oculus camera into it was a very simple task. In contrast, my WebVR project, excluding the time I spent on earlier ideas, took about ten hours to get from initial repository creation to a workable demo – which promptly broke when I switched from Chromium to Firefox Nightly. If you’re an experienced web developer, though, I could see this being less of an issue.

Overall, I definitely plan on continuing to work with WebVR – but it’s not my go-to for quick VR projects yet. I plan on trying out a new demo sometime soon, and definitely recommend checking out the tools available for VR rendering in the browser (especially if you’re a more experienced web developer!) as they become more robust and documented. In general, I’ll call the experiment a success, and can’t wait to see where it grows!
