Welcome to Monday Musings! These are shorter-form, note-like blog posts that may or may not be related to VR/AR, but that I want to get some quick thoughts out on, gather extra insight about, or simply share quickly.
There’s been a lot of news around input devices for virtual reality recently, some of it promising (can’t wait to try Rink hands-on) and some not-so-good (a delay in the estimated Touch launch) – but since input is on the brain, I figured I’d use today’s Monday Musing to share my thoughts on VR/AR input!
(Bonus: there’s an upcoming episode of Just A/VR Show dedicated to VR input being released shortly!)
Let’s start with a basic overview!
There are a few different methods for input in virtual & augmented reality, and each of them has its own set of strengths and weaknesses. Generally, they’ll fall into one of these categories:
- HMD Integrated (Cardboard magnet, GearVR touchpad)
- Non-controller peripherals (Keyboard, Mouse)
- Controller (Standard, e.g. Xbox, or specialized, e.g. Rink, Touch)
- Gesture (Leap Motion, Kinect, HoloLens)
- Haptic (Glove One)
Within each of those categories, there is a pretty large spectrum of functionality – the Cardboard magnet, for example, provides only a single trigger input, compared to the scroll, tap, and back functions integrated into the GearVR – but I’ve found these descriptions to be a pretty good starting point when considering input types for a new application. As the ecosystem evolves (FOVE, for example, has pupil tracking built in), these categories will certainly grow as well.
From a basic usability perspective, integrated input for head mounted displays is nice because it provides a layer of standardization for developers to work with on a given platform. Every GearVR application knows exactly what it has access to, and can expect the user to have that set of behaviors available. Of course, that’s not to say that you can’t develop across multiple platforms with different varieties of input options – and most good experiences do provide flexibility. Integrated input, though, tends to be limited in that it’s restricted to what fits on the headset.
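One common way to get that cross-platform flexibility is to map each device’s raw inputs onto a small set of abstract actions that the application actually cares about. Here’s a minimal, hypothetical sketch of that idea – the class and method names (`InputSource`, `poll`, and so on) are illustrative and don’t come from any real SDK:

```python
# Hypothetical sketch: abstracting input across VR platforms so one app can
# handle a Cardboard magnet, a GearVR touchpad, or something else, without
# caring which device-specific buttons fired. Names are illustrative only.
from abc import ABC, abstractmethod
from typing import List


class InputSource(ABC):
    """A device adapter that reports abstract actions, not raw buttons."""

    @abstractmethod
    def poll(self) -> List[str]:
        """Return the abstract actions fired since the last poll."""


class CardboardMagnet(InputSource):
    """The Cardboard magnet only offers a single trigger-style input."""

    def __init__(self) -> None:
        self._pulled = False

    def pull(self) -> None:
        self._pulled = True

    def poll(self) -> List[str]:
        fired = ["select"] if self._pulled else []
        self._pulled = False  # consume the event
        return fired


class GearVRTouchpad(InputSource):
    """The GearVR touchpad maps tap/scroll/back onto richer actions."""

    def __init__(self) -> None:
        self._events: List[str] = []

    def tap(self) -> None:
        self._events.append("select")

    def swipe(self, direction: str) -> None:
        self._events.append(f"scroll_{direction}")

    def back(self) -> None:
        self._events.append("back")

    def poll(self) -> List[str]:
        fired, self._events = self._events, []
        return fired


def handle_actions(source: InputSource) -> List[str]:
    # The application layer only ever sees abstract actions.
    return source.poll()
```

The payoff of a layer like this is that the richer device simply exposes more actions through the same interface, so an app written against `InputSource` degrades gracefully on the simpler one.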
Controllers are a good next step in complexity, and generally offer a similar set of inputs across devices, but require that you’re holding something in your hand. For a lot of well-written applications, this isn’t too bad – I’ve actually forgotten that I had them in my hands in Job Simulator – but it does detract slightly from realism, especially if you don’t play a lot of games. A keyboard and mouse combo works in some situations, but you’re limited to users who are familiar enough with the keyboard layout to use it while wearing an HMD (it takes some practice but generally isn’t too bad to work around).
Gestures are a good way of getting hand tracking into a virtual experience, but I tend to have trouble feeling fully immersed with these due to the lack of haptic feedback. They do work really, really well for applications that have a ‘holographic’ or magic element to them (Harry Potter VR experience, please?!), since there’s no expected physical force feedback from the items being touched. That makes gestures a good solution for applications that embrace the lack of feedback as a feature, rather than something that needs to be ignored.
Finally, haptic controllers are an area that I can’t wait to see continue to improve, because I think that these have the most potential for full immersion. We’re still a ways off from this becoming the norm, but the idea of truly feeling present in a virtual space will be made much more realistic with great force feedback that is able to play to a wider range of our senses. Consider me ready for my whole Ready Player One rig!