Recently, I’ve been heads down in VR design research and working on some prototyping – and since it’s been on my mind a lot over the past several weeks, I found myself thinking about the use of text elements in VR while I was walking through downtown San Francisco the other day. As I stood on a street corner waiting to cross, I was struck by how often, in the physical world, we rely on non-text contextual cues to convey information, and how this would translate to good virtual and augmented reality design.
Text in the physical world comes into play to convey very specific types of information – some more useful than others – and processing text tends to play a relatively small role in our lives outside of our smartphones and computers. One place where text is incredibly useful in our physical world? Road signs.
Road signs are designed to be easily understood, immediately recognized and internalized, and very rarely convey extraneous information. We learn how to categorize signs as important or unimportant based on context and our immediate situation – if we’re walking a familiar route, street name signs fade into the background; driving in an unfamiliar place with an upcoming turn amplifies their importance immensely. Applying this idea of contextual significance to text-based UI elements in virtual reality, I came up with a framework to guide user interface designs in my VR apps: the ‘road sign design’ test.
How is text represented in your virtual reality application?
I’m guilty of applying the “floating text UI” over the head of the NPC kitten in KittenVR. In the physical world, cats do not have text appear over their heads when I talk to them – it’s unrealistic, but commonly seen with traditional game NPCs. With VR, immersion is everything, and a better solution is to embrace a design strategy that aligns more closely with physical world experiences. A better alternative in KittenVR would be a sign declaring “Lost Kittens, reward offered!” or using voice-over narration for a talking cat.
As part of the application design and structure, consider any non-natural ways that text is presented to the user. How does your presentation enhance (or detract) from a user’s immersion?
How frequently is text placed in your environment?
Imagine you’re sitting in a conference lunch room. There are about fifty white tables in an open space, and each table has a brochure with the lunch sponsor and conference name on it. Some tables have suggested conversation topics – two signs with the same text printed on each side, visible from virtually every angle. It’s information overload.
When I think about great VR experiences, what I do NOT want is to be inundated with things I feel expected to read, only to find the same information repeated over and over again every five feet.
Sit in your application and look around. How frequently is a text element present? Do you feel gently guided to understand what to do and explore, or is there so much to take in that it creates a sense of urgency?
How critical are your text elements to the experience?
Road signs are contextually important for a variety of different reasons. A speed limit sign isn’t useful while you’re gridlocked in traffic, but is very helpful when pulling off of a highway. ‘Children at play’ signs generally don’t appear next to busy intersections, but are good reminders in residential neighborhoods to be cautious of our surroundings. Text placement in virtual reality should be similarly contextual – naturally, this will depend on the context of the application itself, as well as any given point within the experience.
Consider the motivation behind each text element carefully and avoid superfluous text for the sake of adding detail.
Could this be better said with iconography?
If you have decorative text elements in your application, think about whether or not the same result can be achieved with icons. Job Simulator from Owlchemy Labs does an excellent job with this – a pepper icon conveys that a red bottle in the fridge is Sriracha – which frees the user from feeling as though they need to pay attention to the non-essential elements and helps encourage their attention to more important components within the environment.
Personally, I’m a really big fan of contextual text in virtual reality. It’s not too difficult to approximate the user’s attention at the center point of the screen, and this information can be used to show or hide relevant text as the user explores an experience.
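One way this center-of-screen approximation can work is to treat the headset's forward vector as the user's gaze and only show a label when its object falls within a narrow cone around that direction. A minimal sketch of that check, in plain Python with illustrative names and a hypothetical 15-degree threshold (a real app would do this in its engine's update loop):

```python
import math

def in_gaze_cone(head_pos, gaze_dir, target_pos, cone_half_angle_deg=15.0):
    """Return True if target_pos lies within a cone around the gaze direction.

    head_pos, gaze_dir, and target_pos are (x, y, z) tuples; gaze_dir does
    not need to be normalized. The 15-degree half angle is an assumption,
    not a value from any particular engine or headset.
    """
    # Vector from the user's head to the candidate text element.
    to_target = tuple(t - h for t, h in zip(target_pos, head_pos))
    dot = sum(g * t for g, t in zip(gaze_dir, to_target))
    mag = (math.sqrt(sum(g * g for g in gaze_dir))
           * math.sqrt(sum(t * t for t in to_target)))
    if mag == 0:
        return False  # degenerate: target at head position or zero gaze vector
    # Angle between the gaze direction and the target, clamped for safety.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))
    return angle <= cone_half_angle_deg

# A label slightly off-center ahead of the user counts as attended to;
# a label behind the user does not, so its text could stay hidden.
print(in_gaze_cone((0, 0, 0), (0, 0, 1), (0.1, 0, 5)))   # True
print(in_gaze_cone((0, 0, 0), (0, 0, 1), (0, 0, -5)))    # False
```

In practice you would also want some hysteresis or a fade, so labels don't flicker on and off as the user's head drifts across the cone boundary.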
Above all else, though, what I love most about VR is the opportunity to break rules. These are just a few guidelines that work for the types of applications I’ve been working on, but your experience may vary – don’t be afraid to try new things as it works in your design.