
VR uses existing UX & UI models

Paul Rybicki • Sep 07, 2023


Start with existing UX & UI models in XR

If you are a designer new to creating novel User Interfaces and novel User Interactions in a spatial environment like the current round of battery-powered VR headsets, you can leverage two areas of your prior experience as a conceptual bridge into this design space. We will get into those in detail in this article, and you will leave with a conceptual foundation to operate from when designing interactions and interfaces in spatial environments.


We need to start with two aspects of human-centered design: Memory and Attention.


Your attention please…

Attention is an aspect of information processing that impacts how our users gather information from a sequence of screens over time (spatio-temporal environments) and, using Working Memory, process that information into larger constructs which they then attach to the goals they have formed while playing your game. This applies to a phone, a laptop, or a headset.


There are three forms of attention that impact a user’s information processing and task completion: selective attention, sustained attention, and divided attention.

Selective Attention is the act of choosing which stimuli to focus on and process. Usually our emphasis is on visual stimuli and information input from a screen, but auditory stimuli (sound, especially spatial sound) and tactile stimuli (vibration and haptic feedback) also engage selective attention. These stimuli interact and can influence each other.


Sustained or focused attention is the effort it takes to maintain processing of stimuli or information elements while ignoring other stimuli, or ignoring other information from the same stimuli.


Divided attention is the ability to process or focus on more than one event or stimulus at the same time. 


When designing interfaces and interactions we often combine stimuli in a single experience, or substitute one for another, as in the case where a user taps on an on-screen button and the handset vibrates to indicate input received, but the button does not change color or size.

Types of memory

Recall dips into our Long-Term memory, our built-up knowledge over time. To recall a piece of information from long-term memory we need to see/hear/feel enough pattern fragments to connect those fragments with the memory and its associated information. 


Users should not need to recall information from screen to screen. When using an interface, Recognition is the primary system at play and uses Working, or Short-Term memory. Short-term memory is highly influenced by our attention resources, so we often use pictures or visual representations (icons) in our interfaces as a shorthand to aid in recognition and lower the burden on our attention.


Knowing our place within the screens or the flow of a game we play regularly is very similar to our ability to recall the layout and arrangement of physical spaces, known as Spatial Memory, which is drawn from long-term memory. Combining our spatial memory with our present “location” within a flow or virtual space produces Situational Awareness: it is how we know which way to go, or which UI element will take us to which feature or system within the game.


Using an interface is a form of spatial navigation in short-term memory based on our processing of stimuli or an event intentionally constructed to carry information.


How we use the space around us to work (the Promise)

Recalling ideas or concepts stored in long-term memory requires enough memory traces to be reactivated through recognition (pattern completion) of any one of the stored patterns from a fragment (Smith & Kosslyn 2013). Therefore our ability to recall what functionality or feature sits behind the click of a button or which information is in what file relies on cues and on context of the user’s immediate spatio-temporal environment. It’s highly associative. 

Look around your office: people are constantly using the walls as work spaces. Using a wall for a brainstorming session with Post-Its? Perhaps there’s a mood board of images on another wall of your workspace. That rolling dry-erase board makes a fantastic post-meeting reminder of discussions recent or long past. Why is that? And how do we design digital interfaces and products that have this recall capability and support multiple users, just as people used to do in an office?


An author once described a shared, networked visual workspace as “Unthinkable complexity … ranged in the nonspace of the mind, clusters and constellations of data.”


The “shared” portion of that quote has been realized by wi-fi, fiber and cellular networks. The non-space of the mind can be thought of as our cognition, and our situational awareness. 



Any interface is a reference point for the user into their long-term and short-term memory: a reminder of the cluster of data and the constellation (connections to other information or decisions) in which it resides. Existing 2D and 3D GUIs use spatial layout and information hierarchy, as well as temporal sequencing and progressive refinement, to give users a way to segment and compartmentalize ideas, concepts & memories. For example: you need a flashlight. Where is your backpack? Tap an icon, or reach with an arm & finger gesture towards your shoulder. A visual representation of the backpack opens and items are displayed; you open a side pocket to reveal more items, and the view of the backpack visually shifts to focus on the pocket and its contents, following your attention and focus. This is the same thing as clicking your way through folders to get to a file. In terms of UI, the Treeview model can be employed in spatial and volume-centric environments, but it’s still all progressive refinement & disclosure in the end.
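
As an illustrative sketch (the containers and item names here are hypothetical, not from any particular game), progressive disclosure can be modeled as a focus path into a tree, whether that tree is drawn as folders on a desktop or as a backpack floating in world space:

```python
# Hypothetical inventory tree: each key is a container or item,
# each value is its (possibly empty) dict of children.
inventory = {
    "backpack": {
        "main compartment": {"flashlight": {}, "rope": {}},
        "side pocket": {"matches": {}, "compass": {}},
    }
}

def visible_items(tree, focus_path):
    """Progressive disclosure: only the children of the currently
    focused node are displayed, exactly like clicking through
    folders to reach a file."""
    node = tree
    for step in focus_path:
        node = node[step]
    return sorted(node)
```

Moving the user's focus deeper (e.g. from `["backpack"]` to `["backpack", "side pocket"]`) narrows what is shown, which is the Treeview model restated as progressive refinement.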


2D UX to a 3D space

Current head-mounted display products (battery-powered headsets using Qualcomm’s chips) already allow for gaze-based interactions as well as hand and finger gesture recognition.


Think of a user's gaze and head movement like a mouse pointer. This gives you mouseOver / mouseOut events where the scene can react to the user's gaze. The familiar mouse interaction model, where items scale up or expand in size when you look at them, is incredibly powerful in VR, where the element appears to move slightly towards you.
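
As a minimal sketch of this idea (the function names, hover cone angle, and easing values are illustrative assumptions, not any engine's actual API), treating gaze like a pointer reduces to a hit test plus a smoothed scale target:

```python
import math

def gaze_hits(gaze_dir, element_dir, hover_angle_deg=5.0):
    """Hit test: is the element within a small cone around the gaze ray?
    Both arguments are unit 3D direction vectors (x, y, z)."""
    dot = sum(g * e for g, e in zip(gaze_dir, element_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= hover_angle_deg

def update_scale(current, hovered, rest=1.0, hover=1.15, smoothing=0.2):
    """Ease the element's scale toward its target each frame, like a
    mouseOver tween: `smoothing` is the fraction of the remaining
    gap closed per frame."""
    target = hover if hovered else rest
    return current + (target - current) * smoothing
```

Each frame the scene runs `gaze_hits` per interactive element and eases its scale; the gradual grow/shrink is what reads as the element "leaning toward" the user.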


When using pointing or arm gestures for interaction, thinking of your interfaces and the user’s hand as magnetic objects which can attract or repel each other will yield very natural-feeling results as an interface moves slightly toward or away from a user’s hand. While this interaction simply reuses the 2D model of scaling up a UI element onMouseOver / onMouseOut, it has a fundamentally different feeling when employed in 3D space.
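
A sketch of that magnetic feel (the reach and pull distances are hypothetical tuning values): the panel is nudged toward the hand, with the pull falling off to zero at the edge of reach.

```python
import math

def magnetic_offset(panel_pos, hand_pos, reach=0.4, max_pull=0.03):
    """Nudge a UI panel toward the user's hand. Pull strength falls off
    linearly from max_pull (hand touching the panel) to zero at `reach`
    meters; beyond reach, the panel does not move."""
    delta = [h - p for h, p in zip(hand_pos, panel_pos)]
    dist = math.sqrt(sum(d * d for d in delta))
    if dist >= reach or dist == 0.0:
        return tuple(panel_pos)
    strength = max_pull * (1.0 - dist / reach)
    return tuple(p + (d / dist) * strength for p, d in zip(panel_pos, delta))
```

Flipping the sign of `max_pull` gives the repel case; applied per frame, either direction produces the subtle drift that makes the panel feel physical.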


Finally, immersion. Polygon count and uncanny-valley textures are not the only key to user immersion. They are one way to get there, but there is a far better method which works well on both the current and previous generation of user-purchasable hardware: interaction & reaction. Imagine you’re in a cartoony outdoor nature scene. Everywhere you look, birds flutter away when you move your arm suddenly or make a noise. A squirrel wanders towards you, then runs away when you shift your weight from one leg to the other, moving your whole upper body. Environmental sounds come from all around you, and you hear a cartoon dog sniffing at your shoes from behind you. There are literally dozens of small, medium and even large ways you can interact with the scene around you. Spend 30 seconds in that environment? Focused attention. Situational awareness. Immersion due to the sheer amount of interactivity of the digital environment you are in. This approach to immersion is buildable now, and when displays improve, adding more detailed textures or more 3D audio events to this immersive foundation will pay enormous dividends.



Mobile-centric decision chunking FTW

Despite the seemingly infinite canvas of an immersive headset experience, displaying limited information to your user in a sequence of chunks, as employed by mobile games, actually works much better. Keeping in mind that attention is generally both selective and focused, let’s consider the following scenario. A player is refilling their backpack from a storage locker. The backpack stores items in a “tool roll” style interface, allowing for 2D UI projection in world space.


They run out of room in the pack - how do you allow them to sort or discard items? 


You can magically ‘unfold’ the tool roll to expand it far larger than is possible on a laptop screen, sure. But do you also allow them to see how many of each item they have in storage elsewhere? And how much they could get for selling each item later? Think: what would we do on mobile? We would show only the UI needed for the sorting task the user’s attention is focused on. We might give them a filter-view icon to see the same items with different detail information. Mobile design keeps user actions chunked by context and is far more effective at task completion. Countless design patterns exist for small-screen devices and work well when applied to sequenced tasks in immersive environments.



Because letters. Text is still hard to read on the displays in current-gen hardware, so we use larger fonts. Next-gen hardware with improved displays will change this; however, hardware limitations on reading text or UI up close to the user still exist. Resolving the screen-door effect, a requirement for the long sessions of work-related VR, will likely need a combination of varifocal and high-density displays. Our eyes naturally rotate to bring items in view into correct stereoscopic focus at different distances, but since headsets have a fixed view plane, our natural focusing muscles work against the visual stimuli when focusing ‘up close’ in the headset, leading to eye strain and headaches. So for now, experiences designed using distant (2-meter-plus) interfaces, with fonts large enough to be readable at that distance, are best.
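
The "large fonts at 2 meters" rule can be made concrete with a little angular-size arithmetic (the 1.5-degree minimum used here is an illustrative assumption, not a standard):

```python
import math

def min_text_height(distance_m, min_angle_deg=1.5):
    """Physical height (in meters) a glyph must have to subtend at
    least `min_angle_deg` of visual angle at `distance_m` meters:
    h = 2 * d * tan(theta / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(min_angle_deg) / 2.0)
```

At 2 meters this works out to roughly 5 cm per line of text, and because the relationship is linear in distance, a panel twice as far away needs glyphs twice as tall, which is exactly the trade-off pushing current designs toward big, sparse interfaces.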


Foveated rendering is a rendering technique which concentrates resources in the area of the display where the user is looking. This allows for battery-friendly rendering with a noticeable visual improvement. Fixed versions of this concept have existed for some time and render the periphery of a user's vision at lower resolution.
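
A toy sketch of the concept (the fovea size, falloff range, and resolution floor below are illustrative numbers, not any headset's actual curve): resolution is a function of angular distance from the gaze point.

```python
def foveated_scale(eccentricity_deg, fovea_deg=10.0, floor=0.25):
    """Render-resolution multiplier by angular distance from the gaze
    point: full resolution inside the foveal region, then a linear
    falloff down to `floor` at 60 degrees of eccentricity."""
    if eccentricity_deg <= fovea_deg:
        return 1.0
    t = min(1.0, (eccentricity_deg - fovea_deg) / (60.0 - fovea_deg))
    return 1.0 - t * (1.0 - floor)
```

A fixed-foveation system simply assumes the gaze point is the center of the lens; an eye-tracked system feeds the real gaze direction in each frame.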


And don’t forget the other senses! Designers can rely on auditory stimuli and use 3D sound to compensate for a lower-fidelity visual system. This can be especially useful for providing immersion and directional cues outside the user’s FOV.

How to plan for the future (the potential)

Shared VR spaces have the potential to engage groups in a shared immersive experience. There can be a temptation to use the techniques we know from console or desktop and show high polygon counts and rich visual detail, and it’s totally beautiful! But visual detail, while it provides an initial reaction in our users, only gets us so far. Interaction is the true key to deeper immersion, where things react to your user’s focused attention or casual action. Building from a foundation of deep and subtle interaction means that as hardware generations get more powerful, you can add layers of additional visual detail to your game that will be both beautiful and performant.


All the steps mobile games use to get performance and battery life from a handset should be employed. Current engines will do many things for you out of the box: combined draw calls, backface & occlusion culling, dynamic level of detail (billboards), baked lighting, dynamic resizing of UI for consistent size at varying distances, and more!
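
Two of those techniques fit in a few lines each (the distance thresholds and the 2-meter reference are hypothetical tuning values): distance-based LOD selection, and scaling a world-space UI panel linearly with distance so its apparent (angular) size stays constant.

```python
def select_lod(distance_m, thresholds=(5.0, 15.0, 40.0)):
    """Pick a level of detail by camera distance: 0 = full mesh,
    rising through simpler meshes, with len(thresholds) meaning
    the flat billboard fallback."""
    for lod, limit in enumerate(thresholds):
        if distance_m < limit:
            return lod
    return len(thresholds)

def ui_scale_for_distance(distance_m, reference_m=2.0):
    """Scale factor that keeps a world-space UI panel at a constant
    apparent size: angular size falls off as 1/distance, so scaling
    linearly with distance cancels it out."""
    return distance_m / reference_m
```

Engines typically expose both as configuration rather than code, but seeing the math clarifies why a panel pushed twice as far away must be drawn twice as large.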


It’s up to us as designers to avoid surfaces with high color variance, which create visual hierarchy problems; to avoid translucency on current devices, due to material and layer compositing overhead and the subsequent hit to battery life; to avoid exhausting our users’ attention with too much simultaneous information or choice; and to create spaces with short lines of sight and multiple object occlusions, giving users new to headsets a perception of depth and distance.


In the past, people have sought to use the promise of VR to provide virtual spaces in a virtual office that embrace real-world workspaces and extend them. But in practice this likely results in an information display that overwhelms the user’s attention & recognition. Use VR for what it’s good for right now: targeted immersive experiences of limited session time, with minimal use of text, that employ known interaction patterns and recognizable gestures for input. Don’t try to make it a general-purpose laptop solution (yet).
