Visual Variables for visualizations in VR


Several guidelines rank visual variables by how accurately users can perceive them.
For example, Mackinlay's ranking states that position can be read more precisely than length or angle.
However, I have not yet found such a guideline that specifically targets visualizations inside a Virtual Reality environment (VRE).
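To make the idea of such a ranking concrete, here is a minimal sketch of how it could be applied when assigning channels to data attributes. The ranking list follows Mackinlay's effectiveness ordering for quantitative data; the helper function itself is a hypothetical illustration, not something from the paper.

```python
# Mackinlay's effectiveness ranking of visual variables for QUANTITATIVE
# data, most accurately perceived first. The helper below is a
# hypothetical sketch of how a designer might apply such a ranking.
MACKINLAY_QUANTITATIVE = [
    "position", "length", "angle", "slope", "area",
    "volume", "density", "color saturation", "color hue",
]

def best_available_channel(used, ranking=MACKINLAY_QUANTITATIVE):
    """Return the most effective channel not already assigned."""
    for channel in ranking:
        if channel not in used:
            return channel
    raise ValueError("all channels in the ranking are already in use")

# Example: position is already taken by the first attribute, so a second
# quantitative attribute falls back to the next channel in the ranking.
print(best_available_channel({"position"}))  # -> length
```

The open question in this thread is precisely whether the order of such a list should be different inside a VRE, especially for three-dimensional visualizations.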

While designing visualizations in a VRE, I often wonder whether it is reasonable to follow these guidelines.
I suspect they should hold for two-dimensional visualizations in a VRE, since those should be perceived much like two-dimensional visualizations on a monitor.
For three-dimensional visualizations, however, I think the order is likely to change, since a VRE introduces less projection distortion.
What is your opinion on this topic: would you expect these guidelines to hold true for visualizations in VR environments?


Hi Niklas, I was hoping someone else might offer a better answer than I can. After seven days have passed, I thought a not-so-good answer may be better than nothing.

The ordering of visual channels in 2D may need to be backed up by more empirical studies, and you are right that a different ordering may emerge in 3D VR/VEs. Mackinlay's paper uses accuracy as the ordering criterion, Bertin has four criteria, and many visual search studies from the 1960s-1980s (e.g., by Williams, Treisman, Quinlan, Humphreys, etc.) used preattentiveness (the pop-out effect) as a criterion, measured by reaction time. So the ordering is a multifaceted question.

In addition, there are many other visual channels that have not been studied. One paper listed more than 30 visual channels, so the question of ordering is further complicated.

Size-related visual channels have an "unfair" advantage, as their sizes are usually not constrained when considering their bandwidth or accuracy. So spatial bandwidth usage is also a cost criterion. Another question we have to ask is: if accurately reading numbers is a task requirement, why not display the numbers on or with the objects, since size-related channels use more spatial bandwidth anyway?