As a big fan of Virtual Reality, I couldn't stand its laser interaction method. So I decided to explore a more intuitive interaction method - VR Hands.
Time: Oct - Dec, 2018
All current VR products use the laser interaction model: the controller emits a laser, leaving a dot on the target. The dot serves as a cursor to navigate, select and click.
Using a laser to interact with targets requires great effort to control the hands and wrists.
The cursor is inefficient in VR. Because the ratio between hand movement and cursor movement is large, it is hard to exert precise control on the cursor, which reduces efficiency.
The laser and the cursor are annoyingly shaky. The cause is humans' innate hand tremor (Elble and Koller, 1990).
“1 meter is a comfortable distance for menus and GUIs that users may focus on for extended periods of time.”
— Oculus website (2018)
Content in VR should be presented at least 1 meter away. But most people can only reach 60–80 cm in front of their eyes (Foley & Held, 1972), so lasers are needed to reach more distant content.
The Vergence-Accommodation Conflict (V-A Conflict) is an inherent problem of current VR display technology. It causes eye fatigue when the viewing distance is nearer than 1 meter, but it can be ignored when the viewing distance is beyond 1 meter.
That leads to 2 possible solutions:
I chose the second solution for the following reason:
Design should consider feasibility. But at the ideation phase, design should never be limited by technology. So I decided to pursue the best design first, and only then consider whether the solution is feasible.
Compromises might happen while realizing the solution, but because the experience bar is set high at the start, the overall experience will be better than if feasibility were the first concern.
There are several types of common interactions, such as scrolling, swiping, and zooming. But the most fundamental one is clicking. Once clicking is solved, other interactions won’t be too hard. So the design focuses primarily on clicking.
Breaking clicking down into smaller parts reveals three specific design challenges: controller, selection and manipulation.
In the real world, people can't click anything that's out of their hands' reach. But they have a similar behavior — pointing at faraway items with their fingers. So for "clicking" faraway objects, hands are the best controllers.
And it is possible to track hand and finger motion in VR using products like Leap Motion.
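As a rough illustration of how finger pointing could drive selection, here is a minimal sketch — all names, coordinates and the 1-meter panel depth are my own assumptions, not any tracker's real API. It casts a ray from the index-finger knuckle through the fingertip and intersects it with a flat UI panel in front of the user:

```python
import math

def finger_ray(knuckle, fingertip):
    """Normalized pointing direction from the index knuckle to the fingertip."""
    d = [t - k for t, k in zip(fingertip, knuckle)]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d]

def hit_on_panel(fingertip, direction, panel_z=1.0):
    """Intersect the pointing ray with a UI panel at depth panel_z meters
    (z axis pointing away from the user). Returns the (x, y) hit point
    on the panel, or None if the finger points away from it."""
    if direction[2] <= 0:  # not pointing toward the panel
        return None
    t = (panel_z - fingertip[2]) / direction[2]
    return (fingertip[0] + t * direction[0], fingertip[1] + t * direction[1])
```

A real implementation would read the joint positions from the hand tracker every frame and test the hit point against each control's bounds.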
All kinds of selection face the same tradeoff between intuition and accuracy: extreme intuition and extreme accuracy cannot coexist. Accuracy comes at the price of extra force and attention, which lowers intuition. The opposite is also true: to operate more intuitively, people have to sacrifice some accuracy.
The key is to strike a balance between accuracy and intuition according to the platform and the content.
Accuracy is the basis of selection, while intuition is what makes selection convenient and pleasurable. When designing VR interaction, we must focus on intuition first. Otherwise, only bad ideas such as laser interaction would be generated.
The key to improving selection intuition is to abandon the cursor. A cursor is always accompanied by correction, and correction means higher accuracy at the cost of lower intuition.
VR interaction does not require cursor-level accuracy. Most content and controls in VR are already big enough for direct finger clicks, just like mobile phone UIs.
Hands are not accurate enough for selection.
Using Eyes to Ensure Accuracy
Eyes are far more accurate than hands: their accuracy is limited only by the smallest thing they can see. So eyes are accurate enough for selection.
In the selection process, people stare at (eye-select) a target before they click it. Hand responses are about 90 msec slower than eye responses (Brueggemann, 2007).
Since no cursor hovers above the target to indicate the selection, other feedback is needed.
The perfect timing for the feedback is after the user's index finger has pointed at the target but before the final manipulation. This way the feedback isn't too aggressive, and users can stop it anytime they want by withdrawing the index finger.
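To make that timing concrete, the feedback logic can be sketched as a tiny state function. The input names here are hypothetical; a real system would feed in the eye tracker's gaze target and the hand tracker's pointing target each frame:

```python
def feedback_state(gaze_target, pointed_target, click_done):
    """Show feedback only once the finger points at the eye-selected target,
    and before the click completes. Withdrawing the finger (pointed_target
    becomes None) cancels the feedback at any time."""
    if gaze_target is None or pointed_target != gaze_target:
        return "none"
    if click_done:
        return "activated"
    return "highlight"
```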
It is a little tricky to determine whether the user has performed the manipulation, because there is no physical contact.
It is impossible to determine the manipulation from the moving distance of the fingertip alone, because people click in different ways.
Inspired by iPhone X, I decided to use the velocity pattern of the fingertip to recognize the click. Even though different clicks move different distances, they all share the same pattern of sudden velocity change.
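As a hedged sketch of this idea (the thresholds and frame rate below are placeholder assumptions, not measured values): compute the fingertip's per-frame speed, then flag a click when a fast push is immediately followed by an abrupt stop, regardless of how far the fingertip traveled.

```python
import math

def fingertip_speeds(positions, dt):
    """Per-frame speed (m/s) from a sequence of 3D fingertip positions
    sampled every dt seconds."""
    speeds = []
    for p, q in zip(positions, positions[1:]):
        dist = math.sqrt(sum((b - a) ** 2 for a, b in zip(p, q)))
        speeds.append(dist / dt)
    return speeds

def detect_click(speeds, push=0.5, stop=0.15):
    """Return the frame index where a fast push (>= push m/s) turns into an
    abrupt stop (<= stop m/s), or None if no such velocity change occurs."""
    for i in range(1, len(speeds)):
        if speeds[i - 1] >= push and speeds[i] <= stop:
            return i
    return None
```

Because only the shape of the velocity curve matters, a short tap and a long deliberate poke both trigger the same detector, which is the point of the approach.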
1. More psychological factors should be considered in the process.
How eyes locking onto the target would change people's behavior remains to be explored. When people realize that their eyes have already locked onto the target, they might not even bother to reach out their hands and point at it.
2. Conflicts between eyes and hands.
There are situations where the eyes and the hands might not point in the same direction. For example, in video games, players should be able to draw their swords without looking at them. This is possible thanks to proprioception: humans can sense the positions of their own bodies without looking at them. But since this situation is uncommon in daily VR use and is extremely complicated, I won't discuss it much here. I will explore the conflict in future work.
Elble, R. J. and W. C. Koller (1990). Tremor. Johns Hopkins University Press.
Oculus (2018). VR Best Practices. [website] Available at: https://developer.oculus.com/design/latest/concepts/bp-vision/
Foley, J. and R. Held (1972). "Visually directed pointing as a function of target distance, direction, and available cues." Perception & Psychophysics 12(3): 263-268.
Brueggemann, J. (2007). "The hand is NOT quicker than the eye." Journal of Vision 7(15): 54-54.