Introduction to Designing for the RealWear HMT-1 and HMT-1Z1


Getting Started: Designing for Hands-Free Users on RealWear with WearHF

Ready to design the future…welcome!

This article will help you get started designing for RealWear’s proprietary hands-free platform, known as WearHF. More detailed information on WearHF, along with resources for developers, is available on our developer site.

WearHF (HF stands for Hands-Free) is a custom interface layer, including voice and visuals, that sits on top of Android. As part of our support for WearHF, we are in the process of developing a design system loosely based on Google Material Design. It will allow designers and developers to easily create experiences for RealWear devices that retain as much consistency as possible with the experience on Android phones and tablets, while enabling the user to be completely hands-free.

As we work through the creation of this design system, I will continue to post articles with information and examples to assist in your experience design. This article is a first introduction to the system.

Physical Buttons

Unlike other wearable devices, the HMT is completely hands-free for navigation. Currently the only exception to this rule is the ability to change language and control the microphones using the action button on the side of the device. The action button is also programmable and accessible to you in your experience, but I recommend avoiding it wherever possible: too much dependency on the button makes the experience complex and creates user friction. And, well, if you have to press buttons you aren’t really hands-free, are you? This is why we reserve this interaction for extenuating circumstances like “I have no idea what the voice commands say because I don’t speak German. Please can I use English?” or “I’ve muted my microphones, so I can’t tell the device to do anything because it can’t hear me.”
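If you do decide your app needs to react to the action button, a minimal Kotlin sketch might look like the following. Note that ACTION_BUTTON_KEYCODE is a hypothetical placeholder of mine, not a documented constant; check the RealWear developer site for the actual key code your device reports.

```kotlin
import android.app.Activity
import android.view.KeyEvent

class InspectionActivity : Activity() {

    companion object {
        // Hypothetical placeholder: look up the real key code for the
        // HMT action button in the RealWear developer documentation.
        const val ACTION_BUTTON_KEYCODE = 500
    }

    override fun onKeyDown(keyCode: Int, event: KeyEvent): Boolean {
        if (keyCode == ACTION_BUTTON_KEYCODE) {
            // Reserve this for extenuating circumstances,
            // not routine navigation.
            showLanguageAndMicrophoneHelp()
            return true
        }
        return super.onKeyDown(keyCode, event)
    }

    private fun showLanguageAndMicrophoneHelp() {
        // Application-specific fallback UI goes here.
    }
}
```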

The Display

The display is 854×480 and is designed to sit just below the user’s line of sight for most use cases. However, there are instances where the display sits above the line of sight, depending on the user’s primary focus. For example, a repair technician working on a machine would likely position the display below their line of sight and glance down to access information, while a surgeon would choose to look up into the display because their primary focus is on the patient, which typically requires looking down.

Eye Dominance

Most humans are dominant in either their right or left eye. Since the HMT is monocular, only one eye is used to view the GUI. If a user were to view the interface with their non-dominant eye, they would have significant trouble seeing it and would likely come away with the impression that the device is not usable. (Note: there are some very unusual humans out there who are neither left- nor right-eye dominant. Clearly mutants.)

How do I figure out which eye is dominant?

I’m glad you asked! There is a nifty way of figuring this out which we have printed in our quick start guide.

1. Form a triangle with your hands placed together at arm’s length.
2. With both eyes open, focus on any distant object centered in the triangle.
3. Maintaining focus on the object centered in the triangle, close your right eye.
4. If the object is still in the triangle, you are left-eye dominant.
5. Maintaining focus on the object centered in the triangle, close your left eye.
6. If the object is still in the triangle, you are right-eye dominant.
7. If the object stays in the triangle with either eye closed, you are eye-dominance neutral.
8. Repeat the test to confirm.

The Viewable Area

Most users need a bit of time to get the display into a comfortable position. It is not uncommon for first-time users to close the opposite eye for the first several hours of use. If the display is positioned properly, the user should be able to see all four corners of the screen.

The Safe Area

Even though users should be able to see all four corners of the GUI, it is a good idea to keep any important information inset ~30 to 40px from the outer edges (at 854×480, a 40px inset on every edge leaves a 774×400 safe area). To take this approach even further, placing the most important information in the center of the GUI gives it the greatest chance of being seen by the user.
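To make the arithmetic concrete, here is a minimal Kotlin sketch of those numbers. The 40px inset follows the guidance above; it is a recommendation, not a platform constant.

```kotlin
// HMT display resolution and the recommended safe-area inset from above.
const val DISPLAY_WIDTH = 854
const val DISPLAY_HEIGHT = 480
const val SAFE_INSET = 40 // ~30-40px; using the conservative end

// The inset applies to every edge, so subtract it twice per axis.
val safeWidth = DISPLAY_WIDTH - 2 * SAFE_INSET   // 774
val safeHeight = DISPLAY_HEIGHT - 2 * SAFE_INSET // 400
```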

“Say What You See” (SWYS)

The HMT Say What You See concept provides an easy method for voice interaction. It means that a user can literally look at the screen and speak the words they see in order to control the device.

It is sometimes difficult to avoid visual clutter when there are too many commands; however, displaying the voice commands on the screen makes it very hard for a user to get lost, and it is exceptionally valuable for increasing user comfort, particularly on first use.
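To make SWYS concrete, here is a minimal Kotlin sketch, assuming (per the behavior described above) that WearHF registers a control’s visible label as its spoken command. The command name “Start Inspection” is purely illustrative.

```kotlin
import android.content.Context
import android.widget.Button

// SWYS in practice: the label the user sees is the phrase they say.
fun makeSwysButton(context: Context): Button =
    Button(context).apply {
        // WearHF picks up the visible text as the voice command,
        // so the label and the command are one and the same.
        text = "Start Inspection"
        setOnClickListener {
            // Fires whether the control is selected by voice or by touch.
        }
    }
```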

Font Size

While there is no ‘it has to be this size to work’ rule we can give, in general we feel the very minimum font size that should be used is 18px. Generally speaking, it’s better to go with 24px and upwards to play it safe.
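In Android terms, that guidance maps to something like the sketch below. One caveat worth noting: setTextSize() defaults to sp, so working in the display’s raw 854×480 pixels requires COMPLEX_UNIT_PX.

```kotlin
import android.util.TypedValue
import android.widget.TextView

// Apply the recommended minimums from above: an 18px floor,
// 24px and upwards to play it safe.
fun applyReadableTextSize(label: TextView, important: Boolean) {
    val sizePx = if (important) 24f else 18f
    label.setTextSize(TypedValue.COMPLEX_UNIT_PX, sizePx)
}
```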

WearML

This is a good time to bring up WearML. WearML is a proprietary markup language for RealWear devices that allows designers and developers to provide overlaid number or tooltip commands to supplement and/or replace on-screen voice commands as part of the SWYS interaction paradigm.

WearHF detects objects on the screen and creates an associated voice command, rendered as an item-number overlay, so users can select the item with our standard fallback selection approach of “Select Item [item number]”.
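As a sketch of how this fits together in code: WearML directives are embedded in a view’s contentDescription. The directive strings below are my recollection of the WearML syntax on RealWear’s developer site; treat them as assumptions and verify against the current WearML reference.

```kotlin
import android.view.View

// WearML directives ride along in contentDescription
// (directive names are assumptions; verify against the WearML docs).
fun applyWearMl(saveButton: View, decorativeIcon: View) {
    // Replace the on-screen label with an explicit spoken command.
    saveButton.contentDescription = "hf_override:SAVE REPORT"
    // Suppress the numbered "Select Item N" overlay on a
    // non-actionable, decorative view.
    decorativeIcon.contentDescription = "hf_no_number"
}
```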

Other GUI Considerations

There are some considerations and challenges when implementing a SWYS GUI. As mentioned above, the balance between providing voice commands and too much visual clutter requires significant evaluation and design sensitivity.

Localization in particular is a tricky area, since colloquialisms, character styles, text formats, and character counts vary across languages and cultures. A phrase or word that fits perfectly in one language might be completely inappropriate, or simply too long, in another.

Voice Commands

And here we are at the most important part: the voice commands. If you have used the HMT at all, you know that unlike many voice systems (Alexa, Google Assistant, Siri) we do not have a ‘wake-up word’. This means that you don’t have to say “Hey, HMT” or “Yo, RealWear” to get the device to hear you. Because of this, we need to be especially vigilant when selecting voice commands and ensure that they are at least three syllables. Sometimes a two-syllable voice command will register just fine, but it’s better to be safe than sorry, especially since our device is generally used while people are working, which makes accuracy and efficiency paramount to a successful experience.

There are a couple of other considerations when choosing voice commands. For obvious reasons, you don’t want to choose something that is difficult to say. You also want to be sure voice commands don’t conflict with each other, so the user doesn’t accidentally activate something they didn’t mean to. The sketch below shows one way to check for both issues at design time.
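Because these are design-time concerns, a small lint over your candidate command strings can catch short or colliding commands before they reach users. This sketch is my own heuristic, not part of WearHF: the syllable counter is a crude vowel-group approximation, and the conflict check simply flags commands that share a first word.

```kotlin
// Rough design-time lint for candidate voice commands. Use it to flag
// commands for human review, not as a final arbiter.
fun approxSyllables(word: String): Int =
    Regex("[aeiouy]+", RegexOption.IGNORE_CASE)
        .findAll(word)
        .count()
        .coerceAtLeast(1)

fun lintCommands(commands: List<String>) {
    for (cmd in commands) {
        val syllables = cmd.split(" ").sumOf { approxSyllables(it) }
        if (syllables < 3) {
            println("Too short (may misfire): \"$cmd\" (~$syllables syllables)")
        }
    }
    // Flag commands sharing a first word, since near-identical phrases
    // (e.g. "NAVIGATE HOME" vs "NAVIGATE BACK") are easy to confuse.
    commands.groupBy { it.substringBefore(' ') }
        .filterValues { it.size > 1 }
        .forEach { (prefix, clash) ->
            println("Possible conflict on \"$prefix\": $clash")
        }
}
```

Running lintCommands(listOf("Go", "Navigate Home", "Navigate Back")) would flag “Go” as too short and the two “Navigate” commands as a possible conflict.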

This concludes today’s lesson. Thank you for your participation and best wishes for success in your hands-free design endeavors!

This article was originally posted on Medium.
