The incredible power of Zero-UI technology
For better or worse, much design work is still visual. This makes sense: most of the essential products we interact with have screens.
Television first introduced us to screens in 1938. Ever since, our world has been flooded with computers, iPods, smartphones, tablets, and ever more types of screen-based devices.
Today, hardly a minute goes by without our interacting with a screen.
The Internet of Things, a term coined by Kevin Ashton in 1999, surrounds us with intelligent devices. In 2020, more than 10 billion devices were already connected to the Internet, and by 2025 this number is expected to double to 20 billion.
So, knowing that intelligent machines can hear our words, anticipate our needs, and sense our gestures, what does that mean for the future of design, especially as those screens go away?
Let’s discover together what this so-called Zero-UI stands for.
What is Zero-UI?
It isn’t a new idea. I bet you are already familiar with it.
Zero User Interface, or Zero-UI, is an increasingly popular concept first coined by designer Andy Goodman, formerly of Accenture Interactive.
Have you ever used an Amazon Echo, talked to your iPhone using Siri, or skipped a song by double-tapping your AirPods? Then you have already used a device that falls under this so-called Zero-UI concept!
It is about getting away from the touchscreen and interfacing with the devices around us more naturally. This includes different fields such as haptics, computer vision, voice control, and even artificial intelligence.
Why do we need this transition?
The goal is to allow more natural interactions than screen-based devices permit.
To understand the need for this transition, let's look at how we currently communicate with technology. Most of us interact with our devices daily through a Graphical User Interface (GUI).
A GUI allows users to interact with electronic devices through graphical icons and visual indicators: a display for computers, or a touchscreen for a phone or tablet. Users must therefore still rely on a mouse and keyboard, or on tapping and swiping, to transmit information.
If you look at the history of computing, starting with the Jacquard loom in 1801, humans have always had to interact with machines in abstract, complex ways.
— Andy Goodman.
Interfaces have come a long way from their humble origins, but they still fail to provide the best experience for the people who use them. We download endless apps and click through too many screens just to perform daily tasks.
Luckily, designers and developers are addressing this problem and bringing forth some exciting changes. Just as computers evolved from being driven by code in a terminal to having a friendly, intuitive graphical interface, the next natural step is to have no interface at all.
Today, machines still force us to come to them on their terms, speaking their language. The next step for electronic devices is to finally understand us on our terms, in our own natural words, behaviours, and gestures.
This is precisely where Zero-UI appears. It is aimed to allow more natural interactions when compared to screen-based devices. At the helm of this transition are both gesture-based and voice-recognition user interfaces.
According to Dharmik, the gaming world was one of the first to adopt gesture controls to provide a more natural user experience. The Nintendo Wii console, launched in 2006, shipped with gesture-based controllers, as did later models. You can watch the Wii's revolutionary launch advertising below!
Launch advertising for the Wii.
The world has fallen for the charm of this so-called Zero-UI, and that is unlikely to change.
Voice recognition is another common Zero-UI feature in our daily lives. During the 2000s, Google launched Google Voice Search, but it was not until Alexa's release in 2014 that voice recognition experienced a commercial explosion. Since then, more than 312 million Alexa devices have been sold, and Amazon is expected to surpass 320 million sold by 2025.
How will Zero-UI change design?
This technology already has a massive effect on society, and that effect will only grow.
According to Andy Goodman, Zero-UI represents a whole new dimension for designers. He compares the designer's leap from UI to Zero-UI to moving from designing for a couple of screen sizes to thinking about what a user is trying to do in any possible workflow.
Instead of relying on clicking, typing, and tapping, users will input information through voice, gestures, and touch. Interactions will move away from phones and computers and into physical devices. We will simply communicate instead.
The most important — and revolutionary — part of this concept is that it can be used for cities, homes, ecosystems, and personal devices.
Different Types of Zero-UI
There are several ways to communicate with current technology without relying on a visual screen, each of which can be used to achieve this desired Zero-UI.
Voice Recognition and Control
Voice recognition is a process in which a piece of software or a device identifies a human voice, understands an instruction, and performs the corresponding action. When a user asks a question or gives a command, the tool recognises and reacts to the query.
The best examples of voice recognition and control are Siri and Amazon Echo.
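At its core, this recognise-then-respond flow is a dispatch problem: once speech has been transcribed to text, the system matches the utterance to a known command and runs the matching action. A minimal sketch of that pattern follows; the command phrases and handler functions are invented for illustration and are not how Siri or Alexa actually work internally.

```python
# Sketch of the recognise-then-dispatch pattern behind voice assistants.
# The phrases and handlers below are hypothetical examples.

def play_music() -> str:
    return "Playing music"

def report_weather() -> str:
    return "It is sunny today"

# Map a known phrase to the action it should trigger.
COMMANDS = {
    "play music": play_music,
    "what's the weather": report_weather,
}

def handle_utterance(transcript: str) -> str:
    """Match an already speech-to-text'd utterance to a command."""
    text = transcript.lower().strip()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            return action()
    return "Sorry, I didn't catch that"
```

In a real assistant, the exact-phrase lookup would be replaced by a natural-language understanding model, but the shape of the loop (transcribe, interpret, act, respond) stays the same.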
Haptic Feedback
Haptic feedback provides users with vibration-based feedback. Although we are already used to it on our phones, it is an integral part of wearable products such as fitness trackers and smartwatches, where it delivers notifications. It is also an essential feature of modern game controllers: you can feel someone attacking you before you see it on screen.
In the near future, it will also be available in intelligent clothing.
Gesture-Based User Interface
It is one of the most natural ways of interacting. The gaming world was the first to adopt this concept. It lets users draw on motion and the properties of physical space rather than just button-based commands. The best examples of gesture-based user interfaces are Microsoft Kinect, the Nintendo Wii, and PlayStation Move.
Google’s Project Soli.
Google has also released a gesture-control product, Project Soli: a sensing technology that detects touchless gesture interaction using a miniature radar.
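However a gesture is sensed, whether by camera, controller, or radar, the software side reduces to classifying a stream of motion samples into a named gesture. Here is a deliberately tiny illustration that labels a horizontal swipe from a sequence of hand positions; the threshold and gesture names are invented for this sketch and are far simpler than the machine-learning models products like Soli use.

```python
# Toy gesture classifier: label a horizontal swipe from a sequence of
# hand x-positions, as a motion sensor might report them over time.
# The threshold value and gesture names are invented for this sketch.

def classify_swipe(x_positions: list[float], threshold: float = 0.5) -> str:
    """Return 'swipe_right', 'swipe_left', or 'none' from raw samples."""
    if len(x_positions) < 2:
        return "none"  # not enough samples to infer motion
    displacement = x_positions[-1] - x_positions[0]
    if displacement > threshold:
        return "swipe_right"
    if displacement < -threshold:
        return "swipe_left"
    return "none"
```

Real systems classify far richer signals (velocity, depth, micro-motions), but the principle is the same: continuous sensor data in, a discrete gesture label out, then an action bound to that label.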
Context Awareness
Contextually aware apps and devices give users a simpler physical and digital experience by anticipating their needs, eliminating additional layers of interaction.
AirPods are one of the best examples: they pause playback when removed from the ear. By building sensors or location data into a device, we can design next-generation contextual experiences that offer implicit rather than explicit interaction.
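That earbud behaviour can be modelled as a small state machine driven by a sensor rather than by a button. The sketch below is a hypothetical simplification of how such implicit interaction might be structured, not Apple's actual implementation.

```python
# Hypothetical sketch of context-aware, implicit interaction: an earbud's
# in-ear proximity sensor pauses or resumes playback with no explicit
# command from the user.

class Earbud:
    def __init__(self) -> None:
        self.playing = False
        self.in_ear = False

    def on_sensor(self, in_ear: bool) -> None:
        """React to a proximity-sensor reading instead of a button press."""
        if in_ear and not self.in_ear:
            self.playing = True   # earbud inserted: resume implicitly
        elif not in_ear and self.in_ear:
            self.playing = False  # earbud removed: pause implicitly
        self.in_ear = in_ear

bud = Earbud()
bud.on_sensor(True)   # user puts the earbud in, music starts
bud.on_sensor(False)  # user takes it out, music pauses
```

The design point is that the sensor reading itself is the interface: the user's natural behaviour replaces an explicit tap or click.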
These are some of the most common, and already working, ways to communicate with technology. The coming years will bring breakthrough devices with these capabilities and more.
Zero-UI will rely on Data and AI.
As we move away from screens, many of our interfaces will have to become more automatic, anticipatory, and predictive.
Whereas interface designers right now live in apps like InDesign and Adobe Illustrator, the non-linear design problems of Zero-UI will require vastly different tools and skill sets.
Designers will have to become experts in science, biology, and psychology to create these devices… stuff we don’t have to think about when screens constrain our designs.
— Andy Goodman.
One clear example would be designing a TV controller. Depending on who is standing in front of that TV, the gestures it needs to understand to do something as simple as turn up the volume might be radically different: a 40-year-old might twist an imaginary dial in mid-air, while a millennial might jerk their thumb up.
What’s after Zero-UI?
Zero-UI sits at the cutting edge of applied artificial intelligence.
Zero-UI is meant to give users more human-like interactions. Soon, Google Assistant, Siri, and Alexa will become memories of the tech world's past.
Looking to the future, the next big step will be for the concept of the ‘device’ to disappear.
— Sundar Pichai, Google CEO.