Even though microwave ovens are in nearly every home nowadays, they were not something many people in Vietnam had seen twenty years ago, especially older generations like my grandmother. It was the first time she knew such a thing existed. To her, the microwave was merely a metal box with a door, a cooking chamber, a blinking screen, and buttons with numbers and some foreign text on them. It took her a while, but she soon came to realize that her food could be reheated quickly with just a few presses of the buttons. That is what machines and technology have always been intended for: helping humans get things done faster and more easily.
TODAY’S HUMAN AND MACHINE INTERACTION
Humans have been interacting with machines for thousands of years. In the early years after the computer was invented, however, most human-machine interactions were designed to help a specific set of users achieve specific tasks, with little attention to the user experience (UX) or the user interface (UI). Back then, the main criteria for machines were efficiency, safety, and utility. It was not until the 1990s that UX was added to a machine’s usability criteria; it was at this time that users started to focus more on satisfaction, experience, and aesthetic appeal. Until microcomputers and sensors became more affordable, human and machine interaction centered on the WIMP (windows, icons, menus, point-and-click devices) paradigm. Nowadays, multi-touch screens, gesture-recognizing cameras, myoelectric bands, and tactile devices are among the technologies revolutionizing the way people interact with machines.
We are in an exciting era in which we have the opportunity to interact with advanced technologies every day. Smartphones with multi-touch and speech recognition capabilities are integrated into people’s everyday lives. Gaming consoles that can identify human body gestures and poses are getting more creative and popular. Wearable devices that can sense users’ health conditions and provide tactile feedback are readily available to consumers.
HUMAN AND MACHINE INTERACTIONS
Computer vision has come a long way: from cameras that could only record events or actions, to camera-equipped computers that can detect objects, track motion, recognize human poses, and even detect a finger’s gesture, all thanks to ultra-high-resolution depth cameras. These cameras sense different objects in the raw data by measuring how long it takes projected points of light to reflect back to the camera sensors. Making sense of faces, objects, or landmarks happens at the image- or video-processing level. A few example applications of computer vision technology are Microsoft's Kinect, self-parking cars, and SixthSense. SixthSense is an open-source project that lets people build their own SixthSense device (a prototype consists of a mirror, a camera, and a pocket projector) and add their own apps to the codebase. "'SixthSense' is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information."
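The round-trip timing idea behind these depth cameras can be sketched in a few lines. This is only an illustration of the underlying physics (light travels to the object and back, so distance is half the round trip), not any specific camera's API:

```python
# Minimal sketch of time-of-flight depth sensing (illustrative only).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_from_round_trip(round_trip_seconds):
    """Light covers the distance twice (out and back), so halve the trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A point whose reflected light returns after ~13.3 nanoseconds
# sits roughly 2 meters from the sensor.
print(round(depth_from_round_trip(13.34e-9), 2))
```

A real depth camera performs this measurement for thousands of projected points at once, producing a per-pixel depth map that object-detection and pose-recognition algorithms then consume.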
Speech recognition is similar to computer vision, except that it captures sound instead of images or video. With speech recognition technology, when we speak, our voice is converted from sound waves into an electrical signal, which is then processed by microprocessors. Sound recognition algorithms are applied to reduce background noise, identify the human voice, and recognize different words. This data can then be used to generate text, control devices, or communicate with other machines. A few examples of speech recognition technology are Apple's Siri for iOS, the Amazon Echo, and Nuance voice biometrics. Nuance is a technology company that provides server and embedded speech recognition software. What sets Nuance apart from its competition is its "end-pointing" method: the process of determining the beginning and the end of a stretch of speech. Nuance looks for a large change in volume within a specific set of frequencies, also known as voice energy. Nuance is rumored to have powered Siri on the iPhone.
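The voice-energy idea behind end-pointing can be illustrated with a toy sketch. This is not Nuance's actual algorithm (which is proprietary); it only shows the general principle: split the signal into frames, compute each frame's energy, and treat the span where energy rises well above the background level as speech. The frame size and threshold here are arbitrary illustrative values:

```python
# Toy energy-based end-pointing (illustrative, not Nuance's real method).

def frame_energies(samples, frame_size):
    """Sum of squared samples in each non-overlapping frame."""
    return [sum(s * s for s in samples[i:i + frame_size])
            for i in range(0, len(samples), frame_size)]

def find_endpoints(samples, frame_size=160, threshold=1.0):
    """Return (start_sample, end_sample) of the high-energy span, or None."""
    voiced = [i for i, e in enumerate(frame_energies(samples, frame_size))
              if e > threshold]
    if not voiced:
        return None  # no frame rose above the background level
    # Convert frame indices back to sample indices.
    return voiced[0] * frame_size, (voiced[-1] + 1) * frame_size

# Quiet background (low amplitude) surrounding a burst of "speech".
signal = [0.01] * 320 + [0.5] * 480 + [0.01] * 320
print(find_endpoints(signal))  # speech spans samples 320..800
```

A production system would also band-pass filter the signal first, so that only energy in typical voice frequencies counts, and would smooth the decision to avoid clipping quiet word endings.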
Wearable devices are receiving a lot of attention, especially as Fortune 500 companies continue to invest heavily in this market. A classic example of a wearable device is Google's Glass, which allows users to take pictures, shoot videos, or send messages, all via the glasses. Also gaining serious traction in the wearable arena are virtual and augmented reality (VR/AR) devices, which overlay real-world environments with computer-generated content such as sound, video, or graphics. Samsung, Google, Microsoft, and Facebook are each rolling out their respective consumer products: GearVR, Cardboard, HoloLens, and Oculus. Other examples gaining traction in the market include smartwatches like the Pebble, Fitbit, or the Apple Watch. Smartwatches are entering the wearable market because they give customers a faster alternative for checking messages, looking up directions, or tracking fitness goals than performing those actions on their phones. Myo by Thalmic Labs is an interesting wearable device that measures electrical activity in the forearm muscles and transmits gestures to a connected device. Similar to how Tom Cruise uses his hands to control technology in the movie Minority Report, Myo lets you control technology for a variety of purposes: business presentations, gaming, and education.
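The principle behind Myo-style sensing can be sketched simply: a stronger muscle contraction produces a larger electromyography (EMG) amplitude, so even a crude amplitude measure can separate a clenched fist from a resting hand. This is a hypothetical two-pose classifier for illustration only; Myo's real pipeline uses multiple electrodes and far more sophisticated classification, and the threshold below is an invented value:

```python
# Hedged sketch of EMG-based gesture sensing (not Thalmic Labs' algorithm).
import math

def rms(samples):
    """Root-mean-square amplitude: a rough measure of muscle activity."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def classify_pose(emg_samples, fist_threshold=0.3):
    """Hypothetical classifier: strong overall activity is read as a fist."""
    return "fist" if rms(emg_samples) > fist_threshold else "rest"

print(classify_pose([0.02, -0.03, 0.01, -0.02]))  # low activity -> rest
print(classify_pose([0.6, -0.5, 0.7, -0.4]))      # high activity -> fist
```

Distinguishing richer gestures (wave left, wave right, finger spread) requires comparing activity patterns across several electrodes around the forearm rather than a single overall amplitude.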
A WORLD OF SENSORS
With the advances in sensory technologies, scenarios that were once found only in Hollywood movies or science fiction books are now a reality. Imagine a day in the life with this technology: waking up in the morning to Echo’s weather and traffic updates; turning on the lights, setting the room temperature, or having coffee prepared, all with a gesture of your hand using Myo; riding Google’s self-driving car to work; using HoloLens to build hardware; asking Siri to have lunch delivered; recording a meeting with Google Glass; tracking your exercise with Fitbit; and so on. The possibilities for making our lives more efficient with technology are almost endless. We are living in an exciting age when human and machine interaction is no longer just about getting things done, but about doing so seamlessly while improving the quality of our lives.
To learn more about how we approach problem-solving for our customers, including our software development process, click the image below.