Blog post
Written: June 28, 2018
Author: Robert Currie

Human-Computer Interaction: touching the untouchable


Keyboard and Mouse

For decades we have built a long and productive relationship with technology, from personal computers to heavy industrial robotics. Yet the way we interact with that technology has barely changed: the old and faithful mouse and keyboard remain the tried and true means of engaging with the digital world, and they will continue to dominate as the point of interaction for a long time to come. With the advance of augmented and virtual reality, however, hardware developers have been rethinking how we can interact with the digital world. In this blog I will explore some of the technologies, both current and on the horizon, that promise a more natural way to interact with it.

Original control from an existing source

For many who step into augmented or virtual reality, the key requirement is to interact as naturally as possible with the virtual objects around them. Originally this was done with an existing game console controller connected to the computer, but that arrangement was quickly replaced by dedicated motion controllers.

Vive and Oculus motion controllers.

These motion controllers were designed to represent each hand within the virtual space, each tracked individually using the same external sensors that track the headset. Triggers under the fingers could be squeezed to act as a gripping action, and additional buttons, and combinations of buttons, allowed for further actions, all in a form far more ergonomically suited to a fully digital environment. Another advantage of external tracking was that the controllers remained tracked even when the user's hands were outside their own field of vision.
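
To make this concrete, here is a minimal sketch of how a trigger value might be mapped to a grip action, assuming the runtime reports an analogue trigger between 0.0 and 1.0 and a controller position in tracking space. All names, thresholds and the scene query are illustrative assumptions, not any specific vendor's API.

```python
from dataclasses import dataclass
import math

GRIP_THRESHOLD = 0.8  # assumed: trigger reported as 0.0 (open) to 1.0 (fully pressed)
GRAB_RADIUS = 0.1     # assumed: metres within which an object can be gripped

@dataclass
class ControllerState:
    trigger: float    # analogue trigger value, 0.0-1.0
    position: tuple   # (x, y, z) of the controller in tracking space

def nearest_object(position, scene):
    """Return the closest scene object within GRAB_RADIUS, or None."""
    best, best_dist = None, GRAB_RADIUS
    for name, pos in scene.items():
        dist = math.dist(position, pos)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

def update_grip(state, held, scene):
    """Grip a nearby object while the trigger is squeezed; release it otherwise."""
    if state.trigger >= GRIP_THRESHOLD:
        return held if held else nearest_object(state.position, scene)
    return None

scene = {"cube": (0.0, 1.0, -0.5)}
state = ControllerState(trigger=0.95, position=(0.02, 1.01, -0.48))
print(update_grip(state, None, scene))  # -> "cube"
```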

Hand gesture-based interaction

The next step for most was to find a way to affect a digital object or environment without any controllers at all. This was largely driven by advances in augmented reality and its mixing of digital and real-world objects. Technology such as the Microsoft HoloLens, a completely standalone system, required the user to manipulate digital objects without relying on physical input devices (largely because these would kill its mobility factor). With optical and infrared cameras built into the headset (based on Microsoft's Kinect technology), the HoloLens recognises a small set of hand gestures programmed into the hardware, acting as a left click and a menu/go-back function. Although limited to whatever gestures the device has been programmed with, this allowed people, for the first time, to ditch physical input hardware entirely when interacting with a digital object.
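
As a rough illustration of this model, the sketch below routes named gesture events to application actions, on the assumption that the headset itself performs the recognition and hands the application an event name. The strings "air_tap" and "bloom" stand in for the real event types the HoloLens APIs expose.

```python
def on_select(target):
    """Plays the role of a left click on whatever the user is gazing at."""
    print(f"select (air tap) on {target}")

def on_bloom(target):
    """Plays the role of the menu / go-back function."""
    print("bloom: open system menu / go back")

# Assumed event names for illustration; the fixed gesture set is the point:
# the application can only react to gestures the device already recognises.
GESTURE_HANDLERS = {
    "air_tap": on_select,
    "bloom": on_bloom,
}

def dispatch(gesture, gaze_target=None):
    """Route a recognised gesture to its handler, ignoring unknown gestures."""
    handler = GESTURE_HANDLERS.get(gesture)
    if handler:
        handler(gaze_target)

dispatch("air_tap", gaze_target="hologram_1")
dispatch("bloom")
```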

Full-Scale Hand Tracking

Microsoft built the HoloLens from the ground up as a multipurpose device, so it was constrained by the processing and storage capabilities of mobile technology, and its interaction abilities were limited accordingly. However, this proof that cameras could follow the motion of hands paved the way for other hardware innovators to build more polished, feature-rich products. Step in Leap Motion with their full-scale hand tracking device. It relies solely on infrared capture across a wide tracking angle, combined with new software (again influenced by the Kinect technology), to track every joint of a person's hand individually and accurately as it moves. With each joint tracked, developers could translate the data into a digital hand, one that can grip and interact with virtual objects in the most realistic and human way possible: by using what we have evolved to use in the real world.
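
As a hedged sketch of what per-joint data enables: given fingertip positions in 3D (which hand-tracking APIs such as Leap Motion's do expose), one can derive a pinch test and a crude grab strength. The distances and thresholds below are assumptions for illustration, not the library's actual values.

```python
import math

PINCH_DISTANCE = 0.03  # assumed: metres between thumb and index tip to count as a pinch

def is_pinching(fingertips):
    """fingertips: dict mapping finger name to an (x, y, z) tip position."""
    return math.dist(fingertips["thumb"], fingertips["index"]) < PINCH_DISTANCE

def grab_strength(fingertips, palm):
    """Crude 0-1 'fist' measure: average fingertip distance to the palm centre."""
    open_hand = 0.10  # assumed average tip-to-palm distance for an open hand (m)
    mean = sum(math.dist(tip, palm) for tip in fingertips.values()) / len(fingertips)
    return max(0.0, min(1.0, 1.0 - mean / open_hand))

tips = {"thumb": (0.01, 0.0, 0.0), "index": (0.02, 0.01, 0.0),
        "middle": (0.03, 0.02, 0.0), "ring": (0.04, 0.02, 0.0),
        "pinky": (0.05, 0.01, 0.0)}
print(is_pinching(tips), round(grab_strength(tips, (0.0, -0.05, 0.0)), 2))
```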

Future interaction: Vive Knuckles

Valve has taken the standard motion controller, redesigned it, and brought it up to date with current technology. The Knuckles are designed to act like a standard motion controller, ergonomically shaped for interacting with the virtual world, but with the added benefit of tracking each finger on each hand individually. Force sensors (pressure) and capacitive sensors (touch) tell the computer how you are gripping, squeezing or pinching objects, allowing more precise grips and more complex ways of holding them.
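
The sketch below shows one plausible way such per-finger readings could be combined into coarse grip states. The field names and thresholds are assumptions for illustration, not the actual Knuckles API.

```python
from dataclasses import dataclass

@dataclass
class FingerSensor:
    touching: bool  # capacitive sensor: is the finger resting on the controller?
    force: float    # force sensor: squeeze pressure, normalised 0.0-1.0

def classify_grip(fingers):
    """fingers: dict of finger name -> FingerSensor. Returns a coarse grip label."""
    curled = [name for name, s in fingers.items() if s.touching]
    squeezing = [name for name, s in fingers.items() if s.force > 0.5]
    if {"thumb", "index"} <= set(curled) and len(curled) == 2:
        return "pinch"   # only thumb and index closed
    if len(squeezing) >= 4:
        return "squeeze"  # most fingers pressing hard
    if len(curled) >= 4:
        return "hold"     # most fingers resting, little pressure
    return "open"

fingers = {name: FingerSensor(touching=True, force=0.7)
           for name in ("thumb", "index", "middle", "ring", "pinky")}
print(classify_grip(fingers))  # -> "squeeze"
```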

Future interaction: BCON Wearable, Programmable Tracker

Designed to be worn on a user's foot or feet (but also wearable on the head), the BCON is a fully programmable unit that detects the movement of the limb it is attached to and relays it as an assigned action, button, or key combination that would typically be entered on a keyboard. This lets users put foot movement to new uses, not only holding objects but performing more complex actions. The BCON was also created as an aid for people with disabilities such as quadriplegia: attached to the head, which the user can still move, it can relay key presses that would otherwise be impossible.
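
In spirit, the device is a configurable motion-to-key relay, which the sketch below models. The motion names and key map are hypothetical, and the real BCON is configured through its own software rather than code like this.

```python
# Assumed motion names and bindings, chosen purely for illustration.
KEY_MAP = {
    "tilt_forward": "w",    # e.g. walk forward in a game
    "tilt_back": "s",
    "tap_left": "a",
    "tap_right": "d",
    "double_tap": "space",  # combinations can map to any key or macro
}

def send_key(key):
    """Stand-in for an OS-level key event (e.g. via a virtual keyboard driver)."""
    print(f"key press: {key}")

def on_motion(motion):
    """Relay a recognised motion to its assigned key, if one is configured."""
    key = KEY_MAP.get(motion)
    if key:
        send_key(key)

on_motion("tilt_forward")  # -> key press: w
```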

Conclusion

This is only a fraction of what the current and future market has to offer; technologies such as haptics and full-body motion capture also play a large role in human-computer feedback and interaction. Even from this small snippet, though, it is clear that humans and technology are slowly becoming integrated in a more natural way. Without even discussing bio-mechanical prostheses (a topic for another time), it is evident that this integration builds on what we have evolved with, rather than forcing us to learn entirely new skills. It is safe to assume the mouse and keyboard are here to stay for a long time to come, but we now live on the edge of a new and exciting frontier when it comes to exploring our digital universe.