Recently, I have been giving some thought to what the future of human-computer interfaces will be like. This article is a compilation of ideas that I have either read about or thought up within the past few weeks. I think Moore’s Law will continue to play a pivotal role in dictating the direction of interface design: smaller and faster circuitry, allowing ever more processing power, storage capacity, and design complexity, will give interface designers and manufacturers an increasing degree of flexibility.
Let us first consider the ways in which a (non-handicapped) human can physically interact with a computer. (I have not yet thought about how interface accessibility technologies will develop for handicapped users; no offense intended. I have also not yet researched brain-interfacing technologies.)
To provide input to a computer, a human can use touch, gestures, and speech. To receive output from a computer, a human can use sight, hearing, and touch. Note that the order in which I list these functions is important; this should become apparent as the article progresses.
The goal is to make human-computer interaction as efficient as possible – potentially, even more efficient than human-human interaction. I think ‘text’ is going to stay around for a long time. The average human (let us call her Jane) cannot always communicate complex ideas consistently using only speech and gestures on one end and hearing and sight on the other. Of course, the reverse is also true – text is not always the most efficient means of communication either. Therefore, the input and output of textual information needs to be as efficient as possible and balanced effectively with sensory interaction: communicating the maximum amount of information, as accurately as possible, in the least amount of time, using the minimum amount of energy.
The conventional keyboard is indeed a fantastic input interface for text. The keyboard allows Jane to input textual information in a quicker, cleaner, and more consistent way than handwriting or speech, mainly due to the amazing dexterity of her fingers. I suspect that no matter how advanced handwriting and speech recognition technologies become, the keyboard will still be the more efficient input interface. Any advancement in keyboard design will probably just be improved ergonomics. For example, touch-sensitive surfaces will probably replace the traditional key-press design. Someone might even invent a layout more ergonomic than the standard QWERTY layout on English keyboards.
The keyboard is also good at certain forms of non-textual input. For example, keyboard shortcuts let you perform common tasks without having to move your fingers away from the keyboard while typing. Therefore, as long as the keyboard remains in favor, keyboard shortcuts will tag along.
Even though textual input may be used to command and control a computer (the command-line interface), it is obviously impractical for Jane. A graphical user interface (GUI) has the potential to make life easy for Jane, but it might also do just the opposite. GUI design is still a nascent field, and we will continue to see out-of-the-box designs that improve human-computer interaction. The mouse (including the touchpad, pointing stick, and trackball) will become obsolete and give way to more efficient interfaces, like the touchscreen (already available on phones and tablets). In this context, touch is still a much more powerful input method than gestures or speech; it allows Jane to interact more accurately with her computer.
Imagine trying to issue speech commands to a computer in an office environment. Even if speech recognition technology attains perfection, its impracticality far outweighs its benefits. Gestures are much more convenient and discreet, and will probably be a common input technique in the future. However, gestures are not as accurate as touch. Still, we might commonly use interfaces that are halfway between the two. For example, a projection keyboard (an existing technology) optically projects a keyboard layout onto a surface; finger movement within the projected area is captured and translated into key presses.
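As a toy illustration of how such a projection keyboard might map a detected fingertip to a key press (a hedged sketch only – the key pitch, layout, and the assumption that the camera hands us millimetre coordinates are all made up, not any real product's design):

```python
# Hypothetical sketch: mapping a fingertip position detected inside the
# projected area to a key. Assumes the camera system already reports the
# fingertip's (x, y) in millimetres from the projection's top-left corner;
# a real product would also need calibration and press/hover debouncing.

KEY_WIDTH_MM = 18.0   # assumed key pitch of the projected layout
KEY_HEIGHT_MM = 18.0

ROWS = [
    "qwertyuiop",
    "asdfghjkl",
    "zxcvbnm",
]

def key_at(x_mm, y_mm):
    """Return the key under the fingertip, or None if outside the layout."""
    row = int(y_mm // KEY_HEIGHT_MM)
    col = int(x_mm // KEY_WIDTH_MM)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None

print(key_at(0.0, 0.0))    # q  (top-left key)
print(key_at(19.0, 20.0))  # s  (second row, second key)
```

The interesting engineering is of course in the vision side – reliably telling a resting finger from a pressing one – which this grid lookup glosses over entirely.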
Of course, input only makes up one side of the equation. Of the five human senses (sight, hearing, touch, smell, taste), sight is the most powerful in terms of processing capacity. Therefore, visual output is the most efficient way a computer can communicate information to a human. Hearing is required for at least a basic multimedia experience and casual UI feedback. Touch, or tactile feedback, will probably be used for nothing more than minor sensory feedback enhancement, if at all, in mainstream products.
Today, a computer is not just a business machine; it is also an entertainment hub. Therefore, display technologies will continue to improve. With the recent introduction of an affordable pocket-sized multimedia projector, it is easy to conceive that solid displays will soon give way to projected displays. Following that, holographic (3D) projectors are going to take over from their 2D ancestors. Since non-surface displays will push input toward gestures, while touch remains the more accurate input method, at some point the two needs will demand a split interface for operating the computer. That is, multiple display interfaces – a touchscreen for operating the computer and a holographic projector for playing multimedia like videos, games, etc.
I have tried to imagine what a future device with such interfaces could look like. A few years back, I saw a couple of images that showed what looked like something straight out of a Bond movie: a set of pens that, together, formed the most portable computer imaginable. A quick Google search refreshed my memory: http://www.todaysgizmos.com/computer/pen-size-computers/. What my imagination came up with today takes this a step further. Of course, I feel silly trying to predict how long it will take technology to get to this point, but 10-15 years sounds plausible to me.
Imagine a device the size of today's smartphones that fits easily in your palm. It is a computer, cell phone, camera, and TV, all in one, running a full-fledged desktop operating system. It has a projector-camera pair on the front face – it projects the display onto your desk, and the camera picks up your finger movements on the display, so it works as if you were using a touchscreen. The display can extend into a virtual keyboard when you need to type. The computer recognizes hand gestures for use with applications, games, etc. On the back face, there is a holographic projector, along with stereo speakers on the side faces; together they create a completely immersive 3D environment and multimedia experience for movies and games. The top face is a touchscreen for minimal operation of the computer (for example, to use the phone) without having to use the projector. It connects seamlessly and wirelessly to local access points and cellular networks for unlimited Internet access. It can receive digital broadcast radio and TV from local stations and satellites. It is always location-aware, using GPS or other technology. Processing power and storage capacity are virtually unlimited for common usage scenarios. Battery lifetime is in days, if not weeks or months.
However far-fetched such technology might sound to us today, it is worth imagining. It is as if in 1990, someone described to me, in detail, a gadget of the future called the ‘iPhone’.

Interesting writeup there. It made me dream :). I have some comments of my own below.
About processing power: I do believe that under these circumstances, for a single core, Moore's Law won't hold. With electricity flowing through wires at such close proximity, temperatures within the processing core are fast approaching those of the sun. However, with developments in photonics this could change, because light is immensely fast.
I do believe that because of extremely fast data carrier technologies, a person's profile will move around with him. Software would be developed using proxies, which would give it really cool capabilities as to where it could be used. Maybe you would be able to use a mail app from your computer via a keyboard/pointing device, and from your car via sound (assuming there would still be cars around).
Human memory is more strongly linked to smell than to visual input. A lot of multimedia apps would cater to that by spraying chemicals into the air (this technology exists today; it's just too expensive for broad consumption). Wireless power transmission has already been developed. There would be no cables, just electricity pylons in big cities that dispense energy and bill each person appropriately depending on the usage of his portable devices.
More critical stuff, i.e. computer systems that help the functioning of the human body (pacemakers, drug injectors, etc.), would be powered internally by the body itself, through chemical reactions with substances abundant within the body.
Mice would be obsolete as pointing devices. Apache pilots aim their machine guns by having the computer track where their eyes are looking. Such stuff would be used a lot in the future.
The Dvorak keyboard is ergonomically friendlier than the QWERTY keyboard, but you're right, layouts could improve with time.
I believe that GUIs are boring because they lack depth of field. Just take a look around: it's refreshing to have your eyeballs focus on different objects; it's more fun. Some games already simulate depth of field on 2D display surfaces, e.g. Crysis. However, I feel that we might get better optic viewing devices that can manipulate light, making depth of field feel very natural. Maybe this technology would come out first in cinemas, or personal helmets maybe :). It'd definitely be exciting.
I tried so hard to keep it realistic, but just could not resist losing track near the end of the article ;)
The temperature at the surface of the Sun is around 5,500 C. Does it get THAT hot in there!? Anyway, from what I have read so far, photonics is making good progress. I think within the next 5 years we will see very basic photonics-based logic implemented.
Smell output technology has the potential to enhance the multimedia experience (walk in a garden, or a restaurant), but what about bad smells (walk through Resident Evil)?
Wireless powered robots were recently demoed by a team at Georgia Tech. I agree - ultimately, probably all devices will run on wireless power.
The idea about using the human body as a power source is fascinating. Already, a researcher at MIT has been able to develop a way to tap electricity from a tree to power his environment monitoring wireless ad-hoc sensor network.
Till now, I never bothered to look up Dvorak, even though I have come across the term several times. I will look it up now.
Sometimes, boring GUIs are more efficient than eye-candied ones. In my opinion, the more 3D-ish the GUI becomes, the trickier it becomes to get the input interfacing done right. The ultimate in visual immersion is probably 'surround holographics' (like surround sound). But I don't think that is ideal for GUI because GUIs are as much about input as they are about output.
Thanks for the comments, I-T.
Photon-based logic gates were around while I was still a student at GIK; a SPIE journal/magazine I went through featured them. Once you have a NAND gate, any logic is implementable, and really fast optical logic gates are used in high-tech optical equipment even today.
Now if we come down to the basic processor, there is not only the ALU (arithmetic logic unit) but also the registers. We'd need a method to store and retrieve light efficiently and quickly before any progress could be made on that front.
Some random googling turned up this article, but I'm not sure how fast this process is, or how mature, for that matter.
Moreover, one of the most important things in electronics is the clock (CLK). It depends on the piezoelectric effect of the quartz crystal. There are also substances that emit light at regular intervals; I don't remember the term, but it sounded similar.
And just for the readers: QWERTY was a keyboard layout designed to prevent typewriter jams.
The bad smells of Resident Evil might be bad for a gaming newbie, but for hormone-filled teenage game aficionados, it could be just the thing to enhance the thrill factor a little more. Plus, after the existence of necrophilia, anything's possible :P.
On a further note, I guess one of the best ways of changing the future, apart from getting out the old Gatling gun, is probably to write a novel. A lot of the stuff we have today, especially user interfaces, is influenced by science fiction.
I looked up Dvorak and I do want to give it a try. But I will certainly not swap my keys around; it is too much of a hassle, and laptop keyboards are delicate as it is. On the other hand, if I were using a projection keyboard, I would have no such excuse.
Very interesting, what you have mentioned about developments in photonics research. Maybe photonics scientists will come up with an alternative to the conventional clock-based architecture.
The one thing that never occurred to me is how computers would work to bridge gaps between people of different cultures. It might be reasonable to assume that one day, translation could become real-time, like the Babel fish from "The Hitchhiker's Guide to the Galaxy". Maybe we would have a hearing aid that could translate all communication we hear into English while preserving the speaker's accent. This would fall under human-computer-human interfaces.
Another question: will the de facto language for information interchange continue to be English? Korean typists can convey messages much faster than English typists, because Korean is designed in such a way that typists hit two keys on their keyboards at the same time, transmitting much more information per stroke. Maybe both Dvorak and QWERTY would be scrapped for something designed around a language which is itself designed for fast human-computer interaction.
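The throughput point can be made concrete with a back-of-the-envelope calculation (a rough illustration only – the key counts below are invented, and this says nothing about actual Korean layouts): a single press chosen from n keys carries log2(n) bits, while a chord of k simultaneous keys carries log2(C(n, k)) bits.

```python
import math

def bits_single(n):
    """Information in one press of one of n keys."""
    return math.log2(n)

def bits_chord(n, k):
    """Information in one chord of k distinct keys pressed together."""
    return math.log2(math.comb(n, k))

n = 30  # assumed number of usable keys
print(f"single press: {bits_single(n):.1f} bits")   # ~4.9 bits
print(f"2-key chord:  {bits_chord(n, 2):.1f} bits") # ~8.8 bits
```

So even a simple two-key chord nearly doubles the information per stroke, which is roughly the intuition behind stenotype machines as well.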
Science fiction is full of great ideas; however, it's also full of some stupid ones. In Star Trek there was a highly evolved race that used to build really great computers and spaceships. Their culture was so predominantly built around the premise of robot interaction that they started communicating with each other and with machines using high-pitched binary clicks. It's a limitation of computers today that they are limited to binary (because of the transient capacitive effects of transistors: they can be either charged or discharged). With developments in the manipulation of electron spin direction, one day we'll have a much higher base, which could be another direction of research that would yield really effective direct speedup.
Dude, now you're making me think about HCI as an interesting field for further study; I never gave it much thought before now.
Thinking beyond human-computer interfaces, and maybe alongside human-computer interaction, another interesting application might be the role of playing God. Don't get me wrong here: humans can be biased because of their culture, they can be unreliable, they have a short shelf life, and they eventually succumb to death and disease (although maybe that will change). I guess with better human-computer interaction, humans would trust computers, and computers would do the tough, dirty jobs, like administering justice. Eventually, trust in an entity that is open source could become so profound that opposing countries at war might trust a computer that has been verified and signed off by the scientists of the two countries to remain fair.
There was another episode of Star Trek where two civilizations had been at war for millions of years, yet remained culturally and structurally intact. They managed this because the war was fought by satellite-based laser cannons bombarding the other planet. Some wise people from both planets held a meeting under difficult circumstances; they said stopping the war was not an option, but neither was losing their culture and heritage. They reached a compromise: their armaments were monitored, a computer would simulate the attacks, and the people hit in the simulation would be tagged for silent execution. This may sound stupid at first, but then so is world politics.
I'm not saying things will go exactly down this path, but the idea of such a drastic paradigm shift, where computers would keep humans inside a sandbox as opposed to the other way around, is an interesting one.
You should look into haptic user interfaces, augmented reality, and touch interfaces.
The reason that the keyboard is so successful is the ability to type without looking at it. The same goes for the mouse. I do not think a touchscreen interface will ever replace a keyboard, unless the device is handheld.
The real challenge is to use technology where it is really applicable; going from 2D to 3D is not always a good idea.
I have worked with touchscreen interfaces and gestures, and all I could see was that people were using the technology because they could, not because they really needed it.
In Star Trek, they are always flying their ships with a keyboard and not a flight stick. That might be cool, but I find it highly improbable.
Umair,
I looked briefly into haptic technology. For applications like games and virtual reality, haptic interfaces are already being used. But unless they invent a sort of 'wireless haptic interface', it is going to be a hassle to use the technology, and it will not be very widely adopted for everyday use. Haptic feedback in game controllers and virtual reality gear is all good, but will I be bothered enough to wear a full-body suit every time I want to watch a movie?
The mouse is a brilliant input device, no doubt. You have a good point. The major advantage it has over touch interfaces is that it can translate very small movements into larger displacements on screen quite accurately, with absolute ease. If I want to click that 'x' button to close a window, all I need is a short jerk and a click. On the other hand, if I were using a touch interface, above a certain screen size it would become both slow and tiring. So, for smaller screens, say up to the size of tablets, touch is great. But for full-blown desktop use, touch is probably not the best input interface.
I-T mentioned eye tracking as a method of input for helicopter pilots. This is surprising, as I was of the opinion that eye tracking is very inaccurate (due to the inherent way in which the human eye darts around), and thus extremely unreliable for use in critical systems. I will read up on this too.
This makes me think about the topic that I purposely did not touch in the article: brain-computer interfacing. A year ago, I read that scientists were able to interface a rat's brain with a flight simulator. Not only was the brain able to generate signals to control the aircraft, it used the feedback from the simulator to actually 'learn' how to fly! http://www.cnn.com/2004/TECH/11/02/brain.dish/
I agree that the keyboard is here to stay. But I think the conventional mechanical key-press design is probably going to be replaced by touch- or gesture-based keyboards. The feedback we get from a key press is very important because otherwise the difference between resting a finger on a key and pressing it gets blurred. The Blackberry Storm, Blackberry's flagship touchscreen smartphone, uses a feedback-based touchscreen, which users say is way better than the competition for full QWERTY typing.
Quoting from: http://www.digitimes.com/news/a20081211VL200.html
...
The technology does not use a capacitive or resistive touch panel, and instead places cameras on the top corners of the display. The two cameras locate the user's finger position and transfers the mouse pointer to that location.
The technology also supports multi-touch gestures such as rotate, zoom-in and zoom-out.
The advantage of the technology compared to traditional touch panels is its cost-effectiveness, since the technology only requires two cameras.
...
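Out of curiosity, the geometry the quote describes can be sketched in a few lines (a hedged illustration only – it assumes each corner camera reports just the angle at which it sees the finger below the top edge, and the screen width and angles are invented, not from the article):

```python
import math

def locate_finger(width_mm, alpha_deg, beta_deg):
    """Triangulate the fingertip from two corner cameras.

    Cameras sit at the top-left (0, 0) and top-right (width_mm, 0)
    corners of the screen; each reports the angle, measured downward
    from the top edge, at which it sees the finger. Intersecting the
    two rays y = x*tan(alpha) and y = (width - x)*tan(beta) gives the
    finger's position.
    """
    ta = math.tan(math.radians(alpha_deg))
    tb = math.tan(math.radians(beta_deg))
    x = width_mm * tb / (ta + tb)
    y = x * ta
    return x, y

# A finger seen at 45 degrees from both corners sits on the centre line.
x, y = locate_finger(300.0, 45.0, 45.0)
print(round(x), round(y))  # 150 150
```

A real system would additionally calibrate the cameras and filter noisy angle estimates, but the cost argument in the quote is easy to believe: two cheap cameras replace an entire touch-sensitive panel.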
Nicely written, bud; I absolutely loved the ending:
"It is as if in 1990, someone described to me, in detail, a gadget of the future called the 'iPhone'"
PS. Your theory about "007 tech imagination" always coming true in time, despite being the butt of jokes in its own age, has been, for me, a quite plausible and admirable one.