In the past few decades, everything about our computers has changed. The screens. The guts. The size, weight, and materials. The software itself, of course. But one thing has stayed exactly the same, frozen in time from the early days: The tools we use to tell them what to do. So it's odd that we're so desperate to throw those tools out the window.
Early on, there were two competing ways for us to talk to our computers: the command line, and the graphical user interface, the system that gave us a screen that looked like a desktop and files that looked like little file folders, all navigated with a keyboard and mouse. The latter won out, and since then it has reigned as the primary way to communicate with a PC.
But over the past five years, usurpers have arrived: first touchscreens, then gestural interaction systems like Leap Motion. Yesterday, HP introduced us to Sprout, a computer that combines a touchscreen monitor, a RealSense 3D camera, a projector, and a flat touch mat into the ultimate Frankenstein of interaction methods. It also, like so many of its peers, kills the keyboard and mouse for good. Kind of.
Leave Our Mice Alone
Sprout is an interesting and odd $1,899 system, and it's being pitched as the ultimate no-keyboard, no-mouse tool for creativity: The overhead camera can take images of objects you put on the mat, for example, and the 20-point touch mat lets you manipulate those captured images between the mat and the screen.
But there are plenty of things it can't do, too. It can't 3D scan things, despite the RealSense camera. There's no Adobe support. And even though it's billed as keyboard-free, a projected keyboard is still a primary part of the interface.
Sprout is an interesting patchwork of systems that together suggest where computing is going; it is not a finished product from that future generation of devices. In other words, it's a stab into the intermodal-interaction darkness, a great experiment like Leap Motion, Meta, and so many other gesture-, touch-, and voice-based interfaces.
There's no denying touch is wonderful for specific uses, like movable screens that are always close to our bodies. And gestural interaction is equally awesome, especially for organic actions like panning through a virtual environment or exploring a 3D model, or for a surgeon who can't touch a physical screen mid-operation. The use cases are endless.
But they also fall short in plenty of scenarios. They're still imprecise. They still require the user to put more effort into carrying out a specific task than a mouse and keyboard do. In some cases, they give us too much flexibility and control over what ought to be a simple task, and a mouse or keyboard would work better. In others, they don't give us enough, and a Wacom tablet would work better. They're simply not a replacement for every interface we use today.
Thanks, Hollywood
So where does all this mouse- and keyboard-hate come from? It may have something to do with the way we imagine the future—and the way it's portrayed in movies and TV, too.
Think of Minority Report or Guardians of the Galaxy. Think of Iron Man, where Tony Stark pans and zooms around virtual spaces with a flick of his wrist. We're seduced by the effortless, immersive environments shown in these films; even Elon Musk is seduced by them. And as a result, the movies we watch dramatically influence the way we think technology should look and act, even though these are Potemkin interfaces.
But not all visual effects designers are human-computer interaction specialists. Jakob Nielsen, who definitely is, explains the value of keeping the good old mouse around in a post that lays out what each system can and cannot do. "There is no single winner," he writes. "Mice and fingers each have their strong points." That's why we should be designing the two systems to work in unison, not choosing one over the other.
"The fact that the mouse and touch input have so different strengths is one of the main reasons to design different user interfaces for desktop websites and for mobile sites," Nielsen adds.
What We Think Is Better Isn't Always Better
It turns out that humans aren't very good at predicting which actions will be fastest—we suck at telling the difference between an interface that looks faster and one that actually is faster.
In 1989, interaction design expert Bruce Tognazzini reported that users have a hard time discerning which form of interaction is really the quickest. Even though users thought typing keyboard commands was faster than using the mouse, the opposite proved true: thinking of a keyboard command requires what he calls "high level cognitive functioning," whereas using the mouse does not. "Users achieve a significant productivity increase with the mouse in spite of their subjective experience," Tognazzini reported.
So what we think will be a faster, more modern method of interaction doesn't always match what really is faster. To a certain extent, that explains why we're so eager to jump fully into the touch and gesture era, even though our fingers are less precise than mice. In the end, it makes more sense to imagine a future where multiple interaction routes coexist, not one in which a single input completely replaces all the others. Existing computers with Leap Motion built right in probably make more sense than a purely touch- and gesture-driven PC like Sprout.
Yet we just can't let go of that one version of the future, in which we all have Tony Stark's rakish confidence and spend the day conducting a digital symphony with our hands. It just looks like too much fun.
from Gizmodo http://gizmodo.com/why-everyone-wants-to-kill-the-mouse-and-keyboard-1652834936