Saturday, 07 November 2009

Killing Keyboard And Mouse

Can anything beat our traditional tools? Many new ideas have tried to change the world, but it's only now that some actually have a chance.

Despite years of innovation, nothing currently beats the humble keyboard and mouse combination. It allows us to do everything from the household accounts to battling aliens without having to think about what we're doing. Maybe it's just familiarity, or maybe it really is the best possible interface the PC will ever have. Either way, any technology wanting to replace the classic double act has an uphill struggle ahead of it. But how did things get this way - and will the keyboard and mouse remain the status quo forever?

A brief history of UIs

From the 1960s until the 1980s, the way we worked with computers didn't really change: you'd enter commands with a keyboard, and you'd get a response in textual form either in print or on a screen. The breakthrough came when Douglas Engelbart invented the mouse in the mid-1960s, filing a patent for it in 1967 - although, as with many things, its importance wasn't necessarily obvious at the time. As recently as 1984, columnist John C. Dvorak wrote: "There is no evidence that people want to use these things."

He probably wishes he hadn't written that one. The WIMP interface - Windows, Icons, Menus and Pointers - became mainstream with the launch of the Apple Macintosh in 1984. The first version of Windows followed a year later, and by the release of Windows 3.0 in 1990, the mouse was a key part of most personal computer setups. Since then, the PC interface hasn't changed much. Windows Vista is certainly prettier than Windows 3.0, but the basics have remained the same. However, the traditional keyboard, mouse and monitor configuration isn't the only way to interact with a PC, and over the years there have been numerous attempts to replace it.

Until the late 1990s, when optical mice and laser systems started to surface, you didn't use a mouse for precision - you used a trackball. The likes of you and me used these to play Missile Command, but trackballs were also used in real military applications such as air traffic control and sonar tracking. The arrival of optical mouse tracking in the 1990s enabled mice to catch up, however, and trackballs are now a rare sight.

The same applies to 3D mice, which were first floated in the 1990s. These models resembled the offspring of a trackball, a knob and a joystick. While Logitech's 3Dconnexion still makes them, they're largely used for working with 3D CAD and modelling apps rather than common desktop programs.

The first recognisable graphics tablet - the Styalator - was developed in 1957, although it wasn't until the 1980s that graphics tablets were commonly used with PCs. They were - and are - particularly popular with illustrators and designers, who benefit from the combination of a paper-like drawing surface and pressure-sensitive pens. Pressure also made its way to monitors in the form of touch-sensitive screens, which you'll often find controlling point-of-sale PCs.

Touch input soon moved to personal digital assistants (PDAs) such as Apple's ill-fated Newton. Other firms did a better job, however: Palm's PalmPilot sold in huge quantities, and Microsoft brought touch input and handwriting recognition to Windows CE (now known as Windows Mobile). Bill Gates showed off a prototype Tablet PC in 2000, and tablet support has been built into Windows since XP. However, it wasn't until Apple shook things up in 2007 that touch interfaces became mainstream.

The magic touch

Steve Jobs apparently hates buttons - so when Apple made the iPhone, it was designed with a touchscreen instead of a traditional keyboard. The interface supports multiple simultaneous inputs, enabling users to zoom on photos by pinching and pulling or control applications by swiping a finger. The technology came via FingerWorks, a touch-input firm that Apple acquired in 2005 - a full year after Microsoft started development on its own multitouch system, Surface, which also debuted in 2007. However, while Apple's system fits in your pocket, Surface's table would barely fit in your front room.
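If you're curious about the maths involved, it's surprisingly modest. Here's a minimal sketch in Python - purely illustrative, and nothing like Apple's actual code - of the sum at the heart of pinch-to-zoom: the zoom factor is simply the ratio of the fingers' current separation to their starting separation.

```python
import math

def pinch_scale(start_a, start_b, now_a, now_b):
    """Zoom factor implied by a two-finger pinch.

    start_a/start_b are the (x, y) positions where the two touches
    began; now_a/now_b are their current positions. Spreading the
    fingers returns a factor > 1 (zoom in); pinching returns < 1.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    initial = dist(start_a, start_b)
    if initial == 0:  # both touches began on the same pixel
        return 1.0
    return dist(now_a, now_b) / initial

# Fingers start 100px apart and spread to 200px: the photo doubles.
print(pinch_scale((100, 100), (200, 100), (50, 100), (250, 100)))  # 2.0
```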

Microsoft and Apple are now investing heavily in multitouch input, which is particularly useful for photo applications, web browsing, mapping and other visual applications. Microsoft is building multitouch support into Windows 7, while Apple is one step ahead and has already added multitouch trackpads to its Pro laptops.

The trend is gaining momentum, too: Dell has added multitouch to its Latitude XT tablet, Asus has fitted a multitouch trackpad to its Eee PC 900, and according to analysts at iSuppli, the number of touchscreens in phones will jump dramatically over the next few years. In 2006, just 200,000 units with touchscreens were sold; by 2012, iSuppli predicts that the number will be closer to 21 million.

Devices are also starting to use gesture recognition, largely thanks to the success of Nintendo's Wii console. Both Apple's iPhone and Google's Android can use motion sensing for input, enabling applications ranging from spirit levels to games that refresh when you shake the device. The downside is that the response doesn't necessarily map directly to your actions, which is why Nintendo has had to release a whole new motion-sensing add-on for the Wii remote - MotionPlus.
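The spirit level is a good illustration of how simple the underlying sums can be: at rest, the accelerometer measures only gravity, so the device's tilt falls out of the direction of that vector. A rough Python sketch - the axis convention and function name are our own illustration, not any handset maker's API:

```python
import math

def tilt(ax, ay, az):
    """Pitch and roll (degrees) from accelerometer readings in g.

    Assumes the device is stationary, so the sensor measures only
    the 1g gravity vector; the axis convention is illustrative.
    """
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

print(tilt(0.0, 0.0, 1.0))      # flat on the table: (0.0, 0.0)
print(tilt(0.707, 0.0, 0.707))  # tipped roughly 45 degrees forward
```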

Command and control

The problem with multitouch and motion-sensing controllers is that you still need to touch your device. Wouldn't it be great if you could operate it from a distance, ideally without thinking about it? Speech recognition has been around for years, and it's built into Windows Vista. Bill Gates even claimed that it would be a standard way of interacting with our PCs within five years... back in 1999.

However, in practice, how many of us use it? Some do, no question - the writer of hit game Deus Ex dictated his whole screenplay instead of typing it up - but it's far from being a mainstream practice.

There are many reasons for this. Even though the software has been available since the '90s, early programs required extensive training and still ended up with a reputation for unreliability. As recently as 2006, a disastrous demo of Vista's speech recognition technology caused much amusement.

Speech recognition is used, but in very specific niches: annoying corporate phone systems, accessibility tools and the odd computer game, like the recent Tom Clancy's EndWar. It will likely never take off in the office, if only due to the amount of noise it would create. However, mobile technology raises some more exciting possibilities. Conducting a phone search via voice is quicker and easier than fighting with a fiddly keyboard, for example.
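To give a flavour of what that looks like on the software side, here's a rough Python sketch using the third-party SpeechRecognition package and Google's free web recogniser - one illustrative stack among many, and not what any particular phone actually runs:

```python
# pip install SpeechRecognition pyaudio
import speech_recognition as sr

recogniser = sr.Recognizer()
with sr.Microphone() as source:
    recogniser.adjust_for_ambient_noise(source)  # calibrate for room noise
    print("Say your search query...")
    audio = recogniser.listen(source)

# Hand the audio to a recognition service and use the text as a query.
query = recogniser.recognize_google(audio)
print("Searching for:", query)
```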

One input device whose potential is still largely untapped is the humble camera. It's already been used to recognise gestures for video games and as an input device for phone apps such as the Compare Everywhere application, which scans barcodes, looks them up online and finds related information such as prices or reviews. Still, it could do even more. Camera input could be used for multitouch-style gesture recognition or in applications that can recognise whatever you point your phone at and provide appropriate data. In Japan, Quick Response codes - QR codes - are very popular: they allow users to do things like scan a code and instantly visit the related website.
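The encoding side is trivial these days. As an illustration, the Python qrcode package - one of many third-party encoders - turns a URL into a scannable image in a couple of lines:

```python
# pip install qrcode[pil]
import qrcode

# Encode a URL as a QR code; point a phone camera at the printed
# result and the decoder recovers the URL and opens the site.
img = qrcode.make("https://example.com")
img.save("example-qr.png")
```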

Gesture recognition might arrive in another form. Microsoft's SideSight, a prototype mobile phone interface, keeps a beady eye on your fingers while you waggle them around on a desk or across a piece of paper. The firm demonstrated SideSight at October's User Interface Software and Technology conference, although production is still years away.

Augmented reality

Hollywood's been depicting virtual reality interfaces in films for years, but so far we've been rather resistant to the idea. One reason for that is that VR can make people sick: your brain says you're moving, your body says you aren't, and your stomach gets upset and ruins your day.

However, some of the elements are on their way - and the good news is that you won't need to wear a silly helmet or a gauntlet to make your PC work. According to Screen Digest, 10 per cent of TVs sold worldwide in 2011 could have 3D capability, with that proportion rising to 16 per cent in 2015. In the short term, getting 3D graphics out of them will require special glasses whose eyepieces flick on and off in sync with the screen. However, the technologies with the best long-term potential are autostereoscopic displays, which deliver reasonable 3D without glasses. If you don't mind the glasses, Nvidia will sell you everything you need right now.

Augmented reality is an evolved version of the original virtual reality concept: you see the world as it is, but with additional data overlaid. For example, you could see a business card when you look at a contact, or a rating for a product when you see it on the shelf. The National University of Singapore's Mixed Reality Lab went in a slightly different direction back in 2004, turning its campus into a game of Pac-Man, complete with floating 3D power-pills that were served up via GPS technology. However, the stack of technologies required for this concept to be more than a gimmick is likely to be prohibitive, at least until a particularly exciting application is developed for it.

Using your SixthSense

Wearable PCs have been around for some years now, but they're still too impractical for most of us. That might not always be the case, though: MIT Media Lab's SixthSense concept takes wearable computing in a new direction by using the entire world as its interface. The concept, developed by Pranav Mistry, is a long way from production, but some of the ideas are fascinating. Want to take a photo? Make a picture frame with your fingers and SixthSense will do the rest. Wondering if something's worth buying? Let SixthSense project user reviews onto the packet. Want to know the time of day? SixthSense will project a watch onto your wrist.

The technology isn't that complicated. At heart, SixthSense is a mobile computer connected to a pocket projector, a mirror and a camera. The projector takes care of visual output, while the camera tracks your movements and recognises the things you're looking at. Best of all, the current prototype only costs £180 to build.
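Mistry's prototype tracked coloured marker caps worn on the fingertips, and approximating that with off-the-shelf tools is straightforward. Here's a minimal OpenCV sketch in Python - the colour thresholds are illustrative guesses, not the project's actual values - that locates one red fingertip marker in a webcam frame:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)           # default webcam
ok, frame = cap.read()
if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Keep only strongly red pixels (thresholds are rough guesses).
    mask = cv2.inRange(hsv, np.array([0, 120, 70]),
                            np.array([10, 255, 255]))
    m = cv2.moments(mask)
    if m["m00"] > 0:                # any red marker visible?
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print(f"fingertip marker at ({cx}, {cy})")
cap.release()
```

Finding the marker's centroid frame after frame gives you a fingertip trail; recognising gestures from that trail is where the real work lies.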

Touching the future

The problem with future-gazing is that while we can usually see the benefits of a new technology, the disadvantages often have to wait until we have it in front of us. It's really only after repeated use that the glitches and drawbacks of a given system become apparent. Whether it's the motion sickness and headaches caused by virtual reality, the problem of scaling up speech recognition to accommodate a full office without deafening the employees in it, or even typing a long message on a multitouch system without the satisfyingly tactile click of a keyboard, there are often problems. The mouse and keyboard have survived in part because they're inoffensive and everybody is now used to them. They're so ingrained in our computing culture that they've become the benchmark - no matter how limited a QWERTY layout or two buttons might actually be.

To replace them, any new user interface either has to take on a field for which the keyboard and mouse are completely unsuitable - as drawing tablets did for artists and touchscreens did for smartphones - or be such an improvement that we forget how we ever lived without it. Maybe brain-controlled systems will do it, or even some brand-new interface layout that nobody's dreamed up yet.

There's always room for innovation. The problem is that only one in a million bright ideas, if that, has the potential to achieve true greatness after the shine of publicity wears off. So, sadly, we'll have to wait a little longer to find out which concept is good enough to see off the configuration that we've all got so used to.
