Friday, June 18, 2010

finger tips

Imagine standing at a bar and seeing, projected right onto my arm, information about the three women sitting nearby: their names, their ages, whether they own a home or not, and if so, how much that home is worth.

I can then take my hand out of my pocket, gesture with a flick to signal a scroll, and thus sift through pages of more information about the women. Eventually, instead of reading all this on my arm, I could do it via a chip implanted into my head.

We’ve long dreamt of wielding ultimate power — and there’s no power like information so readily available as this: all of it literally at our finger tips, extractable from the air’s radio waves.

To be sure, while the above scenario is already nearly possible — something like it was demonstrated in February by MIT researchers at the TED conference in Long Beach, Calif. — it is still a long way from mainstream adoption. Carrying around cameras, projectors and phones that have to work together is still complicated — but it’s the sort of thing that is coming, even if it takes years.

Combined with “augmented reality” technology, which can instantly tell you other things about the bar — for example, which cocktails on the menu you could buy for the women — all of this will make things much more interesting.

In the immediate future, however, there’s plenty of territory for companies and researchers to conquer with more basic interactive technologies, based on finger touch and speech recognition.

Called “natural human interface” technology, it translates our gestures — touching a physical screen, or swiping our fingers across it, for example — into commands that let us be more productive than ever before. Touch is where we’ll see the clearest opportunities over the next two years, because even a small screen now has enough sensing channels to translate many different gestures. Other technologies — using infrared, motion-detecting sensors and gyroscopes to track the movement of your hands or fingers — will let you interact with screens without touching them at all. For now, such non-touch technologies are used mostly for games (think of the Nintendo Wii, or Microsoft’s Natal, which let you move a hand controller or simply your body to interact with a screen), but they are less interesting because they are still relatively imprecise. Most of the coolest action right now is in touch.
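To make the idea concrete, here’s a minimal sketch — in Python, with made-up thresholds, coordinates and command names that don’t reflect any vendor’s actual API — of how a gesture layer might turn a raw press-and-release into a command:

```python
import math

# Hypothetical thresholds: a finger that barely moves is a tap; one that
# travels far enough is a swipe. Real touch controllers are far more nuanced.
TAP_MAX_DISTANCE_PX = 10
SWIPE_MIN_DISTANCE_PX = 50

def classify_gesture(touch_down, touch_up):
    """Map an (x, y) press point and release point to a command name."""
    dx = touch_up[0] - touch_down[0]
    dy = touch_up[1] - touch_down[1]
    distance = math.hypot(dx, dy)

    if distance <= TAP_MAX_DISTANCE_PX:
        return "select"                      # tap -> select whatever is under the finger
    if distance >= SWIPE_MIN_DISTANCE_PX:
        if abs(dx) > abs(dy):                # mostly horizontal movement
            return "page_next" if dx < 0 else "page_previous"
        return "scroll_down" if dy < 0 else "scroll_up"   # screen y grows downward
    return "ignore"                          # ambiguous movement

# Example: a quick flick of the finger to the left pages forward.
print(classify_gesture((200, 300), (80, 310)))   # -> "page_next"
```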

That’s where recent developments by Apple (maker of the massively popular, touch-based iPhone) and Microsoft (maker of Surface, a way to interact with graphics on a tabletop screen) are leading the way. Apple’s iPhone has popularized the use of single- and double-finger taps, swipes and other touch commands to do things like zoom in and out. Synaptics, a Santa Clara, Calif., company supplying such touch technology to Apple and to other touch phones, such as Android devices, is generating ever more complex gesture features. Last month, Synaptics released a new screen technology boasting 48 sensing channels on an 8-inch screen that can accept up to 10-finger touch commands at a time — useful for things like multi-player games, for example. Apple’s advantage is that it owns its devices outright — and integrates such technology seamlessly into its software and hardware — letting the touch features function smoothly within its graphical user interface.
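For a sense of what a multi-finger gesture boils down to, here’s a rough sketch of the arithmetic behind a two-finger pinch-to-zoom. The coordinates and the simple distance ratio are illustrative assumptions, not Apple’s or Synaptics’ actual implementation:

```python
import math

def pinch_zoom_factor(start_points, end_points):
    """Return the zoom factor implied by two fingers moving apart or together."""
    def spread(p1, p2):
        # Distance between the two finger positions at a given moment.
        return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

    start_spread = spread(*start_points)
    end_spread = spread(*end_points)
    return end_spread / start_spread   # > 1.0 means zoom in, < 1.0 means zoom out

# Made-up example: fingers start 100 px apart and end 250 px apart -> zoom in 2.5x.
zoom = pinch_zoom_factor(((100, 400), (200, 400)), ((50, 400), (300, 400)))
print(f"zoom factor: {zoom:.2f}")
```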

Other companies, such as Microsoft, are pushing things forward even when they don’t own the entire device and thus lack such native integration. Microsoft’s Media Room project, for example, will announce new features in October that let you interact with a TV using a touch-screen remote control, the company says. The remote will support standard swiping and tapping gestures, as well as speech commands. Microsoft is working with remote control makers such as Philips and Ruwido, and the remote will also be integrated with the offerings of content providers and set-top box makers.

So with all of this new touch-based technology coming out, we have plenty to do before we need to graduate to the chip in the head. “The tip of the iceberg” is what we’ve seen so far, says Andrew Hsu, product marketing director and strategist at Synaptics. Only recently have device makers experimented with making touch screens bigger, so that multiple people can control them at the same time.

Synaptics is also developing ways to move beyond touch, including sensing when a finger is merely close to the screen, using its existing capacitive technology. (Other products, such as HP’s TouchSmart, do something similar using infrared sensors. Correction: An earlier version of this piece suggested that Synaptics is working on using infrared; it is not.) These efforts could let you control a device simply by waving your finger or hand above the surface. Synaptics is also developing technology that takes things in the other direction: detecting when you’re pressing your finger down on the screen harder than a simple touch. Technology is also being built to detect how you’re holding the phone. This will improve on existing accelerometer technology, making it more sensitive (the iPhone’s switch to and from “landscape mode” works, but not that well), and will let you issue additional commands by gripping the device in certain ways. Finally, for safety reasons (to avoid car accidents), Synaptics and others are looking for ways to let users control a device more easily with a single hand, or with feedback such as haptic technology (which has the screen push back slightly against your finger when you’ve pressed it in the right area) or speech recognition.
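As a rough illustration of the accelerometer piece, here’s a sketch of how a device might decide between portrait and landscape from the direction gravity is pulling. The axis conventions, signs and threshold are assumptions for illustration, not any phone maker’s actual logic:

```python
# Acceleration values are in units of g along the screen's x and y axes.
GRAVITY_THRESHOLD = 0.6   # fraction of 1 g required before committing to an orientation

def orientation_from_accelerometer(ax, ay):
    """Pick a screen orientation from which axis gravity dominates."""
    if abs(ay) > abs(ax) and abs(ay) > GRAVITY_THRESHOLD:
        return "portrait" if ay < 0 else "portrait_upside_down"
    if abs(ax) > GRAVITY_THRESHOLD:
        return "landscape_left" if ax < 0 else "landscape_right"
    return "flat_or_ambiguous"   # phone lying on a table, or caught mid-rotation

print(orientation_from_accelerometer(0.02, -0.98))  # held upright -> "portrait"
print(orientation_from_accelerometer(-0.95, 0.05))  # turned sideways -> "landscape_left"
```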

Waving my hand in the air to solicit immediate information about three women at the bar may be interesting, but it’s a long way off. Right now, look forward to all the cool stuff that’s about to happen on phones and other interactive displays.
