
Kinect GUI: Now “Gesture” User Interface

December 27, 2010

On the Monday following the Christmas holiday, I decided to work from home because a blizzard had buried my area. I spent the morning editing the UI for a new app my team is developing. Menus, on-screen instructions, and error and warning messages all need to meet certain requirements while also providing exactly what users need to operate the software.

After several hours of editing, I was ready for a break. My sons had given me Microsoft’s new Kinect sensor for the Xbox game system, along with a Zumba game. Since I take a weekly Zumba class, I decided to give it a shot. I stood in the center of my living room with the Kinect sensor placed on top of my TV, directly in front of me. The on-screen prompts were easy to follow: I only had to wave at the sensor for it to detect my presence. I did a forty-minute Zumba workout and discovered, to my delight, that I did not have to exaggerate my movements or perform any complicated hand gestures to control the system. I had a great deal of fun with it and forgot I was playing a game.

Until I decided to stop.

I discovered that stopping the game wasn’t so intuitive. There was no on-screen control for me to wave at. It took several minutes before my son remembered the gesture for Stop: I think it was holding my right hand down and my left hand up at a 45-degree angle. In other words, users must memorize the Stop gesture.
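
Out of curiosity, here is a minimal sketch of how a system might recognize a static pose like that from skeleton-tracking data. The joint names, coordinate convention, and angle tolerance are all invented for illustration; this is not the actual Kinect SDK.

```python
import math

# Minimal sketch of static-pose recognition from skeleton-tracking data.
# Joint names, the coordinate convention (y increases upward), and the
# angle tolerance are hypothetical, not the real Kinect SDK API.

def arm_angle(shoulder, hand):
    """Angle of the shoulder-to-hand vector, in degrees above horizontal."""
    dx = hand[0] - shoulder[0]
    dy = hand[1] - shoulder[1]
    return math.degrees(math.atan2(dy, abs(dx)))

def is_stop_pose(joints, tolerance=15.0):
    """True if the right arm points down and the left arm points up,
    each at roughly 45 degrees: the memorized Stop pose described above."""
    right = arm_angle(joints["right_shoulder"], joints["right_hand"])
    left = arm_angle(joints["left_shoulder"], joints["left_hand"])
    return abs(right + 45.0) <= tolerance and abs(left - 45.0) <= tolerance

# One frame of made-up joint data: right hand down, left hand up, ~45 degrees.
frame = {
    "right_shoulder": (0.3, 1.4), "right_hand": (0.6, 1.1),
    "left_shoulder": (-0.3, 1.4), "left_hand": (-0.6, 1.7),
}
print(is_stop_pose(frame))  # True
```

Presumably a real recognizer would also require the pose to hold for a moment, so that a passing motion doesn’t accidentally stop the game.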

Once I stopped the game and caught my breath, I started thinking. Have we, at long last, evolved past the window-and-mouse user interface and made the stuff of science fiction a reality? In Iron Man 2, Robert Downey Jr. brings his hands together in a single clap and then pulls them apart to manipulate a 3D view of a map. In author Jeff Somers’ Avery Cates series, most of the technology in Cates’ world is operated by gesture. The iPad, for example, employs a sort of pressure/gesture hybrid: a modified “tickle” gesture turns pages, and a two-finger pinch minimizes windows. Kinect employs a simple hand-wave gesture, first to control an on-screen cursor and then to play a game designed around natural body motion.
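
As an aside, the wave-to-cursor part is easy to picture in code: map a tracked hand position inside a physical interaction box to screen pixels. The box bounds and screen size below are made-up values, not Kinect’s actual calibration.

```python
def hand_to_cursor(hand_x, hand_y,
                   box=(-0.4, 0.4, 1.0, 1.8),  # meters: left, right, bottom, top
                   screen=(1280, 720)):        # pixels: width, height
    """Map a hand position in a physical box to on-screen cursor pixels."""
    left, right, bottom, top = box
    # Normalize each axis to 0..1 and clamp so the cursor stays on screen.
    nx = min(max((hand_x - left) / (right - left), 0.0), 1.0)
    ny = min(max((top - hand_y) / (top - bottom), 0.0), 1.0)  # screen y grows downward
    return int(nx * (screen[0] - 1)), int(ny * (screen[1] - 1))

print(hand_to_cursor(0.0, 1.4))  # center of the box: (639, 359)
```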

When software was anchored to a particular hard drive, it had to adhere to the standards established by the operating system. But as we reach for the cloud to lease software, those standards no longer apply. Applications can now evolve past windows and mouse-clicks in favor of more intuitive, gesture-based commands. The UI editing work I did this morning reminds me how important it is for the interface itself to provide useful information: while trying to stop my Zumba game, I looked for on-screen controls to wave at, but found none. I wonder how a Gesture User Interface like Kinect could be adapted for basic office use, such as word processing.
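
As a thought experiment, the word-processing case might start as nothing more than a dispatch table from recognized gestures to editor commands; every gesture name and command below is pure invention.

```python
# Speculative sketch: route recognized gestures to editor commands.
# The gesture names and commands are invented for illustration.
EDITOR_COMMANDS = {
    "swipe_left":  "next_page",
    "swipe_right": "previous_page",
    "pinch_out":   "zoom_in",
    "pinch_in":    "zoom_out",
    "stop_pose":   "save_and_close",
}

def dispatch(gesture):
    """Execute the editor command mapped to a gesture, ignoring noise."""
    command = EDITOR_COMMANDS.get(gesture)
    if command is not None:
        print("executing:", command)
    return command

dispatch("swipe_left")  # executing: next_page
dispatch("shrug")       # unrecognized motion: do nothing rather than guess
```

The hard part, of course, is the recognizer feeding that table, and, as my Stop-gesture hunt shows, making every command discoverable on screen.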

What do you think about GUI evolution?

