One of the cool things about going to school is that there is always other stuff going on around you. Most of it is free, and the rest is really cheap. The other day I found out that Craig Mundie, Microsoft’s Chief Research and Strategy Officer, was coming to demo the “new ways that humans will be interacting with computers with a 3D demo.” I keep up with technology and some of the strides being made (and told myself that this had to be their new Kinect), so I decided that I had to be there.
After making the 15-minute trek from Duke Divinity to the Fuqua School of Business, I found my way (just in time, I might add) to the auditorium where he was presenting. If you aren’t a geek and don’t keep up with techie things, you’ll be interested to know that Microsoft Kinect (code-named Project Natal) is a new way to interact with computers through motions and gestures tracked by an incredible camera that not only recognizes your body, but can ignore unintended gestures as well. Think of it as the Wii without a controller.
The larger point that Craig made was not about the technology behind it (though it was pretty cool) but centered on why a device like this is needed.
When the original Macintosh was released, the breakthrough in the consumer market was the GUI, or graphical user interface. It was now possible to use a system of icons to convey a message, which made computers usable to the general public. You no longer needed to learn code in order to interact with the device. It was, in its truest form, revolutionary.
The point that Craig made was that the trend has moved from the GUI to the NUI, or natural user interface. Does this system still use graphics? Of course. However, there is no longer a mouse and a keyboard; you tell the machine what to do with voice commands and physical hand movements. In a way, it seems more…natural.
This occurred to me last night when Allie and I were out to dinner and observed a mother with several children. One child was in her lap, using her iPad to play a game. One was across the table with her iPhone, supposedly doing the same. The kids seemed, as far as I, the creep across the restaurant, could tell, to be able to entertain themselves VERY easily by just tapping on what they wanted to do. They saw the icon, they clicked it. If you think about it, it’s brilliant. They didn’t have to realize that the funny-shaped thing next to the computer moved a cursor. They didn’t have to find that cursor on the screen, move it around, and then search a menu for what they wanted. They found the icon, tapped on it, and were off. It’s like taking the graphical interface to the next level. It’s what Steve refers to as “magical” about the iPad.
This is the difference that Android and the iPhone have made in the mobile market. No longer did you have to scroll through menus with directional keys or navigate with a ton of buttons and scroll wheels. Oh wait, I guess Android still requires that. No longer did you have to worry about carrying a stylus with you wherever you go (or losing it).
Sure, the Microsoft Kinect is in many ways more advanced than the interactions on the iPad, but it is the same concept. Perhaps computers don’t have to be so complicated. It should be relatively easy to do whatever you need to do, as quickly as possible.
It’s not perfect yet. Apple hasn’t quite figured out how to make it easy to manage lots of applications while maintaining that simplicity. The new folders feature seems to help, but isn’t perfect. The Microsoft Kinect works well (from the few minutes that I got to play with it), but the gestures have to be large and intentional in order to be recognized, and consequently must often be repeated.
But. Imagine a world in the future where you walk up to the table at a restaurant and the menu is part of the table. You point to what you want and it expands to show you the options for preparation. That is already happening in many restaurants around the world with Microsoft’s Surface. Imagine never having to touch a cell phone while driving. Ever. And yet it can still be used to make calls hands-free and navigate. That is already happening in many cars.
It’s changing our world as we know it. It will be interesting to see how it changes in the future. This is our world.