Ever since the first computers we have been using keyboards to give commands to the machine. OK, the very first computers were controlled with switches, but keyboards arrived rather quickly, and a computer without a keyboard is difficult to imagine. When graphical user interfaces made their appearance, something new came around the corner: Douglas Engelbart introduced the mouse to us. The good man died this year at the age of 88. Again, it is hard to imagine our computers without a mouse. Over the years different variants have been made: trackballs, pen mice, trackpads, etc. Nowadays they fit in the palm of your hand (Wedge mouse) or you can bend them open.
A disadvantage of a keyboard or mouse: you need space to use it. It also requires hand-eye coordination, which is quite difficult for some people. Everyone knows someone who acted really strangely the first time they used a mouse. Dutch Prime Minister Wim Kok showed that in 2007.
Microsoft was aware of the difficult hand-eye coordination early on. I think they were the first to build software, and hardware with partners, to create laptops that could be used with a pen (pen computing). The pen wasn't placed in some odd spot far away from the computer, but right on the screen, where the action happens. In the long run tablets came along to expand the concept and turn it into a real hype. Now a pen isn't really needed anymore. I am not sure whether the 'old' Surface (PixelSense) wasn't actually first. OK, it was too big and not really meant for consumers.
Another disadvantage of mouse, keyboard and pen/touch computing is that they are hard to use in places where hygiene is serious business. Touch screens cannot stand lubricants or other nasty fluids, mice and keyboards are bacteria collectors, and so on. Hospitals, where computer usage is still growing, have to deal with that.
For gaming a similar story can be told. Since the first game consoles (the Atari Pong console) there has been something like a controller. Of course the first controllers were rather simple, but today's still look much the same. New functions and buttons have been added, and one vendor uses different colors than the other, but real innovation seems quite limited.
The surprise came in 2010, when Microsoft introduced Kinect (Project Natal). This thing gave gaming a new dimension. With its three cameras and the accompanying software, games could be played in a more natural way. Not long after that, Kinect came to the PC. At Prodware we developed tools to use Kinect in combination with Dynamics software.
In the beginning the precision of the thing was a problem. Only a full person (skeleton) could be seen, and getting the details was difficult. Nowadays the SDK has been refined: a person can now be detected while sitting down, and the precision has improved. I don't think the end is near yet, now that the hardware can still be improved. The Xbox One with its new Kinect promises lots of nice new things.
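Just to give an idea of what that refinement looks like from code: with the Kinect for Windows SDK (the 1.x managed API), switching the skeleton stream to seated mode is only a couple of lines. The sketch below is a minimal, hedged illustration, it assumes a single connected sensor and just counts how many people are being tracked; it is not how our Prodware tools are built.

```csharp
using System;
using System.Linq;
using Microsoft.Kinect;

class SeatedKinectSample
{
    static void Main()
    {
        // Pick the first connected Kinect sensor.
        KinectSensor sensor = null;
        foreach (KinectSensor candidate in KinectSensor.KinectSensors)
        {
            if (candidate.Status == KinectStatus.Connected)
            {
                sensor = candidate;
                break;
            }
        }
        if (sensor == null) return;

        // Enable skeleton tracking and switch to seated mode (upper body only).
        sensor.SkeletonStream.Enable();
        sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;

        sensor.SkeletonFrameReady += (s, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;

                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                // How many people is the sensor actively tracking right now?
                int tracked = skeletons.Count(sk => sk.TrackingState == SkeletonTrackingState.Tracked);
                Console.WriteLine("Tracked skeletons: " + tracked);
            }
        };

        sensor.Start();
        Console.WriteLine("Press Enter to stop...");
        Console.ReadLine();
        sensor.Stop();
    }
}
```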
It isn't strange that people are looking for other ways to control computers. And solutions are being found, like controlling your computer or phone via voice, with your body, or just with touch.
The latest addition to the existing ones is the Leap Motion. This little device is placed in front of you and you move your hands above it. It contains several infrared sensors, which cover a rather large area above the Leap Motion. That area is not limited to the space directly above it, but also extends in front of and behind it. It can detect fingers up to a height of more than 20 centimeters.
A Visualizer is included as standard, with which you can test your Leap Motion. Here you can see how that looks.
There is a whole marketplace where different vendors of games, applications and tools display their products and make them available to consumers. And it isn't just a few poor titles or sloppy remakes: games like Cut the Rope are available, along with a lot of other fun stuff.
The existence of the store also means there is an SDK available to program the Leap Motion. And yes, also from C#.
You open a Listener and connect it to the Controller. Then a flow of data comes to you, data you can use to do great things.
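To give an impression, here is a minimal sketch along the lines of the official Leap Motion C# samples: you derive your own Listener, add it to a Controller, and OnFrame is called for every new frame in that flow. The names follow the v1 SDK as I know it, so treat it as an illustration rather than copy-paste code.

```csharp
using System;
using Leap;

// A minimal listener: the controller calls OnFrame for every new frame of tracking data.
class SampleListener : Listener
{
    public override void OnConnect(Controller controller)
    {
        Console.WriteLine("Leap Motion connected");
    }

    public override void OnFrame(Controller controller)
    {
        Frame frame = controller.Frame();
        Console.WriteLine("Frame " + frame.Id
            + ": hands=" + frame.Hands.Count
            + ", fingers=" + frame.Fingers.Count);
    }
}

class Program
{
    static void Main()
    {
        SampleListener listener = new SampleListener();
        Controller controller = new Controller();
        controller.AddListener(listener);   // start receiving the flow of frames

        Console.WriteLine("Press Enter to quit...");
        Console.ReadLine();

        controller.RemoveListener(listener);
        controller.Dispose();
    }
}
```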
From this flow of Frame data you can read hands, fingers and gestures. These can be connected to actions within your application, and you are good to go!
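And this is roughly how a gesture from that flow can be turned into an action. Gestures have to be enabled first; the 'next page' and 'previous page' actions below are of course just hypothetical placeholders for whatever your own application should do, and again the types and states are those of the v1 SDK.

```csharp
using System;
using Leap;

class GestureListener : Listener
{
    public override void OnConnect(Controller controller)
    {
        // Gestures are off by default; enable the ones you want to receive.
        controller.EnableGesture(Gesture.GestureType.TYPE_SWIPE);
    }

    public override void OnFrame(Controller controller)
    {
        Frame frame = controller.Frame();

        foreach (Gesture gesture in frame.Gestures())
        {
            if (gesture.Type == Gesture.GestureType.TYPE_SWIPE
                && gesture.State == Gesture.GestureState.STATE_STOP)
            {
                SwipeGesture swipe = new SwipeGesture(gesture);

                // Hypothetical application hook: map the swipe direction to an action.
                if (swipe.Direction.x > 0)
                    Console.WriteLine("Swipe right -> next page");
                else
                    Console.WriteLine("Swipe left  -> previous page");
            }
        }
    }
}
```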
This opens up a lot of opportunities. OK, the detail and quality could be a lot better, but I think a lot can be solved in the SDK and drivers, and the rest with newer, more sophisticated hardware. Either way, it is worth having a look at it now.
As Microsoft said during the introduction of Kinect for Xbox: you are the controller. You are the mouse, but be careful not to be replaced.