During Summer 2014, I had the chance to work on a Kinect v2 “Speech Bubbles” app, built upon a sample project from Kinect MVP Tom Kerkhove. In the initial version, I added a speech bubble that gets displayed above a person’s head when the person walks into view.
However, this created a problem when multiple people were in view: the app displayed the same text for every person who walked in, up to the six bodies the Kinect can track. So I updated the program to display a new random message every time a new person is detected.
I quickly discovered that this created yet another problem: all the displayed messages would get updated for everyone, even when only one new person entered the frame. Finally, I added an array of displayed messages (one for each detected body), so that each person gets their own randomized text that stays put while they remain in view.
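The actual app is C# against the Kinect v2 SDK, but the bookkeeping behind that final fix can be sketched language-agnostically. Here is a minimal Python sketch (names like `SpeechBubbles`, `MESSAGES`, and the per-slot `update` method are hypothetical, not from the original project): one message slot per body index, where a fresh random message is assigned only when a slot transitions from untracked to tracked.

```python
import random

MAX_BODIES = 6  # the Kinect v2 tracks up to 6 bodies
MESSAGES = ["Hello there!", "Nice to see you!", "Welcome to the demo!"]

class SpeechBubbles:
    """Hypothetical sketch of per-body message bookkeeping."""

    def __init__(self):
        # One slot per body index; None means no message assigned yet.
        self.messages = [None] * MAX_BODIES
        self.tracked = [False] * MAX_BODIES

    def update(self, bodies_tracked):
        """bodies_tracked: list of MAX_BODIES booleans from the body frame."""
        for i, is_tracked in enumerate(bodies_tracked):
            if is_tracked and not self.tracked[i]:
                # A new person appeared in this slot: randomize THIS slot only,
                # leaving everyone else's existing message untouched.
                self.messages[i] = random.choice(MESSAGES)
            elif not is_tracked:
                # Person left: free the slot for the next arrival.
                self.messages[i] = None
            self.tracked[i] = is_tracked
        return self.messages
```

The key point is that the random choice happens on the tracked/untracked transition rather than on every frame, which is what stops one new arrival from reshuffling everyone else's bubble.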
Now it was ready to test in the wild… so I used it at HackUMBC, a hackathon event at the University of Maryland, Baltimore County.
UPDATE, after July 2014 updates: This blog post was originally published using the alpha version of the Kinect v2 SDK. If you have the updated July 2014 SDK, make sure you start with the updated version of the reference project, written by Tom Kerkhove.
Ever since Minority Report was released in 2002, gesture-based computer control has been compared to the NUI features shown in the movie. Sure, the Nintendo Wii came out in 2006 with built-in motion control, but it still required the user to hold a controller in their hand.
Minority Report, from DreamWorks Pictures
Fast-forward to 2010, when the original Kinect was introduced as an Xbox 360 accessory. It brought gestures and voice control to a home console like never before. In 2012, Microsoft released Kinect for Windows, which allowed any hobbyist developer to build an app or game for Windows using a slightly modified version of the Kinect.