
Kinect from C# to HTML

Let's start at the very beginning. The Kinect came out and I learned that there was an open source project to make it accessible to the PC through C#.

At the time I knew very little about C# except that you use Microsoft Visual Studio to write and compile it. I knew enough about software development from past projects to modify a skeleton tracking demo and send its data to an HTML page. Let me back up.

Socket servers are not new. They've been around since the early days of the ARPA network. Sockets let two devices on the same network exchange messages. I think of them as "chat rooms for robots." If you're the one programming the robots, it's a useful tool for getting messages back and forth over a local or remote network.
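For a concrete picture, here's roughly what that robot chat looks like in NodeJS using the built-in net module: a tiny TCP server and client trading messages in one process. The port and message text are just illustrative.

```js
// Minimal sketch of a TCP socket exchange using Node's built-in "net" module.
const net = require('net');

// A server that logs whatever it receives and replies.
const server = net.createServer((socket) => {
  socket.on('data', (data) => {
    console.log('server received:', data.toString());
    socket.write('hello back');
  });
});

server.listen(8000, () => {
  // A client that connects, says hello, and hangs up after the reply.
  const client = net.connect(8000, 'localhost', () => {
    client.write('hello from client');
  });
  client.on('data', (data) => {
    console.log('client received:', data.toString());
    client.end();
    server.close();
  });
});
```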

NodeJS is the right-sized tool for my back-end server needs. I understand JavaScript and have published a wealth of open source libraries over the years that leverage its advantages. So my objective was to get skeleton position data from C# into a web browser.

The networking protocols for sockets in C# were incompatible with socket.io, the emerging favorite NodeJS library for implementing sockets for the web. They would, however, connect to raw TCP sockets via the net module, which is baked into NodeJS by default. So I built a NodeJS library that ran two socket servers simultaneously and implemented a messaging protocol so that socket clients could communicate regardless of how they connected.
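Here's a rough sketch of that pattern, not the original library's code: one process running a raw TCP server (for C#-style clients) and a socket.io server (for browsers), relaying JSON messages between the two. The ports, the "message" event name, and the newline-delimited framing are assumptions for illustration.

```js
// One process bridging raw TCP clients and socket.io (web) clients.
const net = require('net');
const http = require('http');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer);   // browser clients connect here
const tcpClients = new Set();        // C# / Arduino-style clients connect here

// Raw TCP server: native clients write newline-delimited JSON strings.
const tcpServer = net.createServer((socket) => {
  tcpClients.add(socket);
  socket.on('data', (chunk) => {
    // Forward each complete line to every connected browser.
    chunk.toString().split('\n').filter(Boolean).forEach((line) => {
      io.emit('message', JSON.parse(line));
    });
  });
  socket.on('close', () => tcpClients.delete(socket));
});

// The other direction: rebroadcast browser messages to all TCP clients.
io.on('connection', (browser) => {
  browser.on('message', (msg) => {
    const line = JSON.stringify(msg) + '\n';
    tcpClients.forEach((socket) => socket.write(line));
  });
});

tcpServer.listen(1337);   // assumed port for native/TCP clients
httpServer.listen(8080);  // assumed port for socket.io (web) clients
```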

I expanded this server to support ActionScript 3.0 and Arduino as well, making it a huge beast for enabling devices, technologies, and interactive projects to communicate with one another.

This began a journey into discovering the bounds of interactivity between humans and machines.

I explored connecting Instagram, Twitter, and Facebook APIs to Arduino to create event installations that triggered physical devices to perform tasks when social media mentions met predetermined thresholds.
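The core pattern was simple: poll an API, and when a count crosses a threshold, fire a command at the hardware. A hedged sketch, with fetchMentionCount() standing in for whichever social API was being polled, and the serial path, baud rate, and trigger byte all assumed for illustration:

```js
// Threshold-triggered hardware: poll a social API, write to an Arduino over serial.
const { SerialPort } = require('serialport'); // serialport v10+ style API

const arduino = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 9600 }); // assumed
const THRESHOLD = 100;
let triggered = false;

async function fetchMentionCount() {
  // Hypothetical placeholder: call the real Instagram/Twitter/Facebook API here.
  return 0;
}

setInterval(async () => {
  const count = await fetchMentionCount();
  if (count >= THRESHOLD && !triggered) {
    triggered = true;
    arduino.write('G'); // the Arduino sketch listens for 'G' and runs its routine
  }
}, 10000); // poll every 10 seconds
```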

I explored controlling smart devices based on where you were looking and pointing in a room in order to personalize online streaming production.

I explored algorithmic evaluation of positional data for a wide range of applications: analyzing posture, detecting body language, and even recognizing an action someone is performing (e.g. opening a door for someone else).
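Posture analysis, for example, can start as simply as computing the angle at a joint from three tracked points. A small sketch of that idea (the joint names and the 160-degree threshold are illustrative, not the actual analysis I ran):

```js
// Angle at joint b, given three 3D points a-b-c (e.g. hip, shoulder, head).
function angleAt(a, b, c) {
  // Vectors from the middle joint b toward a and c.
  const v1 = { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z };
  const v2 = { x: c.x - b.x, y: c.y - b.y, z: c.z - b.z };
  const dot = v1.x * v2.x + v1.y * v2.y + v1.z * v2.z;
  const len = (v) => Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return (Math.acos(dot / (len(v1) * len(v2))) * 180) / Math.PI;
}

// Example: a fairly straight back gives a spine angle near 180 degrees.
const hip = { x: 0, y: 0, z: 2 };
const shoulder = { x: 0, y: 0.5, z: 2.05 };
const head = { x: 0, y: 0.8, z: 2.1 };
const spineAngle = angleAt(hip, shoulder, head);
console.log(spineAngle < 160 ? 'slouching' : 'upright');
```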

At the time, the highest-resolution Kinect was the v1, and it wasn't until the company that was funding my research determined it didn't have promise that I came into possession of a Kinect v2. By then, however, my research had stalled.