Madhu here! For this week's post I will discuss a video I came across on YouTube. I assume everyone would agree that this is a cool concept, but how did they implement it?
I will present my thoughts here, and it would be great to discuss alternative approaches with all of you. My hunch is that the booth is equipped either with a video recorder or some sort of motion detector.
The video recorder route can be relatively simple on a Raspberry Pi, which has enough processing power for the job. The background is a single known color, so a simple color threshold in RGB space (essentially chroma keying) can extract the person's silhouette accurately. Once you have a sequence of images in which you know where the person is, turning the appropriate LEDs ON and OFF at a given frame rate is not a hard problem. Implementing this in real time would be much more challenging, which makes me believe they went with my second guess – the motion detector.
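To make the idea concrete, here is a minimal sketch of that pipeline in Python with NumPy. Everything here is an assumption for illustration – the backdrop color, the tolerance, the LED grid size, and the synthetic test frame are all made up, not taken from the video:

```python
import numpy as np

# Hypothetical 240x320 RGB frame: a green backdrop with a darker
# rectangular blob standing in for the person.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[..., 1] = 200                      # green backdrop (0, 200, 0)
frame[80:200, 120:200] = (60, 50, 40)    # stand-in for the person

def person_mask(rgb, backdrop=(0, 200, 0), tol=60):
    """Chroma key: pixels far (in RGB distance) from the backdrop
    color are assumed to belong to the person."""
    dist = np.linalg.norm(rgb.astype(int) - np.array(backdrop), axis=-1)
    return dist > tol

def mask_to_leds(mask, rows=16, cols=24, fill=0.3):
    """Downsample the silhouette mask to the LED grid: an LED turns ON
    when enough of the pixels in its cell belong to the person."""
    h, w = mask.shape
    leds = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            cell = mask[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            leds[r, c] = cell.mean() > fill
    return leds

leds = mask_to_leds(person_mask(frame))  # boolean ON/OFF pattern per LED
```

Run per frame, the `leds` array is the ON/OFF pattern you would push out to the LED driver; the thresholds would obviously need tuning against the real booth's lighting.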
Motion sensors such as the Kinect can track body joints easily and would be ideal for this application, but their infrared depth sensing does not work effectively outdoors in sunlight (which is probably why they chose an indoor booth).
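Even without joint tracking, the simplest motion-detector variant is plain frame differencing. This is my own sketch of that idea, not what the booth necessarily does; the frame size, threshold, and grid dimensions are all assumed:

```python
import numpy as np

def motion_cells(prev, curr, rows=16, cols=24, thresh=25, fill=0.2):
    """Frame differencing: mark an LED cell ON when enough pixels in it
    changed between two consecutive grayscale frames."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    h, w = diff.shape
    cells = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            block = diff[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            cells[r, c] = block.mean() > fill
    return cells

# Synthetic pair of frames: a bright patch appears between them,
# simulating something moving in front of the camera.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:160, 140:200] = 255
moving = motion_cells(prev, curr)  # True where motion was detected
```

This only lights up where something moved, so it would need a little persistence logic (e.g. a decaying buffer) to hold a silhouette still, but it shows why the motion route is plausible in real time.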
For more inspiration, check out this video created by my colleague Bharath, which teaches MATLAB enthusiasts how to implement similar ideas in real time using a Kinect sensor.