Hello, Element14 community.
I welcome you to my next blog post as part of the Experimenting with Gesture Sensors competition. This is the 13th blog post of my journey with gesture sensors. In previous blogs, I described my plans for this event, my first experiences with the software and hardware, the stand I built for my experiments, some troubleshooting techniques I used, my library, the first project I completed as part of the competition, and most recently my own gesture detection algorithm. For a summary, here are links to all my previous blogs:
- Blog #1: Introduction to my Maxim Integrated/ADI MAX25405 Gesture Sensor Experiments
- Blog #2: Unboxing and Powering MAX25405 Evaluation Kit
- Blog #3: Experimenting with MAX25405 EVKIT GUI Program
- Blog #4: Building stand for MAX25405 EVKIT
- Blog #5: Experimenting with MAX25405EVKIT UART API
- Blog #6: Connecting to MAX25405EVKIT using Web Browser and TypeScript
- Blog #7: Debugging Maxim’s Firmware Framework Crash
- Blog #8: 12V Accident
- Blog #9: Gesture Controlled Tetris
- Blog #10: C Library for Low-Level MAX25405 Control
- Blog #11: Undocumented features of MAX25405
- Blog #12: Time-driven swipe gesture detection library
In this blog, I will continue where the previous one left off and describe my second algorithm for detecting swipe gestures.
Previous Algorithm Failure Analysis
As you may know from my previous blog, my first algorithm worked, but not very well. I thought about what was wrong with it and identified two issues:
- My previous algorithm had a fixed-size buffer (50 items) containing data from the 50 most recent screens. For example, if a gesture started 10 screens ago, the algorithm processed 40 screens of noise and only 10 screens of the gesture. Those 40 screens of noise frequently led to a wrong gesture being reported. On top of that, there is an effect of misleading detections at the beginning and end of a gesture, when the user is moving their hand into and out of the field of view. So in this case, the algorithm detected a gesture based on 40 screens of noise and then processed highly misleading data in the remaining 10 screens (see the sketch after this list).
- In the previous blog post, I described the algorithm for computing a score for each direction. I designed that formula intuitively, without any rigorous justification, so I was of course not sure that it was correct or that it provided any valuable information.
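To make the first point concrete, here is a minimal sketch of the old fixed-window behaviour. It is not my actual library code; the type, names, and `classify_window()` stub are hypothetical and only illustrate the problem:

```c
#include <stddef.h>

typedef struct { int pixels[6][10]; } screen_t;   /* MAX25405: 6x10 pixel array */

#define WINDOW_SIZE 50

static screen_t window[WINDOW_SIZE];
static size_t   head = 0;

/* Stand-in for the direction classifier from blog #12. */
static void classify_window(const screen_t *w, size_t n) { (void)w; (void)n; }

/* Old approach (simplified): the ring buffer always holds the last 50
 * screens, and classification runs over all of them on every new screen.
 * If a swipe started only 10 screens ago, 40 of the 50 classified
 * screens are pre-gesture noise, which frequently dominates the result. */
void push_screen(const screen_t *s)
{
    window[head] = *s;
    head = (head + 1) % WINDOW_SIZE;
    classify_window(window, WINDOW_SIZE);
}
```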
I implemented my second algorithm from scratch and cleaned up and organized my code better than in the previous case. Still, many parts are very similar to the previous algorithm or work exactly the same.
Buffering Improvement
As I stated, always processing exactly 50 screens is simple to implement, but it is not a good idea. Instead, I implemented a dynamic array holding data from up to 50 screens. I did not change the digital filtering stage very much and deployed the same digital filter as in the previous case. Consequently, I process data which highlights fast transitions (= gestures) and is almost free from offsets, so it is easy to apply a threshold for deciding whether a gesture is or is not in progress. I add samples to the gesture-detection array if and only if a gesture is in progress; otherwise, my algorithm is idle. This is a significant benefit over the previous algorithm, which processed data permanently, including screens without any gesture, which resulted in spurious gesture detections (resolved there by filtering output glitches).

My second algorithm has no output glitch filter because it is designed so that it does not need one. Gestures are not detected continuously; instead, they are detected at the end of the gesture, which is easy to recognize because I can detect when I have stopped adding samples to the buffer for some period of time. This brings much higher accuracy, but it also has a disadvantage: latency. My previous algorithm provided information about the gesture faster; it often reported a gesture while it was still in progress and had not completed yet. The new algorithm does not have this benefit. It always waits until the gesture completes.
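The following is a minimal sketch of this gating idea, not my exact library code. The constants, names, and simplified screen type are hypothetical, and `filtered_activity` stands for the output of the digital filtering stage:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_SCREENS          50    /* buffer capacity, same limit as before      */
#define GESTURE_THRESHOLD    100   /* filtered activity level meaning "movement" */
#define IDLE_SCREENS_TO_END  5     /* this many quiet screens end the gesture    */

typedef struct { int pixels[6][10]; } screen_t;   /* MAX25405: 6x10 pixel array */

static screen_t buffer[MAX_SCREENS];
static size_t   count = 0;         /* dynamic fill level, not fixed at 50 */
static int      idle_screens = 0;

/* Called for every filtered screen. Samples are buffered only while a
 * gesture is in progress. Returns true once the gesture has ended and
 * buffer[0..count-1] is ready to be classified. */
bool feed_screen(const screen_t *s, int filtered_activity)
{
    if (filtered_activity >= GESTURE_THRESHOLD) {
        idle_screens = 0;
        if (count < MAX_SCREENS)
            buffer[count++] = *s;          /* collect only during the gesture */
        return false;
    }

    if (count == 0)
        return false;                      /* idle: nothing buffered, do nothing */

    if (++idle_screens >= IDLE_SCREENS_TO_END)
        return true;                       /* gesture ended: classify the buffer */

    return false;
}
```

When `feed_screen()` returns true, the caller classifies `buffer[0..count - 1]` and resets `count` and `idle_screens` to zero. Because nothing at all is buffered while the algorithm is idle, no output glitch filter is needed.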
Direction Score Computation Improvement
Besides the architectural changes in the algorithm's operation, I also changed the scoring of movements within a gesture. I removed the squaring of the distance. A step score is now calculated as the product of the distance and the intensities at the edge points of the step. The direction computation is the same, and the principle of incrementing the score of each direction and then selecting the maximum works exactly as in the previous algorithm.
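Here is a minimal sketch of this scoring, again with hypothetical names and a simplified point type. I assume the points are the centers of mass collected in the buffer and that y grows downward in pixel coordinates:

```c
#include <math.h>
#include <stddef.h>

typedef struct { double x, y, intensity; } point_t;   /* center of mass + its intensity */

enum direction { DIR_LEFT, DIR_RIGHT, DIR_UP, DIR_DOWN, DIR_COUNT };

/* Score of one step between two consecutive centers of mass:
 * distance multiplied by the intensities at both edge points of
 * the step (the distance is no longer squared). */
static double step_score(const point_t *a, const point_t *b)
{
    double dx = b->x - a->x;
    double dy = b->y - a->y;
    return sqrt(dx * dx + dy * dy) * a->intensity * b->intensity;
}

/* Add every step's score to the score of its dominant direction,
 * then pick the direction with the maximum total score, exactly
 * as in the previous algorithm. */
enum direction classify(const point_t *pts, size_t n)
{
    double scores[DIR_COUNT] = { 0.0 };

    for (size_t i = 1; i < n; i++) {
        double dx = pts[i].x - pts[i - 1].x;
        double dy = pts[i].y - pts[i - 1].y;
        enum direction d = (fabs(dx) >= fabs(dy))
                         ? (dx >= 0.0 ? DIR_RIGHT : DIR_LEFT)
                         : (dy >= 0.0 ? DIR_DOWN  : DIR_UP);
        scores[d] += step_score(&pts[i - 1], &pts[i]);
    }

    enum direction best = DIR_LEFT;
    for (int d = 1; d < DIR_COUNT; d++)
        if (scores[d] > scores[best])
            best = (enum direction)d;
    return best;
}
```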
Result
This algorithm is much better, and Pacman is playable with it. The algorithm produces far fewer false detections, but there are still some. The biggest disadvantage is the latency of detecting a gesture, caused by waiting until the gesture completes. In the next blog, I will show Pacman being played with this algorithm deployed in the background. For now, you can look at the “foreground”: the debugging screen of this algorithm. The dot in the right corner indicates whether a gesture is in progress (green when a gesture is in progress, red when no gesture is detected). The other dots indicate the collected centers of mass.
Source Codes
The source code of this algorithm will be shared as part of the next blog post, in which I will describe my final project utilizing this algorithm.
Conclusion
That is all for this blog and for the part about developing a custom gesture detection algorithm. In this and the previous blog post, I described the evolution of my own algorithms. This stage of the MAX25405 competition is where I learned the most. I did not learn much more about the MAX25405 and its hardware, but I learned that the software processing the data from this sensor is very important and that the quality of the result highly depends on it. In the end, I have an algorithm which is of course not as good as the one from Maxim, but it is acceptable for playing Pacman, as you will see in the next blog. Stay tuned for my upcoming blogs in this competition. Thank you for reading my blogs.
Next blog: Blog #14: Gesture Controlled Pacman