It's hard to believe that we've already reached the end of the 10-week design challenge period. As always with these challenges, I've learned a lot and had a great time. Thanks to Infineon and E14 for the opportunity and the dev board.
My challenge journey has been spread over a few blog posts that are listed and linked below.
In these times of pandemic stress, my project ended up being a bit helter-skelter. I started out getting acquainted with the board and then evaluated functional elements that I considered using for my project, some of which I did not end up using. I'll try to provide a more cohesive summary below:
Low Power Secure Entry Project Summary
Concept
A multi-stage security system for the front entry of my house. The requirements are that the system can determine whether there is a person in the area of my front door and, if the person approaches, allow entry if the person can be identified.
Stages
- Motion Detection - detect motion using microwave detector
- Object Classification - classify what caused the motion using a camera
- Facial Recognition and Access - identification and access using a camera and microphone
Motion Detection
I am reusing a microwave sensor that I designed for another project. It uses an RCWL-0516 sensor with an M5StickC module to publish motion detection information via MQTT over WiFi. I am using a Mosquitto MQTT Broker and Node-RED Dashboard running on an RPi4. The tasks for this project were to implement MQTT in FreeRTOS and integrate it with the EInk display. Along the way, I decided to add a JQ6500 MP3 player to play sound clips announcing the detection results.
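As a rough illustration of how the MQTT side fits together, here is a minimal C sketch of a message handler that maps an incoming motion payload to display and audio actions. The topic name, payload format ("1" = motion, "0" = clear), and all identifiers are hypothetical assumptions for illustration, not the actual project code:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical actions taken when a motion message arrives; in the real
 * firmware these would refresh the EInk display and trigger the JQ6500. */
typedef struct {
    int update_display;   /* nonzero: refresh the EInk status line */
    int play_clip;        /* JQ6500 sound-clip index, 0 = play nothing */
} motion_action_t;

/* Sketch of an MQTT message handler: map the payload published by the
 * M5StickC sensor node to display/audio actions. */
motion_action_t handle_motion_msg(const char *topic, const char *payload)
{
    motion_action_t act = {0, 0};
    if (strcmp(topic, "sensors/frontdoor/motion") != 0)
        return act;                       /* ignore unrelated topics */
    act.update_display = 1;               /* always show latest state */
    if (strcmp(payload, "1") == 0)
        act.play_clip = 1;                /* e.g. "motion detected" clip */
    return act;
}
```

In the real application this logic would live in the MQTT subscriber callback running under FreeRTOS, with the display and audio work handed off to their own tasks via queues.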
Object Classification
I struggled for a while trying to figure out how to add an image AI component to the project. I initially thought that I would add a standalone camera and run the neural network model on the PSoC 62S2. I tried a couple of cameras with other MCU boards: an OV7670 RGB VGA (640x480) camera with an Arduino Nano 33 BLE Sense (Arduino Nano 33 BLE Sense with OV7670 Camera) and an HM01B0 monochrome (320x320) Camera Module for Raspberry Pi Pico. I realized that porting a camera library and a NN library was going to be difficult in the time I had available, so I decided on the alternative of using a working smart-camera configuration to provide classification information via MQTT. I had used a Person Detection model with the Portenta with Vision Shield, which also uses the HM01B0 camera (Arduino Portenta - Person Detection), so I decided to use that configuration.
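To give a sense of how the classification information could be consumed on the PSoC side, here is a small C sketch that parses a "label:confidence" MQTT payload into its parts. The payload format and function name are my assumptions for illustration; the actual messages from the Portenta may be structured differently:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch: parse a classification message such as "person:87" into a
 * label and an integer confidence percentage. The caller's label buffer
 * must hold at least 32 bytes. Returns 0 on success, -1 if malformed. */
int parse_classification(const char *payload, char *label, int *confidence)
{
    if (sscanf(payload, "%31[^:]:%d", label, confidence) != 2)
        return -1;                        /* malformed message */
    return 0;
}
```

A handler like this would let the PSoC decide, for example, which JQ6500 sound clip to play based on the detected class.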
The working demo of the Motion Detection and Object Classification running on the PSoC with EInk Display and audio is in Blog post #8.
Facial Recognition and Access
My intent was to switch to running a facial recognition model if a person was detected and the detection bounding box filled the frame (i.e., the person was in close proximity). Facial recognition would then be used to grant entry access. Unfortunately, I ran out of time to implement this functionality.
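The proximity trigger described above could be sketched as a simple area comparison: switch to facial recognition once the person-detection bounding box covers most of the frame. This was never implemented, so everything below is a hypothetical sketch; the 60% coverage threshold and the `bbox_t` type are my assumptions:

```c
#include <assert.h>

/* Bounding box in pixels, as a person-detection model might report it. */
typedef struct { int x, y, w, h; } bbox_t;

/* Return nonzero when the box covers at least 60% of the frame area,
 * i.e. the detected person is close enough to attempt face recognition.
 * Integer math (scaled by 100) avoids floating point on the MCU. */
int is_in_close_proximity(bbox_t box, int frame_w, int frame_h)
{
    long box_area   = (long)box.w * box.h;
    long frame_area = (long)frame_w * frame_h;
    return box_area * 100 >= frame_area * 60;
}
```

The right threshold would need tuning against real captures at the door; 60% is just a starting point.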
Power Reduction
Since this was a "low power" challenge, I wanted to use the Low Power Assistant (LPA) library and the power configurability of the PSoC 62S2 MCU and the 43012 WiFi module to see how much I could reduce the average running power of my program. Since I'm running a configuration with WiFi, I knew that I wouldn't be able to achieve super low power. The low power configuration uses packet filters for MQTT traffic, puts the MCU to sleep, and shuts down the WiFi stack when there is no traffic. When MQTT traffic occurs, the WiFi module uses an interrupt to wake the MCU. With this configuration I was able to achieve an average power reduction of about 4X, from 1.6 mA to about 400 uA of VDD current.
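The savings come from spending most of the time asleep, so the average current is just a duty-cycle-weighted mix of active and sleep current. The sketch below shows the arithmetic; the individual active/sleep currents and the duty cycle are illustrative assumptions, with only the overall 1.6 mA to ~400 uA result coming from my measurements:

```c
#include <assert.h>

/* Average supply current in uA for a simple two-state duty cycle:
 * the device draws active_ua for active_fraction of the time and
 * sleep_ua for the rest. */
double avg_current_ua(double active_ua, double sleep_ua, double active_fraction)
{
    return active_fraction * active_ua + (1.0 - active_fraction) * sleep_ua;
}
```

For example, with an assumed 1300 uA while active, 100 uA while sleeping, and 25% active time, the average works out to the roughly 400 uA I measured; the actual split between states on my board will differ.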
Conclusion and next steps
The PSoC-62S2-43012 kit and Modus Toolbox provide a very capable and flexible hardware and software platform for developing power-optimized IoT solutions. I like the use of pre-configured board support packages (BSPs) that make it easy to swap between different development boards, and the context-sensitive libraries and examples make it simple to get started with applications. I also like the Eclipse framework for the IDE, as I've used it with other development software.
The large array of tools for configuring the hardware is both a plus and a minus. Optimizing configuration settings requires quite a bit of learning. Luckily there is a lot of great documentation, although you do need to search for it. The one area that caused me difficulty was the incompatibility of the PDL and HAL approaches when setting up GPIO. In general I think using the HAL works best, although you can get confused when you encounter examples and forum posts without realizing which context they assume (at least I got confused).
Machine Learning is relatively new to Modus Toolbox, and there was only a Gesture Classification example when I started. I tried that example with the accelerometers on the EInk shield and it worked well. I also tried keyword spotting using a project that I found on Electromaker: I captured data from the EInk shield microphone and built a model on Edge Impulse that worked reasonably well. I did, however, have problems deploying the model library, and I was not able to resolve the issues I had with getting adequate capture volume from the microphone.
My original intent was to use the Modus Toolbox Machine Learning to do object detection and keyword spotting, but I ended up using an Arduino Portenta with vision shield to do the object detection and did not incorporate the keyword spotting. This is definitely an area that I want to revisit and get working.
Learning about the power configurability of the PSoC 62 MCU and the 43012 WiFi module was very enlightening. I was impressed by the capabilities of the hardware and software in this respect. The hardware has been well designed for power optimization.
Overall, this was a very enjoyable challenge experience. The opportunity to try out lots of different hardware and software (both the challenge kit and lots of peripherals) is always fun. I'm looking forward to spending some time looking at the projects that the other challengers have built.