Raspberry Pi Zero 2W

RoadTest: Raspberry Pi Zero 2W

Author: dimiterk

Creation date:

Evaluation Type: Evaluation Boards

Did you receive all parts the manufacturer stated would be included in the package?: True

What other parts do you consider comparable to this product?: Orange Pi Zero, or any Cortex-A53 quad-core board.

What were the biggest problems encountered?: Lack of RAM.

Detailed Review:

1. Introduction

In this review we will take a look at the new RPi Zero 2W model. Many thanks to @rscansy and Element14 for providing the hardware. The main focus of the roadtest is how the new RPi model can be used to run a couple of AI pipelines with TensorFlow Lite, as well as IoT sensor-monitoring applications.

Before we delve into the details of the AI pipeline, let's quickly look at what the new RPi Zero 2W offers compared to the first-generation RPi Zero.

{gallery}RPI zero2W

2. Hardware

The kit came with the following hardware:

  • Power supply (microUSB, 5 V 2.5 A, from CanaKit)
  • microSD card (e.g. 32 GB SanDisk microSD)
  • RPI Zero 2W
  • Small heat sink

In addition, the test setup requires the following to test the AI pipelines.

  • USB OTG adapter (or possibly a USB hub with microUSB connector)
  • miniHDMI adapter (or possibly a miniHDMI to HDMI cable)
  • RPI camera
  • RPI camera 24-pin CSI flex adapter

The Raspberry Pi Zero 2 W uses a quad-core (4x) Cortex-A53 CPU (ARMv8-A, reported as armv7l when running the 32-bit OS) with 512 MB of RAM.

From a peripheral standpoint, the RPI Zero 2W is 100% compatible with Gen1. The RPi Zero 2 can be overclocked from 1 GHz to 1.4 GHz with proper cooling.

The peripherals are identical to the original RPi Zero and include the following:

  • 40-pin GPIO (following standard Pi layout)
  • 1x mini CSI connector
  • 1x microSD card
  • 1x micro USB power input
  • 1x micro USB data port
  • 1x mini HDMI display port

 

The main visual difference is the RP3A0 SiP (System in Package), which stacks 512 MB of LPDDR2 RAM on top of the BCM2710A1 die clocked at 1 GHz. In addition, an EMI can covers the WiFi radio section.

3. Software

From a software standpoint the MPU is compatible with the RPi 3B+. Unlike the RPI Zero W, the RPi Zero 2 W can run a 64-bit Pi OS; however, just because one can does not mean it is a good idea, depending on the application.

The current version of Raspberry Pi OS is based on Debian version 11, codename Bullseye. It is offered in 32-bit and 64-bit versions. Due to RAM constraints, using the 64-bit version with a GUI leaves very little memory for AI applications.

The new OS uses the libcamera library, which breaks compatibility with the previous camera stack. Other changes include the removal of the legacy MMAL-based camera tools and the old V4L2 camera path.

Raspberry Pi OS Buster

Raspberry Pi OS 32-bit Buster was used for this roadtest, since it offers a feature-complete camera framework. Specifically, on Buster the RPI camera is available through the usual /dev/video0 V4L2 interface.

First, let's enable the camera:

sudo raspi-config

Test by issuing  raspistill -o test.jpg
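A quick video check can also be done with raspivid from the same legacy camera stack (a 5-second preview in this example):

raspivid -t 5000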

OpenCV and GStreamer setup

Next, OpenCV and GStreamer have to be installed.

To install GStreamer there are two options: install the older packaged version from the repository, or build a newer version from source.

To install from the repository:

$ sudo apt-get install libx264-dev libjpeg-dev

# install the remaining plugins

$ sudo apt-get install libgstreamer1.0-dev \
     libgstreamer-plugins-base1.0-dev \
     libgstreamer-plugins-bad1.0-dev \
     gstreamer1.0-plugins-ugly \
     gstreamer1.0-tools \
     gstreamer1.0-gl \
     gstreamer1.0-gtk3

$ sudo apt-get install gstreamer1.0-qt5
$ sudo apt-get install gstreamer1.0-pulseaudio

# update the system and (optionally) install the RealVNC server for remote access
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install realvnc-vnc-server
sudo raspi-config

To build a newer GStreamer (1.18.4) from source instead, first remove the packaged version and install the build dependencies:

sudo rm -rf /usr/bin/gst-*
sudo rm -rf /usr/include/gstreamer-1.0
sudo apt-get install cmake meson flex bison
sudo apt-get install libglib2.0-dev libjpeg-dev libx264-dev
sudo apt-get install libgtk2.0-dev libcanberra-gtk* libgtk-3-dev
sudo apt-get install libasound2-dev




cd Documents/
wget https://gstreamer.freedesktop.org/src/gstreamer/gstreamer-1.18.4.tar.xz
sudo tar -xf gstreamer-1.18.4.tar.xz
cd gstreamer-1.18.4
mkdir build && cd build
meson --prefix=/usr --wrap-mode=nofallback -D buildtype=release -D gst_debug=true -D package-origin=https://gstreamer.freedesktop.org/src/gstreamer/ ..
ninja -j4
ninja test
sudo ninja install
sudo ldconfig
wget https://gstreamer.freedesktop.org/src/gst-plugins-base/gst-plugins-base-1.18.4.tar.xz
sudo tar -xf gst-plugins-base-1.18.4.tar.xz
cd gst-plugins-base-1.18.4
mkdir build
cd build
meson --prefix=/usr -D buildtype=release -D package-origin=https://gstreamer.freedesktop.org/src/gstreamer/ ..
ninja -j4
sudo ninja install
sudo ldconfig

wget https://gstreamer.freedesktop.org/src/gst-plugins-ugly/gst-plugins-ugly-1.18.4.tar.xz
sudo tar -xf gst-plugins-ugly-1.18.4.tar.xz
cd gst-plugins-ugly-1.18.4
mkdir build && cd build
meson --prefix=/usr -D buildtype=release -D package-origin=https://gstreamer.freedesktop.org/src/gstreamer/ ..
ninja -j4
sudo ninja install
sudo ldconfig

wget https://gstreamer.freedesktop.org/src/gst-omx/gst-omx-1.18.4.tar.xz
sudo tar -xf gst-omx-1.18.4.tar.xz
cd gst-omx-1.18.4
mkdir build && cd build
meson --prefix=/usr -D header_path=/opt/vc/include/IL -D target=rpi -D buildtype=release ..
ninja -j4
sudo ninja install
sudo ldconfig

wget https://gstreamer.freedesktop.org/src/gst-rtsp-server/gst-rtsp-server-1.18.4.tar.xz
tar -xf gst-rtsp-server-1.18.4.tar.xz
cd gst-rtsp-server-1.18.4
mkdir build && cd build
meson --prefix=/usr --wrap-mode=nofallback -D buildtype=release -D package-origin=https://gstreamer.freedesktop.org/src/gstreamer/ -D package-name="GStreamer 1.18.4 BLFS" ..
ninja -j4
sudo ninja install
sudo ldconfig
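If the source build succeeded, the installed version can be confirmed with the GStreamer inspection tool:

gst-inspect-1.0 --version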

GStreamer can be tested with the following:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, width=1280, height=720, framerate=30/1 ! videoconvert ! videoscale ! clockoverlay time-format="%D %H:%M:%S" ! video/x-raw, width=640, height=360 ! autovideosink
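Since gst-omx was built above, a hardware-encoded recording pipeline can also be tried. This is only a sketch and was not part of the original test; the omxh264enc element comes from the gst-omx plugin and the output file name is arbitrary:

gst-launch-1.0 -e v4l2src device=/dev/video0 num-buffers=300 ! video/x-raw, width=1280, height=720, framerate=30/1 ! videoconvert ! omxh264enc ! h264parse ! mp4mux ! filesink location=test.mp4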

Overclocking

Next, we overclock the RPI so the networks run faster. Edit /boot/config.txt as the root user:

sudo nano /boot/config.txt

over_voltage=4
arm_freq=1300
core_freq=500

Save and reboot: sudo reboot
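After rebooting, the applied clock and the SoC temperature can be checked with vcgencmd to confirm the overclock took effect and the heat sink is coping:

vcgencmd measure_clock arm
vcgencmd measure_temp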

TensorFlow Lite setup

 

TensorFlow Lite models are derived from full TensorFlow models by converting them and quantizing the float weights to 8-bit integers.
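As a rough illustration of how such a model is produced (this conversion normally runs on a desktop machine with full TensorFlow installed, not on the Pi; my_model.h5 is a placeholder name):

import tensorflow as tf

# Load a Keras model and convert it to TFLite with post-training
# quantization enabled via the default optimizations.
model = tf.keras.models.load_model("my_model.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("my_model.tflite", "wb") as f:
    f.write(tflite_model)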

https://www.tensorflow.org/lite/guide/python

 

python3 -m pip install tflite-runtime
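Once tflite-runtime is installed, a minimal inference call looks like the sketch below; the model file name is just an example and a dummy array is used instead of a real camera frame:

import numpy as np
from tflite_runtime.interpreter import Interpreter

# Load the model and allocate its tensors.
interpreter = Interpreter(model_path="efficientnet_lite0.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the shape and dtype the model expects
# (replace with a preprocessed camera frame).
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class index:", int(np.argmax(scores)))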

 

First, clone the examples repository:

git clone https://github.com/tensorflow/examples --depth 1

 

Next, test the following two AI edge examples from the TensorFlow Lite examples repository:

Image classification

cd examples/lite/examples/image_classification/raspberry_pi

Run the example:

python3 classify.py \
  --model efficientnet_lite0.tflite \
  --maxResults 5

Image segmentation

cd examples/lite/examples/image_segmentation/raspberry_pi
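The segmentation demo can then be run in the same way; the script and model names below are assumed from the examples repository layout rather than verified here:

python3 segment.py --model deeplabv3.tflite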

The accuracy of these quantized models is, however, lower than that of a full TF model.

One advantage of TFLite models is that they can embed the class metadata in the .tflite file itself. If you open the .tflite file as a zip archive, you will find the list of classes the model was trained to recognize.
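For example, listing the contents of a metadata-packed model (file name taken from the classification example above) shows the bundled label file:

unzip -l efficientnet_lite0.tflite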

 

{gallery}AI

IoT Application

The next test was to use a HAT daughterboard to build some IoT applications for sensor data visualization. The snippet below drives an RGB LED on the HAT using RPi.GPIO.

{gallery}IOT App

#!/usr/bin/python

import RPi.GPIO as GPIO
import time

# LED pins (physical/BOARD numbering)
R = 11
G = 13
B = 15


def Init():
    """Set up the LED pins using physical (BOARD) pin numbering."""
    GPIO.setwarnings(False)   # suppress "channel already in use" warnings
    GPIO.setmode(GPIO.BOARD)  # use physical board pin numbers
    GPIO.setup(R, GPIO.OUT)   # set LED pins as outputs
    GPIO.setup(G, GPIO.OUT)
    GPIO.setup(B, GPIO.OUT)


def RedLEDon():
    """Drive the LED pins to the 'on' state."""
    # GPIO.output(R, GPIO.HIGH)
    GPIO.output(G, GPIO.HIGH)
    GPIO.output(B, GPIO.HIGH)


def RedLEDoff():
    """Drive the LED pins to the 'off' state."""
    # GPIO.output(R, GPIO.LOW)
    GPIO.output(G, GPIO.LOW)
    GPIO.output(B, GPIO.LOW)


def SetLED(state):
    """Set the LED state: 1 for on, 0 for off."""
    if state:
        RedLEDon()
    else:
        RedLEDoff()


# If not used as a module (standalone), run this blink test program.
if __name__ == "__main__":
    Init()
    try:
        while True:
            SetLED(1)
            time.sleep(0.5)
            SetLED(0)
            time.sleep(0.5)
    except KeyboardInterrupt:
        print("Clean exit on CTRL-C")
        GPIO.cleanup()

The final test was interfacing with an SI7020 temperature/humidity sensor to build an IoT dashboard. The HAT also contains a radio, so the idea was to build a wireless sensor network with the RPI.

import time
import math

from si7020 import SI7020  # SI7020 I2C driver module


def calc_dew_pt(temp_c, rel_hum):
    """Return the dew point (degrees C) and ambient partial pressure
    from temperature in degrees C and relative humidity in %."""
    A, B, C = 8.1332, 1762.39, 235.66  # constants from the Si70xx datasheet

    pp_amb = 10 ** (A - (B / (temp_c + C)))  # ambient partial pressure
    return -(C + (B / (math.log10(rel_hum * pp_amb / 100) - A))), pp_amb


if __name__ == '__main__':
    sensor = SI7020()
    sensor.reset()
    temp = sensor.get_temp()
    hum = sensor.get_rel_humidity()
    dew_pt, _ = calc_dew_pt(temp, hum)
    print(time.time(), temp, hum, dew_pt)

In conclusion, the RPi Zero 2 W is a solid upgrade over the original Zero W. More RAM is what would make the board better suited for edge AI solutions.

The good

  • The RPI Zero 2W is up to 5x faster than the RPI Zero W
  • Hardware-wise, this is an RPI 3 in the Pi Zero form factor

The Bad

  • Minimal RAM makes the browser barely usable
  • CSI camera needs adapter cable
  • No 4K HDMI output