Xilinx Spartan-7 FPGA Maker Board by Digilent - Review


RoadTest: Xilinx Spartan-7 FPGA Maker Board by Digilent

Author: moiiom

Creation date:

Evaluation Type: Development Boards & Tools

Did you receive all parts the manufacturer stated would be included in the package?: True

What other parts do you consider comparable to this product?: A MiniZed or PYNQ board with a Zynq-7000 AP SoC.

What were the biggest problems encountered?: There were not as many resources (such as examples or tools) available for this specific project.

Detailed Review:

1. My application

 

As I have gained some experience with neural networks and machine learning over the last year, I wanted to deploy such a network on an FPGA.

I would therefore like to thank Element14 and Digilent for giving me the chance to test the board.

 

2. Unboxing

[images: unboxing photos]

 

The development board is neatly packaged. There are buttons and switches as inputs, LEDs and RGB LEDs as outputs, and pin connectors in different form factors (Pmod, Arduino). The board can be powered via USB or the power jack.

 

3. Demo

 

The demo is a simple project with blinking LEDs that react to the buttons and switches.

 

4. Building the Neural Network

 

For the RoadTest I built a simple network with one input layer, one hidden layer, and one output layer. Only dense layers are used, no convolutions.

[image: diagram of a fully connected neural network, © jamonglab]

 

As you can imagine, the network is essentially a big matrix multiplication with extras. Each input value (for example, a pixel) is multiplied with a specific value, usually called a weight, for every neuron (a circle in the picture above). Additionally,

a bias is added and an activation function is applied.
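
In code, one dense layer boils down to just a few lines. Here is a minimal NumPy sketch; the sizes and values are made up, purely to illustrate the idea:

```python
import numpy as np

# One dense layer: output = activation(W @ x + b)
# Illustrative sizes: 4 inputs feeding 3 neurons.
x = np.random.rand(4)        # input values (e.g. pixel brightnesses)
W = np.random.rand(3, 4)     # one weight per input for every neuron
b = np.random.rand(3)        # one bias per neuron

def relu(v):
    return np.maximum(v, 0)  # a common activation function

y = relu(W @ x + b)          # multiply, add biases, apply activation
print(y)
```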

I used the MNIST data set to train my network. It consists of many handwritten digits from 0 to 9 with a picture size of 28 x 28 pixels.

This means my network gets 784 values as input. The output layer outputs 10 values representing a probability for each possible digit. For the hidden layer I chose a size of 50, as the results were good enough.

This is a relatively small network, but it still needs:

          784 * 50 + 50 (biases) = 39250 parameters for the first layer

and     50 * 10 + 10 = 510 parameters for the second layer.

Together the network has to learn 39760 parameters (weights and biases).
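
As a quick sanity check, the same arithmetic in a few lines of Python:

```python
# Parameter count of the 784-50-10 fully connected network
n_in, n_hidden, n_out = 784, 50, 10

layer1 = n_in * n_hidden + n_hidden   # weights + biases = 39250
layer2 = n_hidden * n_out + n_out     # weights + biases = 510

print(layer1, layer2, layer1 + layer2)  # 39250 510 39760
```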

 

Training is the most difficult and resource-intensive part. Therefore it is usually done on a GPU (or, if available, on big servers etc.). The fully trained network is then deployed on the FPGA.

I wrote the code to train the network with TensorFlow/Keras, which has a Python API. It's free and, in my opinion, a good tool to start with, as many datasets can be downloaded directly within the API.

After training is completed, the weights and biases are retrieved and saved to a CSV file.

Now we can use the data directly outside TensorFlow.

[image: the Keras training code]

The picture above shows most of the code to train the network.
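
Since the screenshot may be hard to read, here is a minimal sketch of this kind of training script. The layer sizes match my network; the hidden activation, optimizer, epoch count, and file names are illustrative, not necessarily what I used:

```python
import numpy as np
from tensorflow import keras

# MNIST can be downloaded directly through the Keras API
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0   # flatten the 28x28 images
x_test = x_test.reshape(-1, 784) / 255.0

# 784 -> 50 -> 10, dense layers only
model = keras.Sequential([
    keras.layers.Dense(50, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

# Retrieve the weights and biases and save them as CSV
w1, b1 = model.layers[0].get_weights()
w2, b2 = model.layers[1].get_weights()
for name, arr in [("w1", w1), ("b1", b1), ("w2", w2), ("b2", b2)]:
    np.savetxt(name + ".csv", arr, delimiter=",")
```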

5. Verifying the network

 

To check if the weights are useful, I wrote another Python program, this time without using TensorFlow/Keras. I drew a small digit in Microsoft Paint 3D and fed it into the program.

The program simply performs a matrix multiplication with the weights we trained before:

 

[image: the verification code]

Simple arithmetic can show us which digit is drawn in the picture. The softmax activation function converts the 10 values at the output layer to probabilities for each digit.
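
A minimal sketch of that verification logic, assuming the CSV files from the training sketch above (loading the image via Pillow is also just one possible way):

```python
import numpy as np
from PIL import Image

# Load the trained parameters back from the CSV files
w1 = np.loadtxt("w1.csv", delimiter=",")   # shape (784, 50)
b1 = np.loadtxt("b1.csv", delimiter=",")   # shape (50,)
w2 = np.loadtxt("w2.csv", delimiter=",")   # shape (50, 10)
b2 = np.loadtxt("b2.csv", delimiter=",")   # shape (10,)

def softmax(v):
    e = np.exp(v - v.max())                # subtract max for stability
    return e / e.sum()

# Load the 28x28 drawing as grayscale and flatten it to 784 values
img = Image.open("digit.png").convert("L")
x = np.asarray(img, dtype=float).reshape(784) / 255.0

# Forward pass: two matrix multiplications, biases, activations
hidden = np.maximum(x @ w1 + b1, 0)        # hidden layer with ReLU
probs = softmax(hidden @ w2 + b2)          # probabilities for digits 0..9

print(probs)
print("Detected digit:", probs.argmax())
```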

I tried it with many drawn digits, and it worked quite well most of the time.

 

[image: the self-drawn digit (28 x 28)]

 

[image: the program's output probabilities]

The code returns the probability for each of the 10 digits. It is correct with the "3". The result isn't always that clear, for example with a badly drawn "1" or "7".

 

6. Deploying on FPGA

 

I searched for a long time for tools that help bring the network to the FPGA.

Xilinx offers the so-called reVISION stack (https://www.xilinx.com/products/design-tools/embedded-vision-zone.html). This would be exactly what you need.

Unfortunately, it is only suitable for Xilinx Zynq SoCs and MPSoCs, which have an integrated ARM CPU.

So I had to search for other possibilities:

 

MATLAB/Simulink

MATLAB offers the possibility to generate HDL code for FPGAs.

You can generate it from MATLAB code or from Simulink blocks (which worked much better, in my opinion).

My first naive try:

[images: Simulink model of the network]

It resulted in an hour of computing and ~470,000 lines of code. I didn't try to synthesize it.

 

Vivado HLS

Xilinx offers this really useful tool for free (at least I didn't pay anything for it).

It's able to convert C/C++ code to VHDL or Verilog.

 

I imported the weights and biases as C arrays. Then I programmed a matrix multiplication similar to the one above in Python. The function should return the digit with the highest probability.

[image: the Vivado HLS C code]

Since this already took me a lot of time, I have not tried what can be done to optimize the code for FPGAs (data types, parallel processing, ...).

 

The C synthesis created a lot of HDL code to be imported into Vivado.

 

Putting it together

 

In Vivado the imported module could be added to my VHDL code as a normal component.

The picture with the digit had to be hardcoded into the project as an array, as I lacked time.

The return value from the network, which is the number of the digit (in our case the "3"), is routed to the LEDs to show the result (in binary, so there are enough LEDs).

Sadly, I did not get the expected result. Judging from the LEDs, the neural net detected a "5". I'm not yet sure where the mistake is. The "5" looks similar to the "3" (for the net), so even computational inaccuracies could be the cause. But I'm sure I'll find the problem.

 

7. Conclusion

 

I really liked working with the board and the Vivado tools. Programming the board over the USB interface was easy.

Still, if someone wants to try implementing neural networks on FPGAs, I would recommend the Xilinx Zynq SoCs and MPSoCs, found on boards like

Arty Z7 : https://store.digilentinc.com/arty-z7-apsoc-zynq-7000-development-board-for-makers-and-hobbyists/

PYNQ-Z1 : https://store.digilentinc.com/pynq-z1-python-productivity-for-zynq-7000-arm-fpga-soc/

as Xilinx offers the tools that are needed.
