element14 Community
BBAI-Dried Sea Cucumber Species Identifier

kurst811
24 Nov 2019

Vision Thing

Enter Your Project for a chance to win an Oscilloscope Grand Prize Package for the Most Creative Vision Thing Project!


 

  • Overview
  • Hardware
  • Framework
  • Collecting Dataset
  • Image Labeling
  • Training
  • Implementation
  • Testing
  • Conclusion

Overview

My project idea is to use the BeagleBone AI board + mini power bank + USB camera to make a portable dried sea cucumber species identifier. At first, I am going to aim at identifying at most 3 species of dried sea cucumber: Isostichopus Badionotus, Apostichopus Californicus, and Holothuria Mammata. The project uses machine learning, feature extraction, and classification to produce a predicted output. The idea came about during my visit to the Philippines, where I got samples of various dried sea cucumbers from my uncle, who was a middleman between restaurants and fishermen. Sea cucumbers in dried form are difficult to identify by species, so I thought it would be a neat idea to use computer vision as a solution for identifying dried sea cucumber species.

 

Holothuria Mammata | Isostichopus Badionotus | Apostichopus Californicus
image | image | image

Hardware

• BeagleBone AI
• Logitech camera
• Anker battery

 

image

 

image

Framework

For the neural net framework, I decided on Darknet, as it is something I have worked with before, and I was able to get around 10 fps using only the CPU. I initially tried to convert a TensorFlow model to TIDL format, but I couldn't get it to work, so I went with my current approach. For the model, I used a modified tiny-YOLO model and retrained it on the new objects for around 5000 steps. I also modified the Darknet framework so that it could output the predictions to a text file. Overall, the accuracy of this model was around 85%, and since my application detects objects that are idle, there is no need for a high frame rate. The modified Darknet version and model can be found at the link in Implementation.
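Since the modified Darknet writes its predictions to a text file, a downstream script can read that file instead of parsing console output. The sketch below assumes a simple "label confidence" line format; the actual format written by the modified fork may differ, and the function name is mine:

```python
def read_predictions(path):
    """Parse a prediction text file assumed to contain lines like
    'Badionotus 0.85' (label, confidence) and return the entries
    sorted by confidence, highest first."""
    entries = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:  # skip blank or malformed lines
                label, conf = parts
                entries.append((label, float(conf)))
    return sorted(entries, key=lambda e: e[1], reverse=True)
```

The top entry of the returned list is then the predicted species.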

 

Collecting Dataset

The first problem I ran into when doing this project was finding a sufficient number of images of specific species of dried sea cucumber. Since there are barely any images of dried sea cucumber on ImageNet, I spent a couple of hours just looking at images on sites like Alibaba and Amazon, since there are actual dried sea cucumber sellers there. After a couple of hours, I decided to settle on these sea cucumber species: Isostichopus Badionotus, Apostichopus Californicus, and Holothuria Mammata. However, I was only able to get around 20 different images at most, and I wanted at least 60 different images for each species. Thus, I decided to create new images from the existing pool of images. One method I used was simply taking pictures with multiple dried sea cucumbers and cropping each sea cucumber into its own image. Another method I used was rotating the image to a 45 degree angle.
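One practical detail of the rotation trick: when an image is rotated, any existing bounding boxes have to be rotated with it. A minimal sketch of the geometry (the helper name is mine, and it assumes the rotated canvas keeps the original size): rotate the four corners about the image centre, then take the axis-aligned box that encloses them.

```python
import math

def rotate_box(box, img_w, img_h, angle_deg):
    """Rotate an axis-aligned box (xmin, ymin, xmax, ymax) about the
    image centre by angle_deg and return the axis-aligned box that
    encloses the rotated corners."""
    cx, cy = img_w / 2, img_h / 2
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    xmin, ymin, xmax, ymax = box
    corners = [(xmin, ymin), (xmax, ymin), (xmin, ymax), (xmax, ymax)]
    # Standard 2D rotation of each corner about the centre
    rotated = [(cx + (x - cx) * cos_t - (y - cy) * sin_t,
                cy + (x - cx) * sin_t + (y - cy) * cos_t)
               for x, y in corners]
    xs = [p[0] for p in rotated]
    ys = [p[1] for p in rotated]
    return min(xs), min(ys), max(xs), max(ys)
```

For a 45 degree rotation the enclosing box grows somewhat looser than the object, which is one reason augmented labels should still be checked by eye.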

 

This is the original image:

image

This is an image I generated from the original:

image

 

Image Labeling

I used LabelImg to create a bounding box around all the objects in the images. In addition, LabelImg also creates the coordinate text file in Darknet format, which is convenient since I don't have to convert the coordinate file into Darknet format myself.

 

During the image labeling process, it is important to make the bounding box as tight as possible, as the framework is going to learn the features within the bounding box. The higher the percentage of desired features within the bounding box, the more accurate your detection model is going to be.

image
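For reference, the Darknet label format that LabelImg produces is just the class index followed by the box centre and size, normalised to the image dimensions. A sketch of the conversion from pixel coordinates (the helper name is mine):

```python
def to_darknet(class_id, box, img_w, img_h):
    """Convert a pixel box (xmin, ymin, xmax, ymax) into a Darknet
    label line: 'class x_center y_center width height', with all
    four coordinates normalised to [0, 1] by the image size."""
    xmin, ymin, xmax, ymax = box
    x_c = (xmin + xmax) / 2 / img_w
    y_c = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"
```

Because the coordinates are relative, the same label file stays valid if the image is uniformly resized.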

 

Training

1. For training, I simply copied the data folder from LabelImg (after all the images were finished being labeled) and put it into the Darknet folder.

 

2. Next, I created a BBAI.data as shown below:

classes= 2
train  = BBAI/train.txt
valid  = BBAI/train.txt
names = BBAI/BBAI.names
backup = backup/

 

3. Then, I created a train.txt to let the program know where all the images are stored. train.txt should look like this:

BBAI/obj/Badionotus0.jpg
BBAI/obj/Badionotus1.jpg
...
BBAI/obj/Badionotus60.jpg
BBAI/obj/Mammata0.jpg
...
BBAI/obj/Mammata40.jpg
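A file list like the one above can be generated rather than typed by hand. A small sketch, assuming the BBAI/obj layout from this post (the helper name and defaults are mine):

```python
import os

def write_train_list(image_dir="BBAI/obj", out_path="BBAI/train.txt"):
    """List every .jpg in image_dir and write one path per line,
    which is the layout Darknet expects in train.txt."""
    names = sorted(n for n in os.listdir(image_dir) if n.endswith(".jpg"))
    with open(out_path, "w") as f:
        for name in names:
            f.write(f"{image_dir}/{name}\n")
```

Regenerating the list after adding augmented images keeps train.txt in sync with the dataset.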

 

4. Then, I created a BBAI.names that simply contains the object names:

Badionotus
Mammata

 

5. Next, I created my own .cfg file (this is the model file) from tiny-yolo.cfg. The important parts are to change the number of classes to however many classes (objects) you have, and to change the number of filters on line 114 to (classes + 5) * 5; with 2 classes that is (2 + 5) * 5 = 35 filters. Other things you can experiment with in the model for performance improvement are the width/height and the number of convolutional layers.

 

 

6. Finally, use the following command to start training your model. There should already be a pre-trained model in the Darknet directory; you are simply retraining it with new data.

 

./darknet detector train BBAI/BBAI.data BBAI/BBAI.cfg BBAI/darknet53.conv.74

 

image

 

It is recommended to train for around 9000 iterations, but in some cases around 7000 or even 5000 iterations yields a better result. Since I modified Darknet to save the model every 1000 iterations, it is fairly simple to test and see which iteration gives the best result.

image

 

 

Implementation

To make image capturing and detection simpler to control, I made a local web app using Node.js so I can control this application remotely.

 

app.js creates a web server to host the web page. To access the web page, you simply type localhost:5000 into the URL bar.

This is the control UI

 

image

 

Full Repo : https://github.com/Husky95/BBAI

 

Testing

For manual testing of the object detection, I feed the framework an image with the current model to make sure it can detect the objects correctly.

Use this command for manual testing:

./darknet detector test BBAI/BBAI.data BBAI/BBAI.cfg  backup/BBAI_5000.weights Test.jpg

 

 

Isostichopus Badionotus | Holothuria Mammata | Apostichopus Californicus
Output: image | image | image
Console Output: image | image | image

 

Conclusion

Overall, the average prediction accuracy was around 75%, and the average time taken is around 30 seconds per image. The long processing time is mostly because Darknet is not able to fully use the BBAI's special hardware. I believe that once the BBAI releases its TensorFlow Lite support, using the TensorFlow Lite framework will significantly reduce the time it takes to identify an object. Of the three dried sea cucumbers I tested, Mammata and Californicus have a relatively high prediction accuracy at around 85%, while Badionotus is around 65%. This is mostly due to the lack of image variety in the Badionotus dataset, and perhaps the unique color pattern of this specific sea cucumber species.


Top Comments

  • 14rhb
    14rhb over 5 years ago +3
    I was wondering how you were getting on this your project kurst811 as things went a bit quiet, and now I can see why. This is great and a very interesting write up to follow through later in my own slower…
  • Sean_Miller
    Sean_Miller over 5 years ago +3
    Great stuff! I wonder how one could take your trained model into the BBAI classification.tidl.cpp example code? That example makes use of the BBAI hardware. Here is the section of the classification.tidl…
  • mayermakes
    mayermakes over 5 years ago

    Words can't express how much I love this project.

    First for its specificity and second for its execution.

    And third, I might add, I'm a huge bug nerd, so this use of AI is just sooo awesome!!!
  • kurst811
    kurst811 over 5 years ago in reply to shabaz

    It's similar to TensorFlow, except it is optimized to run on Nvidia graphics cards, and it also has a few functions to increase performance using only the CPU. I used it because I was having problems using the native TIDL library, and the BBAI just doesn't support TensorFlow at all.
  • shabaz
    shabaz over 5 years ago

    Very interesting, this was an impressive project.

    Could you explain Darknet a little more if you don't mind? A video walk-through of that or your project would be awesome.

    Anyway, excellent work.

  • three-phase
    three-phase over 5 years ago

    Quite an ambitious project that has produced some really great results. Well done.

     

    Kind regards.

  • dubbie
    dubbie over 5 years ago

    The accuracy you have achieved with your neural network is pretty good. Obviously it would be nice if it was better. More training data should help. Getting good training data is sometimes quite difficult. I have tried recognising letters with an ANN, where there is really only one correct image (once the image has been normalised). Any other data is wrong, so there is not much point in training the ANN on slightly wrong letters.

     

    One thing I did which helped with the level of confidence of the selection made by the ANN was to get a percentage reliability value (I could not see whether your system produced this) as well as the second choice. I cannot remember the exact maths I used, but it was something to do with statistics - I'll see if I can find it somewhere, sometime. It was useful to see the difference between the first and second choices. A small difference would indicate an increased level of uncertainty in the accuracy of the first choice, so a first choice of 80% certain with a second choice of 20% would probably be more reliable than a 90% first choice and 80% second choice.

     

    Dubbie
