Bluetooth Unleashed Design Challenge
Universal LED Animator #9 - Image analysis

mgal, 19 Jul 2018

Hi all, this post will elaborate on the use of OpenCV-based image analysis for the Animator project. As you may remember from a few weeks back, I managed to get some nice results recognizing LED strips in pictures with OpenCV, but I needed more precision. I finally managed to get some decent results and I'm going to explain what I did and how. First, the results:

Here you can see three LED strips on a minimum brightness setting, utilising the current version of the config pattern. It's pretty simple for now: the first Animator that connects to the BeagleBone gets coloured red, the second green and the third blue. When the user uploads a photo of the setup, OpenCV finds its orientation (horizontal vs vertical) and the relative positions of the LED strips, returning a list of three numbers based on the elementary colour it finds in the picture. [0, 1, 2] means the strips are ordered red, green, blue from left to right (or top to bottom - that information is stored elsewhere), [2, 1, 0] means the opposite order, and so on. This allows the web interface to map the user's inputs to their respective LED strips: the user doesn't care whether the LED strip they want to configure is the one that connected to the central unit most recently, but they'd be very happy to just say 'hey, make the leftmost strip blue'. The primary goal of this project was to enable exactly that, and if 30 seconds later they rearrange their physical setup, they should be able to just take another photo and carry on with the web interface as if nothing happened.
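As a quick illustration of what that list buys the web interface (hypothetical names - the actual interface code isn't shown in this post), the result works as a position-to-strip lookup:

```python
# Hypothetical sketch: using the analyser's result as a lookup from
# on-wall position to connection order. [2, 1, 0] means the leftmost
# strip was the third (blue) Animator to connect.
def strip_at_position(order, position):
    """Return the connection index of the strip at a given physical position."""
    return order[position]

order = [2, 1, 0]
leftmost = strip_at_position(order, 0)   # the user says 'the leftmost strip'
print(leftmost)                          # -> 2, i.e. the third strip to connect
```

If the user rearranges the strips and uploads a new photo, only `order` changes; the interface code stays the same.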

 

  • Future goals
  • The procedure
  • Python implementation
  • Step-through
  • Debugging output

 

Future goals

 

You probably noticed the attention-catching circles littering the image. I didn't have enough time, nor memory in the Blends, to introduce recognising the direction of the strips, but coupled with some decent maths that would allow for even nicer features - imagine LED strips emulating water droplets sliding down slopes, with each droplet's speed determined by the angle the LED strip is hanging at. That's a future goal; for now, the circles in the image mark the beginnings, middles and ends of each line detected by OpenCV, for the purpose of colour recognition. The idea is to make the configuration-mode patterns exhibit three colours, so that the LED strips' individual colours would be recognised together with their direction. This is not implemented yet.
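To sketch what that could look like (purely illustrative - none of this exists in the project yet), the hang angle of a detected line and a corresponding droplet speed might be derived from the line's endpoints:

```python
import math

# Hedged sketch of the future "droplet" idea: given a line's endpoints
# (as returned by the Hough transform), compute the hang angle and scale
# a droplet speed by how steep the strip is. All names are illustrative.
def hang_angle(x1, y1, x2, y2):
    """Angle of the strip from horizontal, in degrees (0 = level, 90 = vertical)."""
    return abs(math.degrees(math.atan2(y2 - y1, x2 - x1))) % 180

def droplet_speed(angle_deg, max_speed=1.0):
    """Steeper strips make faster droplets; a level strip gives zero speed."""
    a = min(angle_deg, 180 - angle_deg)  # fold the angle into 0..90
    return max_speed * math.sin(math.radians(a))

print(hang_angle(0, 0, 0, 10))   # a vertical strip -> 90.0
print(droplet_speed(90))         # full speed on a vertical strip
```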

 

The procedure

The idea is to make OpenCV able to recognize the three coloured lines in the image, get their orientation and relative positions, and use that information later to configure the web interface. This goal is achieved by:

  1. Preprocessing the image,
  2. Using the probabilistic Hough Transform to find lines,
  3. Counting the lines (there will be dozens, which is fine for now),
  4. Finding the middle of each line,
  5. Checking whether the line is horizontal or vertical, and incrementing the corresponding variable,
  6. Getting the dominant colour in the area around the middle of each line,
  7. Counting the number of lines per colour,
  8. Computing the average of the aggregate X and Y coordinates per colour,
  9. Deciding whether the strips are horizontal or vertical based on the sums of vertical and horizontal lines found (point #5),
  10. Sorting the lines based on the averages of X or Y coordinates to get them in left-right or top-bottom order.
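The bookkeeping in steps 4-10 can be sketched without OpenCV at all; here the line coordinates and mean colours are made up purely for illustration:

```python
# A cv2-free sketch of steps 4-10, using made-up line data so the
# bookkeeping is easy to follow. Each entry is (x1, y1, x2, y2) plus the
# mean (R, G, B) sampled around the line's centre; values are illustrative.
lines = [
    (200, 100, 210, 400, (120, 30, 40)),   # reddish, mostly vertical
    (350, 120, 360, 420, (20, 35, 95)),    # bluish, mostly vertical
    (100, 110, 105, 410, (60, 140, 50)),   # greenish, mostly vertical
]

sums = {"r": [0, 0, 0], "g": [0, 0, 0], "b": [0, 0, 0]}  # [sum_x, sum_y, count]
horz = vert = 0
for x1, y1, x2, y2, (mr, mg, mb) in lines:
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2          # step 4: line centre
    horz += abs(x2 - x1)                             # step 5: orientation tallies
    vert += abs(y2 - y1)
    key = max((mr, "r"), (mg, "g"), (mb, "b"))[1]    # step 6: dominant channel
    sums[key][0] += cx; sums[key][1] += cy; sums[key][2] += 1  # steps 7-8

avg_x = {k: sx / n for k, (sx, sy, n) in sums.items() if n}
orientation = "horizontal" if horz > vert else "vertical"    # step 9
order = sorted(avg_x, key=avg_x.get)                         # step 10 (by X;
print(orientation, order)                                    # by Y if horizontal)
```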

 

Python implementation

The practical implementation of the above looks like this:

 

import os

import cv2
import numpy as np

#Preprocessing the image: resize, grayscale, threshold, close gaps
myfile = ('uploads/' + os.listdir('uploads')[0])
print('reading image: ', myfile)
src = cv2.imread(myfile)
small = cv2.resize(src, (500, 700))
src = small
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)

img = gray
kernel = np.ones((4,4),np.uint8)

ret,img = cv2.threshold(img,120,255,cv2.THRESH_BINARY)
close = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

#Get lines via probabilistic Hough Transform
#(minLineLength and maxLineGap must be keyword arguments here - passed
#positionally they would land in HoughLinesP's optional output parameter)
minLineLength = 500
maxLineGap = 10
lines = cv2.HoughLinesP(close, 1, np.pi/180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
radius = 15
nlines = 100

redx = 0
greenx = 0
bluex = 0
redy = 0
greeny = 0
bluey = 0
redi = 0
greeni = 0
bluei = 0

horz = 0
vert = 0

for n in range(min(nlines, len(lines))):
    #keep counting lines until there is at least one per colour
    if (redi>0 and greeni>0 and bluei>0): 
        break
    x1,y1,x2,y2 = lines[n][0]  #HoughLinesP returns an (N, 1, 4) array
    #find centres of lines
    x3 = int((x2+x1)/2)
    y3 = int((y2+y1)/2)

    #find dominant colours in a 30px-wide square around the line centre
    roi_size = radius
    roi_values = src[(y3-roi_size):(y3+roi_size), (x3-roi_size):(x3+roi_size)]
    mean_blue = int(np.mean(roi_values[:,:,0]))
    mean_green = int(np.mean(roi_values[:,:,1]))
    mean_red = int(np.mean(roi_values[:,:,2]))
    colour = ''
    #also, find the line's horizontal and vertical shift
    horz += abs(x2-x1)
    vert += abs(y2-y1)
    #operations needed for getting the average of aggregate positions of lines
    if(max(mean_red, mean_green, mean_blue) == mean_red):
        colour = 1
        linecolour = (0,0,255)
        redx+=x3
        redy+=y3
        redi = redi+1
    elif(max(mean_red, mean_green, mean_blue) == mean_green):
        colour = 2
        linecolour = (0,255,0)
        greenx+=x3
        greeny+=y3
        greeni = greeni+1
    else:
        colour = 3
        linecolour = (255,0,0)
        bluex+=x3
        bluey+=y3
        bluei = bluei+1
        
    #print("{}: {}x{} R: {}  G: {}  B: {}, {} r {} g {} b {}".format(n, x3, y3, mean_red, mean_green, mean_blue, colour, redi, greeni, bluei) )
    
    #Diagnostics - draw lines and circles if you like
    #cv2.line(small,(x1,y1),(x2,y2),linecolour,2)
    #cv2.circle(small, (x1,y1), radius, (255,0,0))
    #cv2.circle(small, (x2,y2), radius, (0,0,255))
    #cv2.circle(small, (x3, y3), radius, (0,255,0))
    
#guard against a colour with no detected lines before averaging
redi = max(redi, 1)
greeni = max(greeni, 1)
bluei = max(bluei, 1)
redx /= redi
greenx /= greeni
bluex /= bluei
redy /= redi
greeny /= greeni
bluey /= bluei
os.remove(myfile)
print("Rx: {}  Gx: {}  Bx: {} Ry: {}  Gy: {}  By: {} horz: {} vert: {}".format(redx, greenx, bluex, redy, greeny, bluey, horz, vert))

#decide if strips are horizontal or vertical, then sort them and put them in shared variables for the main process to read
if(horz>vert):
    print('horizontal.')
    positions = {'0' : redy, '1' : greeny, '2' : bluey}  #0=red, 1=green, 2=blue
    ids = sorted(positions.items(), key=lambda kv: kv[1])
    print(ids)
    n1.value=int(ids[0][0])
    n2.value=int(ids[1][0])
    n3.value=int(ids[2][0])
    print(n1.value, n2.value, n3.value)
else:
    print('vertical.')
    positions = {'0' : redx, '1' : greenx, '2' : bluex}
    ids = sorted(positions.items(), key=lambda kv: kv[1])
    print(ids)
    n1.value=int(ids[0][0])
    n2.value=int(ids[1][0])
    n3.value=int(ids[2][0])
    print(n1.value, n2.value, n3.value)

 

Step-through

Here's what happens to the image at each step of the process:

 

1. Opening, resizing, converting to grayscale

2. Thresholding

 

3. Closing gaps

 

While there's definitely room for improvement, this is enough for the probabilistic Hough transform to detect a decent number of lines. Naturally, for the colour recognition we go back to the full-colour image.
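As a toy illustration of why the gap-closing step matters (a pure-Python sketch, not how OpenCV implements it), here is a morphological close on a 1-D binary strip:

```python
# On a 1-D binary strip, dilating then eroding (a morphological close)
# fills the small gap that thresholding left in the line, so the Hough
# transform sees one segment instead of two.
row = [0, 1, 1, 0, 1, 1, 0]  # a line broken by a one-pixel gap

def dilate(r):
    """A pixel becomes foreground if any 3-wide neighbour is foreground."""
    return [1 if any(r[max(0, i-1):i+2]) else 0 for i in range(len(r))]

def erode(r):
    """A pixel stays foreground only if its whole 3-wide neighbourhood is."""
    padded = [0] + r + [0]  # treat out-of-bounds as background
    return [1 if all(padded[i:i+3]) else 0 for i in range(len(r))]

closed = erode(dilate(row))
print(closed)  # -> [0, 1, 1, 1, 1, 1, 0]: gap filled, extent preserved
```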

 

Debugging output

The output of the above code's debugging messages is as follows:

 

Stage one: index of the analysed line, coordinates of its centre, mean colour content of the 30px-wide area around the centre, dominant colour (1 = red etc.), and the running count of lines found per colour.

0: 206x235 R: 59  G: 17  B: 27, 1 r 1 g 0 b 0

1: 234x443 R: 86  G: 36  B: 45, 1 r 2 g 0 b 0

2: 371x388 R: 13  G: 36  B: 93, 3 r 2 g 0 b 1

3: 372x575 R: 12  G: 32  B: 90, 3 r 2 g 0 b 2

4: 210x277 R: 119  G: 38  B: 42, 1 r 3 g 0 b 2

5: 371x430 R: 15  G: 35  B: 94, 3 r 3 g 0 b 3

6: 208x235 R: 47  G: 20  B: 25, 1 r 4 g 0 b 3

7: 194x143 R: 121  G: 22  B: 29, 1 r 5 g 0 b 3

8: 371x541 R: 22  G: 32  B: 89, 3 r 5 g 0 b 4

9: 372x388 R: 24  G: 44  B: 92, 3 r 5 g 0 b 5

10: 152x388 R: 63  G: 75  B: 65, 2 r 5 g 1 b 5

Stage two: averages of each colour's X and Y coordinates, plus the factors used to find the strips' orientation (the sum of X differences between the lines' starts and ends, and the same for Y).

Rx: 210.4  Gx: 152.0  Bx: 371.4 Ry: 266.6  Gy: 388.0  By: 464.4 horz: 131 vert: 1053

Let's make a decision - is the overall shift larger in X or in Y?

vertical.

Sort the colours in a dictionary based on their average Xs (we don't care about the Ys anymore) and print the result:

[('g', 152.0), ('r', 210.4), ('b', 371.4)]

 

Stay tuned for a video demo in my final blog post!

aspork42 over 4 years ago

    Awesome! Great post showing how you get the data out of the images.

    Maybe I missed it, but you convert to greyscale when trying to find the color of the strips. How is that done? Or do you just use the greyscale image as a mask for the RGB one?

mgal over 4 years ago in reply to aspork42

    Hi James,

     

Thank you for asking. I convert to grayscale to get the shapes (i.e. the coordinates of where the lines start and end, as well as their centres), then I find the dominant colours in the original coloured picture using those coordinates. A lot of OpenCV algorithms work better in grayscale - see my 'Exploring OpenCV' post for some more details on that. If you take a look at the code, the grayscale image is derived from 'src' and algorithmically modified in lines 7-18 to get the coordinates, and then we go back to working with 'src', which is the original colour image. If you have any more questions let me know.
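    That workflow can be shown in miniature without OpenCV (the tiny 'image' and the threshold value below are made up for illustration):

```python
# Minimal sketch of the idea above: detect a shape's coordinates in a
# grayscale copy, then sample the colour at those coordinates in the
# original image. The 1x4 "image" and threshold are made-up test data.
colour_img = [[(10, 10, 10), (200, 30, 30), (190, 25, 35), (12, 9, 11)]]  # (R, G, B)

# "Grayscale" via a simple channel average, then threshold for bright pixels
gray = [[sum(px) // 3 for px in row] for row in colour_img]
coords = [(x, y) for y, row in enumerate(gray)
          for x, v in enumerate(row) if v > 60]

# Go back to the original colour image at the detected coordinates
samples = [colour_img[y][x] for x, y in coords]
print(coords, samples)
```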

     

    Best regards,

    Monika
