ALPR with RPi and BB-AI

crisdeodates
19 Nov 2019

Introduction

 

Law enforcement officers are often searching for vehicles that have been reported stolen, are suspected of being involved in criminal or terrorist activities, are owned by persons wanted by authorities, have failed to pay parking violations or maintain current vehicle license registration or insurance, or any of a number of other legitimate reasons. Victims and witnesses are frequently able to provide police with a description of a suspect vehicle, including in some cases a full or partial reading of its license plate number. Depending on the seriousness of the incident, officers may receive a list of vehicles of interest to their agency at the beginning of their shift, or receive radio alerts throughout the day, providing vehicle descriptions and plate numbers: stolen vehicles, vehicles registered to or associated with wanted individuals or persons of interest, vehicles attached to an AMBER alert or missing persons alert, and Be On the LookOut (BOLO) alerts. These lists can be sizable depending on the jurisdiction, population size, and criteria for the list, and can present challenges for the patrol officer.

 

Automated License Plate Readers (ALPRs) function to automatically capture an image of the vehicle’s license plate, transform that image into alphanumeric characters using optical character recognition or similar software, compare the plate number acquired to one or more databases of vehicles of interest to law enforcement and other agencies, and to alert the officer when a vehicle of interest has been observed. The automated capture, analysis, and comparison of vehicle license plates typically occur within seconds, alerting the officer almost immediately when a wanted plate is observed. It automates a tedious, distracting, and manual process that officers regularly complete in their daily operations of searching for wanted vehicles. ALPR systems vastly improve the efficiency and effectiveness of officers in identifying vehicles of interest among the hundreds or thousands they observe during a routine patrol. In doing so, ALPR can identify that needle in a haystack -- the stolen car, the vehicle wanted in connection with a robbery or child abduction, or the vehicle registered to a missing person.
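As a concrete illustration of the comparison step, checking a recognized plate against a hotlist can be as simple as a set lookup. A minimal sketch, assuming a plain-text hotlist file with one plate number per line (the file name and format are placeholders):

def loadHotlist(path="hotlist.txt"):
    # one plate number per line, e.g. "WHT 486"
    with open(path) as f:
        return {line.strip().replace(" ", "").upper() for line in f if line.strip()}

def checkPlate(plateChars, hotlist):
    plate = plateChars.replace(" ", "").upper()
    if plate in hotlist:
        print("ALERT: vehicle of interest spotted: " + plate)
    return plate in hotlist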

 

The information collected can be used by police to find out where a plate has been in the past, to determine whether a vehicle was at the scene of a crime, to identify travel patterns, and even to discover vehicles that may be associated with each other.

Automated License Plate Recognition has many uses, including:

  • Recovering stolen cars.
  • Identifying drivers with an open warrant for arrest.
  • Catching speeders.
  • Determining what cars do and do not belong in a parking garage.
  • Expediting parking by eliminating the need for human confirmation of parking passes.

Watch a short video on ALPR here:


[Source: https://www.youtube.com/watch?v=LnovknVA2cE]

 

Bill of Materials

 

  • Raspberry Pi 3B+
  • Raspberry Pi camera
  • BeagleBone AI
  • USB camera

 

Raspberry Pi vs BeagleBone AI

 

image

 

Raspberry Pi 3B+

image

[Image Source: Aliexpress]

  • Broadcom BCM2837B0 64-bit ARM Cortex-A53 Quad Core Processor SoC running @ 1.4GHz
  • 1GB RAM LPDDR2 SDRAM
  • 4x USB2.0 Ports with up to 1.2A output
  • Extended 40-pin GPIO Header
  • Video/Audio Out via 4-pole 3.5mm connector, HDMI, CSI camera, or Raw LCD (DSI)
  • Storage: MicroSD
  • Gigabit Ethernet over USB 2.0 (maximum throughput 300Mbps)
  • 2.4GHz and 5GHz IEEE 802.11 b/g/n/ac wireless LAN, Bluetooth 4.2, BLE
  • H.264, MPEG-4 decode (1080p30); H.264 encode (1080p30); OpenGL ES 1.1, 2.0 graphics
  • Low-Level Peripherals:
    • 27x GPIO
    • UART
    • I2C bus
    • SPI bus with two chip selects
    • +3.3V
    • +5V
    • Ground
  • Power Requirements: 5V @ 2.5A via a micro USB power source
  • Supports Raspbian, Windows 10 IoT Core, OpenELEC, OSMC, Pidora, Arch Linux, RISC OS, and More!
  • 85mm x 56mm x 17mm

BeagleBone AI

image

[Image Source: Element14]

  • Processor: Texas Instruments Sitara AM5729
  • Dual Arm® Cortex®-A15 microprocessor subsystem
  • 2 C66x floating-point VLIW DSPs
  • 2.5MB of on-chip L3 RAM
  • 2x dual Arm® Cortex®-M4 co-processors
  • 4x Embedded Vision Engines (EVEs)
  • 2x dual-core Programmable Real-Time Unit and Industrial Communication SubSystem (PRU-ICSS)
  • 2D-graphics accelerator (BB2D) subsystem
  • Dual-core PowerVR® SGX544™ 3D GPU
  • IVA-HD subsystem
  • BeagleBone Black mechanical and header compatibility
  • 1GB RAM and 16GB on-board eMMC flash with high-speed interface
  • USB Type-C for power and SuperSpeed dual-role controller, and USB Type-A host
  • Gigabit Ethernet, 2.4/5GHz WiFi, and Bluetooth
  • micro HDMI
  • Zero-download out-of-box software experience with Debian GNU/Linux

ALPR Working

 

We are going to build a compact ALPR using Python and OpenCV. This project is inspired by the work of Chris Dahms.

You can watch all the steps in detail in his video here:


[Source: https://www.youtube.com/watch?v=fJcl6Gw1D8k ]

 

The code has been modified for our requirements and tested on both the RPi 3B+ and the BeagleBone AI.

After the license plate is recognized, the data is written to a CSV file using a pandas DataFrame.

 

The ALPR procedure is briefly explained in the following steps:

image
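The code snippets in the following steps assume the usual imports plus a few colour constants. The colour values below follow the reference project and are assumptions, not part of the excerpts shown here:

import math
import random
import time

import cv2
import numpy as np
import pandas as pd

# BGR colour constants used when drawing contours, rectangles and text
SCALAR_WHITE = (255.0, 255.0, 255.0)
SCALAR_RED = (0.0, 0.0, 255.0)
SCALAR_YELLOW = (0.0, 255.0, 255.0)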

 

1. The image is read by the program. For testing, let's use a random vehicle image where the license plate is visible in the frame.

image

[Image Source: BMW Blog, https://www.bmwblog.com/2015/12/19/why-do-americans-put-european-license-plates-on-their-cars/ ]

 

imgOriginal  = cv2.imread("test.png")

 

2. Conversion to grayscale

image

height, width, numChannels = imgOriginal.shape
imgHSV = np.zeros((height, width, 3), np.uint8)
imgHSV = cv2.cvtColor(imgOriginal, cv2.COLOR_BGR2HSV)
imgHue, imgSaturation, imgGrayscale = cv2.split(imgHSV)
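The thresholding step below uses imgMaxContrastGrayscale, which comes from a contrast-maximisation pass on the grayscale image. A sketch of that pass, assuming the morphological top-hat/black-hat approach used in the reference implementation:

def maximizeContrast(imgGrayscale):
    # boost light-on-dark and dark-on-light detail before blurring and thresholding
    structuringElement = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    imgTopHat = cv2.morphologyEx(imgGrayscale, cv2.MORPH_TOPHAT, structuringElement)
    imgBlackHat = cv2.morphologyEx(imgGrayscale, cv2.MORPH_BLACKHAT, structuringElement)
    imgGrayscalePlusTopHat = cv2.add(imgGrayscale, imgTopHat)
    return cv2.subtract(imgGrayscalePlusTopHat, imgBlackHat)

imgMaxContrastGrayscale = maximizeContrast(imgGrayscale)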

 

3. Threshold the grayscale image

image

height, width = imgGrayscale.shape
imgBlurred = np.zeros((height, width, 1), np.uint8)
imgBlurred = cv2.GaussianBlur(imgMaxContrastGrayscale, GAUSSIAN_SMOOTH_FILTER_SIZE, 0)
imgThresh = cv2.adaptiveThreshold(imgBlurred, 255.0, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, ADAPTIVE_THRESH_BLOCK_SIZE, ADAPTIVE_THRESH_WEIGHT)
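GAUSSIAN_SMOOTH_FILTER_SIZE, ADAPTIVE_THRESH_BLOCK_SIZE and ADAPTIVE_THRESH_WEIGHT are tuning constants; the values below are taken from the reference repository and may need adjusting for other cameras (assumed values):

GAUSSIAN_SMOOTH_FILTER_SIZE = (5, 5)
ADAPTIVE_THRESH_BLOCK_SIZE = 19
ADAPTIVE_THRESH_WEIGHT = 9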

 

4. Contour detection

image

imgThreshCopy = imgThresh.copy()
imgContours, contours, npaHierarchy = cv2.findContours(imgThreshCopy, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)   # find all contours
height, width = imgThresh.shape
imgContours = np.zeros((height, width, 3), np.uint8)
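The three-value return of cv2.findContours is the OpenCV 3.x signature. If OpenCV 4.x is installed on the Pi or BeagleBone, the same call returns only two values, so the line would become:

contours, npaHierarchy = cv2.findContours(imgThreshCopy, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)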

 

5. Detecting contours with possible characters

image

class PossibleChar:
    def __init__(self, _contour):
        self.contour = _contour
        self.boundingRect = cv2.boundingRect(self.contour)
        [intX, intY, intWidth, intHeight] = self.boundingRect
        self.intBoundingRectX = intX
        self.intBoundingRectY = intY
        self.intBoundingRectWidth = intWidth
        self.intBoundingRectHeight = intHeight
        self.intBoundingRectArea = self.intBoundingRectWidth * self.intBoundingRectHeight
        self.intCenterX = (self.intBoundingRectX + self.intBoundingRectX + self.intBoundingRectWidth) / 2
        self.intCenterY = (self.intBoundingRectY + self.intBoundingRectY + self.intBoundingRectHeight) / 2
        self.fltDiagonalSize = math.sqrt((self.intBoundingRectWidth ** 2) + (self.intBoundingRectHeight ** 2))
        self.fltAspectRatio = float(self.intBoundingRectWidth) / float(self.intBoundingRectHeight)

listOfPossibleChars = []
intCountOfPossibleChars = 0

for i in range(0, len(contours)):
    cv2.drawContours(imgContours, contours, i, SCALAR_WHITE)
    possibleChar = PossibleChar(contours[i])

    # keep only contours whose size and aspect ratio look like a character
    if (possibleChar.intBoundingRectArea > MIN_PIXEL_AREA and
            possibleChar.intBoundingRectWidth > MIN_PIXEL_WIDTH and
            possibleChar.intBoundingRectHeight > MIN_PIXEL_HEIGHT and
            MIN_ASPECT_RATIO < possibleChar.fltAspectRatio < MAX_ASPECT_RATIO):
        intCountOfPossibleChars = intCountOfPossibleChars + 1           # increment count of possible chars
        listOfPossibleChars.append(possibleChar)

# redraw only the contours that passed the character checks
imgContours = np.zeros((height, width, 3), np.uint8)
contours = []
for possibleChar in listOfPossibleChars:
    contours.append(possibleChar.contour)
cv2.drawContours(imgContours, contours, -1, SCALAR_WHITE)
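The size and aspect-ratio limits above decide whether a contour is plausibly a character. The values below follow the reference repository (assumed, and worth tuning for your own images):

MIN_PIXEL_WIDTH = 2
MIN_PIXEL_HEIGHT = 8
MIN_ASPECT_RATIO = 0.25
MAX_ASPECT_RATIO = 1.0
MIN_PIXEL_AREA = 80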

 

6. Regrouping with matching characters

image

def findListOfMatchingChars(possibleChar, listOfChars):
    listOfMatchingChars = []   
    for possibleMatchingChar in listOfChars:       
        if possibleMatchingChar == possibleChar:  
            continue                              
        fltDistanceBetweenChars = distanceBetweenChars(possibleChar, possibleMatchingChar)
        fltAngleBetweenChars = angleBetweenChars(possibleChar, possibleMatchingChar)
        fltChangeInArea = float(abs(possibleMatchingChar.intBoundingRectArea - possibleChar.intBoundingRectArea)) / float(possibleChar.intBoundingRectArea)
        fltChangeInWidth = float(abs(possibleMatchingChar.intBoundingRectWidth - possibleChar.intBoundingRectWidth)) / float(possibleChar.intBoundingRectWidth)
        fltChangeInHeight = float(abs(possibleMatchingChar.intBoundingRectHeight - possibleChar.intBoundingRectHeight)) / float(possibleChar.intBoundingRectHeight)

        if (fltDistanceBetweenChars < (possibleChar.fltDiagonalSize * MAX_DIAG_SIZE_MULTIPLE_AWAY) and
            fltAngleBetweenChars < MAX_ANGLE_BETWEEN_CHARS and
            fltChangeInArea < MAX_CHANGE_IN_AREA and
            fltChangeInWidth < MAX_CHANGE_IN_WIDTH and
            fltChangeInHeight < MAX_CHANGE_IN_HEIGHT):
            listOfMatchingChars.append(possibleMatchingChar)

    return listOfMatchingChars

def distanceBetweenChars(firstChar, secondChar):
    intX = abs(firstChar.intCenterX - secondChar.intCenterX)
    intY = abs(firstChar.intCenterY - secondChar.intCenterY)
    return math.sqrt((intX ** 2) + (intY ** 2))

def angleBetweenChars(firstChar, secondChar):
    fltAdj = float(abs(firstChar.intCenterX - secondChar.intCenterX))
    fltOpp = float(abs(firstChar.intCenterY - secondChar.intCenterY))
    if fltAdj != 0.0:                           
        fltAngleInRad = math.atan(fltOpp / fltAdj)      
    else:
        fltAngleInRad = 1.5708                         
    fltAngleInDeg = fltAngleInRad * (180.0 / math.pi) 
    return fltAngleInDeg

def findListOfListsOfMatchingChars(listOfPossibleChars):
    listOfListsOfMatchingChars = []

    for possibleChar in listOfPossibleChars:
        listOfMatchingChars = findListOfMatchingChars(possibleChar, listOfPossibleChars)
        listOfMatchingChars.append(possibleChar)
        if len(listOfMatchingChars) < MIN_NUMBER_OF_MATCHING_CHARS:
            continue
        listOfListsOfMatchingChars.append(listOfMatchingChars)
        listOfPossibleCharsWithCurrentMatchesRemoved = list(set(listOfPossibleChars) - set(listOfMatchingChars))
        recursiveListOfListsOfMatchingChars = findListOfListsOfMatchingChars(listOfPossibleCharsWithCurrentMatchesRemoved)
        for recursiveListOfMatchingChars in recursiveListOfListsOfMatchingChars:
            listOfListsOfMatchingChars.append(recursiveListOfMatchingChars)
        break

    return listOfListsOfMatchingChars

listOfListsOfMatchingCharsInScene = findListOfListsOfMatchingChars(listOfPossibleChars)
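The matching thresholds used by findListOfMatchingChars are also tuning constants; the values below are the ones used in the reference repository (assumed):

MAX_DIAG_SIZE_MULTIPLE_AWAY = 5.0
MAX_CHANGE_IN_AREA = 0.5
MAX_CHANGE_IN_WIDTH = 0.8
MAX_CHANGE_IN_HEIGHT = 0.2
MAX_ANGLE_BETWEEN_CHARS = 12.0
MIN_NUMBER_OF_MATCHING_CHARS = 3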

 

7. Detection of potential license plates

image

imgContours = np.zeros((height, width, 3), np.uint8)
for listOfMatchingChars in listOfListsOfMatchingCharsInScene:
    intRandomBlue = random.randint(0, 255)
    intRandomGreen = random.randint(0, 255)
    intRandomRed = random.randint(0, 255)
    contours = []

    for matchingChar in listOfMatchingChars:
        contours.append(matchingChar.contour)

    # each group of matching characters is drawn in its own random colour
    cv2.drawContours(imgContours, contours, -1, (intRandomBlue, intRandomGreen, intRandomRed))

image

listOfPossiblePlates = []
for listOfMatchingChars in listOfListsOfMatchingCharsInScene:      
    possiblePlate = extractPlate(imgOriginalScene, listOfMatchingChars) 
    if possiblePlate.imgPlate is not None:                        
        listOfPossiblePlates.append(possiblePlate)  

for i in range(0, len(listOfPossiblePlates)):
    p2fRectPoints = cv2.boxPoints(listOfPossiblePlates[i].rrLocationOfPlateInScene)
    cv2.line(imgContours, tuple(p2fRectPoints[0]), tuple(p2fRectPoints[1]), Main.SCALAR_RED, 2)
    cv2.line(imgContours, tuple(p2fRectPoints[1]), tuple(p2fRectPoints[2]), Main.SCALAR_RED, 2)
    cv2.line(imgContours, tuple(p2fRectPoints[2]), tuple(p2fRectPoints[3]), Main.SCALAR_RED, 2)
    cv2.line(imgContours, tuple(p2fRectPoints[3]), tuple(p2fRectPoints[0]), Main.SCALAR_RED, 2)
    print("possible plate " + str(i)")
    cv2.imshow("Plates", listOfPossiblePlates[i].imgPlate)

 

8. Applying character recognition on the detected plate using the Tesseract API

image

listOfPossiblePlates = DetectChars.detectCharsInPlates(listOfPossiblePlates)

def detectCharsInPlates(listOfPossiblePlates):
    intPlateCounter = 0
    imgContours = None
    contours = []

    if len(listOfPossiblePlates) == 0:          
        return listOfPossiblePlates            
    for possiblePlate in listOfPossiblePlates:      
        possiblePlate.imgThresh = cv2.resize(possiblePlate.imgThresh, (0, 0), fx = 1.6, fy = 1.6)
        thresholdValue, possiblePlate.imgThresh = cv2.threshold(possiblePlate.imgThresh, 0.0, 255.0, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

        listOfPossibleCharsInPlate = findPossibleCharsInPlate(possiblePlate.imgGrayscale, possiblePlate.imgThresh)

        listOfListsOfMatchingCharsInPlate = findListOfListsOfMatchingChars(listOfPossibleCharsInPlate)

        if (len(listOfListsOfMatchingCharsInPlate) == 0):
            possiblePlate.strChars = ""
            continue

        for i in range(0, len(listOfListsOfMatchingCharsInPlate)):                             
            listOfListsOfMatchingCharsInPlate[i].sort(key = lambda matchingChar: matchingChar.intCenterX)       
            listOfListsOfMatchingCharsInPlate[i] = removeInnerOverlappingChars(listOfListsOfMatchingCharsInPlate[i])             

        intLenOfLongestListOfChars = 0
        intIndexOfLongestListOfChars = 0

        for i in range(0, len(listOfListsOfMatchingCharsInPlate)):
            if len(listOfListsOfMatchingCharsInPlate[i]) > intLenOfLongestListOfChars:
                intLenOfLongestListOfChars = len(listOfListsOfMatchingCharsInPlate[i])
                intIndexOfLongestListOfChars = i

        longestListOfMatchingCharsInPlate = listOfListsOfMatchingCharsInPlate[intIndexOfLongestListOfChars]

        possiblePlate.strChars = recognizeCharsInPlate(possiblePlate.imgThresh, longestListOfMatchingCharsInPlate)

    return listOfPossiblePlates

def findPossibleCharsInPlate(imgGrayscale, imgThresh):
    listOfPossibleChars = []                        
    contours = []
    imgThreshCopy = imgThresh.copy()
    imgContours, contours, npaHierarchy = cv2.findContours(imgThreshCopy, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:                
        possibleChar = PossibleChar.PossibleChar(contour)

        if checkIfPossibleChar(possibleChar):              
            listOfPossibleChars.append(possibleChar)     

    return listOfPossibleChars

def checkIfPossibleChar(possibleChar):
    if (possibleChar.intBoundingRectArea > MIN_PIXEL_AREA and
        possibleChar.intBoundingRectWidth > MIN_PIXEL_WIDTH and possibleChar.intBoundingRectHeight > MIN_PIXEL_HEIGHT and
        MIN_ASPECT_RATIO < possibleChar.fltAspectRatio and possibleChar.fltAspectRatio < MAX_ASPECT_RATIO):
        return True
    else:
        return False

def findListOfListsOfMatchingChars(listOfPossibleChars):
    listOfListsOfMatchingChars = []                  
    for possibleChar in listOfPossibleChars:                        
        listOfMatchingChars = findListOfMatchingChars(possibleChar, listOfPossibleChars)       
        listOfMatchingChars.append(possibleChar)                   

        if len(listOfMatchingChars) < MIN_NUMBER_OF_MATCHING_CHARS:    
            continue                            
        listOfListsOfMatchingChars.append(listOfMatchingChars)     
        listOfPossibleCharsWithCurrentMatchesRemoved = []
        listOfPossibleCharsWithCurrentMatchesRemoved = list(set(listOfPossibleChars) - set(listOfMatchingChars))
        recursiveListOfListsOfMatchingChars = findListOfListsOfMatchingChars(listOfPossibleCharsWithCurrentMatchesRemoved)
        for recursiveListOfMatchingChars in recursiveListOfListsOfMatchingChars:        
            listOfListsOfMatchingChars.append(recursiveListOfMatchingChars)            
        break

    return listOfListsOfMatchingChars

def findListOfMatchingChars(possibleChar, listOfChars):
    listOfMatchingChars = []                
    for possibleMatchingChar in listOfChars:                
        if possibleMatchingChar == possibleChar:    
            continue                                
        fltDistanceBetweenChars = distanceBetweenChars(possibleChar, possibleMatchingChar)
        fltAngleBetweenChars = angleBetweenChars(possibleChar, possibleMatchingChar)
        fltChangeInArea = float(abs(possibleMatchingChar.intBoundingRectArea - possibleChar.intBoundingRectArea)) / float(possibleChar.intBoundingRectArea)
        fltChangeInWidth = float(abs(possibleMatchingChar.intBoundingRectWidth - possibleChar.intBoundingRectWidth)) / float(possibleChar.intBoundingRectWidth)
        fltChangeInHeight = float(abs(possibleMatchingChar.intBoundingRectHeight - possibleChar.intBoundingRectHeight)) / float(possibleChar.intBoundingRectHeight)

        if (fltDistanceBetweenChars < (possibleChar.fltDiagonalSize * MAX_DIAG_SIZE_MULTIPLE_AWAY) and
            fltAngleBetweenChars < MAX_ANGLE_BETWEEN_CHARS and
            fltChangeInArea < MAX_CHANGE_IN_AREA and
            fltChangeInWidth < MAX_CHANGE_IN_WIDTH and
            fltChangeInHeight < MAX_CHANGE_IN_HEIGHT):
            listOfMatchingChars.append(possibleMatchingChar)  
      
    return listOfMatchingChars                  

def removeInnerOverlappingChars(listOfMatchingChars):
    listOfMatchingCharsWithInnerCharRemoved = list(listOfMatchingChars)                

    for currentChar in listOfMatchingChars:
        for otherChar in listOfMatchingChars:
            if currentChar != otherChar:        
                if distanceBetweenChars(currentChar, otherChar) < (currentChar.fltDiagonalSize * MIN_DIAG_SIZE_MULTIPLE_AWAY):                                 
                    if currentChar.intBoundingRectArea < otherChar.intBoundingRectArea:         
                        if currentChar in listOfMatchingCharsWithInnerCharRemoved:             
                            listOfMatchingCharsWithInnerCharRemoved.remove(currentChar)         
                    else:                                                                       
                        if otherChar in listOfMatchingCharsWithInnerCharRemoved:                
                            listOfMatchingCharsWithInnerCharRemoved.remove(otherChar)

    return listOfMatchingCharsWithInnerCharRemoved
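recognizeCharsInPlate is where the characters are actually read. Since this build uses the Tesseract API, a minimal sketch with pytesseract on the thresholded plate image might look like this (the page-segmentation mode and character whitelist are assumptions, not the exact configuration used here):

import pytesseract

def recognizeCharsInPlate(imgThresh, listOfMatchingChars):
    # listOfMatchingChars is kept for compatibility with the calling code;
    # here the whole thresholded plate is passed to Tesseract as a single text line
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    strChars = pytesseract.image_to_string(imgThresh, config=config)
    return "".join(ch for ch in strChars if ch.isalnum())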

 

9. Overlay the license plate number onto the original image

image

listOfPossiblePlates.sort(key = lambda possiblePlate: len(possiblePlate.strChars), reverse = True)
licPlate = listOfPossiblePlates[0]          # the plate with the most recognized characters is taken as the actual plate
cv2.imshow("imgPlate", licPlate.imgPlate)
cv2.imshow("imgThresh", licPlate.imgThresh)

if len(licPlate.strChars) == 0:
    print("\nno characters were detected\n\n")
else:
    drawRedRectangleAroundPlate(imgOriginalScene, licPlate)
    print("\nlicense plate read from image = " + licPlate.strChars + "\n")
    print("----------------------------------------")
    writeLicensePlateCharsOnImage(imgOriginalScene, licPlate)
    cv2.imshow("imgOriginalScene", imgOriginalScene)

def drawRedRectangleAroundPlate(imgOriginalScene, licPlate):
    p2fRectPoints = cv2.boxPoints(licPlate.rrLocationOfPlateInScene)         

    cv2.line(imgOriginalScene, tuple(p2fRectPoints[0]), tuple(p2fRectPoints[1]), SCALAR_RED, 2)         
    cv2.line(imgOriginalScene, tuple(p2fRectPoints[1]), tuple(p2fRectPoints[2]), SCALAR_RED, 2)
    cv2.line(imgOriginalScene, tuple(p2fRectPoints[2]), tuple(p2fRectPoints[3]), SCALAR_RED, 2)
    cv2.line(imgOriginalScene, tuple(p2fRectPoints[3]), tuple(p2fRectPoints[0]), SCALAR_RED, 2)

def writeLicensePlateCharsOnImage(imgOriginalScene, licPlate):
    ptCenterOfTextAreaX = 0                             
    ptCenterOfTextAreaY = 0
    ptLowerLeftTextOriginX = 0                          
    ptLowerLeftTextOriginY = 0
    sceneHeight, sceneWidth, sceneNumChannels = imgOriginalScene.shape
    plateHeight, plateWidth, plateNumChannels = licPlate.imgPlate.shape
    intFontFace = cv2.FONT_HERSHEY_SIMPLEX                      
    fltFontScale = float(plateHeight) / 30.0                    
    intFontThickness = int(round(fltFontScale * 1.5))           
    textSize, baseline = cv2.getTextSize(licPlate.strChars, intFontFace, fltFontScale, intFontThickness) 
    ( (intPlateCenterX, intPlateCenterY), (intPlateWidth, intPlateHeight), fltCorrectionAngleInDeg ) = licPlate.rrLocationOfPlateInScene
    intPlateCenterX = int(intPlateCenterX)              
    intPlateCenterY = int(intPlateCenterY)
    ptCenterOfTextAreaX = int(intPlateCenterX)        
    if intPlateCenterY < (sceneHeight * 0.75):                                                 
        ptCenterOfTextAreaY = int(round(intPlateCenterY)) + int(round(plateHeight * 1.6))      
    else:                                                                                       
        ptCenterOfTextAreaY = int(round(intPlateCenterY)) - int(round(plateHeight * 1.6))     
    textSizeWidth, textSizeHeight = textSize                
    ptLowerLeftTextOriginX = int(ptCenterOfTextAreaX - (textSizeWidth / 2))           
    ptLowerLeftTextOriginY = int(ptCenterOfTextAreaY + (textSizeHeight / 2))   
    cv2.putText(imgOriginalScene, licPlate.strChars, (ptLowerLeftTextOriginX, ptLowerLeftTextOriginY), intFontFace, fltFontScale, SCALAR_YELLOW, intFontThickness)

 

10. Write the data to a CSV file

image

raw_data = {'Date': [time.asctime( time.localtime(time.time()) )], 
    'PlateNumber': [licPlate.strChars]}
df = pd.DataFrame(raw_data, columns = ['Date', 'PlateNumber'])
df.to_csv('data.csv')
print("Data written to file")

Conclusion

 

  1. The testing was initially done with the Raspberry Pi 3B+ and was later extended to the BeagleBone AI.
  2. Some characters are misread, e.g. the algorithm wrongly interpreted 'B' as '8' and also interpreted the left section of the plate as '1'.

          However, this can be rectified by using a different model or by training a model on our own dataset, which is a very painstaking task.

  3. The testing was done on a series of still images from various internet sources. Testing with a real-time video source is yet to be done.

 

What Next

 

The program will be extended to relay the information in real time to a cloud server so that it can be accessed by permitted personnel.
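A minimal sketch of how that relay could look, assuming the requests library and a placeholder HTTPS endpoint (the URL and payload fields are hypothetical):

import time
import requests

def relayPlate(plateChars):
    payload = {'Date': time.asctime(time.localtime(time.time())), 'PlateNumber': plateChars}
    try:
        # placeholder endpoint; replace with the actual cloud server URL
        requests.post("https://example.com/alpr/plates", json=payload, timeout=5)
    except requests.RequestException as err:
        print("relay failed: " + str(err))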

 

References

 

ALPR:

  • https://en.wikipedia.org/wiki/Automatic_number-plate_recognition
  • https://www.theiacp.org/resources/about-alpr
  • https://whatis.techtarget.com/definition/Automated-License-Plate-Recognition-ALPR
  • https://www.eff.org/pages/automated-license-plate-readers-alpr

SBCs:

  • https://beagleboard.org/ai
  • https://magpi.raspberrypi.org/articles/raspberry-pi-3bplus-specs-benchmarks

Algorithm:

  • https://www.youtube.com/watch?v=fJcl6Gw1D8k
  • https://github.com/MicrocontrollersAndMore/OpenCV_3_License_Plate_Recognition_Python