Embedded and Microcontrollers
Embedded Forum

Forum Thread Details
  • Replies: 18
  • Subscribers: 470
  • Views: 3216
  • Tags: firmware, ip_iot, embedded

Writing protocols for bare-metal C

ipv1 over 5 years ago

Over the years, I have come across the need to connect subsystems together with the likes of UART, I2C, CAN, LIN, RS485, MODBUS and whatnot. A few years ago, I also wrote about implementing custom protocols that are simpler versions of the ones stated above, and recently I jumped into a project that uses a custom LIN implementation. By custom, I mean that it does not follow the rules of LIN in terms of communication, but uses the same electrical base and some other pieces.

 

The existing code base is heavily reliant on the processor registers etc., and the guy who wrote it can't/won't explain the structure of the system, as a lot of it is unplanned, undocumented patches. It works, but it's a lot of C code mixed with "other things".

 

My question here is about a standard way of writing things like these. I am talking about implementing multi-byte exchange protocols: what should the do's and don'ts, the correct way, the thought processes, etc. be when writing this stuff? Finite state machine code with non-blocking calls is where I usually land, but I want your thoughts on the process.

 

What is your way of writing protocols without an OS, and how would you accomplish multiple tasks without a scheduler?
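
For illustration, here is a minimal sketch of the non-blocking state machine approach described above: a tiny framed-message receiver that is polled from the main loop and never waits. The frame format and every name in it (uart_try_read_byte(), handle_frame(), the struct fields) are invented for the example, not taken from any particular part or library.

```c
#include <stdbool.h>
#include <stdint.h>

/* Placeholder for the platform UART driver: returns true and writes one
 * received byte into *out if data is waiting, false otherwise.
 * (Hypothetical name, not from any vendor library.) */
extern bool uart_try_read_byte(uint8_t *out);

/* Called when a complete, valid frame has been received. */
extern void handle_frame(const uint8_t *payload, uint8_t len);

/* Hypothetical frame format: 0x7E start byte, length, payload, 8-bit sum. */
enum rx_state { RX_WAIT_START, RX_WAIT_LEN, RX_PAYLOAD, RX_CHECKSUM };

struct rx_fsm {
    enum rx_state state;
    uint8_t len, pos, sum;
    uint8_t payload[32];
};

/* Non-blocking: consumes only the bytes currently available, then returns.
 * Call once per pass of the main loop. */
void rx_fsm_poll(struct rx_fsm *f)
{
    uint8_t b;
    while (uart_try_read_byte(&b)) {
        switch (f->state) {
        case RX_WAIT_START:
            if (b == 0x7E) f->state = RX_WAIT_LEN;
            break;
        case RX_WAIT_LEN:
            if (b == 0u || b > sizeof f->payload) { f->state = RX_WAIT_START; break; }
            f->len = b; f->pos = 0; f->sum = 0;
            f->state = RX_PAYLOAD;
            break;
        case RX_PAYLOAD:
            f->payload[f->pos++] = b;
            f->sum = (uint8_t)(f->sum + b);
            if (f->pos == f->len) f->state = RX_CHECKSUM;
            break;
        case RX_CHECKSUM:
            if (b == f->sum) handle_frame(f->payload, f->len);
            f->state = RX_WAIT_START;   /* good or bad, resynchronise */
            break;
        }
    }
}
```

In a superloop it would simply be called once per pass alongside the other task functions, e.g. for (;;) { rx_fsm_poll(&link); adc_task(); ui_task(); }, where adc_task() and ui_task() are equally hypothetical.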


Replies

  • shabaz over 5 years ago in reply to ipv1

    Hi Inderpreet,

    Regarding:

    there is rarely any implementation or code for protocols

    It usually requires several things, so there's some mix-and-match that can occur, using features as needed. Ordinarily the bare minimum is a state machine, but also some code that can determine what message has arrived, and in the other direction how to encode an outgoing message. Also, at a minimum, a function per state to map from what arrived to what needs to occur next. You'll also (perhaps) need a way to handle more than one message or event; one way is to have a list capability and for your engine to go through each item in the list. And, to stop going insane : ) some logging/troubleshooting capability. You mention bare metal, but tasks could help (they don't have to be preemptive), so some OS and middleware is needed, whether you use existing stuff or create bits of your own. There's no single pattern for protocols, but you can get a good idea by looking at open source for protocols of similar complexity to see what they used.

    I can point you toward an example of a complex open source system; you're unlikely to want to use it unless it's a major project, but it will give you an idea of the palette of things you'd want to be able to deal with protocols - not all will apply. Also, the example is C++, and many of the features there won't work the same if you're trying to implement something more cut-down in C. The complex example is the Adaptive Communications Environment, which was designed for implementing real-time stuff including protocols, and it's good enough for satellites apparently. I've used it for one project. But it's large enough that it needs (say) Linux. Looking at that example, you'll get an idea of the types of 'helper' things (for want of a better way to describe it) your embedded code might need to implement protocols. If it's a trivially simple protocol, it might not need as much helper stuff.
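
    As a rough sketch of the pieces listed above (a decoded-event type, one handler function per state mapping "what arrived" to "what happens next", and an engine that walks a small list of pending events), here is some illustrative C. Every name, structure and size in it is invented for the example:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A decoded incoming message/event (fields invented for the example). */
struct proto_event {
    uint8_t type;                 /* what arrived */
    uint8_t len;
    uint8_t data[16];
};

enum proto_state { ST_IDLE, ST_WAIT_ACK, ST_NUM_STATES };

/* One handler per state: maps "what arrived" to "what occurs next"
 * and returns the next state. */
typedef enum proto_state (*state_handler)(const struct proto_event *ev);

static enum proto_state on_idle(const struct proto_event *ev)
{
    (void)ev;                     /* e.g. decode, then encode and send a request */
    return ST_WAIT_ACK;
}

static enum proto_state on_wait_ack(const struct proto_event *ev)
{
    (void)ev;                     /* e.g. check for the expected ack, log if not */
    return ST_IDLE;
}

static const state_handler handlers[ST_NUM_STATES] = {
    [ST_IDLE]     = on_idle,
    [ST_WAIT_ACK] = on_wait_ack,
};

/* A small list of pending events; the engine drains it each pass. */
static struct proto_event pending[8];
static size_t pending_count;
static enum proto_state current_state = ST_IDLE;

/* Called by the receive path once a message has been decoded. */
bool proto_post_event(const struct proto_event *ev)
{
    if (pending_count == sizeof pending / sizeof pending[0])
        return false;             /* list full: caller can log and drop */
    pending[pending_count++] = *ev;
    return true;
}

/* Called from the main loop: feed each pending event to the current
 * state's handler, which decides the next state. */
void proto_engine_run(void)
{
    for (size_t i = 0; i < pending_count; i++)
        current_state = handlers[current_state](&pending[i]);
    pending_count = 0;
}
```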

  • ipv1 over 5 years ago in reply to shabaz

    I understand; your inputs are quite useful. I think I have quite a bit of the answer I was looking for, and I have another question moving forward.

     

    So my take on this has been state machines, but with the constraint that most if not all of the code will be non-blocking calls. This means that signals are replaced by interrupts, and there is more than one of them. I have done projects with more than a dozen tasks running on a controller without the need for a scheduler, as state machine calls would be running in the main loop and there would be serial, ADC, timer and counter interrupts doing things to change states and affect buffers. This makes sure that I can get away with controllers with limited resources, as well as have total control over the system. Logging via a spare UART is always a good option, and the challenge boils down to writing the layers necessary for most of the FSMs to talk to each other. That having been said, my question now is...

     

    What is the standard way of writing code for these projects? Does a good practice even exist, and if not, am I over-engineering things?
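
    As an aside on the point above about the layers needed for the FSMs to talk to each other: one common such layer is a small fixed-size message queue that an interrupt or one state machine posts into and another drains from its main-loop slot. A sketch under that assumption follows; the message layout, sizes and names are made up, and irq_save()/irq_restore() stand in for whatever interrupt-masking intrinsics the target part provides:

```c
#include <stdbool.h>
#include <stdint.h>

/* Placeholders for the target's interrupt masking intrinsics. */
extern uint32_t irq_save(void);           /* disable interrupts, return old state */
extern void     irq_restore(uint32_t f);  /* put the old state back               */

/* A small message passed between state machines (layout invented). */
struct msg {
    uint8_t src;                          /* which FSM or ISR posted it  */
    uint8_t id;                           /* message identifier          */
    uint8_t payload;                      /* keep it tiny on small MCUs  */
};

#define MQ_LEN 8u

struct msg_queue {
    struct msg items[MQ_LEN];
    volatile uint8_t head;                /* next free slot   */
    volatile uint8_t tail;                /* next unread slot */
};

/* Post a message; returns false if the queue is full.
 * Safe from an ISR or from main-loop code because of the masking. */
bool mq_post(struct msg_queue *q, struct msg m)
{
    bool ok = false;
    uint32_t f = irq_save();
    uint8_t next = (uint8_t)((q->head + 1u) % MQ_LEN);
    if (next != q->tail) {
        q->items[q->head] = m;
        q->head = next;
        ok = true;
    }
    irq_restore(f);
    return ok;
}

/* Drain one message; the consumer FSM calls this from its main-loop slot. */
bool mq_get(struct msg_queue *q, struct msg *out)
{
    bool ok = false;
    uint32_t f = irq_save();
    if (q->tail != q->head) {
        *out = q->items[q->tail];
        q->tail = (uint8_t)((q->tail + 1u) % MQ_LEN);
        ok = true;
    }
    irq_restore(f);
    return ok;
}
```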

  • shabaz over 5 years ago in reply to ipv1

    Hi Inderpreet,

     

    From what I've seen (which could be very different to what others have seen), that is a standard way of writing code, i.e. a big loop in main() in a microcontroller, consisting of all the tasks (say, each one being a function call in that loop).

    Nothing must take too long, and adding more code in the future will slow the loop rate for all tasks. And use interrupts for the things which must override the loop, for example the most time-critical tasks.

    This is a very traditional way, but I've seen it deployed for fairly complex products.
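
    As a bare-bones sketch of that "big loop in main()" shape: each task is a quick, non-blocking function call, and a millisecond tick (assumed here to be maintained by a timer interrupt) paces the slower ones. All the task names and periods are invented for the example:

```c
#include <stdint.h>

/* Assumed to be incremented once per millisecond by a timer interrupt. */
extern volatile uint32_t g_tick_ms;

/* Each task does a small, bounded amount of work and returns quickly.
 * These are placeholders for whatever the product actually needs. */
extern void protocol_task(void);   /* e.g. run the receive state machine  */
extern void sensor_task(void);     /* e.g. kick off / collect an ADC read */
extern void ui_task(void);         /* e.g. update LEDs, scan a button     */

int main(void)
{
    uint32_t next_sensor = 0u, next_ui = 0u;

    for (;;) {
        /* Truly time-critical work lives in interrupts; the rest is here. */
        protocol_task();                            /* every pass          */

        uint32_t now = g_tick_ms;
        if ((int32_t)(now - next_sensor) >= 0) {    /* roughly every 10 ms */
            next_sensor = now + 10u;
            sensor_task();
        }
        if ((int32_t)(now - next_ui) >= 0) {        /* roughly every 100 ms */
            next_ui = now + 100u;
            ui_task();
        }
    }
}
```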

    But once you've got more resources, and perhaps more complexity in the code too, then it can be advantageous to use an OS. In the past it pretty much had to be a co-operative OS, due to the lack of resources. For example, the original uC/OS was co-operative; I don't know what the later ones are. And fairly large, massively feature-rich network hardware was running on co-operative OSes until around 10 years ago. Incidentally, there are maybe a dozen (at least) different flavors of co-operative and pre-emptive; it isn't a binary choice: there are different scheduling algorithms, each with benefits/disadvantages.

    Nowadays, given the choice, some teams will pick an OS, most likely pre-emptive, but that doesn't mean the old methods always have to be thrown out. However, moving to a modern OS means you don't have to reinvent the wheel for things like passing messages, reducing the risk of errors and crashes. And if the OS comes with features like upgrades, then your code can take advantage of them. Usually there's a long list of features, and the things that commonly cause issues in C can be reduced by using the OS features. Plus, anything that reduces lines of code has a chance of reducing bugs. But on the flip side, the old-school big main() loop is even successfully present in some military gear... and that has to be reliable too.

  • michaelkellett over 5 years ago in reply to ipv1

    There's a lot to be said for the "Superloop plus interrupts" architecture - but like everything it has its place and gets used in the wrong places.

     

    For simple systems it has lots of advantages - not the least of which is that ALL the code in the project can be known and accessible - which is almost never the case as soon as an RTOS or OS gets used.

     

    It's important not to do too much in interrupts; ideally, use them to look after buffering but do as little processing as possible.

     

     

    MK
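
    To make the point about keeping interrupts small concrete, here is a sketch of a receive ISR that does nothing but push the byte into a ring buffer, leaving all parsing to the main loop. The register-access and ISR names are placeholders, since they differ from part to part:

```c
#include <stdbool.h>
#include <stdint.h>

#define RX_BUF_SIZE 64u                  /* power of two for cheap wrapping */

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head;         /* written only by the ISR        */
static volatile uint8_t rx_tail;         /* written only by the main loop  */

/* Placeholder for reading the UART data register on the target part. */
extern uint8_t uart_read_data_register(void);

/* Receive interrupt: grab the byte, store it, get out. No parsing here. */
void uart_rx_isr(void)
{
    uint8_t b = uart_read_data_register();
    uint8_t next = (uint8_t)((rx_head + 1u) & (RX_BUF_SIZE - 1u));
    if (next != rx_tail) {               /* if full, the byte is dropped    */
        rx_buf[rx_head] = b;
        rx_head = next;
    }
}

/* Main-loop side: non-blocking fetch of one buffered byte, if any. */
bool uart_get_byte(uint8_t *out)
{
    if (rx_tail == rx_head)
        return false;
    *out = rx_buf[rx_tail];
    rx_tail = (uint8_t)((rx_tail + 1u) & (RX_BUF_SIZE - 1u));
    return true;
}
```

    Because each index has exactly one writer (head in the ISR, tail in the main loop) and both are single bytes, this particular shape gets away without a critical section on most small parts; wider indices would need a short one.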

  • Andrew J over 5 years ago

    My take on this would be that if you find someone presenting you with a ‘need’ for a new protocol, or you find yourself trying to create a new protocol, you should go back into the design process and ask where it went wrong.  That’s not to say a new protocol won’t ever be required, but given what is already established in industry, you should stick with what’s there.  Really, the only time I can see a need to create a new protocol is if you’ve invented a new industry!  An existing protocol may not be perfect, may be too heavyweight, etc., but there it is, already in use.  If I created something new in an existing marketplace it would end up so niche that it would likely fail to gain any traction.  At the least, you’d have customers coming back demanding it talked nicely to their other equipment!

     

    And to the point on interrupt-driven processing, I’d say it has its place, but it drives development, testing and support complexity, so its appropriateness must be considered - it’s not a standardised design pattern.  Timing, blocking, synchronisation and reproducibility all become issues to resolve, and the more your software is interrupt driven, the greater the complexity level.  Personally, I’d only ever drive ‘imperatives’ as interrupts (i.e. this thing MUST be done at the time of the interrupt and can’t wait) and not conflate the concepts of events and interrupts.

     

    EDIT: When I wrote that paragraph above, it hadn't crossed my mind to think 'real-time' - it's nothing I have any experience with - so I would have to caveat that paragraph accordingly. 

  • johnbeetem over 5 years ago

    I worked intensively on a multi-protocol communications device with (thank God) no operating system.  It was basically the approach described by Michael Kellett:

    There's a lot to be said for the "Superloop plus interrupts" architecture - but like everything it has its place and gets used in the wrong places.

     

    For simple systems it has lots of advantages - not the least of which is that ALL the code in the project can be known and accessible - which is almost never the case as soon as an RTOS or OS gets used.

     

    It's important not to do too much in interrupts; ideally, use them to look after buffering but do as little processing as possible.

    Our basic data unit was a "buffer".  Input data went into a buffer using DMA and when the buffer was "complete", which meant different things for different serial protocols, an interrupt would occur and the ISR (interrupt service routine) would append the buffer to the "Action Queue" for processing by the main loop.  The main loop, which ran in foreground, would grab the next buffer from the Action Queue, perform the action indicated by a 16-bit action number, and then queue up the buffer for a different output port or return the buffer to the free buffer list.
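
    A compressed sketch of that shape (fixed buffer descriptors, a free list, an ISR-side append to the Action Queue, and a foreground step that dispatches on the action number) is shown below. The structures and names are invented to illustrate the description, not taken from the original product, and for brevity the foreground step always returns the buffer to the free list rather than forwarding it to an output port:

```c
#include <stddef.h>
#include <stdint.h>

/* Placeholders for the target's interrupt masking primitives. */
extern void irq_disable(void);
extern void irq_enable(void);

struct buffer {
    struct buffer *next;          /* link for the free list / Action Queue */
    uint16_t       action;        /* what the main loop should do with it  */
    uint16_t       len;
    uint8_t        data[128];
};

static struct buffer  pool[8];
static struct buffer *free_list;
static struct buffer *action_head, *action_tail;

void buffers_init(void)
{
    for (size_t i = 0; i < sizeof pool / sizeof pool[0]; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

/* ISR side: take an empty buffer for the next DMA transfer.  No lock is
 * needed here because the foreground only touches the lists with
 * interrupts masked. */
struct buffer *buffer_alloc(void)
{
    struct buffer *b = free_list;
    if (b) free_list = b->next;
    return b;
}

/* ISR side: the DMA "buffer complete" interrupt appends the buffer,
 * tagged with an action number, to the Action Queue. */
void action_queue_append(struct buffer *b, uint16_t action)
{
    b->action = action;
    b->next = NULL;
    if (action_tail) action_tail->next = b; else action_head = b;
    action_tail = b;
}

/* Foreground main loop: take the next buffer and act on its action number. */
void main_loop_step(void)
{
    irq_disable();
    struct buffer *b = action_head;
    if (b) {
        action_head = b->next;
        if (!action_head) action_tail = NULL;
    }
    irq_enable();
    if (!b) return;

    switch (b->action) {
    case 1: /* e.g. decode a command frame and act on it       */ break;
    case 2: /* e.g. copy out telemetry, update counters, etc.  */ break;
    default: break;
    }

    irq_disable();                /* done: return the buffer to the free list */
    b->next = free_list;
    free_list = b;
    irq_enable();
}
```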

     

    We also had timed events.  These were handled by an "event wheel", a technique used in event-driven logic simulation.  The main loop would check whether any timed events needed to be processed, and did so before the Action Queue.
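
    A much-reduced sketch of such an event wheel: an array of slots indexed by tick count modulo the wheel size, each slot holding a list of events, with the main loop advancing the wheel once per tick before servicing the Action Queue. Sizes and names are invented for the example:

```c
#include <stdint.h>

#define WHEEL_SLOTS 32u               /* power of two: slot = tick & (N - 1) */

struct timed_event {
    struct timed_event *next;
    uint32_t due_tick;                /* absolute tick at which it fires     */
    void (*fire)(void *arg);
    void *arg;
};

static struct timed_event *wheel[WHEEL_SLOTS];
static uint32_t current_tick;

/* Schedule an event 'delay' ticks from now (delay >= 1; the caller owns
 * the event's storage, so there is no allocation here). */
void event_schedule(struct timed_event *ev, uint32_t delay)
{
    ev->due_tick = current_tick + delay;
    uint32_t slot = ev->due_tick & (WHEEL_SLOTS - 1u);
    ev->next = wheel[slot];
    wheel[slot] = ev;
}

/* Called once per tick from the main loop, before the Action Queue is
 * serviced: fire everything that is due in this tick's slot. */
void event_wheel_advance(void)
{
    current_tick++;
    struct timed_event **pp = &wheel[current_tick & (WHEEL_SLOTS - 1u)];

    while (*pp) {
        struct timed_event *ev = *pp;
        if (ev->due_tick == current_tick) {   /* due now: unlink and fire  */
            *pp = ev->next;
            ev->fire(ev->arg);
        } else {
            pp = &ev->next;                   /* due on a later lap, skip  */
        }
    }
}
```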

     

    As Michael points out, bare metal programming means you have all the source code so you can track down bugs without wearing a blindfold.

     

    You learn lots of interesting things programming at the bare metal.  One bit of fun was when we ported the software from Motorola 68020 to PowerPC.  The 68020 has an atomic memory increment instruction implemented with a non-interruptible read/modify/write.  The PowerPC does this with three instructions: Load, Increment Register, and Store.  If an interrupt occurs in the middle of this and the ISR modifies the same shared variable, you can get a nasty result.  This has a low probability of occurrence, which makes debugging nasty.  The error is happening at the machine language level, so programmers who only know high-level languages don't understand how it could be happening.
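
    The same hazard in C terms: an innocent-looking counter++ on a shared variable compiles to load/increment/store, so an interrupt landing between the load and the store silently loses an update. A sketch of the problem and of the usual bare-metal fix (briefly masking interrupts around the read-modify-write) follows; C11 atomics would be the modern alternative where the toolchain supports them, and irq_disable()/irq_enable() are placeholders for the target's intrinsics:

```c
#include <stdint.h>

/* Shared between an ISR and the foreground code. */
static volatile uint32_t rx_frame_count;

/* Placeholders for the target's interrupt masking primitives. */
extern void irq_disable(void);
extern void irq_enable(void);

void rx_isr(void)
{
    rx_frame_count++;            /* the ISR's side of the shared counter */
}

void foreground_broken(void)
{
    /* On a load/store machine this compiles to: load rx_frame_count,
     * add 1, store it back.  If rx_isr() runs between the load and the
     * store, its increment is overwritten and a count is silently lost. */
    rx_frame_count++;
}

void foreground_fixed(void)
{
    irq_disable();               /* make the read-modify-write uninterruptible */
    rx_frame_count++;
    irq_enable();
}
```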

     

    Management was always trying to replace the bare-metal code with an operating system.  I always fought against it.  After I left the company they did the operating system thing.  It took a couple years to get going and required at least twice the memory.  Think about it: there's no way that adding an OS is going to make your code run faster or smaller, right?

  • Jan Cumps over 5 years ago in reply to johnbeetem

    johnbeetem wrote:

    ...

    We also had timed events.  These were handled by an "event wheel", a technique used in event-driven logic simulation.  The main loop would check whether any timed events needed to be processed, and did so before the Action Queue.

    That's virtually the same as what RTOSes do.

     

    johnbeetem wrote:

    ...

    Management was always trying to replace the bare-metal code with an operating system.  I always fought against it.  After I left the company they did the operating system thing.  It took a couple years to get going and required at least twice the memory.  Think about it: there's no way that adding an OS is going to make your code run faster or smaller, right?

    On the other hand, it's a proven, working way of scheduling tasks with the right timings and priorities, and of providing inter-task communication.

    These are reusable things that you need, and have to build in-house if you roll your own. That leads to a team writing a toolkit that already exists, instead of working on domain-specific functionality.

  • johnbeetem over 5 years ago in reply to Jan Cumps
    Jan Cumps wrote:

    These are reusable things that you need, and have to build in-house if you roll your own. That leads to a team writing a toolkit that already exists, instead of working on domain-specific functionality.

    My experience with RTOS has been mixed.  The product I'm talking about at one time had an optional set of protocols that we didn't want to write ourselves because nobody understood them.  So we licensed software from a vendor.  The software ran on VRTX.  Well, we didn't want to have to run VRTX on our standard product (and pay a per-unit royalty), so I engineered it so that VRTX was optional.  If installed, it ran our standard software as a task.  The 68000 version of VRTX was simple and well-documented, so doing this was straightforward.

     

    Later I had to do the same thing for PowerPC.  That ended up being a nightmare.  VRTX had been bought by Mentor Graphics and the PowerPC documentation was awful.  You were supposed to use a "board support package" which was supposed to take care of everything for you, but if your hardware didn't have a "board support package" the documentation didn't tell you what you needed to know.  I ended up having to disassemble the task switch code to figure out how they saved and restored registers.

     

    So yeah, using the PowerPC VRTX didn't open up a bunch of time for "domain-specific functionality".

     

    I love embedded programming down at the bare metal and having control over everything.  I'm OK with writing application-level code for mainframes, but if I'm doing embedded I want to be down at the bare metal.  Chacun à son goût (to each his own).
