I don't think so.
I have a computer science background, and automatic code generation has always been a big topic there. But it never made it to reality.
Why? Because as processing power grows, the technology always tends towards more powerful frameworks instead of code generation. Frameworks enable you to develop complex software very fast - as code generation does - but they give you more flexibility.
I think the same is true for the embedded market as well. Many processors run Linux, and there you have a lot of powerful frameworks at hand for UI work, networking, or even complex things like IO and thread scheduling.
I think automatic code generation is indeed the future and is already used with high(er)-level languages and compilers.
I use an embedded C compiler for PICmicro, and this piece of software really does all kinds of things for you. The libraries are really powerful; this helps the user think less about the processor and machine instructions and more about the functionality of the application itself. The compiler also has a wizard that walks you through generating the framework to set up a device and its peripherals.
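To give an idea, the setup code such a wizard generates looks roughly like this (just an illustrative sketch from me, using the standard Microchip register names; the real output depends on the device and compiler):

/* Rough sketch of wizard-style initialisation for a mid-range PIC.
   TRISx/PORTx are the standard Microchip SFR names; include the
   device header for your compiler before using them.               */
void init_device(void)
{
    TRISB = 0x00;   /* port B: all pins outputs (status LEDs etc.)   */
    TRISA = 0xFF;   /* port A: all pins inputs (keys, analogue etc.) */
    PORTB = 0x00;   /* start with every output driven low            */

    /* A real wizard fills in the peripheral setup here as well:
       ADC clock and channel selection, timer prescalers, UART
       baud rate, interrupt enables, and so on.                      */
}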
The downside of automatic code generation is that it costs more processor time and memory, but on the other hand, processors are becoming faster and cheaper every day. Sometimes you will still need to optimize code, especially for 'critical' real-time processes.
I personally think that programming will become more visual, that is, the GUI between the user and the code will change. A simple example is drawing a circuit with logic functions (AND/OR/etc.) and translating these logic functions into software routines.
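For example, a diagram with one AND gate and one OR gate could be translated into a routine like this (a trivial C sketch; the signal names are invented for illustration):

#include <stdint.h>

/* Two "gates" drawn in the diagram, expressed as one routine.
   start_button, door_closed and service_override are made-up
   input signals; the return value is the motor-enable output. */
uint8_t motor_enable(uint8_t start_button,
                     uint8_t door_closed,
                     uint8_t service_override)
{
    /* (start AND door closed) OR service override */
    return (start_button && door_closed) || service_override;
}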
Best regards,
Enrico Migchels
Power conversion design engineer
Heliox B.V.
Best - The Netherlands
I hear this argument quite often from computer science graduates when discussing assembler vs C++ for embedded apps - technology is getting so cheap and powerful that high-level languages can be used with embedded operating systems...
Personally I think this is the wrong way to look at it. Most embedded applications don't need hundreds of millions of operations per second, and even a $5 chip running an embedded OS is massively overpriced for the job. Why throw a system running thousands of lines of code at a problem that has very few inputs or outputs? A 40-cent microcontroller in a washing machine could have its two or three hundred bytes of code generated automatically from a good flow chart.
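To make that concrete, the code a flow-chart tool would generate for the washing machine is really just a small state machine, something like the sketch below (states and signal names are made up for illustration; it's not the output of any particular tool):

#include <stdint.h>

/* States taken straight from the boxes of the flow chart. */
typedef enum { FILL, WASH, DRAIN, SPIN, DONE } wash_state_t;

static wash_state_t state = FILL;

/* Called once per tick by the main loop; the inputs stand in
   for the real sensor reads and timer checks. */
void wash_step(uint8_t tub_full, uint8_t tub_empty,
               uint8_t wash_done, uint8_t spin_done)
{
    switch (state) {
    case FILL:  if (tub_full)  state = WASH;  break;
    case WASH:  if (wash_done) state = DRAIN; break;
    case DRAIN: if (tub_empty) state = SPIN;  break;
    case SPIN:  if (spin_done) state = DONE;  break;
    case DONE:  /* wait for the start button, then back to FILL */ break;
    }
}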
Where there are a relatively small number of inputs and outputs (most control systems), automatic code generation is a big help to companies whose main expertise is the thing being controlled, not the processor doing the controlling. The expensive PLC modules used in industry for many years had a terminal or PC interface and allowed equipment engineers to program machines using logic equations and diagrams; now smaller, cheaper microcontrollers can be employed closer to the work being done - and they need code, which for most systems is going to be cheaper if it's generated by an easy-to-use automatic tool than by hiring a programmer who then needs to learn all about the product being controlled.
Not all embedded applications need internet connectivity, multitasking, multimedia, 16-bit colour, XML parsers or a penguin. Mostly they count pies, turn lights on and off or keep the office at a comfortable temperature - all very mundane and scripted. Ideal for low-power, drag-and-drop programming.
I think the issue is that embedded micros now cover a large range, from small 8-bit micros that deal with simple tasks through to embedded 32-bit parts running an RTOS in top-end hi-tech equipment like an iPhone.
The engineer is responsible for choosing the micro and the development tools that fit the application.
I am currently pushing Microchip for smaller and smaller chips that will perform the functions I need. The PIC10 and PIC12 ranges pack a mighty punch in small standalone systems at a price that is incredibly low - a whizzy GUI flowchart tool just can't create the asm code well enough yet for these devices. The C tools available are very good but are still not as good as an engineer on code size.
However, at the other end of the scale, embedded whizzy GUI programming tools like the one NI offer for ARM micros are fantastic. They take away many of the low-level IO and memory problems that are easy to slip into on larger projects. My feeling is that some engineers fail to grasp the abstract programming and data handling that make larger projects stable, portable and reusable. Tools like the one from NI handle this, and the engineer can spend more time on the final application.
Both have merits and both have their problems. In time I think we will all see a more whizzy way of writing code. I no longer hand-code micros or edit EPROMs in hex, asm is slowly disappearing as better C compilers come along - but ultimately we will still need embedded engineers, maybe just fewer of them?
Paul
Hi there Paul,
I see a growing need for intelligence and functionality in applications. This is also driven by the low cost of programmable devices. Designers these days have to consider whether discrete circuitry is still the cleverer way to proceed; the trade-offs in component cost, flexibility and design throughput time have to be weighed. Easy-to-use programming tools and good C compilers make programming more accessible for hardware designers - they simply do the 'whole' package. This is of course more important in smaller companies, where designers have to be more all-round.
So, more programming but less specific knowledge of machine code needed. But this has been visible in the market for quite some time already. I understand that many of the more complex embedded systems are optimized by a handful of 'heroes' who still understand the art of opcodes, operands, stacks and working registers :-)
Best regards,
Enrico Migchels
Power conversion design engineer
Heliox B.V.
Best - The Netherlands
I certainly hope that ECCM isn't our future!!
There are some quite well-established automatic code generation design flows in use in automotive and aerospace applications (e.g. MATLAB/SIMULINK -> RTC -> C) - these do work and can produce usable production code. The tools are expensive.
The issue with all of these approaches is that they do not remove the need for programming; they just provide a new programming language (possibly graphical, higher-level, whatever).
If they provide very high-level code blocks (or library functions in C) then you are stuck with using what you are given - which may be quick but may not be very well optimised. If you need to get to low-level coding (and you always will if you want optimal performance) you are much better off using a well-known, standard language.
We're going to see a lot of these new "no code required" approaches over the next few years, and I'm sure that they will promise a great deal and deliver very little - try parsing a string, calculating a CRC and adding two 64-bit integers in the "ncr" tool on offer. (I chose these as tasks easily accomplished in assembler or C but frequently challenging for graphical systems - using a ready-made block doesn't count!!)
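To underline the point, two of those three tasks are only a few lines of plain C - below is a textbook bit-by-bit CRC-16/CCITT and a 64-bit add, written by hand rather than generated by any tool:

#include <stdint.h>

/* Bitwise CRC-16/CCITT (polynomial 0x1021, initial value 0xFFFF). */
uint16_t crc16_ccitt(const uint8_t *data, uint32_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*data++) << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Adding two 64-bit integers: one line in C; even on an 8-bit core
   the compiler breaks it into the byte-wide adds for you.          */
uint64_t add64(uint64_t a, uint64_t b)
{
    return a + b;
}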
You are right about the current state of code generation, primarily because code generation continues to be based on a spatial logic approach. That's the standard "if-then-else" logic that dominates the industry and is so difficult to understand and maintain. I've developed a temporal approach that improves the logic because it eliminates the need to test for where it is in the logic. I wrote a book to explain this: Breaking the Time Barrier - The Temporal Engineering of Software. My website at www.vsmerlot.com has some information showing how one can make the transition.
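To make the contrast concrete, here is a deliberately generic sketch - it is not the temporal structure from the book or the patent, just an illustration of "test your way down the logic" versus "dispatch straight to the current step":

/* Generic illustration only, with made-up function names. */

static void do_start(void)  { /* ... */ }
static void do_middle(void) { /* ... */ }
static void do_finish(void) { /* ... */ }

static unsigned phase = 0;

/* "Spatial" style: every pass re-tests where it is in the logic. */
static void spatial_step(void)
{
    if (phase == 0)      { do_start();  phase = 1; }
    else if (phase == 1) { do_middle(); phase = 2; }
    else                 { do_finish(); phase = 0; }
}

/* Dispatch style: the current step is called directly, with no
   chain of tests to find our place in the logic.                 */
typedef void (*step_fn)(void);
static const step_fn schedule[] = { do_start, do_middle, do_finish };

static void dispatch_step(void)
{
    schedule[phase]();
    phase = (phase + 1u) % 3u;
}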
I started using CASE tools back in 1988 and was disappointed in their complexity. That's when I began the process of trying to understand what the problem was and then fix it. As you've pointed out, the code generation leaves a lot to be desired. I tried working the problem backwards, but the code was so bad that the model was just as bad. That's how I discovered a better code structure using temporal engineering and applied for a patent. The patent issued in Feb 2004 as US Patent No. 6,345,387. I decided in 2009 not to enforce the patent and to allow the public to use the technology.
My background: I'm the inventor of "Time Domain Architecture" (TDA), US Patent 4,847,755. This time-domain technology is better known as Multi-Core & Hyper-Threading technology. Understanding time was important in creating parallel code and real-time tasks, which led to the hardware architecture and, today, to the temporal software architecture.
As of this writing, DARPA is looking into the Temporal Engineering of Software. In addition, I've been selected as a presenter at the DoD-sponsored 2010 Systems & Software Technology Conference in Salt Lake City. I'm excited about being selected and look forward to presenting.