The Raspberry Pi is a far better representation of today's embedded micros, and particularly of those that will become the norm. There will always be a place for minimal micros, but it is in extravagantly high-volume products, so they are hardly worth considering for your first language.
Spoken like someone who doesn't do "hard real time" or simple stuff. I've just designed an ATtiny44 into a product for a customer - he'll make between 1,000 and 4,000 a year, in batches of 100 at first - so not big numbers. The BOM cost (including the PCB) will be about £3. It couldn't possibly support an applications processor like the Pi, although it could be done with an ARM Cortex-M0 (within budget, but for perfectly good reasons they want a DIL package). Either way, AVR or ARM, it'll be coded in C.
BTW, my definition of "hard real time" is that the result must be presented at the correct time to be valid (plain real time has no lower limit, only an upper one).
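A rough sketch of that definition in practice (not the actual product firmware - the 1 MHz clock, pin choice and timing are assumptions; register and vector names follow avr-libc for the ATtiny44): a timer-compare interrupt that presents its output at a fixed instant, every millisecond, rather than "whenever the main loop gets round to it".

    /* Hedged example: hard real time on an ATtiny44 in C.
       Assumes a 1 MHz system clock; PA0 is an arbitrary output pin. */
    #include <avr/io.h>
    #include <avr/interrupt.h>

    ISR(TIM0_COMPA_vect)              /* fires exactly once per millisecond */
    {
        PORTA ^= (1 << PA0);          /* result presented at the correct time */
    }

    int main(void)
    {
        DDRA    = (1 << PA0);         /* PA0 as output */
        TCCR0A  = (1 << WGM01);       /* CTC mode: count to OCR0A, then restart */
        TCCR0B  = (1 << CS01);        /* clk/8 -> 125 kHz timer clock */
        OCR0A   = 124;                /* 125 counts -> compare match every 1 ms */
        TIMSK0 |= (1 << OCIE0A);      /* enable the compare-match interrupt */
        sei();
        for (;;) { }                  /* background work; the deadline lives in the ISR */
    }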
Java and C# are not good choices for a first language - not least because both are, to a greater or lesser extent, proprietary.
EEs (assuming Embedded Engineers) should learn more than one programming language - C is the best for understanding the nuts and bolts; the next one depends a bit on the individual's direction of travel.
If you want to read a good rant on C v C++ and OOP try this:
Building on Michael's answer, it's not only a question of cost in dollars; it's also a cost in software complexity. A SoC like RasPi's is brutally complex, and Linux is a mainframe operating system. The learning curve to know what's really going on in your chip is very steep. Most people don't have the time, inclination, or ability to do this, so the OS is a black box. When things don't work, good luck. [RasPi's Broadcom SoC doesn't just have a steep learning curve -- it has an NDA wall.]
OTOH, with a simple microcontroller you can know everything your processor is doing. You don't need pages and pages of code just to initialize it. This is the joy of embedded computing -- you have full control and you're not separated from the hardware by layers of OS. It continues the fun of programming a PDP-11, where you have direct access to all the peripherals.
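To make that concrete - a hedged sketch, again assuming the ATtiny44 mentioned earlier, with the pin assignments invented for the example - the "initialisation" really is a couple of register writes, and every cycle the processor spends is visible in a dozen lines:

    /* Whole program: read a button on PA1 (pull-up enabled), drive an LED on PA0. */
    #include <avr/io.h>

    int main(void)
    {
        DDRA  = (1 << PA0);               /* PA0 output, everything else input */
        PORTA = (1 << PA1);               /* enable the pull-up on the PA1 button */

        for (;;) {
            if (PINA & (1 << PA1))        /* button released (line pulled high) */
                PORTA &= ~(1 << PA0);     /* LED off */
            else
                PORTA |= (1 << PA0);      /* button pressed -> LED on */
        }
    }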
Well, I now must reply, as John has very articulately captured a view I see outside of larger companies. My viewpoint is from within them, and more related to trends than to today's implementations, so perhaps I'll have another go at putting it forward.

When I first started working in microprocessor design teams (of course an SoC is a job one would partition into areas of expertise), we were designing in 0.35 µm (micros are often designed in mature technologies, as their budgets can't support the mask costs of the cutting edge); we designed a single-cycle 8051 and used flash sizes up to 16k. This was excellent for simple programs which would be OK with the small, non-extended program memory, and becoming familiar with the data sheet to write to the peripherals directly was a one-off, as there were so many different architectures out there that you kind of stuck to one manufacturer.

Time moves on and people start to demand the Harvard architecture of the ARM; the Cortex comes out and we only synthesize cores now. A micro now is usually taped out on the more mature 65 nm process; however, 1/f noise, and even matching for those circuits that can't be chopped, mean the analogue circuits don't scale the way the digital does. People continue to demand larger and larger program spaces, and program memory demands are now almost too large to put onto the same die, so we find ourselves stacking the processor onto these huge flash die. We don't develop small cores any more, and one day the machines that make that geometry process will all be out of service and these cores will be expensive legacy production runs.

The M0 synthesizes to an area not too much smaller than the M3 and consumes about 1/5th the current (these are numbers from an actual synthesis, so they might not match ARM's pitch), so one can mostly ignore the differences and just say there is the Cortex, with its well-developed tool chain, and chips coming along that are an order of magnitude larger than hobby micros; also, the ARM itself is nowhere near the largest part of an SoC. Having the same core, one can easily switch between manufacturers to pick the most appropriate set of peripherals, and you have better things to do with your life than read a whole new data sheet, so you see things like CMSIS, where even today you are abstracting yourself from the registers and the detailed nuts and bolts.

The very large program memories come from very large programs, so often these run small RTOSes, and writing a thread-safe driver is not a task to be attempted by the beginner, so these new classes of micro are going to come with driver sets for RTOSes. Cost-wise, all this extra functionality doesn't change the die size with the geometry shrink, so the cost is pretty similar, and if you spread your end-user development cost across the production run you'll find there's no value in reducing the cost further, as you can pull extra functions into the micro - so don't expect to see crazy-cheap micros becoming the well-supported norm (additionally, ESD area overhead limits the smallest amount of functionality you can put on a die before it's all just ESD, as ESD structures don't scale well with process). So, given that you will have access to very powerful micros running RTOSes with low-level drivers already written for you, trying to return to the nuts-and-bolts end of things is of limited practical use.
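On the CMSIS point, a minimal hedged sketch of what that abstraction looks like: the core peripherals are reached through standard CMSIS-Core calls (SysTick_Config(), NVIC_SetPriority(), SystemCoreClock) that are the same on any Cortex-M part; "device.h" is a placeholder for whichever vendor header you actually use.

    #include <stdint.h>
    #include "device.h"                   /* placeholder for the vendor's CMSIS device header */

    volatile uint32_t ms_ticks;           /* incremented every millisecond */

    void SysTick_Handler(void)            /* standard CMSIS handler name */
    {
        ms_ticks++;
    }

    int main(void)
    {
        SysTick_Config(SystemCoreClock / 1000u);   /* 1 ms tick - no data-sheet digging */
        NVIC_SetPriority(SysTick_IRQn, 3);         /* identical call on any manufacturer's part */
        for (;;) {
            __WFI();                               /* sleep until the next interrupt */
        }
    }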
There are many disgustingly overused statements to encapsulate this sentiment - standing on the shoulders of giants and not re-inventing the wheel being two: if someone has already done the work for you, then your time would be better used adding value than repeating that work. Operating at the top of an OS stack is specifically designed to be treated as a black box: you want network connectivity, it's already there. Real time can be done at the top of an application stack in an RTOS by using the correct drivers. Linux is a fantastic OS if you are planning on attaching a screen to your project, but if that's not the case you're going to be using an RTOS - I can't call which one is going to come out on top - and, being designed for larger code spaces, we can expect the application code to be written in a higher-level language, allowing larger teams to work on it. Microsoft have been making excellent inroads into the embedded market, even managing to get into automotive (which frankly still amazes me), so the .NET framework will become an embedded contender.

So, as a first language to learn, even for embedded in say 5-10 years, I maintain an OO language is the best place to start, to expose people to programming in its more modern state and also to allow simpler development in a threaded environment where stack abstraction will be the norm. I do not dispute, however, that there is a range of micros available today, used for simple functions, for which high-level languages are completely useless; I am currently using one simply for the timers and USB, so I am writing this in C, because that's how you do it today. C was my first language, but it doesn't allow the fluidity of modern languages to very quickly perform the task you set out to do. If I were to meet someone embarking on their first language I would not recommend C, as it is specific to a small subset of tasks and I believe that subset is going to shrink drastically in the next ten years (not disappear, of course - simply become more of a specialisation than a core skill).
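On the thread-safe driver point above, a hedged sketch of why it is not a beginner's task: once several RTOS threads share one peripheral, even a trivial write needs a lock. FreeRTOS is used purely as a familiar example, and uart_putc_raw() is a hypothetical register-level routine, not a real library call.

    #include "FreeRTOS.h"
    #include "semphr.h"

    extern void uart_putc_raw(char c);       /* hypothetical low-level write */

    static SemaphoreHandle_t uart_mutex;     /* created once at start-up */

    void uart_driver_init(void)
    {
        uart_mutex = xSemaphoreCreateMutex();
    }

    void uart_write(const char *s)
    {
        xSemaphoreTake(uart_mutex, portMAX_DELAY);   /* block until we own the port */
        while (*s) {
            uart_putc_raw(*s++);                     /* the actual hardware access */
        }
        xSemaphoreGive(uart_mutex);                  /* let the next thread in */
    }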
p.s. Perhaps the lack of separation of this into paragraphs was on purpose?!
p.p.s. Regarding your PDP-11 comment, I think the choice of first programming language is a different question from how one learns the details of processor operation, and I don't really have a good answer to that other than perhaps an emulated system (my choice would be the Apollo Guidance Computer, for no other reason than that it went into space).
p.p.p.s. John, I see your GalaxC project and I am interested in how you balance an interest in more abstract languages with low-level stuff.
p.p.p.p.s. I also append the disclaimer about this being an opinion and not a statement of absolute fact!
Adam, you are the first person to comment on the GalaxC language itself, at least that I've seen. So thanks, I'm flattered!
To answer your question: while GalaxC is all about defining high-level notations that suit your problem domain, the underlying execution model is the same as C. So like C, you're using high-level notations to write portable ASM. Since GalaxC is a work in progress, I'm constantly considering low-level issues. For example, if a program doesn't work I often have to determine whether my compiler has generated the wrong code: this means looking at that low-level code.
For an even better example, the main thing I'm working on right now is using GalaxC for Hardware Design (GCHD), i.e., using GalaxC plus minor extensions as an HDL. This includes compiled-code simulation of hardware, where Boolean logic is mapped into executable code. So I'm spending a lot of time down at the executable code level to make sure GCHD is generating the correct low-level code. At some point in the near future, I'll be mapping GCHD to FPGA logic, which is even lower level than machine language.
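For readers unfamiliar with the idea, here is a rough illustration - in plain C, not GalaxC/GCHD, and with a hypothetical full adder as the example circuit - of what "compiled-code simulation" means: the Boolean network is emitted as ordinary executable statements, so simulating the logic is just calling a function.

    #include <stdint.h>
    #include <stdio.h>

    /* One evaluation of the combinational logic: each gate became one C operator. */
    static void full_adder(uint8_t a, uint8_t b, uint8_t cin,
                           uint8_t *sum, uint8_t *cout)
    {
        *sum  = a ^ b ^ cin;
        *cout = (a & b) | (cin & (a ^ b));
    }

    int main(void)
    {
        uint8_t s, c;
        full_adder(1, 1, 0, &s, &c);
        printf("sum=%u carry=%u\n", s, c);   /* expect sum=0 carry=1 */
        return 0;
    }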
At the present time, GalaxC and GCHD generate interpretive code, something like P-code or Bytecode. This is for convenience and portability. It will be easy enough to translate my PSI code into native ARM or x86 or whatever when the time comes. That adds another layer.
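For illustration, a toy sketch of what "interpretive code" means here - a byte-coded program run by a small dispatch loop. The opcode set is invented for the example; it is not the actual PSI code generated by GalaxC.

    #include <stdint.h>
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const uint8_t *code)
    {
        int32_t stack[16];
        int sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];             break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];     break;
            case OP_PRINT: printf("%d\n", (int)stack[--sp]);     break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(prog);    /* prints 5 */
        return 0;
    }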
So you see that in my GalaxC project (actually the project is XXICC; GalaxC is part of it), I have to work at all levels and understand what's going on at all these levels. In fact, I don't consider GalaxC to be a high-level or low-level language -- it's an all-level language.