
Artificial Intelligence has the potential to change many of the things we do. Take the poll and let us know how you think AI will be deployed in embedded applications, and please tell us why in the Comments section below!

AI is probably near its maximum point on the hype curve. A few years ago everyone was talking about blockchain - it's no longer seen as the future of everything. AI will follow the same route.
It will find some applications where it works really well, and be forgotten for a great many others where it is currently being misapplied.
DAB makes a very good point - some years ago there was a lot of discussion about Fuzzy Logic replacing classical and modern control theory techniques. It didn't, because it isn't easy to prove the robustness of fuzzy systems. AI systems are worse - it has been easy in many cases to demonstrate that they are far from robust.
There are applications where this doesn't matter, and many others where it is essential.
So I expect to see AI make a big impact on search engines, chat robots and other things where a level of random nonsense in the output is expected and acceptable. Only the reckless will put AI in charge of potentially lethal devices (although I fear there are some quite reckless types about).
MK
I am not very well versed in AI, so I might not be in the best position to predict its future trends. Anyway, I chose the option that sees most processing occur in the cloud. This general approach saves manufacturers the cost of manufacturing specialized chips, and they can harness the power of connectivity. Although we take it for granted, a mind-staggering amount of data can be exchanged every second. Finally, I think the opportunity for the AI companies to charge for some of these cloud services will be irresistible.
There is a quite interesting podcast series over at:
https://www.youtube.com/playlist?list=PLqYmG7hTraZBiUr6_Qf8YTS2Oqy3OGZEj
if you want to get an easy listening insight into some of the AI research going on.
You are correct that large companies are working hard to make cloud services seductively irresistible because it is good for their bottom line. But it is a see-saw battle. As mainframe power became available in a PC, consumers bought PCs instead of mainframe terminals. Then servers were introduced to handle big data, but eventually consumers could afford terabyte storage devices. Now the cloud is even more seductive, but technology will become affordable that allows individuals to remain independent. This balance between corporate interests and individual interests will become increasingly important as humans become more augmented by technology.
Would you rather have your own computer sifting the Internet for information, or have advertisers paying Google and Microsoft to dictate what you get to see?
Mostly at sensors/nodes using dedicated AI processors
This option is my vote because it is the next logical step. Microcontrollers designed for embedded edge applications already have neural network accelerators built into them. And the current trend is to have that accelerator as a separate block within the MCU that operates without waking the high-performance, power-hungry general-purpose processing cores.
So if that AI/ML work can be moved to the actual sensor, the high-performance microcontroller could run even less often. Granted, everything depends on the application. For example, maybe the sensor nodes run on battery power while the computing node has constant power. So, like all things engineering, there is no one answer for all situations.
Along those lines, I think we sometimes blur the line between Artificial Intelligence and Machine Learning. Frankly, much of the "artificial intelligence" in edge computing devices is pattern matching against a trained neural network, which is machine learning, not decision-making. So there isn't much "intelligence," especially in edge applications.
I make that point because it explains why I see the next step as moving the ML inference to the sensor node itself. Now the sensor is smart enough to transmit a result only when it fits a model (see the sketch below). Again, this method won't make sense in all applications, but I could see where it might in some.
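To make that duty cycle concrete, here is a minimal sketch in C. Every function in it (sensor_read, accel_run_inference, radio_send) is a hypothetical stand-in, not any vendor's actual SDK; on real hardware they would map to the sensor, accelerator, and radio drivers. The point is simply that the radio (and the host MCU behind it) stays quiet unless the on-sensor model reports a confident match.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy sketch of an on-sensor ML duty cycle. All functions below are
 * hypothetical stand-ins, NOT a real vendor SDK. */

#define CONFIDENCE_THRESHOLD 200  /* out of 255 */

/* Fake sensor driver: fills the buffer with dummy samples. */
static void sensor_read(int16_t *samples, int count) {
    for (int i = 0; i < count; i++)
        samples[i] = (int16_t)(i * 3);
}

/* Pretend on-sensor accelerator: returns confidence and a class id. */
static uint8_t accel_run_inference(const int16_t *s, int count,
                                   uint8_t *class_out) {
    (void)s; (void)count;
    *class_out = 1;
    return 210;  /* pretend the trained model matched */
}

/* Stand-in for the radio driver. */
static void radio_send(uint8_t class_id, uint8_t confidence) {
    printf("TX: class=%u confidence=%u\n", class_id, confidence);
}

int main(void) {
    int16_t samples[64];
    uint8_t class_id;

    /* One pass of the loop; a real node would sleep and repeat. */
    sensor_read(samples, 64);
    uint8_t confidence = accel_run_inference(samples, 64, &class_id);

    /* Transmit only when the result fits the model; otherwise stay
     * silent and leave the radio and host MCU asleep. */
    if (confidence >= CONFIDENCE_THRESHOLD)
        radio_send(class_id, confidence);

    return 0;
}
```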
Personally, I refuse to fully trust the 'cloud' until homomorphic encryption becomes widely available and deployed.
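For anyone wondering what "homomorphic" buys you: the cloud can compute on data it cannot read. Here's a toy (and completely insecure - tiny key, no padding) C demo using textbook RSA, which happens to be multiplicatively homomorphic. This only illustrates the property; real deployable schemes (BFV, CKKS, etc.) are far more involved.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy textbook-RSA demo of the homomorphic property.
 * Completely insecure (tiny key, no padding) - illustration only. */

/* Modular exponentiation: (base^exp) mod m */
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    /* Classic textbook toy key: p=61, q=53 -> n=3233, e=17, d=2753. */
    const uint64_t n = 3233, e = 17, d = 2753;
    uint64_t a = 12, b = 7;

    /* "Client" encrypts; the "cloud" only ever sees ciphertexts. */
    uint64_t ca = modpow(a, e, n);
    uint64_t cb = modpow(b, e, n);

    /* "Cloud" multiplies the ciphertexts without decrypting anything. */
    uint64_t cprod = (ca * cb) % n;

    /* Client decrypts the result: equals a*b = 84 (since 84 < n). */
    uint64_t prod = modpow(cprod, d, n);
    printf("E(a)*E(b) decrypts to %llu (expected %llu)\n",
           (unsigned long long)prod, (unsigned long long)(a * b));
    return 0;
}
```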
See-saw is a great word for the back-and-forth rivalry between cloud and closed-system approaches. Although I predicted the cloud being dominant, I prefer a self-contained implementation. Part of this is because, like you, I don't like the intrusions of private companies using the data for targeted advertising, but I'm also unimpressed by how useless my Google Home assistant becomes once its internet connection is lost. Its total reliance on outside data and algorithms to operate has inspired me to coin the term Artificial Artificial Intelligence. The self-contained implementations are much more interesting to me. The way you 'taught' your HuskyLens to recognize different types of bugs is a recent example of this.
Companies like Amazon, Google, and Apple will only support cloud-based AI integration. It is not to their advantage to provide local control over home automation. It's unclear what Microsoft will do, since they have an OS (Windows) that could resurface as a home server OS with local processing for AI services.
I would prefer that AI/ML for home automation ensure local control by reducing the model size to something that can run on a single graphics processor or a dedicated AI processor (like an endpoint Tensor chip), but it will depend on whether companies like Microsoft provide that option, along with a secondary escalation of communication between the locally controlled AI and a cloud-based AI (e.g., ChatGPT).
In this latter case, a locally controlled AI would handle any intelligence gathering and processing of movements, images/sound, and sensor data to tailor the AI integration with the home automation system. Home Assistant could champion this approach, and there could be multiple smaller sample models (corpora of knowledge) to run under local control; a rough sketch of the local-first escalation idea follows below. The home AI would need additional information for identifying persons, their locations, and what they say or do. In five years, mm-wavelength positioning could be available to provide an at-home GPS-like function down to the centimeter.
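To make the "local first, cloud only as escalation" policy concrete, here is a minimal sketch in C. Every name in it (the functions, the confidence threshold, the consent flag) is hypothetical - this is not Home Assistant's or any vendor's actual API. It just shows the routing rule: answer locally when the small model is confident, and only forward a query off-site when the user has explicitly allowed it.

```c
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical local-first routing policy for a home AI.
 * All functions are placeholders, not a real API. */

#define LOCAL_CONFIDENCE_MIN 0.80

/* Small on-premises model: fills 'answer', returns its confidence. */
static double local_model_answer(const char *query, char *answer, size_t len) {
    snprintf(answer, len, "local answer to: %s", query);
    return 0.65;  /* pretend the local model is unsure */
}

/* Cloud escalation - only ever called with explicit user consent. */
static void cloud_model_answer(const char *query, char *answer, size_t len) {
    snprintf(answer, len, "cloud answer to: %s", query);
}

static void handle_query(const char *query, bool cloud_allowed) {
    char answer[128];
    double conf = local_model_answer(query, answer, sizeof answer);

    /* Escalate only when the local model is unsure AND the user has
     * opted in; otherwise the query never leaves the house. */
    if (conf < LOCAL_CONFIDENCE_MIN && cloud_allowed)
        cloud_model_answer(query, answer, sizeof answer);

    printf("%s (local confidence %.2f)\n", answer, conf);
}

int main(void) {
    handle_query("turn on the porch light", false);  /* stays local */
    handle_query("summarize today's news", true);    /* may escalate */
    return 0;
}
```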
There is still a layer of preprocessing that needs to mature for home AI to really take off: a localized domain of knowledge that includes the home sensor data plus details of the preferences, likes, and dislikes of each person, along with the ability to identify non-person identities (animals) and to determine when a person is an intruder. There is already much work toward local processing of images from security cameras that does not require Internet connections (except for remote viewing and alerting). Tailoring the input queries based on personal identity would be very special knowledge that must not be exposed to companies like Amazon, Google, and Apple, which would use it to pervasively track individuals and apply negative actions based on their social credit score. That is why I am strongly opposed to mega-corp access and control of personal information from a home automation AI.
Unfortunately, the younger generations don't care about privacy and haven't learned from history the lessons of tyranny that arise from knowledge control.
I too think this will definitely grow. It is already deployed now; e.g., companies like PointGrab already advertise that their sensors use AI/ML on-device. Although it's not necessarily on the actual "sensor chip" (they are most likely using a standard thermal or imaging sensor), it is in the "sensor product", i.e. in the sensor node device (an in-device AI processor or software).
Some other manufacturers absolutely need to do it in-device - there is no other choice, since they cannot send large quantities of data wirelessly on battery power. They need to make the inference on-device.
Probably the main thing that will slow "sensor-and-AI-on-a-single-chip" a bit (although I'm sure some already exist) is the job layoffs / economy / demand situation at the chip fabs, and so on. It may be a tough few years :(