You may have heard about Meta's LLaMA leak - well, since you could register for access via a Google form, it's not exactly a leak, but it's certainly not licensed for commercial use.
Since then people have investigated whether or not you can run it yourself.
https://github.com/ggerganov/llama.cpp (or its spin-off alpaca.cpp)
You can. What's more, it runs on the CPU rather than the GPU, and the models can be quantized down to 4-bit so the smallest one fits in only 4GB of RAM.
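For the curious, the llama.cpp README (at the time of writing) describes a workflow roughly like the one below - convert the original PyTorch weights to the project's ggml format, then quantize to 4-bit. The exact paths and arguments are from the repo's docs and may change between versions, so treat this as a sketch rather than gospel:

```shell
# Build the project (plain make, no GPU toolchain needed)
make

# Convert the 7B PyTorch checkpoint to ggml fp16 format
# (script name and arguments as per the llama.cpp README at the time)
python3 convert-pth-to-ggml.py models/7B/ 1

# Quantize fp16 -> 4-bit, shrinking the 7B model to roughly 4GB
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
```

The 4-bit quantization is the key trick: it trades a little output quality for a ~4x reduction in memory, which is what makes small machines viable.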
That's right, you can run your own "ChatGPT" on a Raspberry Pi.
There are options to 'scale' it across more or fewer processor cores, and while its responses can take about 280ms, with a bit of fiddling and tweaking, it runs!
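The core-scaling mentioned above is a command-line flag on the main binary. A hypothetical invocation, assuming the build and quantized model from the README's workflow (flag names per the docs at the time, so double-check against your version):

```shell
# Run inference on the quantized model:
#   -t 4   use 4 threads (raise or lower to match your core count)
#   -n 128 generate up to 128 tokens
#   -p     the prompt
./main -m ./models/7B/ggml-model-q4_0.bin -t 4 -n 128 \
  -p "Explain what a Raspberry Pi is in one sentence."
```

On a Raspberry Pi you would likely set `-t 4` to match its four cores; on a desktop, more threads generally mean faster token generation, up to the physical core count.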
It certainly benefits from faster storage for its model files, particularly when first loading them.
Now who's going to jump the gun, get it working with home automation, and produce their own 'Jarvis' from Iron Man?