Experiments
I've spent some time experimenting with different apps built to run LLMs at the edge. I was also learning about the community and its key contributors. One such contributor is Georgi Gerganov (ggerganov). He is one of the pioneers who opened the world of LLMs to machines with regular CPUs by building llama.cpp in his spare time. This project allows inference of the LLaMA model in pure C/C++ without other dependencies, though its main focus is great performance on macOS and Apple hardware. As I understood from one of the project threads, it was running on a Raspberry Pi 4. But when I tried the latest version of the project, it didn't work for me. I tried different models and it always failed.
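For reference, the typical llama.cpp workflow on a CPU-only machine looks roughly like the sketch below. This is a hedged outline, not the exact commands I ran: the model file name is a placeholder, and the conversion/quantization steps for obtaining a model file are omitted.

```shell
# Clone and build llama.cpp (CPU-only build, no extra dependencies)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run inference with a quantized model file (placeholder path/name —
# you need to obtain and convert a model yourself)
./main -m ./models/model-q4_0.bin -p "Hello, world" -n 32
```

On a Raspberry Pi 4 the same build steps apply, but memory limits mean only small, heavily quantized models have a chance of loading at all.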
GPT4All
I decided to switch focus back to my main objective, and this time I got lucky with GPT4All.
The project describes itself as "an ecosystem to train and deploy powerful and customized large language models that run locally on consumer grade CPUs". An additional benefit is that it can be used for commercial purposes, which is a limitation of many other models and projects.
First, I found an issue in the gpt4all project related to running GPT4All from the command line interface (CLI). Then one of the project contributors, with the username cosmic-snow, posted a script there to run GPT4All from the CLI.
And it worked for me all the way up to the last command.
I entered a prompt and right away got an answer. But this time it was an error message.
I went back to the project issue thread and posted my feedback along with a call for help.
Here I got lucky again: cosmic-snow responded the same day with details on how to fix the issue!
And his fix worked well for me.