Hi!
Welcome to the exciting world of local Large Language Models (LLMs), where we’re pushing the boundaries of what’s possible with AI.
Today let’s talk about a cool topic: running models locally, especially on devices like the Raspberry Pi 5. Let’s dive into the future of AI, right in our own backyards.
Ollama and using Open Source LLMs
Ollama stands out as a platform that simplifies the process of running open-source LLMs locally on your machine. It bundles model weights, configuration, and data into a single package, making it accessible for developers and AI enthusiasts alike. The key benefits of using Ollama include:
- Simplicity: Easy setup process without the need for deep machine learning knowledge.
- Cost-Effectiveness: Eliminates cloud costs, making it wallet-friendly.
- Privacy: Ensures data processing happens on your local machine, enhancing user privacy.
- Versatility: Suitable for various applications beyond Python, including web development.
Using Local LLMs like Llama3 or Phi-3
Local LLMs like Llama 3 and Phi-3 represent a significant shift towards more efficient and compact AI models. Llama 3 delivers high-quality outputs at the 8B-parameter scale, while Phi-3 shows that strong results are possible with an even smaller parameter count, making both a good fit for modest hardware.
The use of local LLMs offers several advantages:
- Reduced Latency: Local models eliminate the network latency associated with cloud-based solutions.
- Enhanced Privacy: Data remains on your local device, offering a secure environment for sensitive information.
- Customization: Local models give you greater flexibility to tweak and optimize them for your needs.
How to Set Up a Local Ollama Inference Server on a Raspberry Pi 5
I have already written a couple of times about my own version of the first-time setup for a Raspberry Pi (link). Once the device is ready, setting up Ollama on a Raspberry Pi 5 (or older) is a straightforward process. Here’s a quick guide to get you started:
Installation: Use the official Ollama installation script to install it on your Raspberry Pi OS.
The main command is:
curl -fsSL https://ollama.com/install.sh | sh
For example, to install and run Llama 3, we can use the following command:
ollama run llama3
Once Ollama is installed and a model is downloaded, the console should look similar to this:
Tip: to check the real-time journal of the Ollama service, we can run this command:
journalctl -u ollama -f
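Besides the interactive console, the server also exposes a REST API on port 11434 (the default). As a quick smoke test, this sketch asks for a single non-streamed completion; it assumes the llama3 model from above has already been pulled:

```shell
# Ask the local Ollama server for a one-shot, non-streamed completion
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue? Answer in one sentence.",
  "stream": false
}'
```

The generated text comes back in the response field of the JSON answer.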
Important: by default, the Ollama server is only available for local calls. To enable access from other machines, you need to follow these steps:
- Edit the systemd service with this command:
sudo systemctl edit ollama.service
- This will open an editor.
- Add an Environment line under the [Service] section:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
- Save and exit.
- Reload systemd and restart Ollama:
sudo systemctl daemon-reload
sudo systemctl restart ollama
More information is available in the Ollama FAQ.
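With the host binding in place, a quick way to verify remote access is to call the API from another machine on the network. Here raspberrypi.local is an assumed hostname; replace it with your Pi’s actual address:

```shell
# List the models available on the remote Ollama server
curl http://raspberrypi.local:11434/api/tags
```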
How to Use Semantic Kernel to Call a Chat Generation from a Remote Server
Let’s switch gears and write some code. This is a “Hello World” sample using Semantic Kernel and Azure OpenAI Services.
You can learn more about these AI samples at: https://aka.ms/dotnet-ai.
Now, to use a remote LLM, like Llama 3 running on a Raspberry Pi, we can register a service in the Builder that uses the OpenAI API specification. In the next sample, the change is on line 35:
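The sample itself is shown as an image, but the mechanism is easy to see from the command line: when pointed at the Pi, the Semantic Kernel OpenAI connector talks to the OpenAI-compatible chat endpoint that Ollama exposes. This is a rough sketch of that same call with curl, with raspberrypi.local as an assumed hostname:

```shell
# Call Ollama's OpenAI-compatible chat endpoint on the Pi -- the same
# style of request the Semantic Kernel OpenAI connector sends
curl http://raspberrypi.local:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{ "role": "user", "content": "Say hello from the Pi!" }]
  }'
```

Prefixing the command with time also gives a rough sense of the end-to-end latency from the command line.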
This does the trick! And with just a single line changed.
And we also have the question of performance: adding a Stopwatch gives us a sense of the elapsed time for the call. For this simple call, the response takes around 30-50 seconds.
Not bad at all for a small device!
Conclusion
The advent of local LLM tools like Ollama is revolutionizing the way we approach AI, offering unprecedented opportunities for innovation and privacy. Whether you’re a seasoned developer or just starting out, the potential of local AI is immense and waiting for you to explore.
This blog post was generated using information from various online resources, including cheatsheet.md, anakin.ai, and techcommunity.microsoft.com, to provide a comprehensive guide on local LLMs and Ollama.
Happy coding!
Greetings
El Bruno