Solar Pro Preview is a high-performance model with 22 billion parameters, designed to operate on a single GPU. Despite its size, it consistently outperforms other models with fewer than 30 billion parameters and delivers results comparable to much larger solutions, such as the Llama 3.1 model with 70 billion parameters.
Optimized to run efficiently on GPUs with 80GB of VRAM, Solar Pro Preview builds on the Phi-3-medium framework, scaling it from 14 billion to 22 billion parameters through an enhanced depth-scaling technique. The carefully selected training process and dataset have contributed to significant improvements in tasks measuring comprehension and instruction-following abilities, like MMLU-Pro and IFEval.
As an early version of the upcoming Solar Pro, this release comes with certain limitations. It currently supports fewer languages and has a 4K context length ceiling. Nevertheless, the model stands out for its power and efficiency, with plans for the final release in November 2024, promising broader language support and an extended context window for more diverse use cases.
Evaluation
Solar Pro Preview is evaluated across a variety of benchmarks.
Evaluation Protocol
For easy reproduction of our evaluation results, we list the evaluation tools and settings used below. All evaluations were conducted on an NVIDIA DGX H100.
Results may vary slightly with different batch sizes and experimental environments, such as GPU type.
Step-by-Step Process to Deploy Solar Pro 22B Model in the Cloud
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you've signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift's GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs give you greater control over the environment, allowing you to adjust the GPU, CPU, RAM, and storage configuration to your specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, and click the Create GPU Node button on the Dashboard to start your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the "GPU Nodes" tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x H100 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
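If you want to create an SSH key pair locally before deploying, a minimal sketch looks like the following (the key file name `nodeshift_key` is just an example, not a NodeShift requirement):

```shell
# Generate an Ed25519 SSH key pair; the file name "nodeshift_key" is
# an example, use any path you prefer.
ssh-keygen -t ed25519 -f ~/.ssh/nodeshift_key -C "nodeshift"

# Print the public key so you can paste it into the NodeShift dashboard.
cat ~/.ssh/nodeshift_key.pub
```

The private key stays on your machine; only the public key (`.pub` file) is uploaded to the platform.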
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Solar Pro 22B on an NVIDIA CUDA Virtual Machine. NVIDIA's CUDA parallel computing platform provides the GPU drivers and toolkit that will allow you to run Solar Pro 22B on your GPU Node.
After choosing the image, click the 'Create' button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the 'RUNNING' status, you can navigate to the page of your GPU Deployment Instance. Then, click the 'Connect' button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
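A typical connection command looks like the sketch below; the user name, key path, and IP are placeholders, so substitute the values shown on your deployment page:

```shell
# Connect to the GPU Node over SSH. Replace the key path and
# YOUR_NODE_IP with the values from your NodeShift deployment page.
ssh -i ~/.ssh/nodeshift_key root@YOUR_NODE_IP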
Next, if you want to check the GPU details, run the command below:
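The command itself is not shown above; the standard tool for this on an NVIDIA CUDA image is `nvidia-smi`:

```shell
# Show GPU model, driver version, VRAM usage, and running processes.
nvidia-smi
```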
Step 8: Install Solar Pro
After completing the steps above, it's time to install Ollama and download Solar Pro from the Ollama library.
Website Link: https://ollama.com/library/solar-pro
Then run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
After the installation process is complete, run the following command to see a list of available commands:
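The command is not shown above; with Ollama installed, its subcommand list can be printed with the built-in help flag:

```shell
# List Ollama's available subcommands (serve, pull, run, list, ...).
ollama --help
```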
Next, run the following command to host the Solar Pro model so that it can be accessed and utilized efficiently.
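The hosting command is not shown above; Ollama's server is started with `ollama serve`. Note that on Linux the install script typically registers Ollama as a systemd service, so the server may already be running:

```shell
# Start the Ollama server (listens on http://localhost:11434 by default).
# On Linux the install script usually sets this up as a systemd service,
# so it may already be running in the background.
ollama serve
```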
Step 9: Install Solar Pro 22B Model
To install the Solar Pro 22B Model, run the following command:
Website Link: https://ollama.com/library/solar-pro:22b
ollama pull solar-pro:22b
Step 10: Run Solar Pro 22B Model
Now, you can run the model in the terminal using the following command and interact with your model:
ollama run solar-pro:22b
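Besides the interactive terminal session, the running Ollama server also exposes an HTTP API on port 11434, which you can query with `curl` (the prompt text below is just an example):

```shell
# Query the model through Ollama's HTTP API (default port 11434).
# "stream": false returns one JSON response instead of a token stream.
curl http://localhost:11434/api/generate -d '{
  "model": "solar-pro:22b",
  "prompt": "Explain depth up-scaling in one sentence.",
  "stream": false
}'
```

This makes it easy to call the model from scripts or applications instead of typing into the terminal.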
Conclusion
Solar Pro 22B is a powerful open model from Upstage that brings state-of-the-art AI capabilities to developers and researchers. By following this step-by-step guide, you can quickly deploy Solar Pro 22B on a GPU-powered Virtual Machine with NodeShift and harness its full potential. NodeShift provides an accessible, secure, and affordable platform for running your AI models efficiently, making it an excellent choice for experimenting with Solar Pro 22B and other cutting-edge AI tools.
For more information about NodeShift: