## What is everything-ai?
🤖 everything-ai is a multi-task, 100% local AI assistant that can perform several AI-related tasks.
## What's new?
🚀 I am more than thrilled to introduce the new functionalities added since the last release:
- 🦙 `llama.cpp-and-qdrant`: chat with your PDFs (backed by Qdrant as a vector database) through Hugging Face GGUF models running within llama.cpp
- 💬 `build-your-llm`: create a customizable chat LLM to interact with your Qdrant database with the power of Anthropic, Groq, OpenAI and Cohere models, just by providing an API key! You can also set the temperature and the maximum number of output tokens
- 🧬 `protein-folding`: the interface now shows the 3D structure of a protein along with its molecular one, no more static backbone images!
- 🏋️ `autotrain`: now supports direct upload of a config.yml file
- 🤗 Small fixes in the `retrieval-text-generation` RAG pipeline
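To give a feel for what the `build-your-llm` knobs control: a chat-completion request to providers like these typically carries a model name, a temperature and a cap on output tokens. The sketch below is purely illustrative (the function name, field names and value ranges are assumptions modeled on common chat-completion APIs, not everything-ai's actual internals):

```python
# Hypothetical sketch of the payload a temperature / max-tokens setting
# ultimately shapes. Field names mirror common chat-completion APIs;
# everything-ai's real code may differ.

def build_chat_request(model: str, prompt: str,
                       temperature: float = 0.7,
                       max_tokens: int = 512) -> dict:
    """Assemble a provider-agnostic chat-completion payload."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature is usually constrained to [0, 2]")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # higher values = more random sampling
        "max_tokens": max_tokens,    # hard cap on generated tokens
    }

payload = build_chat_request("example-model", "Summarize my PDF", 0.2, 256)
```

Lower temperatures (e.g. 0.2) make answers more deterministic, which tends to suit retrieval-grounded chat over a Qdrant database; the max-token cap mainly bounds cost and response length.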
## How can you use all of these features?
You just need a `docker compose up`! 🐋
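Concretely, that boils down to something like the following (a sketch assuming a standard Docker Compose setup; see the documentation linked below for the exact steps and any required environment configuration):

```shell
# Clone the repository and bring up the whole stack with Docker Compose.
# Assumes Docker and the Compose plugin are already installed.
git clone https://github.com/AstraBert/everything-ai.git
cd everything-ai
docker compose up
```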
## Where can I find everything I need?
- Get the source code (and leave a little ⭐ while you're there): https://github.com/AstraBert/everything-ai
- Get a quick start with the documentation: https://astrabert.github.io/everything-ai/
## Credits and inspiration
Shout-outs to Hugging Face, Gradio, Docker, AI at Meta, Abhishek Thakur, Qdrant, LangChain and Supabase for making all of this possible!
Inspired by: Jan, Cheshire Cat AI, LM Studio, Ollama and other awesome local AI solutions!