Running Mistral LLM locally with Ollama's new Python library

parmarjatin4911@gmail.com · Jan 28 · Dev Community

I ran the Mistral LLM locally with Ollama's 🦙 new Python 🐍 library inside a dockerized 🐳 environment with an allocation of 4 CPUs and 8 GB of RAM. It took 19 seconds to get a response 🚀. The last time I tried to run an LLM locally, it took 10 minutes to get a response 🤯.

#llm #mistral #python #ollama
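For context, here is a minimal sketch of what such a setup could look like. The model name (mistral) and the 4 CPU / 8 GB limits come from the post; the `docker run` wiring, the prompt, and the timing code are illustrative assumptions, not the author's exact script.

```python
# Minimal sketch, assuming the official ollama/ollama Docker image and
# the `ollama` Python package. Start the server first, capped at the
# resources mentioned in the post:
#
#   docker run -d --name ollama --cpus=4 --memory=8g \
#       -p 11434:11434 -v ollama:/root/.ollama ollama/ollama
#   docker exec ollama ollama pull mistral
import time

import ollama

start = time.time()

# Send a single chat message to the locally served Mistral model
# (the library talks to http://localhost:11434 by default).
response = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

print(response["message"]["content"])
print(f"Response took {time.time() - start:.1f} s")
```

Note that the first request typically includes the time to load the model weights into memory, so a measurement like the 19 seconds above will usually drop on subsequent calls.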
