🦙 Tried Ollama's new Python 🐍 library inside a dockerized 🐳 environment with 4 CPUs and 8 GB of RAM allocated. It took 19 seconds to get a response 🚀. The last time I tried to run an LLM locally, it took 10 minutes to get a response 🤯 #llm #mistral #python #ollama
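For anyone curious, a minimal sketch of what that call looks like with the `ollama` Python package (assumes the Ollama server is running and `mistral` has already been pulled via `ollama pull mistral`; the prompt and timing print are my own additions):

```python
import time

def build_chat_payload(prompt: str) -> dict:
    # Payload shape expected by ollama.chat() (per the ollama-python README);
    # "mistral" must already be available locally.
    return {
        "model": "mistral",
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    import ollama  # imported lazily so build_chat_payload() works without a server

    t0 = time.monotonic()
    resp = ollama.chat(**build_chat_payload(prompt))
    # On the setup described above (4 CPUs / 8 GB RAM) this took ~19 s.
    print(f"responded in {time.monotonic() - t0:.1f}s")
    return resp["message"]["content"]

if __name__ == "__main__":
    print(ask("Why is the sky blue?"))
```

The Docker resource caps from the post map to `docker run --cpus=4 --memory=8g` on the container running the Ollama server.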