- Added support for running the CLI and the Ollama server via Docker
- Introduced tests for the local embeddings model and the standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER (see the sketch below)
This provides an alternative, OpenAI-compatible API and lets people experiment with running the application entirely locally.
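
As a rough illustration of the conditional launch, a Docker entrypoint could gate the Ollama server on the `LLM_PROVIDER` environment variable. This is only a sketch of the idea, not the actual implementation: the script name, the `ollama` value, and the startup wait are assumptions; `ollama serve` is the standard command for starting the Ollama API server.

```sh
#!/bin/sh
# entrypoint.sh (hypothetical): start Ollama only when it is the configured provider.
if [ "$LLM_PROVIDER" = "ollama" ]; then
  ollama serve &   # launch the local Ollama API server in the background
  sleep 2          # crude wait so the server can start accepting connections
fi
exec "$@"          # hand off to the CLI command passed to the container
```

With a setup like this, running the container with `-e LLM_PROVIDER=ollama` would bring up the local server alongside the CLI, while any other provider value skips it and the CLI talks to an external API instead.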