# AutoGen Studio FAQs
## Q: How do I specify the directory where files (e.g. the database) are stored?
A: You can specify the directory where files are stored by setting the `--appdir` argument when running the application. For example, `autogenstudio ui --appdir /path/to/folder`. This will store the database and other files in the specified directory, e.g. `/path/to/folder/database.sqlite`.
## Q: Where can I adjust the default skills, agent and workflow configurations?
A: You can modify agent configurations directly from the UI, or by editing the `init_db_samples` function in the `autogenstudio/database/utils.py` file, which is used to initialize the database.
## Q: If I want to reset the entire conversation with an agent, how do I go about it?
A: To reset your conversation history, delete the `database.sqlite` file in the `--appdir` directory. To delete user files, delete the `files` directory in the same location.
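For example, on a Unix-like system this might look like the following (substitute the directory you passed to `--appdir`; the path here is illustrative):
```bash
# Replace /path/to/folder with the value you passed to --appdir.
rm /path/to/folder/database.sqlite   # resets all conversation history
rm -rf /path/to/folder/files         # removes user-generated files
```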
## Q: Is it possible to view the output and messages generated by the agents during interactions?
A: Yes, you can view the generated messages in the debug console of the web UI, providing insights into the agent interactions. Alternatively, you can inspect the `database.sqlite` file for a comprehensive record of messages.
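As a rough sketch, assuming the `sqlite3` command-line tool is installed, you can browse the database directly; the exact table names depend on the AutoGen Studio schema:
```bash
# List the tables in the AutoGen Studio database.
sqlite3 /path/to/folder/database.sqlite ".tables"
# Dump the schema to see where messages are stored.
sqlite3 /path/to/folder/database.sqlite ".schema"
```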
## Q: Can I use other models with AutoGen Studio?
A: Yes. AutoGen standardizes on the OpenAI model API format, and you can use any API server that offers an OpenAI-compliant endpoint. In the AutoGen Studio UI, each agent has an `llm_config` field where you can input your model endpoint details, including `model`, `api key`, `base url`, `model type` and `api version`. For Azure OpenAI models, you can find these details in the Azure portal. Note that for Azure OpenAI, the `model name` is the deployment id or engine, and the `model type` is "azure".
For other open-source models, we recommend using a server such as vLLM, LM Studio, or Ollama to instantiate an OpenAI-compliant endpoint.
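As an illustrative check that your endpoint is OpenAI-compliant, you can send it a chat completion request directly; the base URL, port, and model name below are assumptions that depend on the server you run (e.g. vLLM defaults to `http://localhost:8000/v1`):
```bash
# Illustrative request against a local OpenAI-compatible endpoint.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "your-local-model",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```
If this returns a valid completion, the same `base url` and `model` values can be entered in the agent's `llm_config` in the UI.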
## Q: The server starts but I can't access the UI
A: If you are running the server on a remote machine (or a local machine that fails to resolve localhost correctly), you may need to specify the host address. By default, the host address is set to `localhost`. You can specify the host address using the `--host <host>` argument. For example, to start the server on port 8081 and bind it to all network interfaces so that it is accessible from other machines on the network, run the following command:
```bash
autogenstudio ui --port 8081 --host 0.0.0.0
```
## Q: Can I export my agent workflows for use in a Python app?
A: Yes. In the Build view, you can click the export button to save your agent workflow as a JSON file. This file can be imported in a Python application using the `WorkflowManager` class. For example:
```python
from autogenstudio import WorkflowManager
# load workflow from exported json workflow file.
workflow_manager = WorkflowManager(workflow="path/to/your/workflow_.json")
# run the workflow on a task
task_query = "What is the height of the Eiffel Tower? Don't write code, just respond to the question."
workflow_manager.run(message=task_query)
```
## Q: Can I deploy my agent workflows as APIs?
A: Yes. You can launch the workflow as an API endpoint from the command line using the `autogenstudio` command-line tool. For example:
```bash
autogenstudio serve --workflow=workflow.json --port=5000
```
Similarly, the `autogenstudio serve` command above can be wrapped in a Dockerfile and deployed on cloud services like Azure Container Apps or Azure Web Apps.
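For example, assuming you have written such a Dockerfile (named `Dockerfile.serve` here purely for illustration), building and running the container might look like:
```bash
# Illustrative only: Dockerfile.serve is a hypothetical file wrapping the serve command above.
docker build -f Dockerfile.serve -t autogenstudio-serve .
docker run -p 5000:5000 autogenstudio-serve
```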
## Q: Can I run AutoGen Studio in a Docker container?
A: Yes, you can run AutoGen Studio in a Docker container. You can build the Docker image using the provided [Dockerfile](https://github.com/microsoft/autogen/blob/autogenstudio/samples/apps/autogen-studio/Dockerfile), reproduced below, and then run the resulting container:
```Dockerfile
FROM python:3.10
WORKDIR /code
RUN pip install -U gunicorn autogenstudio
RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH \
    AUTOGENSTUDIO_APPDIR=/home/user/app
WORKDIR $HOME/app
COPY --chown=user . $HOME/app
CMD gunicorn -w $((2 * $(getconf _NPROCESSORS_ONLN) + 1)) --timeout 12600 -k uvicorn.workers.UvicornWorker autogenstudio.web.app:app --bind "0.0.0.0:8081"
```
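For example, the image can be built and the container run with the standard Docker commands below (the `autogenstudio` image tag is illustrative; port 8081 matches the bind address in the Dockerfile above):
```bash
docker build -t autogenstudio .
docker run -it -p 8081:8081 autogenstudio
```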
We recommend using Gunicorn as the application server for improved performance. To run AutoGen Studio with Gunicorn, you can use the following command:
```bash
gunicorn -w $((2 * $(getconf _NPROCESSORS_ONLN) + 1)) --timeout 12600 -k uvicorn.workers.UvicornWorker autogenstudio.web.app:app --bind "0.0.0.0:8081"
```