Eric Zhu 1578cd955f
Include all output to error output in docker jupyter code executor (#6572)
Currently, when an error occurs while executing code in the Docker Jupyter
executor, only the error output is returned.

This PR updates the handling of error output to include outputs from
previous code blocks that have been successfully executed.

Test it with this script:

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.code_executors.docker_jupyter import DockerJupyterCodeExecutor, DockerJupyterServer
from autogen_ext.tools.code_execution import PythonCodeExecutionTool
from autogen_agentchat.ui import Console
from autogen_core.code_executor import CodeBlock
from autogen_core import CancellationToken
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMessageTermination

# Download the dataset from https://www.kaggle.com/datasets/nelgiriyewithana/top-spotify-songs-2023
# and place it in the coding directory as `spotify-2023.csv`.
bind_dir = "./coding"

# Use a custom docker image with the Jupyter kernel gateway and data science libraries installed.
# Custom docker image: ds-kernel-gateway:latest -- you need to build this image yourself.
# Dockerfile:
# FROM quay.io/jupyter/docker-stacks-foundation:latest
# 
# # ensure that 'mamba' and 'fix-permissions' are on the PATH
# SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# 
# # Switch to the default notebook user
# USER ${NB_UID}
# 
# # Install data-science packages + kernel gateway
# RUN mamba install --quiet --yes \
#     numpy \
#     pandas \
#     scipy \
#     matplotlib \
#     scikit-learn \
#     seaborn \
#     jupyter_kernel_gateway \
#     ipykernel \
#     && mamba clean --all -f -y \
#     && fix-permissions "${CONDA_DIR}" \
#     && fix-permissions "/home/${NB_USER}"
# 
# # Allow you to set a token at runtime (or leave blank for no auth)
# ENV TOKEN=""
# 
# # Launch the Kernel Gateway, listening on all interfaces,
# # with the HTTP endpoint for listing kernels enabled
# CMD ["python", "-m", "jupyter", "kernelgateway", \
#     "--KernelGatewayApp.ip=0.0.0.0", \
#     "--KernelGatewayApp.port=8888", \
#     # "--KernelGatewayApp.auth_token=${TOKEN}", \
#     "--JupyterApp.answer_yes=true", \
#     "--JupyterWebsocketPersonality.list_kernels=true"]
# 
# EXPOSE 8888
# 
# WORKDIR "${HOME}"
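#
# Build the image from the directory containing this Dockerfile, for example:
#   docker build -t ds-kernel-gateway:latest .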

async def main():
    model = OpenAIChatCompletionClient(model="gpt-4.1")
    async with DockerJupyterServer(
        custom_image_name="ds-kernel-gateway:latest", 
        bind_dir=bind_dir,
    ) as server:
        async with DockerJupyterCodeExecutor(jupyter_server=server) as code_executor:
            await code_executor.execute_code_blocks([
                CodeBlock(code="import pandas as pd\ndf = pd.read_csv('/workspace/spotify-2023.csv', encoding='latin-1')", language="python"),
            ],
                cancellation_token=CancellationToken(),
            )
            tool = PythonCodeExecutionTool(
                executor=code_executor,
            )
            assistant = AssistantAgent(
                "assistant",
                model_client=model,
                system_message="You have access to a Jupyter kernel. Do not write all code at once. Write one code block, observe the output, and then write the next code block.",
                tools=[tool],
            )
            team = RoundRobinGroupChat(
                [assistant],
                termination_condition=TextMessageTermination(source="assistant"),
            )
            task = "Datafile has been loaded as variable `df`. First preview dataset. Then answer the following question: What is the highest streamed artist in the dataset?"
            await Console(team.run_stream(task=task))

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```

You can see that the agent recovers from the file encoding error and
successfully completes the query in the end.
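For a more minimal check of the new behavior, a sketch like the one below should also work: it runs a successful code block followed by a failing one and prints the failure result, which should now contain the first block's output as well. This assumes the default `DockerJupyterServer` image is available in your environment and that the returned result exposes `exit_code` and `output`.

```python
import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.docker_jupyter import DockerJupyterCodeExecutor, DockerJupyterServer


async def main():
    async with DockerJupyterServer() as server:
        async with DockerJupyterCodeExecutor(jupyter_server=server) as executor:
            result = await executor.execute_code_blocks(
                [
                    # This block succeeds and produces output.
                    CodeBlock(code="x = 21\nprint(x * 2)", language="python"),
                    # This block raises, so the overall execution fails.
                    CodeBlock(code="raise RuntimeError('boom')", language="python"),
                ],
                cancellation_token=CancellationToken(),
            )
            # With this change, the error output should also include "42"
            # from the first, successfully executed block.
            print(result.exit_code)
            print(result.output)


if __name__ == "__main__":
    asyncio.run(main())
```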
2025-05-21 17:27:46 +00:00

AutoGen Python packages


This directory works as a single uv workspace containing all project packages. See the packages directory for the full list.

Migrating from 0.2.x?

Please refer to the migration guide for how to migrate your code from 0.2.x to 0.4.x.

Development

TL;DR, run all checks with:

uv sync --all-extras
source .venv/bin/activate
poe check

Setup

uv is a package manager that assists in creating the necessary environment and installing packages to run AutoGen.

Note: To prevent version incompatibilities, use the same uv version that is running in CI. You can check which version CI uses by looking at the setup-uv action, here for example.

For example, to change your version to 0.5.18, run:

uv self update 0.5.18

Virtual Environment

During development, you may need to test changes made to any of the packages.
To do so, create a virtual environment where the AutoGen packages are installed based on the current state of the directory.
Run the following commands at the root level of the Python directory:

uv sync --all-extras
source .venv/bin/activate
  • uv sync --all-extras will create a .venv directory at the current level and install packages from the current directory along with any other dependencies. The --all-extras flag adds optional dependencies.
  • source .venv/bin/activate activates the virtual environment.

Common Tasks

To create a pull request (PR), ensure the following checks are met. You can run each check individually:

  • Format: poe format
  • Lint: poe lint
  • Test: poe test
  • Mypy: poe mypy
  • Pyright: poe pyright
  • Build docs: poe --directory ./packages/autogen-core/ docs-build
  • Auto rebuild+serve docs: poe --directory ./packages/autogen-core/ docs-serve
  • Check samples in python/samples: poe samples-code-check

Alternatively, you can run all the checks with:

poe check

Note

These need to be run in the virtual environment.

Syncing Dependencies

When you pull new changes, you may need to update the dependencies. To do so, first make sure you are in the virtual environment, and then in the python directory, run:

uv sync --all-extras

This will update the dependencies in the virtual environment.

Creating a New Package

To create a new package, similar to autogen-core or autogen-agentchat, use the following:

uv sync --python 3.12
source .venv/bin/activate
cookiecutter ./templates/new-package/