diff --git a/.github/workflows/deploy-website.yml b/.github/workflows/deploy-website.yml
index f90a78f87..7198a311d 100644
--- a/.github/workflows/deploy-website.yml
+++ b/.github/workflows/deploy-website.yml
@@ -37,7 +37,7 @@ jobs:
- name: pydoc-markdown install
run: |
python -m pip install --upgrade pip
- pip install pydoc-markdown
+ pip install pydoc-markdown pyyaml termcolor
- name: pydoc-markdown run
run: |
pydoc-markdown
@@ -50,6 +50,9 @@ jobs:
- name: quarto run
run: |
quarto render .
+ - name: Process notebooks
+ run: |
+ python process_notebooks.py
- name: Test Build
run: |
if [ -e yarn.lock ]; then
@@ -80,7 +83,7 @@ jobs:
- name: pydoc-markdown install
run: |
python -m pip install --upgrade pip
- pip install pydoc-markdown
+ pip install pydoc-markdown pyyaml termcolor
- name: pydoc-markdown run
run: |
pydoc-markdown
@@ -93,6 +96,9 @@ jobs:
- name: quarto run
run: |
quarto render .
+ - name: Process notebooks
+ run: |
+ python process_notebooks.py
- name: Build website
run: |
if [ -e yarn.lock ]; then
diff --git a/notebook/agentchat_RetrieveChat.ipynb b/notebook/agentchat_RetrieveChat.ipynb
index d229f53da..b40787fd8 100644
--- a/notebook/agentchat_RetrieveChat.ipynb
+++ b/notebook/agentchat_RetrieveChat.ipynb
@@ -5,16 +5,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- ""
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "\n",
- "# Auto Generated Agent Chat: Using RetrieveChat for Retrieve Augmented Code Generation and Question Answering\n",
+ "\n",
+ "\n",
+ "# Using RetrieveChat for Retrieve Augmented Code Generation and Question Answering\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
@@ -24,35 +21,24 @@
"## Table of Contents\n",
"We'll demonstrate six examples of using RetrieveChat for code generation and question answering:\n",
"\n",
- "[Example 1: Generate code based off docstrings w/o human feedback](#example-1)\n",
+ "- [Example 1: Generate code based off docstrings w/o human feedback](#example-1)\n",
+ "- [Example 2: Answer a question based off docstrings w/o human feedback](#example-2)\n",
+ "- [Example 3: Generate code based off docstrings w/ human feedback](#example-3)\n",
+ "- [Example 4: Answer a question based off docstrings w/ human feedback](#example-4)\n",
+ "- [Example 5: Solve comprehensive QA problems with RetrieveChat's unique feature `Update Context`](#example-5)\n",
+ "- [Example 6: Solve comprehensive QA problems with customized prompt and few-shot learning](#example-6)\n",
"\n",
- "[Example 2: Answer a question based off docstrings w/o human feedback](#example-2)\n",
+ "\\:\\:\\:info Requirements\n",
"\n",
- "[Example 3: Generate code based off docstrings w/ human feedback](#example-3)\n",
+ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
"\n",
- "[Example 4: Answer a question based off docstrings w/ human feedback](#example-4)\n",
- "\n",
- "[Example 5: Solve comprehensive QA problems with RetrieveChat's unique feature `Update Context`](#example-5)\n",
- "\n",
- "[Example 6: Solve comprehensive QA problems with customized prompt and few-shot learning](#example-6)\n",
- "\n",
- "\n",
- "\n",
- "## Requirements\n",
- "\n",
- "AutoGen requires `Python>=3.8`. To run this notebook example, please install the [retrievechat] option.\n",
"```bash\n",
- "pip install \"pyautogen[retrievechat]>=0.2.3\" \"flaml[automl]\"\n",
- "```"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [],
- "source": [
- "# %pip install \"pyautogen[retrievechat]>=0.2.3\" \"flaml[automl]\""
+ "pip install pyautogen[retrievechat] flaml[automl]\n",
+ "```\n",
+ "\n",
+ "For more information, please refer to the [installation guide](/docs/installation/).\n",
+ "\n",
+ "\\:\\:\\:\n"
]
},
{
@@ -94,7 +80,6 @@
"\n",
"config_list = autogen.config_list_from_json(\n",
" env_or_file=\"OAI_CONFIG_LIST\",\n",
- " file_location=\".\",\n",
" filter_dict={\n",
" \"model\": {\n",
" \"gpt-4\",\n",
@@ -116,35 +101,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). Only the gpt-4 and gpt-3.5-turbo models are kept in the list based on the filter condition.\n",
+ "\\:\\:\\:tip\n",
"\n",
- "The config list looks like the following:\n",
- "```python\n",
- "config_list = [\n",
- " {\n",
- " 'model': 'gpt-4',\n",
- " 'api_key': '',\n",
- " },\n",
- " {\n",
- " 'model': 'gpt-4',\n",
- " 'api_key': '',\n",
- " 'base_url': '',\n",
- " 'api_type': 'azure',\n",
- " 'api_version': '2023-06-01-preview',\n",
- " },\n",
- " {\n",
- " 'model': 'gpt-3.5-turbo',\n",
- " 'api_key': '',\n",
- " 'base_url': '',\n",
- " 'api_type': 'azure',\n",
- " 'api_version': '2023-06-01-preview',\n",
- " },\n",
- "]\n",
- "```\n",
+ "Learn more about the various ways to configure LLM endpoints [here](/docs/llm_endpoint_configuration).\n",
"\n",
- "If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n",
- "\n",
- "You can set the value of config_list in other ways you prefer, e.g., loading from a YAML file."
+ "\\:\\:\\:"
]
},
{
@@ -230,10 +191,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "\n",
"### Example 1\n",
"\n",
- "[back to top](#toc)\n",
+ "[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to help generate sample code and automatically run the code and fix errors if there is any.\n",
"\n",
@@ -537,10 +497,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "\n",
"### Example 2\n",
"\n",
- "[back to top](#toc)\n",
+ "[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to answer a question that is not related to code generation.\n",
"\n",
@@ -1092,10 +1051,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "\n",
"### Example 3\n",
"\n",
- "[back to top](#toc)\n",
+ "[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to help generate sample code and ask for human-in-loop feedbacks.\n",
"\n",
@@ -1506,10 +1464,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "\n",
"### Example 4\n",
"\n",
- "[back to top](#toc)\n",
+ "[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to answer a question and ask for human-in-loop feedbacks.\n",
"\n",
@@ -2065,10 +2022,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "\n",
"### Example 5\n",
"\n",
- "[back to top](#toc)\n",
+ "[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to answer questions for [NaturalQuestion](https://ai.google.com/research/NaturalQuestions) dataset.\n",
"\n",
@@ -2665,10 +2621,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "\n",
"### Example 6\n",
"\n",
- "[back to top](#toc)\n",
+ "[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to answer multi-hop questions for [2WikiMultihopQA](https://github.com/Alab-NII/2wikimultihop) dataset with customized prompt and few-shot learning.\n",
"\n",
diff --git a/notebook/agentchat_auto_feedback_from_code_execution.ipynb b/notebook/agentchat_auto_feedback_from_code_execution.ipynb
index dd5b1942f..b05242460 100644
--- a/notebook/agentchat_auto_feedback_from_code_execution.ipynb
+++ b/notebook/agentchat_auto_feedback_from_code_execution.ipynb
@@ -1,15 +1,6 @@
{
"cells": [
{
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- ""
- ]
- },
- {
- "attachments": {},
"cell_type": "markdown",
"metadata": {
"slideshow": {
@@ -17,45 +8,29 @@
}
},
"source": [
- "# Auto Generated Agent Chat: Task Solving with Code Generation, Execution & Debugging\n",
+ "\n",
+ "\n",
+ "# Task Solving with Code Generation, Execution and Debugging\n",
"\n",
"AutoGen offers conversable LLM agents, which can be used to solve various tasks with human or automatic feedback, including tasks that require using tools via code.\n",
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to write code and execute the code. Here `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for the human user to execute the code written by `AssistantAgent`, or automatically execute the code. Depending on the setting of `human_input_mode` and `max_consecutive_auto_reply`, the `UserProxyAgent` either solicits feedback from the human user or returns auto-feedback based on the result of code execution (success or failure and corresponding outputs) to `AssistantAgent`. `AssistantAgent` will debug the code and suggest new code if the result contains error. The two agents keep communicating to each other until the task is done.\n",
"\n",
- "## Requirements\n",
+ "\\:\\:\\:info Requirements\n",
"\n",
- "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n",
+ "Install `pyautogen`:\n",
"```bash\n",
"pip install pyautogen\n",
- "```"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {
- "execution": {
- "iopub.execute_input": "2023-02-13T23:40:52.317406Z",
- "iopub.status.busy": "2023-02-13T23:40:52.316561Z",
- "iopub.status.idle": "2023-02-13T23:40:52.321193Z",
- "shell.execute_reply": "2023-02-13T23:40:52.320628Z"
- }
- },
- "outputs": [],
- "source": [
- "# %pip install pyautogen>=0.2.3"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Set your API Endpoint\n",
+ "```\n",
"\n",
- "The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
+ "For more information, please refer to the [installation guide](/docs/installation/).\n",
+ "\n",
+ "\\:\\:\\:"
]
},
{
@@ -84,33 +59,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). Only the gpt-4 models are kept in the list based on the filter condition.\n",
+ "\\:\\:\\:tip\n",
"\n",
- "The config list looks like the following:\n",
- "```python\n",
- "config_list = [\n",
- " {\n",
- " 'model': 'gpt-4',\n",
- " 'api_key': '',\n",
- " },\n",
- " {\n",
- " 'model': 'gpt-4',\n",
- " 'api_key': '',\n",
- " 'base_url': '',\n",
- " 'api_type': 'azure',\n",
- " 'api_version': '2023-06-01-preview',\n",
- " },\n",
- " {\n",
- " 'model': 'gpt-4-32k',\n",
- " 'api_key': '',\n",
- " 'base_url': '',\n",
- " 'api_type': 'azure',\n",
- " 'api_version': '2023-06-01-preview',\n",
- " },\n",
- "]\n",
- "```\n",
+ "Learn more about the various ways to configure LLM endpoints [here](/docs/llm_endpoint_configuration).\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods."
+ "\\:\\:\\:"
]
},
{
diff --git a/notebook/agentchat_function_call_async.ipynb b/notebook/agentchat_function_call_async.ipynb
index bb6fa48d6..e9831b8fc 100644
--- a/notebook/agentchat_function_call_async.ipynb
+++ b/notebook/agentchat_function_call_async.ipynb
@@ -1,12 +1,17 @@
{
"cells": [
{
- "attachments": {},
"cell_type": "markdown",
"id": "ae1f50ec",
"metadata": {},
"source": [
- ""
+ "\n",
+ "\n",
+ "# Task Solving with Provided Tools as Functions (Asynchronous Function Calls)\n"
]
},
{
@@ -15,48 +20,20 @@
"id": "9a71fa36",
"metadata": {},
"source": [
- "# Auto Generated Agent Chat: Task Solving with Provided Tools as Functions (Asynchronous Function Calls)\n",
- "\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the new feature of OpenAI models (in model version 0613). A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n",
"\n",
- "## Requirements\n",
+ "\\:\\:\\:info Requirements\n",
"\n",
- "AutoGen requires `Python>=3.8`. To run this notebook example, please install `pyautogen`:\n",
+ "Install `pyautogen`:\n",
"```bash\n",
"pip install pyautogen\n",
- "```"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "id": "2b803c17",
- "metadata": {},
- "outputs": [],
- "source": [
- "# %pip install \"pyautogen>=0.2.6\""
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "5ebd2397",
- "metadata": {},
- "source": [
- "## Set your API Endpoint\n",
+ "```\n",
"\n",
- "The [`config_list_from_models`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_models) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints for the provided list of models. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
+ "For more information, please refer to the [installation guide](/docs/installation/).\n",
"\n",
- "- OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
- "- Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
- "- Azure OpenAI API base: os.environ[\"AZURE_OPENAI_API_BASE\"] or `aoai_api_base_file=\"base_aoai.txt\"`. Multiple bases can be stored, one per line.\n",
- "\n",
- "It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base.\n",
- "If you open this notebook in google colab, you can upload your files by clicking the file icon on the left panel and then choosing \"upload file\" icon.\n",
- "\n",
- "The following code excludes Azure OpenAI endpoints from the config list because some endpoints don't support functions yet. Remove the `exclude` argument if they do."
+ "\\:\\:\\:\n"
]
},
{
@@ -73,13 +50,7 @@
"import autogen\n",
"from autogen.cache import Cache\n",
"\n",
- "config_list = autogen.config_list_from_json(\n",
- " \"OAI_CONFIG_LIST\",\n",
- " file_location=\".\",\n",
- " filter_dict={\n",
- " \"model\": [\"gpt-4\"],\n",
- " },\n",
- ")"
+ "config_list = autogen.config_list_from_json(env_or_file=\"OAI_CONFIG_LIST\")"
]
},
{
@@ -88,23 +59,11 @@
"id": "92fde41f",
"metadata": {},
"source": [
- "The config list looks like the following:\n",
- "```python\n",
- "config_list = [\n",
- " {\n",
- " 'model': 'gpt-4',\n",
- " 'api_key': '',\n",
- " }, # OpenAI API endpoint for gpt-4\n",
- " {\n",
- " 'model': 'gpt-3.5-turbo',\n",
- " 'api_key': '',\n",
- " }, # OpenAI API endpoint for gpt-3.5-turbo\n",
- " {\n",
- " 'model': 'gpt-3.5-turbo-16k',\n",
- " 'api_key': '',\n",
- " }, # OpenAI API endpoint for gpt-3.5-turbo-16k\n",
- "]\n",
- "```\n"
+ "\\:\\:\\:tip\n",
+ "\n",
+ "Learn more about the various ways to configure LLM endpoints [here](/docs/llm_endpoint_configuration).\n",
+ "\n",
+ "\\:\\:\\:"
]
},
{
diff --git a/notebook/contributing.md b/notebook/contributing.md
new file mode 100644
index 000000000..81e6a8e34
--- /dev/null
+++ b/notebook/contributing.md
@@ -0,0 +1,56 @@
+# Contributing
+
+## How to get a notebook displayed on the website
+
+Ensure the first cell is markdown and, before absolutely anything else, include the following YAML within an HTML comment (the tags and description shown here are placeholders):
+
+```markdown
+<!--
+tags: ["code generation", "debugging"]
+description: |
+    A brief description of the notebook.
+-->
+```
+
+The `tags` field is a list of tags that will be used to categorize the notebook. The `description` field is a brief description of the notebook.
+
+## Best practices for authoring notebooks
+
+The following points are best practices for authoring notebooks to ensure consistency and ease of use for the website.
+
+- The Colab button will be automatically generated on the website for all notebooks where it is missing. Going forward, it is recommended not to include the Colab button in the notebook itself.
+- Ensure the title header is an `h1` header, i.e. `#`
+- Don't put anything between the YAML comment and the title header (see the example below)
+
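+Putting these together, a complete first cell might look like the following (the tags, description, and title are illustrative):
+
+```markdown
+<!--
+tags: ["code generation"]
+description: |
+    A brief description of what the notebook demonstrates.
+-->
+
+# Notebook Title
+```
+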
+### Consistency for installation and LLM config
+
+You don't need to explain in depth how to install AutoGen. Unless there are specific instructions for the notebook, just use the following markdown snippet:
+
+\:\:\:info Requirements
+
+Install `pyautogen`:
+```bash
+pip install pyautogen
+```
+
+For more information, please refer to the [installation guide](/docs/installation/).
+
+\:\:\:
+
+When specifying the config list, it is best to use approximately the following code to ensure consistency:
+
+```python
+config_list = autogen.config_list_from_json(
+ env_or_file="OAI_CONFIG_LIST",
+)
+```
+
+Then after the code cell where this is used, include the following markdown snippet:
+
+```
+\:\:\:tip
+
+Learn more about the various ways to configure LLM endpoints [here](/docs/llm_endpoint_configuration).
+
+\:\:\:
+```
diff --git a/website/.gitignore b/website/.gitignore
index 495a9753d..0234ebd06 100644
--- a/website/.gitignore
+++ b/website/.gitignore
@@ -9,6 +9,7 @@ package-lock.json
.docusaurus
.cache-loader
docs/reference
+/docs/notebooks
docs/llm_endpoint_configuration.mdx
diff --git a/website/README.md b/website/README.md
index 6aa24b7fd..fdc4e5162 100644
--- a/website/README.md
+++ b/website/README.md
@@ -14,7 +14,7 @@ npm install --global yarn
## Installation
```console
-pip install pydoc-markdown
+pip install pydoc-markdown pyyaml termcolor
cd website
yarn install
```
@@ -34,6 +34,7 @@ Navigate to the `website` folder and run:
```console
pydoc-markdown
quarto render ./docs
+python ./process_notebooks.py
yarn start
```
diff --git a/website/docs/Gallery.mdx b/website/docs/Gallery.mdx
index 1b07b1381..6a9c09752 100644
--- a/website/docs/Gallery.mdx
+++ b/website/docs/Gallery.mdx
@@ -1,4 +1,5 @@
import GalleryPage from '../src/components/GalleryPage';
+import galleryData from "../src/data/gallery.json";
# Gallery
@@ -7,7 +8,7 @@ This page contains a list of demos that use AutoGen in various applications from
**Contribution guide:**
Built something interesting with AutoGen? Submit a PR to add it to the list! See the [Contribution Guide below](#contributing) for more details.
-<GalleryPage />
+<GalleryPage galleryItems={galleryData} />
## Contributing
diff --git a/website/docs/notebooks.mdx b/website/docs/notebooks.mdx
new file mode 100644
index 000000000..ec38a95bb
--- /dev/null
+++ b/website/docs/notebooks.mdx
@@ -0,0 +1,11 @@
+import {findAllNotebooks} from '../src/components/NotebookUtils';
+import GalleryPage from '../src/components/GalleryPage';
+
+# Notebooks
+
+This page contains a collection of notebooks that demonstrate how to use
+AutoGen. The notebooks are tagged with the topics they cover.
+For example, a notebook that demonstrates how to use function calling will
+be tagged with `function call`.
+
+<GalleryPage galleryItems={findAllNotebooks()} />
diff --git a/website/docusaurus.config.js b/website/docusaurus.config.js
index b12f45fc1..9d69ede18 100644
--- a/website/docusaurus.config.js
+++ b/website/docusaurus.config.js
@@ -74,6 +74,12 @@ module.exports = {
position: "left",
label: "Examples",
},
+ // Uncomment below to add Notebooks to the navbar
+ // {
+ // to: "docs/notebooks",
+ // position: "left",
+ // label: "Notebooks",
+ // },
{
label: "Resources",
type: "dropdown",
diff --git a/website/process_notebooks.py b/website/process_notebooks.py
new file mode 100644
index 000000000..5873c5dfa
--- /dev/null
+++ b/website/process_notebooks.py
@@ -0,0 +1,275 @@
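+"""Render the repository's Jupyter notebooks as mdx pages for the Docusaurus website using Quarto."""
+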
+import sys
+from pathlib import Path
+import subprocess
+import argparse
+import shutil
+import json
+import typing
+import concurrent.futures
+
+try:
+ import yaml
+except ImportError:
+ print("pyyaml not found.\n\nPlease install pyyaml:\n\tpip install pyyaml\n")
+ sys.exit(1)
+
+try:
+ from termcolor import colored
+except ImportError:
+
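+    # Fall back to a no-op so the script still runs when termcolor is not installed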
+ def colored(x, *args, **kwargs):
+ return x
+
+
+class Result:
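+    """Container for a subprocess return code, stdout, and stderr."""
+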
+ def __init__(self, returncode: int, stdout: str, stderr: str):
+ self.returncode = returncode
+ self.stdout = stdout
+ self.stderr = stderr
+
+
+def check_quarto_bin(quarto_bin: str = "quarto"):
+ """Check if quarto is installed."""
+ try:
+ subprocess.check_output([quarto_bin, "--version"])
+ except FileNotFoundError:
+ print("Quarto is not installed. Please install it from https://quarto.org")
+ sys.exit(1)
+
+
+def notebooks_target_dir(website_directory: Path) -> Path:
+ """Return the target directory for notebooks."""
+ return website_directory / "docs" / "notebooks"
+
+
+def extract_yaml_from_notebook(notebook: Path) -> typing.Optional[typing.Dict]:
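+    """Return the YAML front matter from a notebook's first cell, or None if it is missing."""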
+ with open(notebook, "r") as f:
+ content = f.read()
+
+ json_content = json.loads(content)
+ first_cell = json_content["cells"][0]
+
+    # <!-- and --> must exist on lines on their own
+ if first_cell["cell_type"] != "markdown":
+ return None
+
+    lines = [line.rstrip("\n") for line in first_cell["source"]]
+    if "<!--" not in lines:
+ return None
+
+ closing_arrow_idx = lines.index("-->")
+
+ front_matter_lines = lines[1:closing_arrow_idx]
+ front_matter = yaml.safe_load("\n".join(front_matter_lines))
+ return front_matter
+
+
+def skip_reason_or_none_if_ok(notebook: Path) -> typing.Optional[str]:
+ """Return a reason to skip the notebook, or None if it should not be skipped."""
+ with open(notebook, "r") as f:
+ content = f.read()
+
+ # Load the json and get the first cell
+ json_content = json.loads(content)
+ first_cell = json_content["cells"][0]
+
+    # <!-- and --> must exist on lines on their own
+ if first_cell["cell_type"] != "markdown":
+ return "first cell is not markdown"
+
+    lines = [line.rstrip("\n") for line in first_cell["source"]]
+    if "<!--" not in lines:
+        return "no opening <!-- found, or it is not on a line on its own"
+
+    if "-->" not in lines:
+        return "no closing --> found, or it is not on a line on its own"
+
+ try:
+ front_matter = extract_yaml_from_notebook(notebook)
+ except yaml.YAMLError as e:
+ return colored(f"Failed to parse front matter in {notebook.name}: {e}", "red")
+
+ # Should not be none at this point as we have already done the same checks as in extract_yaml_from_notebook
+ assert front_matter is not None, f"Front matter is None for {notebook.name}"
+
+ if "skip" in front_matter and front_matter["skip"] is True:
+ return "skip is set to true"
+
+ if "tags" not in front_matter:
+ return "tags is not in front matter"
+
+ if "description" not in front_matter:
+ return "description is not in front matter"
+
+ # Make sure tags is a list of strings
+ if not all([isinstance(tag, str) for tag in front_matter["tags"]]):
+ return "tags must be a list of strings"
+
+ # Make sure description is a string
+ if not isinstance(front_matter["description"], str):
+ return "description must be a string"
+
+ return None
+
+
+def process_notebook(src_notebook: Path, dest_dir: Path, quarto_bin: str, dry_run: bool) -> str:
+ """Process a single notebook."""
+ reason_or_none = skip_reason_or_none_if_ok(src_notebook)
+ if reason_or_none:
+ return colored(f"Skipping {src_notebook.name}, reason: {reason_or_none}", "yellow")
+
+ target_mdx_file = dest_dir / f"{src_notebook.stem}.mdx"
+ intermediate_notebook = dest_dir / f"{src_notebook.stem}.ipynb"
+
+    # If the target mdx file already exists, only re-render when the source notebook is newer
+ if target_mdx_file.exists():
+ if target_mdx_file.stat().st_mtime > src_notebook.stat().st_mtime:
+ return colored(f"Skipping {src_notebook.name}, as target file is newer", "blue")
+
+ if dry_run:
+ return colored(f"Would process {src_notebook.name}", "green")
+
+ # Copy notebook to target dir
+ # The reason we copy the notebook is that quarto does not support rendering from a different directory
+ shutil.copy(src_notebook, intermediate_notebook)
+
+ # Check if another file has to be copied too
+ # Solely added for the purpose of agent_library_example.json
+ front_matter = extract_yaml_from_notebook(src_notebook)
+ # Should not be none at this point as we have already done the same checks as in extract_yaml_from_notebook
+ assert front_matter is not None, f"Front matter is None for {src_notebook.name}"
+ if "extra_files_to_copy" in front_matter:
+ for file in front_matter["extra_files_to_copy"]:
+ shutil.copy(src_notebook.parent / file, dest_dir / file)
+
+ # Capture output
+ result = subprocess.run(
+ [quarto_bin, "render", intermediate_notebook], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
+ )
+ if result.returncode != 0:
+ return colored(f"Failed to render {intermediate_notebook}", "red") + f"\n{result.stderr}" + f"\n{result.stdout}"
+
+ # Unlink intermediate files
+ intermediate_notebook.unlink()
+
+ if "extra_files_to_copy" in front_matter:
+ for file in front_matter["extra_files_to_copy"]:
+ (dest_dir / file).unlink()
+
+ # Post process the file
+ post_process_mdx(target_mdx_file)
+
+ return colored(f"Processed {src_notebook.name}", "green")
+
+
+# rendered_notebook is the final mdx file
+def post_process_mdx(rendered_mdx: Path) -> None:
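+    """Add front matter, a GitHub link, and a Colab badge to a Quarto-rendered mdx file."""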
+ notebook_name = f"{rendered_mdx.stem}.ipynb"
+ with open(rendered_mdx, "r") as f:
+ content = f.read()
+
+ # Check for existence of "export const quartoRawHtml", this indicates there was a front matter line in the file
+ if "export const quartoRawHtml" not in content:
+ raise ValueError(f"File {rendered_mdx} does not contain 'export const quartoRawHtml'")
+
+    # Extract the text between <!-- and -->
+    front_matter = content.split("<!--")[1].split("-->")[0]
+ # Strip empty lines before and after
+ front_matter = "\n".join([line for line in front_matter.split("\n") if line.strip() != ""])
+
+ # add file path
+ front_matter += f"\nsource_notebook: /notebook/{notebook_name}"
+ # Custom edit url
+ front_matter += f"\ncustom_edit_url: https://github.com/microsoft/autogen/edit/main/notebook/{notebook_name}"
+
+    # Inject a GitHub link and, if needed, a Colab badge into the content directly after the markdown title
+ # Find the end of the line with the title
+ title_end = content.find("\n", content.find("#"))
+
+ # Extract page title
+ title = content[content.find("#") + 1 : content.find("\n", content.find("#"))].strip()
+
+ front_matter += f"\ntitle: {title}"
+
+ github_link = f"https://github.com/microsoft/autogen/blob/main/notebook/{notebook_name}"
+ content = (
+ content[:title_end]
+ + "\n[]("
+ + github_link
+ + ")"
+ + content[title_end:]
+ )
+
+ # If no colab link is present, insert one
+ if "colab-badge.svg" not in content:
+ content = (
+ content[:title_end]
+ + "\n[](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/"
+ + notebook_name
+ + ")"
+ + content[title_end:]
+ )
+
+ # Rewrite the content as
+ # ---
+ # front_matter
+ # ---
+ # content
+ new_content = f"---\n{front_matter}\n---\n{content}"
+ with open(rendered_mdx, "w") as f:
+ f.write(new_content)
+
+
+def path(path_str: str) -> Path:
+ """Return a Path object."""
+ return Path(path_str)
+
+
+def main():
+ script_dir = Path(__file__).parent.absolute()
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ "--notebook-directory",
+ type=path,
+ help="Directory containing notebooks to process",
+ default=script_dir / "../notebook",
+ )
+ parser.add_argument(
+ "--website-directory", type=path, help="Root directory of docusarus website", default=script_dir
+ )
+ parser.add_argument("--quarto-bin", help="Path to quarto binary", default="quarto")
+ parser.add_argument("--dry-run", help="Don't render", action="store_true")
+ parser.add_argument("--workers", help="Number of workers to use", type=int, default=-1)
+
+ args = parser.parse_args()
+
+ if args.workers == -1:
+ args.workers = None
+
+ check_quarto_bin(args.quarto_bin)
+
+ if not notebooks_target_dir(args.website_directory).exists():
+ notebooks_target_dir(args.website_directory).mkdir(parents=True)
+
+ with concurrent.futures.ProcessPoolExecutor(max_workers=args.workers) as executor:
+ futures = [
+ executor.submit(
+ process_notebook, f, notebooks_target_dir(args.website_directory), args.quarto_bin, args.dry_run
+ )
+ for f in args.notebook_directory.glob("*.ipynb")
+ ]
+ for future in concurrent.futures.as_completed(futures):
+ print(future.result())
+
+
+if __name__ == "__main__":
+ main()
diff --git a/website/src/components/GalleryPage.js b/website/src/components/GalleryPage.js
index 26b2bc7aa..839cb7279 100644
--- a/website/src/components/GalleryPage.js
+++ b/website/src/components/GalleryPage.js
@@ -1,12 +1,11 @@
import React, { useEffect, useState, useCallback } from "react";
-import galleryData from "../data/gallery.json";
import { Card, List, Select, Typography } from "antd";
import { useLocation, useHistory } from "react-router-dom";
const { Option } = Select;
const { Paragraph, Title } = Typography;
-const GalleryPage = () => {
+const GalleryPage = (props) => {
const location = useLocation();
const history = useHistory();
@@ -28,15 +27,23 @@ const GalleryPage = () => {
const TagsView = ({ tags }) => (