pre-commit version update and a few spelling fixes (#2913)

Davor Runje 2024-06-12 08:26:22 +02:00 committed by GitHub
parent 53a59ddac6
commit a0787aced3
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
7 changed files with 11 additions and 11 deletions


@@ -8,7 +8,7 @@ ci:
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.5.0
+    rev: v4.6.0
     hooks:
       - id: check-added-large-files
       - id: check-ast
@@ -23,21 +23,21 @@ repos:
       - id: end-of-file-fixer
       - id: no-commit-to-branch
   - repo: https://github.com/psf/black
-    rev: 24.3.0
+    rev: 24.4.2
     hooks:
       - id: black
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.3.4
+    rev: v0.4.8
     hooks:
       - id: ruff
         types_or: [ python, pyi, jupyter ]
         args: ["--fix", "--ignore=E402"]
         exclude: notebook/agentchat_databricks_dbrx.ipynb
   - repo: https://github.com/codespell-project/codespell
-    rev: v2.2.6
+    rev: v2.3.0
     hooks:
       - id: codespell
-        args: ["-L", "ans,linar,nam,tread,ot,"]
+        args: ["-L", "ans,linar,nam,tread,ot,assertIn,dependin,socio-economic"]
         exclude: |
           (?x)^(
             pyproject.toml |
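Aside from the new ignore words, the updated `-L` value also drops the old trailing comma. A quick sketch of why that matters, assuming the ignore list is parsed by splitting on commas (as comma-separated CLI options typically are):

```python
# The old vs. new "-L" ignore lists from the codespell hook above.
old_arg = "ans,linar,nam,tread,ot,"
new_arg = "ans,linar,nam,tread,ot,assertIn,dependin,socio-economic"

# Splitting on commas shows the old value carried an empty trailing
# entry, which the new value no longer has.
assert "" in old_arg.split(",")
assert "" not in new_arg.split(",")
assert "assertIn" in new_arg.split(",")
```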


@@ -151,7 +151,7 @@ class GeminiClient:
         if not model_name:
             raise ValueError(
                 "Please provide a model name for the Gemini Client. "
-                "You can configurate it in the OAI Config List file. "
+                "You can configure it in the OAI Config List file. "
                 "See this [LLM configuration tutorial](https://microsoft.github.io/autogen/docs/topics/llm_configuration/) for more details."
             )
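For context, a config list entry that satisfies this check might look like the sketch below. Only the `model` key is implied by the guard in the diff; the other field names are illustrative assumptions, not taken from this commit:

```python
# Hypothetical OAI_CONFIG_LIST entry for the Gemini client.
# "api_key" and "api_type" are placeholder field names.
config_list = [
    {
        "model": "gemini-pro",
        "api_key": "<your-google-api-key>",
        "api_type": "google",
    }
]

# The guard above raises ValueError only when the model name is
# missing or empty, so any non-empty "model" value passes it.
assert all(entry.get("model") for entry in config_list)
```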


@@ -74,7 +74,7 @@
     "- Scientist: Read the papers and write a summary.\n",
     "\n",
     "\n",
-    "In the Figure, we define a simple workflow for research with 4 states: Init, Retrieve, Reserach and End. Within each state, we will call different agents to perform the tasks.\n",
+    "In the Figure, we define a simple workflow for research with 4 states: Init, Retrieve, Research and End. Within each state, we will call different agents to perform the tasks.\n",
     "- Init: We use the initializer to start the workflow.\n",
     "- Retrieve: We will first call the coder to write code and then call the executor to execute the code.\n",
     "- Research: We will call the scientist to read the papers and write a summary.\n",
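The four states described in this notebook cell form a simple linear state machine. A minimal sketch of that flow as a plain transition table — illustrative only, not the notebook's actual group-chat implementation:

```python
# Init -> Retrieve -> Research -> End, as described above.
# The comments note which agents act in each state.
TRANSITIONS = {
    "Init": "Retrieve",      # initializer starts the workflow
    "Retrieve": "Research",  # coder writes code, executor runs it
    "Research": "End",       # scientist reads papers, writes a summary
}

def run_workflow(start="Init"):
    """Walk the transition table until a terminal state is reached."""
    state, visited = start, []
    while state is not None:
        visited.append(state)
        state = TRANSITIONS.get(state)  # "End" has no successor
    return visited

assert run_workflow() == ["Init", "Retrieve", "Research", "End"]
```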


@@ -4,7 +4,6 @@ import json
 import os
 import sys
 from functools import partial
-from test.oai.test_utils import KEY_LOC, OAI_CONFIG_LIST
 import datasets
 import numpy as np
@@ -18,6 +17,7 @@ from autogen.code_utils import (
     implement,
 )
 from autogen.math_utils import eval_math_responses, solve_problem
+from test.oai.test_utils import KEY_LOC, OAI_CONFIG_LIST
 here = os.path.abspath(os.path.dirname(__file__))


@@ -33,7 +33,7 @@ user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stoc
 To opt out of from this default behaviour there are some options.
-### Diasable code execution entirely
+### Disable code execution entirely
 - Set `code_execution_config` to `False` for each code-execution agent. E.g.:


@@ -102,7 +102,7 @@ scientist = autogen.AssistantAgent(
 )
 ```
-In the Figure, we define a simple workflow for research with 4 states: Init, Retrieve, Reserach, and End. Within each state, we will call different agents to perform the tasks.
+In the Figure, we define a simple workflow for research with 4 states: Init, Retrieve, Research, and End. Within each state, we will call different agents to perform the tasks.
 - Init: We use the initializer to start the workflow.
 - Retrieve: We will first call the coder to write code and then call the executor to execute the code.
 - Research: We will call the scientist to read the papers and write a summary.


@@ -141,7 +141,7 @@ better with low cost. [EcoAssistant](/blog/2023/11/09/EcoAssistant) is a good ex
 - [AutoDefense](/blog/2024/03/11/AutoDefense/Defending%20LLMs%20Against%20Jailbreak%20Attacks%20with%20AutoDefense) demonstrates that using multi-agents reduces the risk of suffering from jailbreak attacks.
-There are certainly tradeoffs to make. The large design space of multi-agents offers these tradeoffs and opens up new opportunites for optimization.
+There are certainly tradeoffs to make. The large design space of multi-agents offers these tradeoffs and opens up new opportunities for optimization.
 > Over a year since the debut of Ask AT&T, the generative AI platform to which we've onboarded over 80,000 users, AT&T has been enhancing its capabilities by incorporating 'AI Agents'. These agents, powered by the Autogen framework pioneered by Microsoft (https://microsoft.github.io/autogen/blog/2023/12/01/AutoGenStudio/), are designed to tackle complicated workflows and tasks that traditional language models find challenging. To drive collaboration, AT&T is contributing back to the open-source project by introducing features that facilitate enhanced security and role-based access for various projects and data.
 >