{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Group Chat with Retrieval Augmented Generation\n", "\n", "AutoGen supports conversable agents powered by LLMs, tools, or humans, performing tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", "````{=mdx}\n", ":::info Requirements\n", "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", "pip install pyautogen[retrievechat]\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", ":::\n", "````" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Set your API Endpoint\n", "\n", "The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "LLM models: ['gpt-35-turbo', 'gpt4-1106-preview', 'gpt-4o']\n" ] } ], "source": [ "import chromadb\n", "from typing_extensions import Annotated\n", "\n", "import autogen\n", "from autogen import AssistantAgent\n", "from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent\n", "\n", "config_list = autogen.config_list_from_json(\"OAI_CONFIG_LIST\")\n", "\n", "print(\"LLM models: \", [config_list[i][\"model\"] for i in range(len(config_list))])" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "````{=mdx}\n", ":::tip\n", "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", ":::\n", "````\n", "\n", "## Construct Agents" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "def termination_msg(x):\n", " return isinstance(x, dict) and \"TERMINATE\" == str(x.get(\"content\", \"\"))[-9:].upper()\n", "\n", "\n", "llm_config = {\"config_list\": config_list, \"timeout\": 60, \"temperature\": 0.8, \"seed\": 1234}\n", "\n", "boss = autogen.UserProxyAgent(\n", " name=\"Boss\",\n", " is_termination_msg=termination_msg,\n", " human_input_mode=\"NEVER\",\n", " code_execution_config=False, # we don't want to execute code in this case.\n", " default_auto_reply=\"Reply `TERMINATE` if the task is done.\",\n", " description=\"The boss who ask questions and give tasks.\",\n", ")\n", "\n", "boss_aid = RetrieveUserProxyAgent(\n", " name=\"Boss_Assistant\",\n", " is_termination_msg=termination_msg,\n", " human_input_mode=\"NEVER\",\n", " default_auto_reply=\"Reply `TERMINATE` if the task is done.\",\n", " max_consecutive_auto_reply=3,\n", " retrieve_config={\n", " \"task\": \"code\",\n", " \"docs_path\": \"https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md\",\n", " \"chunk_token_size\": 1000,\n", " \"model\": config_list[0][\"model\"],\n", " \"collection_name\": \"groupchat\",\n", " \"get_or_create\": True,\n", " },\n", " code_execution_config=False, # we don't want to execute code in this case.\n", " description=\"Assistant who has extra content retrieval power for solving difficult problems.\",\n", ")\n", "\n", "coder = AssistantAgent(\n", " name=\"Senior_Python_Engineer\",\n", " 
is_termination_msg=termination_msg,\n", " system_message=\"You are a senior python engineer, you provide python code to answer questions. Reply `TERMINATE` in the end when everything is done.\",\n", " llm_config=llm_config,\n", " description=\"Senior Python Engineer who can write code to solve problems and answer questions.\",\n", ")\n", "\n", "pm = autogen.AssistantAgent(\n", " name=\"Product_Manager\",\n", " is_termination_msg=termination_msg,\n", " system_message=\"You are a product manager. Reply `TERMINATE` in the end when everything is done.\",\n", " llm_config=llm_config,\n", " description=\"Product Manager who can design and plan the project.\",\n", ")\n", "\n", "reviewer = autogen.AssistantAgent(\n", " name=\"Code_Reviewer\",\n", " is_termination_msg=termination_msg,\n", " system_message=\"You are a code reviewer. Reply `TERMINATE` in the end when everything is done.\",\n", " llm_config=llm_config,\n", " description=\"Code Reviewer who can review the code.\",\n", ")\n", "\n", "PROBLEM = \"How to use spark for parallel training in FLAML? Give me sample code.\"\n", "\n", "\n", "def _reset_agents():\n", " boss.reset()\n", " boss_aid.reset()\n", " coder.reset()\n", " pm.reset()\n", " reviewer.reset()\n", "\n", "\n", "def rag_chat():\n", " _reset_agents()\n", " groupchat = autogen.GroupChat(\n", " agents=[boss_aid, pm, coder, reviewer], messages=[], max_round=12, speaker_selection_method=\"round_robin\"\n", " )\n", " manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)\n", "\n", " # Start chatting with boss_aid as this is the user proxy agent.\n", " boss_aid.initiate_chat(\n", " manager,\n", " message=boss_aid.message_generator,\n", " problem=PROBLEM,\n", " n_results=3,\n", " )\n", "\n", "\n", "def norag_chat():\n", " _reset_agents()\n", " groupchat = autogen.GroupChat(\n", " agents=[boss, pm, coder, reviewer],\n", " messages=[],\n", " max_round=12,\n", " speaker_selection_method=\"auto\",\n", " allow_repeat_speaker=False,\n", " )\n", " manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)\n", "\n", " # Start chatting with the boss as this is the user proxy agent.\n", " boss.initiate_chat(\n", " manager,\n", " message=PROBLEM,\n", " )\n", "\n", "\n", "def call_rag_chat():\n", " _reset_agents()\n", "\n", " # In this case, we will have multiple user proxy agents and we don't initiate the chat\n", " # with RAG user proxy agent.\n", " # In order to use RAG user proxy agent, we need to wrap RAG agents in a function and call\n", " # it from other agents.\n", " def retrieve_content(\n", " message: Annotated[\n", " str,\n", " \"Refined message which keeps the original meaning and can be used to retrieve content for code generation and question answering.\",\n", " ],\n", " n_results: Annotated[int, \"number of results\"] = 3,\n", " ) -> str:\n", " boss_aid.n_results = n_results # Set the number of results to be retrieved.\n", " _context = {\"problem\": message, \"n_results\": n_results}\n", " ret_msg = boss_aid.message_generator(boss_aid, None, _context)\n", " return ret_msg or message\n", "\n", " boss_aid.human_input_mode = \"NEVER\" # Disable human input for boss_aid since it only retrieves content.\n", "\n", " for caller in [pm, coder, reviewer]:\n", " d_retrieve_content = caller.register_for_llm(\n", " description=\"retrieve content for code generation and question answering.\", api_style=\"function\"\n", " )(retrieve_content)\n", "\n", " for executor in [boss, pm]:\n", " executor.register_for_execution()(d_retrieve_content)\n", "\n", " 
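# Note: boss_aid (the RAG proxy) is not a member of this group chat. Instead, pm, coder and\n", " # reviewer can suggest calls to the registered retrieve_content function whenever they need\n", " # documentation context, and boss/pm execute those calls.\n", " 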
groupchat = autogen.GroupChat(\n", " agents=[boss, pm, coder, reviewer],\n", " messages=[],\n", " max_round=12,\n", " speaker_selection_method=\"round_robin\",\n", " allow_repeat_speaker=False,\n", " )\n", "\n", " manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)\n", "\n", " # Start chatting with the boss as this is the user proxy agent.\n", " boss.initiate_chat(\n", " manager,\n", " message=PROBLEM,\n", " )" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Start Chat\n", "\n", "### UserProxyAgent doesn't get the correct code\n", "[FLAML](https://github.com/microsoft/FLAML) was open sourced in 2020, so ChatGPT is familiar with it. However, Spark-related APIs were added in 2022, so they were not in ChatGPT's training data. As a result, we end up with invalid code." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mBoss\u001b[0m (to chat_manager):\n", "\n", "How to use spark for parallel training in FLAML? Give me sample code.\n", "\n", "--------------------------------------------------------------------------------\n", "How to use spark for parallel training in FLAML? Give me sample code.\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Senior_Python_Engineer\n", "\u001b[0m\n", "\u001b[33mSenior_Python_Engineer\u001b[0m (to chat_manager):\n", "\n", "To use Spark for parallel training in FLAML, you need to install `pyspark` package and set up a Spark cluster. Here's some sample code for using Spark in FLAML:\n", "\n", "```python\n", "from flaml import AutoML\n", "from pyspark.sql import SparkSession\n", "\n", "# create a SparkSession\n", "spark = SparkSession.builder.appName(\"FLAML-Spark\").getOrCreate()\n", "\n", "# create a FLAML AutoML object with Spark backend\n", "automl = AutoML()\n", "\n", "# load data from Spark DataFrame\n", "data = spark.read.format(\"csv\").option(\"header\", \"true\").load(\"data.csv\")\n", "\n", "# specify the target column and task type\n", "settings = {\n", " \"time_budget\": 60, # time budget in seconds\n", " \"metric\": 'accuracy',\n", " \"task\": 'classification',\n", "}\n", "\n", "# train and validate models in parallel using Spark\n", "best_model = automl.fit(data, **settings)\n", "\n", "# print the best model and its metadata\n", "print(automl.model_name)\n", "print(automl.best_model)\n", "print(automl.best_config)\n", "\n", "# stop the SparkSession\n", "spark.stop()\n", "\n", "# terminate the code execution\n", "TERMINATE\n", "```\n", "\n", "Note that this is just a sample code, you may need to modify it to fit your specific use case.\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Code_Reviewer\n", "\u001b[0m\n", "\u001b[33mCode_Reviewer\u001b[0m (to chat_manager):\n", "\n", "\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Product_Manager\n", "\u001b[0m\n", "\u001b[33mProduct_Manager\u001b[0m (to chat_manager):\n", "\n", "Do you have any questions related to the code sample?\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Senior_Python_Engineer\n", "\u001b[0m\n", "\u001b[33mSenior_Python_Engineer\u001b[0m (to chat_manager):\n", "\n", "No, I don't have any questions related to the code 
sample.\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Product_Manager\n", "\u001b[0m\n", "\u001b[33mProduct_Manager\u001b[0m (to chat_manager):\n", "\n", "Great, let me know if you need any further assistance.\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Senior_Python_Engineer\n", "\u001b[0m\n", "\u001b[33mSenior_Python_Engineer\u001b[0m (to chat_manager):\n", "\n", "Sure, will do. Thank you!\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Product_Manager\n", "\u001b[0m\n", "\u001b[33mProduct_Manager\u001b[0m (to chat_manager):\n", "\n", "You're welcome! Have a great day ahead!\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Senior_Python_Engineer\n", "\u001b[0m\n", "\u001b[33mSenior_Python_Engineer\u001b[0m (to chat_manager):\n", "\n", "You too, have a great day ahead!\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Product_Manager\n", "\u001b[0m\n", "\u001b[33mProduct_Manager\u001b[0m (to chat_manager):\n", "\n", "Thank you! Goodbye!\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Senior_Python_Engineer\n", "\u001b[0m\n", "\u001b[33mSenior_Python_Engineer\u001b[0m (to chat_manager):\n", "\n", "Goodbye!\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Code_Reviewer\n", "\u001b[0m\n", "\u001b[33mCode_Reviewer\u001b[0m (to chat_manager):\n", "\n", "TERMINATE\n", "\n", "--------------------------------------------------------------------------------\n" ] } ], "source": [ "norag_chat()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### RetrieveUserProxyAgent get the correct code\n", "Since RetrieveUserProxyAgent can perform retrieval-augmented generation based on the given documentation file, ChatGPT can generate the correct code for us!" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Trying to create collection.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2024-08-14 06:59:09,583 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - \u001b[32mUse the existing collection `groupchat`.\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2024-08-14 06:59:09,902 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - Found 2 chunks.\u001b[0m\n", "2024-08-14 06:59:09,912 - autogen.agentchat.contrib.vectordb.chromadb - INFO - No content embedding is provided. Will use the VectorDB's embedding function to generate the content embedding.\u001b[0m\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "VectorDB returns doc_ids: [['bdfbc921', 'b2c1ec51', '0e57e70f']]\n", "\u001b[32mAdding content of doc bdfbc921 to context.\u001b[0m\n", "\u001b[32mAdding content of doc b2c1ec51 to context.\u001b[0m\n", "\u001b[33mBoss_Assistant\u001b[0m (to chat_manager):\n", "\n", "You're a retrieve augmented coding assistant. 
You answer user's questions based on your own knowledge and the\n", "context provided by the user.\n", "If you can't answer the question with or without the current context, you should reply exactly `UPDATE CONTEXT`.\n", "For code generation, you must obey the following rules:\n", "Rule 1. You MUST NOT install any packages because all the packages needed are already installed.\n", "Rule 2. You must follow the formats below to write your code:\n", "```language\n", "# your code\n", "```\n", "\n", "User's question is: How to use spark for parallel training in FLAML? Give me sample code.\n", "\n", "Context is: # Integrate - Spark\n", "\n", "FLAML has integrated Spark for distributed training. There are two main aspects of integration with Spark:\n", "\n", "- Use Spark ML estimators for AutoML.\n", "- Use Spark to run training in parallel spark jobs.\n", "\n", "## Spark ML Estimators\n", "\n", "FLAML integrates estimators based on Spark ML models. These models are trained in parallel using Spark, so we called them Spark estimators. To use these models, you first need to organize your data in the required format.\n", "\n", "### Data\n", "\n", "For Spark estimators, AutoML only consumes Spark data. FLAML provides a convenient function `to_pandas_on_spark` in the `flaml.automl.spark.utils` module to convert your data into a pandas-on-spark (`pyspark.pandas`) dataframe/series, which Spark estimators require.\n", "\n", "This utility function takes data in the form of a `pandas.Dataframe` or `pyspark.sql.Dataframe` and converts it into a pandas-on-spark dataframe. It also takes `pandas.Series` or `pyspark.sql.Dataframe` and converts it into a [pandas-on-spark](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html) series. If you pass in a `pyspark.pandas.Dataframe`, it will not make any changes.\n", "\n", "This function also accepts optional arguments `index_col` and `default_index_type`.\n", "\n", "- `index_col` is the column name to use as the index, default is None.\n", "- `default_index_type` is the default index type, default is \"distributed-sequence\". More info about default index type could be found on Spark official [documentation](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/options.html#default-index-type)\n", "\n", "Here is an example code snippet for Spark Data:\n", "\n", "```python\n", "import pandas as pd\n", "from flaml.automl.spark.utils import to_pandas_on_spark\n", "\n", "# Creating a dictionary\n", "data = {\n", " \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n", " \"Age_Years\": [20, 15, 10, 7, 25],\n", " \"Price\": [100000, 200000, 300000, 240000, 120000],\n", "}\n", "\n", "# Creating a pandas DataFrame\n", "dataframe = pd.DataFrame(data)\n", "label = \"Price\"\n", "\n", "# Convert to pandas-on-spark dataframe\n", "psdf = to_pandas_on_spark(dataframe)\n", "```\n", "\n", "To use Spark ML models you need to format your data appropriately. 
Specifically, use [`VectorAssembler`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.feature.VectorAssembler.html) to merge all feature columns into a single vector column.\n", "\n", "Here is an example of how to use it:\n", "\n", "```python\n", "from pyspark.ml.feature import VectorAssembler\n", "\n", "columns = psdf.columns\n", "feature_cols = [col for col in columns if col != label]\n", "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", "psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[\"index\", \"features\"]\n", "```\n", "\n", "Later in conducting the experiment, use your pandas-on-spark data like non-spark data and pass them using `X_train, y_train` or `dataframe, label`.\n", "\n", "### Estimators\n", "\n", "#### Model List\n", "\n", "- `lgbm_spark`: The class for fine-tuning Spark version LightGBM models, using [SynapseML](https://microsoft.github.io/SynapseML/docs/features/lightgbm/about/) API.\n", "\n", "#### Usage\n", "\n", "First, prepare your data in the required format as described in the previous section.\n", "\n", "By including the models you intend to try in the `estimators_list` argument to `flaml.automl`, FLAML will start trying configurations for these models. If your input is Spark data, FLAML will also use estimators with the `_spark` postfix by default, even if you haven't specified them.\n", "\n", "Here is an example code snippet using SparkML models in AutoML:\n", "\n", "```python\n", "import flaml\n", "\n", "# prepare your data in pandas-on-spark format as we previously mentioned\n", "\n", "automl = flaml.AutoML()\n", "settings = {\n", " \"time_budget\": 30,\n", " \"metric\": \"r2\",\n", " \"estimator_list\": [\"lgbm_spark\"], # this setting is optional\n", " \"task\": \"regression\",\n", "}\n", "\n", "automl.fit(\n", " dataframe=psdf,\n", " label=label,\n", " **settings,\n", ")\n", "```\n", "\n", "[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb)\n", "\n", "## Parallel Spark Jobs\n", "\n", "You can activate Spark as the parallel backend during parallel tuning in both [AutoML](/docs/Use-Cases/Task-Oriented-AutoML#parallel-tuning) and [Hyperparameter Tuning](/docs/Use-Cases/Tune-User-Defined-Function#parallel-tuning), by setting the `use_spark` to `true`. FLAML will dispatch your job to the distributed Spark backend using [`joblib-spark`](https://github.com/joblib/joblib-spark).\n", "\n", "Please note that you should not set `use_spark` to `true` when applying AutoML and Tuning for Spark Data. This is because only SparkML models will be used for Spark Data in AutoML and Tuning. As SparkML models run in parallel, there is no need to distribute them with `use_spark` again.\n", "\n", "All the Spark-related arguments are stated below. These arguments are available in both Hyperparameter Tuning and AutoML:\n", "\n", "- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). 
In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n", "- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n", "- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n", "\n", "An example code snippet for using parallel Spark jobs:\n", "\n", "```python\n", "import flaml\n", "\n", "automl_experiment = flaml.AutoML()\n", "automl_settings = {\n", " \"time_budget\": 30,\n", " \"metric\": \"r2\",\n", " \"task\": \"regression\",\n", " \"n_concurrent_trials\": 2,\n", " \"use_spark\": True,\n", " \"force_cancel\": True, # Activating the force_cancel option can immediately halt Spark jobs once they exceed the allocated time_budget.\n", "}\n", "\n", "automl.fit(\n", " dataframe=dataframe,\n", " label=label,\n", " **automl_settings,\n", ")\n", "```\n", "\n", "[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb)\n", "# Integrate - Spark\n", "\n", "FLAML has integrated Spark for distributed training. There are two main aspects of integration with Spark:\n", "\n", "- Use Spark ML estimators for AutoML.\n", "- Use Spark to run training in parallel spark jobs.\n", "\n", "## Spark ML Estimators\n", "\n", "FLAML integrates estimators based on Spark ML models. These models are trained in parallel using Spark, so we called them Spark estimators. To use these models, you first need to organize your data in the required format.\n", "\n", "### Data\n", "\n", "For Spark estimators, AutoML only consumes Spark data. FLAML provides a convenient function `to_pandas_on_spark` in the `flaml.automl.spark.utils` module to convert your data into a pandas-on-spark (`pyspark.pandas`) dataframe/series, which Spark estimators require.\n", "\n", "This utility function takes data in the form of a `pandas.Dataframe` or `pyspark.sql.Dataframe` and converts it into a pandas-on-spark dataframe. It also takes `pandas.Series` or `pyspark.sql.Dataframe` and converts it into a [pandas-on-spark](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html) series. If you pass in a `pyspark.pandas.Dataframe`, it will not make any changes.\n", "\n", "This function also accepts optional arguments `index_col` and `default_index_type`.\n", "\n", "- `index_col` is the column name to use as the index, default is None.\n", "- `default_index_type` is the default index type, default is \"distributed-sequence\". 
More info about default index type could be found on Spark official [documentation](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/options.html#default-index-type)\n", "\n", "Here is an example code snippet for Spark Data:\n", "\n", "```python\n", "import pandas as pd\n", "from flaml.automl.spark.utils import to_pandas_on_spark\n", "\n", "# Creating a dictionary\n", "data = {\n", " \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n", " \"Age_Years\": [20, 15, 10, 7, 25],\n", " \"Price\": [100000, 200000, 300000, 240000, 120000],\n", "}\n", "\n", "# Creating a pandas DataFrame\n", "dataframe = pd.DataFrame(data)\n", "label = \"Price\"\n", "\n", "# Convert to pandas-on-spark dataframe\n", "psdf = to_pandas_on_spark(dataframe)\n", "```\n", "\n", "To use Spark ML models you need to format your data appropriately. Specifically, use [`VectorAssembler`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.feature.VectorAssembler.html) to merge all feature columns into a single vector column.\n", "\n", "Here is an example of how to use it:\n", "\n", "```python\n", "from pyspark.ml.feature import VectorAssembler\n", "\n", "columns = psdf.columns\n", "feature_cols = [col for col in columns if col != label]\n", "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", "psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[\"index\", \"features\"]\n", "```\n", "\n", "Later in conducting the experiment, use your pandas-on-spark data like non-spark data and pass them using `X_train, y_train` or `dataframe, label`.\n", "\n", "### Estimators\n", "\n", "#### Model List\n", "\n", "- `lgbm_spark`: The class for fine-tuning Spark version LightGBM models, using [SynapseML](https://microsoft.github.io/SynapseML/docs/features/lightgbm/about/) API.\n", "\n", "#### Usage\n", "\n", "First, prepare your data in the required format as described in the previous section.\n", "\n", "By including the models you intend to try in the `estimators_list` argument to `flaml.automl`, FLAML will start trying configurations for these models. If your input is Spark data, FLAML will also use estimators with the `_spark` postfix by default, even if you haven't specified them.\n", "\n", "Here is an example code snippet using SparkML models in AutoML:\n", "\n", "```python\n", "import flaml\n", "\n", "# prepare your data in pandas-on-spark format as we previously mentioned\n", "\n", "\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Product_Manager\n", "\u001b[0m\n", "\u001b[32mAdding content of doc b2c1ec51 to context.\u001b[0m\n", "\u001b[33mBoss_Assistant\u001b[0m (to chat_manager):\n", "\n", "You're a retrieve augmented coding assistant. You answer user's questions based on your own knowledge and the\n", "context provided by the user.\n", "If you can't answer the question with or without the current context, you should reply exactly `UPDATE CONTEXT`.\n", "For code generation, you must obey the following rules:\n", "Rule 1. You MUST NOT install any packages because all the packages needed are already installed.\n", "Rule 2. You must follow the formats below to write your code:\n", "```language\n", "# your code\n", "```\n", "\n", "User's question is: How to use spark for parallel training in FLAML? Give me sample code.\n", "\n", "Context is: # Integrate - Spark\n", "\n", "FLAML has integrated Spark for distributed training. 
There are two main aspects of integration with Spark:\n", "\n", "- Use Spark ML estimators for AutoML.\n", "- Use Spark to run training in parallel spark jobs.\n", "\n", "## Spark ML Estimators\n", "\n", "FLAML integrates estimators based on Spark ML models. These models are trained in parallel using Spark, so we called them Spark estimators. To use these models, you first need to organize your data in the required format.\n", "\n", "### Data\n", "\n", "For Spark estimators, AutoML only consumes Spark data. FLAML provides a convenient function `to_pandas_on_spark` in the `flaml.automl.spark.utils` module to convert your data into a pandas-on-spark (`pyspark.pandas`) dataframe/series, which Spark estimators require.\n", "\n", "This utility function takes data in the form of a `pandas.Dataframe` or `pyspark.sql.Dataframe` and converts it into a pandas-on-spark dataframe. It also takes `pandas.Series` or `pyspark.sql.Dataframe` and converts it into a [pandas-on-spark](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html) series. If you pass in a `pyspark.pandas.Dataframe`, it will not make any changes.\n", "\n", "This function also accepts optional arguments `index_col` and `default_index_type`.\n", "\n", "- `index_col` is the column name to use as the index, default is None.\n", "- `default_index_type` is the default index type, default is \"distributed-sequence\". More info about default index type could be found on Spark official [documentation](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/options.html#default-index-type)\n", "\n", "Here is an example code snippet for Spark Data:\n", "\n", "```python\n", "import pandas as pd\n", "from flaml.automl.spark.utils import to_pandas_on_spark\n", "\n", "# Creating a dictionary\n", "data = {\n", " \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n", " \"Age_Years\": [20, 15, 10, 7, 25],\n", " \"Price\": [100000, 200000, 300000, 240000, 120000],\n", "}\n", "\n", "# Creating a pandas DataFrame\n", "dataframe = pd.DataFrame(data)\n", "label = \"Price\"\n", "\n", "# Convert to pandas-on-spark dataframe\n", "psdf = to_pandas_on_spark(dataframe)\n", "```\n", "\n", "To use Spark ML models you need to format your data appropriately. Specifically, use [`VectorAssembler`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.feature.VectorAssembler.html) to merge all feature columns into a single vector column.\n", "\n", "Here is an example of how to use it:\n", "\n", "```python\n", "from pyspark.ml.feature import VectorAssembler\n", "\n", "columns = psdf.columns\n", "feature_cols = [col for col in columns if col != label]\n", "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", "psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[\"index\", \"features\"]\n", "```\n", "\n", "Later in conducting the experiment, use your pandas-on-spark data like non-spark data and pass them using `X_train, y_train` or `dataframe, label`.\n", "\n", "### Estimators\n", "\n", "#### Model List\n", "\n", "- `lgbm_spark`: The class for fine-tuning Spark version LightGBM models, using [SynapseML](https://microsoft.github.io/SynapseML/docs/features/lightgbm/about/) API.\n", "\n", "#### Usage\n", "\n", "First, prepare your data in the required format as described in the previous section.\n", "\n", "By including the models you intend to try in the `estimators_list` argument to `flaml.automl`, FLAML will start trying configurations for these models. 
If your input is Spark data, FLAML will also use estimators with the `_spark` postfix by default, even if you haven't specified them.\n", "\n", "Here is an example code snippet using SparkML models in AutoML:\n", "\n", "```python\n", "import flaml\n", "\n", "# prepare your data in pandas-on-spark format as we previously mentioned\n", "\n", "automl = flaml.AutoML()\n", "settings = {\n", " \"time_budget\": 30,\n", " \"metric\": \"r2\",\n", " \"estimator_list\": [\"lgbm_spark\"], # this setting is optional\n", " \"task\": \"regression\",\n", "}\n", "\n", "automl.fit(\n", " dataframe=psdf,\n", " label=label,\n", " **settings,\n", ")\n", "```\n", "\n", "[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb)\n", "\n", "## Parallel Spark Jobs\n", "\n", "You can activate Spark as the parallel backend during parallel tuning in both [AutoML](/docs/Use-Cases/Task-Oriented-AutoML#parallel-tuning) and [Hyperparameter Tuning](/docs/Use-Cases/Tune-User-Defined-Function#parallel-tuning), by setting the `use_spark` to `true`. FLAML will dispatch your job to the distributed Spark backend using [`joblib-spark`](https://github.com/joblib/joblib-spark).\n", "\n", "Please note that you should not set `use_spark` to `true` when applying AutoML and Tuning for Spark Data. This is because only SparkML models will be used for Spark Data in AutoML and Tuning. As SparkML models run in parallel, there is no need to distribute them with `use_spark` again.\n", "\n", "All the Spark-related arguments are stated below. These arguments are available in both Hyperparameter Tuning and AutoML:\n", "\n", "- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n", "- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n", "- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. 
Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n", "\n", "An example code snippet for using parallel Spark jobs:\n", "\n", "```python\n", "import flaml\n", "\n", "automl_experiment = flaml.AutoML()\n", "automl_settings = {\n", " \"time_budget\": 30,\n", " \"metric\": \"r2\",\n", " \"task\": \"regression\",\n", " \"n_concurrent_trials\": 2,\n", " \"use_spark\": True,\n", " \"force_cancel\": True, # Activating the force_cancel option can immediately halt Spark jobs once they exceed the allocated time_budget.\n", "}\n", "\n", "automl.fit(\n", " dataframe=dataframe,\n", " label=label,\n", " **automl_settings,\n", ")\n", "```\n", "\n", "[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb)\n", "# Integrate - Spark\n", "\n", "FLAML has integrated Spark for distributed training. There are two main aspects of integration with Spark:\n", "\n", "- Use Spark ML estimators for AutoML.\n", "- Use Spark to run training in parallel spark jobs.\n", "\n", "## Spark ML Estimators\n", "\n", "FLAML integrates estimators based on Spark ML models. These models are trained in parallel using Spark, so we called them Spark estimators. To use these models, you first need to organize your data in the required format.\n", "\n", "### Data\n", "\n", "For Spark estimators, AutoML only consumes Spark data. FLAML provides a convenient function `to_pandas_on_spark` in the `flaml.automl.spark.utils` module to convert your data into a pandas-on-spark (`pyspark.pandas`) dataframe/series, which Spark estimators require.\n", "\n", "This utility function takes data in the form of a `pandas.Dataframe` or `pyspark.sql.Dataframe` and converts it into a pandas-on-spark dataframe. It also takes `pandas.Series` or `pyspark.sql.Dataframe` and converts it into a [pandas-on-spark](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html) series. If you pass in a `pyspark.pandas.Dataframe`, it will not make any changes.\n", "\n", "This function also accepts optional arguments `index_col` and `default_index_type`.\n", "\n", "- `index_col` is the column name to use as the index, default is None.\n", "- `default_index_type` is the default index type, default is \"distributed-sequence\". More info about default index type could be found on Spark official [documentation](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/options.html#default-index-type)\n", "\n", "Here is an example code snippet for Spark Data:\n", "\n", "```python\n", "import pandas as pd\n", "from flaml.automl.spark.utils import to_pandas_on_spark\n", "\n", "# Creating a dictionary\n", "data = {\n", " \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n", " \"Age_Years\": [20, 15, 10, 7, 25],\n", " \"Price\": [100000, 200000, 300000, 240000, 120000],\n", "}\n", "\n", "# Creating a pandas DataFrame\n", "dataframe = pd.DataFrame(data)\n", "label = \"Price\"\n", "\n", "# Convert to pandas-on-spark dataframe\n", "psdf = to_pandas_on_spark(dataframe)\n", "```\n", "\n", "To use Spark ML models you need to format your data appropriately. 
Specifically, use [`VectorAssembler`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.feature.VectorAssembler.html) to merge all feature columns into a single vector column.\n", "\n", "Here is an example of how to use it:\n", "\n", "```python\n", "from pyspark.ml.feature import VectorAssembler\n", "\n", "columns = psdf.columns\n", "feature_cols = [col for col in columns if col != label]\n", "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", "psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[\"index\", \"features\"]\n", "```\n", "\n", "Later in conducting the experiment, use your pandas-on-spark data like non-spark data and pass them using `X_train, y_train` or `dataframe, label`.\n", "\n", "### Estimators\n", "\n", "#### Model List\n", "\n", "- `lgbm_spark`: The class for fine-tuning Spark version LightGBM models, using [SynapseML](https://microsoft.github.io/SynapseML/docs/features/lightgbm/about/) API.\n", "\n", "#### Usage\n", "\n", "First, prepare your data in the required format as described in the previous section.\n", "\n", "By including the models you intend to try in the `estimators_list` argument to `flaml.automl`, FLAML will start trying configurations for these models. If your input is Spark data, FLAML will also use estimators with the `_spark` postfix by default, even if you haven't specified them.\n", "\n", "Here is an example code snippet using SparkML models in AutoML:\n", "\n", "```python\n", "import flaml\n", "\n", "# prepare your data in pandas-on-spark format as we previously mentioned\n", "\n", "\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Product_Manager\n", "\u001b[0m\n", "\u001b[33mProduct_Manager\u001b[0m (to chat_manager):\n", "\n", "```python\n", "from flaml import AutoML\n", "\n", "# Assuming psdf is the pandas-on-spark dataframe and label is the name of the target variable\n", "# Presuming that the data conversion and feature vectorization have been done as shown in the context\n", "\n", "automl = AutoML()\n", "\n", "settings = {\n", " \"time_budget\": 120, # for example, set the time budget to 2 minutes\n", " \"metric\": \"accuracy\", # assuming a classification problem, change to \"r2\" for regression\n", " \"estimator_list\": [\"lgbm_spark\"], # specify the Spark estimator\n", " \"task\": \"classification\", # assuming a classification problem, change to \"regression\" for regression\n", " \"n_concurrent_trials\": 2, # number of concurrent Spark jobs\n", " \"use_spark\": True, # enable distributed training using Spark\n", "}\n", "\n", "automl.fit(dataframe=psdf, label=label, **settings)\n", "```\n", "Please adjust the `metric`, `task`, and other settings according to your specific problem and requirements. This code snippet sets up FLAML with Spark for parallel training using the LightGBM Spark estimator, with two concurrent trials. 
Make sure your Spark environment is properly configured to run the distributed training.\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Senior_Python_Engineer\n", "\u001b[0m\n", "\u001b[33mSenior_Python_Engineer\u001b[0m (to chat_manager):\n", "\n", "```python\n", "from flaml import AutoML\n", "\n", "# Assuming psdf is the pandas-on-spark dataframe and label is the name of the target variable\n", "# Presuming that the data conversion and feature vectorization have been done as shown in the context\n", "\n", "automl = AutoML()\n", "\n", "settings = {\n", " \"time_budget\": 120, # for example, set the time budget to 2 minutes\n", " \"metric\": \"accuracy\", # assuming a classification problem, change to \"r2\" for regression\n", " \"estimator_list\": [\"lgbm_spark\"], # specify the Spark estimator\n", " \"task\": \"classification\", # assuming a classification problem, change to \"regression\" for regression\n", " \"n_concurrent_trials\": 2, # number of concurrent Spark jobs\n", " \"use_spark\": True, # enable distributed training using Spark\n", "}\n", "\n", "automl.fit(dataframe=psdf, label=label, **settings)\n", "```\n", "Please adjust the `metric`, `task`, and other settings according to your specific problem and requirements. This code snippet sets up FLAML with Spark for parallel training using the LightGBM Spark estimator, with two concurrent trials. Make sure your Spark environment is properly configured to run the distributed training.\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Code_Reviewer\n", "\u001b[0m\n", "\u001b[33mCode_Reviewer\u001b[0m (to chat_manager):\n", "\n", "The provided code snippet is mostly correct and follows the guidelines provided in the context. 
However, there is one minor issue: if we are using the pandas-on-spark DataFrame `psdf`, the `fit` method should be called with `dataframe` and `label` arguments, not `X_train` and `y_train`.\n", "\n", "This is because, with FLAML and Spark integration, the `fit` method expects the entire data as a single pandas-on-spark DataFrame along with the name of the target variable as `label`, rather than being provided with separate feature and target data as it would expect with standard pandas DataFrames.\n", "\n", "Here's the correct code snippet reflecting this:\n", "\n", "```python\n", "from flaml import AutoML\n", "\n", "# Assuming psdf is the pandas-on-spark dataframe and label is the name of the target variable\n", "# Presuming that the data conversion and feature vectorization have been done as shown in the context\n", "\n", "automl = AutoML()\n", "\n", "settings = {\n", " \"time_budget\": 120, # for example, set the time budget to 2 minutes\n", " \"metric\": \"accuracy\", # assuming a classification problem, change to \"r2\" for regression\n", " \"estimator_list\": [\"lgbm_spark\"], # specify the Spark estimator\n", " \"task\": \"classification\", # assuming a classification problem, change to \"regression\" for regression\n", " \"n_concurrent_trials\": 2, # number of concurrent Spark jobs\n", " \"use_spark\": True, # enable distributed training using Spark\n", "}\n", "\n", "# Use dataframe and label parameters to fit the model\n", "automl.fit(dataframe=psdf, label=label, **settings)\n", "```\n", "\n", "Please ensure that your Spark cluster is correctly configured to support distributed training, and adjust the `metric`, `task`, and other settings as needed for your specific use case.\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Boss_Assistant\n", "\u001b[0m\n", "\u001b[33mBoss_Assistant\u001b[0m (to chat_manager):\n", "\n", "Reply `TERMINATE` if the task is done.\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Product_Manager\n", "\u001b[0m\n", "\u001b[33mProduct_Manager\u001b[0m (to chat_manager):\n", "\n", "TERMINATE\n", "\n", "--------------------------------------------------------------------------------\n" ] } ], "source": [ "rag_chat()\n", "# type exit to terminate the chat" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Call RetrieveUserProxyAgent while initiating chat with another user proxy agent\n", "Sometimes you may want to use RetrieveUserProxyAgent in a group chat without initiating the chat with it. In such scenarios, wrap the RAG agent in a function so that it can be called from the other agents." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mBoss\u001b[0m (to chat_manager):\n", "\n", "How to use spark for parallel training in FLAML? Give me sample code.\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Product_Manager\n", "\u001b[0m\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mProduct_Manager\u001b[0m (to chat_manager):\n", "\n", "\u001b[32m***** Suggested function call: retrieve_content *****\u001b[0m\n", "Arguments: \n", "{\"message\":\"How to use spark for parallel training in FLAML? 
Give me sample code.\",\"n_results\":3}\n", "\u001b[32m*****************************************************\u001b[0m\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Boss\n", "\u001b[0m\n", "\u001b[35m\n", ">>>>>>>> EXECUTING FUNCTION retrieve_content...\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2024-08-14 07:09:05,717 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - \u001b[32mUse the existing collection `groupchat`.\u001b[0m\n", "2024-08-14 07:09:05,845 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - Found 2 chunks.\u001b[0m\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Trying to create collection.\n", "VectorDB returns doc_ids: [['bdfbc921', 'b2c1ec51', '0e57e70f']]\n", "\u001b[32mAdding content of doc bdfbc921 to context.\u001b[0m\n", "\u001b[32mAdding content of doc b2c1ec51 to context.\u001b[0m\n", "\u001b[32mAdding content of doc 0e57e70f to context.\u001b[0m\n", "\u001b[33mBoss\u001b[0m (to chat_manager):\n", "\n", "\u001b[32m***** Response from calling function (retrieve_content) *****\u001b[0m\n", "You're a retrieve augmented coding assistant. You answer user's questions based on your own knowledge and the\n", "context provided by the user.\n", "If you can't answer the question with or without the current context, you should reply exactly `UPDATE CONTEXT`.\n", "For code generation, you must obey the following rules:\n", "Rule 1. You MUST NOT install any packages because all the packages needed are already installed.\n", "Rule 2. You must follow the formats below to write your code:\n", "```language\n", "# your code\n", "```\n", "\n", "User's question is: How to use spark for parallel training in FLAML? Give me sample code.\n", "\n", "Context is: # Integrate - Spark\n", "\n", "FLAML has integrated Spark for distributed training. There are two main aspects of integration with Spark:\n", "\n", "- Use Spark ML estimators for AutoML.\n", "- Use Spark to run training in parallel spark jobs.\n", "\n", "## Spark ML Estimators\n", "\n", "FLAML integrates estimators based on Spark ML models. These models are trained in parallel using Spark, so we called them Spark estimators. To use these models, you first need to organize your data in the required format.\n", "\n", "### Data\n", "\n", "For Spark estimators, AutoML only consumes Spark data. FLAML provides a convenient function `to_pandas_on_spark` in the `flaml.automl.spark.utils` module to convert your data into a pandas-on-spark (`pyspark.pandas`) dataframe/series, which Spark estimators require.\n", "\n", "This utility function takes data in the form of a `pandas.Dataframe` or `pyspark.sql.Dataframe` and converts it into a pandas-on-spark dataframe. It also takes `pandas.Series` or `pyspark.sql.Dataframe` and converts it into a [pandas-on-spark](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html) series. If you pass in a `pyspark.pandas.Dataframe`, it will not make any changes.\n", "\n", "This function also accepts optional arguments `index_col` and `default_index_type`.\n", "\n", "- `index_col` is the column name to use as the index, default is None.\n", "- `default_index_type` is the default index type, default is \"distributed-sequence\". 
More info about default index type could be found on Spark official [documentation](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/options.html#default-index-type)\n", "\n", "Here is an example code snippet for Spark Data:\n", "\n", "```python\n", "import pandas as pd\n", "from flaml.automl.spark.utils import to_pandas_on_spark\n", "\n", "# Creating a dictionary\n", "data = {\n", " \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n", " \"Age_Years\": [20, 15, 10, 7, 25],\n", " \"Price\": [100000, 200000, 300000, 240000, 120000],\n", "}\n", "\n", "# Creating a pandas DataFrame\n", "dataframe = pd.DataFrame(data)\n", "label = \"Price\"\n", "\n", "# Convert to pandas-on-spark dataframe\n", "psdf = to_pandas_on_spark(dataframe)\n", "```\n", "\n", "To use Spark ML models you need to format your data appropriately. Specifically, use [`VectorAssembler`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.feature.VectorAssembler.html) to merge all feature columns into a single vector column.\n", "\n", "Here is an example of how to use it:\n", "\n", "```python\n", "from pyspark.ml.feature import VectorAssembler\n", "\n", "columns = psdf.columns\n", "feature_cols = [col for col in columns if col != label]\n", "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", "psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[\"index\", \"features\"]\n", "```\n", "\n", "Later in conducting the experiment, use your pandas-on-spark data like non-spark data and pass them using `X_train, y_train` or `dataframe, label`.\n", "\n", "### Estimators\n", "\n", "#### Model List\n", "\n", "- `lgbm_spark`: The class for fine-tuning Spark version LightGBM models, using [SynapseML](https://microsoft.github.io/SynapseML/docs/features/lightgbm/about/) API.\n", "\n", "#### Usage\n", "\n", "First, prepare your data in the required format as described in the previous section.\n", "\n", "By including the models you intend to try in the `estimators_list` argument to `flaml.automl`, FLAML will start trying configurations for these models. If your input is Spark data, FLAML will also use estimators with the `_spark` postfix by default, even if you haven't specified them.\n", "\n", "Here is an example code snippet using SparkML models in AutoML:\n", "\n", "```python\n", "import flaml\n", "\n", "# prepare your data in pandas-on-spark format as we previously mentioned\n", "\n", "automl = flaml.AutoML()\n", "settings = {\n", " \"time_budget\": 30,\n", " \"metric\": \"r2\",\n", " \"estimator_list\": [\"lgbm_spark\"], # this setting is optional\n", " \"task\": \"regression\",\n", "}\n", "\n", "automl.fit(\n", " dataframe=psdf,\n", " label=label,\n", " **settings,\n", ")\n", "```\n", "\n", "[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb)\n", "\n", "## Parallel Spark Jobs\n", "\n", "You can activate Spark as the parallel backend during parallel tuning in both [AutoML](/docs/Use-Cases/Task-Oriented-AutoML#parallel-tuning) and [Hyperparameter Tuning](/docs/Use-Cases/Tune-User-Defined-Function#parallel-tuning), by setting the `use_spark` to `true`. 
FLAML will dispatch your job to the distributed Spark backend using [`joblib-spark`](https://github.com/joblib/joblib-spark).\n", "\n", "Please note that you should not set `use_spark` to `true` when applying AutoML and Tuning for Spark Data. This is because only SparkML models will be used for Spark Data in AutoML and Tuning. As SparkML models run in parallel, there is no need to distribute them with `use_spark` again.\n", "\n", "All the Spark-related arguments are stated below. These arguments are available in both Hyperparameter Tuning and AutoML:\n", "\n", "- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n", "- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n", "- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n", "\n", "An example code snippet for using parallel Spark jobs:\n", "\n", "```python\n", "import flaml\n", "\n", "automl_experiment = flaml.AutoML()\n", "automl_settings = {\n", " \"time_budget\": 30,\n", " \"metric\": \"r2\",\n", " \"task\": \"regression\",\n", " \"n_concurrent_trials\": 2,\n", " \"use_spark\": True,\n", " \"force_cancel\": True, # Activating the force_cancel option can immediately halt Spark jobs once they exceed the allocated time_budget.\n", "}\n", "\n", "automl.fit(\n", " dataframe=dataframe,\n", " label=label,\n", " **automl_settings,\n", ")\n", "```\n", "\n", "[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb)\n", "# Integrate - Spark\n", "\n", "FLAML has integrated Spark for distributed training. There are two main aspects of integration with Spark:\n", "\n", "- Use Spark ML estimators for AutoML.\n", "- Use Spark to run training in parallel spark jobs.\n", "\n", "## Spark ML Estimators\n", "\n", "FLAML integrates estimators based on Spark ML models. These models are trained in parallel using Spark, so we called them Spark estimators. To use these models, you first need to organize your data in the required format.\n", "\n", "### Data\n", "\n", "For Spark estimators, AutoML only consumes Spark data. FLAML provides a convenient function `to_pandas_on_spark` in the `flaml.automl.spark.utils` module to convert your data into a pandas-on-spark (`pyspark.pandas`) dataframe/series, which Spark estimators require.\n", "\n", "This utility function takes data in the form of a `pandas.Dataframe` or `pyspark.sql.Dataframe` and converts it into a pandas-on-spark dataframe. 
It also takes `pandas.Series` or `pyspark.sql.Dataframe` and converts it into a [pandas-on-spark](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html) series. If you pass in a `pyspark.pandas.Dataframe`, it will not make any changes.\n", "\n", "This function also accepts optional arguments `index_col` and `default_index_type`.\n", "\n", "- `index_col` is the column name to use as the index, default is None.\n", "- `default_index_type` is the default index type, default is \"distributed-sequence\". More info about default index type could be found on Spark official [documentation](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/options.html#default-index-type)\n", "\n", "Here is an example code snippet for Spark Data:\n", "\n", "```python\n", "import pandas as pd\n", "from flaml.automl.spark.utils import to_pandas_on_spark\n", "\n", "# Creating a dictionary\n", "data = {\n", " \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n", " \"Age_Years\": [20, 15, 10, 7, 25],\n", " \"Price\": [100000, 200000, 300000, 240000, 120000],\n", "}\n", "\n", "# Creating a pandas DataFrame\n", "dataframe = pd.DataFrame(data)\n", "label = \"Price\"\n", "\n", "# Convert to pandas-on-spark dataframe\n", "psdf = to_pandas_on_spark(dataframe)\n", "```\n", "\n", "To use Spark ML models you need to format your data appropriately. Specifically, use [`VectorAssembler`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.feature.VectorAssembler.html) to merge all feature columns into a single vector column.\n", "\n", "Here is an example of how to use it:\n", "\n", "```python\n", "from pyspark.ml.feature import VectorAssembler\n", "\n", "columns = psdf.columns\n", "feature_cols = [col for col in columns if col != label]\n", "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", "psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[\"index\", \"features\"]\n", "```\n", "\n", "Later in conducting the experiment, use your pandas-on-spark data like non-spark data and pass them using `X_train, y_train` or `dataframe, label`.\n", "\n", "### Estimators\n", "\n", "#### Model List\n", "\n", "- `lgbm_spark`: The class for fine-tuning Spark version LightGBM models, using [SynapseML](https://microsoft.github.io/SynapseML/docs/features/lightgbm/about/) API.\n", "\n", "#### Usage\n", "\n", "First, prepare your data in the required format as described in the previous section.\n", "\n", "By including the models you intend to try in the `estimators_list` argument to `flaml.automl`, FLAML will start trying configurations for these models. 
If your input is Spark data, FLAML will also use estimators with the `_spark` postfix by default, even if you haven't specified them.\n", "\n", "Here is an example code snippet using SparkML models in AutoML:\n", "\n", "```python\n", "import flaml\n", "\n", "# prepare your data in pandas-on-spark format as we previously mentioned\n", "automl = flaml.AutoML()\n", "settings = {\n", " \"time_budget\": 30,\n", " \"metric\": \"r2\",\n", " \"estimator_list\": [\"lgbm_spark\"], # this setting is optional\n", " \"task\": \"regression\",\n", "}\n", "\n", "automl.fit(\n", " dataframe=psdf,\n", " label=label,\n", " **settings,\n", ")\n", "```\n", "\n", "[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb)\n", "\n", "## Parallel Spark Jobs\n", "\n", "You can activate Spark as the parallel backend during parallel tuning in both [AutoML](/docs/Use-Cases/Task-Oriented-AutoML#parallel-tuning) and [Hyperparameter Tuning](/docs/Use-Cases/Tune-User-Defined-Function#parallel-tuning), by setting `use_spark` to `true`. FLAML will dispatch your job to the distributed Spark backend using [`joblib-spark`](https://github.com/joblib/joblib-spark).\n", "\n", "Please note that you should not set `use_spark` to `true` when applying AutoML and Tuning for Spark Data. This is because only SparkML models will be used for Spark Data in AutoML and Tuning. As SparkML models run in parallel, there is no need to distribute them with `use_spark` again.\n", "\n", "All the Spark-related arguments are listed below. These arguments are available in both Hyperparameter Tuning and AutoML:\n", "\n", "- `use_spark`: boolean, default=False | Whether to use Spark to run the training in parallel Spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when `use_spark` is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n", "- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performs parallel tuning.\n", "- `force_cancel`: boolean, default=False | Whether to forcibly cancel Spark jobs if the search time exceeds the time budget. 
Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n", "\n", "An example code snippet for using parallel Spark jobs:\n", "\n", "```python\n", "import flaml\n", "\n", "automl_experiment = flaml.AutoML()\n", "automl_settings = {\n", " \"time_budget\": 30,\n", " \"metric\": \"r2\",\n", " \"task\": \"regression\",\n", " \"n_concurrent_trials\": 2,\n", " \"use_spark\": True,\n", " \"force_cancel\": True, # Activating the force_cancel option can immediately halt Spark jobs once they exceed the allocated time_budget.\n", "}\n", "\n", "automl_experiment.fit(\n", " dataframe=dataframe,\n", " label=label,\n", " **automl_settings,\n", ")\n", "```\n", "\n", "[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb)\n", "\n", "\n", "\u001b[32m*************************************************************\u001b[0m\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Product_Manager\n", "\u001b[0m\n", "\u001b[33mProduct_Manager\u001b[0m (to chat_manager):\n", "\n", "To use Spark for parallel training in FLAML, follow these steps:\n", "\n", "## Steps:\n", "\n", "1. **Prepare Your Data:**\n", " Convert your data into a pandas-on-spark DataFrame using the `to_pandas_on_spark` function.\n", "\n", "2. **Configure Spark Settings:**\n", " Set the `use_spark` parameter to `True` to enable Spark for parallel training jobs.\n", "\n", "3. **Run the AutoML Experiment:**\n", " Configure the AutoML settings and run the experiment.\n", "\n", "## Sample Code:\n", "\n", "```python\n", "import pandas as pd\n", "import flaml\n", "from flaml.automl.spark.utils import to_pandas_on_spark\n", "\n", "# Prepare your data\n", "data = {\n", " \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n", " \"Age_Years\": [20, 15, 10, 7, 25],\n", " \"Price\": [100000, 200000, 300000, 240000, 120000],\n", "}\n", "\n", "dataframe = pd.DataFrame(data)\n", "label = \"Price\"\n", "\n", "# Convert to pandas-on-spark dataframe\n", "psdf = to_pandas_on_spark(dataframe)\n", "\n", "# Use VectorAssembler to format data for Spark ML\n", "from pyspark.ml.feature import VectorAssembler\n", "\n", "columns = psdf.columns\n", "feature_cols = [col for col in columns if col != label]\n", "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", "psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[\"index\", \"features\"]\n", "\n", "# Configure AutoML settings\n", "automl = flaml.AutoML()\n", "automl_settings = {\n", " \"time_budget\": 30,\n", " \"metric\": \"r2\",\n", " \"task\": \"regression\",\n", " \"n_concurrent_trials\": 2,\n", " \"use_spark\": True,\n", " \"force_cancel\": True, # Optionally force cancel jobs that exceed time budget\n", "}\n", "\n", "# Run the AutoML experiment\n", "automl.fit(\n", " dataframe=psdf,\n", " label=label,\n", " **automl_settings,\n", ")\n", "```\n", "\n", "This code demonstrates how to prepare your data, configure Spark settings for parallel training, and run the AutoML experiment using FLAML with Spark.\n", "\n", "You can find more information and examples in the [FLAML documentation](https://github.com/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb).\n", "\n", "TERMINATE\n", "\n", "--------------------------------------------------------------------------------\n", "\u001b[32m\n", "Next speaker: Senior_Python_Engineer\n", "\u001b[0m\n" ] } ], 
"source": [ "call_rag_chat()" ] } ], "metadata": { "front_matter": { "description": "Implement and manage a multi-agent chat system using AutoGen, where AI assistants retrieve information, generate code, and interact collaboratively to solve complex tasks, especially in areas not covered by their training data.", "tags": [ "group chat", "orchestration", "RAG" ] }, "kernelspec": { "display_name": "flaml", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.4" } }, "nbformat": 4, "nbformat_minor": 2 }