Download the [Airlines dataset](https://www.openml.org/d/1169) from OpenML. The task is to predict whether a given flight will be delayed, given the information of the scheduled departure.
[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/integrate_azureml.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/integrate_azureml.ipynb)
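For example, the data can be fetched with FLAML's `load_openml_dataset` helper. The snippet below is a minimal sketch; the `data_dir` location is arbitrary, and the module path of the helper may differ across FLAML versions.
```python
from flaml.data import load_openml_dataset

# Download the Airlines dataset (OpenML id 1169) and split it into train/test sets
X_train, X_test, y_train, y_test = load_openml_dataset(dataset_id=1169, data_dir="./")
```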
### Use ray to distribute across a cluster
When you have a compute cluster in AzureML, you can distribute `flaml.AutoML` or `flaml.tune` with ray.
#### Build a ray environment in AzureML
Create a docker file such as [.Docker/Dockerfile-cpu](https://github.com/microsoft/FLAML/blob/main/test/.Docker/Dockerfile-cpu). Make sure `RUN pip install flaml[blendsearch,ray]` is included in the docker file.
Then build an AzureML environment in the workspace `ws`.
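A minimal sketch of building such an environment from the Dockerfile above is shown below; it assumes the workspace object `ws` already exists, and the environment name `aml-ray-cpu` and Dockerfile path are illustrative choices.
```python
import time
from azureml.core import Environment

ray_environment_name = "aml-ray-cpu"  # illustrative name
ray_environment_dockerfile_path = "./.Docker/Dockerfile-cpu"

# Build a CPU image for ray from the Dockerfile and register it in the workspace
ray_cpu_env = Environment.from_dockerfile(
    name=ray_environment_name, dockerfile=ray_environment_dockerfile_path
)
ray_cpu_env.register(workspace=ws)
ray_cpu_build_details = ray_cpu_env.build(workspace=ws)

# Poll until the image build finishes
while ray_cpu_build_details.status not in ["Succeeded", "Failed"]:
    print(f"Environment build status: {ray_cpu_build_details.status}")
    time.sleep(10)
```
You also need a compute cluster to run the job on. The following sketch provisions an `AmlCompute` cluster named "cpucluster", reusing it if it already exists; the VM size and node counts are placeholders to adjust for your workload.
```python
from azureml.core.compute import AmlCompute, ComputeTarget

compute_target_name = "cpucluster"

if compute_target_name in ws.compute_targets:
    # Reuse the existing cluster
    compute_target = ws.compute_targets[compute_target_name]
else:
    # Provision a new CPU cluster (placeholder VM size and node counts)
    provisioning_config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_D2_V2", min_nodes=0, max_nodes=2
    )
    compute_target = ComputeTarget.create(ws, compute_target_name, provisioning_config)
    compute_target.wait_for_completion(
        show_output=True, min_node_count=None, timeout_in_minutes=20
    )
```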
Once the cluster is provisioned and assigned to `compute_target`, you can inspect its status:
```python
# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
```
If the compute target "cpucluster" already exists, it will not be recreated.
#### Run distributed AutoML job
Assume you have an AutoML script like [ray/distribute_automl.py](https://github.com/microsoft/FLAML/blob/main/test/ray/distribute_automl.py). It uses `ray.init(address="auto")` to initialize the ray cluster, and `n_concurrent_trials=k` to inform `AutoML.fit()` to perform k trials in parallel.
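A minimal sketch of such a script is shown below; the dataset loading and time budget are illustrative, and the helper module path may differ across FLAML versions.
```python
# Sketch of a distributed AutoML script, e.g. distribute_automl.py
import ray
from flaml import AutoML
from flaml.data import load_openml_dataset

# Connect to the ray cluster that AzureML started on the compute nodes
ray.init(address="auto")

# Airlines dataset (OpenML id 1169): predict whether a flight will be delayed
X_train, _, y_train, _ = load_openml_dataset(dataset_id=1169, data_dir="./")

automl = AutoML()
automl.fit(
    X_train=X_train,
    y_train=y_train,
    task="classification",
    time_budget=300,        # seconds; illustrative
    n_concurrent_trials=2,  # run 2 trials in parallel on the ray cluster
)
```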
Submit an AzureML job as follows:
```python
from azureml.core import Workspace, Experiment, ScriptRunConfig, Environment
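
# --- The rest of this block is a sketch based on the objects created above; the
# --- environment name "aml-ray-cpu", node count, script folder, and environment
# --- variables are assumptions that may need adjusting for your setup.
ws = Workspace.from_config()
ray_environment = Environment.get(workspace=ws, name="aml-ray-cpu")

# Run the automl script from the local "ray/" folder on the cluster
command = ["python distribute_automl.py"]
config = ScriptRunConfig(
    source_directory="ray/",
    command=command,
    compute_target=compute_target,
    environment=ray_environment,
)

# Use both nodes of the cluster and ask AzureML to start ray on each node
config.run_config.node_count = 2
config.run_config.environment_variables["_AZUREML_CR_START_RAY"] = "true"
config.run_config.environment_variables["AZUREML_COMPUTE_USE_COMMON_RUNTIME"] = "true"

exp = Experiment(ws, "distribute-automl")
run = exp.submit(config)
print(run.get_portal_url())  # link to the run in ml.azure.com
run.wait_for_completion(show_output=True)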
```
This configuration tells AzureML to start ray on each node of the cluster.
#### Run distributed tune job
Prepare a script like [ray/distribute_tune.py](https://github.com/microsoft/FLAML/blob/main/test/ray/distribute_tune.py). Replace the command in the above example with:
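```python
# Point the same ScriptRunConfig at the tune script instead of the automl script
command = ["python distribute_tune.py"]
```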