Crawl4AI’s flexibility stems from two key classes:
1. **`BrowserConfig`** – Dictates **how** the browser is launched and behaves (e.g., headless or visible, proxy, user agent).
2. **`CrawlerRunConfig`** – Dictates **how** each **crawl** operates (e.g., caching, extraction, timeouts, JavaScript code to run, etc.).
In most examples, you create **one** `BrowserConfig` for the entire crawler session, then pass a **fresh** or reused `CrawlerRunConfig` whenever you call `arun()`. This tutorial shows the most commonly used parameters. If you need advanced or rarely used fields, see the [Configuration Parameters](../api/parameters.md).
---
## 1. BrowserConfig Essentials
```python
class BrowserConfig:
    def __init__(
        self,
        browser_type="chromium",
        headless=True,
        proxy_config=None,
        viewport_width=1080,
        viewport_height=600,
        verbose=True,
        use_persistent_context=False,
        user_data_dir=None,
        cookies=None,
        headers=None,
        user_agent=None,
        text_mode=False,
        light_mode=False,
        extra_args=None,
        # ... other advanced parameters omitted here
    ):
        ...
```
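For example, a minimal sketch that builds a config and hands it to the crawler (the engine, viewport, and URL choices here are just illustrative):

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig

async def demo():
    # Illustrative choices: visible Firefox with a wider viewport
    browser_conf = BrowserConfig(
        browser_type="firefox",
        headless=False,
        viewport_width=1280,
        viewport_height=720,
    )
    # The config is passed once, when the crawler is created
    async with AsyncWebCrawler(config=browser_conf) as crawler:
        result = await crawler.arun(url="https://example.com")
        print(result.markdown[:200])

asyncio.run(demo())
```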
### Key Fields to Note
1. **`browser_type`**
   - Options: `"chromium"`, `"firefox"`, or `"webkit"`.
   - Defaults to `"chromium"`.
   - If you need a different engine, specify it here.
2. **`headless`**
   - `True`: Runs the browser in headless mode (invisible browser).
   - `False`: Runs the browser in visible mode, which helps with debugging.
3. **`proxy_config`**
   - A dictionary with fields like:
     ```json
     {
       "server": "http://proxy.example.com:8080",
       "username": "...",
       "password": "..."
     }
     ```
   - Leave as `None` if a proxy is not required.
4. **`viewport_width`** & **`viewport_height`**
   - The initial window size.
   - Some sites behave differently with smaller or bigger viewports.
5. **`verbose`**
   - If `True`, prints extra logs.
   - Handy for debugging.
6. **`use_persistent_context`**
   - If `True`, uses a **persistent** browser profile, storing cookies/local storage across runs.
   - Typically also set `user_data_dir` to point to a folder.
7. **`cookies`** & **`headers`**
   - If you want to start with specific cookies or add universal HTTP headers, set them here.
   - E.g. `cookies=[{"name": "session", "value": "abc123", "domain": "example.com"}]`.
8. **`user_agent`**
   - Custom User-Agent string. If `None`, a default is used.
   - You can also set `user_agent_mode="random"` for randomization (if you want to fight bot detection).
9. **`text_mode`** & **`light_mode`**
   - `text_mode=True` disables images, possibly speeding up text-only crawls.
   - `light_mode=True` turns off certain background features for performance.
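Putting several of these fields together, here is a hedged sketch of a more fully specified config (the proxy address, cookie, and header values are placeholders):

```python
from crawl4ai import BrowserConfig

# All values below are illustrative; substitute your own.
browser_conf = BrowserConfig(
    browser_type="chromium",
    headless=True,
    proxy_config={
        "server": "http://proxy.example.com:8080",
        "username": "...",
        "password": "...",
    },
    cookies=[{"name": "session", "value": "abc123", "domain": "example.com"}],
    headers={"Accept-Language": "en-US"},
    text_mode=True,   # skip images for a faster text-only crawl
    verbose=True,
)
```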
In a typical scenario, you define **one** `BrowserConfig` for your crawler session, then create **one or more** `CrawlerRunConfig` instances depending on each call's needs. The example below is a minimal sketch; the URL and extraction schema are placeholders:
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    # 1) Browser config: headless, bigger viewport, no proxy
    browser_conf = BrowserConfig(
        headless=True,
        viewport_width=1280,
        viewport_height=720,
    )

    # 2) Run config: bypass the cache and extract JSON via CSS selectors
    #    (the schema below is an illustrative placeholder)
    schema = {
        "name": "Example Items",
        "baseSelector": "div.item",
        "fields": [{"name": "title", "selector": "h2", "type": "text"}],
    }
    run_conf = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,
        extraction_strategy=JsonCssExtractionStrategy(schema),
    )

    async with AsyncWebCrawler(config=browser_conf) as crawler:
        # 3) Execute the crawl with the per-call run config
        result = await crawler.arun(url="https://example.com", config=run_conf)
        if result.success:
            print("Extracted content:", result.extracted_content)
        else:
            print("Error:", result.error_message)

asyncio.run(main())
```

---

## Conclusion

**BrowserConfig** and **CrawlerRunConfig** give you straightforward ways to define:

- **Which** browser to launch, how it should run, and any proxy or user agent needs.
- **How** each crawl should behave—caching, timeouts, JavaScript code, extraction strategies, etc.

Use them together for **clear, maintainable** code, and when you need more specialized behavior, check out the advanced parameters in the [reference docs](../api/parameters.md). Happy crawling!