Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library that simplifies web crawling and the extraction of useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.
## Introduction
Crawl4AI has one clear task: to make crawling and data extraction from web pages easy and efficient, especially for large language models (LLMs) and AI applications. Whether you are using it as a REST API or a Python library, Crawl4AI offers a robust and flexible solution with full asynchronous support.
## Quick Start
Here's a quick example to show you how easy it is to use Crawl4AI with its asynchronous capabilities:
```python
import asyncio
from crawl4ai import AsyncWebCrawler
async def main():
    # Create an instance of AsyncWebCrawler
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Run the crawler on a URL
        result = await crawler.arun(url="https://www.nbcnews.com/business")
        # Print the extracted content
        print(result.markdown)

# Run the async main function
asyncio.run(main())
```
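Because the crawler is fully asynchronous, you can fan out over several pages concurrently with standard `asyncio` tooling. The sketch below assumes only the `arun()` call and `result.markdown` attribute shown above; the URL list is a placeholder.

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def crawl_many(urls):
    # Reuse a single crawler instance and run all crawls concurrently
    async with AsyncWebCrawler(verbose=True) as crawler:
        results = await asyncio.gather(*(crawler.arun(url=u) for u in urls))
        for url, result in zip(urls, results):
            print(url, len(result.markdown or ""))

# Placeholder URLs for illustration
asyncio.run(crawl_many([
    "https://www.nbcnews.com/business",
    "https://www.nbcnews.com/politics",
]))
```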
## Key Features ✨
- 🆓 Completely free and open-source
- 🚀 Blazing fast performance, outperforming many paid services