feat(crawler): add network request and console message capturing
Implement comprehensive network request and console message capturing functionality:

- Add capture_network_requests and capture_console_messages config parameters
- Add network_requests and console_messages fields to models
- Implement Playwright event listeners to capture requests, responses, and console output
- Create detailed documentation and examples
- Add comprehensive tests

This feature enables deep visibility into web page activity for debugging, security analysis, performance profiling, and API discovery in web applications.
parent a2061bf31e
commit 66ac07b4f3
61
JOURNAL.md
@ -46,4 +46,63 @@ The MHTML capture feature allows users to capture complete web pages including a

**Future Enhancements to Consider:**

- Add option to save MHTML to file
- Support for filtering what resources get included in MHTML
- Add support for specifying MHTML capture options

## [2025-04-10] Added Network Request and Console Message Capturing

**Feature:** Comprehensive capturing of network requests/responses and browser console messages during crawling

**Changes Made:**

1. Added `capture_network_requests: bool = False` and `capture_console_messages: bool = False` parameters to the `CrawlerRunConfig` class
2. Added `network_requests: Optional[List[Dict[str, Any]]] = None` and `console_messages: Optional[List[Dict[str, Any]]] = None` fields to both the `AsyncCrawlResponse` and `CrawlResult` models
3. Implemented event listeners in `AsyncPlaywrightCrawlerStrategy._crawl_web()` to capture browser network events and console messages
4. Added proper event listener cleanup in the `finally` block to prevent resource leaks
5. Modified the crawler flow to pass captured data from `AsyncCrawlResponse` to `CrawlResult`

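A minimal usage sketch of the new flags, based on the documentation and example added in this commit:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    # Both capture flags default to False; enable them per run.
    config = CrawlerRunConfig(
        capture_network_requests=True,
        capture_console_messages=True,
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=config)
        # Captured data is surfaced on CrawlResult as plain lists of dicts.
        print(len(result.network_requests or []), "network events")
        print(len(result.console_messages or []), "console messages")

if __name__ == "__main__":
    asyncio.run(main())
```
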
**Implementation Details:**

- Network capture uses Playwright event listeners (`request`, `response`, and `requestfailed`) to record all network activity
- Console capture uses Playwright event listeners (`console` and `pageerror`) to record console messages and errors
- Each network event includes metadata such as the URL, headers, status, and timing information
- Each console message includes type, text content, and source location when available
- All captured events include timestamps for chronological analysis
- Error handling ensures that even failed capture attempts won't crash the main crawling process

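The listener wiring and cleanup pattern is sketched below in isolation, using Playwright directly rather than the crawler internals (a simplified illustration of the same idea, not the exact code in `_crawl_web()`):

```python
import asyncio
import time
from playwright.async_api import async_playwright

async def capture_demo(url: str):
    captured_requests, captured_console = [], []

    def on_request(request):
        captured_requests.append({"event_type": "request", "url": request.url,
                                  "method": request.method, "timestamp": time.time()})

    def on_console(msg):
        # `type` and `text` are properties in the Python Playwright API.
        captured_console.append({"type": msg.type, "text": msg.text,
                                 "timestamp": time.time()})

    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        page.on("request", on_request)
        page.on("console", on_console)
        try:
            await page.goto(url)
        finally:
            # Detach listeners before closing so reused pages don't leak handlers.
            page.remove_listener("request", on_request)
            page.remove_listener("console", on_console)
            await browser.close()
    return captured_requests, captured_console

if __name__ == "__main__":
    asyncio.run(capture_demo("https://example.com"))
```
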
**Files Modified:**

- `crawl4ai/models.py`: Added new fields to AsyncCrawlResponse and CrawlResult
- `crawl4ai/async_configs.py`: Added new configuration parameters to CrawlerRunConfig
- `crawl4ai/async_crawler_strategy.py`: Implemented capture logic using event listeners
- `crawl4ai/async_webcrawler.py`: Added data transfer from AsyncCrawlResponse to CrawlResult

**Documentation:**

- Created detailed documentation in `docs/md_v2/advanced/network-console-capture.md`
- Added feature to site navigation in `mkdocs.yml`
- Updated CrawlResult documentation in `docs/md_v2/api/crawl-result.md`
- Created comprehensive example in `docs/examples/network_console_capture_example.py`

**Testing:**

- Created `tests/general/test_network_console_capture.py` with tests for:
  - Verifying capture is disabled by default
  - Testing network request capturing
  - Testing console message capturing
  - Ensuring both capture types can be enabled simultaneously
  - Checking correct content is captured in expected formats

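A hedged sketch of what one of these checks might look like (the test file itself is not shown in this diff, so the test names and bodies below are illustrative only; assumes `pytest` with `pytest-asyncio`):

```python
import pytest
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

@pytest.mark.asyncio
async def test_capture_disabled_by_default():
    # Hypothetical check: with a default config, neither capture field is populated.
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=CrawlerRunConfig())
        assert result.network_requests is None
        assert result.console_messages is None

@pytest.mark.asyncio
async def test_network_capture_enabled():
    # Hypothetical check: enabling the flag yields a list of event dicts.
    config = CrawlerRunConfig(capture_network_requests=True)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=config)
        assert result.network_requests
        assert all("event_type" in event for event in result.network_requests)
```
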
**Challenges:**

- Initial implementation had synchronous/asynchronous mismatches in event handlers
- Needed to fix the use of property access vs. method calls in handlers (Playwright's console message `type` and `text` are properties in Python, not methods)
- Required careful cleanup of event listeners to prevent memory leaks

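For reference, the property-vs-method issue amounts to the difference below (the final handlers in this diff use property access; the call-style version is what the initial attempt looked like):

```python
def handle_console_capture(msg):
    # Correct for the Python Playwright API: `type` and `text` are properties.
    return {"type": msg.type, "text": msg.text}

    # The initial (broken) attempt used call syntax, e.g. msg.type() / msg.text(),
    # which fails because these properties return strings, not callables.
```
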
**Why This Feature:**

The network and console capture feature provides deep visibility into web page activity, enabling:

1. Debugging complex web applications by seeing all network requests and errors
2. Security analysis to detect unexpected third-party requests and data flows
3. Performance profiling to identify slow-loading resources
4. API discovery in single-page applications
5. Comprehensive analysis of web application behavior

**Future Enhancements to Consider:**

- Option to filter captured events by type, domain, or content
- Support for capturing response bodies (with size limits)
- Aggregate statistics calculation for performance metrics
- Integration with visualization tools for network waterfall analysis
- Exporting captures in HAR format for use with external tools (see the sketch below)

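A minimal sketch of the HAR-export idea, assuming the captured event dicts described above (the fields shown are a simplified subset, not a full implementation of the HAR spec):

```python
import json
from datetime import datetime, timezone

def events_to_har(network_events):
    """Convert captured request/response event dicts into a minimal HAR-like structure."""
    responses = {e["url"]: e for e in network_events if e.get("event_type") == "response"}
    entries = []
    for e in network_events:
        if e.get("event_type") != "request":
            continue
        resp = responses.get(e["url"], {})
        entries.append({
            "startedDateTime": datetime.fromtimestamp(e["timestamp"], tz=timezone.utc).isoformat(),
            "request": {
                "method": e.get("method", "GET"),
                "url": e["url"],
                "headers": [{"name": k, "value": v} for k, v in e.get("headers", {}).items()],
            },
            "response": {
                "status": resp.get("status", 0),
                "statusText": resp.get("status_text", ""),
                "headers": [{"name": k, "value": v} for k, v in resp.get("headers", {}).items()],
            },
        })
    return {"log": {"version": "1.2", "creator": {"name": "crawl4ai-har-sketch", "version": "0"}, "entries": entries}}

# Usage with a CrawlResult that had capture_network_requests=True:
# with open("capture.har", "w") as f:
#     json.dump(events_to_har(result.network_requests or []), f, indent=2)
```
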
@ -787,6 +787,9 @@ class CrawlerRunConfig():
|
||||
# Debugging and Logging Parameters
|
||||
verbose: bool = True,
|
||||
log_console: bool = False,
|
||||
# Network and Console Capturing Parameters
|
||||
capture_network_requests: bool = False,
|
||||
capture_console_messages: bool = False,
|
||||
# Connection Parameters
|
||||
method: str = "GET",
|
||||
stream: bool = False,
|
||||
@ -881,6 +884,10 @@ class CrawlerRunConfig():
|
||||
# Debugging and Logging Parameters
|
||||
self.verbose = verbose
|
||||
self.log_console = log_console
|
||||
|
||||
# Network and Console Capturing Parameters
|
||||
self.capture_network_requests = capture_network_requests
|
||||
self.capture_console_messages = capture_console_messages
|
||||
|
||||
# Connection Parameters
|
||||
self.stream = stream
|
||||
@ -1017,6 +1024,9 @@ class CrawlerRunConfig():
|
||||
# Debugging and Logging Parameters
|
||||
verbose=kwargs.get("verbose", True),
|
||||
log_console=kwargs.get("log_console", False),
|
||||
# Network and Console Capturing Parameters
|
||||
capture_network_requests=kwargs.get("capture_network_requests", False),
|
||||
capture_console_messages=kwargs.get("capture_console_messages", False),
|
||||
# Connection Parameters
|
||||
method=kwargs.get("method", "GET"),
|
||||
stream=kwargs.get("stream", False),
|
||||
@ -1107,6 +1117,8 @@ class CrawlerRunConfig():
|
||||
"exclude_internal_links": self.exclude_internal_links,
|
||||
"verbose": self.verbose,
|
||||
"log_console": self.log_console,
|
||||
"capture_network_requests": self.capture_network_requests,
|
||||
"capture_console_messages": self.capture_console_messages,
|
||||
"method": self.method,
|
||||
"stream": self.stream,
|
||||
"check_robots_txt": self.check_robots_txt,
|
||||
|
@ -478,6 +478,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
|
||||
) -> AsyncCrawlResponse:
|
||||
"""
|
||||
Internal method to crawl web URLs with the specified configuration.
|
||||
Includes optional network and console capturing.
|
||||
|
||||
Args:
|
||||
url (str): The web URL to crawl
|
||||
@ -494,6 +495,10 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
|
||||
|
||||
# Reset downloaded files list for new crawl
|
||||
self._downloaded_files = []
|
||||
|
||||
# Initialize capture lists
|
||||
captured_requests = []
|
||||
captured_console = []
|
||||
|
||||
# Handle user agent with magic mode
|
||||
user_agent_to_override = config.user_agent
|
||||
@ -521,9 +526,144 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
|
||||
# Call hook after page creation
|
||||
await self.execute_hook("on_page_context_created", page, context=context, config=config)
|
||||
|
||||
# Network Request Capturing
|
||||
if config.capture_network_requests:
|
||||
async def handle_request_capture(request):
|
||||
try:
|
||||
post_data_str = None
|
||||
try:
|
||||
# Be cautious with large post data
|
||||
post_data = request.post_data_buffer
|
||||
if post_data:
|
||||
# Attempt to decode, fallback to base64 or size indication
|
||||
try:
|
||||
post_data_str = post_data.decode('utf-8', errors='replace')
|
||||
except UnicodeDecodeError:
|
||||
post_data_str = f"[Binary data: {len(post_data)} bytes]"
|
||||
except Exception:
|
||||
post_data_str = "[Error retrieving post data]"
|
||||
|
||||
captured_requests.append({
|
||||
"event_type": "request",
|
||||
"url": request.url,
|
||||
"method": request.method,
|
||||
"headers": dict(request.headers), # Convert Header dict
|
||||
"post_data": post_data_str,
|
||||
"resource_type": request.resource_type,
|
||||
"is_navigation_request": request.is_navigation_request(),
|
||||
"timestamp": time.time()
|
||||
})
|
||||
except Exception as e:
|
||||
if self.logger:
|
||||
self.logger.warning(f"Error capturing request details for {request.url}: {e}", tag="CAPTURE")
|
||||
captured_requests.append({"event_type": "request_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
|
||||
|
||||
async def handle_response_capture(response):
|
||||
try:
|
||||
captured_requests.append({
|
||||
"event_type": "response",
|
||||
"url": response.url,
|
||||
"status": response.status,
|
||||
"status_text": response.status_text,
|
||||
"headers": dict(response.headers), # Convert Header dict
|
||||
"from_service_worker": response.from_service_worker,
|
||||
"request_timing": response.request.timing, # Detailed timing info
|
||||
"timestamp": time.time()
|
||||
})
|
||||
except Exception as e:
|
||||
if self.logger:
|
||||
self.logger.warning(f"Error capturing response details for {response.url}: {e}", tag="CAPTURE")
|
||||
captured_requests.append({"event_type": "response_capture_error", "url": response.url, "error": str(e), "timestamp": time.time()})
|
||||
|
||||
async def handle_request_failed_capture(request):
|
||||
try:
|
||||
captured_requests.append({
|
||||
"event_type": "request_failed",
|
||||
"url": request.url,
|
||||
"method": request.method,
|
||||
"resource_type": request.resource_type,
|
||||
"failure_text": request.failure.error_text if request.failure else "Unknown failure",
|
||||
"timestamp": time.time()
|
||||
})
|
||||
except Exception as e:
|
||||
if self.logger:
|
||||
self.logger.warning(f"Error capturing request failed details for {request.url}: {e}", tag="CAPTURE")
|
||||
captured_requests.append({"event_type": "request_failed_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
|
||||
|
||||
page.on("request", handle_request_capture)
|
||||
page.on("response", handle_response_capture)
|
||||
page.on("requestfailed", handle_request_failed_capture)
|
||||
|
||||
# Console Message Capturing
|
||||
if config.capture_console_messages:
|
||||
def handle_console_capture(msg):
|
||||
try:
|
||||
message_type = "unknown"
|
||||
try:
|
||||
message_type = msg.type
|
||||
except:
|
||||
pass
|
||||
|
||||
message_text = "unknown"
|
||||
try:
|
||||
message_text = msg.text
|
||||
except:
|
||||
pass
|
||||
|
||||
# Basic console message with minimal content
|
||||
entry = {
|
||||
"type": message_type,
|
||||
"text": message_text,
|
||||
"timestamp": time.time()
|
||||
}
|
||||
|
||||
captured_console.append(entry)
|
||||
|
||||
except Exception as e:
|
||||
if self.logger:
|
||||
self.logger.warning(f"Error capturing console message: {e}", tag="CAPTURE")
|
||||
# Still add something to the list even on error
|
||||
captured_console.append({
|
||||
"type": "console_capture_error",
|
||||
"error": str(e),
|
||||
"timestamp": time.time()
|
||||
})
|
||||
|
||||
def handle_pageerror_capture(err):
|
||||
try:
|
||||
error_message = "Unknown error"
|
||||
try:
|
||||
error_message = err.message
|
||||
except:
|
||||
pass
|
||||
|
||||
error_stack = ""
|
||||
try:
|
||||
error_stack = err.stack
|
||||
except:
|
||||
pass
|
||||
|
||||
captured_console.append({
|
||||
"type": "error",
|
||||
"text": error_message,
|
||||
"stack": error_stack,
|
||||
"timestamp": time.time()
|
||||
})
|
||||
except Exception as e:
|
||||
if self.logger:
|
||||
self.logger.warning(f"Error capturing page error: {e}", tag="CAPTURE")
|
||||
captured_console.append({
|
||||
"type": "pageerror_capture_error",
|
||||
"error": str(e),
|
||||
"timestamp": time.time()
|
||||
})
|
||||
|
||||
# Add event listeners directly
|
||||
page.on("console", handle_console_capture)
|
||||
page.on("pageerror", handle_pageerror_capture)
|
||||
|
||||
# Set up console logging if requested
|
||||
if config.log_console:
|
||||
|
||||
def log_consol(
|
||||
msg, console_log_type="debug"
|
||||
): # Corrected the parameter syntax
|
||||
@ -887,6 +1027,9 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
|
||||
self._downloaded_files if self._downloaded_files else None
|
||||
),
|
||||
redirected_url=redirected_url,
|
||||
# Include captured data if enabled
|
||||
network_requests=captured_requests if config.capture_network_requests else None,
|
||||
console_messages=captured_console if config.capture_console_messages else None,
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
@ -895,6 +1038,15 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
|
||||
finally:
|
||||
# If no session_id is given we should close the page
|
||||
if not config.session_id:
|
||||
# Detach listeners before closing to prevent potential errors during close
|
||||
if config.capture_network_requests:
|
||||
page.remove_listener("request", handle_request_capture)
|
||||
page.remove_listener("response", handle_response_capture)
|
||||
page.remove_listener("requestfailed", handle_request_failed_capture)
|
||||
if config.capture_console_messages:
|
||||
page.remove_listener("console", handle_console_capture)
|
||||
page.remove_listener("pageerror", handle_pageerror_capture)
|
||||
|
||||
await page.close()
|
||||
|
||||
async def _handle_full_page_scan(self, page: Page, scroll_delay: float = 0.1):
|
||||
|
@ -366,9 +366,10 @@ class AsyncWebCrawler:
|
||||
crawl_result.downloaded_files = async_response.downloaded_files
|
||||
crawl_result.js_execution_result = js_execution_result
|
||||
crawl_result.mhtml = async_response.mhtml_data
|
||||
crawl_result.ssl_certificate = (
|
||||
async_response.ssl_certificate
|
||||
) # Add SSL certificate
|
||||
crawl_result.ssl_certificate = async_response.ssl_certificate
|
||||
# Add captured network and console data if available
|
||||
crawl_result.network_requests = async_response.network_requests
|
||||
crawl_result.console_messages = async_response.console_messages
|
||||
|
||||
crawl_result.success = bool(html)
|
||||
crawl_result.session_id = getattr(config, "session_id", None)
|
||||
|
@ -148,6 +148,8 @@ class CrawlResult(BaseModel):
|
||||
ssl_certificate: Optional[SSLCertificate] = None
|
||||
dispatch_result: Optional[DispatchResult] = None
|
||||
redirected_url: Optional[str] = None
|
||||
network_requests: Optional[List[Dict[str, Any]]] = None
|
||||
console_messages: Optional[List[Dict[str, Any]]] = None
|
||||
|
||||
class Config:
|
||||
arbitrary_types_allowed = True
|
||||
@ -315,6 +317,8 @@ class AsyncCrawlResponse(BaseModel):
|
||||
downloaded_files: Optional[List[str]] = None
|
||||
ssl_certificate: Optional[SSLCertificate] = None
|
||||
redirected_url: Optional[str] = None
|
||||
network_requests: Optional[List[Dict[str, Any]]] = None
|
||||
console_messages: Optional[List[Dict[str, Any]]] = None
|
||||
|
||||
class Config:
|
||||
arbitrary_types_allowed = True
|
||||
|
471
docs/examples/network_console_capture_example.py
Normal file
@ -0,0 +1,471 @@
|
||||
import asyncio
|
||||
import json
|
||||
import os
|
||||
import base64
|
||||
from pathlib import Path
|
||||
from typing import List, Dict, Any
|
||||
from datetime import datetime
|
||||
|
||||
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode, CrawlResult
|
||||
from crawl4ai import BrowserConfig
|
||||
|
||||
__cur_dir__ = Path(__file__).parent
|
||||
|
||||
# Create temp directory if it doesn't exist
|
||||
os.makedirs(os.path.join(__cur_dir__, "tmp"), exist_ok=True)
|
||||
|
||||
async def demo_basic_network_capture():
|
||||
"""Basic network request capturing example"""
|
||||
print("\n=== 1. Basic Network Request Capturing ===")
|
||||
|
||||
async with AsyncWebCrawler() as crawler:
|
||||
config = CrawlerRunConfig(
|
||||
capture_network_requests=True,
|
||||
wait_until="networkidle" # Wait for network to be idle
|
||||
)
|
||||
|
||||
result = await crawler.arun(
|
||||
url="https://example.com/",
|
||||
config=config
|
||||
)
|
||||
|
||||
if result.success and result.network_requests:
|
||||
print(f"Captured {len(result.network_requests)} network events")
|
||||
|
||||
# Count by event type
|
||||
event_types = {}
|
||||
for req in result.network_requests:
|
||||
event_type = req.get("event_type", "unknown")
|
||||
event_types[event_type] = event_types.get(event_type, 0) + 1
|
||||
|
||||
print("Event types:")
|
||||
for event_type, count in event_types.items():
|
||||
print(f" - {event_type}: {count}")
|
||||
|
||||
# Show a sample request and response
|
||||
request = next((r for r in result.network_requests if r.get("event_type") == "request"), None)
|
||||
response = next((r for r in result.network_requests if r.get("event_type") == "response"), None)
|
||||
|
||||
if request:
|
||||
print("\nSample request:")
|
||||
print(f" URL: {request.get('url')}")
|
||||
print(f" Method: {request.get('method')}")
|
||||
print(f" Headers: {list(request.get('headers', {}).keys())}")
|
||||
|
||||
if response:
|
||||
print("\nSample response:")
|
||||
print(f" URL: {response.get('url')}")
|
||||
print(f" Status: {response.get('status')} {response.get('status_text', '')}")
|
||||
print(f" Headers: {list(response.get('headers', {}).keys())}")
|
||||
|
||||
async def demo_basic_console_capture():
|
||||
"""Basic console message capturing example"""
|
||||
print("\n=== 2. Basic Console Message Capturing ===")
|
||||
|
||||
# Create a simple HTML file with console messages
|
||||
html_file = os.path.join(__cur_dir__, "tmp", "console_test.html")
|
||||
with open(html_file, "w") as f:
|
||||
f.write("""
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Console Test</title>
|
||||
</head>
|
||||
<body>
|
||||
<h1>Console Message Test</h1>
|
||||
<script>
|
||||
console.log("This is a basic log message");
|
||||
console.info("This is an info message");
|
||||
console.warn("This is a warning message");
|
||||
console.error("This is an error message");
|
||||
|
||||
// Generate an error
|
||||
try {
|
||||
nonExistentFunction();
|
||||
} catch (e) {
|
||||
console.error("Caught error:", e);
|
||||
}
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
""")
|
||||
|
||||
async with AsyncWebCrawler() as crawler:
|
||||
config = CrawlerRunConfig(
|
||||
capture_console_messages=True,
|
||||
wait_until="networkidle" # Wait to make sure all scripts execute
|
||||
)
|
||||
|
||||
result = await crawler.arun(
|
||||
url=f"file://{html_file}",
|
||||
config=config
|
||||
)
|
||||
|
||||
if result.success and result.console_messages:
|
||||
print(f"Captured {len(result.console_messages)} console messages")
|
||||
|
||||
# Count by message type
|
||||
message_types = {}
|
||||
for msg in result.console_messages:
|
||||
msg_type = msg.get("type", "unknown")
|
||||
message_types[msg_type] = message_types.get(msg_type, 0) + 1
|
||||
|
||||
print("Message types:")
|
||||
for msg_type, count in message_types.items():
|
||||
print(f" - {msg_type}: {count}")
|
||||
|
||||
# Show all messages
|
||||
print("\nAll console messages:")
|
||||
for i, msg in enumerate(result.console_messages, 1):
|
||||
print(f" {i}. [{msg.get('type', 'unknown')}] {msg.get('text', '')}")
|
||||
|
||||
async def demo_combined_capture():
|
||||
"""Capturing both network requests and console messages"""
|
||||
print("\n=== 3. Combined Network and Console Capture ===")
|
||||
|
||||
async with AsyncWebCrawler() as crawler:
|
||||
config = CrawlerRunConfig(
|
||||
capture_network_requests=True,
|
||||
capture_console_messages=True,
|
||||
wait_until="networkidle"
|
||||
)
|
||||
|
||||
result = await crawler.arun(
|
||||
url="https://httpbin.org/html",
|
||||
config=config
|
||||
)
|
||||
|
||||
if result.success:
|
||||
network_count = len(result.network_requests) if result.network_requests else 0
|
||||
console_count = len(result.console_messages) if result.console_messages else 0
|
||||
|
||||
print(f"Captured {network_count} network events and {console_count} console messages")
|
||||
|
||||
# Save the captured data to a JSON file for analysis
|
||||
output_file = os.path.join(__cur_dir__, "tmp", "capture_data.json")
|
||||
with open(output_file, "w") as f:
|
||||
json.dump({
|
||||
"url": result.url,
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"network_requests": result.network_requests,
|
||||
"console_messages": result.console_messages
|
||||
}, f, indent=2)
|
||||
|
||||
print(f"Full capture data saved to {output_file}")
|
||||
|
||||
async def analyze_spa_network_traffic():
|
||||
"""Analyze network traffic of a Single-Page Application"""
|
||||
print("\n=== 4. Analyzing SPA Network Traffic ===")
|
||||
|
||||
async with AsyncWebCrawler(config=BrowserConfig(
|
||||
headless=True,
|
||||
viewport_width=1280,
|
||||
viewport_height=800
|
||||
)) as crawler:
|
||||
config = CrawlerRunConfig(
|
||||
capture_network_requests=True,
|
||||
capture_console_messages=True,
|
||||
# Wait longer to ensure all resources are loaded
|
||||
wait_until="networkidle",
|
||||
page_timeout=60000, # 60 seconds
|
||||
)
|
||||
|
||||
result = await crawler.arun(
|
||||
url="https://weather.com",
|
||||
config=config
|
||||
)
|
||||
|
||||
if result.success and result.network_requests:
|
||||
# Extract different types of requests
|
||||
requests = []
|
||||
responses = []
|
||||
failures = []
|
||||
|
||||
for event in result.network_requests:
|
||||
event_type = event.get("event_type")
|
||||
if event_type == "request":
|
||||
requests.append(event)
|
||||
elif event_type == "response":
|
||||
responses.append(event)
|
||||
elif event_type == "request_failed":
|
||||
failures.append(event)
|
||||
|
||||
print(f"Captured {len(requests)} requests, {len(responses)} responses, and {len(failures)} failures")
|
||||
|
||||
# Analyze request types
|
||||
resource_types = {}
|
||||
for req in requests:
|
||||
resource_type = req.get("resource_type", "unknown")
|
||||
resource_types[resource_type] = resource_types.get(resource_type, 0) + 1
|
||||
|
||||
print("\nResource types:")
|
||||
for resource_type, count in sorted(resource_types.items(), key=lambda x: x[1], reverse=True):
|
||||
print(f" - {resource_type}: {count}")
|
||||
|
||||
# Analyze API calls
|
||||
api_calls = [r for r in requests if "api" in r.get("url", "").lower()]
|
||||
if api_calls:
|
||||
print(f"\nDetected {len(api_calls)} API calls:")
|
||||
for i, call in enumerate(api_calls[:5], 1): # Show first 5
|
||||
print(f" {i}. {call.get('method')} {call.get('url')}")
|
||||
if len(api_calls) > 5:
|
||||
print(f" ... and {len(api_calls) - 5} more")
|
||||
|
||||
# Analyze response status codes
|
||||
status_codes = {}
|
||||
for resp in responses:
|
||||
status = resp.get("status", 0)
|
||||
status_codes[status] = status_codes.get(status, 0) + 1
|
||||
|
||||
print("\nResponse status codes:")
|
||||
for status, count in sorted(status_codes.items()):
|
||||
print(f" - {status}: {count}")
|
||||
|
||||
# Analyze failures
|
||||
if failures:
|
||||
print("\nFailed requests:")
|
||||
for i, failure in enumerate(failures[:5], 1): # Show first 5
|
||||
print(f" {i}. {failure.get('url')} - {failure.get('failure_text')}")
|
||||
if len(failures) > 5:
|
||||
print(f" ... and {len(failures) - 5} more")
|
||||
|
||||
# Check for console errors
|
||||
if result.console_messages:
|
||||
errors = [msg for msg in result.console_messages if msg.get("type") == "error"]
|
||||
if errors:
|
||||
print(f"\nDetected {len(errors)} console errors:")
|
||||
for i, error in enumerate(errors[:3], 1): # Show first 3
|
||||
print(f" {i}. {error.get('text', '')[:100]}...")
|
||||
if len(errors) > 3:
|
||||
print(f" ... and {len(errors) - 3} more")
|
||||
|
||||
# Save analysis to file
|
||||
output_file = os.path.join(__cur_dir__, "tmp", "weather_network_analysis.json")
|
||||
with open(output_file, "w") as f:
|
||||
json.dump({
|
||||
"url": result.url,
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"statistics": {
|
||||
"request_count": len(requests),
|
||||
"response_count": len(responses),
|
||||
"failure_count": len(failures),
|
||||
"resource_types": resource_types,
|
||||
"status_codes": {str(k): v for k, v in status_codes.items()},
|
||||
"api_call_count": len(api_calls),
|
||||
"console_error_count": len(errors) if result.console_messages else 0
|
||||
},
|
||||
"network_requests": result.network_requests,
|
||||
"console_messages": result.console_messages
|
||||
}, f, indent=2)
|
||||
|
||||
print(f"\nFull analysis saved to {output_file}")
|
||||
|
||||
async def demo_security_analysis():
|
||||
"""Using network capture for security analysis"""
|
||||
print("\n=== 5. Security Analysis with Network Capture ===")
|
||||
|
||||
async with AsyncWebCrawler() as crawler:
|
||||
config = CrawlerRunConfig(
|
||||
capture_network_requests=True,
|
||||
capture_console_messages=True,
|
||||
wait_until="networkidle"
|
||||
)
|
||||
|
||||
# A site that makes multiple third-party requests
|
||||
result = await crawler.arun(
|
||||
url="https://www.nytimes.com/",
|
||||
config=config
|
||||
)
|
||||
|
||||
if result.success and result.network_requests:
|
||||
print(f"Captured {len(result.network_requests)} network events")
|
||||
|
||||
# Extract all domains
|
||||
domains = set()
|
||||
for req in result.network_requests:
|
||||
if req.get("event_type") == "request":
|
||||
url = req.get("url", "")
|
||||
try:
|
||||
from urllib.parse import urlparse
|
||||
domain = urlparse(url).netloc
|
||||
if domain:
|
||||
domains.add(domain)
|
||||
except:
|
||||
pass
|
||||
|
||||
print(f"\nDetected requests to {len(domains)} unique domains:")
|
||||
main_domain = urlparse(result.url).netloc
|
||||
|
||||
# Separate first-party vs third-party domains
|
||||
first_party = [d for d in domains if main_domain in d]
|
||||
third_party = [d for d in domains if main_domain not in d]
|
||||
|
||||
print(f" - First-party domains: {len(first_party)}")
|
||||
print(f" - Third-party domains: {len(third_party)}")
|
||||
|
||||
# Look for potential trackers/analytics
|
||||
tracking_keywords = ["analytics", "tracker", "pixel", "tag", "stats", "metric", "collect", "beacon"]
|
||||
potential_trackers = []
|
||||
|
||||
for domain in third_party:
|
||||
if any(keyword in domain.lower() for keyword in tracking_keywords):
|
||||
potential_trackers.append(domain)
|
||||
|
||||
if potential_trackers:
|
||||
print(f"\nPotential tracking/analytics domains ({len(potential_trackers)}):")
|
||||
for i, domain in enumerate(sorted(potential_trackers)[:10], 1):
|
||||
print(f" {i}. {domain}")
|
||||
if len(potential_trackers) > 10:
|
||||
print(f" ... and {len(potential_trackers) - 10} more")
|
||||
|
||||
# Check for insecure (HTTP) requests
|
||||
insecure_requests = [
|
||||
req.get("url") for req in result.network_requests
|
||||
if req.get("event_type") == "request" and req.get("url", "").startswith("http://")
|
||||
]
|
||||
|
||||
if insecure_requests:
|
||||
print(f"\nWarning: Found {len(insecure_requests)} insecure (HTTP) requests:")
|
||||
for i, url in enumerate(insecure_requests[:5], 1):
|
||||
print(f" {i}. {url}")
|
||||
if len(insecure_requests) > 5:
|
||||
print(f" ... and {len(insecure_requests) - 5} more")
|
||||
|
||||
# Save security analysis to file
|
||||
output_file = os.path.join(__cur_dir__, "tmp", "security_analysis.json")
|
||||
with open(output_file, "w") as f:
|
||||
json.dump({
|
||||
"url": result.url,
|
||||
"main_domain": main_domain,
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"analysis": {
|
||||
"total_requests": len([r for r in result.network_requests if r.get("event_type") == "request"]),
|
||||
"unique_domains": len(domains),
|
||||
"first_party_domains": first_party,
|
||||
"third_party_domains": third_party,
|
||||
"potential_trackers": potential_trackers,
|
||||
"insecure_requests": insecure_requests
|
||||
}
|
||||
}, f, indent=2)
|
||||
|
||||
print(f"\nFull security analysis saved to {output_file}")
|
||||
|
||||
async def demo_performance_analysis():
|
||||
"""Using network capture for performance analysis"""
|
||||
print("\n=== 6. Performance Analysis with Network Capture ===")
|
||||
|
||||
async with AsyncWebCrawler() as crawler:
|
||||
config = CrawlerRunConfig(
|
||||
capture_network_requests=True,
|
||||
wait_until="networkidle",
|
||||
page_timeout=60000 # 60 seconds
|
||||
)
|
||||
|
||||
result = await crawler.arun(
|
||||
url="https://www.cnn.com/",
|
||||
config=config
|
||||
)
|
||||
|
||||
if result.success and result.network_requests:
|
||||
# Filter only response events with timing information
|
||||
responses_with_timing = [
|
||||
r for r in result.network_requests
|
||||
if r.get("event_type") == "response" and r.get("request_timing")
|
||||
]
|
||||
|
||||
if responses_with_timing:
|
||||
print(f"Analyzing timing for {len(responses_with_timing)} network responses")
|
||||
|
||||
# Group by resource type
|
||||
resource_timings = {}
|
||||
for resp in responses_with_timing:
|
||||
url = resp.get("url", "")
|
||||
timing = resp.get("request_timing", {})
|
||||
|
||||
# Determine resource type from URL extension
|
||||
ext = url.split(".")[-1].lower() if "." in url.split("/")[-1] else "unknown"
|
||||
if ext in ["jpg", "jpeg", "png", "gif", "webp", "svg", "ico"]:
|
||||
resource_type = "image"
|
||||
elif ext in ["js"]:
|
||||
resource_type = "javascript"
|
||||
elif ext in ["css"]:
|
||||
resource_type = "css"
|
||||
elif ext in ["woff", "woff2", "ttf", "otf", "eot"]:
|
||||
resource_type = "font"
|
||||
else:
|
||||
resource_type = "other"
|
||||
|
||||
if resource_type not in resource_timings:
|
||||
resource_timings[resource_type] = []
|
||||
|
||||
# Calculate request duration if timing information is available
|
||||
if isinstance(timing, dict) and "requestTime" in timing and "receiveHeadersEnd" in timing:
|
||||
# Convert to milliseconds
|
||||
duration = (timing["receiveHeadersEnd"] - timing["requestTime"]) * 1000
|
||||
resource_timings[resource_type].append({
|
||||
"url": url,
|
||||
"duration_ms": duration
|
||||
})
|
||||
|
||||
# Calculate statistics for each resource type
|
||||
print("\nPerformance by resource type:")
|
||||
for resource_type, timings in resource_timings.items():
|
||||
if timings:
|
||||
durations = [t["duration_ms"] for t in timings]
|
||||
avg_duration = sum(durations) / len(durations)
|
||||
max_duration = max(durations)
|
||||
slowest_resource = next(t["url"] for t in timings if t["duration_ms"] == max_duration)
|
||||
|
||||
print(f" {resource_type.upper()}:")
|
||||
print(f" - Count: {len(timings)}")
|
||||
print(f" - Avg time: {avg_duration:.2f} ms")
|
||||
print(f" - Max time: {max_duration:.2f} ms")
|
||||
print(f" - Slowest: {slowest_resource}")
|
||||
|
||||
# Identify the slowest resources overall
|
||||
all_timings = []
|
||||
for resource_type, timings in resource_timings.items():
|
||||
for timing in timings:
|
||||
timing["type"] = resource_type
|
||||
all_timings.append(timing)
|
||||
|
||||
all_timings.sort(key=lambda x: x["duration_ms"], reverse=True)
|
||||
|
||||
print("\nTop 5 slowest resources:")
|
||||
for i, timing in enumerate(all_timings[:5], 1):
|
||||
print(f" {i}. [{timing['type']}] {timing['url']} - {timing['duration_ms']:.2f} ms")
|
||||
|
||||
# Save performance analysis to file
|
||||
output_file = os.path.join(__cur_dir__, "tmp", "performance_analysis.json")
|
||||
with open(output_file, "w") as f:
|
||||
json.dump({
|
||||
"url": result.url,
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"resource_timings": resource_timings,
|
||||
"slowest_resources": all_timings[:10] # Save top 10
|
||||
}, f, indent=2)
|
||||
|
||||
print(f"\nFull performance analysis saved to {output_file}")
|
||||
|
||||
async def main():
|
||||
"""Run all demo functions sequentially"""
|
||||
print("=== Network and Console Capture Examples ===")
|
||||
|
||||
# Make sure tmp directory exists
|
||||
os.makedirs(os.path.join(__cur_dir__, "tmp"), exist_ok=True)
|
||||
|
||||
# Run basic examples
|
||||
await demo_basic_network_capture()
|
||||
await demo_basic_console_capture()
|
||||
await demo_combined_capture()
|
||||
|
||||
# Run advanced examples
|
||||
await analyze_spa_network_traffic()
|
||||
await demo_security_analysis()
|
||||
await demo_performance_analysis()
|
||||
|
||||
print("\n=== Examples Complete ===")
|
||||
print(f"Check the tmp directory for output files: {os.path.join(__cur_dir__, 'tmp')}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
205
docs/md_v2/advanced/network-console-capture.md
Normal file
@ -0,0 +1,205 @@
|
||||
# Network Requests & Console Message Capturing
|
||||
|
||||
Crawl4AI can capture all network requests and browser console messages during a crawl, which is invaluable for debugging, security analysis, or understanding page behavior.
|
||||
|
||||
## Configuration
|
||||
|
||||
To enable network and console capturing, use these configuration options:
|
||||
|
||||
```python
|
||||
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
|
||||
|
||||
# Enable both network request capture and console message capture
|
||||
config = CrawlerRunConfig(
|
||||
capture_network_requests=True, # Capture all network requests and responses
|
||||
capture_console_messages=True # Capture all browser console output
|
||||
)
|
||||
```
|
||||
|
||||
## Example Usage
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
import json
|
||||
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
|
||||
|
||||
async def main():
|
||||
# Enable both network request capture and console message capture
|
||||
config = CrawlerRunConfig(
|
||||
capture_network_requests=True,
|
||||
capture_console_messages=True
|
||||
)
|
||||
|
||||
async with AsyncWebCrawler() as crawler:
|
||||
result = await crawler.arun(
|
||||
url="https://example.com",
|
||||
config=config
|
||||
)
|
||||
|
||||
if result.success:
|
||||
# Analyze network requests
|
||||
if result.network_requests:
|
||||
print(f"Captured {len(result.network_requests)} network events")
|
||||
|
||||
# Count request types
|
||||
request_count = len([r for r in result.network_requests if r.get("event_type") == "request"])
|
||||
response_count = len([r for r in result.network_requests if r.get("event_type") == "response"])
|
||||
failed_count = len([r for r in result.network_requests if r.get("event_type") == "request_failed"])
|
||||
|
||||
print(f"Requests: {request_count}, Responses: {response_count}, Failed: {failed_count}")
|
||||
|
||||
# Find API calls
|
||||
api_calls = [r for r in result.network_requests
|
||||
if r.get("event_type") == "request" and "api" in r.get("url", "")]
|
||||
if api_calls:
|
||||
print(f"Detected {len(api_calls)} API calls:")
|
||||
for call in api_calls[:3]: # Show first 3
|
||||
print(f" - {call.get('method')} {call.get('url')}")
|
||||
|
||||
# Analyze console messages
|
||||
if result.console_messages:
|
||||
print(f"Captured {len(result.console_messages)} console messages")
|
||||
|
||||
# Group by type
|
||||
message_types = {}
|
||||
for msg in result.console_messages:
|
||||
msg_type = msg.get("type", "unknown")
|
||||
message_types[msg_type] = message_types.get(msg_type, 0) + 1
|
||||
|
||||
print("Message types:", message_types)
|
||||
|
||||
# Show errors (often the most important)
|
||||
errors = [msg for msg in result.console_messages if msg.get("type") == "error"]
|
||||
if errors:
|
||||
print(f"Found {len(errors)} console errors:")
|
||||
for err in errors[:2]: # Show first 2
|
||||
print(f" - {err.get('text', '')[:100]}")
|
||||
|
||||
# Export all captured data to a file for detailed analysis
|
||||
with open("network_capture.json", "w") as f:
|
||||
json.dump({
|
||||
"url": result.url,
|
||||
"network_requests": result.network_requests or [],
|
||||
"console_messages": result.console_messages or []
|
||||
}, f, indent=2)
|
||||
|
||||
print("Exported detailed capture data to network_capture.json")
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
## Captured Data Structure
|
||||
|
||||
### Network Requests
|
||||
|
||||
The `result.network_requests` contains a list of dictionaries, each representing a network event with these common fields:
|
||||
|
||||
| Field | Description |
|
||||
|-------|-------------|
|
||||
| `event_type` | Type of event: `"request"`, `"response"`, or `"request_failed"` |
|
||||
| `url` | The URL of the request |
|
||||
| `timestamp` | Unix timestamp when the event was captured |
|
||||
|
||||
#### Request Event Fields
|
||||
|
||||
```json
|
||||
{
|
||||
"event_type": "request",
|
||||
"url": "https://example.com/api/data.json",
|
||||
"method": "GET",
|
||||
"headers": {"User-Agent": "...", "Accept": "..."},
|
||||
"post_data": "key=value&otherkey=value",
|
||||
"resource_type": "fetch",
|
||||
"is_navigation_request": false,
|
||||
"timestamp": 1633456789.123
|
||||
}
|
||||
```
|
||||
|
||||
#### Response Event Fields
|
||||
|
||||
```json
|
||||
{
|
||||
"event_type": "response",
|
||||
"url": "https://example.com/api/data.json",
|
||||
"status": 200,
|
||||
"status_text": "OK",
|
||||
"headers": {"Content-Type": "application/json", "Cache-Control": "..."},
|
||||
"from_service_worker": false,
|
||||
"request_timing": {"requestTime": 1234.56, "receiveHeadersEnd": 1234.78},
|
||||
"timestamp": 1633456789.456
|
||||
}
|
||||
```
|
||||
|
||||
#### Failed Request Event Fields
|
||||
|
||||
```json
|
||||
{
|
||||
"event_type": "request_failed",
|
||||
"url": "https://example.com/missing.png",
|
||||
"method": "GET",
|
||||
"resource_type": "image",
|
||||
"failure_text": "net::ERR_ABORTED 404",
|
||||
"timestamp": 1633456789.789
|
||||
}
|
||||
```
|
||||
|
||||
### Console Messages
|
||||
|
||||
The `result.console_messages` contains a list of dictionaries, each representing a console message with these common fields:
|
||||
|
||||
| Field | Description |
|
||||
|-------|-------------|
|
||||
| `type` | Message type: `"log"`, `"error"`, `"warning"`, `"info"`, etc. |
|
||||
| `text` | The message text |
|
||||
| `timestamp` | Unix timestamp when the message was captured |
|
||||
|
||||
#### Console Message Example
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "error",
|
||||
"text": "Uncaught TypeError: Cannot read property 'length' of undefined",
|
||||
"location": "https://example.com/script.js:123:45",
|
||||
"timestamp": 1633456790.123
|
||||
}
|
||||
```
|
||||
|
||||
## Key Benefits
|
||||
|
||||
- **Full Request Visibility**: Capture all network activity including:
|
||||
- Requests (URLs, methods, headers, post data)
|
||||
- Responses (status codes, headers, timing)
|
||||
- Failed requests (with error messages)
|
||||
|
||||
- **Console Message Access**: View all JavaScript console output:
|
||||
- Log messages
|
||||
- Warnings
|
||||
- Errors with stack traces
|
||||
- Developer debugging information
|
||||
|
||||
- **Debugging Power**: Identify issues such as:
|
||||
- Failed API calls or resource loading
|
||||
- JavaScript errors affecting page functionality
|
||||
- CORS or other security issues
|
||||
- Hidden API endpoints and data flows
|
||||
|
||||
- **Security Analysis**: Detect:
|
||||
- Unexpected third-party requests
|
||||
- Data leakage in request payloads
|
||||
- Suspicious script behavior
|
||||
|
||||
- **Performance Insights**: Analyze:
|
||||
- Request timing data
|
||||
- Resource loading patterns
|
||||
- Potential bottlenecks
|
||||
|
||||
## Use Cases
|
||||
|
||||
1. **API Discovery**: Identify hidden endpoints and data flows in single-page applications
|
||||
2. **Debugging**: Track down JavaScript errors affecting page functionality
|
||||
3. **Security Auditing**: Detect unwanted third-party requests or data leakage
|
||||
4. **Performance Analysis**: Identify slow-loading resources
|
||||
5. **Ad/Tracker Analysis**: Detect and catalog advertising or tracking calls
|
||||
|
||||
This capability is especially valuable for complex sites with heavy JavaScript, single-page applications, or when you need to understand the exact communication happening between a browser and servers.
|
@ -281,7 +281,69 @@ for result in results:
|
||||
|
||||
---
|
||||
|
||||
## 7. Example: Accessing Everything
|
||||
## 7. Network Requests & Console Messages
|
||||
|
||||
When you enable network and console message capturing in `CrawlerRunConfig` using `capture_network_requests=True` and `capture_console_messages=True`, the `CrawlResult` will include these fields:
|
||||
|
||||
### 7.1 **`network_requests`** *(Optional[List[Dict[str, Any]]])*
|
||||
**What**: A list of dictionaries containing information about all network requests, responses, and failures captured during the crawl.
|
||||
**Structure**:
|
||||
- Each item has an `event_type` field that can be `"request"`, `"response"`, or `"request_failed"`.
|
||||
- Request events include `url`, `method`, `headers`, `post_data`, `resource_type`, and `is_navigation_request`.
|
||||
- Response events include `url`, `status`, `status_text`, `headers`, and `request_timing`.
|
||||
- Failed request events include `url`, `method`, `resource_type`, and `failure_text`.
|
||||
- All events include a `timestamp` field.
|
||||
|
||||
**Usage**:
|
||||
```python
|
||||
if result.network_requests:
|
||||
# Count different types of events
|
||||
requests = [r for r in result.network_requests if r.get("event_type") == "request"]
|
||||
responses = [r for r in result.network_requests if r.get("event_type") == "response"]
|
||||
failures = [r for r in result.network_requests if r.get("event_type") == "request_failed"]
|
||||
|
||||
print(f"Captured {len(requests)} requests, {len(responses)} responses, and {len(failures)} failures")
|
||||
|
||||
# Analyze API calls
|
||||
api_calls = [r for r in requests if "api" in r.get("url", "")]
|
||||
|
||||
# Identify failed resources
|
||||
for failure in failures:
|
||||
print(f"Failed to load: {failure.get('url')} - {failure.get('failure_text')}")
|
||||
```
|
||||
|
||||
### 7.2 **`console_messages`** *(Optional[List[Dict[str, Any]]])*
|
||||
**What**: A list of dictionaries containing all browser console messages captured during the crawl.
|
||||
**Structure**:
|
||||
- Each item has a `type` field indicating the message type (e.g., `"log"`, `"error"`, `"warning"`, etc.).
|
||||
- The `text` field contains the actual message text.
|
||||
- Some messages include `location` information (URL, line, column).
|
||||
- All messages include a `timestamp` field.
|
||||
|
||||
**Usage**:
|
||||
```python
|
||||
if result.console_messages:
|
||||
# Count messages by type
|
||||
message_types = {}
|
||||
for msg in result.console_messages:
|
||||
msg_type = msg.get("type", "unknown")
|
||||
message_types[msg_type] = message_types.get(msg_type, 0) + 1
|
||||
|
||||
print(f"Message type counts: {message_types}")
|
||||
|
||||
# Display errors (which are usually most important)
|
||||
for msg in result.console_messages:
|
||||
if msg.get("type") == "error":
|
||||
print(f"Error: {msg.get('text')}")
|
||||
```
|
||||
|
||||
These fields provide deep visibility into the page's network activity and browser console, which is invaluable for debugging, security analysis, and understanding complex web applications.
|
||||
|
||||
For more details on network and console capturing, see the [Network & Console Capture documentation](../advanced/network-console-capture.md).
|
||||
|
||||
---
|
||||
|
||||
## 8. Example: Accessing Everything
|
||||
|
||||
```python
|
||||
async def handle_result(result: CrawlResult):
|
||||
@ -321,11 +383,29 @@ async def handle_result(result: CrawlResult):
|
||||
print("PDF bytes length:", len(result.pdf))
|
||||
if result.mhtml:
|
||||
print("MHTML length:", len(result.mhtml))
|
||||
|
||||
# Network and console capturing
|
||||
if result.network_requests:
|
||||
print(f"Network requests captured: {len(result.network_requests)}")
|
||||
# Analyze request types
|
||||
req_types = {}
|
||||
for req in result.network_requests:
|
||||
if "resource_type" in req:
|
||||
req_types[req["resource_type"]] = req_types.get(req["resource_type"], 0) + 1
|
||||
print(f"Resource types: {req_types}")
|
||||
|
||||
if result.console_messages:
|
||||
print(f"Console messages captured: {len(result.console_messages)}")
|
||||
# Count by message type
|
||||
msg_types = {}
|
||||
for msg in result.console_messages:
|
||||
msg_types[msg.get("type", "unknown")] = msg_types.get(msg.get("type", "unknown"), 0) + 1
|
||||
print(f"Message types: {msg_types}")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 8. Key Points & Future
|
||||
## 9. Key Points & Future
|
||||
|
||||
1. **Deprecated legacy properties of CrawlResult**
|
||||
- `markdown_v2` - Deprecated in v0.5. Just use `markdown`. It holds the `MarkdownGenerationResult` now!
|
||||
|
@ -38,6 +38,7 @@ nav:
|
||||
- "Crawl Dispatcher": "advanced/crawl-dispatcher.md"
|
||||
- "Identity Based Crawling": "advanced/identity-based-crawling.md"
|
||||
- "SSL Certificate": "advanced/ssl-certificate.md"
|
||||
- "Network & Console Capture": "advanced/network-console-capture.md"
|
||||
- Extraction:
|
||||
- "LLM-Free Strategies": "extraction/no-llm-strategies.md"
|
||||
- "LLM Strategies": "extraction/llm-strategies.md"
|
||||
|
20
parameter_updates.txt
Normal file
@ -0,0 +1,20 @@
|
||||
The file /docs/md_v2/api/parameters.md should be updated to include the new network and console capturing parameters.
|
||||
|
||||
Here's what needs to be updated:
|
||||
|
||||
1. Change section title from:
|
||||
```
|
||||
### G) **Debug & Logging**
|
||||
```
|
||||
to:
|
||||
```
|
||||
### G) **Debug, Logging & Capturing**
|
||||
```
|
||||
|
||||
2. Add new parameters to the table:
|
||||
```
|
||||
| **`capture_network_requests`** | `bool` (False) | Captures all network requests, responses, and failures during the crawl. Available in `result.network_requests`. |
|
||||
| **`capture_console_messages`** | `bool` (False) | Captures all browser console messages (logs, warnings, errors) during the crawl. Available in `result.console_messages`. |
|
||||
```
|
||||
|
||||
These changes demonstrate how to use the new network and console capturing features in the CrawlerRunConfig.
|
489
prompts/prompt_net_requests.md
Normal file
@ -0,0 +1,489 @@
|
||||
I want to enhance the `AsyncPlaywrightCrawlerStrategy` to optionally capture network requests and console messages during a crawl, storing them in the final `CrawlResult`.
|
||||
|
||||
Here's a breakdown of the proposed changes across the relevant files:
|
||||
|
||||
**1. Configuration (`crawl4ai/async_configs.py`)**
|
||||
|
||||
* **Goal:** Add flags to `CrawlerRunConfig` to enable/disable capturing.
|
||||
* **Changes:**
|
||||
* Add two new boolean attributes to `CrawlerRunConfig`:
|
||||
* `capture_network_requests: bool = False`
|
||||
* `capture_console_messages: bool = False`
|
||||
* Update `__init__`, `from_kwargs`, `to_dict`, and implicitly `clone`/`dump`/`load` to include these new attributes.
|
||||
|
||||
```python
|
||||
# ==== File: crawl4ai/async_configs.py ====
|
||||
# ... (imports) ...
|
||||
|
||||
class CrawlerRunConfig():
|
||||
# ... (existing attributes) ...
|
||||
|
||||
# NEW: Network and Console Capturing Parameters
|
||||
capture_network_requests: bool = False
|
||||
capture_console_messages: bool = False
|
||||
|
||||
# Experimental Parameters
|
||||
experimental: Dict[str, Any] = None,
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
# ... (existing parameters) ...
|
||||
|
||||
# NEW: Network and Console Capturing Parameters
|
||||
capture_network_requests: bool = False,
|
||||
capture_console_messages: bool = False,
|
||||
|
||||
# Experimental Parameters
|
||||
experimental: Dict[str, Any] = None,
|
||||
):
|
||||
# ... (existing assignments) ...
|
||||
|
||||
# NEW: Assign new parameters
|
||||
self.capture_network_requests = capture_network_requests
|
||||
self.capture_console_messages = capture_console_messages
|
||||
|
||||
# Experimental Parameters
|
||||
self.experimental = experimental or {}
|
||||
|
||||
# ... (rest of __init__) ...
|
||||
|
||||
@staticmethod
|
||||
def from_kwargs(kwargs: dict) -> "CrawlerRunConfig":
|
||||
return CrawlerRunConfig(
|
||||
# ... (existing kwargs gets) ...
|
||||
|
||||
# NEW: Get new parameters
|
||||
capture_network_requests=kwargs.get("capture_network_requests", False),
|
||||
capture_console_messages=kwargs.get("capture_console_messages", False),
|
||||
|
||||
# Experimental Parameters
|
||||
experimental=kwargs.get("experimental"),
|
||||
)
|
||||
|
||||
def to_dict(self):
|
||||
return {
|
||||
# ... (existing dict entries) ...
|
||||
|
||||
# NEW: Add new parameters to dict
|
||||
"capture_network_requests": self.capture_network_requests,
|
||||
"capture_console_messages": self.capture_console_messages,
|
||||
|
||||
"experimental": self.experimental,
|
||||
}
|
||||
|
||||
# clone(), dump(), load() should work automatically if they rely on to_dict() and from_kwargs()
|
||||
# or the serialization logic correctly handles all attributes.
|
||||
```
|
||||
|
||||
**2. Data Models (`crawl4ai/models.py`)**
|
||||
|
||||
* **Goal:** Add fields to store the captured data in the response/result objects.
|
||||
* **Changes:**
|
||||
* Add `network_requests: Optional[List[Dict[str, Any]]] = None` and `console_messages: Optional[List[Dict[str, Any]]] = None` to `AsyncCrawlResponse`.
|
||||
* Add the same fields to `CrawlResult`.
|
||||
|
||||
```python
|
||||
# ==== File: crawl4ai/models.py ====
|
||||
# ... (imports) ...
|
||||
|
||||
# ... (Existing dataclasses/models) ...
|
||||
|
||||
class AsyncCrawlResponse(BaseModel):
|
||||
html: str
|
||||
response_headers: Dict[str, str]
|
||||
js_execution_result: Optional[Dict[str, Any]] = None
|
||||
status_code: int
|
||||
screenshot: Optional[str] = None
|
||||
pdf_data: Optional[bytes] = None
|
||||
get_delayed_content: Optional[Callable[[Optional[float]], Awaitable[str]]] = None
|
||||
downloaded_files: Optional[List[str]] = None
|
||||
ssl_certificate: Optional[SSLCertificate] = None
|
||||
redirected_url: Optional[str] = None
|
||||
# NEW: Fields for captured data
|
||||
network_requests: Optional[List[Dict[str, Any]]] = None
|
||||
console_messages: Optional[List[Dict[str, Any]]] = None
|
||||
|
||||
class Config:
|
||||
arbitrary_types_allowed = True
|
||||
|
||||
# ... (Existing models like MediaItem, Link, etc.) ...
|
||||
|
||||
class CrawlResult(BaseModel):
|
||||
url: str
|
||||
html: str
|
||||
success: bool
|
||||
cleaned_html: Optional[str] = None
|
||||
media: Dict[str, List[Dict]] = {}
|
||||
links: Dict[str, List[Dict]] = {}
|
||||
downloaded_files: Optional[List[str]] = None
|
||||
js_execution_result: Optional[Dict[str, Any]] = None
|
||||
screenshot: Optional[str] = None
|
||||
pdf: Optional[bytes] = None
|
||||
mhtml: Optional[str] = None # Added mhtml based on the provided models.py
|
||||
_markdown: Optional[MarkdownGenerationResult] = PrivateAttr(default=None)
|
||||
extracted_content: Optional[str] = None
|
||||
metadata: Optional[dict] = None
|
||||
error_message: Optional[str] = None
|
||||
session_id: Optional[str] = None
|
||||
response_headers: Optional[dict] = None
|
||||
status_code: Optional[int] = None
|
||||
ssl_certificate: Optional[SSLCertificate] = None
|
||||
dispatch_result: Optional[DispatchResult] = None
|
||||
redirected_url: Optional[str] = None
|
||||
# NEW: Fields for captured data
|
||||
network_requests: Optional[List[Dict[str, Any]]] = None
|
||||
console_messages: Optional[List[Dict[str, Any]]] = None
|
||||
|
||||
class Config:
|
||||
arbitrary_types_allowed = True
|
||||
|
||||
# ... (Existing __init__, properties, model_dump for markdown compatibility) ...
|
||||
|
||||
# ... (Rest of the models) ...
|
||||
```
|
||||
|
||||
**3. Crawler Strategy (`crawl4ai/async_crawler_strategy.py`)**
|
||||
|
||||
* **Goal:** Implement the actual capturing logic within `AsyncPlaywrightCrawlerStrategy._crawl_web`.
|
||||
* **Changes:**
|
||||
* Inside `_crawl_web`, initialize empty lists `captured_requests = []` and `captured_console = []`.
|
||||
* Conditionally attach Playwright event listeners (`page.on(...)`) based on the `config.capture_network_requests` and `config.capture_console_messages` flags.
|
||||
* Define handler functions for these listeners to extract relevant data and append it to the respective lists. Include timestamps.
|
||||
* Pass the captured lists to the `AsyncCrawlResponse` constructor at the end of the method.
|
||||
|
||||
```python
|
||||
# ==== File: crawl4ai/async_crawler_strategy.py ====
|
||||
# ... (imports) ...
|
||||
import time # Make sure time is imported
|
||||
|
||||
class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
|
||||
# ... (existing methods like __init__, start, close, etc.) ...
|
||||
|
||||
    async def _crawl_web(
        self, url: str, config: CrawlerRunConfig
    ) -> AsyncCrawlResponse:
        """
        Internal method to crawl web URLs with the specified configuration.
        Includes optional network and console capturing.  # MODIFIED DOCSTRING
        """
        config.url = url
        response_headers = {}
        execution_result = None
        status_code = None
        redirected_url = url

        # Reset downloaded files list for new crawl
        self._downloaded_files = []

        # Initialize capture lists - IMPORTANT: reset per crawl
        captured_requests: List[Dict[str, Any]] = []
        captured_console: List[Dict[str, Any]] = []

        # Handle user agent ... (existing code) ...

        # Get page for session
        page, context = await self.browser_manager.get_page(crawlerRunConfig=config)

        # ... (existing code for cookies, navigator overrides, hooks) ...

        # --- Setup Capturing Listeners ---
        # NOTE: These listeners are attached *before* page.goto()

        # Network Request Capturing
        if config.capture_network_requests:
            async def handle_request_capture(request):
                try:
                    post_data_str = None
                    try:
                        # Be cautious with large post data
                        post_data = request.post_data_buffer
                        if post_data:
                            # Attempt to decode; fall back to a size indication for binary data
                            try:
                                post_data_str = post_data.decode("utf-8")
                            except UnicodeDecodeError:
                                post_data_str = f"[Binary data: {len(post_data)} bytes]"
                    except Exception:
                        post_data_str = "[Error retrieving post data]"

                    captured_requests.append({
                        "event_type": "request",
                        "url": request.url,
                        "method": request.method,
                        "headers": dict(request.headers),  # Convert header dict
                        "post_data": post_data_str,
                        "resource_type": request.resource_type,
                        "is_navigation_request": request.is_navigation_request(),
                        "timestamp": time.time()
                    })
                except Exception as e:
                    self.logger.warning(f"Error capturing request details for {request.url}: {e}", tag="CAPTURE")
                    captured_requests.append({"event_type": "request_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})

            async def handle_response_capture(response):
                try:
                    # Avoid capturing the full response body by default (size/security)
                    # security_details = await response.security_details()  # Optional: more SSL info
                    captured_requests.append({
                        "event_type": "response",
                        "url": response.url,
                        "status": response.status,
                        "status_text": response.status_text,
                        "headers": dict(response.headers),  # Convert header dict
                        "from_service_worker": response.from_service_worker,
                        # "security_details": security_details,  # Uncomment if needed
                        "request_timing": response.request.timing,  # Detailed timing info
                        "timestamp": time.time()
                    })
                except Exception as e:
                    self.logger.warning(f"Error capturing response details for {response.url}: {e}", tag="CAPTURE")
                    captured_requests.append({"event_type": "response_capture_error", "url": response.url, "error": str(e), "timestamp": time.time()})

            async def handle_request_failed_capture(request):
                try:
                    captured_requests.append({
                        "event_type": "request_failed",
                        "url": request.url,
                        "method": request.method,
                        "resource_type": request.resource_type,
                        # request.failure is an error string (or None) in Playwright's Python API
                        "failure_text": request.failure or "Unknown failure",
                        "timestamp": time.time()
                    })
                except Exception as e:
                    self.logger.warning(f"Error capturing request failed details for {request.url}: {e}", tag="CAPTURE")
                    captured_requests.append({"event_type": "request_failed_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})

            page.on("request", handle_request_capture)
            page.on("response", handle_response_capture)
            page.on("requestfailed", handle_request_failed_capture)

        # Console Message Capturing
        if config.capture_console_messages:
            async def handle_console_capture(msg):  # async so JSHandle.json_value() can be awaited
                try:
                    location = msg.location  # Property (dict) in Playwright's Python API
                    # Attempt to resolve JSHandle args to primitive values
                    resolved_args = []
                    try:
                        for arg in msg.args:
                            resolved_args.append(await arg.json_value())  # May fail for complex objects
                    except Exception:
                        resolved_args.append("[Could not resolve JSHandle args]")

                    captured_console.append({
                        "type": msg.type,  # e.g. 'log', 'error', 'warning'
                        "text": msg.text,
                        "args": resolved_args,  # Captured arguments
                        "location": f"{location['url']}:{location['lineNumber']}:{location['columnNumber']}" if location else "N/A",
                        "timestamp": time.time()
                    })
                except Exception as e:
                    self.logger.warning(f"Error capturing console message: {e}", tag="CAPTURE")
                    captured_console.append({"type": "console_capture_error", "error": str(e), "timestamp": time.time()})

            def handle_pageerror_capture(err):
                try:
                    captured_console.append({
                        "type": "error",  # Consistent type for page errors
                        "text": err.message,
                        "stack": err.stack,
                        "timestamp": time.time()
                    })
                except Exception as e:
                    self.logger.warning(f"Error capturing page error: {e}", tag="CAPTURE")
                    captured_console.append({"type": "pageerror_capture_error", "error": str(e), "timestamp": time.time()})

            page.on("console", handle_console_capture)
            page.on("pageerror", handle_pageerror_capture)
        # --- End Setup Capturing Listeners ---

        # Set up console logging if requested (keep the original logging logic separate or merge carefully)
        if config.log_console:
            # ... (original log_console setup using page.on(...) remains here) ...
            # This allows logging to screen *and* capturing to the list when both flags are True
            def log_consol(msg, console_log_type="debug"):
                # ... existing implementation ...
                pass  # Placeholder for existing code

            page.on("console", lambda msg: log_consol(msg, "debug"))
            page.on("pageerror", lambda e: log_consol(e, "error"))

        try:
            # ... (existing code for SSL, downloads, goto, waits, JS execution, etc.) ...

            # Get final HTML content
            # ... (existing code for selector logic or page.content()) ...
            if config.css_selector:
                # ... existing selector logic ...
                html = "<div class='crawl4ai-result'>\n" + "\n".join(html_parts) + "\n</div>"
            else:
                html = await page.content()

            await self.execute_hook(
                "before_return_html", page=page, html=html, context=context, config=config
            )

            # Handle PDF and screenshot generation
            # ... (existing code) ...

            # Define delayed content getter
            # ... (existing code) ...

            # Return complete response - ADD CAPTURED DATA HERE
            return AsyncCrawlResponse(
                html=html,
                response_headers=response_headers,
                js_execution_result=execution_result,
                status_code=status_code,
                screenshot=screenshot_data,
                pdf_data=pdf_data,
                get_delayed_content=get_delayed_content,
                ssl_certificate=ssl_cert,
                downloaded_files=(
                    self._downloaded_files if self._downloaded_files else None
                ),
                redirected_url=redirected_url,
                # NEW: Pass captured data conditionally
                network_requests=captured_requests if config.capture_network_requests else None,
                console_messages=captured_console if config.capture_console_messages else None,
            )

        except Exception as e:
            raise e  # Re-raise the original exception

        finally:
            # If no session_id is given we should close the page
            if not config.session_id:
                # Detach listeners before closing to prevent potential errors during close
                if config.capture_network_requests:
                    page.remove_listener("request", handle_request_capture)
                    page.remove_listener("response", handle_response_capture)
                    page.remove_listener("requestfailed", handle_request_failed_capture)
                if config.capture_console_messages:
                    page.remove_listener("console", handle_console_capture)
                    page.remove_listener("pageerror", handle_pageerror_capture)
                # Also remove logging listeners if they were attached
                if config.log_console:
                    # The logging lambdas are not stored, so they cannot be removed here;
                    # this is usually fine since the page is closed right after.
                    pass

                await page.close()

    # ... (rest of AsyncPlaywrightCrawlerStrategy methods) ...
```

**4. Core Crawler (`crawl4ai/async_webcrawler.py`)**

* **Goal:** Ensure the captured data from `AsyncCrawlResponse` is transferred to the final `CrawlResult`.
* **Changes:**
    * In `arun`, when processing a non-cached result (inside the `if not cached_result or not html:` block), after receiving `async_response` and calling `aprocess_html` to get `crawl_result`, copy the `network_requests` and `console_messages` from `async_response` to `crawl_result`.

```python
# ==== File: crawl4ai/async_webcrawler.py ====
# ... (imports) ...

class AsyncWebCrawler:
    # ... (existing methods) ...

    async def arun(
        self,
        url: str,
        config: CrawlerRunConfig = None,
        **kwargs,
    ) -> RunManyReturn:
        # ... (existing setup, cache check) ...

        async with self._lock or self.nullcontext():
            try:
                # ... (existing logging, cache context setup) ...

                if cached_result:
                    # ... (existing cache handling logic) ...
                    # Note: captured network/console data is usually not useful from cache.
                    # Ensure the fields are None when read from cache, unless stored explicitly.
                    cached_result.network_requests = cached_result.network_requests or None
                    cached_result.console_messages = cached_result.console_messages or None
                    # ... (rest of cache logic) ...

                # Fetch fresh content if needed
                if not cached_result or not html:
                    t1 = time.perf_counter()

                    # ... (existing user agent update, robots.txt check) ...

                    ##############################
                    # Call CrawlerStrategy.crawl #
                    ##############################
                    async_response = await self.crawler_strategy.crawl(
                        url,
                        config=config,
                    )

                    # ... (existing assignment of html, screenshot, pdf, js_result from async_response) ...

                    t2 = time.perf_counter()
                    # ... (existing logging) ...

                    ###############################################################
                    # Process the HTML content, Call CrawlerStrategy.process_html #
                    ###############################################################
                    crawl_result: CrawlResult = await self.aprocess_html(
                        # ... (existing args) ...
                    )

                    # --- Transfer data from AsyncCrawlResponse to CrawlResult ---
                    crawl_result.status_code = async_response.status_code
                    crawl_result.redirected_url = async_response.redirected_url or url
                    crawl_result.response_headers = async_response.response_headers
                    crawl_result.downloaded_files = async_response.downloaded_files
                    crawl_result.js_execution_result = js_execution_result
                    crawl_result.ssl_certificate = async_response.ssl_certificate
                    # NEW: Copy captured data
                    crawl_result.network_requests = async_response.network_requests
                    crawl_result.console_messages = async_response.console_messages
                    # ------------------------------------------------------------

                    crawl_result.success = bool(html)
                    crawl_result.session_id = getattr(config, "session_id", None)

                    # ... (existing logging) ...

                    # Update cache if appropriate
                    if cache_context.should_write() and not bool(cached_result):
                        # crawl_result now includes network/console data if captured
                        await async_db_manager.acache_url(crawl_result)

                    return CrawlResultContainer(crawl_result)

                else:  # Cached result was used
                    # ... (existing logging for cache hit) ...
                    cached_result.success = bool(html)
                    cached_result.session_id = getattr(config, "session_id", None)
                    cached_result.redirected_url = cached_result.redirected_url or url
                    return CrawlResultContainer(cached_result)

            except Exception as e:
                # ... (existing error handling) ...
                return CrawlResultContainer(
                    CrawlResult(
                        url=url, html="", success=False, error_message=error_message
                    )
                )

    # ... (aprocess_html remains unchanged regarding capture) ...

    # ... (arun_many remains unchanged regarding capture) ...
```

**Summary of Changes:**

1. **Configuration:** Added `capture_network_requests` and `capture_console_messages` flags to `CrawlerRunConfig`.
2. **Models:** Added the corresponding `network_requests` and `console_messages` fields (lists of dicts) to `AsyncCrawlResponse` and `CrawlResult`.
3. **Strategy:** Implemented conditional event listeners in `AsyncPlaywrightCrawlerStrategy._crawl_web` that capture data into lists when the flags are true, populate these fields in the returned `AsyncCrawlResponse`, handle errors inside the capture handlers, and timestamp every event.
4. **Crawler:** Modified `AsyncWebCrawler.arun` to copy the captured data from `AsyncCrawlResponse` into the final `CrawlResult` for non-cached fetches.

This approach keeps the capturing logic contained within the Playwright strategy, uses clear configuration flags, and integrates the results into the existing data flow. The list-of-dictionaries format is flexible enough to hold the varied information coming from requests, responses, and console messages.
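
For illustration, here is a minimal usage sketch of the new flags from the caller's side (the target URL is a placeholder; imports follow the test file below, and the dictionary keys match the capture handlers above):

```python
import asyncio

from crawl4ai.async_webcrawler import AsyncWebCrawler
from crawl4ai.async_configs import CrawlerRunConfig


async def main():
    # Enable both capture features for a single crawl
    config = CrawlerRunConfig(
        capture_network_requests=True,
        capture_console_messages=True,
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=config)

        # Network entries are dicts with an "event_type"
        # ("request", "response", "request_failed") and a "timestamp"
        for event in result.network_requests or []:
            print(event["event_type"], event.get("url", ""))

        # Console entries carry "type" (e.g. "log", "error") and "text"
        for msg in result.console_messages or []:
            print(msg["type"], msg.get("text", ""))


if __name__ == "__main__":
    asyncio.run(main())
```

Because both fields default to `None` when capture is disabled, callers should guard with `or []` as above.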

temp.txt (3 lines removed)
@@ -1,3 +0,0 @@

7. **`screenshot`**, **`pdf`**, & **`capture_mhtml`**:
   - If `True`, captures a screenshot, PDF, or MHTML snapshot after the page is fully loaded.
   - The results go to `result.screenshot` (base64), `result.pdf` (bytes), or `result.mhtml` (string).

tests/general/test_network_console_capture.py (new file, 185 lines)
@@ -0,0 +1,185 @@
from crawl4ai.async_webcrawler import AsyncWebCrawler
from crawl4ai.async_configs import CrawlerRunConfig, BrowserConfig
import asyncio
import aiohttp
from aiohttp import web
import tempfile
import shutil
import os, sys, time, json


async def start_test_server():
    app = web.Application()

    async def basic_page(request):
        return web.Response(text="""
            <!DOCTYPE html>
            <html>
            <head>
                <title>Network Request Test</title>
            </head>
            <body>
                <h1>Test Page for Network Capture</h1>
                <p>This page performs network requests and console logging.</p>
                <img src="/image.png" alt="Test Image">
                <script>
                    console.log("Basic console log");
                    console.error("Error message");
                    console.warn("Warning message");

                    // Make some XHR requests
                    const xhr = new XMLHttpRequest();
                    xhr.open('GET', '/api/data', true);
                    xhr.send();

                    // Make a fetch request
                    fetch('/api/json')
                        .then(response => response.json())
                        .catch(error => console.error('Fetch error:', error));

                    // Trigger an error
                    setTimeout(() => {
                        try {
                            nonExistentFunction();
                        } catch (e) {
                            console.error("Caught error:", e);
                        }
                    }, 100);
                </script>
            </body>
            </html>
        """, content_type="text/html")

    async def image(request):
        # Return a small 1x1 transparent PNG
        return web.Response(body=bytes.fromhex('89504E470D0A1A0A0000000D49484452000000010000000108060000001F15C4890000000D4944415478DA63FAFFFF3F030079DB00018D959DE70000000049454E44AE426082'), content_type="image/png")

    async def api_data(request):
        return web.Response(text="sample data")

    async def api_json(request):
        return web.json_response({"status": "success", "message": "JSON data"})

    # Register routes
    app.router.add_get('/', basic_page)
    app.router.add_get('/image.png', image)
    app.router.add_get('/api/data', api_data)
    app.router.add_get('/api/json', api_json)

    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, 'localhost', 8080)
    await site.start()

    return runner


async def test_network_console_capture():
    print("\n=== Testing Network and Console Capture ===\n")

    # Start test server
    runner = await start_test_server()
    try:
        browser_config = BrowserConfig(headless=True)

        # Test with capture disabled (default)
        print("\n1. Testing with capture disabled (default)...")
        async with AsyncWebCrawler(config=browser_config) as crawler:
            config = CrawlerRunConfig(
                wait_until="networkidle",  # Wait for network to be idle
            )
            result = await crawler.arun(url="http://localhost:8080/", config=config)

            assert result.network_requests is None, "Network requests should be None when capture is disabled"
            assert result.console_messages is None, "Console messages should be None when capture is disabled"
            print("✓ Default config correctly returns None for network_requests and console_messages")

        # Test with network capture enabled
        print("\n2. Testing with network capture enabled...")
        async with AsyncWebCrawler(config=browser_config) as crawler:
            config = CrawlerRunConfig(
                wait_until="networkidle",  # Wait for network to be idle
                capture_network_requests=True
            )
            result = await crawler.arun(url="http://localhost:8080/", config=config)

            assert result.network_requests is not None, "Network requests should be captured"
            print(f"✓ Captured {len(result.network_requests)} network requests")

            # Check if we have both requests and responses
            request_count = len([r for r in result.network_requests if r.get("event_type") == "request"])
            response_count = len([r for r in result.network_requests if r.get("event_type") == "response"])
            print(f" - {request_count} requests, {response_count} responses")

            # Check if we captured specific resources
            urls = [r.get("url") for r in result.network_requests]
            has_image = any("/image.png" in url for url in urls)
            has_api_data = any("/api/data" in url for url in urls)
            has_api_json = any("/api/json" in url for url in urls)

            assert has_image, "Should have captured image request"
            assert has_api_data, "Should have captured API data request"
            assert has_api_json, "Should have captured API JSON request"
            print("✓ Captured expected network requests (image, API endpoints)")

        # Test with console capture enabled
        print("\n3. Testing with console capture enabled...")
        async with AsyncWebCrawler(config=browser_config) as crawler:
            config = CrawlerRunConfig(
                wait_until="networkidle",  # Wait for network to be idle
                capture_console_messages=True
            )
            result = await crawler.arun(url="http://localhost:8080/", config=config)

            assert result.console_messages is not None, "Console messages should be captured"
            print(f"✓ Captured {len(result.console_messages)} console messages")

            # Check if we have different types of console messages
            message_types = set(msg.get("type") for msg in result.console_messages if "type" in msg)
            print(f" - Message types: {', '.join(message_types)}")

            # Print all captured messages for debugging
            print(" - Captured messages:")
            for msg in result.console_messages:
                print(f" * Type: {msg.get('type', 'N/A')}, Text: {msg.get('text', 'N/A')}")

            # Look for specific messages
            messages = [msg.get("text") for msg in result.console_messages if "text" in msg]
            has_basic_log = any("Basic console log" in msg for msg in messages)
            has_error_msg = any("Error message" in msg for msg in messages)
            has_warning_msg = any("Warning message" in msg for msg in messages)

            assert has_basic_log, "Should have captured basic console.log message"
            assert has_error_msg, "Should have captured console.error message"
            assert has_warning_msg, "Should have captured console.warn message"
            print("✓ Captured expected console messages (log, error, warning)")

        # Test with both captures enabled
        print("\n4. Testing with both network and console capture enabled...")
        async with AsyncWebCrawler(config=browser_config) as crawler:
            config = CrawlerRunConfig(
                wait_until="networkidle",  # Wait for network to be idle
                capture_network_requests=True,
                capture_console_messages=True
            )
            result = await crawler.arun(url="http://localhost:8080/", config=config)

            assert result.network_requests is not None, "Network requests should be captured"
            assert result.console_messages is not None, "Console messages should be captured"
            print(f"✓ Successfully captured both {len(result.network_requests)} network requests and {len(result.console_messages)} console messages")

    finally:
        await runner.cleanup()
        print("\nTest server shutdown")


async def main():
    try:
        await test_network_console_capture()
        print("\n✅ All tests passed successfully!")
    except Exception as e:
        print(f"\n❌ Test failed: {str(e)}")
        raise


if __name__ == "__main__":
    asyncio.run(main())