Docker support and Ollama support (#47)

- Added support for running CLI and Ollama server via Docker
- Introduced tests for local embeddings model and standalone Docker setup
- Enabled conditional Ollama server launch via LLM_PROVIDER
Geeta Chauhan 2025-06-25 20:57:05 -07:00 committed by GitHub
parent 7abff0f354
commit 78ea029a0b
23 changed files with 2141 additions and 19 deletions

125
.dockerignore Normal file

@ -0,0 +1,125 @@
# Git specific
.git
.gitignore
.gitattributes
*.git
# Python specific
__pycache__/
*.pyc
*.pyo
*.pyd
*.egg-info/
.Python
env/
venv/
.venv/
.pytest_cache/
.coverage
.coverage.*
htmlcov/
.tox/
.mypy_cache/
.dmypy.json
dmypy.json
# Environment files
.env
.env.*
!.env.example
# IDE/Editor specific
.vscode/
.idea/
*.swp
*.swo
*.sublime-*
.spyderproject
.spyproject
# Model cache directories (can be large)
.ollama/
ollama_data/
.cache/
.local/
# Documentation and non-essential files
*.md
!README.md
docs/
assets/
*.png
*.jpg
*.jpeg
*.gif
*.svg
!assets/TauricResearch.png
# Build artifacts and logs
build/
dist/
*.log
logs/
*.tmp
*.temp
# Test files (uncomment if you don't want tests in production image)
# tests/test_*.py
# test_*.py
# *_test.py
# Docker and deployment files
Dockerfile*
docker-compose*.yml
.dockerignore
build*.sh
deploy*.sh
k8s/
helm/
# Development tools
.devcontainer/
.github/
.gitlab-ci.yml
.travis.yml
.circleci/
Makefile
# Data files (can be large)
data/
*.csv
*.json
*.xlsx
*.db
*.sqlite
# Temporary and backup files
*.bak
*.backup
*.orig
*.rej
~*
.#*
\#*#
# OS specific
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
Desktop.ini
# Node.js (if any frontend assets)
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Lock files (include them for reproducible builds)
# Uncomment if you want to exclude them
# uv.lock
# poetry.lock
# Pipfile.lock
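
To sanity-check which files actually enter the build context after these exclusions, one option is to build a tiny throwaway image that copies the context and lists it (a sketch; the `ctx-check` tag and the busybox base are arbitrary choices, not part of this commit):

```bash
# Build a throwaway image whose only content is the filtered build context
docker build -t ctx-check -f - . <<'EOF'
FROM busybox
COPY . /ctx
EOF
# List the top level of what survived .dockerignore
docker run --rm ctx-check ls /ctx
```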

36
.env.example Normal file

@ -0,0 +1,36 @@
# This is an example .env file for the Trading Agent project.
# Copy this file to .env and fill in your API keys and environment configurations.
# "NOTE: When using for `docker` command do not use quotes around the values, otherwise environment variables will not be set."
# API Keys
# Set your OpenAI API key, for OpenAI, Ollama or other OpenAI-compatible models
OPENAI_API_KEY=<your-openai-key>
# Set your Finnhub API key
FINNHUB_API_KEY=<your_finnhub_api_key_here>
# LLM Configuration for OpenAI
# Set LLM_PROVIDER to one of: openai, anthropic, google, openrouter, or ollama
LLM_PROVIDER=openai
# Set the API URL for the LLM backend
LLM_BACKEND_URL=https://api.openai.com/v1
# Uncomment for LLM Configuration for local ollama
#LLM_PROVIDER=ollama
## For Ollama running in the same container, /v1 added for OpenAI compatibility
#LLM_BACKEND_URL=http://localhost:11434/v1
# Set the name of the deep think model
LLM_DEEP_THINK_MODEL=llama3.2
# Set the name of the quick think model
LLM_QUICK_THINK_MODEL=qwen3
# Set the name of the embedding model
LLM_EMBEDDING_MODEL=nomic-embed-text
# Agent Configuration
# Maximum number of debate rounds for the agents to engage in; choose from 1, 3, or 5
MAX_DEBATE_ROUNDS=1
# Set to False if you want to disable tools that access the internet
ONLINE_TOOLS=True
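
The quoting caveat above matters because `docker run --env-file` (and Compose's `env_file:`) passes values through literally, so `LLM_PROVIDER="ollama"` would set the variable to `"ollama"` including the quotes. A quick way to inspect what a container actually receives (a throwaway `alpine` image is used purely for illustration):

```bash
# Print the LLM_* variables exactly as the container sees them
docker run --rm --env-file .env alpine env | grep '^LLM_'
```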

2
.gitattributes vendored Normal file

@ -0,0 +1,2 @@
init-ollama.sh text eol=lf
build.sh text eol=lf
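
Forcing LF endings on these two scripts keeps them runnable inside the Linux containers even when the repository is checked out on Windows, where Git might otherwise rewrite them with CRLF. You can confirm the attribute is in effect with:

```bash
git check-attr eol -- init-ollama.sh build.sh
```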

7
.gitignore vendored

@ -6,3 +6,10 @@ src/
eval_results/
eval_data/
*.egg-info/
.ollama/
ollama_data/
.local/
.cache/
.pytest_cache/
.devcontainer/
.env

506
Docker-readme.md Normal file

@ -0,0 +1,506 @@
# 🚀 Docker Setup for Trading Agents
This guide provides instructions for running the Trading Agents application within a secure and reproducible Docker environment. Using Docker simplifies setup, manages dependencies, and ensures a consistent experience across different machines.
The recommended method is using `docker-compose`, which handles the entire stack, including the Ollama server and model downloads.
## Prerequisites
Before you begin, ensure you have the following installed:
- [**Docker**](https://docs.docker.com/get-docker/)
- [**Docker Compose**](https://docs.docker.com/compose/install/) (usually included with Docker Desktop)
## 🤔 Which Option Should I Choose?
| Feature | OpenAI | Local Ollama |
| ------------------------- | ------------------------- | ----------------------------- |
| **Setup Time** | 2-5 minutes | 15-30 minutes |
| **Cost** | ~$0.01-0.05 per query | Free after setup |
| **Quality** | GPT-4o (excellent) | Depends on model |
| **Privacy** | Data sent to OpenAI | Fully private |
| **Internet Required** | Yes | No (after setup) |
| **Hardware Requirements** | None | 4GB+ RAM recommended |
| **Model Downloads** | None | Depends on model |
| **Best For** | Quick testing, production | Privacy-focused, cost control |
**💡 Recommendation**: Start with OpenAI for quick testing, then switch to Ollama for production if privacy/cost is important.
## ⚡ Quickstart
### Option A: Using OpenAI (Recommended for beginners)
```bash
# 1. Clone the repository
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
# 2. Create and configure environment file
cp .env.example .env
# Edit .env: Set LLM_PROVIDER=openai and add your OPENAI_API_KEY
# 3. Build and run with OpenAI
docker compose --profile openai build
docker compose --profile openai run -it app-openai
```
### Option B: Using Local Ollama (Free but requires more setup)
```bash
# 1. Clone the repository
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
# 2. Create environment file
cp .env.example .env
# Edit .env: Set LLM_PROVIDER=ollama
# 3. Start Ollama service
docker compose --profile ollama up -d --build
# 4. Initialize models (first time only)
# Linux/macOS:
./init-ollama.sh
# Windows Command Prompt:
init-ollama.bat
# 5. Run the command-line app
docker compose --profile ollama run -it app-ollama
```
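If you want to confirm the Ollama service is reachable before running the app, it exposes a model-listing endpoint on the host (port 11434, since the containers use host networking; this is the same endpoint the bundled test script probes):

```bash
# Should return a JSON document listing the pulled models
curl -s http://localhost:11434/api/tags
```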
## 🛠️ Build Methods
Choose your preferred build method:
### Method 1: Quick Build (Recommended)
```bash
# Standard Docker build
docker build -t trading-agents .
# Or with docker-compose
docker compose build
```
### Method 2: Optimized Build (Advanced)
For faster rebuilds with caching:
**Linux/macOS:**
```bash
# Build with BuildKit optimization
./build.sh
# With testing
./build.sh --test
# Clean cache and rebuild
./build.sh --clean --test
```
**Windows Command Prompt:**
```cmd
REM Build with BuildKit optimization
build.bat
REM With testing
build.bat --test
REM Clean cache and rebuild
build.bat --clean --test
```
**Benefits of Optimized Build:**
- ⚡ 60-90% faster rebuilds via BuildKit cache
- 🔄 Automatic fallback to simple build if needed
- 📊 Cache statistics and build info
- 🧪 Built-in testing capabilities
## Step-by-Step Instructions
### Step 1: Clone the Repository
```bash
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
```
### Step 2: Configure Your Environment (`.env` file)
The application is configured using an environment file. Create your own `.env` file by copying the provided template.
```bash
cp .env.example .env
```
#### Option A: OpenAI Configuration (Recommended)
Edit your `.env` file and set:
```env
# LLM Provider Configuration
LLM_PROVIDER=openai
LLM_BACKEND_URL=https://api.openai.com/v1
# API Keys
OPENAI_API_KEY=your-actual-openai-api-key-here
FINNHUB_API_KEY=your-finnhub-api-key-here
# Agent Configuration
MAX_DEBATE_ROUNDS=1
ONLINE_TOOLS=True
```
**Benefits of OpenAI:**
- ✅ No local setup required
- ✅ Higher quality responses (GPT-4o)
- ✅ Faster startup (no model downloads)
- ✅ No GPU/CPU requirements
- ❌ Requires API costs ($0.01-0.05 per query)
#### Option B: Local Ollama Configuration (Free)
Edit your `.env` file and set:
```env
# LLM Provider Configuration
LLM_PROVIDER=ollama
LLM_BACKEND_URL=http://ollama:11434/v1
# Local Models
LLM_DEEP_THINK_MODEL=llama3.2
LLM_QUICK_THINK_MODEL=qwen3
LLM_EMBEDDING_MODEL=nomic-embed-text
# API Keys (still need Finnhub for market data)
FINNHUB_API_KEY=your-finnhub-api-key-here
# Agent Configuration
MAX_DEBATE_ROUNDS=1
ONLINE_TOOLS=True
```
**Benefits of Ollama:**
- ✅ Completely free after setup
- ✅ Data privacy (runs locally)
- ✅ Works offline
- ❌ Requires initial setup and model downloads
- ❌ Slower responses than cloud APIs
### Step 3: Run with Docker Compose
Choose the appropriate method based on your LLM provider configuration:
#### Option A: Running with OpenAI
```bash
# Build the app container
docker compose --profile openai build
# Or use optimized build: ./build.sh
# Test OpenAI connection (optional)
docker compose --profile openai run --rm app-openai python tests/test_openai_connection.py
# Run the trading agents
docker compose --profile openai run -it app-openai
```
**No additional services needed** - the app connects directly to OpenAI's API.
#### Option B: Running with Ollama (CPU)
```bash
# Start the Ollama service
docker compose --profile ollama up -d --build
# Or use optimized build: ./build.sh
# Initialize Ollama models (first time only)
# Linux/macOS:
./init-ollama.sh
# Windows Command Prompt:
init-ollama.bat
# Test Ollama connection (optional)
docker compose --profile ollama exec app-ollama python tests/test_ollama_connection.py
# Run the trading agents
docker compose --profile ollama run -it app-ollama
```
#### Option C: Running with Ollama (GPU)
First, uncomment the GPU configuration in docker-compose.yml:
```yaml
# deploy:
# resources:
# reservations:
# devices:
# - capabilities: ["gpu"]
```
Then run:
```bash
# Start with GPU support
docker compose --profile ollama up -d --build
# Or use optimized build: ./build.sh
# Initialize Ollama models (first time only)
# Linux/macOS:
./init-ollama.sh
# Windows Command Prompt:
init-ollama.bat
# Run the trading agents
docker compose --profile ollama run -it app-ollama
```
#### View Logs
To view the application logs in real-time, you can run:
```bash
docker compose --profile ollama logs -f
```
#### Stop the Containers
To stop and remove the containers:
```bash
docker compose --profile ollama down
```
### Step 4: Verify Your Setup (Optional)
#### For OpenAI Setup:
```bash
# Test OpenAI API connection
docker compose --profile openai run --rm app-openai python tests/test_openai_connection.py
# Run a quick trading analysis test
docker compose --profile openai run --rm app-openai python tests/test_setup.py
# Run all tests automatically
docker compose --profile openai run --rm app-openai python tests/run_tests.py
```
#### For Ollama Setup:
```bash
# Test Ollama connection
docker compose --profile ollama exec app-ollama python tests/test_ollama_connection.py
# Run a quick trading analysis test
docker compose --profile ollama exec app-ollama python tests/test_setup.py
# Run all tests automatically
docker compose --profile ollama exec app-ollama python tests/run_tests.py
```
### Step 5: Model Management (Optional)
#### View and Manage Models
```bash
# List all available models
docker compose --profile ollama exec ollama ollama list
# Check model cache size
du -sh ./ollama_data
# Pull additional models (cached locally)
docker compose --profile ollama exec ollama ollama pull llama3.2
# Remove a model (frees up cache space)
docker compose --profile ollama exec ollama ollama rm model-name
```
#### Model Cache Benefits
- **Persistence**: Models downloaded once are reused across container restarts
- **Speed**: Subsequent startups are much faster (seconds vs minutes)
- **Bandwidth**: No need to re-download multi-GB models
- **Offline**: Once cached, models work without internet connection
#### Troubleshooting Cache Issues
```bash
# If models seem corrupted, clear cache and re-initialize
docker compose --profile ollama down
rm -rf ./ollama_data
docker compose --profile ollama up -d
# Linux/macOS:
./init-ollama.sh
# Windows Command Prompt:
init-ollama.bat
```
✅ **Expected Output:**
```
Testing Ollama connection:
Backend URL: http://localhost:11434/v1
Model: qwen3:0.6b
Embedding Model: nomic-embed-text
✅ Ollama API is responding
✅ Model 'qwen3:0.6b' is available
✅ OpenAI-compatible API is working
Response: ...
```
---
## Alternative Method: Using `docker` Only
If you prefer not to use `docker-compose`, you can build and run the container manually.
**1. Build the Docker Image:**
```bash
# Standard build
docker build -t trading-agents .
# Or optimized build (recommended)
# Linux/macOS:
./build.sh
# Windows Command Prompt:
build.bat
```
**2. Test the local Ollama setup (optional):**
Make sure you have a `.env` file configured as described in Step 2. If you are using `LLM_PROVIDER=ollama`, you can verify that the Ollama server is running correctly and has the necessary models.
```bash
docker run -it --network host --env-file .env trading-agents python tests/test_ollama_connection.py
```
The `--env-file .env` flag picks up environment settings from your `.env` file. Alternatively, you can pass values directly using:
```bash
docker run -it --network host \
-e LLM_PROVIDER="ollama" \
-e LLM_BACKEND_URL="http://localhost:11434/v1" \
-e LLM_DEEP_THINK_MODEL="qwen3:0.6b" \
-e LLM_EMBEDDING_MODEL="nomic-embed-text" \
trading-agents \
python tests/test_ollama_connection.py
```
To avoid re-downloading Ollama models, mount a folder from your host (the image's non-root user has its home at `/app`, so an in-container Ollama stores models under `/app/.ollama`) and run:
```bash
docker run -it --network host \
-e LLM_PROVIDER="ollama" \
-e LLM_BACKEND_URL="http://localhost:11434/v1" \
-e LLM_DEEP_THINK_MODEL="qwen3:0.6b" \
-e LLM_EMBEDDING_MODEL="nomic-embed-text" \
-v ./ollama_cache:/app/.ollama \
trading-agents \
python tests/test_ollama_connection.py
```
**3. Run the Docker Container:**
Make sure you have a `.env` file configured as described in Step 2.
```bash
docker run --rm -it \
--network host \
--env-file .env \
-v ./data:/app/data \
--name trading-agents \
trading-agents
```
**4. Run on a GPU machine:**
To run on a GPU machine, pass the `--gpus=all` flag to the `docker run` command:
```bash
docker run --rm -it \
--gpus=all \
--network host \
--env-file .env \
-v ./data:/app/data \
--name trading-agents \
trading-agents
```
## Configuration Details
### Test Suite Organization
All test scripts are organized in the `tests/` directory:
```
tests/
├── __init__.py # Python package initialization
├── run_tests.py # Automated test runner
├── test_openai_connection.py # OpenAI API connectivity tests
├── test_ollama_connection.py # Ollama connectivity tests
└── test_setup.py # General setup and configuration tests
```
**Automated Testing:**
```bash
# Run all tests automatically (detects provider) - from project root
python tests/run_tests.py
# Run specific test - from project root
python tests/test_openai_connection.py
python tests/test_ollama_connection.py
python tests/test_setup.py
```
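These scripts can also be run through the generic `default` profile container, which uses whatever provider your `.env` currently selects (the profile is defined in `docker-compose.yml`):

```bash
# Uses the .env settings as-is, without forcing a provider
docker compose --profile default run --rm app python tests/run_tests.py
```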
**⚠️ Important**: When running tests locally (outside Docker), always run from the **project root directory**, not from inside the `tests/` folder. The Docker commands automatically handle this.
### Live Reloading
The project directory is mounted as a volume into the container at `/app`. This means any changes you make to the source code on your local machine will be reflected instantly in the running container without needing to rebuild the image.
### Persistent Data & Model Caching
The following volumes are used to persist data between container runs:
- **`./data`**: Stores application data, trading reports, and cached market data
- **`./ollama_data`**: Caches downloaded Ollama models (typically 1-4GB per model)
#### Model Cache Management
The Ollama models are automatically cached in `./ollama_data/` on your host machine:
- **First run**: Models are downloaded automatically (may take 5-15 minutes depending on internet speed)
- **Subsequent runs**: Models are reused from cache, startup is much faster
- **Cache location**: `./ollama_data/` directory in your project folder
- **Cache size**: Typically 2-6GB total for the required models
```bash
# Check cache size
du -sh ./ollama_data
# Clean cache if needed (will require re-downloading models)
rm -rf ./ollama_data
# List cached models
docker compose --profile ollama exec ollama ollama list
```
### GPU troubleshooting
If the model runs very slowly on a GPU machine, make sure you have the latest GPU drivers installed and that the GPU is working with Docker. For example, for NVIDIA GPUs you can check by running:
```bash
docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
# or simply
nvidia-smi
```

65
Dockerfile Normal file

@ -0,0 +1,65 @@
# syntax=docker/dockerfile:1.4
# Build stage for dependencies
FROM python:3.9-slim-bookworm AS builder
# Set environment variables for build
ENV PYTHONDONTWRITEBYTECODE=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100
# Install build dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y --no-install-recommends \
        curl \
        git \
    && apt-get clean
# Create virtual environment
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --no-cache-dir -r requirements.txt
# Runtime stage
FROM python:3.9-slim-bookworm AS runtime
# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100
# Install runtime dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y --no-install-recommends \
        curl \
        git \
    && apt-get clean
# Copy virtual environment from builder stage
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Create a non-root user and group
RUN groupadd -r appuser && useradd -r -g appuser -s /bin/bash -d /app appuser
# Create app directory
WORKDIR /app
# Copy the application code
COPY . .
# Change ownership of the app directory to the non-root user
RUN chown -R appuser:appuser /app
# Switch to non-root user
USER appuser
# Default command (can be overridden, e.g., by pytest command in CI)
CMD ["python", "-m", "cli.main"]

README.md

@ -192,6 +192,10 @@ print(decision)
You can view the full list of configurations in `tradingagents/default_config.py`.
## Docker usage and local Ollama tests
See [Docker Readme](./Docker-readme.md) for details.
## Contributing
We welcome contributions from the community! Whether it's fixing a bug, improving documentation, or suggesting a new feature, your input helps make this project better. If you are interested in this line of research, please consider joining our open-source financial AI research community [Tauric Research](https://tauric.ai/).

176
build.bat Normal file

@ -0,0 +1,176 @@
@echo off
REM 🚀 Optimized BuildKit Docker Build Script for TradingAgents (Windows Batch)
REM This script uses Docker BuildKit for faster builds with advanced caching
setlocal EnableDelayedExpansion
REM Configuration
set "IMAGE_NAME=trading-agents"
set "CACHE_TAG=%IMAGE_NAME%:cache"
set "LATEST_TAG=%IMAGE_NAME%:latest"
set "REGISTRY="
set "TARGET=production"
set "CLEAN_CACHE="
set "RUN_TESTS="
set "SHOW_STATS="
set "SHOW_HELP="
REM Parse command line arguments
:parse_args
if "%~1"=="" goto end_parse
if /i "%~1"=="--clean" (
set "CLEAN_CACHE=1"
shift
goto parse_args
)
if /i "%~1"=="--test" (
set "RUN_TESTS=1"
shift
goto parse_args
)
if /i "%~1"=="--stats" (
set "SHOW_STATS=1"
shift
goto parse_args
)
if /i "%~1"=="--help" (
set "SHOW_HELP=1"
shift
goto parse_args
)
if /i "%~1"=="-h" (
set "SHOW_HELP=1"
shift
goto parse_args
)
echo [ERROR] Unknown option: %~1
exit /b 1
:end_parse
REM Show help if requested
if defined SHOW_HELP (
echo 🚀 TradingAgents Optimized Docker Build ^(Windows^)
echo Usage: build.bat [OPTIONS]
echo.
echo Options:
echo --clean Clean build cache before building
echo --test Run tests after building
echo --stats Show cache statistics after building
echo --help, -h Show this help message
echo.
echo Examples:
echo   build.bat                   # Build image
echo   build.bat --clean --test    # Clean cache, build, and test
exit /b 0
)
echo 🚀 TradingAgents Optimized Docker Build ^(Windows^)
echo =========================================
REM Check if BuildKit is available
echo [INFO] Checking BuildKit availability...
docker buildx version >nul 2>&1
if errorlevel 1 (
echo [ERROR] Docker BuildKit ^(buildx^) is not available
echo [ERROR] Please install Docker BuildKit or update Docker to a newer version
exit /b 1
)
echo [SUCCESS] BuildKit is available
REM Create buildx builder if it doesn't exist
echo [INFO] Setting up BuildKit builder...
docker buildx inspect trading-agents-builder >nul 2>&1
if errorlevel 1 (
echo [INFO] Creating new buildx builder 'trading-agents-builder'...
docker buildx create --name trading-agents-builder --driver docker-container --bootstrap
if errorlevel 1 (
echo [ERROR] Failed to create builder
exit /b 1
)
)
REM Use our builder
docker buildx use trading-agents-builder
if errorlevel 1 (
echo [ERROR] Failed to use builder
exit /b 1
)
echo [SUCCESS] Builder 'trading-agents-builder' is ready
REM Clean cache if requested
if defined CLEAN_CACHE (
echo [INFO] Cleaning build cache...
docker buildx prune -f
echo [SUCCESS] Build cache cleaned
)
REM Show build information
echo [INFO] Build Information:
echo 📦 Image: %LATEST_TAG%
echo 📊 Cache: Local BuildKit cache
echo 🔄 Multi-stage: Yes ^(builder → runtime^)
echo 🌐 Network: Host networking mode
REM Build the image
echo [INFO] Building image with BuildKit cache optimization...
REM Get build metadata
for /f "tokens=*" %%i in ('powershell -Command "(Get-Date).ToUniversalTime().ToString('yyyy-MM-ddTHH:mm:ssZ')"') do set "BUILD_DATE=%%i"
for /f "tokens=*" %%i in ('git rev-parse --short HEAD 2^>nul') do set "GIT_HASH=%%i"
if "!GIT_HASH!"=="" set "GIT_HASH=unknown"
REM Execute build
echo [INFO] Starting Docker build...
docker buildx build ^
--file Dockerfile ^
--tag %LATEST_TAG% ^
--cache-from type=local,src=C:\tmp\.buildx-cache ^
--cache-to type=local,dest=C:\tmp\.buildx-cache,mode=max ^
--label build.date=%BUILD_DATE% ^
--label build.version=%GIT_HASH% ^
--load ^
.
if errorlevel 1 (
echo [ERROR] ❌ Build failed!
exit /b 1
)
echo [SUCCESS] ✅ Build completed successfully!
REM Test the image if requested
if defined RUN_TESTS (
echo [INFO] Testing built image...
REM Basic functionality test
docker run --rm %LATEST_TAG% python -c "print('✅ Image test successful')"
if errorlevel 1 (
echo [ERROR] Image test failed
exit /b 1
)
echo [SUCCESS] Image test passed
REM Test import capabilities
docker run --rm %LATEST_TAG% python -c "from tradingagents.default_config import DEFAULT_CONFIG; print('✅ Import test successful')"
if errorlevel 1 (
echo [WARNING] Import test failed ^(this might be expected if dependencies are missing^)
) else (
echo [SUCCESS] Import test passed
)
)
REM Show cache statistics if requested
if defined SHOW_STATS (
echo [INFO] Cache Statistics:
docker buildx du 2>nul || echo Cache statistics not available
)
echo.
echo [SUCCESS] 🎉 Ready to use! Try:
echo docker run -it --network host %LATEST_TAG%
echo docker compose --profile openai run -it app-openai
echo docker compose --profile ollama up -d ^&^& docker compose --profile ollama exec app-ollama cmd
echo docker compose --profile default run -it app
exit /b 0

248
build.sh Normal file

@ -0,0 +1,248 @@
#!/bin/bash
# 🚀 Optimized BuildKit Docker Build Script for TradingAgents
# This script uses Docker BuildKit for faster builds with advanced caching
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
IMAGE_NAME="trading-agents"
CACHE_TAG="${IMAGE_NAME}:cache"
LATEST_TAG="${IMAGE_NAME}:latest"
REGISTRY="" # Set this if you want to push to a registry
# Function to print colored output
print_status() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Check if BuildKit is available
check_buildkit() {
print_status "Checking BuildKit availability..."
if ! docker buildx version > /dev/null 2>&1; then
print_error "Docker BuildKit (buildx) is not available"
print_error "Please install Docker BuildKit or update Docker to a newer version"
exit 1
fi
print_success "BuildKit is available"
}
# Create buildx builder if it doesn't exist
setup_builder() {
print_status "Setting up BuildKit builder..."
# Check if our builder exists
if ! docker buildx inspect trading-agents-builder > /dev/null 2>&1; then
print_status "Creating new buildx builder 'trading-agents-builder'..."
docker buildx create --name trading-agents-builder --driver docker-container --bootstrap
fi
# Use our builder
docker buildx use trading-agents-builder
print_success "Builder 'trading-agents-builder' is ready"
}
# Build with cache optimization
build_image() {
print_status "Building image with BuildKit cache optimization..."
# Build arguments
local build_args=(
--file Dockerfile
--tag "$LATEST_TAG"
--cache-from "type=local,src=/tmp/.buildx-cache"
--cache-to "type=local,dest=/tmp/.buildx-cache,mode=max"
--load # Load into local Docker daemon
)
# Add build metadata
build_args+=(
--label "build.date=$(date -u +'%Y-%m-%dT%H:%M:%SZ')"
--label "build.version=$(git rev-parse --short HEAD 2>/dev/null || echo 'unknown')"
--label "build.target=$target"
)
print_status "Build command: docker buildx build ${build_args[*]} ."
# Execute build
if docker buildx build "${build_args[@]}" .; then
print_success "Build completed successfully!"
return 0
else
print_error "Build failed!"
print_warning "Attempting fallback build with simple Dockerfile..."
build_simple_fallback
return $?
fi
}
# Fallback build function for when BuildKit fails
build_simple_fallback() {
print_status "Using simple Dockerfile as fallback..."
if [ -f "Dockerfile.simple" ]; then
if docker build -f Dockerfile.simple -t "$LATEST_TAG" .; then
print_success "Fallback build completed successfully!"
print_warning "Note: Using simple build without advanced caching"
return 0
else
print_error "Fallback build also failed!"
return 1
fi
else
print_error "Dockerfile.simple not found for fallback"
return 1
fi
}
# Show build info
show_build_info() {
print_status "Build Information:"
echo " 📦 Image: $LATEST_TAG"
echo " 🏗️ Builder: $(docker buildx inspect --bootstrap | grep "Name:" | head -1 | cut -d: -f2 | xargs)"
echo " 📊 Cache: Local BuildKit cache"
echo " 🔄 Multi-stage: Yes (builder → runtime)"
echo " 🌐 Network: Host networking mode"
}
# Test the built image
test_image() {
print_status "Testing built image..."
# Basic functionality test
if docker run --rm "$LATEST_TAG" python -c "print('✅ Image test successful')"; then
print_success "Image test passed"
else
print_error "Image test failed"
return 1
fi
# Test import capabilities
if docker run --rm "$LATEST_TAG" python -c "from tradingagents.default_config import DEFAULT_CONFIG; print('✅ Import test successful')"; then
print_success "Import test passed"
else
print_warning "Import test failed (this might be expected if dependencies are missing)"
fi
}
# Show cache statistics
show_cache_stats() {
print_status "Cache Statistics:"
# Show buildx disk usage
if docker buildx du > /dev/null 2>&1; then
docker buildx du
else
echo " Cache statistics not available"
fi
}
# Clean up build cache
clean_cache() {
print_status "Cleaning build cache..."
docker buildx prune -f
print_success "Build cache cleaned"
}
# Main function
main() {
echo "🚀 TradingAgents Optimized Docker Build"
echo "========================================"
# Parse arguments
local clean=false
local test=false
local stats=false
while [[ $# -gt 0 ]]; do
case $1 in
--clean)
clean=true
shift
;;
--test)
test=true
shift
;;
--stats)
stats=true
shift
;;
--help|-h)
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Options:"
echo " --clean Clean build cache before building"
echo " --test Run tests after building"
echo " --stats Show cache statistics after building"
echo " --help, -h Show this help message"
echo ""
echo "Examples:"
echo " $0 # Build image"
echo " $0 --clean --test # Clean cache, build, and test"
echo " $0 --stats # Build and show cache stats"
exit 0
;;
*)
print_error "Unknown option: $1"
exit 1
;;
esac
done
# Execute steps
check_buildkit
setup_builder
if [ "$clean" = true ]; then
clean_cache
fi
show_build_info
if build_image; then
print_success "✅ Build completed successfully!"
if [ "$test" = true ]; then
test_image
fi
if [ "$stats" = true ]; then
show_cache_stats
fi
echo ""
print_success "🎉 Ready to use! Try:"
echo " docker run -it --network host $LATEST_TAG"
echo " docker compose --profile openai run -it app-openai"
echo " docker compose --profile ollama up -d && docker compose --profile ollama exec app-ollama bash"
echo " docker compose --profile default run -it app"
else
print_error "❌ Build failed!"
exit 1
fi
}
# Run main function with all arguments
main "$@"

cli/utils.py

@ -151,6 +151,8 @@ def select_shallow_thinking_agent(provider) -> str:
    ],
    "ollama": [
        ("llama3.2 local", "llama3.2"),
        ("qwen3 small local", "qwen3:0.6b"),
        ("deepseek-r1 local", "deepseek-r1:1.5b"),
    ]
}
@ -211,7 +213,9 @@ def select_deep_thinking_agent(provider) -> str:
("Deepseek - latest iteration of the flagship chat model family from the DeepSeek team.", "deepseek/deepseek-chat-v3-0324:free"),
],
"ollama": [
("qwen3", "qwen3"),
("qwen3 local", "qwen3"),
("qwen3 small local", "qwen3:0.6b"),
("deepseek-r1 local", "deepseek-r1:1.5b"),
]
}

74
docker-compose.yml Normal file

@ -0,0 +1,74 @@
version: "3.8"
services:
# Ollama service for local LLM
ollama:
image: ollama/ollama:latest
container_name: ollama
network_mode: host
volumes:
- ./ollama_data:/root/.ollama
# Uncomment for GPU support
# deploy:
# resources:
# reservations:
# devices:
# - capabilities: ["gpu"]
profiles:
- ollama
# App container for Ollama setup
app-ollama:
build:
context: .
container_name: trading-agents-ollama
network_mode: host
volumes:
- .:/app
- ./data:/app/data
env_file:
- .env
environment:
- LLM_BACKEND_URL=http://localhost:11434/v1
- LLM_PROVIDER=ollama
depends_on:
- ollama
tty: true
stdin_open: true
profiles:
- ollama
# App container for OpenAI setup (no Ollama dependency)
app-openai:
build:
context: .
container_name: trading-agents-openai
network_mode: host
volumes:
- .:/app
- ./data:/app/data
env_file:
- .env
environment:
- LLM_PROVIDER=openai
- LLM_BACKEND_URL=https://api.openai.com/v1
tty: true
stdin_open: true
profiles:
- openai
# Generic app container (uses .env settings as-is)
app:
build:
context: .
container_name: trading-agents
network_mode: host
volumes:
- .:/app
- ./data:/app/data
env_file:
- .env
tty: true
stdin_open: true
profiles:
- default
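
Because every service is gated behind a profile, a bare `docker compose up` starts nothing; you can preview which services a given profile activates before starting it:

```bash
# List the services the ollama profile would start
docker compose --profile ollama config --services
```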

97
init-ollama.bat Normal file

@ -0,0 +1,97 @@
@echo off
setlocal enabledelayedexpansion
echo 🚀 Initializing Ollama models...
REM Define required models
set DEEP_THINK_MODEL=qwen3:0.6b
set EMBEDDING_MODEL=nomic-embed-text
set EXIT_CODE=0
REM Wait for Ollama to be ready
echo ⏳ Waiting for Ollama service to start...
set max_attempts=30
set attempt=0
:wait_loop
if %attempt% geq %max_attempts% goto timeout_error
docker compose --profile ollama exec ollama ollama list >nul 2>&1
if %errorlevel% equ 0 (
echo ✅ Ollama is ready!
goto ollama_ready
)
set /a attempt=%attempt%+1
echo Waiting for Ollama... (attempt %attempt%/%max_attempts%)
timeout /t 2 /nobreak >nul
goto wait_loop
:timeout_error
echo ❌ Error: Ollama failed to start within the expected time
exit /b 1
:ollama_ready
REM Check cache directory
if exist ".\ollama_data" (
echo 📁 Found existing ollama_data cache directory
for /f "tokens=*" %%a in ('dir ".\ollama_data" /s /-c ^| find "bytes"') do (
echo Cache directory exists
)
) else (
echo 📁 Creating ollama_data cache directory...
mkdir ".\ollama_data"
)
REM Get list of currently available models
echo 🔍 Checking for existing models...
docker compose --profile ollama exec ollama ollama list > temp_models.txt 2>nul
if %errorlevel% neq 0 (
type nul > temp_models.txt
)
REM Check if deep thinking model exists
findstr /c:"%DEEP_THINK_MODEL%" temp_models.txt >nul
if %errorlevel% equ 0 (
echo ✅ Deep thinking model '%DEEP_THINK_MODEL%' already available
) else (
echo 📥 Pulling deep thinking model: %DEEP_THINK_MODEL%...
docker compose --profile ollama exec ollama ollama pull %DEEP_THINK_MODEL%
if %errorlevel% equ 0 (
echo ✅ Model %DEEP_THINK_MODEL% pulled successfully
) else (
echo ❌ Failed to pull model %DEEP_THINK_MODEL%
set EXIT_CODE=1
goto cleanup
)
)
REM Check if embedding model exists
findstr /c:"%EMBEDDING_MODEL%" temp_models.txt >nul
if %errorlevel% equ 0 (
echo ✅ Embedding model '%EMBEDDING_MODEL%' already available
) else (
echo 📥 Pulling embedding model: %EMBEDDING_MODEL%...
docker compose --profile ollama exec ollama ollama pull %EMBEDDING_MODEL%
if %errorlevel% equ 0 (
echo ✅ Model %EMBEDDING_MODEL% pulled successfully
) else (
echo ❌ Failed to pull model %EMBEDDING_MODEL%
set EXIT_CODE=1
goto cleanup
)
)
REM List all available models
echo 📋 Available models:
docker compose --profile ollama exec ollama ollama list
REM Show cache info
if exist ".\ollama_data" (
echo 💾 Model cache directory: .\ollama_data
)
echo 🎉 Ollama initialization complete!
echo 💡 Tip: Models are cached in .\ollama_data and will be reused on subsequent runs
:cleanup
if exist temp_models.txt del temp_models.txt
exit /b %EXIT_CODE%

78
init-ollama.sh Normal file

@ -0,0 +1,78 @@
#!/bin/bash
set -e
echo "🚀 Initializing Ollama models..."
# Define required models
DEEP_THINK_MODEL="qwen3:0.6b"
EMBEDDING_MODEL="nomic-embed-text"
# Wait for Ollama to be ready
echo "⏳ Waiting for Ollama service to start..."
max_attempts=30
attempt=0
while [ $attempt -lt $max_attempts ]; do
if docker compose --profile ollama exec ollama ollama list > /dev/null 2>&1; then
echo "✅ Ollama is ready!"
break
fi
echo " Waiting for Ollama... (attempt $((attempt + 1))/$max_attempts)"
sleep 2
attempt=$((attempt + 1))
done
if [ $attempt -eq $max_attempts ]; then
echo "❌ Error: Ollama failed to start within the expected time"
exit 1
fi
# Check cache directory
if [ -d "./ollama_data" ]; then
echo "📁 Found existing ollama_data cache directory"
cache_size=$(du -sh ./ollama_data 2>/dev/null | cut -f1 || echo "0")
echo " Cache size: $cache_size"
else
echo "📁 Creating ollama_data cache directory..."
mkdir -p ./ollama_data
fi
# Get list of currently available models
echo "🔍 Checking for existing models..."
available_models=$(docker compose --profile ollama exec ollama ollama list 2>/dev/null | tail -n +2 | awk '{print $1}' || echo "")
# Function to check if model exists
model_exists() {
local model_name="$1"
echo "$available_models" | grep -q "^$model_name"
}
# Pull deep thinking model if not present
if model_exists "$DEEP_THINK_MODEL"; then
echo "✅ Deep thinking model '$DEEP_THINK_MODEL' already available"
else
echo "📥 Pulling deep thinking model: $DEEP_THINK_MODEL..."
docker compose --profile ollama exec ollama ollama pull "$DEEP_THINK_MODEL"
echo "✅ Model $DEEP_THINK_MODEL pulled successfully"
fi
# Pull embedding model if not present
if model_exists "$EMBEDDING_MODEL"; then
echo "✅ Embedding model '$EMBEDDING_MODEL' already available"
else
echo "📥 Pulling embedding model: $EMBEDDING_MODEL..."
docker compose --profile ollama exec ollama ollama pull "$EMBEDDING_MODEL"
echo "✅ Model $EMBEDDING_MODEL pulled successfully"
fi
# List all available models
echo "📋 Available models:"
docker compose --profile ollama exec ollama ollama list
# Show cache info
if [ -d "./ollama_data" ]; then
cache_size=$(du -sh ./ollama_data 2>/dev/null | cut -f1 || echo "unknown")
echo "💾 Model cache size: $cache_size (stored in ./ollama_data)"
fi
echo "🎉 Ollama initialization complete!"
echo "💡 Tip: Models are cached in ./ollama_data and will be reused on subsequent runs"

55
main.py

@ -1,21 +1,46 @@
import os
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG
from dotenv import load_dotenv

def run_analysis(config_overrides=None):
    """
    Initializes and runs a trading cycle with configurable overrides.
    """
    load_dotenv()  # Load .env file variables
    config = DEFAULT_CONFIG.copy()
    # Override with environment variables if set
    config["llm_provider"] = os.environ.get("LLM_PROVIDER", config.get("llm_provider", "google"))
    config["backend_url"] = os.environ.get("LLM_BACKEND_URL", config.get("backend_url", "https://generativelanguage.googleapis.com/v1"))
    config["deep_think_llm"] = os.environ.get("LLM_DEEP_THINK_MODEL", config.get("deep_think_llm", "gemini-2.0-flash"))
    config["quick_think_llm"] = os.environ.get("LLM_QUICK_THINK_MODEL", config.get("quick_think_llm", "gemini-2.0-flash"))
    config["max_debate_rounds"] = int(os.environ.get("MAX_DEBATE_ROUNDS", config.get("max_debate_rounds", 1)))
    config["online_tools"] = os.environ.get("ONLINE_TOOLS", str(config.get("online_tools", True))).lower() == 'true'
    # Apply overrides from function argument
    if config_overrides:
        config.update(config_overrides)
    print("Using configuration:")
    for key, value in config.items():
        print(f"{key}: {value}")
    # Initialize with the final config
    ta = TradingAgentsGraph(debug=True, config=config)
    # Forward propagate
    _, decision = ta.propagate("NVDA", "2024-05-10")
    return decision

if __name__ == "__main__":
    # Example of running the trading analysis
    # You can override specific configurations here if needed, e.g.:
    # decision = run_analysis(config_overrides={"max_debate_rounds": 2})
    decision = run_analysis()
    print(decision)
    # Memorize mistakes and reflect
    # ta.reflect_and_remember(1000)  # parameter is the position returns
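
With the environment-driven configuration above, a one-off run against a local Ollama server looks like this (illustrative values; any of the variables can be omitted to fall back to the defaults):

```bash
LLM_PROVIDER=ollama \
LLM_BACKEND_URL=http://localhost:11434/v1 \
LLM_DEEP_THINK_MODEL=qwen3:0.6b \
LLM_QUICK_THINK_MODEL=qwen3:0.6b \
python main.py
```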

requirements.txt

@ -1,6 +1,8 @@
typing-extensions
langchain-openai
langchain-experimental
langchain_anthropic
langchain_google_genai
pandas
yfinance
praw
@ -22,5 +24,6 @@ redis
chainlit
rich
questionary
ollama
pytest
python-dotenv

185
tests/README.md Normal file

@ -0,0 +1,185 @@
# TradingAgents Test Suite
This directory contains all test scripts for validating the TradingAgents setup and configuration.
## Test Scripts
### 🧪 `run_tests.py` - Automated Test Runner
**Purpose**: Automatically detects your LLM provider and runs appropriate tests.
**Usage**:
```bash
# Run all tests (auto-detects provider from LLM_PROVIDER env var)
# Always run from project root, not from tests/ directory
python tests/run_tests.py
# In Docker
docker compose --profile openai run --rm app-openai python tests/run_tests.py
docker compose --profile ollama exec app-ollama python tests/run_tests.py
```
**Important**: Always run the test runner from the **project root directory**, not from inside the `tests/` directory. The runner automatically handles path resolution and changes to the correct working directory.
**Features**:
- Auto-detects LLM provider from environment
- Runs provider-specific tests only
- Provides comprehensive test summary
- Handles timeouts and error reporting
---
### 🔌 `test_openai_connection.py` - OpenAI API Tests
**Purpose**: Validates OpenAI API connectivity and functionality.
**Tests**:
- ✅ API key validation
- ✅ Chat completion (using `gpt-4o-mini`)
- ✅ Embeddings (using `text-embedding-3-small`)
- ✅ Configuration validation
**Usage**:
```bash
# From project root
python tests/test_openai_connection.py
# In Docker
docker compose --profile openai run --rm app-openai python tests/test_openai_connection.py
```
**Requirements**:
- `OPENAI_API_KEY` environment variable
- `LLM_PROVIDER=openai`
---
### 🦙 `test_ollama_connection.py` - Ollama Connectivity Tests
**Purpose**: Validates Ollama server connectivity and model availability.
**Tests**:
- ✅ Ollama API accessibility
- ✅ Model availability (`qwen3:0.6b`, `nomic-embed-text`)
- ✅ OpenAI-compatible API functionality
- ✅ Chat completion and embeddings
**Usage**:
```bash
# From project root
python tests/test_ollama_connection.py
# In Docker
docker compose --profile ollama exec app-ollama python tests/test_ollama_connection.py
```
**Requirements**:
- Ollama server running
- Required models downloaded
- `LLM_PROVIDER=ollama`
---
### ⚙️ `test_setup.py` - General Setup Validation
**Purpose**: Validates basic TradingAgents setup and configuration.
**Tests**:
- ✅ Python package imports
- ✅ Configuration loading
- ✅ TradingAgentsGraph initialization
- ✅ Data access capabilities
**Usage**:
```bash
# From project root
python tests/test_setup.py
# In Docker
docker compose --profile openai run --rm app-openai python tests/test_setup.py
docker compose --profile ollama exec app-ollama python tests/test_setup.py
```
**Requirements**:
- TradingAgents dependencies installed
- Basic environment configuration
---
## Test Results Interpretation
### ✅ Success Indicators
- All tests pass
- API connections established
- Models available and responding
- Configuration properly loaded
### ❌ Common Issues
**OpenAI Tests Failing**:
- Check `OPENAI_API_KEY` is set correctly
- Verify API key has sufficient quota
- Ensure internet connectivity
**Ollama Tests Failing**:
- Verify Ollama service is running
- Check if models are downloaded (`./init-ollama.sh`)
- Confirm `ollama list` shows required models
**Setup Tests Failing**:
- Check Python dependencies are installed
- Verify environment variables are set
- Ensure `.env` file is properly configured
---
## Quick Testing Commands
**⚠️ Important**: Always run these commands from the **project root directory** (not from inside `tests/`):
```bash
# Test everything automatically (from project root)
python tests/run_tests.py
# Test specific provider (from project root)
LLM_PROVIDER=openai python tests/run_tests.py
LLM_PROVIDER=ollama python tests/run_tests.py
# Test individual components (from project root)
python tests/test_openai_connection.py
python tests/test_ollama_connection.py
python tests/test_setup.py
```
**Why from project root?**
- Tests need to import the `tradingagents` package
- The `tradingagents` package is located in the project root
- Running from `tests/` directory would cause import errors
---
## Adding New Tests
To add new tests:
1. Create new test script in `tests/` directory
2. Follow the naming convention: `test_<component>.py`
3. Include proper error handling and status reporting
4. Update `run_tests.py` if automatic detection is needed
5. Document the test in this README
**Test Script Template**:
```python
#!/usr/bin/env python3
"""Test script for <component>"""
def test_component():
"""Test <component> functionality."""
try:
# Test implementation
print("✅ Test passed")
return True
except Exception as e:
print(f"❌ Test failed: {e}")
return False
if __name__ == "__main__":
success = test_component()
exit(0 if success else 1)
```

10
tests/__init__.py Normal file

@ -0,0 +1,10 @@
"""
TradingAgents Test Suite
This package contains all test scripts for the TradingAgents application:
- test_openai_connection.py: OpenAI API connectivity tests
- test_ollama_connection.py: Ollama connectivity tests
- test_setup.py: General setup and configuration tests
"""
__version__ = "1.0.0"

101
tests/run_tests.py Normal file

@ -0,0 +1,101 @@
#!/usr/bin/env python3
"""
Test runner script for TradingAgents
This script automatically detects the LLM provider and runs appropriate tests.
"""
import os
import sys
import subprocess

def get_llm_provider():
    """Get the configured LLM provider from environment."""
    return os.environ.get("LLM_PROVIDER", "").lower()

def run_test_script(script_name):
    """Run a test script and return success status."""
    try:
        print(f"🧪 Running {script_name}...")
        result = subprocess.run([sys.executable, script_name],
                                capture_output=True, text=True, timeout=120)
        if result.returncode == 0:
            print(f"{script_name} passed")
            if result.stdout:
                print(f"   Output: {result.stdout.strip()}")
            return True
        else:
            print(f"{script_name} failed")
            if result.stderr:
                print(f"   Error: {result.stderr.strip()}")
            return False
    except subprocess.TimeoutExpired:
        print(f"{script_name} timed out")
        return False
    except Exception as e:
        print(f"💥 {script_name} crashed: {e}")
        return False

def main():
    """Main test runner function."""
    print("🚀 TradingAgents Test Runner")
    print("=" * 50)
    # Get project root directory (parent of tests directory)
    tests_dir = os.path.dirname(os.path.abspath(__file__))
    project_root = os.path.dirname(tests_dir)
    os.chdir(project_root)
    provider = get_llm_provider()
    print(f"📋 Detected LLM Provider: {provider or 'not set'}")
    tests_run = []
    tests_passed = []
    # Always run setup tests
    if run_test_script("tests/test_setup.py"):
        tests_passed.append("tests/test_setup.py")
    tests_run.append("tests/test_setup.py")
    # Run provider-specific tests
    if provider == "openai":
        print("\n🔍 Running OpenAI-specific tests...")
        if run_test_script("tests/test_openai_connection.py"):
            tests_passed.append("tests/test_openai_connection.py")
        tests_run.append("tests/test_openai_connection.py")
    elif provider == "ollama":
        print("\n🔍 Running Ollama-specific tests...")
        if run_test_script("tests/test_ollama_connection.py"):
            tests_passed.append("tests/test_ollama_connection.py")
        tests_run.append("tests/test_ollama_connection.py")
    else:
        print(f"\n⚠️ Unknown or unset LLM provider: '{provider}'")
        print("   Running all connectivity tests...")
        for test_script in ["tests/test_openai_connection.py", "tests/test_ollama_connection.py"]:
            if run_test_script(test_script):
                tests_passed.append(test_script)
            tests_run.append(test_script)
    # Summary
    print("\n" + "=" * 50)
    print(f"📊 Test Results: {len(tests_passed)}/{len(tests_run)} tests passed")
    for test in tests_run:
        status = "✅ PASS" if test in tests_passed else "❌ FAIL"
        print(f"   {test}: {status}")
    if len(tests_passed) == len(tests_run):
        print("\n🎉 All tests passed! TradingAgents is ready to use.")
        return 0
    else:
        print(f"\n⚠️ {len(tests_run) - len(tests_passed)} test(s) failed. Check configuration.")
        return 1

if __name__ == "__main__":
    exit_code = main()
    sys.exit(exit_code)

tests/test_ollama_connection.py

@ -0,0 +1,108 @@
#!/usr/bin/env python3
"""
Simple test script to verify Ollama connection is working.
"""
import os
import requests
from openai import OpenAI

def test_ollama_connection():
    """Test if Ollama is accessible and responding."""
    # Get configuration from environment
    backend_url = os.environ.get("LLM_BACKEND_URL", "http://localhost:11434/v1")
    model = os.environ.get("LLM_DEEP_THINK_MODEL", "qwen3:0.6b")
    embedding_model = os.environ.get("LLM_EMBEDDING_MODEL", "nomic-embed-text")
    print("Testing Ollama connection:")
    print(f"  Backend URL: {backend_url}")
    print(f"  Model: {model}")
    print(f"  Embedding Model: {embedding_model}")
    # Test 1: Check if Ollama API is responding
    try:
        response = requests.get(f"{backend_url.replace('/v1', '')}/api/tags", timeout=10)
        if response.status_code == 200:
            print("✅ Ollama API is responding")
        else:
            print(f"❌ Ollama API returned status code: {response.status_code}")
            return False
    except Exception as e:
        print(f"❌ Failed to connect to Ollama API: {e}")
        return False
    # Test 2: Check if the model is available
    try:
        response = requests.get(f"{backend_url.replace('/v1', '')}/api/tags", timeout=10)
        models = response.json().get("models", [])
        model_names = [m.get("name", "") for m in models]
        if any(name.startswith(model) for name in model_names):
            print(f"✅ Model '{model}' is available")
        else:
            print(f"❌ Model '{model}' not found. Available models: {model_names}")
            return False
    except Exception as e:
        print(f"❌ Failed to check model availability: {e}")
        return False
    # Test 3: Test OpenAI-compatible API
    try:
        client = OpenAI(base_url=backend_url, api_key="dummy")
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Hello, say 'test successful'"}],
            max_tokens=50
        )
        print("✅ OpenAI-compatible API is working")
        print(f"  Response: {response.choices[0].message.content}")
    except Exception as e:
        print(f"❌ OpenAI-compatible API test failed: {e}")
        return False
    # Test 4: Check if the embedding model is available
    try:
        response = requests.get(f"{backend_url.replace('/v1', '')}/api/tags", timeout=10)
        models = response.json().get("models", [])
        model_names = [m.get("name") for m in models if m.get("name")]
        # Check if any of the available models starts with the embedding model name
        if any(name.startswith(embedding_model) for name in model_names):
            print(f"✅ Embedding Model '{embedding_model}' is available")
        else:
            print(f"❌ Embedding Model '{embedding_model}' not found. Available models: {model_names}")
            return False
    except Exception as e:
        print(f"❌ Failed to check embedding model availability: {e}")
        return False
    # Test 5: Test OpenAI-compatible embedding API
    try:
        client = OpenAI(base_url=backend_url, api_key="dummy")
        response = client.embeddings.create(
            model=embedding_model,
            input="This is a test sentence.",
            encoding_format="float"
        )
        if response.data and len(response.data) > 0 and response.data[0].embedding:
            print("✅ OpenAI-compatible embedding API is working")
            print(f"  Successfully generated embedding of dimension: {len(response.data[0].embedding)}")
            return True
        else:
            print("❌ Embedding API test failed: No embedding data in response")
            return False
    except Exception as e:
        print(f"❌ OpenAI-compatible embedding API test failed: {e}")
        return False

if __name__ == "__main__":
    success = test_ollama_connection()
    if success:
        print("\n🎉 All tests passed! Ollama is ready.")
        exit(0)
    else:
        print("\n💥 Tests failed! Check Ollama configuration.")
        exit(1)

tests/test_openai_connection.py

@ -0,0 +1,142 @@
#!/usr/bin/env python3
"""
Test script to verify OpenAI API connection is working.
"""
import os
import sys
from openai import OpenAI

def test_openai_connection():
    """Test if OpenAI API is accessible and responding."""
    # Get configuration from environment
    api_key = os.environ.get("OPENAI_API_KEY")
    backend_url = os.environ.get("LLM_BACKEND_URL", "https://api.openai.com/v1")
    provider = os.environ.get("LLM_PROVIDER", "openai")
    print("Testing OpenAI API connection:")
    print(f"  Provider: {provider}")
    print(f"  Backend URL: {backend_url}")
    print(f"  API Key: {'✅ Set' if api_key and api_key != '<your-openai-key>' else '❌ Not set or using placeholder'}")
    if not api_key or api_key == "<your-openai-key>":
        print("❌ OPENAI_API_KEY is not set or still using placeholder value")
        print("   Please set your OpenAI API key in the .env file")
        return False
    # Test 1: Initialize OpenAI client
    try:
        client = OpenAI(
            api_key=api_key,
            base_url=backend_url
        )
        print("✅ OpenAI client initialized successfully")
    except Exception as e:
        print(f"❌ Failed to initialize OpenAI client: {e}")
        return False
    # Test 2: Test chat completion with a simple query
    try:
        print("🧪 Testing chat completion...")
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # Use the most cost-effective model for testing
            messages=[
                {"role": "user", "content": "Hello! Please respond with exactly: 'OpenAI API test successful'"}
            ],
            max_tokens=50,
            temperature=0
        )
        if response.choices and response.choices[0].message.content:
            content = response.choices[0].message.content.strip()
            print("✅ Chat completion successful")
            print(f"  Model: {response.model}")
            print(f"  Response: {content}")
            print(f"  Tokens used: {response.usage.total_tokens if response.usage else 'unknown'}")
        else:
            print("❌ Chat completion returned empty response")
            return False
    except Exception as e:
        print(f"❌ Chat completion test failed: {e}")
        if "insufficient_quota" in str(e).lower():
            print("   💡 This might be a quota/billing issue. Check your OpenAI account.")
        elif "invalid_api_key" in str(e).lower():
            print("   💡 Invalid API key. Please check your OPENAI_API_KEY.")
        return False
    # Test 3: Test embeddings (optional, for completeness)
    try:
        print("🧪 Testing embeddings...")
        response = client.embeddings.create(
            model="text-embedding-3-small",  # Cost-effective embedding model
            input="This is a test sentence for embeddings."
        )
        if response.data and len(response.data) > 0 and response.data[0].embedding:
            embedding = response.data[0].embedding
            print("✅ Embeddings successful")
            print(f"  Model: {response.model}")
            print(f"  Embedding dimension: {len(embedding)}")
            print(f"  Tokens used: {response.usage.total_tokens if response.usage else 'unknown'}")
        else:
            print("❌ Embeddings returned empty response")
            return False
    except Exception as e:
        print(f"❌ Embeddings test failed: {e}")
        print("   ⚠️ Embeddings test failed but chat completion worked. This is usually fine for basic usage.")
        # Don't return False here as embeddings might not be critical for all use cases
    return True

def test_config_validation():
    """Validate the configuration is properly set for OpenAI."""
    provider = os.environ.get("LLM_PROVIDER", "").lower()
    backend_url = os.environ.get("LLM_BACKEND_URL", "")
    print("\n🔧 Configuration validation:")
    if provider != "openai":
        print(f"⚠️ LLM_PROVIDER is '{provider}', expected 'openai'")
        print("   The app might still work if the provider supports OpenAI-compatible API")
    else:
        print("✅ LLM_PROVIDER correctly set to 'openai'")
    if "openai.com" in backend_url:
        print("✅ Using official OpenAI API endpoint")
    elif backend_url:
        print(f"  Using custom endpoint: {backend_url}")
        print("  Make sure this endpoint is OpenAI-compatible")
    else:
        print("⚠️ LLM_BACKEND_URL not set, using default")
    # Check for common environment issues
    finnhub_key = os.environ.get("FINNHUB_API_KEY")
    if not finnhub_key or finnhub_key == "<your_finnhub_api_key_here>":
        print("⚠️ FINNHUB_API_KEY not set - financial data fetching may not work")
    else:
        print("✅ FINNHUB_API_KEY is set")
    return True

if __name__ == "__main__":
    print("🧪 OpenAI API Connection Test\n")
    config_ok = test_config_validation()
    api_ok = test_openai_connection()
    print("\n📊 Test Results:")
    print(f"  Configuration: {'✅ OK' if config_ok else '❌ Issues'}")
    print(f"  API Connection: {'✅ OK' if api_ok else '❌ Failed'}")
    if config_ok and api_ok:
        print("\n🎉 All tests passed! OpenAI API is ready for TradingAgents.")
        print("💡 You can now run the trading agents with OpenAI as the LLM provider.")
    else:
        print("\n💥 Some tests failed. Please check your configuration and API key.")
        print("💡 Make sure OPENAI_API_KEY is set correctly in your .env file.")
    sys.exit(0 if (config_ok and api_ok) else 1)

122
tests/test_setup.py Normal file

@ -0,0 +1,122 @@
#!/usr/bin/env python3
"""
Test script to verify the complete TradingAgents setup works end-to-end.
"""
import os
import sys
from datetime import datetime, timedelta

def test_basic_setup():
    """Test basic imports and configuration"""
    try:
        from tradingagents.graph.trading_graph import TradingAgentsGraph
        from tradingagents.default_config import DEFAULT_CONFIG
        print("✅ Basic imports successful")
        return True
    except Exception as e:
        print(f"❌ Basic import failed: {e}")
        return False

def test_config():
    """Test configuration loading"""
    try:
        from tradingagents.default_config import DEFAULT_CONFIG
        # Check required environment variables
        required_vars = ['LLM_PROVIDER', 'OPENAI_API_KEY', 'FINNHUB_API_KEY']
        missing_vars = []
        for var in required_vars:
            if not os.environ.get(var):
                missing_vars.append(var)
        if missing_vars:
            print(f"⚠️ Missing environment variables: {missing_vars}")
            print("   This may cause issues with data fetching or LLM calls")
        else:
            print("✅ Required environment variables set")
        print("✅ Configuration loaded successfully")
        print(f"  LLM Provider: {os.environ.get('LLM_PROVIDER', 'not set')}")
        print(f"  OPENAI API KEY: {os.environ.get('OPENAI_API_KEY', 'not set')}")
        print(f"  Backend URL: {os.environ.get('LLM_BACKEND_URL', 'not set')}")
        return True
    except Exception as e:
        print(f"❌ Configuration test failed: {e}")
        return False

def test_trading_graph_init():
    """Test TradingAgentsGraph initialization"""
    try:
        from tradingagents.graph.trading_graph import TradingAgentsGraph
        from tradingagents.default_config import DEFAULT_CONFIG
        # Create a minimal config for testing
        config = DEFAULT_CONFIG.copy()
        config["online_tools"] = False  # Use cached data for testing
        config["max_debate_rounds"] = 1  # Minimize API calls
        ta = TradingAgentsGraph(debug=True, config=config)
        print("✅ TradingAgentsGraph initialized successfully")
        return True
    except Exception as e:
        print(f"❌ TradingAgentsGraph initialization failed: {e}")
        return False

def test_data_access():
    """Test if we can access basic data"""
    try:
        from tradingagents.dataflows.yfin_utils import get_stock_data
        # Test with a simple stock query
        test_date = (datetime.now() - timedelta(days=30)).strftime('%Y-%m-%d')
        # This should work even without API keys if using cached data
        data = get_stock_data("AAPL", test_date)
        if data:
            print("✅ Data access test successful")
            return True
        else:
            print("⚠️ Data access returned empty results (may be expected with cached data)")
            return True
    except Exception as e:
        print(f"❌ Data access test failed: {e}")
        return False

def run_all_tests():
    """Run all tests"""
    print("🧪 Running TradingAgents setup tests...\n")
    tests = [
        ("Basic Setup", test_basic_setup),
        ("Configuration", test_config),
        ("TradingGraph Init", test_trading_graph_init),
        ("Data Access", test_data_access),
    ]
    passed = 0
    total = len(tests)
    for test_name, test_func in tests:
        print(f"Running {test_name} test...")
        try:
            if test_func():
                passed += 1
            print()
        except Exception as e:
            print(f"{test_name} test crashed: {e}\n")
    print(f"📊 Test Results: {passed}/{total} tests passed")
    if passed == total:
        print("🎉 All tests passed! TradingAgents setup is working correctly.")
        return True
    else:
        print("⚠️ Some tests failed. Check the output above for details.")
        return False

if __name__ == "__main__":
    success = run_all_tests()
    sys.exit(0 if success else 1)

tradingagents/agents/utils/memory.py

@ -7,6 +7,7 @@ class FinancialSituationMemory:
def __init__(self, name, config):
if config["backend_url"] == "http://localhost:11434/v1":
self.embedding = "nomic-embed-text"
self.client = OpenAI(base_url=config["backend_url"])
else:
self.embedding = "text-embedding-3-small"
self.client = OpenAI()
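
The `backend_url` check above is what routes embedding calls to a local Ollama server through its OpenAI-compatible endpoint; you can exercise that endpoint directly to see the payload shape (assumes Ollama is running locally with `nomic-embed-text` pulled):

```bash
curl -s http://localhost:11434/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "nomic-embed-text", "input": "This is a test sentence."}'
```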

tradingagents/default_config.py

@ -2,7 +2,10 @@ import os
DEFAULT_CONFIG = {
    "project_dir": os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
    "data_dir": os.path.join(
        os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
        "data",
    ),
    "data_cache_dir": os.path.join(
        os.path.abspath(os.path.join(os.path.dirname(__file__), ".")),
        "dataflows/data_cache",