Ben Vargas dd96f51179
feat: Add gemini-cli provider integration for Task Master (#897)
* feat: Add gemini-cli provider integration for Task Master

This commit adds comprehensive support for the Gemini CLI provider, enabling users
to leverage Google's Gemini models through OAuth authentication via the gemini CLI
tool. This integration provides a seamless experience for users who prefer using
their existing Google account authentication rather than managing API keys.

## Implementation Details

### Provider Class (`src/ai-providers/gemini-cli.js`)
- Created GeminiCliProvider extending BaseAIProvider
- Implements dual authentication support:
  - Primary: OAuth authentication via `gemini auth login` (authType: 'oauth-personal')
  - Secondary: API key authentication for compatibility (authType: 'api-key')
- Uses the npm package `ai-sdk-provider-gemini-cli` (v0.0.3) for SDK integration
- Properly handles authentication validation without console output
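
For illustration, a minimal sketch of the dual-auth selection, assuming `createGeminiProvider` is the factory exported by `ai-sdk-provider-gemini-cli` (base-class hook names are simplified):

```js
// Sketch only: the real provider loads the SDK lazily (see follow-up commit).
import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli';
import { BaseAIProvider } from './base-provider.js';

export class GeminiCliProvider extends BaseAIProvider {
	getClient(params = {}) {
		// Prefer OAuth credentials from `gemini auth login`; fall back to an
		// API key only when one is explicitly supplied.
		return params.apiKey
			? createGeminiProvider({ authType: 'api-key', apiKey: params.apiKey })
			: createGeminiProvider({ authType: 'oauth-personal' });
	}
}
```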

### Model Configuration (`scripts/modules/supported-models.json`)
- Added two Gemini models with accurate specifications:
  - gemini-2.5-pro: 72% SWE score, 65,536 max output tokens
  - gemini-2.5-flash: 71% SWE score, 65,536 max output tokens
- Both models support main, fallback, and research roles
- Configured with zero cost (free tier)
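
The entries take roughly this shape, inferred from the fields the models module reads (`swe_score`, `cost_per_1m_tokens`, `max_tokens`, `allowed_roles`); the exact file layout may differ:

```json
{
	"gemini-cli": [
		{
			"id": "gemini-2.5-pro",
			"swe_score": 0.72,
			"cost_per_1m_tokens": { "input": 0, "output": 0 },
			"allowed_roles": ["main", "fallback", "research"],
			"max_tokens": 65536
		},
		{
			"id": "gemini-2.5-flash",
			"swe_score": 0.71,
			"cost_per_1m_tokens": { "input": 0, "output": 0 },
			"allowed_roles": ["main", "fallback", "research"],
			"max_tokens": 65536
		}
	]
}
```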

### System Integration
- Registered provider in PROVIDERS map (`scripts/modules/ai-services-unified.js`)
- Added to OPTIONAL_AUTH_PROVIDERS set for flexible authentication
- Added GEMINI_CLI constant to provider constants (`src/constants/providers.js`)
- Exported GeminiCliProvider from index (`src/ai-providers/index.js`)
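
A sketch of the registration step (surrounding entries elided; the exact shape of the PROVIDERS map is assumed):

```js
// scripts/modules/ai-services-unified.js (excerpt, illustrative)
import { GeminiCliProvider } from '../../src/ai-providers/index.js';

const PROVIDERS = {
	// ...existing providers...
	'gemini-cli': new GeminiCliProvider()
};
```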

### Command Line Support (`scripts/modules/commands.js`)
- Added --gemini-cli flag to models command for provider hint
- Integrated into model selection logic (setModel function)
- Updated error messages to include gemini-cli in provider list
- Removed unrelated azure/vertex changes to maintain PR focus
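
Roughly how the flag is wired, assuming the models command is registered via commander (handler details are illustrative):

```js
// Illustrative sketch, not the exact commands.js code.
program
	.command('models')
	.option('--gemini-cli', 'Treat the given model ID as a gemini-cli model')
	.action(async (opts) => {
		// The flag becomes a provider hint that setModel() resolves against
		// the gemini-cli entries in supported-models.json.
		const providerHint = opts.geminiCli ? 'gemini-cli' : undefined;
		// ...e.g. setModel('main', opts.setMain, { providerHint, projectRoot });
	});
```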

### Documentation (`docs/providers/gemini-cli.md`)
- Comprehensive provider documentation emphasizing OAuth-first approach
- Clear explanation of why users would choose gemini-cli over standard google provider
- Detailed installation, authentication, and configuration instructions
- Troubleshooting section with common issues and solutions

### Testing (`tests/unit/ai-providers/gemini-cli.test.js`)
- Complete test suite with 12 tests covering all functionality
- Tests for both OAuth and API key authentication paths
- Error handling and edge case coverage
- Updated mocks in ai-services-unified.test.js for integration testing

## Key Design Decisions

1. **OAuth-First Design**: The provider assumes users want to leverage their existing
   `gemini auth login` credentials, making this the default authentication method.

2. **Authentication Type Mapping**: Discovered through testing that the SDK expects:
   - 'oauth-personal' for OAuth/CLI authentication (not 'gemini-cli' or 'oauth')
   - 'api-key' for API key authentication (not 'gemini-api-key')

3. **Silent Operation**: Removed console.log statements from validateAuth to match
   the pattern used by other providers like claude-code.

4. **Limited Model Support**: Only gemini-2.5-pro and gemini-2.5-flash are available
   through the CLI, as confirmed by the package author.

## Usage

```bash
# Install gemini CLI globally
npm install -g @google/gemini-cli

# Authenticate with Google account
gemini auth login

# Configure Task Master to use gemini-cli
task-master models --set-main gemini-2.5-pro --gemini-cli

# Use Task Master normally
task-master new "Create a REST API endpoint"
```

## Dependencies
- Added `ai-sdk-provider-gemini-cli@^0.0.3` to package.json
- This package wraps the Google Gemini CLI Core functionality for the Vercel AI SDK
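
An approximate package.json excerpt (the lazy-loading follow-up below moves the entry to `optionalDependencies`):

```json
{
	"dependencies": {
		"ai-sdk-provider-gemini-cli": "^0.0.3"
	}
}
```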

## Testing
All tests pass (613 total), including the new gemini-cli provider tests.
Code has been formatted with biome to maintain consistency.

This implementation provides a clean, well-tested integration that follows Task Master's
existing patterns while offering users a convenient way to use Gemini models with their
existing Google authentication.

* feat: implement lazy loading for gemini-cli provider

- Move ai-sdk-provider-gemini-cli to optionalDependencies
- Implement dynamic import with loadGeminiCliModule() function
- Make getClient() async to support lazy loading
- Update base-provider to handle async getClient() calls
- Update tests to handle async getClient() method

This allows the application to start without the gemini-cli package
installed, only loading it when actually needed.
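
A sketch of the lazy-loading pattern; the module-level cache and error message are illustrative:

```js
let geminiCliModule = null;

async function loadGeminiCliModule() {
	if (!geminiCliModule) {
		try {
			// Resolved only when gemini-cli is actually used, so the optional
			// dependency may be absent at startup without breaking the app.
			geminiCliModule = await import('ai-sdk-provider-gemini-cli');
		} catch (err) {
			throw new Error(
				'gemini-cli support requires the optional dependency ' +
					'ai-sdk-provider-gemini-cli. Install it with: ' +
					'npm install ai-sdk-provider-gemini-cli'
			);
		}
	}
	return geminiCliModule;
}

// getClient() becomes async so callers await the dynamically loaded SDK.
async function getClient() {
	const { createGeminiProvider } = await loadGeminiCliModule();
	return createGeminiProvider({ authType: 'oauth-personal' });
}
```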

* feat(gemini-cli): replace regex-based JSON extraction with jsonc-parser

- Add jsonc-parser dependency for robust JSON parsing
- Replace simple regex approach with progressive parsing strategy:
  1. Direct parsing after cleanup
  2. Smart boundary detection with single-pass analysis
  3. Limited fallback for edge cases
- Optimize performance with early termination and strategic sampling
- Add comprehensive tests for variable declarations, trailing commas,
  escaped quotes, nested objects, and performance edge cases
- Improve reliability for complex JSON structures that Gemini commonly produces
- Fix code formatting with biome

This addresses JSON parsing failures in generateObject operations while
maintaining backward compatibility and significantly improving performance
for large responses.
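
A condensed sketch of the progressive strategy (standalone helper; boundary detection in the real provider is more elaborate):

```js
import { parse } from 'jsonc-parser';

function extractJson(text) {
	// 1. Direct parse after cleanup (markdown fence stripping elided here).
	const cleaned = text.trim();
	let errors = [];
	const direct = parse(cleaned, errors, { allowTrailingComma: true });
	if (errors.length === 0 && direct !== undefined) return direct;

	// 2. Boundary detection: single pass from the first opening bracket to its
	//    matching close, tracking string and escape state along the way.
	const start = cleaned.search(/[[{]/);
	if (start !== -1) {
		const open = cleaned[start];
		const close = open === '{' ? '}' : ']';
		let depth = 0;
		let inString = false;
		for (let i = start; i < cleaned.length; i++) {
			const ch = cleaned[i];
			if (inString) {
				if (ch === '\\') i++; // skip the escaped character
				else if (ch === '"') inString = false;
			} else if (ch === '"') inString = true;
			else if (ch === open) depth++;
			else if (ch === close && --depth === 0) {
				errors = [];
				const value = parse(cleaned.slice(start, i + 1), errors, {
					allowTrailingComma: true
				});
				if (errors.length === 0 && value !== undefined) return value;
				break; // 3. Limited fallback: give up and surface the failure.
			}
		}
	}
	throw new Error('Unable to extract JSON from model response');
}
```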

* fix: update package-lock.json and fix formatting for CI/CD

- Add jsonc-parser to package-lock.json for proper npm ci compatibility
- Fix biome formatting issues in gemini-cli provider and tests
- Ensure all CI/CD checks pass

* feat(gemini-cli): implement comprehensive JSON output reliability system

- Add automatic JSON request detection via content analysis patterns
- Implement task-specific prompt simplification for improved AI compliance
- Add strict JSON enforcement through enhanced system prompts
- Implement response interception with intelligent JSON extraction fallback
- Add comprehensive test coverage for all new JSON handling methods
- Move debug logging to appropriate level for clean user experience

This multi-layered approach addresses gemini-cli's conversational response
tendencies, ensuring reliable structured JSON output for task expansion
operations. Achieves 100% success rate in end-to-end testing while
maintaining full backward compatibility with existing functionality.

Technical implementation includes:
• JSON detection via user message content analysis
• Expand-task prompt simplification with cleaner instructions
• System prompt enhancement with strict JSON enforcement
• Response processing with jsonc-parser-based extraction
• Comprehensive unit test coverage for edge cases
• Debug-level logging to prevent user interface clutter

Resolves: gemini-cli JSON formatting inconsistencies
Tested: All 46 test suites pass, formatting verified
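
A sketch of the detection layer; the patterns and constants are illustrative, not the provider's exact heuristics:

```js
// Heuristic: does the user message demand structured JSON output?
const JSON_REQUEST_PATTERNS = [
	/respond (?:only )?(?:with|in) (?:valid )?json/i,
	/return (?:a )?json (?:object|array)/i,
	/"subtasks"\s*:/ // task-expansion payloads reference a subtasks array
];

function isJsonRequest(messages) {
	const userText = messages
		.filter((m) => m.role === 'user')
		.map((m) => m.content)
		.join('\n');
	return JSON_REQUEST_PATTERNS.some((re) => re.test(userText));
}

// When detected, the system prompt gains a strict suffix and the response is
// routed through the jsonc-parser extraction fallback described above.
const STRICT_JSON_SUFFIX =
	'\nRespond ONLY with the JSON value. No commentary, no code fences.';
```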

* chore: add changeset for gemini-cli provider implementation

Adds a minor version bump for the gemini-cli provider, which ships with:
- Lazy loading and optional dependency management
- Advanced JSON parsing with jsonc-parser
- Multi-layer reliability system for structured output
- Complete test coverage and CI/CD compliance

* refactor: consolidate optional auth provider logic

- Add gemini-cli to existing providersWithoutApiKeys array in config-manager
- Export providersWithoutApiKeys for reuse across modules
- Remove duplicate OPTIONAL_AUTH_PROVIDERS Set from ai-services-unified
- Update ai-services-unified to import and use centralized array
- Fix Jest mock to include new providersWithoutApiKeys export

This eliminates code duplication and provides a single source of truth
for which providers support optional authentication, addressing PR
reviewer feedback about existing similar functionality in src/constants.
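
The consolidated shape, sketched (entries other than gemini-cli follow the surrounding description; the helper is illustrative):

```js
// config-manager.js: single source of truth for optional-auth providers.
export const providersWithoutApiKeys = ['ollama', 'claude-code', 'gemini-cli'];

// ai-services-unified.js: consume the shared array instead of a local Set.
import { providersWithoutApiKeys } from './config-manager.js';

function requiresApiKey(providerName) {
	return !providersWithoutApiKeys.includes(providerName);
}
```
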
2025-07-02 21:46:19 +02:00


/**
 * models.js
 * Core functionality for managing AI model configurations
 */

import https from 'https';
import http from 'http';
import {
	getMainModelId,
	getResearchModelId,
	getFallbackModelId,
	getAvailableModels,
	getMainProvider,
	getResearchProvider,
	getFallbackProvider,
	isApiKeySet,
	getMcpApiKeyStatus,
	getConfig,
	writeConfig,
	isConfigFilePresent,
	getAllProviders,
	getBaseUrlForRole
} from '../config-manager.js';
import { findConfigPath } from '../../../src/utils/path-utils.js';
import { log } from '../utils.js';
import { CUSTOM_PROVIDERS } from '../../../src/constants/providers.js';

/**
 * Fetches the list of models from OpenRouter API.
 * @returns {Promise<Array|null>} A promise that resolves with the list of model IDs or null if fetch fails.
 */
function fetchOpenRouterModels() {
	return new Promise((resolve) => {
		const options = {
			hostname: 'openrouter.ai',
			path: '/api/v1/models',
			method: 'GET',
			headers: {
				Accept: 'application/json'
			}
		};

		const req = https.request(options, (res) => {
			let data = '';
			res.on('data', (chunk) => {
				data += chunk;
			});
			res.on('end', () => {
				if (res.statusCode === 200) {
					try {
						const parsedData = JSON.parse(data);
						resolve(parsedData.data || []); // Return the array of models
					} catch (e) {
						console.error('Error parsing OpenRouter response:', e);
						resolve(null); // Indicate failure
					}
				} else {
					console.error(
						`OpenRouter API request failed with status code: ${res.statusCode}`
					);
					resolve(null); // Indicate failure
				}
			});
		});

		req.on('error', (e) => {
			console.error('Error fetching OpenRouter models:', e);
			resolve(null); // Indicate failure
		});

		req.end();
	});
}

/**
 * Fetches the list of models from Ollama instance.
 * @param {string} baseURL - The base URL for the Ollama API (e.g., "http://localhost:11434/api")
 * @returns {Promise<Array|null>} A promise that resolves with the list of model objects or null if fetch fails.
 */
function fetchOllamaModels(baseURL = 'http://localhost:11434/api') {
	return new Promise((resolve) => {
		try {
			// Parse the base URL to extract hostname, port, and base path
			const url = new URL(baseURL);
			const isHttps = url.protocol === 'https:';
			const port = url.port || (isHttps ? 443 : 80);
			const basePath = url.pathname.endsWith('/')
				? url.pathname.slice(0, -1)
				: url.pathname;

			const options = {
				hostname: url.hostname,
				port: parseInt(port, 10),
				path: `${basePath}/tags`,
				method: 'GET',
				headers: {
					Accept: 'application/json'
				}
			};

			const requestLib = isHttps ? https : http;
			const req = requestLib.request(options, (res) => {
				let data = '';
				res.on('data', (chunk) => {
					data += chunk;
				});
				res.on('end', () => {
					if (res.statusCode === 200) {
						try {
							const parsedData = JSON.parse(data);
							resolve(parsedData.models || []); // Return the array of models
						} catch (e) {
							console.error('Error parsing Ollama response:', e);
							resolve(null); // Indicate failure
						}
					} else {
						console.error(
							`Ollama API request failed with status code: ${res.statusCode}`
						);
						resolve(null); // Indicate failure
					}
				});
			});

			req.on('error', (e) => {
				console.error('Error fetching Ollama models:', e);
				resolve(null); // Indicate failure
			});

			req.end();
		} catch (e) {
			console.error('Error parsing Ollama base URL:', e);
			resolve(null); // Indicate failure
		}
	});
}

/**
 * Get the current model configuration
 * @param {Object} [options] - Options for the operation
 * @param {Object} [options.session] - Session object containing environment variables (for MCP)
 * @param {Function} [options.mcpLog] - MCP logger object (for MCP)
 * @param {string} [options.projectRoot] - Project root directory
 * @returns {Object} RESTful response with current model configuration
 */
async function getModelConfiguration(options = {}) {
	const { mcpLog, projectRoot, session } = options;
	const report = (level, ...args) => {
		if (mcpLog && typeof mcpLog[level] === 'function') {
			mcpLog[level](...args);
		}
	};

	if (!projectRoot) {
		throw new Error('Project root is required but not found.');
	}

	// Use centralized config path finding instead of hardcoded path
	const configPath = findConfigPath(null, { projectRoot });
	const configExists = isConfigFilePresent(projectRoot);
	log(
		'debug',
		`Checking for config file using findConfigPath, found: ${configPath}`
	);
	log(
		'debug',
		`Checking config file using isConfigFilePresent(), exists: ${configExists}`
	);

	if (!configExists) {
		throw new Error(
			'The configuration file is missing. Run "task-master models --setup" to create it.'
		);
	}

	try {
		// Get current settings - these should use the config from the found path automatically
		const mainProvider = getMainProvider(projectRoot);
		const mainModelId = getMainModelId(projectRoot);
		const researchProvider = getResearchProvider(projectRoot);
		const researchModelId = getResearchModelId(projectRoot);
		const fallbackProvider = getFallbackProvider(projectRoot);
		const fallbackModelId = getFallbackModelId(projectRoot);

		// Check API keys
		const mainCliKeyOk = isApiKeySet(mainProvider, session, projectRoot);
		const mainMcpKeyOk = getMcpApiKeyStatus(mainProvider, projectRoot);
		const researchCliKeyOk = isApiKeySet(
			researchProvider,
			session,
			projectRoot
		);
		const researchMcpKeyOk = getMcpApiKeyStatus(researchProvider, projectRoot);
		const fallbackCliKeyOk = fallbackProvider
			? isApiKeySet(fallbackProvider, session, projectRoot)
			: true;
		const fallbackMcpKeyOk = fallbackProvider
			? getMcpApiKeyStatus(fallbackProvider, projectRoot)
			: true;

		// Get available models to find detailed info
		const availableModels = getAvailableModels(projectRoot);

		// Find model details
		const mainModelData = availableModels.find((m) => m.id === mainModelId);
		const researchModelData = availableModels.find(
			(m) => m.id === researchModelId
		);
		const fallbackModelData = fallbackModelId
			? availableModels.find((m) => m.id === fallbackModelId)
			: null;

		// Return structured configuration data
		return {
			success: true,
			data: {
				activeModels: {
					main: {
						provider: mainProvider,
						modelId: mainModelId,
						sweScore: mainModelData?.swe_score || null,
						cost: mainModelData?.cost_per_1m_tokens || null,
						keyStatus: {
							cli: mainCliKeyOk,
							mcp: mainMcpKeyOk
						}
					},
					research: {
						provider: researchProvider,
						modelId: researchModelId,
						sweScore: researchModelData?.swe_score || null,
						cost: researchModelData?.cost_per_1m_tokens || null,
						keyStatus: {
							cli: researchCliKeyOk,
							mcp: researchMcpKeyOk
						}
					},
					fallback: fallbackProvider
						? {
								provider: fallbackProvider,
								modelId: fallbackModelId,
								sweScore: fallbackModelData?.swe_score || null,
								cost: fallbackModelData?.cost_per_1m_tokens || null,
								keyStatus: {
									cli: fallbackCliKeyOk,
									mcp: fallbackMcpKeyOk
								}
							}
						: null
				},
				message: 'Successfully retrieved current model configuration'
			}
		};
	} catch (error) {
		report('error', `Error getting model configuration: ${error.message}`);
		return {
			success: false,
			error: {
				code: 'CONFIG_ERROR',
				message: error.message
			}
		};
	}
}

/**
 * Get all available models not currently in use
 * @param {Object} [options] - Options for the operation
 * @param {Object} [options.session] - Session object containing environment variables (for MCP)
 * @param {Function} [options.mcpLog] - MCP logger object (for MCP)
 * @param {string} [options.projectRoot] - Project root directory
 * @returns {Object} RESTful response with available models
 */
async function getAvailableModelsList(options = {}) {
	const { mcpLog, projectRoot } = options;
	const report = (level, ...args) => {
		if (mcpLog && typeof mcpLog[level] === 'function') {
			mcpLog[level](...args);
		}
	};

	if (!projectRoot) {
		throw new Error('Project root is required but not found.');
	}

	// Use centralized config path finding instead of hardcoded path
	const configPath = findConfigPath(null, { projectRoot });
	const configExists = isConfigFilePresent(projectRoot);
	log(
		'debug',
		`Checking for config file using findConfigPath, found: ${configPath}`
	);
	log(
		'debug',
		`Checking config file using isConfigFilePresent(), exists: ${configExists}`
	);

	if (!configExists) {
		throw new Error(
			'The configuration file is missing. Run "task-master models --setup" to create it.'
		);
	}

	try {
		// Get all available models
		const allAvailableModels = getAvailableModels(projectRoot);
		if (!allAvailableModels || allAvailableModels.length === 0) {
			return {
				success: true,
				data: {
					models: [],
					message: 'No available models found'
				}
			};
		}

		// Get currently used model IDs
		const mainModelId = getMainModelId(projectRoot);
		const researchModelId = getResearchModelId(projectRoot);
		const fallbackModelId = getFallbackModelId(projectRoot);
		// Filter out currently active models so only unused ones are returned
		const activeIds = [mainModelId, researchModelId, fallbackModelId].filter(
			Boolean
		);
		const otherAvailableModels = allAvailableModels
			.filter((model) => !activeIds.includes(model.id))
			.map((model) => ({
				provider: model.provider || 'N/A',
				modelId: model.id,
				sweScore: model.swe_score || null,
				cost: model.cost_per_1m_tokens || null,
				allowedRoles: model.allowed_roles || []
			}));
		return {
			success: true,
			data: {
				models: otherAvailableModels,
				message: `Successfully retrieved ${otherAvailableModels.length} available models`
			}
		};
	} catch (error) {
		report('error', `Error getting available models: ${error.message}`);
		return {
			success: false,
			error: {
				code: 'MODELS_LIST_ERROR',
				message: error.message
			}
		};
	}
}

/**
 * Update a specific model in the configuration
 * @param {string} role - The model role to update ('main', 'research', 'fallback')
 * @param {string} modelId - The model ID to set for the role
 * @param {Object} [options] - Options for the operation
 * @param {string} [options.providerHint] - Provider hint if already determined (e.g. 'openrouter', 'ollama', 'bedrock', 'azure', 'vertex', 'claude-code', 'gemini-cli')
 * @param {Object} [options.session] - Session object containing environment variables (for MCP)
 * @param {Function} [options.mcpLog] - MCP logger object (for MCP)
 * @param {string} [options.projectRoot] - Project root directory
 * @returns {Object} RESTful response with result of update operation
 */
async function setModel(role, modelId, options = {}) {
	const { mcpLog, projectRoot, providerHint } = options;
	const report = (level, ...args) => {
		if (mcpLog && typeof mcpLog[level] === 'function') {
			mcpLog[level](...args);
		}
	};

	if (!projectRoot) {
		throw new Error('Project root is required but not found.');
	}

	// Use centralized config path finding instead of hardcoded path
	const configPath = findConfigPath(null, { projectRoot });
	const configExists = isConfigFilePresent(projectRoot);
	log(
		'debug',
		`Checking for config file using findConfigPath, found: ${configPath}`
	);
	log(
		'debug',
		`Checking config file using isConfigFilePresent(), exists: ${configExists}`
	);

	if (!configExists) {
		throw new Error(
			'The configuration file is missing. Run "task-master models --setup" to create it.'
		);
	}

	// Validate role
	if (!['main', 'research', 'fallback'].includes(role)) {
		return {
			success: false,
			error: {
				code: 'INVALID_ROLE',
				message: `Invalid role: ${role}. Must be one of: main, research, fallback.`
			}
		};
	}

	// Validate model ID
	if (typeof modelId !== 'string' || modelId.trim() === '') {
		return {
			success: false,
			error: {
				code: 'INVALID_MODEL_ID',
				message: `Invalid model ID: ${modelId}. Must be a non-empty string.`
			}
		};
	}

	try {
		const availableModels = getAvailableModels(projectRoot);
		const currentConfig = getConfig(projectRoot);
		let determinedProvider = null; // Initialize provider
		let warningMessage = null;

		// Find the model data in internal list initially to see if it exists at all
		let modelData = availableModels.find((m) => m.id === modelId);

		// --- Revised Logic: Prioritize providerHint --- //
		if (providerHint) {
			// Hint provided (--ollama or --openrouter flag used)
			if (modelData && modelData.provider === providerHint) {
				// Found internally AND provider matches the hint
				determinedProvider = providerHint;
				report(
					'info',
					`Model ${modelId} found internally with matching provider hint ${determinedProvider}.`
				);
			} else {
				// Either not found internally, OR found but under a DIFFERENT provider than hinted.
				// Proceed with custom logic based ONLY on the hint.
				if (providerHint === CUSTOM_PROVIDERS.OPENROUTER) {
					// Check OpenRouter ONLY because hint was openrouter
					report('info', `Checking OpenRouter for ${modelId} (as hinted)...`);
					const openRouterModels = await fetchOpenRouterModels();
					if (
						openRouterModels &&
						openRouterModels.some((m) => m.id === modelId)
					) {
						determinedProvider = CUSTOM_PROVIDERS.OPENROUTER;

						// Check if this is a free model (ends with :free)
						if (modelId.endsWith(':free')) {
							warningMessage = `Warning: OpenRouter free model '${modelId}' selected. Free models have significant limitations including lower context windows, reduced rate limits, and may not support advanced features like tool_use. Consider using the paid version '${modelId.replace(':free', '')}' for full functionality.`;
						} else {
							warningMessage = `Warning: Custom OpenRouter model '${modelId}' set. This model is not officially validated by Taskmaster and may not function as expected.`;
						}
						report('warn', warningMessage);
					} else {
						// Hinted as OpenRouter but not found in live check
						throw new Error(
							`Model ID "${modelId}" not found in the live OpenRouter model list. Please verify the ID and ensure it's available on OpenRouter.`
						);
					}
				} else if (providerHint === CUSTOM_PROVIDERS.OLLAMA) {
					// Check Ollama ONLY because hint was ollama
					report('info', `Checking Ollama for ${modelId} (as hinted)...`);

					// Get the Ollama base URL from config
					const ollamaBaseURL = getBaseUrlForRole(role, projectRoot);
					const ollamaModels = await fetchOllamaModels(ollamaBaseURL);
					if (ollamaModels === null) {
						// Connection failed - server probably not running
						throw new Error(
							`Unable to connect to Ollama server at ${ollamaBaseURL}. Please ensure Ollama is running and try again.`
						);
					} else if (ollamaModels.some((m) => m.model === modelId)) {
						determinedProvider = CUSTOM_PROVIDERS.OLLAMA;
						warningMessage = `Warning: Custom Ollama model '${modelId}' set. Ensure your Ollama server is running and has pulled this model. Taskmaster cannot guarantee compatibility.`;
						report('warn', warningMessage);
					} else {
						// Server is running but model not found
						const tagsUrl = `${ollamaBaseURL}/tags`;
						throw new Error(
							`Model ID "${modelId}" not found in the Ollama instance. Please verify the model is pulled and available. You can check available models with: curl ${tagsUrl}`
						);
					}
				} else if (providerHint === CUSTOM_PROVIDERS.BEDROCK) {
					// Set provider without model validation since Bedrock models are managed by AWS
					determinedProvider = CUSTOM_PROVIDERS.BEDROCK;
					warningMessage = `Warning: Custom Bedrock model '${modelId}' set. Please ensure the model ID is valid and accessible in your AWS account.`;
					report('warn', warningMessage);
				} else if (providerHint === CUSTOM_PROVIDERS.CLAUDE_CODE) {
					// Claude Code provider - check if model exists in our list
					determinedProvider = CUSTOM_PROVIDERS.CLAUDE_CODE;

					// Re-find modelData specifically for claude-code provider
					const claudeCodeModels = availableModels.filter(
						(m) => m.provider === 'claude-code'
					);
					const claudeCodeModelData = claudeCodeModels.find(
						(m) => m.id === modelId
					);
					if (claudeCodeModelData) {
						// Update modelData to the found claude-code model
						modelData = claudeCodeModelData;
						report('info', `Setting Claude Code model '${modelId}'.`);
					} else {
						warningMessage = `Warning: Claude Code model '${modelId}' not found in supported models. Setting without validation.`;
						report('warn', warningMessage);
					}
				} else if (providerHint === CUSTOM_PROVIDERS.AZURE) {
					// Set provider without model validation since Azure models are managed by Azure
					determinedProvider = CUSTOM_PROVIDERS.AZURE;
					warningMessage = `Warning: Custom Azure model '${modelId}' set. Please ensure the model deployment is valid and accessible in your Azure account.`;
					report('warn', warningMessage);
				} else if (providerHint === CUSTOM_PROVIDERS.VERTEX) {
					// Set provider without model validation since Vertex models are managed by Google Cloud
					determinedProvider = CUSTOM_PROVIDERS.VERTEX;
					warningMessage = `Warning: Custom Vertex AI model '${modelId}' set. Please ensure the model is valid and accessible in your Google Cloud project.`;
					report('warn', warningMessage);
				} else if (providerHint === CUSTOM_PROVIDERS.GEMINI_CLI) {
					// Gemini CLI provider - check if model exists in our list
					determinedProvider = CUSTOM_PROVIDERS.GEMINI_CLI;

					// Re-find modelData specifically for gemini-cli provider
					const geminiCliModels = availableModels.filter(
						(m) => m.provider === 'gemini-cli'
					);
					const geminiCliModelData = geminiCliModels.find(
						(m) => m.id === modelId
					);
					if (geminiCliModelData) {
						// Update modelData to the found gemini-cli model
						modelData = geminiCliModelData;
						report('info', `Setting Gemini CLI model '${modelId}'.`);
					} else {
						warningMessage = `Warning: Gemini CLI model '${modelId}' not found in supported models. Setting without validation.`;
						report('warn', warningMessage);
					}
				} else {
					// Invalid provider hint - should not happen with our constants
					throw new Error(`Invalid provider hint received: ${providerHint}`);
				}
			}
		} else {
			// No hint provided (flags not used)
			if (modelData) {
				// Found internally, use the provider from the internal list
				determinedProvider = modelData.provider;
				report(
					'info',
					`Model ${modelId} found internally with provider ${determinedProvider}.`
				);
			} else {
				// Model not found and no provider hint was given
				return {
					success: false,
					error: {
						code: 'MODEL_NOT_FOUND_NO_HINT',
						message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --claude-code, or --gemini-cli.`
					}
				};
			}
		}
		// --- End of Revised Logic --- //

		// At this point, we should have a determinedProvider if the model is valid (internally or custom)
		if (!determinedProvider) {
			// This case acts as a safeguard
			return {
				success: false,
				error: {
					code: 'PROVIDER_UNDETERMINED',
					message: `Could not determine the provider for model ID "${modelId}".`
				}
			};
		}

		// Update configuration
		currentConfig.models[role] = {
			...currentConfig.models[role], // Keep existing params like temperature
			provider: determinedProvider,
			modelId: modelId
		};

		// If model data is available, update maxTokens from supported-models.json
		if (modelData && modelData.max_tokens) {
			currentConfig.models[role].maxTokens = modelData.max_tokens;
		}

		// Write updated configuration
		const writeResult = writeConfig(currentConfig, projectRoot);
		if (!writeResult) {
			return {
				success: false,
				error: {
					code: 'CONFIG_WRITE_ERROR',
					message: 'Error writing updated configuration to configuration file'
				}
			};
		}

		const successMessage = `Successfully set ${role} model to ${modelId} (Provider: ${determinedProvider})`;
		report('info', successMessage);

		return {
			success: true,
			data: {
				role,
				provider: determinedProvider,
				modelId,
				message: successMessage,
				warning: warningMessage // Include warning in the response data
			}
		};
	} catch (error) {
		report('error', `Error setting ${role} model: ${error.message}`);
		return {
			success: false,
			error: {
				code: 'SET_MODEL_ERROR',
				message: error.message
			}
		};
	}
}

/**
 * Get API key status for all known providers.
 * @param {Object} [options] - Options for the operation
 * @param {Object} [options.session] - Session object containing environment variables (for MCP)
 * @param {Function} [options.mcpLog] - MCP logger object (for MCP)
 * @param {string} [options.projectRoot] - Project root directory
 * @returns {Object} RESTful response with API key status report
 */
async function getApiKeyStatusReport(options = {}) {
	const { mcpLog, projectRoot, session } = options;
	const report = (level, ...args) => {
		if (mcpLog && typeof mcpLog[level] === 'function') {
			mcpLog[level](...args);
		}
	};

	try {
		const providers = getAllProviders();
		const providersToCheck = providers.filter(
			(p) => p.toLowerCase() !== 'ollama'
		); // Ollama is not a provider, it's a service, doesn't need an api key usually
		const statusReport = providersToCheck.map((provider) => {
			// Use provided projectRoot for MCP status check
			const cliOk = isApiKeySet(provider, session, projectRoot); // Pass session and projectRoot for CLI check
			const mcpOk = getMcpApiKeyStatus(provider, projectRoot);
			return {
				provider,
				cli: cliOk,
				mcp: mcpOk
			};
		});

		report('info', 'Successfully generated API key status report.');
		return {
			success: true,
			data: {
				report: statusReport,
				message: 'API key status report generated.'
			}
		};
	} catch (error) {
		report('error', `Error generating API key status report: ${error.message}`);
		return {
			success: false,
			error: {
				code: 'API_KEY_STATUS_ERROR',
				message: error.message
			}
		};
	}
}

export {
	getModelConfiguration,
	getAvailableModelsList,
	setModel,
	getApiKeyStatusReport
};
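
// Example (illustrative, not part of the module): how a CLI handler might
// consume setModel() with the gemini-cli provider hint.
//
//   const result = await setModel('main', 'gemini-2.5-pro', {
//     projectRoot: process.cwd(),
//     providerHint: 'gemini-cli'
//   });
//   if (result.success) console.log(result.data.message);
//   else console.error(`${result.error.code}: ${result.error.message}`);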