claude-task-master/scripts/modules/task-manager/analyze-task-complexity.js


import chalk from 'chalk';
import boxen from 'boxen';
import readline from 'readline';
import { log, readJSON, writeJSON, isSilentMode } from '../utils.js';
import { startLoadingIndicator, stopLoadingIndicator } from '../ui.js';
import { generateTextService } from '../ai-services-unified.js';
import { getDebugFlag, getProjectName } from '../config-manager.js';
/**
 * Generates the prompt for complexity analysis.
 * (Moved from ai-services.js and simplified)
 * @param {Object} tasksData - The tasks data object.
 * @returns {string} The generated prompt.
 */
function generateInternalComplexityAnalysisPrompt(tasksData) {
	const tasksString = JSON.stringify(tasksData.tasks, null, 2);
	return `Analyze the following tasks to determine their complexity (1-10 scale) and recommend the number of subtasks for expansion. Provide a brief reasoning and an initial expansion prompt for each.
Tasks:
${tasksString}
Respond ONLY with a valid JSON array matching the schema:
[
{
"taskId": <number>,
"taskTitle": "<string>",
"complexityScore": <number 1-10>,
"recommendedSubtasks": <number>,
"expansionPrompt": "<string>",
"reasoning": "<string>"
},
...
]
Do not include any explanatory text, markdown formatting, or code block markers before or after the JSON array.`;
}
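/**
 * Illustrative sketch only (not part of this module): one way the plain-text
 * model response to the prompt above could be cleaned up and parsed into the
 * JSON array the schema describes. The helper name and the fence-stripping
 * details are assumptions for illustration, not this module's actual parsing
 * logic.
 */
function parseComplexityResponseExample(responseText) {
	let cleaned = responseText.trim();
	// Models sometimes wrap JSON in markdown code fences despite instructions.
	const fenced = cleaned.match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
	if (fenced) {
		cleaned = fenced[1];
	}
	// Fall back to the outermost [...] span in case stray text slipped in.
	const start = cleaned.indexOf('[');
	const end = cleaned.lastIndexOf(']');
	if (start === -1 || end === -1 || end < start) {
		throw new Error('No JSON array found in AI response');
	}
	return JSON.parse(cleaned.slice(start, end + 1));
}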
/**
 * Analyzes task complexity and generates expansion recommendations.
 * @param {Object} options - Command options
 * @param {string} options.file - Path to tasks file
 * @param {string} options.output - Path to report output file
 * @param {string|number} [options.threshold] - Complexity threshold
 * @param {boolean} [options.research] - Use research role
 * @param {Object} [options._filteredTasksData] - Pre-filtered task data (internal use)
 * @param {number} [options._originalTaskCount] - Original task count (internal use)
 * @param {Object} context - Context object, potentially containing session and mcpLog
 * @param {Object} [context.session] - Session object from MCP server (optional)
 * @param {Object} [context.mcpLog] - MCP logger object (optional)
 * @param {function} [context.reportProgress] - Deprecated: function to report progress (ignored)
 */
async function analyzeTaskComplexity(options, context = {}) {
	const { session, mcpLog } = context;
	const tasksPath = options.file || 'tasks/tasks.json';
	const outputPath = options.output || 'scripts/task-complexity-report.json';
	const thresholdScore = parseFloat(options.threshold || '5');
	const useResearch = options.research || false;
	const outputFormat = mcpLog ? 'json' : 'text';
	const reportLog = (message, level = 'info') => {
		if (mcpLog) {
			mcpLog[level](message);
		} else if (!isSilentMode() && outputFormat === 'text') {
			log(level, message);
		}
	};
	if (outputFormat === 'text') {
		console.log(
			chalk.blue(
				`Analyzing task complexity and generating expansion recommendations...`
			)
		);
	}
	try {
		reportLog(`Reading tasks from ${tasksPath}...`, 'info');
		let tasksData;
		let originalTaskCount = 0;
		if (options._filteredTasksData) {
			tasksData = options._filteredTasksData;
			originalTaskCount = options._originalTaskCount || tasksData.tasks.length;
			if (!options._originalTaskCount) {
				try {
					const originalData = readJSON(tasksPath);
					if (originalData && originalData.tasks) {
						originalTaskCount = originalData.tasks.length;
					}
				} catch (e) {
					log('warn', `Could not read original tasks file: ${e.message}`);
				}
			}
		} else {
			tasksData = readJSON(tasksPath);
			if (
				!tasksData ||
				!tasksData.tasks ||
				!Array.isArray(tasksData.tasks) ||
				tasksData.tasks.length === 0
			) {
				throw new Error('No tasks found in the tasks file');
			}
			originalTaskCount = tasksData.tasks.length;
			const activeStatuses = ['pending', 'blocked', 'in-progress'];
			const filteredTasks = tasksData.tasks.filter((task) =>
				activeStatuses.includes(task.status?.toLowerCase() || 'pending')
			);
			tasksData = {
				...tasksData,
				tasks: filteredTasks,
				_originalTaskCount: originalTaskCount
			};
		}
		const skippedCount = originalTaskCount - tasksData.tasks.length;
		reportLog(
			`Found ${originalTaskCount} total tasks in the task file.`,
			'info'
		);
		if (skippedCount > 0) {
			const skipMessage = `Skipping ${skippedCount} tasks marked as done/cancelled/deferred. Analyzing ${tasksData.tasks.length} active tasks.`;
			reportLog(skipMessage, 'info');
			if (outputFormat === 'text') {
				console.log(chalk.yellow(skipMessage));
			}
		}
		if (tasksData.tasks.length === 0) {
			const emptyReport = {
				meta: {
					generatedAt: new Date().toISOString(),
					tasksAnalyzed: 0,
					thresholdScore: thresholdScore,
					projectName: getProjectName(session),
					usedResearch: useResearch
				},
				complexityAnalysis: []
			};
reportLog(`Writing empty complexity report to ${outputPath}...`, 'info');
writeJSON(outputPath, emptyReport);
reportLog(
`Task complexity analysis complete. Report written to ${outputPath}`,
'success'
);
if (outputFormat === 'text') {
console.log(
chalk.green(
`Task complexity analysis complete. Report written to ${outputPath}`
)
);
const highComplexity = 0;
const mediumComplexity = 0;
const lowComplexity = 0;
const totalAnalyzed = 0;
console.log('\nComplexity Analysis Summary:');
console.log('----------------------------');
console.log(`Tasks in input file: ${originalTaskCount}`);
console.log(`Tasks successfully analyzed: ${totalAnalyzed}`);
console.log(`High complexity tasks: ${highComplexity}`);
console.log(`Medium complexity tasks: ${mediumComplexity}`);
console.log(`Low complexity tasks: ${lowComplexity}`);
console.log(
`Sum verification: ${highComplexity + mediumComplexity + lowComplexity} (should equal ${totalAnalyzed})`
);
console.log(`Research-backed analysis: ${useResearch ? 'Yes' : 'No'}`);
console.log(
`\nSee ${outputPath} for the full report and expansion commands.`
);
console.log(
boxen(
chalk.white.bold('Suggested Next Steps:') +
'\n\n' +
`${chalk.cyan('1.')} Run ${chalk.yellow('task-master complexity-report')} to review detailed findings\n` +
`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down complex tasks\n` +
`${chalk.cyan('3.')} Run ${chalk.yellow('task-master expand --all')} to expand all pending tasks based on complexity`,
{
padding: 1,
borderColor: 'cyan',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
}
return emptyReport;
}
const prompt = generateInternalComplexityAnalysisPrompt(tasksData);
// System prompt remains simple for text generation
const systemPrompt =
'You are an expert software architect and project manager analyzing task complexity. Respond only with the requested valid JSON array.';
let loadingIndicator = null;
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator('Calling AI service...');
}
let fullResponse = ''; // Stores the raw text response from the AI service
try {
const role = useResearch ? 'research' : 'main';
reportLog(`Using AI service with role: ${role}`, 'info');
// *** CHANGED: Use generateTextService ***
fullResponse = await generateTextService({
prompt,
systemPrompt,
role,
session
// No schema or objectName needed
});
// *** End Service Call Change ***
reportLog(
'Successfully received text response via AI service',
'success'
);
// --- Stop Loading Indicator (Unchanged) ---
if (loadingIndicator) {
stopLoadingIndicator(loadingIndicator);
loadingIndicator = null;
}
if (outputFormat === 'text') {
readline.clearLine(process.stdout, 0);
readline.cursorTo(process.stdout, 0);
console.log(
chalk.green('AI service call complete. Parsing response...')
);
}
// --- End Stop Loading Indicator ---
// --- Re-introduce Manual JSON Parsing & Cleanup ---
reportLog(`Parsing complexity analysis from text response...`, 'info');
let complexityAnalysis;
try {
let cleanedResponse = fullResponse;
// Basic trim first
cleanedResponse = cleanedResponse.trim();
// Remove potential markdown code block fences
const codeBlockMatch = cleanedResponse.match(
/```(?:json)?\s*([\s\S]*?)\s*```/
);
if (codeBlockMatch) {
cleanedResponse = codeBlockMatch[1].trim(); // Trim content inside block
reportLog('Extracted JSON from code block', 'info');
} else {
// If no code block, ensure it starts with '[' and ends with ']'
// This is less robust but a common fallback
const firstBracket = cleanedResponse.indexOf('[');
const lastBracket = cleanedResponse.lastIndexOf(']');
if (firstBracket !== -1 && lastBracket > firstBracket) {
cleanedResponse = cleanedResponse.substring(
firstBracket,
lastBracket + 1
);
reportLog('Extracted content between first [ and last ]', 'info');
} else {
reportLog(
'Warning: Response does not appear to be a JSON array.',
'warn'
);
// Keep going, maybe JSON.parse can handle it or will fail informatively
}
}
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log(chalk.gray('Attempting to parse cleaned JSON...'));
console.log(chalk.gray('Cleaned response (first 100 chars):'));
console.log(chalk.gray(cleanedResponse.substring(0, 100)));
console.log(chalk.gray('Last 100 chars:'));
console.log(
chalk.gray(cleanedResponse.substring(cleanedResponse.length - 100))
);
}
try {
complexityAnalysis = JSON.parse(cleanedResponse);
} catch (jsonError) {
reportLog(
'Initial JSON parsing failed. Raw response might be malformed.',
'error'
);
reportLog(`Original JSON Error: ${jsonError.message}`, 'error');
if (outputFormat === 'text' && getDebugFlag(session)) {
console.log(chalk.red('--- Start Raw Malformed Response ---'));
console.log(chalk.gray(fullResponse));
console.log(chalk.red('--- End Raw Malformed Response ---'));
}
// Re-throw the specific JSON parsing error
throw new Error(
`Failed to parse JSON response: ${jsonError.message}`
);
}
// Ensure it's an array after parsing
if (!Array.isArray(complexityAnalysis)) {
throw new Error('Parsed response is not a valid JSON array.');
}
} catch (error) {
// Catch errors specifically from the parsing/cleanup block
if (loadingIndicator) stopLoadingIndicator(loadingIndicator); // Ensure indicator stops
reportLog(
`Error parsing complexity analysis JSON: ${error.message}`,
'error'
);
if (outputFormat === 'text') {
console.error(
chalk.red(
`Error parsing complexity analysis JSON: ${error.message}`
)
);
}
throw error; // Re-throw parsing error
}
// --- End Manual JSON Parsing & Cleanup ---
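// Note: the parsing above assumes `cleanedResponse` has already had any
// markdown wrapping removed from the raw model output. A minimal sketch of
// that kind of cleanup (hypothetical; the actual logic lives earlier in this
// function and may differ) would be:
//
//   const cleanedResponse = fullResponse
//     .replace(/^\s*```(?:json)?\s*/i, '') // strip a leading ``` or ```json fence
//     .replace(/\s*```\s*$/, '')           // strip a trailing ``` fence
//     .trim();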
// --- Post-processing (Missing Task Check) - (Unchanged) ---
const taskIds = tasksData.tasks.map((t) => t.id);
const analysisTaskIds = complexityAnalysis.map((a) => a.taskId);
const missingTaskIds = taskIds.filter(
(id) => !analysisTaskIds.includes(id)
);
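// Note: `analysisTaskIds.includes(id)` inside a filter is O(n * m). For large
// task lists, an equivalent Set-based lookup (a sketch, not a required change)
// would be:
//
//   const analysisIdSet = new Set(analysisTaskIds);
//   const missingTaskIds = taskIds.filter((id) => !analysisIdSet.has(id));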
if (missingTaskIds.length > 0) {
reportLog(
`Missing analysis for ${missingTaskIds.length} tasks: ${missingTaskIds.join(', ')}`,
'warn'
);
if (outputFormat === 'text') {
console.log(
chalk.yellow(
`Missing analysis for ${missingTaskIds.length} tasks: ${missingTaskIds.join(', ')}`
)
);
}
for (const missingId of missingTaskIds) {
const missingTask = tasksData.tasks.find((t) => t.id === missingId);
if (missingTask) {
reportLog(`Adding default analysis for task ${missingId}`, 'info');
complexityAnalysis.push({
taskId: missingId,
taskTitle: missingTask.title,
complexityScore: 5,
recommendedSubtasks: 3,
expansionPrompt: `Break down this task with a focus on ${missingTask.title.toLowerCase()}.`,
reasoning:
'Automatically added due to missing analysis in AI response.'
});
}
}
}
// --- End Post-processing ---
// --- Report Creation & Writing (Unchanged) ---
const finalReport = {
meta: {
generatedAt: new Date().toISOString(),
tasksAnalyzed: tasksData.tasks.length,
thresholdScore: thresholdScore,
projectName: getProjectName(session),
usedResearch: useResearch
},
complexityAnalysis: complexityAnalysis
};
reportLog(`Writing complexity report to ${outputPath}...`, 'info');
writeJSON(outputPath, finalReport);
reportLog(
`Task complexity analysis complete. Report written to ${outputPath}`,
'success'
);
// --- End Report Creation & Writing ---
// --- Display CLI Summary (Unchanged) ---
if (outputFormat === 'text') {
console.log(
chalk.green(
`Task complexity analysis complete. Report written to ${outputPath}`
)
);
const highComplexity = complexityAnalysis.filter(
(t) => t.complexityScore >= 8
).length;
const mediumComplexity = complexityAnalysis.filter(
(t) => t.complexityScore >= 5 && t.complexityScore < 8
).length;
const lowComplexity = complexityAnalysis.filter(
(t) => t.complexityScore < 5
).length;
const totalAnalyzed = complexityAnalysis.length;
console.log('\nComplexity Analysis Summary:');
console.log('----------------------------');
console.log(
`Active tasks sent for analysis: ${tasksData.tasks.length}`
);
console.log(`Tasks successfully analyzed: ${totalAnalyzed}`);
console.log(`High complexity tasks: ${highComplexity}`);
console.log(`Medium complexity tasks: ${mediumComplexity}`);
console.log(`Low complexity tasks: ${lowComplexity}`);
console.log(
`Sum verification: ${highComplexity + mediumComplexity + lowComplexity} (should equal ${totalAnalyzed})`
);
console.log(`Research-backed analysis: ${useResearch ? 'Yes' : 'No'}`);
console.log(
`\nSee ${outputPath} for the full report and expansion commands.`
);
console.log(
boxen(
chalk.white.bold('Suggested Next Steps:') +
'\n\n' +
`${chalk.cyan('1.')} Run ${chalk.yellow('task-master complexity-report')} to review detailed findings\n` +
`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down complex tasks\n` +
`${chalk.cyan('3.')} Run ${chalk.yellow('task-master expand --all')} to expand all pending tasks based on complexity`,
{
padding: 1,
borderColor: 'cyan',
borderStyle: 'round',
margin: { top: 1 }
}
)
);
if (getDebugFlag(session)) {
console.debug(
chalk.gray(
`Final analysis object: ${JSON.stringify(finalReport, null, 2)}`
)
);
}
}
// --- End Display CLI Summary ---
return finalReport;
} catch (error) {
// Catches errors from generateTextService call
if (loadingIndicator) stopLoadingIndicator(loadingIndicator);
reportLog(`Error during AI service call: ${error.message}`, 'error');
if (outputFormat === 'text') {
console.error(
chalk.red(`Error during AI service call: ${error.message}`)
);
if (error.message.includes('API key')) {
console.log(
chalk.yellow(
'\nPlease ensure your API keys are correctly configured in .env or ~/.taskmaster/.env'
)
);
console.log(
chalk.yellow("Run 'task-master models --setup' if needed.")
);
}
}
throw error; // Re-throw AI service error
}
} catch (error) {
// Catches general errors (file read, etc.)
reportLog(`Error analyzing task complexity: ${error.message}`, 'error');
if (outputFormat === 'text') {
console.error(
chalk.red(`Error analyzing task complexity: ${error.message}`)
);
if (getDebugFlag(session)) {
console.error(error);
}
process.exit(1);
} else {
throw error;
}
}
}
export default analyzeTaskComplexity;