* Fix: Correct version resolution for banner and update check
Resolves issues where the tool's version was displayed as 'unknown'.
- Modified 'displayBanner' in 'ui.js' and 'checkForUpdate' in 'commands.js' to read package.json relative to their own script locations using import.meta.url.
- This ensures the correct local version is identified for both the main banner display and the update notification mechanism.
- Restored a missing closing brace in 'ui.js' to fix a SyntaxError.
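As a hedged illustration of the approach described above (the helper name and the relative path to package.json are assumptions, not the actual code in 'ui.js'/'commands.js'):

```js
import { readFileSync } from 'fs';
import { fileURLToPath } from 'url';
import path from 'path';

// Resolve package.json relative to this script's own location (via
// import.meta.url) rather than the current working directory, so a globally
// installed copy still reports its own version instead of 'unknown'.
function getLocalVersion() {
	try {
		const scriptDir = path.dirname(fileURLToPath(import.meta.url));
		const pkgPath = path.join(scriptDir, '..', 'package.json'); // assumed layout
		const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
		return pkg.version || 'unknown';
	} catch {
		return 'unknown';
	}
}
```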
* fix: refactor and cleanup
* fix: chores and cleanup and testing
* chore: cleanup
* fix: add changeset
---------
Co-authored-by: Christer Soederlund <christer.soderlund@gmail.com>
* Update .taskmasterconfig
The maximum output tokens for Claude 3.5 Sonnet is lower. With the current setting, the following error occurs:
Service call failed for role fallback (Provider: anthropic, Model: claude-3-5-sonnet-20240620): max_tokens: 120000 > 8192, which is the maximum allowed number of output tokens for claude-3-5-sonnet-20240620
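For illustration only, a hypothetical guard showing the constraint behind this change: the configured maxTokens must not exceed the model's output-token limit (8192 for claude-3-5-sonnet-20240620), otherwise the provider rejects the call. This helper is not part of Taskmaster.

```js
// Hypothetical helper: clamp a configured maxTokens to the model's output limit.
const MODEL_OUTPUT_LIMITS = {
	'claude-3-5-sonnet-20240620': 8192
};

function clampMaxTokens(modelId, configuredMaxTokens) {
	const limit = MODEL_OUTPUT_LIMITS[modelId];
	return limit ? Math.min(configuredMaxTokens, limit) : configuredMaxTokens;
}

// clampMaxTokens('claude-3-5-sonnet-20240620', 120000) -> 8192
```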
* Fix fallback model ID format and update maxTokens in Taskmaster configuration
---------
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
* chore: rename log level environment variable to `TASKMASTER_LOG_LEVEL`
### CHANGES
- Update environment variable from `LOG_LEVEL` to `TASKMASTER_LOG_LEVEL`.
- Reflect change in documentation for clarity.
- Adjust variable name in script and test files.
- Maintain default log level as `info`.
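A minimal sketch of reading the renamed variable with the default preserved (the surrounding wiring is illustrative, not the project's actual logger setup):

```js
// Prefer the new TASKMASTER_LOG_LEVEL variable; the default stays 'info'.
const logLevel = (process.env.TASKMASTER_LOG_LEVEL || 'info').toLowerCase();
console.log(`Effective log level: ${logLevel}`);
```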
* fix: add changeset
* chore: rename `LOG_LEVEL` to `TASKMASTER_LOG_LEVEL` for consistency
### CHANGES
- Update environment variable name to `TASKMASTER_LOG_LEVEL` in documentation.
- Reflect rename in configuration rules for clarity.
- Maintain consistency across project configuration settings.
This commit introduces several improvements to AI interactions and task management functionalities:
- AI Provider Enhancements (for Telemetry & Robustness):
  - Added a check that the AI-generated response text is a string, throwing an error if it is not. This prevents downstream errors.
  - Standardized the return structures of several providers' text- and object-generation functions to consistently include token usage fields. This aligns them with other providers (such as Anthropic, Google, and Perplexity) for consistent telemetry data collection, as part of implementing subtask 77.14 and similar work.
- Task Expansion:
  - Updated the prompt to be more explicit about using an empty array when there are no items, to better guide AI output.
  - Implemented a pre-emptive cleanup step that repairs malformed AI output before JSON parsing (a hedged example follows this entry). This improves resilience to AI output quirks, particularly observed with Perplexity.
- Fixed an issue in commands.js where successfulRemovals would be undefined; it is now read properly from the result variable.
- Updated the supported models for Gemini.
These changes address issues observed during E2E tests, enhance the
reliability of AI-driven task analysis and expansion, and promote
consistent telemetry data across multiple AI providers.
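A hedged example of the kind of pre-parse cleanup described above; this log does not spell out which patterns the real implementation repairs, so the fence-stripping and trailing-comma handling here are illustrative assumptions:

```js
// Illustrative cleanup of AI-generated JSON before parsing (assumed patterns).
const FENCE = '`'.repeat(3); // avoids writing a literal fence inside this block

function cleanAiJson(raw) {
	let text = raw.trim();
	// Strip a leading markdown code fence line.
	if (text.startsWith(FENCE)) {
		text = text.slice(text.indexOf('\n') + 1);
	}
	// Strip a trailing code fence.
	if (text.endsWith(FENCE)) {
		text = text.slice(0, text.lastIndexOf(FENCE));
	}
	// Drop trailing commas before closing brackets/braces, then parseable JSON remains.
	return text.replace(/,\s*([\]}])/g, '$1').trim();
}
```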
This commit updates the Perplexity AI provider to ensure its functions return data in a structure consistent with other providers and the expectations of the unified AI service layer.
Specifically:
- The text-generation function now returns an object instead of only the text string.
- The object-generation function now returns a wrapper object instead of only the raw result object.
These changes ensure that the unified service layer can correctly extract both the primary AI-generated content and the token usage data for telemetry purposes when Perplexity models are used. This resolves issues encountered during E2E testing where complexity analysis (which can use Perplexity for its research role) failed due to unexpected response formats.
The remaining provider function was already compliant.
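A hedged sketch of the shape change described above: the provider function now returns an object carrying both the generated text and token usage rather than a bare string (field and parameter names are illustrative, not Perplexity's actual API):

```js
// Wrap a raw provider call so callers always get { text, usage } back.
async function generateTextWrapped(callProvider, params) {
	const response = await callProvider(params);
	return {
		text: response.text, // primary AI-generated content
		usage: {
			inputTokens: response.usage?.promptTokens ?? 0,
			outputTokens: response.usage?.completionTokens ?? 0
		}
	};
}

// Usage with a stubbed provider call (top-level await assumes an ES module):
const result = await generateTextWrapped(
	async () => ({ text: 'hello', usage: { promptTokens: 3, completionTokens: 1 } }),
	{}
);
// result -> { text: 'hello', usage: { inputTokens: 3, outputTokens: 1 } }
```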
This commit makes the module's handling of AI service responses more robust.
Previously, the module strictly expected the AI-generated object to be nested inside the service response. It now first checks whether the response itself contains the expected task data object, and only then falls back to checking the nested property.
This enhancement increases compatibility with varying AI provider response structures, similar to the improvements recently made in other modules.
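A minimal sketch of the fallback described above, assuming (purely for illustration) that a task-like object can be recognized by a string `title` field and that the nested property is named `object`:

```js
// Accept the AI-generated object whether it sits at the top level of the
// service result or is nested under an `object` property.
function extractAiObject(mainResult) {
	const looksLikeTask = (candidate) =>
		candidate && typeof candidate.title === 'string';

	if (looksLikeTask(mainResult)) return mainResult;
	if (looksLikeTask(mainResult?.object)) return mainResult.object;

	throw new Error('AI response did not contain the expected task data object');
}
```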
This commit introduces two key improvements:
1. **Google Provider Telemetry:**
   - Updated the Google provider to include token usage data in the responses from its text- and object-generation functions.
   - This aligns the Google provider with others for consistent AI usage telemetry.
2. **Robust AI Object Response Handling:**
   - Modified the add-task module to more flexibly handle responses from the AI service.
   - The add-task module now checks for the AI-generated object both at the top level of the response and in its nested property, improving compatibility with different AI provider response structures (e.g., Gemini).
These changes enhance the reliability of AI interactions, particularly with the Google provider, and ensure accurate telemetry collection.
This commit applies the standard telemetry pattern to the analyze-task-complexity command and its corresponding MCP tool.
Key Changes:
1. Core Logic (scripts/modules/task-manager/analyze-task-complexity.js):
- The call to generateTextService now includes commandName: 'analyze-complexity' and outputType.
- The full response { mainResult, telemetryData } is captured.
- mainResult (the AI-generated text) is used for parsing the complexity report JSON.
- If running in CLI mode (outputFormat === 'text'), displayAiUsageSummary is called with the telemetryData.
- The function now returns { report: ..., telemetryData: ... }.
2. Direct Function (mcp-server/src/core/direct-functions/analyze-task-complexity.js):
- The call to the core analyzeTaskComplexity function now passes the necessary context for telemetry (commandName, outputType).
- The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client.
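A condensed, hedged sketch of the telemetry pattern applied in this and the following commits; generateTextService and displayAiUsageSummary are the project functions named in this log, but their exact signatures and the surrounding wiring are assumptions, so the dependencies are injected rather than imported here:

```js
// Generic shape of the pattern: pass commandName/outputType to the AI service,
// capture { mainResult, telemetryData }, show usage in CLI mode, return both.
async function runWithTelemetry({
	generateTextService,
	displayAiUsageSummary,
	prompt,
	commandName,
	outputType,
	outputFormat
}) {
	const { mainResult, telemetryData } = await generateTextService({
		prompt,
		commandName,
		outputType
	});

	// mainResult is the AI-generated text; here it is parsed as JSON.
	const report = JSON.parse(mainResult);

	// CLI mode surfaces the usage summary; MCP callers receive it in the payload.
	if (outputFormat === 'text') {
		displayAiUsageSummary(telemetryData);
	}

	// Returning telemetryData lets direct functions forward it to MCP clients.
	return { report, telemetryData };
}
```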
This commit applies the standard telemetry pattern to the update-subtask command and its corresponding MCP tool.
Key Changes:
1. Core Logic (scripts/modules/task-manager/update-subtask-by-id.js):
- The call to generateTextService now includes commandName: 'update-subtask' and outputType.
- The full response { mainResult, telemetryData } is captured.
- mainResult (the AI-generated text) is used for the appended content.
- If running in CLI mode (outputFormat === 'text'), displayAiUsageSummary is called with the telemetryData.
- The function now returns { updatedSubtask: ..., telemetryData: ... }.
2. Direct Function (mcp-server/src/core/direct-functions/update-subtask-by-id.js):
- The call to the core updateSubtaskById function now passes the necessary context for telemetry (commandName, outputType).
- The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client.
This commit applies the standard telemetry pattern to the update-tasks command and its corresponding MCP tool.
Key Changes:
1. Core Logic (scripts/modules/task-manager/update-tasks.js):
- The call to generateTextService now includes commandName: 'update-tasks' and outputType.
- The full response { mainResult, telemetryData } is captured.
- mainResult (the AI-generated text) is used for parsing the updated task JSON.
- If running in CLI mode (outputFormat === 'text'), displayAiUsageSummary is called with the telemetryData.
- The function now returns { success: true, updatedTasks: ..., telemetryData: ... }.
2. Direct Function (mcp-server/src/core/direct-functions/update-tasks.js):
- The call to the core updateTasks function now passes the necessary context for telemetry (commandName, outputType).
- The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client.
This commit applies the standard telemetry pattern to another AI-powered command and its corresponding MCP tool.
Key Changes:
1. **Core Logic:**
   - The call to the AI service now includes the command name and output type.
   - The full response, including the main result and telemetry data, is captured.
   - The main result (the AI-generated text) is used for parsing the updated task JSON.
   - If running in CLI mode, displayAiUsageSummary is called with the telemetryData.
   - The function now returns the updated data along with telemetryData.
2. **Direct Function:**
   - The call to the core function now passes the necessary context for telemetry (commandName, outputType).
   - The successful response object now correctly extracts coreResult.telemetryData and includes it in the data.telemetryData field returned to the MCP client.
This commit implements AI usage telemetry for the `expand-all-tasks` command/tool and refactors its CLI output for clarity and consistency.
Key Changes:
1. **Telemetry Integration for `expand-all-tasks` (Subtask 77.8):**
   - The `expandAllTasks` core logic (`scripts/modules/task-manager/expand-all-tasks.js`) now calls the `expandTask` function for each eligible task and collects the individual `telemetryData` returned.
   - A new helper function `_aggregateTelemetry` (in `utils.js`) is used to sum up token counts and costs from all individual expansions into a single `telemetryData` object for the entire `expand-all` operation (a hedged sketch of this aggregation follows this entry).
   - The `expandAllTasksDirect` wrapper (`mcp-server/src/core/direct-functions/expand-all-tasks.js`) now receives and passes this aggregated `telemetryData` in the MCP response.
   - For CLI usage, `displayAiUsageSummary` is called once with the aggregated telemetry.
2. **Improved CLI Output for `expand-all`:**
   - The `expandAllTasks` core function now handles displaying a final "Expansion Summary" box (showing Attempted, Expanded, Skipped, Failed counts) directly after the aggregated telemetry summary.
   - This consolidates all summary output within the core function for better flow and removes redundant logging from the command action in `scripts/modules/commands.js`.
   - The summary box border is green for success and red if any expansions failed.
3. **Code Refinements:**
   - Ensured `chalk` and `boxen` are imported in `expand-all-tasks.js` for the new summary box.
   - Minor adjustments to logging messages for clarity.
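A hedged sketch of what a helper like `_aggregateTelemetry` could do; the field names (`inputTokens`, `outputTokens`, `totalCost`) are assumptions based on this log, not the actual implementation in `utils.js`:

```js
// Sum token counts and costs from individual expansions into one object.
function aggregateTelemetry(telemetryItems, commandName) {
	return telemetryItems.reduce(
		(acc, item) => ({
			...acc,
			inputTokens: acc.inputTokens + (item.inputTokens || 0),
			outputTokens: acc.outputTokens + (item.outputTokens || 0),
			totalCost: acc.totalCost + (item.totalCost || 0)
		}),
		{ commandName, inputTokens: 0, outputTokens: 0, totalCost: 0 }
	);
}

// aggregateTelemetry(
//   [
//     { inputTokens: 100, outputTokens: 40, totalCost: 0.25 },
//     { inputTokens: 200, outputTokens: 60, totalCost: 0.5 }
//   ],
//   'expand-all'
// ) -> { commandName: 'expand-all', inputTokens: 300, outputTokens: 100, totalCost: 0.75 }
```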
This commit integrates AI usage telemetry for the `expand-task` command/tool and resolves issues related to incorrect return type handling and logging.
Key Changes:
1. **Telemetry Integration for `expand-task` (Subtask 77.7):**
   - Applied the standard telemetry pattern to the `expandTask` core logic (`scripts/modules/task-manager/expand-task.js`) and the `expandTaskDirect` wrapper (`mcp-server/src/core/direct-functions/expand-task.js`).
   - AI service calls now pass `commandName` and `outputType`.
   - The core function returns `{ task, telemetryData }`.
   - The direct function correctly extracts `task` and passes `telemetryData` in the MCP response `data` field.
   - The telemetry summary is now displayed in the CLI output for the `expand` command.
2. **Fix AI Service Return Type Handling (`ai-services-unified.js`):**
   - Corrected the `_unifiedServiceRunner` function to properly handle the return objects from provider-specific functions (`generateText`, `generateObject`).
   - It now correctly extracts `providerResponse.text` or `providerResponse.object` into the `mainResult` field based on `serviceType`, resolving the "text.trim is not a function" error encountered during `expand-task` (a simplified sketch follows this entry).
3. **Log Cleanup:**
   - Removed various redundant or excessive `console.log` statements across multiple files to reduce noise and improve clarity, particularly for MCP interactions.
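A simplified sketch of the return-type handling described in item 2 above; the real `_unifiedServiceRunner` does much more, so this only illustrates the `serviceType`-based extraction:

```js
// Pick the right field from a provider response depending on the service type.
function extractMainResult(serviceType, providerResponse) {
	if (serviceType === 'generateText') {
		return providerResponse.text; // plain string, safe to .trim()
	}
	if (serviceType === 'generateObject') {
		return providerResponse.object; // structured result
	}
	throw new Error(`Unsupported service type: ${serviceType}`);
}
```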
Implements AI usage telemetry capture and propagation for this command and its MCP tool, following the established telemetry pattern.
Key changes:
- **Core:**
  - Modified the AI service call to include the command name and output type.
  - Updated the core function to receive telemetryData from the AI service.
  - Adjusted the core function to return an object that includes the telemetryData.
  - Added a call to displayAiUsageSummary to show telemetry data in the CLI output when not in MCP mode.
- **Direct Function:**
  - Updated the call to the core function to pass the necessary telemetry context (such as the command name and output type).
  - Modified the wrapper to correctly handle the new return structure from the core function.
  - Ensures the telemetryData received from the core function is included in the data field of the successful MCP response.
- **MCP Tool:**
  - No changes required; the existing handler correctly passes through the object containing telemetryData.
- **CLI Command:**
  - The command's action now relies on the core function to handle CLI success messages and telemetry display.
This ensures that AI usage for this functionality is tracked and can be displayed or logged as appropriate for both CLI and MCP interactions.
This commit introduces a standardized pattern for capturing and propagating AI usage telemetry (cost, tokens, model used) across the Task Master stack and applies it to the 'add-task' functionality.
Key changes include:
- **Telemetry Pattern Definition:**
  - Added a rule defining the telemetry integration pattern for core logic, direct functions, MCP tools, and CLI commands.
  - Updated related rules to reference the new telemetry rule.
- **Core Telemetry Implementation (`ai-services-unified.js`):**
  - Refactored the unified AI service to generate and return a `telemetryData` object alongside the main AI result.
  - Fixed an MCP server startup crash by removing a redundant local data load and instead using the already-imported model definitions for cost calculations.
  - Added the command name to the `telemetryData` object.
- **`add-task` Integration:**
  - Modified the core add-task logic to receive `telemetryData` from the AI service, return it, and call the new UI display function for CLI output.
  - Updated the direct function to receive `telemetryData` from the core function and include it in the payload of its response.
  - Ensured the MCP tool correctly passes the `telemetryData` through in its response.
  - Updated the CLI command to correctly pass context (commandName, outputType) to the core function and rely on it for CLI telemetry display.
- **UI Enhancement:**
  - Added the `displayAiUsageSummary` function to the UI module to show telemetry details in the CLI (a hedged sketch follows at the end of this entry).
- **Project Management:**
  - Added subtasks 77.6 through 77.12 to track the rollout of this telemetry pattern to other AI-powered commands.
This establishes the foundation for tracking AI usage across the application.
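Below is a hedged sketch of what a CLI summary like `displayAiUsageSummary` might render; the field names and formatting are assumptions, while `chalk` and `boxen` are the libraries already mentioned in this log:

```js
import chalk from 'chalk';
import boxen from 'boxen';

// Illustrative only: print model, token counts, and estimated cost in a box.
function displayAiUsageSummarySketch(telemetryData) {
	const { modelUsed, inputTokens, outputTokens, totalCost } = telemetryData;
	const body = [
		`${chalk.bold('Model:')} ${modelUsed}`,
		`${chalk.bold('Tokens:')} ${inputTokens} in / ${outputTokens} out`,
		`${chalk.bold('Est. cost:')} $${totalCost.toFixed(4)}`
	].join('\n');
	console.log(boxen(body, { padding: 1, borderColor: 'blue' }));
}

displayAiUsageSummarySketch({
	modelUsed: 'claude-3-5-sonnet-20240620',
	inputTokens: 300,
	outputTokens: 100,
	totalCost: 0.0042
});
```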