This commit integrates AI usage telemetry for the `expand-task` command/tool and resolves issues related to incorrect return type handling and logging.
Key Changes:
1. **Telemetry Integration for `expand-task` (Subtask 77.7):**
   - Applied the standard telemetry pattern to the `expandTask` core logic (`scripts/modules/task-manager/expand-task.js`) and the `expandTaskDirect` wrapper (`mcp-server/src/core/direct-functions/expand-task.js`).
   - AI service calls now pass `commandName` and `outputType`.
   - Core function returns `{ task, telemetryData }`.
   - Direct function correctly extracts `task` and passes `telemetryData` in the MCP response `data` field (see the sketch after this list).
   - Telemetry summary is now displayed in the CLI output for the `expand` command.
2. **Fix AI Service Return Type Handling (`ai-services-unified.js`):**
   - Corrected the `_unifiedServiceRunner` function to properly handle the return objects from provider-specific functions (`generateText`, `generateObject`).
   - It now correctly extracts `providerResponse.text` or `providerResponse.object` into the `mainResult` field based on `serviceType`, resolving the "text.trim is not a function" error encountered during `expand-task`.
3. **Log Cleanup:**
   - Removed redundant and excessive `console.log` statements across multiple files to reduce noise and improve clarity, particularly for MCP interactions.
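For reference, a minimal sketch of the telemetry pattern described above, with the AI call stubbed out; the helper shapes and parameters beyond `commandName`, `outputType`, `mainResult`, and `{ task, telemetryData }` are simplified assumptions, not the actual implementation:

```js
// Stand-in for the unified AI service (the real one lives in
// scripts/modules/ai-services-unified.js); return shape follows the pattern above.
async function generateTextService({ prompt, commandName, outputType }) {
	// ...provider call and cost/token accounting happen here in the real code...
	return {
		mainResult: '[]', // extracted from providerResponse.text or .object based on serviceType
		telemetryData: { commandName, outputType, modelUsed: 'stub', totalTokens: 0, totalCost: 0 },
	};
}

// Core logic: returns the expanded task plus telemetry.
async function expandTask(task, { outputType }) {
	const { mainResult, telemetryData } = await generateTextService({
		prompt: `Expand task ${task.id} into subtasks`,
		commandName: 'expand-task',
		outputType,
	});
	return { task: { ...task, subtasks: JSON.parse(mainResult) }, telemetryData };
}

// Direct function wrapper (MCP): unwrap the task and surface telemetry in `data`.
async function expandTaskDirect(args) {
	const { task, telemetryData } = await expandTask(args.task, { outputType: 'mcp' });
	return { success: true, data: { task, telemetryData } };
}
```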
This commit introduces a standardized pattern for capturing and propagating AI usage telemetry (cost, tokens, model used) across the Task Master stack and applies it to the 'add-task' functionality.
Key changes include:
- **Telemetry Pattern Definition:**
- Added a new telemetry rule defining the integration pattern for core logic, direct functions, MCP tools, and CLI commands.
- Updated related rules to reference the new telemetry rule.
- **Core Telemetry Implementation (`ai-services-unified.js`):**
- Refactored the unified AI service to generate and return a `telemetryData` object alongside the main AI result.
- Fixed an MCP server startup crash by removing a redundant local data load and instead reusing already-imported data for cost calculations.
- Added a new field to the `telemetryData` object.
- **`add-task` Integration:**
- Modified the core `addTask` logic to receive `telemetryData` from the AI service, return it, and call the new UI display function for CLI output.
- Updated the direct function to receive `telemetryData` from the core function and include it in the `data` payload of its MCP response.
- Ensured the MCP tool correctly passes the `telemetryData` through in its response.
- Updated the CLI command to pass the required context to the core function and rely on it for CLI telemetry display.
- **UI Enhancement:**
- Added a UI function to show telemetry details in the CLI (illustrated in the sketch below).
- **Project Management:**
- Added subtasks 77.6 through 77.12 to track the rollout of this telemetry pattern to the other AI-powered commands.
This establishes the foundation for tracking AI usage across the application.
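For illustration, a hedged sketch of the kind of telemetry object and CLI summary this enables; the field names and the `displayAiUsageSummary` helper below are assumptions based on the cost/token/model data described above:

```js
// Hypothetical shape of the telemetry object returned by the unified AI service.
const telemetryData = {
	commandName: 'add-task',         // which command triggered the AI call
	providerName: 'anthropic',       // provider actually used (example value)
	modelUsed: 'claude-3-7-sonnet',  // model id actually used (example value)
	inputTokens: 1542,
	outputTokens: 830,
	totalCost: 0.0173,               // USD, derived from per-token pricing
};

// Hypothetical CLI display helper standing in for the new UI function.
function displayAiUsageSummary({ modelUsed, inputTokens, outputTokens, totalCost }) {
	console.log(
		`AI usage: ${modelUsed} | tokens in/out: ${inputTokens}/${outputTokens} | est. cost: $${totalCost.toFixed(4)}`
	);
}

displayAiUsageSummary(telemetryData);
```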
Modified ai-services-unified.js, update.js tool, and update-tasks.js direct function to correctly pass projectRoot. This enables the .env file API key fallback mechanism for the update command when running via MCP, ensuring consistent key resolution with the CLI context.
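A minimal sketch of the key-resolution fallback this enables, assuming a `resolveEnvVariable`-style helper (the actual signature and precedence in the codebase may differ):

```js
import fs from 'fs';
import path from 'path';

// Resolve an API key from the process environment, the MCP session env, or a
// .env file at the project root, in that order. Signature and precedence are assumptions.
function resolveEnvVariable(key, session, projectRoot) {
	if (process.env[key]) return process.env[key];
	if (session?.env?.[key]) return session.env[key];
	if (projectRoot) {
		const envPath = path.join(projectRoot, '.env');
		if (fs.existsSync(envPath)) {
			const line = fs
				.readFileSync(envPath, 'utf8')
				.split('\n')
				.find((l) => l.trim().startsWith(`${key}=`));
			if (line) return line.split('=').slice(1).join('=').trim();
		}
	}
	return undefined;
}

// Example: the MCP `update` path now forwards projectRoot so this fallback works.
const apiKey = resolveEnvVariable('ANTHROPIC_API_KEY', null, '/path/to/project');
```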
Integrates the OpenRouter AI provider using the Vercel AI SDK adapter (@openrouter/ai-sdk-provider). This allows users to configure and utilize models available through the OpenRouter platform.
- Added src/ai-providers/openrouter.js with standard Vercel AI SDK wrapper functions (generateText, streamText, generateObject); see the sketch after this list.
- Updated ai-services-unified.js to include the OpenRouter provider in the PROVIDER_FUNCTIONS map and API key resolution logic.
- Verified config-manager.js handles OpenRouter API key checks correctly.
- Users can configure OpenRouter models via .taskmasterconfig using the task-master models command or MCP models tool. Requires OPENROUTER_API_KEY.
- Enhanced error handling in ai-services-unified.js to provide clearer messages when generateObjectService fails due to lack of underlying tool support in the selected model/provider endpoint.
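A hedged sketch of what such a Vercel AI SDK wrapper might look like; the exact exports of `@openrouter/ai-sdk-provider` and the wrapper's signature here are assumptions rather than the project's actual code:

```js
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { generateText } from 'ai';

// Thin wrapper in the spirit of src/ai-providers/openrouter.js (simplified).
export async function generateOpenRouterText({ apiKey, modelId, messages, maxTokens, temperature }) {
	const openrouter = createOpenRouter({ apiKey });
	const result = await generateText({
		model: openrouter(modelId), // e.g. 'anthropic/claude-3.5-sonnet'
		messages,
		maxTokens,
		temperature,
	});
	// Return text plus usage so the unified layer can do cost/token accounting.
	return { text: result.text, usage: result.usage };
}
```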
Integrates the xAI provider into the unified AI service layer, allowing the use of Grok models (e.g., grok-3, grok-3-mini).
Changes include:
- Added the xAI provider SDK dependency.
- Created an xAI provider module with implementations for generateText, streamText, and generateObject (stubbed).
- Updated `ai-services-unified.js` to include the xAI provider in the provider function map (see the sketch after this list).
- Updated `config-manager.js` to recognize the 'xai' provider and the `XAI_API_KEY` environment variable.
- Updated `supported-models.json` to include known Grok models and their capabilities (object generation marked as likely unsupported).
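A rough sketch of how a provider function map like the one referenced above could dispatch calls; the map shape and the wrapper stubs are illustrative assumptions:

```js
// Simplified stand-ins for the provider wrappers under src/ai-providers/.
const anthropicProvider = { generateText: async (params) => ({ text: '...' }) };
const xaiProvider = { generateText: async (params) => ({ text: '...' }) };

// Dispatch table in the spirit of PROVIDER_FUNCTIONS in ai-services-unified.js.
const PROVIDER_FUNCTIONS = {
	anthropic: anthropicProvider,
	xai: xaiProvider,
};

async function callProvider(providerName, serviceType, params) {
	const fn = PROVIDER_FUNCTIONS[providerName]?.[serviceType];
	if (!fn) {
		throw new Error(`Provider "${providerName}" does not support ${serviceType}`);
	}
	return fn(params);
}
```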
- Add OpenAI provider implementation using @ai-sdk/openai.
- Update `models` command/tool to display API key status for configured providers.
- Implement model-specific `maxTokens` override logic in `config-manager.js` using `supported-models.json`.
- Improve AI error message parsing in `ai-services-unified.js` for better clarity.
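A hedged sketch of how a model-specific `maxTokens` override could work, assuming `supported-models.json` records a per-model token limit (the field names below are illustrative, not the file's actual schema):

```js
// Illustrative data in roughly the shape supported-models.json might use.
const supportedModels = {
	openai: [{ id: 'gpt-4o', max_tokens: 16384 }],
	anthropic: [{ id: 'claude-3-7-sonnet-20250219', max_tokens: 64000 }],
};

// Cap the user-configured maxTokens at the model's known limit, when we know it.
function getEffectiveMaxTokens(provider, modelId, configuredMaxTokens) {
	const entry = supportedModels[provider]?.find((m) => m.id === modelId);
	return entry?.max_tokens
		? Math.min(configuredMaxTokens, entry.max_tokens)
		: configuredMaxTokens;
}

console.log(getEffectiveMaxTokens('openai', 'gpt-4o', 100000)); // -> 16384
```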
This commit updates all relevant documentation (READMEs, docs/*, .cursor/rules) to accurately reflect the finalized unified AI service architecture and the new configuration system (.taskmasterconfig + .env/mcp.json). It also includes the final code cleanup steps related to the refactoring.
Key Changes:
1. **Documentation Updates:**
* Revised `README.md`, `README-task-master.md`, `assets/scripts_README.md`, `docs/configuration.md`, and `docs/tutorial.md` to explain the new configuration split (.taskmasterconfig vs .env/mcp.json).
* Updated MCP configuration examples in READMEs and tutorials to only include API keys in the `env` block.
* Added/updated examples for using the `--research` flag in `docs/command-reference.md`, `docs/examples.md`, and `docs/tutorial.md`.
* Updated `.cursor/rules/ai_services.mdc`, `.cursor/rules/architecture.mdc`, `.cursor/rules/dev_workflow.mdc`, `.cursor/rules/mcp.mdc`, `.cursor/rules/taskmaster.mdc`, `.cursor/rules/utilities.mdc`, and `.cursor/rules/new_features.mdc` to align with the new architecture, removing references to old patterns/files.
* Removed internal rule links from user-facing rules (`taskmaster.mdc`, `dev_workflow.mdc`, `self_improve.mdc`).
* Deleted outdated example file `docs/ai-client-utils-example.md`.
2. **Final Code Refactor & Cleanup:**
* Corrected `update-task-by-id.js` by removing the last import from the old `ai-services.js`.
* Refactored `update-subtask-by-id.js` to correctly use the unified service and logger patterns.
* Removed the obsolete export block from `mcp-server/src/core/task-master-core.js`.
* Corrected logger implementation in `update-tasks.js` for CLI context.
* Updated API key mapping in `config-manager.js` and `ai-services-unified.js`.
3. **Configuration Files:**
* Updated API keys in `.cursor/mcp.json`, replacing `GROK_API_KEY` with `XAI_API_KEY`.
* Updated `.env.example` with current API key names.
* Added `azureOpenaiBaseUrl` to `.taskmasterconfig` example.
4. **Task Management:**
* Marked documentation subtask 61.10 as 'done'.
* Includes various other task content/status updates from the diff summary.
5. **Changeset:**
* Added `.changeset/cuddly-zebras-matter.md` for user-facing `expand`/`expand-all` improvements.
This commit concludes the major architectural refactoring (Task 61) and ensures the documentation accurately reflects the current system.
- Fixed MCP server initialization warnings by refactoring config-manager.js to handle missing project roots silently during startup
- Added project root tracking (loadedConfigRoot) to improve config caching and prevent unnecessary reloads
- Modified _loadAndValidateConfig to return defaults without warnings when no explicitRoot provided
- Improved getConfig to only update cache when loading config with a specific project root
- Ensured warning messages still appear when explicitly specified roots have missing/invalid configs
- Prevented console output during MCP startup that was causing JSON parsing errors
- Verified parse_prd and other MCP tools still work correctly with the new config loading approach.
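A minimal sketch of the caching behavior described above; `getConfig`, `_loadAndValidateConfig`, and `loadedConfigRoot` are named in this commit, while the surrounding structure is a simplified assumption:

```js
import fs from 'fs';
import path from 'path';

let loadedConfig = null;
let loadedConfigRoot = null; // which project root the cached config came from

// Return defaults silently when no explicit root is given (e.g. during MCP
// startup); only warn when a caller explicitly passed a root with a missing
// or invalid config. Structure is a simplified sketch, not the actual module.
function _loadAndValidateConfig(explicitRoot) {
	const defaults = { models: {}, global: {} };
	if (!explicitRoot) return defaults; // silent: avoids output that broke MCP JSON parsing
	const configPath = path.join(explicitRoot, '.taskmasterconfig');
	if (!fs.existsSync(configPath)) {
		console.warn(`Missing .taskmasterconfig at ${explicitRoot}; using defaults.`);
		return defaults;
	}
	try {
		return { ...defaults, ...JSON.parse(fs.readFileSync(configPath, 'utf8')) };
	} catch (err) {
		console.warn(`Invalid .taskmasterconfig at ${explicitRoot}: ${err.message}`);
		return defaults;
	}
}

function getConfig(explicitRoot = null) {
	// Reuse the cache only if it was loaded for this same project root.
	if (loadedConfig && loadedConfigRoot === explicitRoot) return loadedConfig;
	const config = _loadAndValidateConfig(explicitRoot);
	// Only cache when the config was loaded for a specific project root.
	if (explicitRoot) {
		loadedConfig = config;
		loadedConfigRoot = explicitRoot;
	}
	return config;
}
```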
- Replaces the test Perplexity API key in mcp.json and rolls it; the old key is now invalid.
- Unified Service: Introduced 'scripts/modules/ai-services-unified.js' to centralize AI interactions using provider modules ('src/ai-providers/') and the Vercel AI SDK.
- Provider Modules: Implemented 'anthropic.js' and 'perplexity.js' wrappers for Vercel SDK.
- 'updateSubtaskById' Fix: Refactored the AI call within 'updateSubtaskById' to use 'generateTextService' from the unified layer, resolving runtime errors related to parameter passing and streaming. This serves as the pattern for refactoring other AI calls in 'scripts/modules/task-manager/'.
- Task Status: Marked Subtask 61.19 as 'done'.
- Rules: Added new 'ai-services.mdc' rule.
This centralizes AI logic, replacing previous direct SDK calls and custom implementations. API keys are resolved via 'resolveEnvVariable' within the service layer. The refactoring of 'updateSubtaskById' establishes the standard approach for migrating other AI-dependent functions in the task manager module to use the unified service.
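A hedged sketch of the refactored call pattern established by `updateSubtaskById`, with prompt construction and persistence omitted; the `generateTextService` parameters and return shape shown here are assumptions:

```js
import { generateTextService } from '../ai-services-unified.js';

// Simplified: ask the unified service for appended implementation notes, then
// attach them to the subtask. Error handling and file writes are omitted.
async function appendAiNotesToSubtask(subtask, userPrompt, { session } = {}) {
	const text = await generateTextService({
		role: 'main', // which configured model role to use (assumed parameter)
		session,      // lets the service resolve API keys from the MCP session env
		systemPrompt: 'You add concise implementation notes to task subtasks.',
		prompt: `Subtask: ${subtask.title}\n\nRequest: ${userPrompt}`,
	});
	subtask.details = `${subtask.details ?? ''}\n\n${text}`.trim();
	return subtask;
}
```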
Relates to Task 61.