Mirror of https://github.com/eyaltoledano/claude-task-master.git (synced 2025-07-03 15:10:26 +00:00)

* Update SWE scores (#657)
* docs: Auto-update and format models.md
* feat: Flexible brand rules management (#460)
* chore(docs): update docs and rules related to model management.
* feat(ai): Add OpenRouter AI provider support. Integrates the OpenRouter AI provider using the Vercel AI SDK adapter (@openrouter/ai-sdk-provider), allowing users to configure and utilize models available through the OpenRouter platform.
  - Added src/ai-providers/openrouter.js with standard Vercel AI SDK wrapper functions (generateText, streamText, generateObject).
  - Updated ai-services-unified.js to include the OpenRouter provider in the PROVIDER_FUNCTIONS map and API key resolution logic.
  - Verified config-manager.js handles OpenRouter API key checks correctly.
  - Users can configure OpenRouter models via .taskmasterconfig using the task-master models command or the MCP models tool. Requires OPENROUTER_API_KEY.
  - Enhanced error handling in ai-services-unified.js to provide clearer messages when generateObjectService fails because the selected model/provider endpoint lacks underlying tool support.
* feat(cli): Add --status/-s filter flag to show command and get-task MCP tool. Implements filtering of the subtasks displayed by the `task-master show <id>` command using the `--status` (or `-s`) flag; also available in the MCP context.
  - Modified `commands.js` to add the `--status` option to the `show` command definition.
  - Updated `utils.js` (`findTaskById`) to handle the filtering logic and return original subtask counts/arrays when filtering.
  - Updated `ui.js` (`displayTaskById`) to use the filtered subtasks for the table, display a summary line when filtering, and use the original subtask list for the progress bar calculation.
  - Updated the MCP `get_task` tool and `showTaskDirect` function to accept and pass the `status` parameter.
  - Added changeset entry.
* fix(tasks): Improve next task logic to be subtask-aware
* fix(tasks): Enable removing multiple tasks/subtasks via comma-separated IDs
  - Refactors the core `removeTask` function (`task-manager/remove-task.js`) to accept and iterate over comma-separated task/subtask IDs.
  - Updates dependency cleanup and file regeneration logic to run once after processing all specified IDs.
  - Adjusts the `remove-task` CLI command (`commands.js`) description and confirmation prompt to handle multiple IDs correctly.
  - Fixes a bug in the CLI confirmation prompt where task/subtask titles were not displayed correctly.
  - Updates the `remove_task` MCP tool description to reflect the new multi-ID capability.
  This addresses the previously known issue where only the first ID in a comma-separated list was processed. Closes #140
* Update README.md (#342)
* Update Discord badge (#337)
* refactor(init): Improve robustness and dependencies; update template deps for AI SDKs; silence npm install in MCP; improve conditional model setup logic; refactor init.js flags; tweak Getting Started text; fix MCP server launch command; update default model in config template
* Refactor: Improve MCP logging, update E2E & tests. Refactors MCP server logging and updates testing infrastructure.
  - MCP Server: replaced manual logger wrappers with the centralized `createLogWrapper` utility; updated direct function calls to use `{ session, mcpLog }` context; removed the deprecated `model` parameter from the analyze, expand-all, and expand-task tools; adjusted MCP tool import paths and parameter descriptions.
  - Documentation: modified `docs/configuration.md` and `docs/tutorial.md`.
  - Testing: in the E2E script (`run_e2e.sh`), removed `set -e`, added an LLM analysis function (`analyze_log_with_llm`) and its integration, adjusted test run directory creation timing, and added debug echo statements. Deleted the `ai-client-factory.test.js`, `ai-client-utils.test.js`, and `ai-services.test.js` unit tests. Updated the `scripts/task-complexity-report.json` fixture.
  - Dev scripts: modified `scripts/dev.js`.
* chore(tests): Passes tests for merge candidate
  - Adjusted the interactive model default choice to be 'no change' instead of 'cancel setup'.
  - The E2E script works as designed, provided all provider API keys are present in .env at the root.
  - Fixes the entire test suite so it passes with the new architecture.
  - Fixes the dependency command to properly report a validation failure when there is one.
  - Refactored the config-manager.test.js mocking strategy and fixed assertions to read the real supported-models.json.
  - Fixed rule-transformer.test.js assertion syntax and transformation logic, narrowing a replacement whose search pattern was too broad.
  - Skips unstable tests in utils.test.js (log, readJSON, writeJSON error paths) due to a native SIGABRT crash, likely stemming from a conflict between internal chalk usage within those functions and Jest's test environment, possibly related to ESM module handling.
* chore(wtf): removes chai (not sure how that even made it in here) and a duplicate test in scripts/.
* fix: ensure API key detection properly reads .env in MCP context (the resulting lookup order is sketched below the changelog)
  - Problem: Task Master model configuration wasn't checking for API keys in the project's .env file when running through MCP; the isApiKeySet function only checked session.env and process.env, so MCP tools reported incorrect API key status even when keys were properly set in .env.
  - Solution: modified the resolveEnvVariable function in utils.js to properly read from the .env file at projectRoot; updated isApiKeySet to correctly pass projectRoot to resolveEnvVariable; made key detection behave consistently between CLI and MCP contexts, maintaining the correct precedence: session.env → .env file → process.env.
  - Testing: verified working correctly with both MCP and CLI tools; API keys are properly detected in .env in both contexts; deleted .cursor/mcp.json to confirm introspection of .env as fallback works.
* fix(update): pass projectRoot through update command flow. Modified ai-services-unified.js, the update.js tool, and the update-tasks.js direct function to correctly pass projectRoot. This enables the .env file API key fallback mechanism for the update command when running via MCP, ensuring consistent key resolution with the CLI context.
* fix(analyze-complexity): pass projectRoot through analyze-complexity flow. Modified the analyze-task-complexity.js core function, its direct function, and the analyze.js tool to correctly pass projectRoot. Fixed an import error in tools/index.js. Added debug logging to _resolveApiKey in ai-services-unified.js. This enables the .env API key fallback for analyze_project_complexity.
* fix(add-task): pass projectRoot and fix logging/refs. Modified the add-task core, direct function, and tool to pass projectRoot for the .env API key fallback. Fixed a logFn reference error and removed a deprecated reportProgress call in the core addTask function. Verified working.
* fix(parse-prd): pass projectRoot and fix schema/logging. Modified the parse-prd core, direct function, and tool to pass projectRoot for the .env API key fallback. Corrected the Zod schema used in the generateObjectService call. Fixed a logFn reference error in core parsePRD. Updated the unit test mock for utils.js.
* fix(update-task): pass projectRoot and adjust parsing. Modified the update-task-by-id core, direct function, and tool to pass projectRoot. Reverted parsing logic in the core function to prioritize `{...}` extraction, resolving parsing errors. Fixed a ReferenceError by correctly destructuring projectRoot.
* fix(update-subtask): pass projectRoot and allow updating done subtasks. Modified the update-subtask-by-id core, direct function, and tool to pass projectRoot for the .env API key fallback. Removed the check preventing appending details to completed subtasks.
* fix(mcp, expand): pass projectRoot through expand/expand-all flows
  - Problem: the expand_task and expand_all MCP tools failed with .env keys due to missing projectRoot propagation for API key resolution; expandTaskDirect also threw "ReferenceError: wasSilent is not defined".
  - Solution: modified the core logic, direct functions, and MCP tools for expand-task and expand-all to correctly destructure projectRoot from arguments and pass it down through the context object to the AI service call (generateTextService). Fixed the wasSilent scope in expandTaskDirect.
  - Verification: tested expand_task successfully in MCP using .env keys; reviewed the expand-all flow for correct projectRoot propagation.
* chore: prettier
* fix(expand-all): add projectRoot to the expandAllTasksDirect invocation.
* fix(update-tasks): Improve AI response parsing for the 'update' command (sketched below the changelog). Refactors the JSON array parsing logic in `update-tasks`. The previous logic relied on extracting content from markdown code blocks (json or javascript), which proved brittle when the AI response included comments or non-JSON text within the block, leading to parsing errors. The new strategy first attempts to extract content directly between the outermost '[' and ']' brackets, which is more robust because it targets the expected array structure directly. If bracket extraction fails, it falls back to a strict json code block, then prefix stripping, before attempting a raw parse. This aligns with the successful parsing strategy used for single-object responses and resolves the previously observed parsing errors.
* refactor(mcp): introduce withNormalizedProjectRoot HOF for path normalization (sketched below the changelog). Added the HOF to the MCP tools utils to normalize projectRoot from args/session. Refactored the get-task tool to use the HOF. Updated relevant documentation.
* refactor(mcp): apply withNormalizedProjectRoot HOF to update tool
  - Problem: the update MCP tool previously handled project root acquisition and path resolution inside its execute method, leading to potential inconsistencies and repetition.
  - Solution: refactored the tool to use the new withNormalizedProjectRoot Higher-Order Function (HOF) from the MCP tools utils.
  - Specific changes: imported the HOF; made the projectRoot parameter optional in the Zod schema, since the HOF derives it from the session when not provided; wrapped the entire function body with the HOF; removed the manual project-root resolution call from the function body; destructured the normalized projectRoot from the object the wrapped function receives, ensuring it is the normalized path provided by the HOF; used the normalized value when resolving paths and passing arguments onward.
  This change standardizes project root handling for the tool, simplifies its execute method, and ensures consistent path normalization. It serves as the pattern for refactoring the other MCP tools.
* fix: apply withNormalizedProjectRoot to all tools, fixing projectRoot issues on Linux and Windows
* fix: add the rest of the tools that need the wrapper
* chore: clean up tools to stop using rootFolder and remove unused imports
* chore: more cleanup
* refactor: Improve update-subtask, consolidate utils, update config. This commit introduces several improvements and refactorings across MCP tools, core logic, and configuration. Major changes:
  1. Refactor updateSubtaskById: switched from generateTextService to generateObjectService for structured AI responses, validated with a Zod schema (subtaskSchema); revised prompts so the AI generates relevant content based on the user request and context (parent/sibling tasks) while explicitly leaving timestamp/tag formatting to the code; implemented local timestamp generation (new Date().toISOString()) and formatting (using <info added on ...> tags) within the function after receiving the AI response, ensuring reliably formatted details are appended; corrected the logic to append only the locally formatted, AI-generated content block to the existing subtask.details.
  2. Consolidate MCP utilities: moved the withNormalizedProjectRoot HOF into mcp-server/src/tools/utils.js and updated MCP tools (like update-subtask.js) to import it from the new location.
  3. Refactor project initialization: deleted the redundant mcp-server/src/core/direct-functions/initialize-project-direct.js and updated mcp-server/src/core/task-master-core.js to import initializeProjectDirect from its correct location (./direct-functions/initialize-project.js).
  Other changes: updated the .taskmasterconfig fallback model to claude-3-7-sonnet-20250219; clarified the model cost representation in the models tool description (taskmaster.mdc and mcp-server/src/tools/models.js).
* fix: displayBanner logging when silentMode is active (#385)
* fix: improve error handling, test options, and model configuration
  - Enhance error validation in parse-prd.js and update-tasks.js
  - Fix a bug where mcpLog was incorrectly passed as logWrapper
  - Improve error messages and response formatting
  - Add a --skip-verification flag to E2E tests
  - Update the MCP server config that ships with init to match the new API key structure
  - Fix task force/append handling in the parse-prd command
  - Increase column width in the update-tasks display
* chore: fixes parse-prd to show a loading indicator in the CLI.
* fix(parse-prd): the suggested fix for mcpLog was incorrect; reverting to previously working code.
* chore(init): no longer ships a README with task-master init (commented out for now). Now checks for task-master-ai instead of task-master-mcp; this should prevent the init sequence from needlessly adding another task-master-mcp server entry to mcp.json, which a ton of people probably ran into.
* chore: restores 3.7 sonnet as the main role.
* fix(add/remove-dependency): the dependency MCP tools were failing due to a hard-coded tasks path in generate task files.
* chore: removes a tasks.json backup that was temporarily created.
* fix(next): adjusts the MCP tool response to correctly return the next task/subtask, and adds nextSteps to the next task response.
* chore: prettier
* chore: readme typos
* fix(config): restores sonnet 3.7 as the default main role.
* Version Packages
* hotfix: move production package to "dependencies" (#399)
* Version Packages
* Fix: issues with 0.13.0 not working (#402)
* Exit prerelease mode and version packages
* hotfix: move production package to "dependencies"
* Enter prerelease mode and version packages
* Enter prerelease mode and version packages
* chore: cleanup
* chore: improve pre.json and add pre-release workflow
* chore: fix package.json
* chore: cleanup
* chore: improve pre-release workflow
* chore: allow GitHub Actions to commit
* extract fileMap and conversionConfig into brand profile
* extract into brand profile
* add windsurf profile
* add remove brand rules function
* fix regex
* add rules command to add/remove rules for a specific brand
* fix post processing for roo
* allow multiples
* add cursor profile
* update test for new structure
* move rules to assets
* use assets/rules for rules files
* use standardized setupMCP function
* fix formatting
* fix formatting
* add logging
* fix escapes
* default to cursor
* allow init with certain rulesets; no more .windsurfrules
* update docs
* update log msg
* fix formatting
* keep mdc extension for cursor
* don't rewrite .mdc to .md inside the files
* fix roo init (add modes)
* fix cursor init (don't use roo transformation by default)
* use more generic function names
* update docs
* fix formatting
* update function names
* add changeset
* add rules to mcp initialize project
* register tool with mcp server
* update docs
* add integration test
* fix cursor initialization
* rule selection
* fix formatting
* fix MCP - remove yes flag
* add import
* update roo tests
* add/update tests
* remove test
* add rules command test
* update MCP responses, centralize rules profiles & helpers
* fix logging and MCP response messages
* fix formatting
* incorrect test
* fix tests
* update fileMap
* fix file extension transformations
* fix formatting
* add rules command test
* test already covered
* fix formatting
* move renaming logic into profiles
* make sure dir is deleted (DS_Store)
* add confirmation for rules removal
* add force flag for rules remove
* use force flag for test
* remove yes parameter
* fix formatting
* import brand profiles from rule-transformer.js
* update comment
* add interactive rules setup
* optimize
* only copy rules specifically listed in fileMap
* update comment
* add cline profile
* add brandDir to remove ambiguity and support Cline
* specify whether to create mcp config and filename
* add mcpConfigName value for path
* fix formatting
* remove rules just for this repository - only include rules to be distributed
* update error message
* update "brand rules" to "rules"
* update to minor
* remove comment
* remove comments
* move to /src/utils
* optimize imports
* move rules-setup.js to /src/utils
* move rule-transformer.js to /src/utils
* move confirmation to /src/ui/confirm.js
* default to all rules
* use profile js for mcp config settings
* only run rules interactive setup if not provided via command line
* update comments
* initialize with all brands if nothing specified
* update var name
* clean up
* enumerate brands for brand rules
* update instructions
* add test to check for brand profiles
* fix quotes
* update semantics and terminology from 'brand rules' to 'rules profiles'
* fix formatting
* fix formatting
* update function name and remove copying of cursor rules, now handled by rules transformer
* update comment
* rename to mcp-config-setup.js
* use enums for rules actions
* add aggregate reporting for rules add command
* add missing log message
* use simpler path
* use base profile with modifications for each brand
* use displayName and don't select any defaults in setup
* add confirmation if removing ALL rules profiles, and add --force flag on rules remove
* Use profile-detection instead of rules-detection
* add newline at end of mcp config
* add proper formatting for mcp.json
* update rules
* update rules
* update rules
* add checks for other rules and other profile folder items before removing
* update confirmation for rules remove
* update docs
* update changeset
* fix for filepath at bottom of rule
* Update cline profile and add test; adjust other rules tests
* update changeset
* update changeset
* clarify init for all profiles if not specified
* update rule text
* revert text
* use "rule profiles" instead of "rules profiles"
* use standard tool mappings for windsurf
* add Trae support
* update changeset
* update wording
* update to 'rule profile'
* remove unneeded exports to optimize loc
* combine to /src/utils/profiles.js; add codex and claude code profiles
* rename function and add boxen
* add claude and codex integration tests
* organize tests into profiles folder
* mock fs for transformer tests
* update UI
* add cline and trae integration tests
* update test
* update function name
* update formatting
* Update change set with new profiles
* move profile integration tests to subdirectory
* properly create temp directories in /tmp folder
* fix formatting
* use taskmaster subfolder for the 2 TM rules
* update wording
* ensure subdirectory exists
* update rules from next
* update from next
* update taskmaster rule
* add details on new rules command and init
* fix mcp init
* fix MCP path to assets
* remove duplication
* remove duplication
* MCP server path fixes for rules command
* fix for CLI roo rules add/remove
* update tests
* fix formatting
* fix pattern for interactive rule profiles setup
* restore comments
* restore comments
* restore comments
* remove unused import, fix quotes
* add missing integration tests
* add VS Code profile and tests
* update docs and rules to include vscode profile
* add rules subdirectory support per-profile
* move profiles to /src
* fix formatting
* rename to remove ambiguity
* use --setup for rules interactive setup
* Fix Cursor deeplink installation with copy-paste instructions (#723)
* change roo boomerang to orchestrator; update tests that don't use modes
* fix newline
* chore: cleanup
---------
Co-authored-by: Eyal Toledano <eyal@microangel.so>
Co-authored-by: Yuval <yuvalbl@users.noreply.github.com>
Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com>
Co-authored-by: Eyal Toledano <eutait@gmail.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* fix: providers config for azure, bedrock, and vertex (#822)
* fix: providers config for azure, bedrock, and vertex
* chore: improve changelog
* chore: fix CI
* fix: switch to ESM export to avoid mixed format (#633)
* fix: switch to ESM export to avoid mixed format (sketched below the changelog). The CLI entrypoint was using `module.exports` alongside ESM `import` statements, resulting in an invalid mixed module format. Replaced the CommonJS export with a proper ESM `export` to maintain consistency and prevent module resolution issues.
* chore: add changeset
---------
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
* fix: Fix external provider support (#726)
* fix(bedrock): improve AWS credential handling and add model definitions (#826)
* fix(bedrock): improve AWS credential handling and add model definitions (a hedged client sketch appears below the changelog)
  - Change the missing-AWS-credentials error to a warning
  - Allow fallback to system configuration (AWS config files or instance profiles)
  - Remove hardcoded region and profile parameters in the Bedrock client
  - Add Claude 3.7 Sonnet and DeepSeek R1 model definitions for Bedrock
  - Update the config manager to properly handle the Bedrock provider
* chore: cleanup and format and small refactor
---------
Co-authored-by: Ray Krueger <raykrueger@gmail.com>
* docs: Auto-update and format models.md
* Version Packages
* chore: fix package.json
* Fix/expand command tag corruption (#827)
* fix(expand): Fix tag corruption in expand command
  - Fix tag parameter passing through the MCP expand-task flow
  - Add the tag parameter to the direct function and tool registration
  - Fix the contextGatherer method name from _buildDependencyContext to _buildDependencyGraphs
  - Add comprehensive test coverage for tag handling in expand-task
  - Ensures the tagged task structure is preserved during expansion and prevents corruption when tag is undefined. Fixes the expand command causing tag corruption in tagged task lists. All existing tests pass and new test coverage was added.
* test(e2e): Add comprehensive tag-aware expand testing to verify the tag corruption fix
  - Add a new test section for feature-expand tag creation and testing
  - Verify tag preservation during expand, force expand, and expand --all operations
  - Test that the master tag remains intact and the feature-expand tag receives subtasks correctly
  - Fix file path references to use the correct .taskmaster/tasks/tasks.json location
  - Fix the config file check to use .taskmaster/config.json instead of .taskmasterconfig
  - All tag corruption verification tests pass successfully in the E2E test
* fix(changeset): Update the E2E test improvements changeset to properly reflect the tag corruption fix verification
* chore(changeset): combine duplicate changesets for the expand tag corruption fix. Merge eighty-breads-wonder.md into bright-llamas-enter.md to consolidate the expand command fix and its comprehensive E2E testing enhancements into a single changeset entry.
* Delete .changeset/eighty-breads-wonder.md
* Version Packages
* chore: fix package.json
* fix(expand): Enhance context handling in the expandAllTasks function
  - Added `tag` to context destructuring for better context management.
  - Updated the `readJSON` call to include `contextTag` for improved data integrity.
  - Ensured the correct tag is passed during task expansion to prevent tag corruption.
---------
Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de>
Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Add pyproject.toml as project root marker (#804)
* feat: Add pyproject.toml as project root marker (root detection is sketched below the changelog)
  - Added 'pyproject.toml' to the project markers array in findProjectRoot()
  - Enables Task Master to recognize Python projects using pyproject.toml
  - Improves project root detection for modern Python development workflows
  - Maintains compatibility with existing Node.js and Git-based detection
* chore: add changeset
---------
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
* feat: add Claude Code provider support. Implements Claude Code as a new AI provider that uses the Claude Code CLI without requiring API keys, enabling users to leverage Claude models through their local Claude Code installation. Key changes:
  - Add a complete AI SDK v1 implementation for the Claude Code provider: a custom SDK with streaming/non-streaming support, session management for conversation continuity, JSON extraction for object generation mode, and support for advanced settings (maxTurns, allowedTools, etc.)
  - Integrate Claude Code into Task Master's provider system: update ai-services-unified.js to handle keyless authentication, add the provider to supported-models.json with opus/sonnet models, and ensure correct maxTokens values are applied (opus: 32000, sonnet: 64000)
  - Fix a maxTokens configuration issue: add a max_tokens property to the getAvailableModels() output, update setModel() to properly handle claude-code models, and create an update-config-tokens.js utility for the init process
  - Add comprehensive documentation: a user guide with configuration examples, plus an explanation of advanced settings and future integration options
  The implementation maintains full backward compatibility with existing providers while adding seamless Claude Code support to all Task Master commands.
* fix(docs): correct invalid commands in claude-code usage examples. Removes the non-existent 'do', 'estimate', and 'analyze' commands, replaces them with actual Task Master commands (next, show, set-status), and uses correct syntax for parse-prd and analyze-complexity.
* feat: make @anthropic-ai/claude-code an optional dependency (the lazy-import pattern is sketched below the changelog). Makes the Claude Code SDK package optional, preventing installation failures for users who don't need Claude Code functionality.
  - Changes: added @anthropic-ai/claude-code to optionalDependencies in package.json; implemented lazy loading in language-model.js to only import the SDK when actually used; updated documentation to explain the optional installation requirement; applied formatting fixes for code consistency.
  - Benefits: users without Claude Code subscriptions don't need to install the dependency; reduced package size for users who don't use Claude Code; no installation failures if the package is unavailable; clear error messages when the package is needed but not installed.
  The implementation uses dynamic imports to load the SDK only when doGenerate() or doStream() is called, ensuring the provider can be instantiated without the package present.
* test: add comprehensive tests for ClaudeCodeProvider. Addresses code review feedback about missing automated tests for the ClaudeCodeProvider.
  - Unit tests: ClaudeCodeProvider basic functionality, the no-API-key requirement, and client creation, covering the constructor, validateAuth, and getClient methods; ClaudeCodeLanguageModel model initialization, lazy loading behavior, error messages, and warning generation.
  - Integration tests: optional dependency behavior when @anthropic-ai/claude-code is not installed, clear error messages for users about the missing package, and provider instantiation succeeding while usage fails gracefully.
  All tests pass and provide comprehensive coverage for the claude-code provider implementation.
* revert: remove maxTokens update functionality from init. This functionality was out of scope for the Claude Code provider PR; automatically updating maxTokens values in config.json during initialization is a general improvement that should be a separate PR. Additionally, Claude Code ignores maxTokens and temperature parameters anyway, making the change irrelevant for this integration. Removed scripts/modules/update-config-tokens.js and its import and usage in scripts/init.js.
* docs: add Claude Code support information to README. Added Claude Code to the list of supported providers in the Requirements section, noted that it requires no API key but needs the Claude Code CLI, added an example of configuring the claude-code/sonnet model, created a dedicated Claude Code Support section, and linked to the detailed setup documentation. This ensures users are aware of Claude Code as a no-API-key alternative for using Claude models.
* style: apply biome formatting to test files
* fix(models): add missing --claude-code flag to models command. The models command was missing the --claude-code provider flag, preventing users from setting Claude Code models via the CLI even though the backend already supported claude-code as a provider hint. Added the --claude-code option alongside the existing provider flags, included claudeCode in the provider flags validation, added claude-code to the providerHint logic for all three model roles (main, research, fallback), updated the error message listing the mutually exclusive flags, and added example usage to the help text. This allows commands like `task-master models --set-main sonnet --claude-code` and `task-master models --set-main opus --claude-code`. Without this flag, users got "Model ID not found" errors, since the system couldn't determine the correct provider for generic model names like "sonnet" or "opus".
* chore: add changeset for Claude Code provider feature
* docs: Auto-update and format models.md
* readme: add troubleshooting note for MCP tools not working
* Feature/compatibleapisupport (#830)
* add compatible platform api support
* Adjust the code according to the suggestions
* Fully revised as requested: restored all required checks, improved compatibility, and converted all comments to English.
* feat: Add support for compatible API endpoints via baseURL * chore: Add changeset for compatible API support * chore: cleanup * chore: improve changeset * fix: package-lock.json * fix: package-lock.json --------- Co-authored-by: He-Xun <1226807142@qq.com> * Rename Roo Code "Boomerang" role to "Orchestrator" (#831) * feat: Enhanced project initialization with Git worktree detection (#743) * Fix Cursor deeplink installation with copy-paste instructions (#723) * detect git worktree * add changeset * add aliases and git flags * add changeset * rename and update test * add store tasks in git functionality * update changeset * fix newline * remove unused import * update command wording * update command option text * fix: update task by id (#834) * store tasks in git by default (#835) * Call rules interactive setup during init (#833) * chore: rc version bump * feat: Claude Code slash commands for Task Master (#774) * Fix Cursor deeplink installation with copy-paste instructions (#723) * fix: expand-task (#755) * docs: Update o3 model price (#751) * docs: Auto-update and format models.md * docs: Auto-update and format models.md * feat: Add Claude Code task master commands Adds Task Master slash commands for Claude Code under /project:tm/ namespace --------- Co-authored-by: Joe Danziger <joe@ticc.net> Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com> * feat: make more compatible with "o" family models (#839) * docs: Auto-update and format models.md * docs: Add comprehensive Azure OpenAI configuration documentation (#837) * docs: Add comprehensive Azure OpenAI configuration documentation - Add detailed Azure OpenAI configuration section with prerequisites, authentication, and setup options - Include both global and per-model baseURL configuration examples - Add comprehensive troubleshooting guide for common Azure OpenAI issues - Update environment variables section with Azure OpenAI examples - Add Azure OpenAI models to all model tables (Main, Research, Fallback) - Include prominent Azure configuration example in main documentation - Fix azureBaseURL format to use correct Azure OpenAI endpoint structure Addresses common Azure OpenAI setup challenges and provides clear guidance for new users. 
* refactor: Move Azure models from docs/models.md to scripts/modules/supported-models.json - Remove Azure model entries from documentation tables - Add Azure provider section to supported-models.json with gpt-4o, gpt-4o-mini, and gpt-4-1 - Maintain consistency with existing model configuration structure * docs: Auto-update and format models.md * Version Packages * chore: format fix --------- Co-authored-by: Riccardo (Ricky) Esclapon <32306488+ries9112@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by: Joe Danziger <joe@ticc.net> Co-authored-by: Eyal Toledano <eyal@microangel.so> Co-authored-by: Yuval <yuvalbl@users.noreply.github.com> Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com> Co-authored-by: Eyal Toledano <eutait@gmail.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Nathan Marley <nathan@glowberrylabs.com> Co-authored-by: Ray Krueger <raykrueger@gmail.com> Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de> Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com> Co-authored-by: ejones40 <ethan.jones@fortyau.com> Co-authored-by: Ben Vargas <ben@vargas.com> Co-authored-by: V4G4X <34249137+V4G4X@users.noreply.github.com> Co-authored-by: He-Xun <1226807142@qq.com> Co-authored-by: neno <github@meaning.systems> Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com> Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com> Co-authored-by: Jitesh Thakur <56656484+Jitha-afk@users.noreply.github.com>
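Several mechanisms described in the changelog above are concrete enough to sketch. First, the API key lookup order from the .env-detection fix: session.env, then the .env file at projectRoot, then process.env. A minimal illustration, not the repo's actual `resolveEnvVariable`; the `dotenv` usage and the function name are assumptions:

```js
import fs from 'fs';
import path from 'path';
import dotenv from 'dotenv';

/**
 * Resolve an environment variable with the precedence described above:
 * session.env -> .env file at projectRoot -> process.env.
 * Illustrative sketch only; the real utils.js may differ in details.
 */
function resolveEnvVariableSketch(key, session, projectRoot) {
	// 1. MCP session environment takes priority
	if (session?.env?.[key]) return session.env[key];

	// 2. Fall back to a .env file in the project root, if present
	if (projectRoot) {
		const envPath = path.join(projectRoot, '.env');
		if (fs.existsSync(envPath)) {
			const parsed = dotenv.parse(fs.readFileSync(envPath, 'utf8'));
			if (parsed[key]) return parsed[key];
		}
	}

	// 3. Finally, the process environment
	return process.env[key];
}

// e.g. resolveEnvVariableSketch('OPENROUTER_API_KEY', session, '/path/to/project')
```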
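Next, the 'update' command's response parsing. The commit describes a bracket-first strategy with a fenced-json-block fallback, then prefix stripping, then a raw parse. A hedged sketch with a hypothetical helper name, not the repo's actual parser:

```js
/**
 * Sketch of the fallback chain described above. Hypothetical helper;
 * the real update-tasks parsing code is more involved.
 */
function parseTaskArraySketch(text) {
	// 1. Outermost '[' ... ']' extraction targets the array structure directly
	const start = text.indexOf('[');
	const end = text.lastIndexOf(']');
	if (start !== -1 && end > start) {
		try {
			return JSON.parse(text.slice(start, end + 1));
		} catch {
			/* fall through to the next strategy */
		}
	}

	// 2. Strict fenced json code block (regex avoids literal triple backticks)
	const fenced = text.match(/`{3}json\s*([\s\S]*?)`{3}/);
	if (fenced) {
		try {
			return JSON.parse(fenced[1]);
		} catch {
			/* fall through */
		}
	}

	// 3. Strip any leading non-JSON prefix before the first bracket or brace
	try {
		return JSON.parse(text.replace(/^[^[{]*/, ''));
	} catch {
		/* fall through */
	}

	// 4. Last resort: raw parse (throws if still invalid)
	return JSON.parse(text);
}
```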
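The withNormalizedProjectRoot HOF referenced in several commits follows the usual higher-order-function shape: derive and normalize projectRoot once, then hand the tool body an args object that already carries it. A minimal sketch under that assumption; `deriveRootFromSession` is a hypothetical placeholder, not the repo's actual session helper:

```js
import path from 'path';

// Hypothetical stand-in for the session-based root lookup
function deriveRootFromSession(session) {
	return session?.env?.PROJECT_ROOT ?? process.cwd();
}

/**
 * Sketch of a withNormalizedProjectRoot HOF: wraps an MCP tool's execute
 * function so the body always receives a normalized, absolute projectRoot.
 */
function withNormalizedProjectRoot(execute) {
	return async (args, context) => {
		const rawRoot = args.projectRoot ?? deriveRootFromSession(context?.session);
		// resolve + normalize smooths over Windows/Linux separator differences
		const projectRoot = path.normalize(path.resolve(rawRoot));
		return execute({ ...args, projectRoot }, context);
	};
}

// Usage: the wrapped tool body can rely on args.projectRoot unconditionally
const executeUpdateTool = withNormalizedProjectRoot(async (args) => {
	const tasksPath = path.join(args.projectRoot, '.taskmaster/tasks/tasks.json');
	// ... tool logic using tasksPath ...
	return { tasksPath };
});
```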
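The mixed-module bug fixed in #633 is the classic one below: an ES module has no `module` object, so a CommonJS export alongside ESM imports fails. An illustrative entrypoint, not the actual CLI file:

```js
// The bug shape from #633 was mixing module systems in one file:
//
//   import { runCLI } from './cli.js';   // ESM
//   module.exports = { runCLI };         // CommonJS: invalid, since `module`
//                                        // is not defined in an ES module
//
// The fix is a consistent native ESM export:
import { runCLI } from './cli.js';

export { runCLI };
```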
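The Bedrock change amounts to: warn rather than fail when explicit credentials are absent, and stop hardcoding region/profile so the AWS SDK's default chain (env vars, ~/.aws config files, instance profiles) can take over. A hedged sketch assuming the Vercel AI SDK's @ai-sdk/amazon-bedrock package; exact option names may differ by version:

```js
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';

// Warn (don't throw) when credentials aren't in the environment; the AWS SDK
// default provider chain (config files, instance profiles) may still supply them.
if (!process.env.AWS_ACCESS_KEY_ID || !process.env.AWS_SECRET_ACCESS_KEY) {
	console.warn(
		'AWS credentials not found in environment; falling back to system configuration.'
	);
}

// No hardcoded region or profile: omit explicit settings so the underlying
// AWS SDK resolves region and credentials from its standard sources.
const bedrock = createAmazonBedrock({});

// e.g. hand bedrock('anthropic.claude-3-7-sonnet-20250219-v1:0') to generateText
```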
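Marker-based root detection, as extended by #804, walks up from a starting directory until a known marker file is found. Illustrative only; the real findProjectRoot() marker list is longer than shown:

```js
import fs from 'fs';
import path from 'path';

// 'pyproject.toml' is the marker added in #804; package.json and .git stand in
// for the existing Node.js- and Git-based detection the commit mentions.
const PROJECT_MARKERS = ['package.json', 'pyproject.toml', '.git'];

function findProjectRootSketch(startDir = process.cwd()) {
	let dir = path.resolve(startDir);
	while (true) {
		if (PROJECT_MARKERS.some((m) => fs.existsSync(path.join(dir, m)))) {
			return dir;
		}
		const parent = path.dirname(dir);
		if (parent === dir) return null; // hit the filesystem root without a match
		dir = parent;
	}
}
```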
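The optional-dependency pattern for @anthropic-ai/claude-code relies on a dynamic import performed only at generation time, so the provider can be constructed even when the package was never installed. A sketch of the shape described in the commit, not the actual language-model.js:

```js
// Cache the lazily loaded SDK module across calls
let claudeCodeSdk = null;

async function loadClaudeCodeSdk() {
	if (claudeCodeSdk) return claudeCodeSdk;
	try {
		// The import only runs when generation is actually attempted
		claudeCodeSdk = await import('@anthropic-ai/claude-code');
		return claudeCodeSdk;
	} catch (err) {
		throw new Error(
			"Claude Code SDK not installed. Install '@anthropic-ai/claude-code' to use the claude-code provider."
		);
	}
}

// Called from doGenerate()/doStream()-style methods, per the commit message
async function doGenerateSketch(prompt) {
	const sdk = await loadClaudeCodeSdk(); // only now does the module resolve
	// ... use sdk to run the prompt via the local Claude Code CLI ...
}
```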
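Finally, the compatible-endpoint and Azure commits both hinge on pointing a model role at a custom baseURL. A hypothetical .taskmaster/config.json fragment, shown here as a JS object; only `baseURL` and `azureBaseURL` are named in the commits, so the surrounding key names are assumptions:

```js
// Hypothetical config shape: a per-model baseURL points a role at a
// compatible or Azure endpoint, and azureBaseURL mirrors the global
// option named in the Azure documentation commit.
const exampleConfig = {
	models: {
		main: {
			provider: 'azure',
			modelId: 'gpt-4o',
			baseURL: 'https://your-resource.openai.azure.com/' // assumed placement
		}
	},
	global: {
		azureBaseURL: 'https://your-resource.openai.azure.com/'
	}
};
```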
432 lines · 17 KiB · JavaScript
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import { z } from 'zod';

import {
	log,
	writeJSON,
	enableSilentMode,
	disableSilentMode,
	isSilentMode,
	readJSON,
	findTaskById,
	ensureTagMetadata,
	getCurrentTag
} from '../utils.js';

import { generateObjectService } from '../ai-services-unified.js';
import { getDebugFlag } from '../config-manager.js';
import generateTaskFiles from './generate-task-files.js';
import { displayAiUsageSummary } from '../ui.js';

// Define the Zod schema for a SINGLE task object
const prdSingleTaskSchema = z.object({
	id: z.number().int().positive(),
	title: z.string().min(1),
	description: z.string().min(1),
	details: z.string().nullable(),
	testStrategy: z.string().nullable(),
	priority: z.enum(['high', 'medium', 'low']).nullable(),
	dependencies: z.array(z.number().int().positive()).nullable(),
	status: z.string().nullable()
});

// Define the Zod schema for the ENTIRE expected AI response object
const prdResponseSchema = z.object({
	tasks: z.array(prdSingleTaskSchema),
	metadata: z.object({
		projectName: z.string(),
		totalTasks: z.number(),
		sourceFile: z.string(),
		generatedAt: z.string()
	})
});

/**
 * Parse a PRD file and generate tasks
 * @param {string} prdPath - Path to the PRD file
 * @param {string} tasksPath - Path to the tasks.json file
 * @param {number} numTasks - Number of tasks to generate
 * @param {Object} options - Additional options
 * @param {boolean} [options.force=false] - Whether to overwrite existing tasks.json.
 * @param {boolean} [options.append=false] - Append to existing tasks file.
 * @param {boolean} [options.research=false] - Use research model for enhanced PRD analysis.
 * @param {Object} [options.reportProgress] - Function to report progress (optional, likely unused).
 * @param {Object} [options.mcpLog] - MCP logger object (optional).
 * @param {Object} [options.session] - Session object from MCP server (optional).
 * @param {string} [options.projectRoot] - Project root path (for MCP/env fallback).
 * @param {string} [options.tag] - Target tag for task generation.
 * @returns {Promise<{success: boolean, tasksPath: string, telemetryData?: Object, tagInfo?: Object}>}
 */
async function parsePRD(prdPath, tasksPath, numTasks, options = {}) {
	const {
		reportProgress,
		mcpLog,
		session,
		projectRoot,
		force = false,
		append = false,
		research = false,
		tag
	} = options;
	const isMCP = !!mcpLog;
	// Output format is derived from the calling context: 'json' for MCP, 'text' for CLI
	const outputFormat = isMCP ? 'json' : 'text';

	// Use the provided tag, or the current active tag, or default to 'master'
	const targetTag = tag || getCurrentTag(projectRoot) || 'master';

	const logFn = mcpLog
		? mcpLog
		: {
				// Wrapper for CLI
				info: (...args) => log('info', ...args),
				warn: (...args) => log('warn', ...args),
				error: (...args) => log('error', ...args),
				debug: (...args) => log('debug', ...args),
				success: (...args) => log('success', ...args)
			};

	// Create custom reporter using logFn
	const report = (message, level = 'info') => {
		// Check logFn directly
		if (logFn && typeof logFn[level] === 'function') {
			logFn[level](message);
		} else if (!isSilentMode() && outputFormat === 'text') {
			// Fallback to original log only if necessary and in CLI text mode
			log(level, message);
		}
	};

	report(
		`Parsing PRD file: ${prdPath}, Force: ${force}, Append: ${append}, Research: ${research}`
	);

	let existingTasks = [];
	let nextId = 1;
	let aiServiceResponse = null;

	try {
		// Check if there are existing tasks in the target tag
		let hasExistingTasksInTag = false;
		if (fs.existsSync(tasksPath)) {
			try {
				// Read the entire file to check if the tag exists
				const existingFileContent = fs.readFileSync(tasksPath, 'utf8');
				const allData = JSON.parse(existingFileContent);

				// Check if the target tag exists and has tasks
				if (
					allData[targetTag] &&
					Array.isArray(allData[targetTag].tasks) &&
					allData[targetTag].tasks.length > 0
				) {
					hasExistingTasksInTag = true;
					existingTasks = allData[targetTag].tasks;
					nextId = Math.max(...existingTasks.map((t) => t.id || 0)) + 1;
				}
			} catch (error) {
				// If we can't read the file or parse it, assume no existing tasks in this tag
				hasExistingTasksInTag = false;
			}
		}

		// Handle file existence and overwrite/append logic based on target tag
		if (hasExistingTasksInTag) {
			if (append) {
				report(
					`Append mode enabled. Found ${existingTasks.length} existing tasks in tag '${targetTag}'. Next ID will be ${nextId}.`,
					'info'
				);
			} else if (!force) {
				// Not appending and not forcing overwrite, and there are existing tasks in the target tag
				const overwriteError = new Error(
					`Tag '${targetTag}' already contains ${existingTasks.length} tasks. Use --force to overwrite or --append to add to existing tasks.`
				);
				report(overwriteError.message, 'error');
				if (outputFormat === 'text') {
					console.error(chalk.red(overwriteError.message));
					process.exit(1);
				} else {
					throw overwriteError;
				}
			} else {
				// Force overwrite is true
				report(
					`Force flag enabled. Overwriting existing tasks in tag '${targetTag}'.`,
					'info'
				);
			}
		} else {
			// No existing tasks in target tag, proceed without confirmation
			report(
				`Tag '${targetTag}' is empty or doesn't exist. Creating/updating tag with new tasks.`,
				'info'
			);
		}

		report(`Reading PRD content from ${prdPath}`, 'info');
		const prdContent = fs.readFileSync(prdPath, 'utf8');
		if (!prdContent) {
			throw new Error(`Input file ${prdPath} is empty or could not be read.`);
		}

		// Research-specific enhancements to the system prompt
		const researchPromptAddition = research
			? `\nBefore breaking down the PRD into tasks, you will:
1. Research and analyze the latest technologies, libraries, frameworks, and best practices that would be appropriate for this project
2. Identify any potential technical challenges, security concerns, or scalability issues not explicitly mentioned in the PRD without discarding any explicit requirements or going overboard with complexity -- always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches
3. Consider current industry standards and evolving trends relevant to this project (this step aims to solve LLM hallucinations and out of date information due to training data cutoff dates)
4. Evaluate alternative implementation approaches and recommend the most efficient path
5. Include specific library versions, helpful APIs, and concrete implementation guidance based on your research
6. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches

Your task breakdown should incorporate this research, resulting in more detailed implementation guidance, more accurate dependency mapping, and more precise technology recommendations than would be possible from the PRD text alone, while maintaining all explicit requirements and best practices and all details and nuances of the PRD.`
			: '';

		// Base system prompt for PRD parsing
		const systemPrompt = `You are an AI assistant specialized in analyzing Product Requirements Documents (PRDs) and generating a structured, logically ordered, dependency-aware and sequenced list of development tasks in JSON format.${researchPromptAddition}

Analyze the provided PRD content and generate approximately ${numTasks} top-level development tasks. If the complexity or the level of detail of the PRD is high, generate more tasks relative to the complexity of the PRD
Each task should represent a logical unit of work needed to implement the requirements and focus on the most direct and effective way to implement the requirements without unnecessary complexity or overengineering. Include pseudo-code, implementation details, and test strategy for each task. Find the most up to date information to implement each task.
Assign sequential IDs starting from ${nextId}. Infer title, description, details, and test strategy for each task based *only* on the PRD content.
Set status to 'pending', dependencies to an empty array [], and priority to 'medium' initially for all tasks.
Respond ONLY with a valid JSON object containing a single key "tasks", where the value is an array of task objects adhering to the provided Zod schema. Do not include any explanation or markdown formatting.

Each task should follow this JSON structure:
{
	"id": number,
	"title": string,
	"description": string,
	"status": "pending",
	"dependencies": number[] (IDs of tasks this depends on),
	"priority": "high" | "medium" | "low",
	"details": string (implementation details),
	"testStrategy": string (validation approach)
}

Guidelines:
1. Unless complexity warrants otherwise, create exactly ${numTasks} tasks, numbered sequentially starting from ${nextId}
2. Each task should be atomic and focused on a single responsibility following the most up to date best practices and standards
3. Order tasks logically - consider dependencies and implementation sequence
4. Early tasks should focus on setup, core functionality first, then advanced features
5. Include clear validation/testing approach for each task
6. Set appropriate dependency IDs (a task can only depend on tasks with lower IDs, potentially including existing tasks with IDs less than ${nextId} if applicable)
7. Assign priority (high/medium/low) based on criticality and dependency order
8. Include detailed implementation guidance in the "details" field${research ? ', with specific libraries and version recommendations based on your research' : ''}
9. If the PRD contains specific requirements for libraries, database schemas, frameworks, tech stacks, or any other implementation details, STRICTLY ADHERE to these requirements in your task breakdown and do not discard them under any circumstance
10. Focus on filling in any gaps left by the PRD or areas that aren't fully specified, while preserving all explicit requirements
11. Always aim to provide the most direct path to implementation, avoiding over-engineering or roundabout approaches${research ? '\n12. For each task, include specific, actionable guidance based on current industry standards and best practices discovered through research' : ''}`;

		// Build user prompt with PRD content
		const userPrompt = `Here's the Product Requirements Document (PRD) to break down into approximately ${numTasks} tasks, starting IDs from ${nextId}:${research ? '\n\nRemember to thoroughly research current best practices and technologies before task breakdown to provide specific, actionable implementation details.' : ''}\n\n${prdContent}\n\n

Return your response in this format:
{
    "tasks": [
        {
            "id": 1,
            "title": "Setup Project Repository",
            "description": "...",
            ...
        },
        ...
    ],
    "metadata": {
        "projectName": "PRD Implementation",
        "totalTasks": ${numTasks},
        "sourceFile": "${prdPath}",
        "generatedAt": "YYYY-MM-DD"
    }
}`;

		// Call the unified AI service
		report(
			`Calling AI service to generate tasks from PRD${research ? ' with research-backed analysis' : ''}...`,
			'info'
		);

		// Call generateObjectService with the CORRECT schema and additional telemetry params
		aiServiceResponse = await generateObjectService({
			role: research ? 'research' : 'main', // Use research role if flag is set
			session: session,
			projectRoot: projectRoot,
			schema: prdResponseSchema,
			objectName: 'tasks_data',
			systemPrompt: systemPrompt,
			prompt: userPrompt,
			commandName: 'parse-prd',
			outputType: isMCP ? 'mcp' : 'cli'
		});

		// Create the directory if it doesn't exist
		const tasksDir = path.dirname(tasksPath);
		if (!fs.existsSync(tasksDir)) {
			fs.mkdirSync(tasksDir, { recursive: true });
		}
		logFn.success(
			`Successfully parsed PRD via AI service${research ? ' with research-backed analysis' : ''}.`
		);

		// Validate and Process Tasks
		// const generatedData = aiServiceResponse?.mainResult?.object;

		// Robustly get the actual AI-generated object
		let generatedData = null;
		if (aiServiceResponse?.mainResult) {
			if (
				typeof aiServiceResponse.mainResult === 'object' &&
				aiServiceResponse.mainResult !== null &&
				'tasks' in aiServiceResponse.mainResult
			) {
				// If mainResult itself is the object with a 'tasks' property
				generatedData = aiServiceResponse.mainResult;
			} else if (
				typeof aiServiceResponse.mainResult.object === 'object' &&
				aiServiceResponse.mainResult.object !== null &&
				'tasks' in aiServiceResponse.mainResult.object
			) {
				// If mainResult.object is the object with a 'tasks' property
				generatedData = aiServiceResponse.mainResult.object;
			}
		}

		if (!generatedData || !Array.isArray(generatedData.tasks)) {
			logFn.error(
				`Internal Error: generateObjectService returned unexpected data structure: ${JSON.stringify(generatedData)}`
			);
			throw new Error(
				'AI service returned unexpected data structure after validation.'
			);
		}

		let currentId = nextId;
		const taskMap = new Map();
		const processedNewTasks = generatedData.tasks.map((task) => {
			const newId = currentId++;
			taskMap.set(task.id, newId);
			return {
				...task,
				id: newId,
				status: 'pending',
				priority: task.priority || 'medium',
				dependencies: Array.isArray(task.dependencies) ? task.dependencies : [],
				subtasks: []
			};
		});

		// Remap dependencies for the NEWLY processed tasks
		processedNewTasks.forEach((task) => {
			task.dependencies = task.dependencies
				.map((depId) => taskMap.get(depId)) // Map old AI ID to new sequential ID
				.filter(
					(newDepId) =>
						newDepId != null && // Must exist
						newDepId < task.id && // Must be a lower ID (could be existing or newly generated)
						(findTaskById(existingTasks, newDepId) || // Check if it exists in old tasks OR
							processedNewTasks.some((t) => t.id === newDepId)) // check if it exists in new tasks
				);
		});

		const finalTasks = append
			? [...existingTasks, ...processedNewTasks]
			: processedNewTasks;

		// Read the existing file to preserve other tags
		let outputData = {};
		if (fs.existsSync(tasksPath)) {
			try {
				const existingFileContent = fs.readFileSync(tasksPath, 'utf8');
				outputData = JSON.parse(existingFileContent);
			} catch (error) {
				// If we can't read the existing file, start with empty object
				outputData = {};
			}
		}

		// Update only the target tag, preserving other tags
		outputData[targetTag] = {
			tasks: finalTasks,
			metadata: {
				created:
					outputData[targetTag]?.metadata?.created || new Date().toISOString(),
				updated: new Date().toISOString(),
				description: `Tasks for ${targetTag} context`
			}
		};

		// Ensure the target tag has proper metadata
		ensureTagMetadata(outputData[targetTag], {
			description: `Tasks for ${targetTag} context`
		});

		// Write the complete data structure back to the file
		fs.writeFileSync(tasksPath, JSON.stringify(outputData, null, 2));
		report(
			`Successfully ${append ? 'appended' : 'generated'} ${processedNewTasks.length} tasks in ${tasksPath}${research ? ' with research-backed analysis' : ''}`,
			'success'
		);

		// Generate markdown task files after writing tasks.json
		// await generateTaskFiles(tasksPath, path.dirname(tasksPath), { mcpLog });

		// Handle CLI output (e.g., success message)
		if (outputFormat === 'text') {
			console.log(
				boxen(
					chalk.green(
						`Successfully generated ${processedNewTasks.length} new tasks${research ? ' with research-backed analysis' : ''}. Total tasks in ${tasksPath}: ${finalTasks.length}`
					),
					{ padding: 1, borderColor: 'green', borderStyle: 'round' }
				)
			);

			console.log(
				boxen(
					chalk.white.bold('Next Steps:') +
						'\n\n' +
						`${chalk.cyan('1.')} Run ${chalk.yellow('task-master list')} to view all tasks\n` +
						`${chalk.cyan('2.')} Run ${chalk.yellow('task-master expand --id=<id>')} to break down a task into subtasks`,
					{
						padding: 1,
						borderColor: 'cyan',
						borderStyle: 'round',
						margin: { top: 1 }
					}
				)
			);

			if (aiServiceResponse && aiServiceResponse.telemetryData) {
				displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
			}
		}

		// Return telemetry data
		return {
			success: true,
			tasksPath,
			telemetryData: aiServiceResponse?.telemetryData,
			tagInfo: aiServiceResponse?.tagInfo
		};
	} catch (error) {
		report(`Error parsing PRD: ${error.message}`, 'error');

		// Only show error UI for text output (CLI)
		if (outputFormat === 'text') {
			console.error(chalk.red(`Error: ${error.message}`));

			if (getDebugFlag(projectRoot)) {
				// Use projectRoot for debug flag check
				console.error(error);
			}

			process.exit(1);
		} else {
			throw error; // Re-throw for JSON output
		}
	}
}

export default parsePRD;
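A hedged usage sketch of the exported function above, assuming a Node ESM script run from the project root; the file paths and task count are illustrative, not prescribed:

```js
import parsePRD from './scripts/modules/task-manager/parse-prd.js';

// Generate roughly 10 tasks from a PRD into the tagged tasks file.
// With no mcpLog in options, outputFormat resolves to 'text', so the
// CLI success boxes and telemetry summary are printed.
const result = await parsePRD(
	'.taskmaster/docs/prd.txt', // prdPath (assumed location)
	'.taskmaster/tasks/tasks.json', // tasksPath
	10, // numTasks
	{
		projectRoot: process.cwd(),
		append: false, // set true to add to existing tasks in the tag
		research: false, // set true to use the research model role
		tag: 'master'
	}
);

console.log(result.success, result.tasksPath);
```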