mirror of
https://github.com/eyaltoledano/claude-task-master.git
synced 2025-06-27 00:29:58 +00:00

* Update SWE scores (#657) * docs: Auto-update and format models.md * feat: Flexible brand rules management (#460) * chore(docs): update docs and rules related to model management. * feat(ai): Add OpenRouter AI provider support Integrates the OpenRouter AI provider using the Vercel AI SDK adapter (@openrouter/ai-sdk-provider). This allows users to configure and utilize models available through the OpenRouter platform. - Added src/ai-providers/openrouter.js with standard Vercel AI SDK wrapper functions (generateText, streamText, generateObject). - Updated ai-services-unified.js to include the OpenRouter provider in the PROVIDER_FUNCTIONS map and API key resolution logic. - Verified config-manager.js handles OpenRouter API key checks correctly. - Users can configure OpenRouter models via .taskmasterconfig using the task-master models command or MCP models tool. Requires OPENROUTER_API_KEY. - Enhanced error handling in ai-services-unified.js to provide clearer messages when generateObjectService fails due to lack of underlying tool support in the selected model/provider endpoint. * feat(cli): Add --status/-s filter flag to show command and get-task MCP tool Implements the ability to filter subtasks displayed by the `task-master show <id>` command using the `--status` (or `-s`) flag. This is also available in the MCP context. - Modified `commands.js` to add the `--status` option to the `show` command definition. - Updated `utils.js` (`findTaskById`) to handle the filtering logic and return original subtask counts/arrays when filtering. - Updated `ui.js` (`displayTaskById`) to use the filtered subtasks for the table, display a summary line when filtering, and use the original subtask list for the progress bar calculation. - Updated MCP `get_task` tool and `showTaskDirect` function to accept and pass the `status` parameter. - Added changeset entry. * fix(tasks): Improve next task logic to be subtask-aware * fix(tasks): Enable removing multiple tasks/subtasks via comma-separated IDs - Refactors the core `removeTask` function (`task-manager/remove-task.js`) to accept and iterate over comma-separated task/subtask IDs. - Updates dependency cleanup and file regeneration logic to run once after processing all specified IDs. - Adjusts the `remove-task` CLI command (`commands.js`) description and confirmation prompt to handle multiple IDs correctly. - Fixes a bug in the CLI confirmation prompt where task/subtask titles were not being displayed correctly. - Updates the `remove_task` MCP tool description to reflect the new multi-ID capability. This addresses the previously known issue where only the first ID in a comma-separated list was processed. Closes #140 * Update README.md (#342) * Update Discord badge (#337) * refactor(init): Improve robustness and dependencies; Update template deps for AI SDKs; Silence npm install in MCP; Improve conditional model setup logic; Refactor init.js flags; Tweak Getting Started text; Fix MCP server launch command; Update default model in config template * Refactor: Improve MCP logging, update E2E & tests Refactors MCP server logging and updates testing infrastructure. - MCP Server: - Replaced manual logger wrappers with centralized `createLogWrapper` utility. - Updated direct function calls to use `{ session, mcpLog }` context. - Removed deprecated `model` parameter from analyze, expand-all, expand-task tools. - Adjusted MCP tool import paths and parameter descriptions. - Documentation: - Modified `docs/configuration.md`. - Modified `docs/tutorial.md`. - Testing: - E2E Script (`run_e2e.sh`): - Removed `set -e`. - Added LLM analysis function (`analyze_log_with_llm`) & integration. - Adjusted test run directory creation timing. - Added debug echo statements. - Deleted Unit Tests: Removed `ai-client-factory.test.js`, `ai-client-utils.test.js`, `ai-services.test.js`. - Modified Fixtures: Updated `scripts/task-complexity-report.json`. - Dev Scripts: - Modified `scripts/dev.js`. * chore(tests): Passes tests for merge candidate - Adjusted the interactive model default choice to be 'no change' instead of 'cancel setup' - E2E script has been perfected and works as designed provided there are all provider API keys .env in the root - Fixes the entire test suite to make sure it passes with the new architecture. - Fixes dependency command to properly show there is a validation failure if there is one. - Refactored config-manager.test.js mocking strategy and fixed assertions to read the real supported-models.json - Fixed rule-transformer.test.js assertion syntax and transformation logic adjusting replacement for search which was too broad. - Skip unstable tests in utils.test.js (log, readJSON, writeJSON error paths) due to SIGABRT crash. These tests trigger a native crash (SIGABRT), likely stemming from a conflict between internal chalk usage within the functions and Jest's test environment, possibly related to ESM module handling. * chore(wtf): removes chai. not sure how that even made it in here. also removes duplicate test in scripts/. * fix: ensure API key detection properly reads .env in MCP context Problem: - Task Master model configuration wasn't properly checking for API keys in the project's .env file when running through MCP - The isApiKeySet function was only checking session.env and process.env but not inspecting the .env file directly - This caused incorrect API key status reporting in MCP tools even when keys were properly set in .env Solution: - Modified resolveEnvVariable function in utils.js to properly read from .env file at projectRoot - Updated isApiKeySet to correctly pass projectRoot to resolveEnvVariable - Enhanced the key detection logic to have consistent behavior between CLI and MCP contexts - Maintains the correct precedence: session.env → .env file → process.env Testing: - Verified working correctly with both MCP and CLI tools - API keys properly detected in .env file in both contexts - Deleted .cursor/mcp.json to confirm introspection of .env as fallback works * fix(update): pass projectRoot through update command flow Modified ai-services-unified.js, update.js tool, and update-tasks.js direct function to correctly pass projectRoot. This enables the .env file API key fallback mechanism for the update command when running via MCP, ensuring consistent key resolution with the CLI context. * fix(analyze-complexity): pass projectRoot through analyze-complexity flow Modified analyze-task-complexity.js core function, direct function, and analyze.js tool to correctly pass projectRoot. Fixed import error in tools/index.js. Added debug logging to _resolveApiKey in ai-services-unified.js. This enables the .env API key fallback for analyze_project_complexity. * fix(add-task): pass projectRoot and fix logging/refs Modified add-task core, direct function, and tool to pass projectRoot for .env API key fallback. Fixed logFn reference error and removed deprecated reportProgress call in core addTask function. Verified working. * fix(parse-prd): pass projectRoot and fix schema/logging Modified parse-prd core, direct function, and tool to pass projectRoot for .env API key fallback. Corrected Zod schema used in generateObjectService call. Fixed logFn reference error in core parsePRD. Updated unit test mock for utils.js. * fix(update-task): pass projectRoot and adjust parsing Modified update-task-by-id core, direct function, and tool to pass projectRoot. Reverted parsing logic in core function to prioritize `{...}` extraction, resolving parsing errors. Fixed ReferenceError by correctly destructuring projectRoot. * fix(update-subtask): pass projectRoot and allow updating done subtasks Modified update-subtask-by-id core, direct function, and tool to pass projectRoot for .env API key fallback. Removed check preventing appending details to completed subtasks. * fix(mcp, expand): pass projectRoot through expand/expand-all flows Problem: expand_task & expand_all MCP tools failed with .env keys due to missing projectRoot propagation for API key resolution. Also fixed a ReferenceError: wasSilent is not defined in expandTaskDirect. Solution: Modified core logic, direct functions, and MCP tools for expand-task and expand-all to correctly destructure projectRoot from arguments and pass it down through the context object to the AI service call (generateTextService). Fixed wasSilent scope in expandTaskDirect. Verification: Tested expand_task successfully in MCP using .env keys. Reviewed expand_all flow for correct projectRoot propagation. * chore: prettier * fix(expand-all): add projectRoot to expandAllTasksDirect invokation. * fix(update-tasks): Improve AI response parsing for 'update' command Refactors the JSON array parsing logic within in . The previous logic primarily relied on extracting content from markdown code blocks (json or javascript), which proved brittle when the AI response included comments or non-JSON text within the block, leading to parsing errors for the command. This change modifies the parsing strategy to first attempt extracting content directly between the outermost '[' and ']' brackets. This is more robust as it targets the expected array structure directly. If bracket extraction fails, it falls back to looking for a strict json code block, then prefix stripping, before attempting a raw parse. This approach aligns with the successful parsing strategy used for single-object responses in and resolves the parsing errors previously observed with the command. * refactor(mcp): introduce withNormalizedProjectRoot HOF for path normalization Added HOF to mcp tools utils to normalize projectRoot from args/session. Refactored get-task tool to use HOF. Updated relevant documentation. * refactor(mcp): apply withNormalizedProjectRoot HOF to update tool Problem: The MCP tool previously handled project root acquisition and path resolution within its method, leading to potential inconsistencies and repetition. Solution: Refactored the tool () to utilize the new Higher-Order Function (HOF) from . Specific Changes: - Imported HOF. - Updated the Zod schema for the parameter to be optional, as the HOF handles deriving it from the session if not provided. - Wrapped the entire function body with the HOF. - Removed the manual call to from within the function body. - Destructured the from the object received by the wrapped function, ensuring it's the normalized path provided by the HOF. - Used the normalized variable when calling and when passing arguments to . This change standardizes project root handling for the tool, simplifies its method, and ensures consistent path normalization. This serves as the pattern for refactoring other MCP tools. * fix: apply to all tools withNormalizedProjectRoot to fix projectRoot issues for linux and windows * fix: add rest of tools that need wrapper * chore: cleanup tools to stop using rootFolder and remove unused imports * chore: more cleanup * refactor: Improve update-subtask, consolidate utils, update config This commit introduces several improvements and refactorings across MCP tools, core logic, and configuration. **Major Changes:** 1. **Refactor updateSubtaskById:** - Switched from generateTextService to generateObjectService for structured AI responses, using a Zod schema (subtaskSchema) for validation. - Revised prompts to have the AI generate relevant content based on user request and context (parent/sibling tasks), while explicitly preventing AI from handling timestamp/tag formatting. - Implemented **local timestamp generation (new Date().toISOString()) and formatting** (using <info added on ...> tags) within the function *after* receiving the AI response. This ensures reliable and correctly formatted details are appended. - Corrected logic to append only the locally formatted, AI-generated content block to the existing subtask.details. 2. **Consolidate MCP Utilities:** - Moved/consolidated the withNormalizedProjectRoot HOF into mcp-server/src/tools/utils.js. - Updated MCP tools (like update-subtask.js) to import withNormalizedProjectRoot from the new location. 3. **Refactor Project Initialization:** - Deleted the redundant mcp-server/src/core/direct-functions/initialize-project-direct.js file. - Updated mcp-server/src/core/task-master-core.js to import initializeProjectDirect from its correct location (./direct-functions/initialize-project.js). **Other Changes:** - Updated .taskmasterconfig fallback model to claude-3-7-sonnet-20250219. - Clarified model cost representation in the models tool description (taskmaster.mdc and mcp-server/src/tools/models.js). * fix: displayBanner logging when silentMode is active (#385) * fix: improve error handling, test options, and model configuration - Enhance error validation in parse-prd.js and update-tasks.js - Fix bug where mcpLog was incorrectly passed as logWrapper - Improve error messages and response formatting - Add --skip-verification flag to E2E tests - Update MCP server config that ships with init to match new API key structure - Fix task force/append handling in parse-prd command - Increase column width in update-tasks display * chore: fixes parse prd to show loading indicator in cli. * fix(parse-prd): suggested fix for mcpLog was incorrect. reverting to my previously working code. * chore(init): No longer ships readme with task-master init (commented out for now). No longer looking for task-master-mcp, instead checked for task-master-ai - this should prevent the init sequence from needlessly adding another mcp server with task-master-mcp to the mpc.json which a ton of people probably ran into. * chore: restores 3.7 sonnet as the main role. * fix(add/remove-dependency): dependency mcp tools were failing due to hard-coded tasks path in generate task files. * chore: removes tasks json backup that was temporarily created. * fix(next): adjusts mcp tool response to correctly return the next task/subtask. Also adds nextSteps to the next task response. * chore: prettier * chore: readme typos * fix(config): restores sonnet 3.7 as default main role. * Version Packages * hotfix: move production package to "dependencies" (#399) * Version Packages * Fix: issues with 0.13.0 not working (#402) * Exit prerelease mode and version packages * hotfix: move production package to "dependencies" * Enter prerelease mode and version packages * Enter prerelease mode and version packages * chore: cleanup * chore: improve pre.json and add pre-release workflow * chore: fix package.json * chore: cleanup * chore: improve pre-release workflow * chore: allow github actions to commit * extract fileMap and conversionConfig into brand profile * extract into brand profile * add windsurf profile * add remove brand rules function * fix regex * add rules command to add/remove rules for a specific brand * fix post processing for roo * allow multiples * add cursor profile * update test for new structure * move rules to assets * use assets/rules for rules files * use standardized setupMCP function * fix formatting * fix formatting * add logging * fix escapes * default to cursor * allow init with certain rulesets; no more .windsurfrules * update docs * update log msg * fix formatting * keep mdc extension for cursor * don't rewrite .mdc to .md inside the files * fix roo init (add modes) * fix cursor init (don't use roo transformation by default) * use more generic function names * update docs * fix formatting * update function names * add changeset * add rules to mcp initialize project * register tool with mcp server * update docs * add integration test * fix cursor initialization * rule selection * fix formatting * fix MCP - remove yes flag * add import * update roo tests * add/update tests * remove test * add rules command test * update MCP responses, centralize rules profiles & helpers * fix logging and MCP response messages * fix formatting * incorrect test * fix tests * update fileMap * fix file extension transformations * fix formatting * add rules command test * test already covered * fix formatting * move renaming logic into profiles * make sure dir is deleted (DS_Store) * add confirmation for rules removal * add force flag for rules remove * use force flag for test * remove yes parameter * fix formatting * import brand profiles from rule-transformer.js * update comment * add interactive rules setup * optimize * only copy rules specifically listed in fileMap * update comment * add cline profile * add brandDir to remove ambiguity and support Cline * specify whether to create mcp config and filename * add mcpConfigName value for parh * fix formatting * remove rules just for this repository - only include rules to be distributed * update error message * update "brand rules" to "rules" * update to minor * remove comment * remove comments * move to /src/utils * optimize imports * move rules-setup.js to /src/utils * move rule-transformer.js to /src/utils * move confirmation to /src/ui/confirm.js * default to all rules * use profile js for mcp config settings * only run rules interactive setup if not provided via command line * update comments * initialize with all brands if nothing specified * update var name * clean up * enumerate brands for brand rules * update instructions * add test to check for brand profiles * fix quotes * update semantics and terminology from 'brand rules' to 'rules profiles' * fix formatting * fix formatting * update function name and remove copying of cursor rules, now handled by rules transformer * update comment * rename to mcp-config-setup.js * use enums for rules actions * add aggregate reporting for rules add command * add missing log message * use simpler path * use base profile with modifications for each brand * use displayName and don't select any defaults in setup * add confirmation if removing ALL rules profiles, and add --force flag on rules remove * Use profile-detection instead of rules-detection * add newline at end of mcp config * add proper formatting for mcp.json * update rules * update rules * update rules * add checks for other rules and other profile folder items before removing * update confirmation for rules remove * update docs * update changeset * fix for filepath at bottom of rule * Update cline profile and add test; adjust other rules tests * update changeset * update changeset * clarify init for all profiles if not specified * update rule text * revert text * use "rule profiles" instead of "rules profiles" * use standard tool mappings for windsurf * add Trae support * update changeset * update wording * update to 'rule profile' * remove unneeded exports to optimize loc * combine to /src/utils/profiles.js; add codex and claude code profiles * rename function and add boxen * add claude and codex integration tests * organize tests into profiles folder * mock fs for transformer tests * update UI * add cline and trae integration tests * update test * update function name * update formatting * Update change set with new profiles * move profile integration tests to subdirectory * properly create temp directories in /tmp folder * fix formatting * use taskmaster subfolder for the 2 TM rules * update wording * ensure subdirectory exists * update rules from next * update from next * update taskmaster rule * add details on new rules command and init * fix mcp init * fix MCP path to assets * remove duplication * remove duplication * MCP server path fixes for rules command * fix for CLI roo rules add/remove * update tests * fix formatting * fix pattern for interactive rule profiles setup * restore comments * restore comments * restore comments * remove unused import, fix quotes * add missing integration tests * add VS Code profile and tests * update docs and rules to include vscode profile * add rules subdirectory support per-profile * move profiles to /src * fix formatting * rename to remove ambiguity * use --setup for rules interactive setup * Fix Cursor deeplink installation with copy-paste instructions (#723) * change roo boomerang to orchestrator; update tests that don't use modes * fix newline * chore: cleanup --------- Co-authored-by: Eyal Toledano <eyal@microangel.so> Co-authored-by: Yuval <yuvalbl@users.noreply.github.com> Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com> Co-authored-by: Eyal Toledano <eutait@gmail.com> Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> * fix: providers config for azure, bedrock, and vertex (#822) * fix: providers config for azure, bedrock, and vertex * chore: improve changelog * chore: fix CI * fix: switch to ESM export to avoid mixed format (#633) * fix: switch to ESM export to avoid mixed format The CLI entrypoint was using `module.exports` alongside ESM `import` statements, resulting in an invalid mixed module format. Replaced the CommonJS export with a proper ESM `export` to maintain consistency and prevent module resolution issues. * chore: add changeset --------- Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> * fix: Fix external provider support (#726) * fix(bedrock): improve AWS credential handling and add model definitions (#826) * fix(bedrock): improve AWS credential handling and add model definitions - Change error to warning when AWS credentials are missing in environment - Allow fallback to system configuration (aws config files or instance profiles) - Remove hardcoded region and profile parameters in Bedrock client - Add Claude 3.7 Sonnet and DeepSeek R1 model definitions for Bedrock - Update config manager to properly handle Bedrock provider * chore: cleanup and format and small refactor --------- Co-authored-by: Ray Krueger <raykrueger@gmail.com> * docs: Auto-update and format models.md * Version Packages * chore: fix package.json * Fix/expand command tag corruption (#827) * fix(expand): Fix tag corruption in expand command - Fix tag parameter passing through MCP expand-task flow - Add tag parameter to direct function and tool registration - Fix contextGatherer method name from _buildDependencyContext to _buildDependencyGraphs - Add comprehensive test coverage for tag handling in expand-task - Ensures tagged task structure is preserved during expansion - Prevents corruption when tag is undefined. Fixes expand command causing tag corruption in tagged task lists. All existing tests pass and new test coverage added. * test(e2e): Add comprehensive tag-aware expand testing to verify tag corruption fix - Add new test section for feature-expand tag creation and testing - Verify tag preservation during expand, force expand, and expand --all operations - Test that master tag remains intact and feature-expand tag receives subtasks correctly - Fix file path references to use correct .taskmaster/tasks/tasks.json location - Fix config file check to use .taskmaster/config.json instead of .taskmasterconfig - All tag corruption verification tests pass successfully in E2E test * fix(changeset): Update E2E test improvements changeset to properly reflect tag corruption fix verification * chore(changeset): combine duplicate changesets for expand tag corruption fix Merge eighty-breads-wonder.md into bright-llamas-enter.md to consolidate the expand command fix and its comprehensive E2E testing enhancements into a single changeset entry. * Delete .changeset/eighty-breads-wonder.md * Version Packages * chore: fix package.json * fix(expand): Enhance context handling in expandAllTasks function - Added `tag` to context destructuring for better context management. - Updated `readJSON` call to include `contextTag` for improved data integrity. - Ensured the correct tag is passed during task expansion to prevent tag corruption. --------- Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de> Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> * Add pyproject.toml as project root marker (#804) * feat: Add pyproject.toml as project root marker - Added 'pyproject.toml' to the project markers array in findProjectRoot() - Enables Task Master to recognize Python projects using pyproject.toml - Improves project root detection for modern Python development workflows - Maintains compatibility with existing Node.js and Git-based detection * chore: add changeset --------- Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> * feat: add Claude Code provider support Implements Claude Code as a new AI provider that uses the Claude Code CLI without requiring API keys. This enables users to leverage Claude models through their local Claude Code installation. Key changes: - Add complete AI SDK v1 implementation for Claude Code provider - Custom SDK with streaming/non-streaming support - Session management for conversation continuity - JSON extraction for object generation mode - Support for advanced settings (maxTurns, allowedTools, etc.) - Integrate Claude Code into Task Master's provider system - Update ai-services-unified.js to handle keyless authentication - Add provider to supported-models.json with opus/sonnet models - Ensure correct maxTokens values are applied (opus: 32000, sonnet: 64000) - Fix maxTokens configuration issue - Add max_tokens property to getAvailableModels() output - Update setModel() to properly handle claude-code models - Create update-config-tokens.js utility for init process - Add comprehensive documentation - User guide with configuration examples - Advanced settings explanation and future integration options The implementation maintains full backward compatibility with existing providers while adding seamless Claude Code support to all Task Master commands. * fix(docs): correct invalid commands in claude-code usage examples - Remove non-existent 'do', 'estimate', and 'analyze' commands - Replace with actual Task Master commands: next, show, set-status - Use correct syntax for parse-prd and analyze-complexity * feat: make @anthropic-ai/claude-code an optional dependency This change makes the Claude Code SDK package optional, preventing installation failures for users who don't need Claude Code functionality. Changes: - Added @anthropic-ai/claude-code to optionalDependencies in package.json - Implemented lazy loading in language-model.js to only import the SDK when actually used - Updated documentation to explain the optional installation requirement - Applied formatting fixes to ensure code consistency Benefits: - Users without Claude Code subscriptions don't need to install the dependency - Reduces package size for users who don't use Claude Code - Prevents installation failures if the package is unavailable - Provides clear error messages when the package is needed but not installed The implementation uses dynamic imports to load the SDK only when doGenerate() or doStream() is called, ensuring the provider can be instantiated without the package present. * test: add comprehensive tests for ClaudeCodeProvider Addresses code review feedback about missing automated tests for the ClaudeCodeProvider. ## Changes - Added unit tests for ClaudeCodeProvider class covering constructor, validateAuth, and getClient methods - Added unit tests for ClaudeCodeLanguageModel testing lazy loading behavior and error handling - Added integration tests verifying optional dependency behavior when @anthropic-ai/claude-code is not installed ## Test Coverage 1. **Unit Tests**: - ClaudeCodeProvider: Basic functionality, no API key requirement, client creation - ClaudeCodeLanguageModel: Model initialization, lazy loading, error messages, warning generation 2. **Integration Tests**: - Optional dependency behavior when package is not installed - Clear error messages for users about missing package - Provider instantiation works but usage fails gracefully All tests pass and provide comprehensive coverage for the claude-code provider implementation. * revert: remove maxTokens update functionality from init This functionality was out of scope for the Claude Code provider PR. The automatic updating of maxTokens values in config.json during initialization is a general improvement that should be in a separate PR. Additionally, Claude Code ignores maxTokens and temperature parameters anyway, making this change irrelevant for the Claude Code integration. Removed: - scripts/modules/update-config-tokens.js - Import and usage in scripts/init.js * docs: add Claude Code support information to README - Added Claude Code to the list of supported providers in Requirements section - Noted that Claude Code requires no API key but needs Claude Code CLI - Added example of configuring claude-code/sonnet model - Created dedicated Claude Code Support section with key information - Added link to detailed Claude Code setup documentation This ensures users are aware of the Claude Code option as a no-API-key alternative for using Claude models. * style: apply biome formatting to test files * fix(models): add missing --claude-code flag to models command The models command was missing the --claude-code provider flag, preventing users from setting Claude Code models via CLI. While the backend already supported claude-code as a provider hint, there was no command-line flag to trigger it. Changes: - Added --claude-code option to models command alongside existing provider flags - Updated provider flags validation to include claudeCode option - Added claude-code to providerHint logic for all three model roles (main, research, fallback) - Updated error message to include --claude-code in list of mutually exclusive flags - Added example usage in help text This allows users to properly set Claude Code models using commands like: task-master models --set-main sonnet --claude-code task-master models --set-main opus --claude-code Without this flag, users would get "Model ID not found" errors when trying to set claude-code models, as the system couldn't determine the correct provider for generic model names like "sonnet" or "opus". * chore: add changeset for Claude Code provider feature * docs: Auto-update and format models.md * readme: add troubleshooting note for MCP tools not working * Feature/compatibleapisupport (#830) * add compatible platform api support * Adjust the code according to the suggestions * Fully revised as requested: restored all required checks, improved compatibility, and converted all comments to English. * feat: Add support for compatible API endpoints via baseURL * chore: Add changeset for compatible API support * chore: cleanup * chore: improve changeset * fix: package-lock.json * fix: package-lock.json --------- Co-authored-by: He-Xun <1226807142@qq.com> * Rename Roo Code "Boomerang" role to "Orchestrator" (#831) * feat: Enhanced project initialization with Git worktree detection (#743) * Fix Cursor deeplink installation with copy-paste instructions (#723) * detect git worktree * add changeset * add aliases and git flags * add changeset * rename and update test * add store tasks in git functionality * update changeset * fix newline * remove unused import * update command wording * update command option text * fix: update task by id (#834) * store tasks in git by default (#835) * Call rules interactive setup during init (#833) * chore: rc version bump * feat: Claude Code slash commands for Task Master (#774) * Fix Cursor deeplink installation with copy-paste instructions (#723) * fix: expand-task (#755) * docs: Update o3 model price (#751) * docs: Auto-update and format models.md * docs: Auto-update and format models.md * feat: Add Claude Code task master commands Adds Task Master slash commands for Claude Code under /project:tm/ namespace --------- Co-authored-by: Joe Danziger <joe@ticc.net> Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com> Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com> * feat: make more compatible with "o" family models (#839) * docs: Auto-update and format models.md * docs: Add comprehensive Azure OpenAI configuration documentation (#837) * docs: Add comprehensive Azure OpenAI configuration documentation - Add detailed Azure OpenAI configuration section with prerequisites, authentication, and setup options - Include both global and per-model baseURL configuration examples - Add comprehensive troubleshooting guide for common Azure OpenAI issues - Update environment variables section with Azure OpenAI examples - Add Azure OpenAI models to all model tables (Main, Research, Fallback) - Include prominent Azure configuration example in main documentation - Fix azureBaseURL format to use correct Azure OpenAI endpoint structure Addresses common Azure OpenAI setup challenges and provides clear guidance for new users. * refactor: Move Azure models from docs/models.md to scripts/modules/supported-models.json - Remove Azure model entries from documentation tables - Add Azure provider section to supported-models.json with gpt-4o, gpt-4o-mini, and gpt-4-1 - Maintain consistency with existing model configuration structure * docs: Auto-update and format models.md * Version Packages * chore: format fix --------- Co-authored-by: Riccardo (Ricky) Esclapon <32306488+ries9112@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by: Joe Danziger <joe@ticc.net> Co-authored-by: Eyal Toledano <eyal@microangel.so> Co-authored-by: Yuval <yuvalbl@users.noreply.github.com> Co-authored-by: Marijn van der Werf <marijn.vanderwerf@gmail.com> Co-authored-by: Eyal Toledano <eutait@gmail.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Nathan Marley <nathan@glowberrylabs.com> Co-authored-by: Ray Krueger <raykrueger@gmail.com> Co-authored-by: Parththipan Thaniperumkarunai <parththipan.thaniperumkarunai@milkmonkey.de> Co-authored-by: Parthy <52548018+mm-parthy@users.noreply.github.com> Co-authored-by: ejones40 <ethan.jones@fortyau.com> Co-authored-by: Ben Vargas <ben@vargas.com> Co-authored-by: V4G4X <34249137+V4G4X@users.noreply.github.com> Co-authored-by: He-Xun <1226807142@qq.com> Co-authored-by: neno <github@meaning.systems> Co-authored-by: Volodymyr Zahorniak <7808206+zahorniak@users.noreply.github.com> Co-authored-by: neno-is-ooo <204701868+neno-is-ooo@users.noreply.github.com> Co-authored-by: Jitesh Thakur <56656484+Jitha-afk@users.noreply.github.com>
1353 lines
38 KiB
JavaScript
1353 lines
38 KiB
JavaScript
/**
|
|
* utils.js
|
|
* Utility functions for the Task Master CLI
|
|
*/
|
|
|
|
import fs from 'fs';
|
|
import path from 'path';
|
|
import chalk from 'chalk';
|
|
import dotenv from 'dotenv';
|
|
// Import specific config getters needed here
|
|
import { getLogLevel, getDebugFlag } from './config-manager.js';
|
|
import * as gitUtils from './utils/git-utils.js';
|
|
import {
|
|
COMPLEXITY_REPORT_FILE,
|
|
LEGACY_COMPLEXITY_REPORT_FILE,
|
|
LEGACY_CONFIG_FILE
|
|
} from '../../src/constants/paths.js';
|
|
|
|
// Global silent mode flag
|
|
let silentMode = false;
|
|
|
|
// --- Environment Variable Resolution Utility ---
|
|
/**
|
|
* Resolves an environment variable's value.
|
|
* Precedence:
|
|
* 1. session.env (if session provided)
|
|
* 2. process.env
|
|
* 3. .env file at projectRoot (if projectRoot provided)
|
|
* @param {string} key - The environment variable key.
|
|
* @param {object|null} [session=null] - The MCP session object.
|
|
* @param {string|null} [projectRoot=null] - The project root directory (for .env fallback).
|
|
* @returns {string|undefined} The value of the environment variable or undefined if not found.
|
|
*/
|
|
function resolveEnvVariable(key, session = null, projectRoot = null) {
|
|
// 1. Check session.env
|
|
if (session?.env?.[key]) {
|
|
return session.env[key];
|
|
}
|
|
|
|
// 2. Read .env file at projectRoot
|
|
if (projectRoot) {
|
|
const envPath = path.join(projectRoot, '.env');
|
|
if (fs.existsSync(envPath)) {
|
|
try {
|
|
const envFileContent = fs.readFileSync(envPath, 'utf-8');
|
|
const parsedEnv = dotenv.parse(envFileContent); // Use dotenv to parse
|
|
if (parsedEnv && parsedEnv[key]) {
|
|
// console.log(`DEBUG: Found key ${key} in ${envPath}`); // Optional debug log
|
|
return parsedEnv[key];
|
|
}
|
|
} catch (error) {
|
|
// Log error but don't crash, just proceed as if key wasn't found in file
|
|
log('warn', `Could not read or parse ${envPath}: ${error.message}`);
|
|
}
|
|
}
|
|
}
|
|
|
|
// 3. Fallback: Check process.env
|
|
if (process.env[key]) {
|
|
return process.env[key];
|
|
}
|
|
|
|
// Not found anywhere
|
|
return undefined;
|
|
}
|
|
|
|
// --- Project Root Finding Utility ---
|
|
/**
|
|
* Recursively searches upwards for project root starting from a given directory.
|
|
* @param {string} [startDir=process.cwd()] - The directory to start searching from.
|
|
* @param {string[]} [markers=['package.json', '.git', LEGACY_CONFIG_FILE]] - Marker files/dirs to look for.
|
|
* @returns {string|null} The path to the project root, or null if not found.
|
|
*/
|
|
function findProjectRoot(
|
|
startDir = process.cwd(),
|
|
markers = ['package.json', 'pyproject.toml', '.git', LEGACY_CONFIG_FILE]
|
|
) {
|
|
let currentPath = path.resolve(startDir);
|
|
const rootPath = path.parse(currentPath).root;
|
|
|
|
while (currentPath !== rootPath) {
|
|
// Check if any marker exists in the current directory
|
|
const hasMarker = markers.some((marker) => {
|
|
const markerPath = path.join(currentPath, marker);
|
|
return fs.existsSync(markerPath);
|
|
});
|
|
|
|
if (hasMarker) {
|
|
return currentPath;
|
|
}
|
|
|
|
// Move up one directory
|
|
currentPath = path.dirname(currentPath);
|
|
}
|
|
|
|
// Check the root directory as well
|
|
const hasMarkerInRoot = markers.some((marker) => {
|
|
const markerPath = path.join(rootPath, marker);
|
|
return fs.existsSync(markerPath);
|
|
});
|
|
|
|
return hasMarkerInRoot ? rootPath : null;
|
|
}
|
|
|
|
// --- Dynamic Configuration Function --- (REMOVED)
|
|
|
|
// --- Logging and Utility Functions ---
|
|
|
|
// Set up logging based on log level
|
|
const LOG_LEVELS = {
|
|
debug: 0,
|
|
info: 1,
|
|
warn: 2,
|
|
error: 3,
|
|
success: 1 // Treat success like info level
|
|
};
|
|
|
|
/**
|
|
* Returns the task manager module
|
|
* @returns {Promise<Object>} The task manager module object
|
|
*/
|
|
async function getTaskManager() {
|
|
return import('./task-manager.js');
|
|
}
|
|
|
|
/**
|
|
* Enable silent logging mode
|
|
*/
|
|
function enableSilentMode() {
|
|
silentMode = true;
|
|
}
|
|
|
|
/**
|
|
* Disable silent logging mode
|
|
*/
|
|
function disableSilentMode() {
|
|
silentMode = false;
|
|
}
|
|
|
|
/**
|
|
* Check if silent mode is enabled
|
|
* @returns {boolean} True if silent mode is enabled
|
|
*/
|
|
function isSilentMode() {
|
|
return silentMode;
|
|
}
|
|
|
|
/**
|
|
* Logs a message at the specified level
|
|
* @param {string} level - The log level (debug, info, warn, error)
|
|
* @param {...any} args - Arguments to log
|
|
*/
|
|
function log(level, ...args) {
|
|
// Immediately return if silentMode is enabled
|
|
if (isSilentMode()) {
|
|
return;
|
|
}
|
|
|
|
// GUARD: Prevent circular dependency during config loading
|
|
// Use a simple fallback log level instead of calling getLogLevel()
|
|
let configLevel = 'info'; // Default fallback
|
|
try {
|
|
// Only try to get config level if we're not in the middle of config loading
|
|
configLevel = getLogLevel() || 'info';
|
|
} catch (error) {
|
|
// If getLogLevel() fails (likely due to circular dependency),
|
|
// use default 'info' level and continue
|
|
configLevel = 'info';
|
|
}
|
|
|
|
// Use text prefixes instead of emojis
|
|
const prefixes = {
|
|
debug: chalk.gray('[DEBUG]'),
|
|
info: chalk.blue('[INFO]'),
|
|
warn: chalk.yellow('[WARN]'),
|
|
error: chalk.red('[ERROR]'),
|
|
success: chalk.green('[SUCCESS]')
|
|
};
|
|
|
|
// Ensure level exists, default to info if not
|
|
const currentLevel = LOG_LEVELS.hasOwnProperty(level) ? level : 'info';
|
|
|
|
// Check log level configuration
|
|
if (
|
|
LOG_LEVELS[currentLevel] >= (LOG_LEVELS[configLevel] ?? LOG_LEVELS.info)
|
|
) {
|
|
const prefix = prefixes[currentLevel] || '';
|
|
// Use console.log for all levels, let chalk handle coloring
|
|
// Construct the message properly
|
|
const message = args
|
|
.map((arg) => (typeof arg === 'object' ? JSON.stringify(arg) : arg))
|
|
.join(' ');
|
|
console.log(`${prefix} ${message}`);
|
|
}
|
|
}
|
|
|
|
/**
|
|
* Checks if the data object has a tagged structure (contains tag objects with tasks arrays)
|
|
* @param {Object} data - The data object to check
|
|
* @returns {boolean} True if the data has a tagged structure
|
|
*/
|
|
function hasTaggedStructure(data) {
|
|
if (!data || typeof data !== 'object') {
|
|
return false;
|
|
}
|
|
|
|
// Check if any top-level properties are objects with tasks arrays
|
|
for (const key in data) {
|
|
if (
|
|
data.hasOwnProperty(key) &&
|
|
typeof data[key] === 'object' &&
|
|
Array.isArray(data[key].tasks)
|
|
) {
|
|
return true;
|
|
}
|
|
}
|
|
return false;
|
|
}
|
|
|
|
/**
|
|
* Reads and parses a JSON file
|
|
* @param {string} filepath - Path to the JSON file
|
|
* @param {string} [projectRoot] - Optional project root for tag resolution (used by MCP)
|
|
* @param {string} [tag] - Optional tag to use instead of current tag resolution
|
|
* @returns {Object|null} The parsed JSON data or null if error
|
|
*/
|
|
function readJSON(filepath, projectRoot = null, tag = null) {
|
|
// GUARD: Prevent circular dependency during config loading
|
|
let isDebug = false; // Default fallback
|
|
try {
|
|
// Only try to get debug flag if we're not in the middle of config loading
|
|
isDebug = getDebugFlag();
|
|
} catch (error) {
|
|
// If getDebugFlag() fails (likely due to circular dependency),
|
|
// use default false and continue
|
|
}
|
|
|
|
if (isDebug) {
|
|
console.log(
|
|
`readJSON called with: ${filepath}, projectRoot: ${projectRoot}, tag: ${tag}`
|
|
);
|
|
}
|
|
|
|
if (!filepath) {
|
|
return null;
|
|
}
|
|
|
|
let data;
|
|
try {
|
|
data = JSON.parse(fs.readFileSync(filepath, 'utf8'));
|
|
if (isDebug) {
|
|
console.log(`Successfully read JSON from ${filepath}`);
|
|
}
|
|
} catch (err) {
|
|
if (isDebug) {
|
|
console.log(`Failed to read JSON from ${filepath}: ${err.message}`);
|
|
}
|
|
return null;
|
|
}
|
|
|
|
// If it's not a tasks.json file, return as-is
|
|
if (!filepath.includes('tasks.json') || !data) {
|
|
if (isDebug) {
|
|
console.log(`File is not tasks.json or data is null, returning as-is`);
|
|
}
|
|
return data;
|
|
}
|
|
|
|
// Check if this is legacy format that needs migration
|
|
// Only migrate if we have tasks at the ROOT level AND no tag-like structure
|
|
if (
|
|
Array.isArray(data.tasks) &&
|
|
!data._rawTaggedData &&
|
|
!hasTaggedStructure(data)
|
|
) {
|
|
if (isDebug) {
|
|
console.log(`File is in legacy format, performing migration...`);
|
|
}
|
|
|
|
// This is legacy format - migrate it to tagged format
|
|
const migratedData = {
|
|
master: {
|
|
tasks: data.tasks,
|
|
metadata: data.metadata || {
|
|
created: new Date().toISOString(),
|
|
updated: new Date().toISOString(),
|
|
description: 'Tasks for master context'
|
|
}
|
|
}
|
|
};
|
|
|
|
// Write the migrated data back to the file
|
|
try {
|
|
writeJSON(filepath, migratedData);
|
|
if (isDebug) {
|
|
console.log(`Successfully migrated legacy format to tagged format`);
|
|
}
|
|
|
|
// Perform complete migration (config.json, state.json)
|
|
performCompleteTagMigration(filepath);
|
|
|
|
// Check and auto-switch git tags if enabled (after migration)
|
|
// This needs to run synchronously BEFORE tag resolution
|
|
if (projectRoot) {
|
|
try {
|
|
// Run git integration synchronously
|
|
gitUtils.checkAndAutoSwitchGitTagSync(projectRoot, filepath);
|
|
} catch (error) {
|
|
// Silent fail - don't break normal operations
|
|
}
|
|
}
|
|
|
|
// Mark for migration notice
|
|
markMigrationForNotice(filepath);
|
|
} catch (writeError) {
|
|
if (isDebug) {
|
|
console.log(`Error writing migrated data: ${writeError.message}`);
|
|
}
|
|
// If write fails, continue with the original data
|
|
}
|
|
|
|
// Continue processing with the migrated data structure
|
|
data = migratedData;
|
|
}
|
|
|
|
// If we have tagged data, we need to resolve which tag to use
|
|
if (typeof data === 'object' && !data.tasks) {
|
|
// This is tagged format
|
|
if (isDebug) {
|
|
console.log(`File is in tagged format, resolving tag...`);
|
|
}
|
|
|
|
// Ensure all tags have proper metadata before proceeding
|
|
for (const tagName in data) {
|
|
if (
|
|
data.hasOwnProperty(tagName) &&
|
|
typeof data[tagName] === 'object' &&
|
|
data[tagName].tasks
|
|
) {
|
|
try {
|
|
ensureTagMetadata(data[tagName], {
|
|
description: `Tasks for ${tagName} context`,
|
|
skipUpdate: true // Don't update timestamp during read operations
|
|
});
|
|
} catch (error) {
|
|
// If ensureTagMetadata fails, continue without metadata
|
|
if (isDebug) {
|
|
console.log(
|
|
`Failed to ensure metadata for tag ${tagName}: ${error.message}`
|
|
);
|
|
}
|
|
}
|
|
}
|
|
}
|
|
|
|
// Store reference to the raw tagged data for functions that need it
|
|
const originalTaggedData = JSON.parse(JSON.stringify(data));
|
|
|
|
// Check and auto-switch git tags if enabled (for existing tagged format)
|
|
// This needs to run synchronously BEFORE tag resolution
|
|
if (projectRoot) {
|
|
try {
|
|
// Run git integration synchronously
|
|
gitUtils.checkAndAutoSwitchGitTagSync(projectRoot, filepath);
|
|
} catch (error) {
|
|
// Silent fail - don't break normal operations
|
|
}
|
|
}
|
|
|
|
try {
|
|
// Default to master tag if anything goes wrong
|
|
let resolvedTag = 'master';
|
|
|
|
// Try to resolve the correct tag, but don't fail if it doesn't work
|
|
try {
|
|
// If tag is provided, use it directly
|
|
if (tag) {
|
|
resolvedTag = tag;
|
|
} else if (projectRoot) {
|
|
// Use provided projectRoot
|
|
resolvedTag = resolveTag({ projectRoot });
|
|
} else {
|
|
// Try to derive projectRoot from filepath
|
|
const derivedProjectRoot = findProjectRoot(path.dirname(filepath));
|
|
if (derivedProjectRoot) {
|
|
resolvedTag = resolveTag({ projectRoot: derivedProjectRoot });
|
|
}
|
|
// If derivedProjectRoot is null, stick with 'master'
|
|
}
|
|
} catch (tagResolveError) {
|
|
if (isDebug) {
|
|
console.log(
|
|
`Tag resolution failed, using master: ${tagResolveError.message}`
|
|
);
|
|
}
|
|
// resolvedTag stays as 'master'
|
|
}
|
|
|
|
if (isDebug) {
|
|
console.log(`Resolved tag: ${resolvedTag}`);
|
|
}
|
|
|
|
// Get the data for the resolved tag
|
|
const tagData = data[resolvedTag];
|
|
if (tagData && tagData.tasks) {
|
|
// Add the _rawTaggedData property and the resolved tag to the returned data
|
|
const result = {
|
|
...tagData,
|
|
tag: resolvedTag,
|
|
_rawTaggedData: originalTaggedData
|
|
};
|
|
if (isDebug) {
|
|
console.log(
|
|
`Returning data for tag '${resolvedTag}' with ${tagData.tasks.length} tasks`
|
|
);
|
|
}
|
|
return result;
|
|
} else {
|
|
// If the resolved tag doesn't exist, fall back to master
|
|
const masterData = data.master;
|
|
if (masterData && masterData.tasks) {
|
|
if (isDebug) {
|
|
console.log(
|
|
`Tag '${resolvedTag}' not found, falling back to master with ${masterData.tasks.length} tasks`
|
|
);
|
|
}
|
|
return {
|
|
...masterData,
|
|
tag: 'master',
|
|
_rawTaggedData: originalTaggedData
|
|
};
|
|
} else {
|
|
if (isDebug) {
|
|
console.log(`No valid tag data found, returning empty structure`);
|
|
}
|
|
// Return empty structure if no valid data
|
|
return {
|
|
tasks: [],
|
|
tag: 'master',
|
|
_rawTaggedData: originalTaggedData
|
|
};
|
|
}
|
|
}
|
|
} catch (error) {
|
|
if (isDebug) {
|
|
console.log(`Error during tag resolution: ${error.message}`);
|
|
}
|
|
// If anything goes wrong, try to return master or empty
|
|
const masterData = data.master;
|
|
if (masterData && masterData.tasks) {
|
|
return {
|
|
...masterData,
|
|
_rawTaggedData: originalTaggedData
|
|
};
|
|
}
|
|
return {
|
|
tasks: [],
|
|
_rawTaggedData: originalTaggedData
|
|
};
|
|
}
|
|
}
|
|
|
|
// If we reach here, it's some other format
|
|
if (isDebug) {
|
|
console.log(`File format not recognized, returning as-is`);
|
|
}
|
|
return data;
|
|
}
|
|
|
|
/**
|
|
* Performs complete tag migration including config.json and state.json updates
|
|
* @param {string} tasksJsonPath - Path to the tasks.json file that was migrated
|
|
*/
|
|
function performCompleteTagMigration(tasksJsonPath) {
|
|
try {
|
|
// Derive project root from tasks.json path
|
|
const projectRoot =
|
|
findProjectRoot(path.dirname(tasksJsonPath)) ||
|
|
path.dirname(tasksJsonPath);
|
|
|
|
// 1. Migrate config.json - add defaultTag and tags section
|
|
const configPath = path.join(projectRoot, '.taskmaster', 'config.json');
|
|
if (fs.existsSync(configPath)) {
|
|
migrateConfigJson(configPath);
|
|
}
|
|
|
|
// 2. Create state.json if it doesn't exist
|
|
const statePath = path.join(projectRoot, '.taskmaster', 'state.json');
|
|
if (!fs.existsSync(statePath)) {
|
|
createStateJson(statePath);
|
|
}
|
|
|
|
if (getDebugFlag()) {
|
|
log(
|
|
'debug',
|
|
`Complete tag migration performed for project: ${projectRoot}`
|
|
);
|
|
}
|
|
} catch (error) {
|
|
if (getDebugFlag()) {
|
|
log('warn', `Error during complete tag migration: ${error.message}`);
|
|
}
|
|
}
|
|
}
|
|
|
|
/**
|
|
* Migrates config.json to add tagged task system configuration
|
|
* @param {string} configPath - Path to the config.json file
|
|
*/
|
|
function migrateConfigJson(configPath) {
|
|
try {
|
|
const rawConfig = fs.readFileSync(configPath, 'utf8');
|
|
const config = JSON.parse(rawConfig);
|
|
if (!config) return;
|
|
|
|
let modified = false;
|
|
|
|
// Add global.defaultTag if missing
|
|
if (!config.global) {
|
|
config.global = {};
|
|
}
|
|
if (!config.global.defaultTag) {
|
|
config.global.defaultTag = 'master';
|
|
modified = true;
|
|
}
|
|
|
|
if (modified) {
|
|
fs.writeFileSync(configPath, JSON.stringify(config, null, 2), 'utf8');
|
|
if (process.env.TASKMASTER_DEBUG === 'true') {
|
|
console.log(
|
|
'[DEBUG] Updated config.json with tagged task system settings'
|
|
);
|
|
}
|
|
}
|
|
} catch (error) {
|
|
if (process.env.TASKMASTER_DEBUG === 'true') {
|
|
console.warn(`[WARN] Error migrating config.json: ${error.message}`);
|
|
}
|
|
}
|
|
}
|
|
|
|
/**
|
|
* Creates initial state.json file for tagged task system
|
|
* @param {string} statePath - Path where state.json should be created
|
|
*/
|
|
function createStateJson(statePath) {
|
|
try {
|
|
const initialState = {
|
|
currentTag: 'master',
|
|
lastSwitched: new Date().toISOString(),
|
|
branchTagMapping: {},
|
|
migrationNoticeShown: false
|
|
};
|
|
|
|
fs.writeFileSync(statePath, JSON.stringify(initialState, null, 2), 'utf8');
|
|
if (process.env.TASKMASTER_DEBUG === 'true') {
|
|
console.log('[DEBUG] Created initial state.json for tagged task system');
|
|
}
|
|
} catch (error) {
|
|
if (process.env.TASKMASTER_DEBUG === 'true') {
|
|
console.warn(`[WARN] Error creating state.json: ${error.message}`);
|
|
}
|
|
}
|
|
}
|
|
|
|
/**
|
|
* Marks in state.json that migration occurred and notice should be shown
|
|
* @param {string} tasksJsonPath - Path to the tasks.json file
|
|
*/
|
|
function markMigrationForNotice(tasksJsonPath) {
|
|
try {
|
|
const projectRoot = path.dirname(path.dirname(tasksJsonPath));
|
|
const statePath = path.join(projectRoot, '.taskmaster', 'state.json');
|
|
|
|
// Ensure state.json exists
|
|
if (!fs.existsSync(statePath)) {
|
|
createStateJson(statePath);
|
|
}
|
|
|
|
// Read and update state to mark migration occurred using fs directly
|
|
try {
|
|
const rawState = fs.readFileSync(statePath, 'utf8');
|
|
const stateData = JSON.parse(rawState) || {};
|
|
// Only set to false if it's not already set (i.e., first time migration)
|
|
if (stateData.migrationNoticeShown === undefined) {
|
|
stateData.migrationNoticeShown = false;
|
|
fs.writeFileSync(statePath, JSON.stringify(stateData, null, 2), 'utf8');
|
|
}
|
|
} catch (stateError) {
|
|
if (process.env.TASKMASTER_DEBUG === 'true') {
|
|
console.warn(
|
|
`[WARN] Error updating state for migration notice: ${stateError.message}`
|
|
);
|
|
}
|
|
}
|
|
} catch (error) {
|
|
if (process.env.TASKMASTER_DEBUG === 'true') {
|
|
console.warn(
|
|
`[WARN] Error marking migration for notice: ${error.message}`
|
|
);
|
|
}
|
|
}
|
|
}
|
|
|
|
/**
|
|
* Writes and saves a JSON file. Handles tagged task lists properly.
|
|
* @param {string} filepath - Path to the JSON file
|
|
* @param {Object} data - Data to write (can be resolved tag data or raw tagged data)
|
|
* @param {string} projectRoot - Optional project root for tag context
|
|
* @param {string} tag - Optional tag for tag context
|
|
*/
|
|
function writeJSON(filepath, data, projectRoot = null, tag = null) {
|
|
const isDebug = process.env.TASKMASTER_DEBUG === 'true';
|
|
|
|
try {
|
|
let finalData = data;
|
|
|
|
// If data represents resolved tag data but lost _rawTaggedData (edge-case observed in MCP path)
|
|
if (
|
|
!data._rawTaggedData &&
|
|
projectRoot &&
|
|
Array.isArray(data.tasks) &&
|
|
!hasTaggedStructure(data)
|
|
) {
|
|
const resolvedTag = tag || getCurrentTag(projectRoot);
|
|
|
|
if (isDebug) {
|
|
console.log(
|
|
`writeJSON: Detected resolved tag data missing _rawTaggedData. Re-reading raw data to prevent data loss for tag '${resolvedTag}'.`
|
|
);
|
|
}
|
|
|
|
// Re-read the full file to get the complete tagged structure
|
|
const rawFullData = JSON.parse(fs.readFileSync(filepath, 'utf8'));
|
|
|
|
// Merge the updated data into the full structure
|
|
finalData = {
|
|
...rawFullData,
|
|
[resolvedTag]: {
|
|
// Preserve existing tag metadata if it exists, otherwise use what's passed
|
|
...(rawFullData[resolvedTag]?.metadata || {}),
|
|
...(data.metadata ? { metadata: data.metadata } : {}),
|
|
tasks: data.tasks // The updated tasks array is the source of truth here
|
|
}
|
|
};
|
|
}
|
|
// If we have _rawTaggedData, this means we're working with resolved tag data
|
|
// and need to merge it back into the full tagged structure
|
|
else if (data && data._rawTaggedData && projectRoot) {
|
|
const resolvedTag = tag || getCurrentTag(projectRoot);
|
|
|
|
// Get the original tagged data
|
|
const originalTaggedData = data._rawTaggedData;
|
|
|
|
// Create a clean copy of the current resolved data (without internal properties)
|
|
const { _rawTaggedData, tag: _, ...cleanResolvedData } = data;
|
|
|
|
// Update the specific tag with the resolved data
|
|
finalData = {
|
|
...originalTaggedData,
|
|
[resolvedTag]: cleanResolvedData
|
|
};
|
|
|
|
if (isDebug) {
|
|
console.log(
|
|
`writeJSON: Merging resolved data back into tag '${resolvedTag}'`
|
|
);
|
|
}
|
|
}
|
|
|
|
// Clean up any internal properties that shouldn't be persisted
|
|
let cleanData = finalData;
|
|
if (cleanData && typeof cleanData === 'object') {
|
|
// Remove any _rawTaggedData or tag properties from root level
|
|
const { _rawTaggedData, tag: tagProp, ...rootCleanData } = cleanData;
|
|
cleanData = rootCleanData;
|
|
|
|
// Additional cleanup for tag objects
|
|
if (typeof cleanData === 'object' && !Array.isArray(cleanData)) {
|
|
const finalCleanData = {};
|
|
for (const [key, value] of Object.entries(cleanData)) {
|
|
if (
|
|
value &&
|
|
typeof value === 'object' &&
|
|
Array.isArray(value.tasks)
|
|
) {
|
|
// This is a tag object - clean up any rogue root-level properties
|
|
const { created, description, ...cleanTagData } = value;
|
|
|
|
// Only keep the description if there's no metadata.description
|
|
if (
|
|
description &&
|
|
(!cleanTagData.metadata || !cleanTagData.metadata.description)
|
|
) {
|
|
cleanTagData.description = description;
|
|
}
|
|
|
|
finalCleanData[key] = cleanTagData;
|
|
} else {
|
|
finalCleanData[key] = value;
|
|
}
|
|
}
|
|
cleanData = finalCleanData;
|
|
}
|
|
}
|
|
|
|
fs.writeFileSync(filepath, JSON.stringify(cleanData, null, 2), 'utf8');
|
|
|
|
if (isDebug) {
|
|
console.log(`writeJSON: Successfully wrote to ${filepath}`);
|
|
}
|
|
} catch (error) {
|
|
log('error', `Error writing JSON file ${filepath}:`, error.message);
|
|
if (isDebug) {
|
|
log('error', 'Full error details:', error);
|
|
}
|
|
}
|
|
}
|
|
|
|
/**
|
|
* Sanitizes a prompt string for use in a shell command
|
|
* @param {string} prompt The prompt to sanitize
|
|
* @returns {string} Sanitized prompt
|
|
*/
|
|
function sanitizePrompt(prompt) {
|
|
// Replace double quotes with escaped double quotes
|
|
return prompt.replace(/"/g, '\\"');
|
|
}
|
|
|
|
/**
|
|
* Reads the complexity report from file
|
|
* @param {string} customPath - Optional custom path to the report
|
|
* @returns {Object|null} The parsed complexity report or null if not found
|
|
*/
|
|
function readComplexityReport(customPath = null) {
|
|
// GUARD: Prevent circular dependency during config loading
|
|
let isDebug = false; // Default fallback
|
|
try {
|
|
// Only try to get debug flag if we're not in the middle of config loading
|
|
isDebug = getDebugFlag();
|
|
} catch (error) {
|
|
// If getDebugFlag() fails (likely due to circular dependency),
|
|
// use default false and continue
|
|
isDebug = false;
|
|
}
|
|
|
|
try {
|
|
let reportPath;
|
|
if (customPath) {
|
|
reportPath = customPath;
|
|
} else {
|
|
// Try new location first, then fall back to legacy
|
|
const newPath = path.join(process.cwd(), COMPLEXITY_REPORT_FILE);
|
|
const legacyPath = path.join(
|
|
process.cwd(),
|
|
LEGACY_COMPLEXITY_REPORT_FILE
|
|
);
|
|
|
|
reportPath = fs.existsSync(newPath) ? newPath : legacyPath;
|
|
}
|
|
|
|
if (!fs.existsSync(reportPath)) {
|
|
if (isDebug) {
|
|
log('debug', `Complexity report not found at ${reportPath}`);
|
|
}
|
|
return null;
|
|
}
|
|
|
|
const reportData = readJSON(reportPath);
|
|
if (isDebug) {
|
|
log('debug', `Successfully read complexity report from ${reportPath}`);
|
|
}
|
|
return reportData;
|
|
} catch (error) {
|
|
if (isDebug) {
|
|
log('error', `Error reading complexity report: ${error.message}`);
|
|
}
|
|
return null;
|
|
}
|
|
}
|
|
|
|
/**
|
|
* Finds a task analysis in the complexity report
|
|
* @param {Object} report - The complexity report
|
|
* @param {number} taskId - The task ID to find
|
|
* @returns {Object|null} The task analysis or null if not found
|
|
*/
|
|
function findTaskInComplexityReport(report, taskId) {
|
|
if (
|
|
!report ||
|
|
!report.complexityAnalysis ||
|
|
!Array.isArray(report.complexityAnalysis)
|
|
) {
|
|
return null;
|
|
}
|
|
|
|
return report.complexityAnalysis.find((task) => task.taskId === taskId);
|
|
}
|
|
|
|
function addComplexityToTask(task, complexityReport) {
|
|
let taskId;
|
|
if (task.isSubtask) {
|
|
taskId = task.parentTask.id;
|
|
} else if (task.parentId) {
|
|
taskId = task.parentId;
|
|
} else {
|
|
taskId = task.id;
|
|
}
|
|
|
|
const taskAnalysis = findTaskInComplexityReport(complexityReport, taskId);
|
|
if (taskAnalysis) {
|
|
task.complexityScore = taskAnalysis.complexityScore;
|
|
}
|
|
}
|
|
|
|
/**
|
|
* Checks if a task exists in the tasks array
|
|
* @param {Array} tasks - The tasks array
|
|
* @param {string|number} taskId - The task ID to check
|
|
* @returns {boolean} True if the task exists, false otherwise
|
|
*/
|
|
function taskExists(tasks, taskId) {
|
|
if (!taskId || !tasks || !Array.isArray(tasks)) {
|
|
return false;
|
|
}
|
|
|
|
// Handle both regular task IDs and subtask IDs (e.g., "1.2")
|
|
if (typeof taskId === 'string' && taskId.includes('.')) {
|
|
const [parentId, subtaskId] = taskId
|
|
.split('.')
|
|
.map((id) => parseInt(id, 10));
|
|
const parentTask = tasks.find((t) => t.id === parentId);
|
|
|
|
if (!parentTask || !parentTask.subtasks) {
|
|
return false;
|
|
}
|
|
|
|
return parentTask.subtasks.some((st) => st.id === subtaskId);
|
|
}
|
|
|
|
const id = parseInt(taskId, 10);
|
|
return tasks.some((t) => t.id === id);
|
|
}
|
|
|
|
/**
|
|
* Formats a task ID as a string
|
|
* @param {string|number} id - The task ID to format
|
|
* @returns {string} The formatted task ID
|
|
*/
|
|
function formatTaskId(id) {
|
|
if (typeof id === 'string' && id.includes('.')) {
|
|
return id; // Already formatted as a string with a dot (e.g., "1.2")
|
|
}
|
|
|
|
if (typeof id === 'number') {
|
|
return id.toString();
|
|
}
|
|
|
|
return id;
|
|
}
|
|
|
|
/**
|
|
* Finds a task by ID in the tasks array. Optionally filters subtasks by status.
|
|
* @param {Array} tasks - The tasks array
|
|
* @param {string|number} taskId - The task ID to find
|
|
* @param {Object|null} complexityReport - Optional pre-loaded complexity report
|
|
* @param {string} [statusFilter] - Optional status to filter subtasks by
|
|
* @returns {{task: Object|null, originalSubtaskCount: number|null, originalSubtasks: Array|null}} The task object (potentially with filtered subtasks), the original subtask count, and original subtasks array if filtered, or nulls if not found.
|
|
*/
|
|
function findTaskById(
|
|
tasks,
|
|
taskId,
|
|
complexityReport = null,
|
|
statusFilter = null
|
|
) {
|
|
if (!taskId || !tasks || !Array.isArray(tasks)) {
|
|
return { task: null, originalSubtaskCount: null };
|
|
}
|
|
|
|
// Check if it's a subtask ID (e.g., "1.2")
|
|
if (typeof taskId === 'string' && taskId.includes('.')) {
|
|
// If looking for a subtask, statusFilter doesn't apply directly here.
|
|
const [parentId, subtaskId] = taskId
|
|
.split('.')
|
|
.map((id) => parseInt(id, 10));
|
|
const parentTask = tasks.find((t) => t.id === parentId);
|
|
|
|
if (!parentTask || !parentTask.subtasks) {
|
|
return { task: null, originalSubtaskCount: null, originalSubtasks: null };
|
|
}
|
|
|
|
const subtask = parentTask.subtasks.find((st) => st.id === subtaskId);
|
|
if (subtask) {
|
|
// Add reference to parent task for context
|
|
subtask.parentTask = {
|
|
id: parentTask.id,
|
|
title: parentTask.title,
|
|
status: parentTask.status
|
|
};
|
|
subtask.isSubtask = true;
|
|
}
|
|
|
|
// If we found a task, check for complexity data
|
|
if (subtask && complexityReport) {
|
|
addComplexityToTask(subtask, complexityReport);
|
|
}
|
|
|
|
return {
|
|
task: subtask || null,
|
|
originalSubtaskCount: null,
|
|
originalSubtasks: null
|
|
};
|
|
}
|
|
|
|
let taskResult = null;
|
|
let originalSubtaskCount = null;
|
|
let originalSubtasks = null;
|
|
|
|
// Find the main task
|
|
const id = parseInt(taskId, 10);
|
|
const task = tasks.find((t) => t.id === id) || null;
|
|
|
|
// If task not found, return nulls
|
|
if (!task) {
|
|
return { task: null, originalSubtaskCount: null, originalSubtasks: null };
|
|
}
|
|
|
|
taskResult = task;
|
|
|
|
// If task found and statusFilter provided, filter its subtasks
|
|
if (statusFilter && task.subtasks && Array.isArray(task.subtasks)) {
|
|
// Store original subtasks and count before filtering
|
|
originalSubtasks = [...task.subtasks]; // Clone the original subtasks array
|
|
originalSubtaskCount = task.subtasks.length;
|
|
|
|
// Clone the task to avoid modifying the original array
|
|
const filteredTask = { ...task };
|
|
filteredTask.subtasks = task.subtasks.filter(
|
|
(subtask) =>
|
|
subtask.status &&
|
|
subtask.status.toLowerCase() === statusFilter.toLowerCase()
|
|
);
|
|
|
|
taskResult = filteredTask;
|
|
}
|
|
|
|
// If task found and complexityReport provided, add complexity data
|
|
if (taskResult && complexityReport) {
|
|
addComplexityToTask(taskResult, complexityReport);
|
|
}
|
|
|
|
// Return the found task, original subtask count, and original subtasks
|
|
return { task: taskResult, originalSubtaskCount, originalSubtasks };
|
|
}
|
|
|
|
/**
|
|
* Truncates text to a specified length
|
|
* @param {string} text - The text to truncate
|
|
* @param {number} maxLength - The maximum length
|
|
* @returns {string} The truncated text
|
|
*/
|
|
function truncate(text, maxLength) {
|
|
if (!text || text.length <= maxLength) {
|
|
return text;
|
|
}
|
|
|
|
return `${text.slice(0, maxLength - 3)}...`;
|
|
}
|
|
|
|
/**
|
|
* Find cycles in a dependency graph using DFS
|
|
* @param {string} subtaskId - Current subtask ID
|
|
* @param {Map} dependencyMap - Map of subtask IDs to their dependencies
|
|
* @param {Set} visited - Set of visited nodes
|
|
* @param {Set} recursionStack - Set of nodes in current recursion stack
|
|
* @returns {Array} - List of dependency edges that need to be removed to break cycles
|
|
*/
|
|
function findCycles(
|
|
subtaskId,
|
|
dependencyMap,
|
|
visited = new Set(),
|
|
recursionStack = new Set(),
|
|
path = []
|
|
) {
|
|
// Mark the current node as visited and part of recursion stack
|
|
visited.add(subtaskId);
|
|
recursionStack.add(subtaskId);
|
|
path.push(subtaskId);
|
|
|
|
const cyclesToBreak = [];
|
|
|
|
// Get all dependencies of the current subtask
|
|
const dependencies = dependencyMap.get(subtaskId) || [];
|
|
|
|
// For each dependency
|
|
for (const depId of dependencies) {
|
|
// If not visited, recursively check for cycles
|
|
if (!visited.has(depId)) {
|
|
const cycles = findCycles(depId, dependencyMap, visited, recursionStack, [
|
|
...path
|
|
]);
|
|
cyclesToBreak.push(...cycles);
|
|
}
|
|
// If the dependency is in the recursion stack, we found a cycle
|
|
else if (recursionStack.has(depId)) {
|
|
// Find the position of the dependency in the path
|
|
const cycleStartIndex = path.indexOf(depId);
|
|
// The last edge in the cycle is what we want to remove
|
|
const cycleEdges = path.slice(cycleStartIndex);
|
|
// We'll remove the last edge in the cycle (the one that points back)
|
|
cyclesToBreak.push(depId);
|
|
}
|
|
}
|
|
|
|
// Remove the node from recursion stack before returning
|
|
recursionStack.delete(subtaskId);
|
|
|
|
return cyclesToBreak;
|
|
}
|
|
|
|
/**
|
|
* Convert a string from camelCase to kebab-case
|
|
* @param {string} str - The string to convert
|
|
* @returns {string} The kebab-case version of the string
|
|
*/
|
|
const toKebabCase = (str) => {
|
|
// Special handling for common acronyms
|
|
const withReplacedAcronyms = str
|
|
.replace(/ID/g, 'Id')
|
|
.replace(/API/g, 'Api')
|
|
.replace(/UI/g, 'Ui')
|
|
.replace(/URL/g, 'Url')
|
|
.replace(/URI/g, 'Uri')
|
|
.replace(/JSON/g, 'Json')
|
|
.replace(/XML/g, 'Xml')
|
|
.replace(/HTML/g, 'Html')
|
|
.replace(/CSS/g, 'Css');
|
|
|
|
// Insert hyphens before capital letters and convert to lowercase
|
|
return withReplacedAcronyms
|
|
.replace(/([A-Z])/g, '-$1')
|
|
.toLowerCase()
|
|
.replace(/^-/, ''); // Remove leading hyphen if present
|
|
};
|
|
|
|
/**
|
|
* Detect camelCase flags in command arguments
|
|
* @param {string[]} args - Command line arguments to check
|
|
* @returns {Array<{original: string, kebabCase: string}>} - List of flags that should be converted
|
|
*/
|
|
function detectCamelCaseFlags(args) {
|
|
const camelCaseFlags = [];
|
|
for (const arg of args) {
|
|
if (arg.startsWith('--')) {
|
|
const flagName = arg.split('=')[0].slice(2); // Remove -- and anything after =
|
|
|
|
// Skip single-word flags - they can't be camelCase
|
|
if (!flagName.includes('-') && !/[A-Z]/.test(flagName)) {
|
|
continue;
|
|
}
|
|
|
|
// Check for camelCase pattern (lowercase followed by uppercase)
|
|
if (/[a-z][A-Z]/.test(flagName)) {
|
|
const kebabVersion = toKebabCase(flagName);
|
|
if (kebabVersion !== flagName) {
|
|
camelCaseFlags.push({
|
|
original: flagName,
|
|
kebabCase: kebabVersion
|
|
});
|
|
}
|
|
}
|
|
}
|
|
}
|
|
return camelCaseFlags;
|
|
}
|
|
|
|
/**
|
|
* Aggregates an array of telemetry objects into a single summary object.
|
|
* @param {Array<Object>} telemetryArray - Array of telemetryData objects.
|
|
* @param {string} overallCommandName - The name for the aggregated command.
|
|
* @returns {Object|null} Aggregated telemetry object or null if input is empty.
|
|
*/
|
|
function aggregateTelemetry(telemetryArray, overallCommandName) {
|
|
if (!telemetryArray || telemetryArray.length === 0) {
|
|
return null;
|
|
}
|
|
|
|
const aggregated = {
|
|
timestamp: new Date().toISOString(), // Use current time for aggregation time
|
|
userId: telemetryArray[0].userId, // Assume userId is consistent
|
|
commandName: overallCommandName,
|
|
modelUsed: 'Multiple', // Default if models vary
|
|
providerName: 'Multiple', // Default if providers vary
|
|
inputTokens: 0,
|
|
outputTokens: 0,
|
|
totalTokens: 0,
|
|
totalCost: 0,
|
|
currency: telemetryArray[0].currency || 'USD' // Assume consistent currency or default
|
|
};
|
|
|
|
const uniqueModels = new Set();
|
|
const uniqueProviders = new Set();
|
|
const uniqueCurrencies = new Set();
|
|
|
|
telemetryArray.forEach((item) => {
|
|
aggregated.inputTokens += item.inputTokens || 0;
|
|
aggregated.outputTokens += item.outputTokens || 0;
|
|
aggregated.totalCost += item.totalCost || 0;
|
|
uniqueModels.add(item.modelUsed);
|
|
uniqueProviders.add(item.providerName);
|
|
uniqueCurrencies.add(item.currency || 'USD');
|
|
});
|
|
|
|
aggregated.totalTokens = aggregated.inputTokens + aggregated.outputTokens;
|
|
aggregated.totalCost = parseFloat(aggregated.totalCost.toFixed(6)); // Fix precision
|
|
|
|
if (uniqueModels.size === 1) {
|
|
aggregated.modelUsed = [...uniqueModels][0];
|
|
}
|
|
if (uniqueProviders.size === 1) {
|
|
aggregated.providerName = [...uniqueProviders][0];
|
|
}
|
|
if (uniqueCurrencies.size > 1) {
|
|
aggregated.currency = 'Multiple'; // Mark if currencies actually differ
|
|
} else if (uniqueCurrencies.size === 1) {
|
|
aggregated.currency = [...uniqueCurrencies][0];
|
|
}
|
|
|
|
return aggregated;
|
|
}
|
|
|
|
/**
|
|
* Gets the current tag from state.json or falls back to defaultTag from config
|
|
* @param {string} projectRoot - The project root directory (required)
|
|
* @returns {string} The current tag name
|
|
*/
|
|
function getCurrentTag(projectRoot) {
|
|
if (!projectRoot) {
|
|
throw new Error('projectRoot is required for getCurrentTag');
|
|
}
|
|
|
|
try {
|
|
// Try to read current tag from state.json using fs directly
|
|
const statePath = path.join(projectRoot, '.taskmaster', 'state.json');
|
|
if (fs.existsSync(statePath)) {
|
|
const rawState = fs.readFileSync(statePath, 'utf8');
|
|
const stateData = JSON.parse(rawState);
|
|
if (stateData && stateData.currentTag) {
|
|
return stateData.currentTag;
|
|
}
|
|
}
|
|
} catch (error) {
|
|
// Ignore errors, fall back to default
|
|
}
|
|
|
|
// Fall back to defaultTag from config using fs directly
|
|
try {
|
|
const configPath = path.join(projectRoot, '.taskmaster', 'config.json');
|
|
if (fs.existsSync(configPath)) {
|
|
const rawConfig = fs.readFileSync(configPath, 'utf8');
|
|
const configData = JSON.parse(rawConfig);
|
|
if (configData && configData.global && configData.global.defaultTag) {
|
|
return configData.global.defaultTag;
|
|
}
|
|
}
|
|
} catch (error) {
|
|
// Ignore errors, use hardcoded default
|
|
}
|
|
|
|
// Final fallback
|
|
return 'master';
|
|
}
|
|
|
|
/**
|
|
* Resolves the tag to use based on options
|
|
* @param {Object} options - Options object
|
|
* @param {string} options.projectRoot - The project root directory (required)
|
|
* @param {string} [options.tag] - Explicit tag to use
|
|
* @returns {string} The resolved tag name
|
|
*/
|
|
function resolveTag(options = {}) {
|
|
const { projectRoot, tag } = options;
|
|
|
|
if (!projectRoot) {
|
|
throw new Error('projectRoot is required for resolveTag');
|
|
}
|
|
|
|
// If explicit tag provided, use it
|
|
if (tag) {
|
|
return tag;
|
|
}
|
|
|
|
// Otherwise get current tag from state/config
|
|
return getCurrentTag(projectRoot);
|
|
}
|
|
|
|
/**
|
|
* Gets the tasks array for a specific tag from tagged tasks.json data
|
|
* @param {Object} data - The parsed tasks.json data (after migration)
|
|
* @param {string} tagName - The tag name to get tasks for
|
|
* @returns {Array} The tasks array for the specified tag, or empty array if not found
|
|
*/
|
|
function getTasksForTag(data, tagName) {
|
|
if (!data || !tagName) {
|
|
return [];
|
|
}
|
|
|
|
// Handle migrated format: { "master": { "tasks": [...] }, "otherTag": { "tasks": [...] } }
|
|
if (
|
|
data[tagName] &&
|
|
data[tagName].tasks &&
|
|
Array.isArray(data[tagName].tasks)
|
|
) {
|
|
return data[tagName].tasks;
|
|
}
|
|
|
|
return [];
|
|
}
|
|
|
|
/**
|
|
* Sets the tasks array for a specific tag in the data structure
|
|
* @param {Object} data - The tasks.json data object
|
|
* @param {string} tagName - The tag name to set tasks for
|
|
* @param {Array} tasks - The tasks array to set
|
|
* @returns {Object} The updated data object
|
|
*/
|
|
function setTasksForTag(data, tagName, tasks) {
|
|
if (!data) {
|
|
data = {};
|
|
}
|
|
|
|
if (!data[tagName]) {
|
|
data[tagName] = {};
|
|
}
|
|
|
|
data[tagName].tasks = tasks || [];
|
|
return data;
|
|
}
|
|
|
|
/**
|
|
* Flatten tasks array to include subtasks as individual searchable items
|
|
* @param {Array} tasks - Array of task objects
|
|
* @returns {Array} Flattened array including both tasks and subtasks
|
|
*/
|
|
function flattenTasksWithSubtasks(tasks) {
|
|
const flattened = [];
|
|
|
|
for (const task of tasks) {
|
|
// Add the main task
|
|
flattened.push({
|
|
...task,
|
|
searchableId: task.id.toString(), // For consistent ID handling
|
|
isSubtask: false
|
|
});
|
|
|
|
// Add subtasks if they exist
|
|
if (task.subtasks && task.subtasks.length > 0) {
|
|
for (const subtask of task.subtasks) {
|
|
flattened.push({
|
|
...subtask,
|
|
searchableId: `${task.id}.${subtask.id}`, // Format: "15.2"
|
|
isSubtask: true,
|
|
parentId: task.id,
|
|
parentTitle: task.title,
|
|
// Enhance subtask context with parent information
|
|
title: `${subtask.title} (subtask of: ${task.title})`,
|
|
description: `${subtask.description} [Parent: ${task.description}]`
|
|
});
|
|
}
|
|
}
|
|
}
|
|
|
|
return flattened;
|
|
}
|
|
|
|
/**
|
|
* Ensures the tag object has a metadata object with created/updated timestamps.
|
|
* @param {Object} tagObj - The tag object (e.g., data['master'])
|
|
* @param {Object} [opts] - Optional fields (e.g., description, skipUpdate)
|
|
* @param {string} [opts.description] - Description for the tag
|
|
* @param {boolean} [opts.skipUpdate] - If true, don't update the 'updated' timestamp
|
|
* @returns {Object} The updated tag object (for chaining)
|
|
*/
|
|
function ensureTagMetadata(tagObj, opts = {}) {
|
|
if (!tagObj || typeof tagObj !== 'object') {
|
|
throw new Error('tagObj must be a valid object');
|
|
}
|
|
|
|
const now = new Date().toISOString();
|
|
|
|
if (!tagObj.metadata) {
|
|
// Create new metadata object
|
|
tagObj.metadata = {
|
|
created: now,
|
|
updated: now,
|
|
...(opts.description ? { description: opts.description } : {})
|
|
};
|
|
} else {
|
|
// Ensure existing metadata has required fields
|
|
if (!tagObj.metadata.created) {
|
|
tagObj.metadata.created = now;
|
|
}
|
|
|
|
// Update timestamp unless explicitly skipped
|
|
if (!opts.skipUpdate) {
|
|
tagObj.metadata.updated = now;
|
|
}
|
|
|
|
// Add description if provided and not already present
|
|
if (opts.description && !tagObj.metadata.description) {
|
|
tagObj.metadata.description = opts.description;
|
|
}
|
|
}
|
|
|
|
return tagObj;
|
|
}
|
|
|
|
// Export all utility functions and configuration
|
|
export {
|
|
LOG_LEVELS,
|
|
log,
|
|
readJSON,
|
|
writeJSON,
|
|
sanitizePrompt,
|
|
readComplexityReport,
|
|
findTaskInComplexityReport,
|
|
taskExists,
|
|
formatTaskId,
|
|
findTaskById,
|
|
truncate,
|
|
findCycles,
|
|
toKebabCase,
|
|
detectCamelCaseFlags,
|
|
disableSilentMode,
|
|
enableSilentMode,
|
|
getTaskManager,
|
|
isSilentMode,
|
|
addComplexityToTask,
|
|
resolveEnvVariable,
|
|
findProjectRoot,
|
|
aggregateTelemetry,
|
|
getCurrentTag,
|
|
resolveTag,
|
|
getTasksForTag,
|
|
setTasksForTag,
|
|
performCompleteTagMigration,
|
|
migrateConfigJson,
|
|
createStateJson,
|
|
markMigrationForNotice,
|
|
flattenTasksWithSubtasks,
|
|
ensureTagMetadata
|
|
};
|