Parthy fd005c4c54 fix(core): Implement Boundary-First Tag Resolution (#943)
* refactor(context): Standardize tag and projectRoot handling across all task tools

This commit unifies context management by adopting a boundary-first resolution strategy. All task-scoped tools now resolve `tag` and `projectRoot` at their entry point and forward these values to the underlying direct functions.

This approach centralizes context logic, ensuring consistent behavior and enhanced flexibility in multi-tag environments.
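In code, the boundary-first pattern reads roughly like the sketch below. The names `resolveTag` and `listTasksDirect` follow this PR, but the tool registration shown here is a simplified, hypothetical illustration with its dependencies injected for clarity:

```javascript
// Illustrative sketch of boundary-first resolution (names follow the PR,
// bodies are assumptions): the tool resolves tag and projectRoot ONCE at
// its entry point, then forwards plain values to the direct function.
function registerListTasksTool(server, { resolveTag, findProjectRoot, listTasksDirect }) {
	server.addTool({
		name: 'get_tasks',
		async execute(args, { log }) {
			// Resolve context at the boundary...
			const projectRoot = args.projectRoot || findProjectRoot();
			const tag = resolveTag({ projectRoot, tag: args.tag });
			// ...and forward the resolved values; lower layers never re-resolve.
			return listTasksDirect({ projectRoot, tag, status: args.status }, log);
		}
	});
}
```

The key property is that `tag` is resolved exactly once, at the MCP/CLI boundary, so every lower layer receives an already-deterministic value.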

* fix(tag): Clean up tag handling in task functions and sync process

This commit refines the handling of the `tag` parameter across multiple functions, ensuring consistent context management. The `tag` is now passed more efficiently in `listTasksDirect`, `setTaskStatusDirect`, and `syncTasksToReadme`, improving clarity and reducing redundancy. Additionally, a TODO comment has been added in `sync-readme.js` to address future tag support enhancements.

* feat(tag): Implement Boundary-First Tag Resolution for consistent tag handling

This commit introduces Boundary-First Tag Resolution in the task manager, ensuring consistent and deterministic tag handling across CLI and MCP. This change resolves potential race conditions and improves the reliability of tag-specific operations.

Additionally, the `expandTask` function has been updated to use the resolved tag when writing JSON, enhancing data integrity during task updates.

* chore(biome): formatting

* fix(expand-task): Update writeJSON call to use tag instead of resolvedTag

* fix(commands): Enhance complexity report path resolution and task initialization
- Added a `resolveComplexityReportPath` function to streamline output path generation based on tag context and user-defined output.
- Improved clarity and maintainability of command handling by centralizing path resolution logic.

* Fix: unknown currentTag

* fix(task-manager): Update generateTaskFiles calls to include tag and projectRoot parameters

This commit modifies the `moveTask` and `updateSubtaskById` functions to pass the `tag` and `projectRoot` parameters to the `generateTaskFiles` function. This ensures that task files are generated with the correct context when requested, enhancing consistency in task management operations.

* fix(commands): Refactor tag handling and complexity report path resolution
This commit updates the `registerCommands` function to utilize `taskMaster.getCurrentTag()` for consistent tag retrieval across command actions. It also enhances the initialization of `TaskMaster` by passing the tag directly, improving clarity and maintainability. The complexity report path resolution is streamlined to ensure correct file naming based on the current tag context.

* fix(task-master): Update complexity report path expectations in tests
This commit modifies the `initTaskMaster` test to expect a valid string for the complexity report path, ensuring it matches the expected file naming convention. This change enhances test reliability by verifying the correct output format when the path is generated.

* fix(set-task-status): Enhance logging and tag resolution in task status updates
This commit improves the logging output in the `registerSetTaskStatusTool` function to include the tag context when setting task statuses. It also updates the tag handling by resolving the tag using the `resolveTag` utility, ensuring that the correct tag is used when updating task statuses. Additionally, the `setTaskStatus` function is modified to remove the tag parameter from the `readJSON` and `writeJSON` calls, streamlining the data handling process.

* fix(commands, expand-task, task-manager): Add complexity report option and enhance path handling
This commit introduces a new `--complexity-report` option in the `registerCommands` function, allowing users to specify a custom path for the complexity report. The `expandTask` function is updated to accept the `complexityReportPath` from the context, ensuring it is utilized correctly during task expansion. Additionally, the `setTaskStatus` function now includes the `tag` parameter in the `readJSON` and `writeJSON` calls, improving task status updates with proper context. The `initTaskMaster` function is also modified to create parent directories for output paths, enhancing file handling robustness.

* fix(expand-task): Add complexityReportPath to context for task expansion tests

This commit updates the test for the `expandTask` function by adding the `complexityReportPath` to the context object. This change ensures that the complexity report path is correctly utilized in the test, aligning with recent enhancements to complexity report handling in the task manager.

* chore: implement suggested changes

* fix(parse-prd): Clarify tag parameter description for task organization
Updated the documentation for the `tag` parameter in the `parse-prd.js` file to provide a clearer context on its purpose for organizing tasks into separate task lists.

* Fix inconsistent tag resolution pattern.

* fix: Enhance complexity report path handling with tag support

This commit updates various functions to incorporate the `tag` parameter when resolving complexity report paths. The `expandTaskDirect`, `resolveComplexityReportPath`, and related tools now utilize the current tag context, improving consistency in task management. Additionally, the complexity report path is now correctly passed through the context in the `expand-task` and `set-task-status` tools, ensuring accurate report retrieval based on the active tag.
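A minimal sketch of tag-aware report path resolution, assuming the naming convention implied elsewhere in this PR (`task-complexity-report_<tag>.json` for non-default tags); the real `resolveComplexityReportOutputPath` may differ in details such as tag slugification:

```javascript
// Illustrative sketch: derive a tag-aware complexity report path. The helper
// name follows the commit messages; the body is an assumption.
import path from 'path';

function resolveComplexityReportOutputPath(projectRoot, tag, customPath) {
	// A user-supplied path wins outright.
	if (customPath) return path.resolve(projectRoot, customPath);
	// The default "master" tag keeps the legacy filename; other tags get a suffix.
	const suffix = !tag || tag === 'master' ? '' : `_${tag}`;
	return path.join(
		projectRoot,
		'.taskmaster',
		'reports',
		`task-complexity-report${suffix}.json`
	);
}
```

This keeps each tag's analysis in its own file, so re-running `analyze-complexity` under one tag cannot clobber another tag's report.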

* Updated the JSDoc for the `tag` parameter in the `show-task.js` file.

* Remove redundant comment on tag parameter in readJSON call

* Remove unused import for getTagAwareFilePath

* Add missed complexityReportPath to args for task expansion

* fix(tests): Enhance research tests with tag-aware functionality

This commit updates `research.test.js` to improve coverage of the `performResearch` function's tag-aware behavior. Key changes include mocking `findProjectRoot` to return a valid path, enhancing the `ContextGatherer` and `FuzzyTaskSearch` mocks, and adding tests for tag parameter handling: passing different tag values, verifying correct behavior when tags are provided, undefined, or null, and validating that tags flow through task discovery and context gathering.

* Remove unused import for

* fix: Refactor complexity report path handling and improve argument destructuring

This commit enhances the `expandTaskDirect` function by improving the destructuring of arguments for better readability. It also updates the `analyze.js` and `analyze-task-complexity.js` files to utilize the new `resolveComplexityReportOutputPath` function, ensuring tag-aware resolution of output paths. Additionally, logging has been added to provide clarity on the report path being used.

* test: Add complexity report tag isolation tests and improve path handling

This commit introduces a new test file for complexity report tag isolation, ensuring that different tags maintain separate complexity reports. It enhances the existing tests in `analyze-task-complexity.test.js` by updating expectations to use `expect.stringContaining` for file paths, improving robustness against path changes. The new tests cover various scenarios, including path resolution and report generation for both master and feature tags, ensuring no cross-tag contamination occurs.

* Update scripts/modules/task-manager/list-tasks.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update scripts/modules/task-manager/list-tasks.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* test(complexity-report): Fix tag slugification in filename expectations

- Update mocks to use slugifyTagForFilePath for cross-platform compatibility
- Replace raw tag values with slugified versions in expected filenames
- Fix test expecting 'feature/user-auth-v2' to expect 'feature-user-auth-v2'
- Align test with actual filename generation logic that sanitizes special chars
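The slugification these tests rely on can be sketched as follows; the helper name `slugifyTagForFilePath` comes from the commit above, while the exact replacement rules here are an assumption:

```javascript
// Sketch of the slugification the tests expect: characters that are unsafe
// in filenames (e.g. "/") are collapsed to hyphens, so "feature/user-auth-v2"
// becomes "feature-user-auth-v2". Rules here are illustrative.
function slugifyTagForFilePath(tag) {
	return String(tag)
		.replace(/[^a-zA-Z0-9_-]+/g, '-') // collapse runs of unsafe chars into one hyphen
		.replace(/^-+|-+$/g, ''); // trim leading/trailing hyphens
}

slugifyTagForFilePath('feature/user-auth-v2'); // → 'feature-user-auth-v2'
```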

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-07-20 00:51:41 +03:00


/**
* research.js
* Core research functionality for AI-powered queries with project context
*/
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import boxen from 'boxen';
import inquirer from 'inquirer';
import { highlight } from 'cli-highlight';
import { ContextGatherer } from '../utils/contextGatherer.js';
import { FuzzyTaskSearch } from '../utils/fuzzyTaskSearch.js';
import { generateTextService } from '../ai-services-unified.js';
import { getPromptManager } from '../prompt-manager.js';
import {
log as consoleLog,
findProjectRoot,
readJSON,
flattenTasksWithSubtasks
} from '../utils.js';
import {
displayAiUsageSummary,
startLoadingIndicator,
stopLoadingIndicator
} from '../ui.js';
/**
* Perform AI-powered research with project context
* @param {string} query - Research query/prompt
* @param {Object} options - Research options
* @param {Array<string>} [options.taskIds] - Task/subtask IDs for context
* @param {Array<string>} [options.filePaths] - File paths for context
* @param {string} [options.customContext] - Additional custom context
* @param {boolean} [options.includeProjectTree] - Include project file tree
* @param {string} [options.detailLevel] - Detail level: 'low', 'medium', 'high'
* @param {string} [options.projectRoot] - Project root directory
* @param {string} [options.tag] - Tag for the task
* @param {boolean} [options.saveToFile] - Whether to save results to file (MCP mode)
* @param {Object} [context] - Execution context
* @param {Object} [context.session] - MCP session object
* @param {Object} [context.mcpLog] - MCP logger object
* @param {string} [context.commandName] - Command name for telemetry
* @param {string} [context.outputType] - Output type ('cli' or 'mcp')
* @param {string} [outputFormat] - Output format ('text' or 'json')
* @param {boolean} [allowFollowUp] - Whether to allow follow-up questions (default: true)
* @returns {Promise<Object>} Research results with telemetry data
*/
async function performResearch(
query,
options = {},
context = {},
outputFormat = 'text',
allowFollowUp = true
) {
const {
taskIds = [],
filePaths = [],
customContext = '',
includeProjectTree = false,
detailLevel = 'medium',
projectRoot: providedProjectRoot,
tag,
saveToFile = false
} = options;
const {
session,
mcpLog,
commandName = 'research',
outputType = 'cli'
} = context;
const isMCP = !!mcpLog;
// Determine project root
const projectRoot = providedProjectRoot || findProjectRoot();
if (!projectRoot) {
throw new Error('Could not determine project root directory');
}
// Create consistent logger
const logFn = isMCP
? mcpLog
: {
info: (...args) => consoleLog('info', ...args),
warn: (...args) => consoleLog('warn', ...args),
error: (...args) => consoleLog('error', ...args),
debug: (...args) => consoleLog('debug', ...args),
success: (...args) => consoleLog('success', ...args)
};
// Show UI banner for CLI mode
if (outputFormat === 'text') {
console.log(
boxen(chalk.cyan.bold(`🔍 AI Research Query`), {
padding: 1,
borderColor: 'cyan',
borderStyle: 'round',
margin: { top: 1, bottom: 1 }
})
);
}
try {
// Initialize context gatherer
const contextGatherer = new ContextGatherer(projectRoot, tag);
// Auto-discover relevant tasks using fuzzy search to supplement provided tasks
let finalTaskIds = [...taskIds]; // Start with explicitly provided tasks
let autoDiscoveredIds = [];
try {
const tasksPath = path.join(
projectRoot,
'.taskmaster',
'tasks',
'tasks.json'
);
const tasksData = readJSON(tasksPath, projectRoot, tag);
if (tasksData && tasksData.tasks && tasksData.tasks.length > 0) {
// Flatten tasks to include subtasks for fuzzy search
const flattenedTasks = flattenTasksWithSubtasks(tasksData.tasks);
const fuzzySearch = new FuzzyTaskSearch(flattenedTasks, 'research');
const searchResults = fuzzySearch.findRelevantTasks(query, {
maxResults: 8,
includeRecent: true,
includeCategoryMatches: true
});
autoDiscoveredIds = fuzzySearch.getTaskIds(searchResults);
// Remove any auto-discovered tasks that were already explicitly provided
const uniqueAutoDiscovered = autoDiscoveredIds.filter(
(id) => !finalTaskIds.includes(id)
);
// Add unique auto-discovered tasks to the final list
finalTaskIds = [...finalTaskIds, ...uniqueAutoDiscovered];
if (outputFormat === 'text' && finalTaskIds.length > 0) {
// Sort task IDs numerically for better display
const sortedTaskIds = finalTaskIds
.map((id) => parseInt(id))
.sort((a, b) => a - b)
.map((id) => id.toString());
// Show different messages based on whether tasks were explicitly provided
if (taskIds.length > 0) {
const sortedProvidedIds = taskIds
.map((id) => parseInt(id))
.sort((a, b) => a - b)
.map((id) => id.toString());
console.log(
chalk.gray('Provided tasks: ') +
chalk.cyan(sortedProvidedIds.join(', '))
);
if (uniqueAutoDiscovered.length > 0) {
const sortedAutoIds = uniqueAutoDiscovered
.map((id) => parseInt(id))
.sort((a, b) => a - b)
.map((id) => id.toString());
console.log(
chalk.gray('+ Auto-discovered related tasks: ') +
chalk.cyan(sortedAutoIds.join(', '))
);
}
} else {
console.log(
chalk.gray('Auto-discovered relevant tasks: ') +
chalk.cyan(sortedTaskIds.join(', '))
);
}
}
}
} catch (error) {
// Silently continue without auto-discovered tasks if there's an error
logFn.debug(`Could not auto-discover tasks: ${error.message}`);
}
const contextResult = await contextGatherer.gather({
tasks: finalTaskIds,
files: filePaths,
customContext,
includeProjectTree,
format: 'research', // Use research format for AI consumption
includeTokenCounts: true
});
const gatheredContext = contextResult.context;
const tokenBreakdown = contextResult.tokenBreakdown;
// Load prompts using PromptManager
const promptManager = getPromptManager();
const promptParams = {
query: query,
gatheredContext: gatheredContext || '',
detailLevel: detailLevel,
projectInfo: {
root: projectRoot,
taskCount: finalTaskIds.length,
fileCount: filePaths.length
}
};
// Load prompts - the research template handles detail level internally
const { systemPrompt, userPrompt } = await promptManager.loadPrompt(
'research',
promptParams
);
// Count tokens for system and user prompts
const systemPromptTokens = contextGatherer.countTokens(systemPrompt);
const userPromptTokens = contextGatherer.countTokens(userPrompt);
const totalInputTokens = systemPromptTokens + userPromptTokens;
if (outputFormat === 'text') {
// Display detailed token breakdown in a clean box
displayDetailedTokenBreakdown(
tokenBreakdown,
systemPromptTokens,
userPromptTokens
);
}
// Only log detailed info in debug mode or MCP
if (outputFormat !== 'text') {
logFn.info(
`Calling AI service with research role, context size: ${tokenBreakdown.total} tokens (${gatheredContext.length} characters)`
);
}
// Start loading indicator for CLI mode
let loadingIndicator = null;
if (outputFormat === 'text') {
loadingIndicator = startLoadingIndicator('Researching with AI...\n');
}
let aiResult;
try {
// Call AI service with research role
aiResult = await generateTextService({
role: 'research', // Always use research role for research command
session,
projectRoot,
systemPrompt,
prompt: userPrompt,
commandName,
outputType
});
} finally {
if (loadingIndicator) {
stopLoadingIndicator(loadingIndicator);
}
}
const researchResult = aiResult.mainResult;
const telemetryData = aiResult.telemetryData;
const tagInfo = aiResult.tagInfo;
// Format and display results
// Initialize interactive save tracking
let interactiveSaveInfo = { interactiveSaveOccurred: false };
if (outputFormat === 'text') {
displayResearchResults(
researchResult,
query,
detailLevel,
tokenBreakdown
);
// Display AI usage telemetry for CLI users
if (telemetryData) {
displayAiUsageSummary(telemetryData, 'cli');
}
// Offer follow-up question option (only for initial CLI queries, not MCP)
if (allowFollowUp && !isMCP) {
interactiveSaveInfo = await handleFollowUpQuestions(
options,
context,
outputFormat,
projectRoot,
logFn,
query,
researchResult
);
}
}
// Handle MCP save-to-file request
if (saveToFile && isMCP) {
const conversationHistory = [
{
question: query,
answer: researchResult,
type: 'initial',
timestamp: new Date().toISOString()
}
];
const savedFilePath = await handleSaveToFile(
conversationHistory,
projectRoot,
context,
logFn
);
// Add saved file path to return data
return {
query,
result: researchResult,
contextSize: gatheredContext.length,
contextTokens: tokenBreakdown.total,
tokenBreakdown,
systemPromptTokens,
userPromptTokens,
totalInputTokens,
detailLevel,
telemetryData,
tagInfo,
savedFilePath,
interactiveSaveOccurred: false // MCP save-to-file doesn't count as interactive save
};
}
logFn.success('Research query completed successfully');
return {
query,
result: researchResult,
contextSize: gatheredContext.length,
contextTokens: tokenBreakdown.total,
tokenBreakdown,
systemPromptTokens,
userPromptTokens,
totalInputTokens,
detailLevel,
telemetryData,
tagInfo,
interactiveSaveOccurred:
interactiveSaveInfo?.interactiveSaveOccurred || false
};
} catch (error) {
logFn.error(`Research query failed: ${error.message}`);
if (outputFormat === 'text') {
console.error(chalk.red(`\n❌ Research failed: ${error.message}`));
}
throw error;
}
}
/**
* Display detailed token breakdown for context and prompts
* @param {Object} tokenBreakdown - Token breakdown from context gatherer
* @param {number} systemPromptTokens - System prompt token count
* @param {number} userPromptTokens - User prompt token count
*/
function displayDetailedTokenBreakdown(
tokenBreakdown,
systemPromptTokens,
userPromptTokens
) {
const parts = [];
// Custom context
if (tokenBreakdown.customContext) {
parts.push(
chalk.cyan('Custom: ') +
chalk.yellow(tokenBreakdown.customContext.tokens.toLocaleString())
);
}
// Tasks breakdown
if (tokenBreakdown.tasks && tokenBreakdown.tasks.length > 0) {
const totalTaskTokens = tokenBreakdown.tasks.reduce(
(sum, task) => sum + task.tokens,
0
);
const taskDetails = tokenBreakdown.tasks
.map((task) => {
const titleDisplay =
task.title.length > 30
? task.title.substring(0, 30) + '...'
: task.title;
return ` ${chalk.gray(task.id)} ${chalk.white(titleDisplay)} ${chalk.yellow(task.tokens.toLocaleString())} tokens`;
})
.join('\n');
parts.push(
chalk.cyan('Tasks: ') +
chalk.yellow(totalTaskTokens.toLocaleString()) +
chalk.gray(` (${tokenBreakdown.tasks.length} items)`) +
'\n' +
taskDetails
);
}
// Files breakdown
if (tokenBreakdown.files && tokenBreakdown.files.length > 0) {
const totalFileTokens = tokenBreakdown.files.reduce(
(sum, file) => sum + file.tokens,
0
);
const fileDetails = tokenBreakdown.files
.map((file) => {
const pathDisplay =
file.path.length > 40
? '...' + file.path.substring(file.path.length - 37)
: file.path;
return ` ${chalk.gray(pathDisplay)} ${chalk.yellow(file.tokens.toLocaleString())} tokens ${chalk.gray(`(${file.sizeKB}KB)`)}`;
})
.join('\n');
parts.push(
chalk.cyan('Files: ') +
chalk.yellow(totalFileTokens.toLocaleString()) +
chalk.gray(` (${tokenBreakdown.files.length} files)`) +
'\n' +
fileDetails
);
}
// Project tree
if (tokenBreakdown.projectTree) {
parts.push(
chalk.cyan('Project Tree: ') +
chalk.yellow(tokenBreakdown.projectTree.tokens.toLocaleString()) +
chalk.gray(
` (${tokenBreakdown.projectTree.fileCount} files, ${tokenBreakdown.projectTree.dirCount} dirs)`
)
);
}
// Prompts breakdown
const totalPromptTokens = systemPromptTokens + userPromptTokens;
const promptDetails = [
` ${chalk.gray('System:')} ${chalk.yellow(systemPromptTokens.toLocaleString())} tokens`,
` ${chalk.gray('User:')} ${chalk.yellow(userPromptTokens.toLocaleString())} tokens`
].join('\n');
parts.push(
chalk.cyan('Prompts: ') +
chalk.yellow(totalPromptTokens.toLocaleString()) +
chalk.gray(' (generated)') +
'\n' +
promptDetails
);
// Display the breakdown in a clean box
if (parts.length > 0) {
const content = parts.join('\n\n');
const tokenBox = boxen(content, {
title: chalk.blue.bold('Context Analysis'),
titleAlignment: 'left',
padding: { top: 1, bottom: 1, left: 2, right: 2 },
margin: { top: 0, bottom: 1 },
borderStyle: 'single',
borderColor: 'blue'
});
console.log(tokenBox);
}
}
/**
* Process research result text to highlight code blocks
* @param {string} text - Raw research result text
* @returns {string} Processed text with highlighted code blocks
*/
function processCodeBlocks(text) {
// Regex to match code blocks with optional language specification
const codeBlockRegex = /```(\w+)?\n([\s\S]*?)```/g;
return text.replace(codeBlockRegex, (match, language, code) => {
try {
// Default to javascript if no language specified
const lang = language || 'javascript';
// Highlight the code using cli-highlight
const highlightedCode = highlight(code.trim(), {
language: lang,
ignoreIllegals: true // Don't fail on unrecognized syntax
});
// Add a subtle border around code blocks
const codeBox = boxen(highlightedCode, {
padding: { top: 0, bottom: 0, left: 1, right: 1 },
margin: { top: 0, bottom: 0 },
borderStyle: 'single',
borderColor: 'dim'
});
return '\n' + codeBox + '\n';
} catch (error) {
// If highlighting fails, return the original code block with basic formatting
return (
'\n' +
chalk.gray('```' + (language || '')) +
'\n' +
chalk.white(code.trim()) +
'\n' +
chalk.gray('```') +
'\n'
);
}
});
}
/**
* Display research results in formatted output
* @param {string} result - AI research result
* @param {string} query - Original query
* @param {string} detailLevel - Detail level used
* @param {Object} tokenBreakdown - Detailed token usage
*/
function displayResearchResults(result, query, detailLevel, tokenBreakdown) {
// Header with query info
const header = boxen(
chalk.green.bold('Research Results') +
'\n\n' +
chalk.gray('Query: ') +
chalk.white(query) +
'\n' +
chalk.gray('Detail Level: ') +
chalk.cyan(detailLevel),
{
padding: { top: 1, bottom: 1, left: 2, right: 2 },
margin: { top: 1, bottom: 0 },
borderStyle: 'round',
borderColor: 'green'
}
);
console.log(header);
// Process the result to highlight code blocks
const processedResult = processCodeBlocks(result);
// Main research content in a clean box
const contentBox = boxen(processedResult, {
padding: { top: 1, bottom: 1, left: 2, right: 2 },
margin: { top: 0, bottom: 1 },
borderStyle: 'single',
borderColor: 'gray'
});
console.log(contentBox);
// Success footer
console.log(chalk.green('✅ Research completed'));
}
/**
* Handle follow-up questions and save functionality in interactive mode
* @param {Object} originalOptions - Original research options
* @param {Object} context - Execution context
* @param {string} outputFormat - Output format
* @param {string} projectRoot - Project root directory
* @param {Object} logFn - Logger function
* @param {string} initialQuery - Initial query for context
* @param {string} initialResult - Initial AI result for context
*/
async function handleFollowUpQuestions(
originalOptions,
context,
outputFormat,
projectRoot,
logFn,
initialQuery,
initialResult
) {
let interactiveSaveOccurred = false;
try {
// Initialize conversation history with the initial Q&A
const conversationHistory = [
{
question: initialQuery,
answer: initialResult,
type: 'initial',
timestamp: new Date().toISOString()
}
];
while (true) {
// Get user choice
const { action } = await inquirer.prompt([
{
type: 'list',
name: 'action',
message: 'What would you like to do next?',
choices: [
{ name: 'Ask a follow-up question', value: 'followup' },
{ name: 'Save to file', value: 'savefile' },
{ name: 'Save to task/subtask', value: 'save' },
{ name: 'Quit', value: 'quit' }
],
pageSize: 4
}
]);
if (action === 'quit') {
break;
}
if (action === 'savefile') {
// Handle save to file functionality
await handleSaveToFile(
conversationHistory,
projectRoot,
context,
logFn
);
continue;
}
if (action === 'save') {
// Handle save functionality
const saveResult = await handleSaveToTask(
conversationHistory,
projectRoot,
context,
logFn
);
if (saveResult) {
interactiveSaveOccurred = true;
}
continue;
}
if (action === 'followup') {
// Get the follow-up question
const { followUpQuery } = await inquirer.prompt([
{
type: 'input',
name: 'followUpQuery',
message: 'Enter your follow-up question:',
validate: (input) => {
if (!input || input.trim().length === 0) {
return 'Please enter a valid question.';
}
return true;
}
}
]);
if (!followUpQuery || followUpQuery.trim().length === 0) {
continue;
}
console.log('\n' + chalk.gray('─'.repeat(60)) + '\n');
// Build cumulative conversation context from all previous exchanges
const conversationContext =
buildConversationContext(conversationHistory);
// Create enhanced options for follow-up with full conversation context
const followUpOptions = {
...originalOptions,
taskIds: [], // Clear task IDs to allow fresh fuzzy search
customContext:
conversationContext +
(originalOptions.customContext
? `\n\n--- Original Context ---\n${originalOptions.customContext}`
: '')
};
// Perform follow-up research
const followUpResult = await performResearch(
followUpQuery.trim(),
followUpOptions,
context,
outputFormat,
false // allowFollowUp = false for nested calls
);
// Add this exchange to the conversation history
conversationHistory.push({
question: followUpQuery.trim(),
answer: followUpResult.result,
type: 'followup',
timestamp: new Date().toISOString()
});
}
}
} catch (error) {
// If there's an error with inquirer (e.g., non-interactive terminal),
// silently continue without follow-up functionality
logFn.debug(`Follow-up questions not available: ${error.message}`);
}
return { interactiveSaveOccurred };
}
/**
* Handle saving conversation to a task or subtask
* @param {Array} conversationHistory - Array of conversation exchanges
* @param {string} projectRoot - Project root directory
* @param {Object} context - Execution context
* @param {Object} logFn - Logger function
*/
async function handleSaveToTask(
conversationHistory,
projectRoot,
context,
logFn
) {
try {
// Import required modules
const { readJSON } = await import('../utils.js');
const updateTaskById = (await import('./update-task-by-id.js')).default;
const { updateSubtaskById } = await import('./update-subtask-by-id.js');
// Get task ID from user
const { taskId } = await inquirer.prompt([
{
type: 'input',
name: 'taskId',
message: 'Enter task ID (e.g., "15" for task or "15.2" for subtask):',
validate: (input) => {
if (!input || input.trim().length === 0) {
return 'Please enter a task ID.';
}
const trimmedInput = input.trim();
// Validate format: number or number.number
if (!/^\d+(\.\d+)?$/.test(trimmedInput)) {
return 'Invalid format. Use "15" for task or "15.2" for subtask.';
}
return true;
}
}
]);
const trimmedTaskId = taskId.trim();
// Format conversation thread for saving
const conversationThread = formatConversationForSaving(conversationHistory);
// Determine if it's a task or subtask
const isSubtask = trimmedTaskId.includes('.');
// Try to save - first validate the ID exists
const tasksPath = path.join(
projectRoot,
'.taskmaster',
'tasks',
'tasks.json'
);
if (!fs.existsSync(tasksPath)) {
console.log(
chalk.red('❌ Tasks file not found. Please run task-master init first.')
);
return;
}
const data = readJSON(tasksPath, projectRoot, context.tag);
if (!data || !data.tasks) {
console.log(chalk.red('❌ No valid tasks found.'));
return;
}
if (isSubtask) {
// Validate subtask exists
const [parentId, subtaskId] = trimmedTaskId
.split('.')
.map((id) => parseInt(id, 10));
const parentTask = data.tasks.find((t) => t.id === parentId);
if (!parentTask) {
console.log(chalk.red(`❌ Parent task ${parentId} not found.`));
return;
}
if (
!parentTask.subtasks ||
!parentTask.subtasks.find((st) => st.id === subtaskId)
) {
console.log(chalk.red(`❌ Subtask ${trimmedTaskId} not found.`));
return;
}
// Save to subtask using updateSubtaskById
console.log(chalk.blue('💾 Saving research conversation to subtask...'));
await updateSubtaskById(
tasksPath,
trimmedTaskId,
conversationThread,
false, // useResearch = false for simple append
context,
'text'
);
console.log(
chalk.green(
`✅ Research conversation saved to subtask ${trimmedTaskId}`
)
);
} else {
// Validate task exists
const taskIdNum = parseInt(trimmedTaskId, 10);
const task = data.tasks.find((t) => t.id === taskIdNum);
if (!task) {
console.log(chalk.red(`❌ Task ${trimmedTaskId} not found.`));
return;
}
// Save to task using updateTaskById with append mode
console.log(chalk.blue('💾 Saving research conversation to task...'));
await updateTaskById(
tasksPath,
taskIdNum,
conversationThread,
false, // useResearch = false for simple append
context,
'text',
true // appendMode = true
);
console.log(
chalk.green(`✅ Research conversation saved to task ${trimmedTaskId}`)
);
}
return true; // Indicate successful save
} catch (error) {
console.log(chalk.red(`❌ Error saving conversation: ${error.message}`));
logFn.error(`Error saving conversation: ${error.message}`);
return false; // Indicate failed save
}
}
/**
* Handle saving conversation to a file in .taskmaster/docs/research/
* @param {Array} conversationHistory - Array of conversation exchanges
* @param {string} projectRoot - Project root directory
* @param {Object} context - Execution context
* @param {Object} logFn - Logger function
* @returns {Promise<string>} Path to saved file
*/
async function handleSaveToFile(
conversationHistory,
projectRoot,
context,
logFn
) {
try {
// Create research directory if it doesn't exist
const researchDir = path.join(
projectRoot,
'.taskmaster',
'docs',
'research'
);
if (!fs.existsSync(researchDir)) {
fs.mkdirSync(researchDir, { recursive: true });
}
// Generate filename from first query and timestamp
const firstQuery = conversationHistory[0]?.question || 'research-query';
const timestamp = new Date().toISOString().split('T')[0]; // YYYY-MM-DD format
// Create a slug from the query (remove special chars, limit length)
const querySlug = firstQuery
.toLowerCase()
.replace(/[^a-z0-9\s-]/g, '') // Remove special characters
.replace(/\s+/g, '-') // Replace spaces with hyphens
.replace(/-+/g, '-') // Replace multiple hyphens with single
.substring(0, 50) // Limit length
.replace(/^-+|-+$/g, ''); // Remove leading/trailing hyphens
const filename = `${timestamp}_${querySlug}.md`;
const filePath = path.join(researchDir, filename);
// Format conversation for file
const fileContent = formatConversationForFile(
conversationHistory,
firstQuery
);
// Write file
fs.writeFileSync(filePath, fileContent, 'utf8');
const relativePath = path.relative(projectRoot, filePath);
console.log(
chalk.green(`✅ Research saved to: ${chalk.cyan(relativePath)}`)
);
logFn.success(`Research conversation saved to ${relativePath}`);
return filePath;
} catch (error) {
console.log(chalk.red(`❌ Error saving research file: ${error.message}`));
logFn.error(`Error saving research file: ${error.message}`);
throw error;
}
}
/**
* Format conversation history for saving to a file
* @param {Array} conversationHistory - Array of conversation exchanges
* @param {string} initialQuery - The initial query for metadata
* @returns {string} Formatted file content
*/
function formatConversationForFile(conversationHistory, initialQuery) {
const timestamp = new Date().toISOString();
const date = new Date().toLocaleDateString();
const time = new Date().toLocaleTimeString();
// Create metadata header
let content = `---
title: Research Session
query: "${initialQuery}"
date: ${date}
time: ${time}
timestamp: ${timestamp}
exchanges: ${conversationHistory.length}
---
# Research Session
`;
// Add each conversation exchange
conversationHistory.forEach((exchange, index) => {
if (exchange.type === 'initial') {
content += `## Initial Query\n\n**Question:** ${exchange.question}\n\n**Response:**\n\n${exchange.answer}\n\n`;
} else {
content += `## Follow-up ${index}\n\n**Question:** ${exchange.question}\n\n**Response:**\n\n${exchange.answer}\n\n`;
}
if (index < conversationHistory.length - 1) {
content += '---\n\n';
}
});
// Add footer
content += `\n---\n\n*Generated by Task Master Research Command* \n*Timestamp: ${timestamp}*\n`;
return content;
}
/**
* Format conversation history for saving to a task/subtask
* @param {Array} conversationHistory - Array of conversation exchanges
* @returns {string} Formatted conversation thread
*/
function formatConversationForSaving(conversationHistory) {
const timestamp = new Date().toISOString();
let formatted = `## Research Session - ${new Date().toLocaleDateString()} ${new Date().toLocaleTimeString()}\n\n`;
conversationHistory.forEach((exchange, index) => {
if (exchange.type === 'initial') {
formatted += `**Initial Query:** ${exchange.question}\n\n`;
formatted += `**Response:** ${exchange.answer}\n\n`;
} else {
formatted += `**Follow-up ${index}:** ${exchange.question}\n\n`;
formatted += `**Response:** ${exchange.answer}\n\n`;
}
if (index < conversationHistory.length - 1) {
formatted += '---\n\n';
}
});
return formatted;
}
/**
* Build conversation context string from conversation history
* @param {Array} conversationHistory - Array of conversation exchanges
* @returns {string} Formatted conversation context
*/
function buildConversationContext(conversationHistory) {
if (conversationHistory.length === 0) {
return '';
}
const contextParts = ['--- Conversation History ---'];
conversationHistory.forEach((exchange, index) => {
const questionLabel =
exchange.type === 'initial' ? 'Initial Question' : `Follow-up ${index}`;
const answerLabel =
exchange.type === 'initial' ? 'Initial Answer' : `Answer ${index}`;
contextParts.push(`\n${questionLabel}: ${exchange.question}`);
contextParts.push(`${answerLabel}: ${exchange.answer}`);
});
return contextParts.join('\n');
}
export { performResearch };