Conversation
8d428bb7 update doc-detective-resolver [skip ci]
b8629cf1 Set release version [skip ci]
65c772a2 Auto dev release: v3.6.0-dev.2 [skip ci]
a6e0924d Fix wait function to use driver.pause for synchronization (#401)
9e121a79 Auto dev release: v3.6.0-dev.1 [skip ci]
580f4d0e Standardize maxVariation logic and rename difference calculation (#400)
git-subtree-dir: core
git-subtree-split: 8d428bb7a62a75746668e8c2fbb21497c571b7b5
6ef908c0 Set releae version [skip ci]
cc6e1159 Auto dev release: v3.6.0-dev.1 [skip ci]
79e02b48 Add support for additional CommonMark syntax variations and improve JSON parsing (#93)
git-subtree-dir: resolver
git-subtree-split: 6ef908c06cc9ee921a8ca2c96d580ef04e107342
23b4b759 bump version to match doc-detective-resolver
d0d9c532 chore: bump doc-detective-resolver version to 3.6.1
git-subtree-dir: vscode
git-subtree-split: 23b4b759519429dcc5561d0504f1dcc470f6b227
9e985467 Dynamically size the spec list
c01d9209 Add SpecSelector component and enhance TestBuilder with YAML support and spec detection
e2877ba3 Enhance escape key handling in StepEditor, TestBuilder, and TestEditor for improved navigation and cancellation
3cde4bab Make lists scrollable
6699d071 Set specId
2c6f1056 Change terminal spacing behavior
750b5bda Change editor invokation
d3b47474 Initial editor
96a7564f Add Ink-based CLI UI
e72db834 update doc-detective-core [skip ci]
b1f7d41a Auto dev release: v3.5.0-dev.1 [skip ci]
2f0d9691 fix: Respect false config values for recursive and detectSteps options (#160)
66b4718a update doc-detective-core [skip ci]
ca5bf89f Drop Node 18 [skip ci]
e37aaf66 Switch npm ci to npm i [skip ci]
0d8dd8be Auto dev release: v3.4.1-dev.1 [skip ci]
81d4b489 Remove integration example
52e22a88 update doc-detective-core [skip ci]
c7749882 Fix core repo name [skip ci]
4b5e802e Auto dev release: v3.4.0-dev.10 [skip ci]
48a90d7e Fix patch versioning
03d05271 Auto dev release: v3.4.0-dev.9 [skip ci]
3ac6163d Fixed API reporter
1cbd321a Auto dev release: v3.4.0-dev.8 [skip ci]
e9b4ebd0 dep updates
6a67254b Auto dev release: v3.4.0-dev.7 [skip ci]
e65eedf3 Update deps
dee756d4 Fixed context result field
3c3c760d Auto dev release: v3.4.0-dev.6 [skip ci]
4c47721f dev dep
8bd4305b Fix resolved-tests path
3ab1b529 Auto dev release: v3.4.0-dev.5 [skip ci]
b98a5ca3 Update core version
45edccaa Auto dev release: v3.4.0-dev.4 [skip ci]
834c8c7d dev deps
b3715bef Auto dev release: v3.4.0-dev.3 [skip ci]
23f2f715 Add support for fetching resolved tests via DOC_DETECTIVE_API environment variable (#158)
b9b16490 Auto dev release: v3.4.0-dev.2 [skip ci]
5cb04ed3 Add support for DOC_DETECTIVE_CONFIG environment variable (#157)
5636f05b update doc-detective-core [skip ci]
f1069eb3 Auto dev release: v3.4.0-dev.1 [skip ci]
18a78df6 Updated deps
39578230 update doc-detective-core [skip ci]
e25eee03 update doc-detective-core [skip ci]
8fdbd6f4 Set version [skip ci]
446bf910 Auto dev release: v3.1.3-dev.2 [skip ci]
128bee14 Update package dependencies
git-subtree-dir: cli
git-subtree-split: 9e9854678a75b42b6003837f3ec8f4f6ee58c112
- Implement unsaved changes detection in TestBuilder
- Prompt user for confirmation when exiting with unsaved changes in TestBuilder
- Add unsaved changes tracking in TestEditor
- Confirm cancel action with unsaved changes in TestEditor
- Always use JSON syntax for test serialization for consistency
- Introduce hasTestMetadata function to check for saveable test metadata
- Add findLineStart, findLineEnd, and getLineIndentation utility functions
- Update prepareSourceUpdates to handle test-level and step-level source locations more effectively
This reverts commit 6ca40a4.
…ndle availability warnings in tests
…etection and remove redundant state
…stringification and enhance task guidelines
Walkthrough

Adds an interactive Ink-based builder and runner, AI-driven step refinement and Ollama orchestration, inline source serialization/update utilities, resolved-tests API integration, extensive schema/builder modules and tests, package/version/CI updates, and many new CLI UI components and helper utilities.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as CLI (src/index.js)
    participant Builder as Builder (runBuilder)
    participant UI as Ink UI (TestBuilder/DebugRunner)
    participant AI as AI (refineStep/providers)
    participant FS as Filesystem (sourceFileUtils)
    User->>CLI: run --editor / -e (optional --input)
    CLI->>Builder: runBuilder(options)
    Builder->>UI: mount TestBuilder / SpecSelector
    UI->>AI: request refineStep(failed step, context)
    AI-->>UI: refined step (validated)
    UI->>FS: prepareSourceUpdates & batchUpdateSourceContent
    FS-->>UI: update results (file diffs)
    UI-->>Builder: complete/exit
    Builder-->>CLI: returned control
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 10
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
common/src/schemas/dereferenceSchemas.js (1)
27-53: Nice add: include `sourceLocation_v3.schema.json`, but harden `$ref` rewriting to avoid breaking deref on new schemas.

`updateRefPaths()` currently:

- uses undeclared vars (`valueFile`, `valueAttribute`, `valuePath`) → implicit globals
- assumes every external `$ref` has a `#...` fragment, otherwise it produces `#undefined` (invalid ref)

Suggested fix:

```diff
 function updateRefPaths(schema) {
   if (schema === null || typeof schema !== "object") return schema;
   for (let [key, value] of Object.entries(schema)) {
     if (typeof value === "object") {
       updateRefPaths(value);
     }
-    if (key === "$ref" && !value.startsWith("#")) {
-      // File name of the referenced schema
-      valueFile = value.split("#")[0];
-      // Attribute path in the referenced schema
-      valueAttribute = value.split("#")[1];
-      valuePath = path.resolve(`${__dirname}/build/${valueFile}`);
-      schema[key] = `${valuePath}#${valueAttribute}`;
+    if (key === "$ref" && typeof value === "string" && !value.startsWith("#")) {
+      const [valueFile, valueAttribute] = value.split("#", 2);
+      const valuePath = path.resolve(`${__dirname}/build/${valueFile}`);
+      schema[key] = valueAttribute != null ? `${valuePath}#${valueAttribute}` : valuePath;
     }
   }
   return schema;
 }
```

cli/scripts/bump-sync-version-core.js (1)
21-101: Script lacks rollback mechanism for partial failures.

The script performs a sequence of operations (clean → version → commit → tag → push) without any rollback capability. If any step fails midway, the repository could be left in an inconsistent state:

- Failure after line 86: `package.json` modified but not committed
- Failure after line 90: Changes committed but not tagged
- Failure after line 93: Tag created but not pushed
This could require manual cleanup and may cause confusion in CI/CD pipelines.
Consider adding error handling with rollback:
```js
function main() {
  const initialBranch = execSync("git branch --show-current", { encoding: "utf8", stdio: "pipe" }).trim();
  try {
    // Existing logic here...
  } catch (error) {
    console.error("Error during version bump, attempting rollback...");
    try {
      // Reset to clean state
      execSync("git reset --hard HEAD", { stdio: "inherit" });
      execSync("git checkout " + initialBranch, { stdio: "inherit" });
      execSync("git clean -fd", { stdio: "inherit" });
    } catch (rollbackError) {
      console.error("Rollback failed. Manual cleanup may be required.");
    }
    throw error;
  }
}
```
🟡 Minor comments (15)
common/src/schemas/output_schemas/sourceLocation_v3.schema.json-78-105 (1)
78-105: Fix example 2 `commentFormat` mismatch (`linkReference` vs `htmlComment`).

Example 2's `originalText` is a markdown link; `commentFormat` should likely be `"linkReference"`:

```diff
-      "commentFormat": "htmlComment"
+      "commentFormat": "linkReference"
```

cli/scripts/bump-sync-version-core.js-60-60 (1)
60-60: Unused variable: `corePatch` is never used.

The `corePatch` variable is extracted on line 60 but never referenced in the version computation logic. This might indicate:
- Dead code that should be removed
- Missing logic that should compare or use the core patch version
If the core patch version isn't needed for the version sync logic, consider removing the variable to keep the code clean.
```diff
 const coreMajor = semver.major(coreVersion);
 const coreMinor = semver.minor(coreVersion);
-const corePatch = semver.patch(coreVersion);
-console.log(`Project version: ${projMajor}.${projMinor}.${projPatch}`);
-console.log(`Core version: ${coreMajor}.${coreMinor}.${corePatch}`);
+console.log(`Project version: ${projMajor}.${projMinor}.${projPatch}`);
+console.log(`Core version: ${coreMajor}.${coreMinor}.${semver.patch(coreVersion)}`);
```

Also applies to: 67-78
cli/src/cli/ResultsSummary.mjs-15-18 (1)
15-18: Potential null reference when calculating totals.

If any of `specs`, `tests`, `contexts`, or `steps` is defined but has missing status properties (e.g., `{ pass: 5 }` without `fail`), the addition will produce `NaN`. Consider defaulting each property.

```diff
   // Calculate totals
-  const totalSpecs = specs ? specs.pass + specs.fail + specs.warning + specs.skipped : 0;
-  const totalTests = tests ? tests.pass + tests.fail + tests.warning + tests.skipped : 0;
-  const totalContexts = contexts ? contexts.pass + contexts.fail + contexts.warning + contexts.skipped : 0;
-  const totalSteps = steps ? steps.pass + steps.fail + steps.warning + steps.skipped : 0;
+  const totalSpecs = specs ? (specs.pass || 0) + (specs.fail || 0) + (specs.warning || 0) + (specs.skipped || 0) : 0;
+  const totalTests = tests ? (tests.pass || 0) + (tests.fail || 0) + (tests.warning || 0) + (tests.skipped || 0) : 0;
+  const totalContexts = contexts ? (contexts.pass || 0) + (contexts.fail || 0) + (contexts.warning || 0) + (contexts.skipped || 0) : 0;
+  const totalSteps = steps ? (steps.pass || 0) + (steps.fail || 0) + (steps.warning || 0) + (steps.skipped || 0) : 0;
```

cli/src/cli/runner.js-42-46 (1)
42-46: Remove unused `resolvedTests` prop passed to App component.

The `resolvedTests` prop is passed to the App component on line 42 but the component (App.mjs, line 8) only accepts `config` and `state` props. The `resolvedTests` value is never used.

```js
return React.createElement(App, {
  config,
  resolvedTests: options.resolvedTests,
  state,
});
```

cli/src/cli/ResultsSummary.mjs-27-29 (1)
27-29: `allSpecsSkipped` doesn't account for warnings.

The condition checks only `pass === 0 && fail === 0 && skipped > 0`, but if there are warnings, it could incorrectly report "All items were skipped" when warnings exist.

```diff
   // Check if all skipped
   const allSpecsSkipped =
-    specs && specs.pass === 0 && specs.fail === 0 && specs.skipped > 0;
+    specs && specs.pass === 0 && specs.fail === 0 && (specs.warning || 0) === 0 && specs.skipped > 0;
```

cli/src/cli/builder/FieldEditor.mjs-69-71 (1)
69-71: Potential issue: `String(value)` for object values.

When `value` is an object, `String(value)` produces `"[object Object]"`, which would display incorrectly in the text input and corrupt the value on save.

Consider handling object values explicitly:

```diff
 const [localValue, setLocalValue] = useState(
-  value !== undefined ? String(value) : ''
+  value !== undefined
+    ? (typeof value === 'object' ? JSON.stringify(value) : String(value))
+    : ''
 );
```

Alternatively, if object fields should use a different editor entirely, add a guard earlier in the component.
cli/src/utils.js-166-173 (1)
166-173: Using unvalidated envConfig for logging creates potential issues.

On line 167-170, `log(envConfig, ...)` is called with `envConfig` as the config parameter, but at this point `envConfig` hasn't been validated and may not have a `logLevel` property in the expected format. This could cause unexpected behavior in the `log` function.

```diff
 if (!envValidation.valid) {
   log(
-    envConfig,
+    {},
     "error",
     `Invalid config from DOC_DETECTIVE_CONFIG environment variable. ${envValidation.errors}`
   );
```

cli/src/cli/builder/DebugRunner.mjs-725-727 (1)
725-727: Unused variable in AI refinement success rendering.
`refinedDisplay` is assigned but never used in the rendering below it. This appears to be leftover from a previous implementation.

```diff
   // AI refinement succeeded - show the suggestion
-  const refinedDisplay = getStepDisplay(aiRefinedStep);
   return React.createElement(
```

cli/src/index.js-25-32 (1)
25-32: computeFileHash function is defined but never used.

The `computeFileHash` function is implemented but not called anywhere in the file. The matching logic uses `canonicalizeSpec` for content comparison instead. This appears to be dead code.

Either remove the unused function or document its intended future use:

```diff
-/**
- * Compute a content hash for a file to enable content-based matching.
- * @param {string} filePath - Absolute path to the file
- * @returns {string|null} MD5 hash of file content, or null if file can't be read
- */
-function computeFileHash(filePath) {
-  try {
-    const content = fs.readFileSync(filePath, 'utf-8');
-    return crypto.createHash('md5').update(content).digest('hex');
-  } catch {
-    return null;
-  }
-}
```
cli/src/index.js-173-185 (1)
173-185: Priority 5 fallback has a potential logic issue.

The check `remainingSpecs.filter(s => !results.some(r => r.spec === s && r.filePath))` looks at the `results` array, but at this point in the loop, the current spec hasn't been added to `results` yet. This means the filter might not work as intended for edge cases.

Consider tracking matched specs separately or adjusting the logic:

```diff
-    if (!matchedPath && !error) {
-      const unmatchedInputs = inputPaths.filter(p => !usedInputPaths.has(p));
-      const remainingSpecs = detectedSpecs.filter(s =>
-        !results.some(r => r.spec === s && r.filePath)
-      );
-
-      // Only use this fallback if there's exactly one unmatched input and one remaining spec
-      if (unmatchedInputs.length === 1 && remainingSpecs.length === 1 && remainingSpecs[0] === detectedSpec) {
+    if (!matchedPath && !error) {
+      const unmatchedInputs = inputPaths.filter(p => !usedInputPaths.has(p));
+      // Check if this is the last unmatched spec and there's exactly one unmatched input
+      const matchedSpecCount = results.filter(r => r.filePath).length;
+      const remainingSpecCount = detectedSpecs.length - matchedSpecCount;
+
+      if (unmatchedInputs.length === 1 && remainingSpecCount === 1) {
```

cli/test/builder/schemaUtils.test.mjs-125-136 (1)
125-136: Conditional test may silently pass without assertions.

If `getStepTypeVariants('goTo')` returns an empty array, the property assertions inside the `if` block are never executed, making the test misleadingly green.

```diff
 it('each variant has expected properties', function () {
   const variants = getStepTypeVariants('goTo');
-  if (variants.length > 0) {
-    const variant = variants[0];
-    expect(variant).to.have.property('index');
-    expect(variant).to.have.property('title');
-    expect(variant).to.have.property('type');
-    expect(variant).to.have.property('schema');
-  }
+  expect(variants.length).to.be.greaterThan(0, 'goTo should have at least one variant');
+  const variant = variants[0];
+  expect(variant).to.have.property('index');
+  expect(variant).to.have.property('title');
+  expect(variant).to.have.property('type');
+  expect(variant).to.have.property('schema');
 });
```

common/src/ollama.js-39-40 (1)
39-40: `MODEL_PULL_TIMEOUT_MS` is exported but not used internally by `ensureModelAvailable`.

The constant is defined and exported (lines 40 and 392) for external use—test files import and use it to set mocha timeouts. However, the `ensureModelAvailable` function has no timeout mechanism built in. The streaming model pull operation (lines 285-340) lacks timeout protection and could hang indefinitely on slow connections or network failures. Consider adding timeout handling via `AbortController` or `Promise.race` within the function rather than relying on external timeout management.
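One way to add that protection, sketched under the assumption of a fetch-based pull against the standard Ollama HTTP API (this is illustrative, not the module's actual implementation, and the request field names may differ between Ollama versions):

```js
// Hypothetical sketch: bound a streaming Ollama pull with AbortController.
// Assumes Node 18+ fetch and the /api/pull endpoint; "model" vs. "name" in the
// request body depends on the Ollama version in use.
async function pullModelWithTimeout({ model, baseUrl, timeoutMs = MODEL_PULL_TIMEOUT_MS }) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(`${baseUrl}/api/pull`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, stream: true }),
      signal: controller.signal, // aborting cancels the streaming request
    });
    if (!response.ok) throw new Error(`Model pull failed: HTTP ${response.status}`);
    const reader = response.body.getReader();
    // Drain progress chunks until the stream ends or the timeout aborts it.
    while (true) {
      const { done } = await reader.read();
      if (done) break;
    }
    return true;
  } finally {
    clearTimeout(timer);
  }
}
```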
cli/src/cli/builder/TestBuilder.mjs-1124-1134 (1)

1124-1134: Wrap switch case declarations in a block scope.

The `runnableTests` declaration inside the switch case can be accessed by other cases, which is flagged by the linter. Wrap it in a block.

```diff
       case 'debugTest':
-        // If only one test with steps, go directly to debug
-        const runnableTests = tests.filter((t) => (t.steps || []).length > 0);
-        if (runnableTests.length === 1) {
-          const runnableIndex = tests.findIndex((t) => (t.steps || []).length > 0);
-          setDebugTestIndex(runnableIndex);
-          setPhase('debugRun');
-        } else {
-          setPhase('selectDebugTest');
+        {
+          // If only one test with steps, go directly to debug
+          const runnableTests = tests.filter((t) => (t.steps || []).length > 0);
+          if (runnableTests.length === 1) {
+            const runnableIndex = tests.findIndex((t) => (t.steps || []).length > 0);
+            setDebugTestIndex(runnableIndex);
+            setPhase('debugRun');
+          } else {
+            setPhase('selectDebugTest');
+          }
         }
         break;
```

cli/src/cli/builder/TestBuilder.mjs-880-896 (1)
880-896: Improve error feedback to user on save failure.

The save error is logged to console but the user only sees a return to menu without explanation. Consider displaying the error message in the UI.

```diff
 } catch (err) {
-  // TODO: Handle error better
-  console.error('Failed to save:', err);
-  setPhase('menu');
+  console.error('Failed to save:', err);
+  // Could add an error state to display to user
+  setInlineUpdateError(`Failed to save: ${err.message}`);
+  setPhase('inlineUpdateError');
 }
```
cli/src/cli/builder/sourceFileUtils.js-116-173 (1)
116-173: Wrap switch case declarations in block scopes.

The `attrs` (line 120) and `yamlLines` (line 135) declarations inside switch cases can leak to other cases. Wrap each case in a block.

```diff
 function serializeToSyntax(obj, syntaxFormat, type) {
   switch (syntaxFormat) {
-    case 'xml':
+    case 'xml': {
       // XML-style attributes: key="value" key=boolean
       const attrs = [];
       for (const [key, value] of Object.entries(obj)) {
         // ... existing code ...
       }
       return attrs.join(' ');
-
-    case 'yaml':
+    }
+    case 'yaml': {
       // Multiline YAML format
       const yamlLines = [];
       for (const [key, value] of Object.entries(obj)) {
         // ... existing code ...
       }
       return yamlLines.join('\n');
-
+    }
     case 'json':
     default:
       return JSON.stringify(obj);
   }
 }
```
🧹 Nitpick comments (47)
common/src/schemas/dereferenceSchemas.js (1)
127-141: Make `schemas.json` generation deterministic (remove unnecessary `async` and avoid `require` caching).

`files.forEach(async ...)` is misleading, and `require()` can cache between runs in the same process. Prefer sync read/parse:

```diff
 console.log("Building schemas.json file...");
 const schemas = {};
-files.forEach(async (file) => {
+for (const file of files) {
   const key = file.replace(".schema.json", "");
-  // Load schema from file
-  let schema = require(`${outputDir}/${file}`);
-  // Load into `schema` object
-  schemas[key] = schema;
-});
+  const schemaPath = path.resolve(`${outputDir}/${file}`);
+  schemas[key] = JSON.parse(fs.readFileSync(schemaPath, "utf8"));
+}
 fs.writeFileSync(
   `${__dirname}/schemas.json`,
   JSON.stringify(schemas, null, 2)
 );
```

common/src/schemas/output_schemas/sourceLocation_v3.schema.json (1)
9-13: Revisit “absolute path” for `file` to avoid leaking local filesystem details.

If this metadata is ever persisted/shared (reports, fixtures, CI artifacts), absolute paths can expose usernames/workspaces and are non-portable. Consider allowing repo-relative paths (or explicitly documenting where/why absolute paths are safe).
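If repo-relative paths were adopted, normalization at write time could be as simple as the following sketch (the `repoRoot` default shown is an assumption, not existing project code):

```js
const path = require('path');

// Hypothetical sketch: store file locations relative to the repo root
// so persisted reports don't leak absolute local paths.
function toRepoRelative(absolutePath, repoRoot = process.cwd()) {
  const relative = path.relative(repoRoot, absolutePath);
  // Normalize to forward slashes for portability across platforms.
  return relative.split(path.sep).join('/');
}
```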
cli/scripts/bump-sync-version-core.js (4)
62-63: Consider consistent capitalization.

Line 62 uses "Project version" (capitalized) while line 63 uses "core version" (lowercase). Consider making them consistent.

```diff
 console.log(`Project version: ${projMajor}.${projMinor}.${projPatch}`);
-console.log(`core version: ${coreMajor}.${coreMinor}.${corePatch}`);
+console.log(`Core version: ${coreMajor}.${coreMinor}.${corePatch}`);
```
79-86: Good validation approach, but consider checking for existing tags.

The validation of the computed version is well-placed. However, the script doesn't check if the tag already exists before attempting to create it at line 93, which could lead to unclear error messages.

Consider adding a tag existence check:

```diff
 // Validate the new version before setting it
 if (!semver.valid(newVersion)) {
   console.error(`Error: Generated invalid version: ${newVersion}`);
   process.exit(1);
 }
+
+// Check if tag already exists
+try {
+  const existingTags = execSync("git tag -l", { encoding: "utf8", stdio: "pipe" });
+  if (existingTags.split('\n').includes(`v${newVersion}`)) {
+    console.error(`Error: Tag v${newVersion} already exists`);
+    process.exit(1);
+  }
+} catch (error) {
+  console.error("Error checking existing tags");
+  process.exit(1);
+}
 execCommand(`npm version --no-git-tag-version ${newVersion}`);
```
90-90: Consider including version number in commit message.

The commit message would be more informative if it included the actual version number being set, making the git history easier to understand at a glance.

```diff
-execCommand('git commit -m "update doc-detective-core [skip ci]"');
+execCommand(`git commit -m "chore: bump version to ${newVersion} [skip ci]"`);
```
8-19: Consider including error details in error messages.

The error handler logs the command that failed but doesn't include details about why it failed, which could make debugging more difficult.

```diff
 function execCommand(command, options = {}) {
   try {
     return execSync(command, {
       encoding: "utf8",
       stdio: "inherit",
       ...options,
     });
   } catch (error) {
-    console.error(`Error executing command: ${command}`);
+    console.error(`Error executing command: ${command}`);
+    console.error(`Error details: ${error.message}`);
     process.exit(1);
   }
 }
```

common/src/ai.js (3)
86-134: Consider adding error handling for ensureModelAvailable.

The call to `ensureModelAvailable` at line 93 could fail (e.g., Ollama unavailable, network issues, model pull failures). Consider wrapping this in a try-catch block to provide clearer error messages when model availability checks fail.

Apply this diff:

```diff
 if (model.startsWith("ollama/")) {
   const ollamaBaseUrl =
     config.integrations?.ollama?.baseUrl || DEFAULT_OLLAMA_BASE_URL;
-  await ensureModelAvailable({
-    model: detectedModel,
-    baseUrl: ollamaBaseUrl,
-  });
+  try {
+    await ensureModelAvailable({
+      model: detectedModel,
+      baseUrl: ollamaBaseUrl,
+    });
+  } catch (error) {
+    throw new Error(`Failed to ensure Ollama model availability: ${error.message}`);
+  }
   return {
     provider: "ollama",
     model: detectedModel,
     apiKey: null,
     baseURL: ollamaBaseUrl,
   };
 }
```
252-254: Enhance isZodSchema detection for robustness.

The current check `typeof schema.safeParse === "function"` could produce false positives if other objects happen to have a `safeParse` method. Consider a more robust check.

Apply this diff:

```diff
 const isZodSchema = (schema) => {
-  return schema && typeof schema.safeParse === "function";
+  return schema && typeof schema.safeParse === "function" && typeof schema._def !== "undefined";
 };
```

The `_def` property is an internal Zod property that exists on all Zod schemas, making this check more specific.
642-670: Clarify the role of the provider parameter.

The `provider` parameter is documented in the JSDoc (line 628) but appears to be unused in the implementation. The function always uses `detectProvider` to determine the provider. Either remove the parameter from the documentation or implement explicit provider override logic.

If the parameter should be supported, apply this diff:

```diff
   // Determine provider, model, and API key
-  const detected = await detectProvider(config, model);
+  const detected = provider
+    ? { provider, model: modelMap[model] || model, apiKey: apiKey || getApiKey(config, provider), baseURL }
+    : await detectProvider(config, model);
```

Otherwise, remove `provider` from the JSDoc at line 628.

cli/src/cli/builder/components.mjs (2)
325-365: Consider more robust JSON syntax highlighting.

The current color detection uses simple string matching (line 332-336), which could produce incorrect colors for edge cases (e.g., strings containing `": true"` would be colored as boolean). For better accuracy, consider a proper JSON lexer or parser-based highlighting.

This is a nice-to-have improvement and not critical for functionality.
414-421: Consider UX for confirmation prompt.

The prompt accepts Enter (return) as confirmation alongside 'y' at line 416. This could lead to accidental confirmations if users press Enter without reading. Consider requiring explicit 'y' input for destructive actions.

For safer UX, apply this diff:

```diff
 useInput((input, key) => {
-  if (input.toLowerCase() === 'y' || key.return) {
+  if (input.toLowerCase() === 'y') {
     onConfirm();
   } else if (input.toLowerCase() === 'n' || key.escape) {
     onCancel();
   }
 });
```

cli/src/cli/App.mjs (1)
32-35: Enhance error display with stack traces in development.

The error display only shows the error message. Consider including the stack trace when in development mode for better debugging.

Apply this diff:

```diff
 state.phase === 'error' && h(Box, { flexDirection: 'column' },
   h(Text, { color: 'red', bold: true }, '✖ Error'),
-  h(Text, { color: 'red' }, state.error?.message || String(state.error))
+  h(Text, { color: 'red' }, state.error?.message || String(state.error)),
+  config?.debug && state.error?.stack && h(Text, { color: 'gray', dimColor: true }, state.error.stack)
 )
```

common/dev/dev.js (2)
4-25: Add error handling for refineStep failures.

The `testRefineStep` function doesn't handle errors from `refineStep`. If refinement fails for a model, the entire script will crash. Add try-catch to gracefully handle failures and continue testing other models.

Apply this diff:

```diff
 const testRefineStep = async (model) => {
   const originalStep = {
     stepId: "step-123",
     find: { selector: ".old-button-class" },
   };
   const startTime = performance.now();
-  const refinedStep = await refineStep({
-    step: originalStep,
-    failureMessage: "Element not found: .old-button-class",
-    context: {
-      dom: `<html><body>
-      <button class="new-submit-button" id="submit">Submit</button>
-      </body></html>`,
+  try {
+    const refinedStep = await refineStep({
+      step: originalStep,
+      failureMessage: "Element not found: .old-button-class",
+      context: {
+        dom: `<html><body>
+        <button class="new-submit-button" id="submit">Submit</button>
+        </body></html>`,
+      },
+    });
+    const endTime = performance.now();
+    const duration = endTime - startTime;
+
+    console.log("Refined Step:", refinedStep);
+    return { model, duration, success: true };
+  } catch (error) {
+    const endTime = performance.now();
+    const duration = endTime - startTime;
+
+    console.error(`Failed to refine step with model ${model}:`, error.message);
+    return { model, duration, success: false, error: error.message };
-    },
-  });
-  const endTime = performance.now();
-  const duration = endTime - startTime;
-
-  console.log("Refined Step:", refinedStep);
-  return { model, duration };
+  }
 };
```
27-49: Update results display to show success/failure status.

Since we're testing multiple models, the results summary should indicate which models succeeded or failed during refinement.

Apply this diff:

```diff
 // Print results summary
 console.log("\n=== Refinement Duration Summary ===");
-results.forEach(({ model, duration }) => {
-  console.log(`${model}: ${duration.toFixed(2)}ms`);
+results.forEach(({ model, duration, success, error }) => {
+  const status = success ? '✓' : '✖';
+  const errorInfo = error ? ` - Error: ${error}` : '';
+  console.log(`${status} ${model}: ${duration.toFixed(2)}ms${errorInfo}`);
 });

 // Print sorted by duration
 console.log("\n=== Sorted by Duration (fastest first) ===");
-[...results].sort((a, b) => a.duration - b.duration).forEach(({ model, duration }) => {
-  console.log(`${model}: ${duration.toFixed(2)}ms`);
+[...results]
+  .filter(r => r.success)
+  .sort((a, b) => a.duration - b.duration)
+  .forEach(({ model, duration }) => {
+    console.log(`✓ ${model}: ${duration.toFixed(2)}ms`);
 });
```

cli/src/cli/builder/builderRunner.js (1)
14-16: Consider validating options input.

The function destructures `specs` and `initialSpec` from options but doesn't validate that `specs` is actually an array. If a caller passes `specs: null` or a non-array value, the `.length` check on line 35 could throw.

```diff
 async function runBuilder(options = {}) {
-  const { outputDir = process.cwd(), initialSpec = null, specs = [] } = options;
+  const { outputDir = process.cwd(), initialSpec = null, specs: rawSpecs = [] } = options;
+  const specs = Array.isArray(rawSpecs) ? rawSpecs : [];
```

cli/test/builder/StepEditor.test.mjs (2)
9-11: Unused `require` variable.

`createRequire` is imported and `require` is created but never used in this file. This appears to be copied boilerplate that can be removed.

```diff
 import React from 'react';
 import { render } from 'ink-testing-library';
-import { createRequire } from 'module';
-
-const require = createRequire(import.meta.url);
 import StepEditor from '../../src/cli/builder/StepEditor.mjs';
```
22-28: Dynamic chai import with global assignment.

Setting `global.expect` is unnecessary if `expect` is only used within this file. The local variable binding is sufficient.

```diff
 let expect;
 before(async function () {
   const chai = await import('chai');
   expect = chai.expect;
-  global.expect = expect;
 });
```

cli/test/builder/SpecSelector.test.mjs (2)
9-15: Unused imports and variables.
`createRequire` and the resulting `require` variable are not used. `__filename` and `__dirname` are computed but only `__dirname` is used for `tempDir`. Consider removing unused code.

```diff
 import React from 'react';
 import { render } from 'ink-testing-library';
-import { createRequire } from 'module';
 import * as path from 'path';
 import { fileURLToPath } from 'url';

-const require = createRequire(import.meta.url);
-const __filename = fileURLToPath(import.meta.url);
-const __dirname = path.dirname(__filename);
+const __dirname = path.dirname(fileURLToPath(import.meta.url));
```
25-31: Same global.expect pattern as other test files.

As noted in StepEditor.test.mjs, assigning to `global.expect` is unnecessary.

cli/test/builder/DebugRunner.test.mjs (1)
9-12: Unused `require` variable.

Similar to other test files, `createRequire` is imported and `require` is created but only used in the skipped smoke test block at line 419. Consider moving the require creation inside the smoke test block or removing it if not needed.

cli/test/builder/inlineSourceUpdates.test.mjs (2)
71-76: Use `fs.rmSync` instead of deprecated `fs.rmdirSync`.

`fs.rmdirSync` is deprecated for removing non-empty directories. Although the directory should be empty after `afterEach` cleanup, using `fs.rmSync` with `recursive: true` is more robust and consistent with the cleanup in `afterEach`.

```diff
 after(function () {
   // Remove temp directory
   if (fs.existsSync(tempDir)) {
-    fs.rmdirSync(tempDir);
+    fs.rmSync(tempDir, { recursive: true, force: true });
   }
 });
```
61-69: Temp file cleanup could fail silently.

If `fs.unlinkSync` throws for any file (e.g., permission error), the loop will stop and remaining files won't be cleaned. Consider wrapping in try-catch or using `fs.rmSync` on the directory.

```diff
 afterEach(function () {
   // Clean up temp files after each test
   if (fs.existsSync(tempDir)) {
     const files = fs.readdirSync(tempDir);
     for (const file of files) {
-      fs.unlinkSync(path.join(tempDir, file));
+      try {
+        fs.unlinkSync(path.join(tempDir, file));
+      } catch (err) {
+        // Ignore cleanup errors
+      }
     }
   }
 });
```

cli/src/cli/builder/StepEditor.mjs (3)
228-239: Consider adjusting Esc behavior for type/variant selection views.

When in `selectType` or `selectVariant` view, pressing Esc calls `onCancel()` which exits the entire StepEditor. Users might expect Esc to return to the previous view (menu) instead of fully canceling. This could cause accidental loss of edits if the user is just trying to navigate back.

Consider changing to:

```diff
 if (key.escape) {
   if (view === 'menu') {
     onCancel();
-  } else if (view === 'selectType' || view === 'selectVariant') {
-    onCancel();
   } else {
     setView('menu');
     setEditingField(null);
   }
 }
```
625-627: Incomplete implementation: Delete field submenu.

The "Delete field" action is listed in the menu but has no implementation (TODO comment). Selecting this option does nothing.
Would you like me to help implement the delete field submenu, or should I open an issue to track this?
556-558: Consider hiding "Delete field" option until implemented.

The delete field menu item is shown but does nothing when selected due to the incomplete TODO. This could confuse users.

```diff
-  if (currentFields.length > 0) {
-    menuItems.push({ label: '🗑️ Delete field...', value: 'deleteField' });
-  }
+  // TODO: Uncomment when delete field submenu is implemented
+  // if (currentFields.length > 0) {
+  //   menuItems.push({ label: '🗑️ Delete field...', value: 'deleteField' });
+  // }
```

cli/test/resolvedTests.test.js (1)
10-13: Relative `staticDir` path may be fragile.

The `staticDir: "./test/server/public"` is relative to the current working directory at runtime. If tests are run from a different directory, this path resolution could fail.

```diff
+const __dirname = path.dirname(__filename);
+// Note: __filename would need: const __filename = require('url').fileURLToPath(import.meta.url); for ESM
+// But this is CommonJS, so use __dirname directly if available, or:
 const server = createServer({
   port: 8093,
-  staticDir: "./test/server/public",
+  staticDir: path.resolve(__dirname, "./server/public"),
 });
```

cli/src/cli/builder/FieldEditor.mjs (2)
280-310: Integer validation missing.

For `integer` type fields, the validation only checks if the value is a valid number but doesn't verify it's actually an integer. Users could submit `1.5` for an integer field.

```diff
 const num = Number(val);
 if (isNaN(num)) {
   setError('Must be a number');
+} else if (effectiveField.type === 'integer' && !Number.isInteger(num)) {
+  setError('Must be a whole number');
 } else if (effectiveField.minimum !== undefined && num < effectiveField.minimum) {
```

And in `handleSubmit`:

```diff
 const handleSubmit = () => {
   if (error) return;
   if (localValue === '' && effectiveField.default === undefined) {
     setError('A value is required');
     return;
   }
   const num = localValue === '' ? effectiveField.default : Number(localValue);
+  if (effectiveField.type === 'integer' && !Number.isInteger(num)) {
+    setError('Must be a whole number');
+    return;
+  }
   onChange(num);
   onSubmit(num);
 };
```

cli/src/cli/builder/TestEditor.mjs (1)
469-470: Consider adding confirmation for test deletion.

Selecting "Delete test" immediately calls `onDelete()` without any confirmation prompt. Given that unsaved changes already trigger a confirmation on cancel, it would be consistent to confirm before deleting the entire test.

You could add a `confirmDelete` view similar to `confirmCancel`, or simply reuse a generic confirmation component.
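One possible shape for such a view, reusing the existing confirm-prompt pattern; the `ConfirmPrompt` component name and the `view`/`setView` state shown here are illustrative assumptions based on the surrounding review, not the component's actual API:

```js
// Hypothetical sketch: gate onDelete() behind a confirmation view.
if (view === 'confirmDelete') {
  return React.createElement(ConfirmPrompt, {
    message: 'Delete this test? This cannot be undone.',
    onConfirm: () => onDelete(),       // proceed with deletion
    onCancel: () => setView('menu'),   // back out without deleting
  });
}

// In the menu handler, switch to the confirmation view instead of deleting directly:
// case 'deleteTest': setView('confirmDelete'); break;
```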
cli/test/builder/TestBuilder.test.mjs (1)

449-527: Consider increasing delay for keyboard input tests or using a more robust waiting strategy.

The `delay(50)` calls after `stdin.write()` are arbitrary and could cause flaky tests on slower systems or under load. Consider using a retry/poll pattern or increasing the delay for reliability.

```js
// Example: polling helper that waits for a condition
const waitFor = async (condition, timeout = 1000, interval = 50) => {
  const start = Date.now();
  while (Date.now() - start < timeout) {
    if (condition()) return;
    await delay(interval);
  }
  throw new Error('Condition not met within timeout');
};

// Usage:
stdin.write('my-spec');
await waitFor(() => lastFrame().includes('my-spec'));
```

cli/src/cli/builder/SpecSelector.mjs (3)
26-38: Dynamic import pattern looks good, but consider adding missing `setImportError(null)` on success.

The dynamic import of TestBuilder handles both success and error cases. However, when the import succeeds, `setImportError` is not explicitly cleared. While this isn't a bug since `selectedSpec` is checked first in the error condition, it would be cleaner to reset it.

```diff
 import('./TestBuilder.mjs')
   .then(module => {
     setTestBuilder(() => module.default);
+    setImportError(null);
   })
```
104-129: Menu items array is mutated after initial creation.

The `items` array is built from `specs.map()` and then mutated with `push()` on lines 132-141. While this works, it's slightly inconsistent with functional patterns. Consider using spread or concat for clarity:

```diff
-  const items = specs.map((specData, index) => {
+  const specItems = specs.map((specData, index) => {
     // ... existing mapping logic
   });

-  // Add option to create new spec
-  items.push({
-    label: '➕ Create new specification',
-    value: 'new',
-  });
-
-  // Add exit option
-  items.push({
-    label: '🚪 Exit',
-    value: 'exit',
-  });
+  const items = [
+    ...specItems,
+    { label: '➕ Create new specification', value: 'new' },
+    { label: '🚪 Exit', value: 'exit' },
+  ];
```
144-190: SpecItem defined inside component will be recreated on every render.

The `SpecItem` component is defined inside `SpecSelector`, causing it to be a new function reference on each render. This could cause unnecessary re-renders of the list items. Consider memoizing or extracting it outside the component.

Move `SpecItem` outside of `SpecSelector` or wrap with `React.memo`:

```js
// Define outside SpecSelector
const SpecItem = React.memo(({ isSelected, label, value, fileName, dirName, testCount, isValid, validationErrors }) => {
  // ... component body
});
```

cli/src/utils.js (2)
74-148: getResolvedTestsFromEnv uses process.exit(1) which prevents graceful error handling.

Using `process.exit(1)` in utility functions makes them hard to test and prevents callers from handling errors gracefully. Consider throwing errors instead and letting the caller decide how to handle them.

```diff
 if (!apiConfig.accountId || !apiConfig.url || !apiConfig.token || !apiConfig.contextIds) {
-  log(
-    config,
-    "error",
-    "Invalid DOC_DETECTIVE_API: must contain 'accountId', 'url', 'token', and 'contextIds' properties"
-  );
-  process.exit(1);
+  throw new Error("Invalid DOC_DETECTIVE_API: must contain 'accountId', 'url', 'token', and 'contextIds' properties");
 }
```

Then handle the error in the calling code.
732-792: reportResults silently swallows errors - consider propagating to caller.

The function catches errors and only logs them to console, but doesn't throw or return an error status. The caller has no way to know if reporting failed.

```diff
 } catch (error) {
   console.error(
     `Error reporting results to ${apiConfig.url}/contexts: ${error.message}`
   );
+  throw error; // Or return { success: false, error }
 }
```

cli/src/index.js (1)
40-48: canonicalizeSpec may not handle nested objects consistently.

Using `Object.keys(clone).sort()` only sorts top-level keys. Nested objects won't have their keys sorted, which could cause false negatives in comparison. Consider using a recursive key-sorting function or a library like `json-stable-stringify`.

```diff
 function canonicalizeSpec(spec) {
   const clone = JSON.parse(JSON.stringify(spec));
   // Remove metadata fields that don't represent actual spec content
   delete clone.sourcePath;
   delete clone.contentPath;
   delete clone.specId;
-  // Sort keys for consistent comparison
-  return JSON.stringify(clone, Object.keys(clone).sort());
+  // Recursively sort keys for consistent comparison
+  const sortKeys = (obj) => {
+    if (Array.isArray(obj)) return obj.map(sortKeys);
+    if (obj && typeof obj === 'object') {
+      return Object.keys(obj).sort().reduce((acc, key) => {
+        acc[key] = sortKeys(obj[key]);
+        return acc;
+      }, {});
+    }
+    return obj;
+  };
+  return JSON.stringify(sortKeys(clone));
 }
```

cli/src/cli/builder/DebugRunner.mjs (3)
10-14: Top-level await for dynamic imports may cause issues.

Using top-level `await import()` for `os`, `path`, and `fs` at module load time:
- Requires the module to be in an ES module context with top-level await support
- May cause timing issues if the module is imported before the awaits complete
These are Node.js built-ins and don't need dynamic import. Consider using regular imports or synchronous `require`.

```diff
 import { createRequire } from 'module';
 const require = createRequire(import.meta.url);
-const os = await import('os');
-const path = await import('path');
-const fs = await import('fs');
+const os = require('os');
+const path = require('path');
+const fs = require('fs');
```

Then update usages from `os.default.tmpdir()` to `os.tmpdir()`, etc.
141-168: useEffect for runner initialization has unnecessary dependency.

The effect depends on `[phase, localTest.steps]`, but it only runs when `phase === 'init'`. The `localTest.steps` dependency is unnecessary and could cause re-runs if steps change while in a different phase.

```diff
-  }, [phase, localTest.steps]);
+  }, [phase]);
```

The `localTest.steps` is read inside but doesn't need to trigger re-execution since the phase check gates it.
207-273: attemptAiRefinement handles provider detection correctly.

The async call to `detectProvider()` without config/model args relies on environment variables for API keys. The fallback to `stepResult` phase when no provider is available is appropriate.

One concern: The screenshot temp file is created then immediately deleted. If `saveScreenshot` returns a buffer, consider using an in-memory approach instead of file I/O.

Check if `runner.saveScreenshot` can return a buffer directly without writing to disk, avoiding the temp file creation/deletion.

cli/test/builder/schemaUtils.test.mjs (2)
44-50: Global state in test setup.

Setting `global.expect` in the `before` hook can cause test isolation issues if other test files depend on or modify this global. Consider using module-scoped `expect` only or ensuring test isolation between files.

```diff
 // Use dynamic import for chai to support ESM
 let expect;
 before(async function () {
   const chai = await import('chai');
   expect = chai.expect;
-  global.expect = expect;
 });
```
439-446: Consider adding validation assertion for created default steps.

The test verifies that each default step has the expected property key, but doesn't validate that the created step passes schema validation. This could catch issues where `createDefaultStep` generates invalid defaults.

```diff
 it('creates valid steps that pass validation', function () {
   const stepTypes = getStepTypes();
   for (const stepType of stepTypes) {
     const step = createDefaultStep(stepType);
     expect(step).to.have.property(stepType);
+    const result = validateStep(step);
+    expect(result.valid, `Default ${stepType} step should be valid`).to.be.true;
   }
 });
```

cli/test/builder/sourceFileUtils.test.mjs (1)
74-79: Deprecated `fs.rmdirSync` usage.

`fs.rmdirSync` is deprecated in favor of `fs.rmSync` with the `recursive` option. While it works for empty directories (since afterEach cleans files), using the newer API is recommended.

```diff
 after(function () {
   // Remove temp directory
   if (fs.existsSync(tempDir)) {
-    fs.rmdirSync(tempDir);
+    fs.rmSync(tempDir, { recursive: true });
   }
 });
```

common/src/refineStep.js (1)
290-302: Parameter mutation in `postProcessStep`.

The function mutates `refinedStep` directly. While this works, consider creating a shallow copy to avoid potential side effects if the AI result object is used elsewhere.

```diff
 const postProcessStep = (refinedStep, originalStep) => {
+  const result = { ...refinedStep };
+
   // Preserve stepId from original if present and not in refined
-  if (originalStep?.stepId && !refinedStep?.stepId) {
-    refinedStep.stepId = originalStep.stepId;
+  if (originalStep?.stepId && !result.stepId) {
+    result.stepId = originalStep.stepId;
   }

   // Preserve sourceLocation from original if present
-  if (originalStep?.sourceLocation && !refinedStep?.sourceLocation) {
-    refinedStep.sourceLocation = originalStep.sourceLocation;
+  if (originalStep?.sourceLocation && !result.sourceLocation) {
+    result.sourceLocation = originalStep.sourceLocation;
   }

-  return refinedStep;
+  return result;
 };
```

common/src/ollama.js (2)
180-185: Inconsistent indentation in try-catch block.

The indentation inside the try-catch is inconsistent, which affects readability.

```diff
 try {
-    await startOllamaContainer();
+  await startOllamaContainer();
 } catch (error) {
   console.error(`Failed to start Ollama container: ${error.message}`);
   return false;
 }
```
231-237: Edge case: negative byte values.
`formatBytes` handles zero but not negative values, which would produce `NaN undefined` due to `Math.log` of a negative number. While unlikely in practice, defensive coding would handle this.

```diff
 const formatBytes = (bytes) => {
-  if (bytes === 0) return "0 B";
+  if (bytes <= 0) return "0 B";
   const k = 1024;
   const sizes = ["B", "KB", "MB", "GB", "TB"];
   const i = Math.floor(Math.log(bytes) / Math.log(k));
   return `${(bytes / Math.pow(k, i)).toFixed(2)} ${sizes[i]}`;
 };
```

cli/src/cli/builder/TestBuilder.mjs (1)
168-177: Consider deep equality check instead of JSON serialization for change detection.

Using `JSON.stringify` for comparison works but has edge cases: property order differences could cause false positives, and it doesn't handle `undefined` values, `Date` objects, or circular references properly. For test specs, this is likely acceptable, but worth noting.
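If stricter change detection is ever needed, one low-cost option is comparing key-sorted serializations rather than raw `JSON.stringify` output. This sketch is illustrative only and assumes plain JSON-safe spec objects (no `Date` or circular values):

```js
// Hypothetical sketch: order-insensitive comparison of two JSON-safe objects.
function sortKeysDeep(value) {
  if (Array.isArray(value)) return value.map(sortKeysDeep);
  if (value && typeof value === 'object') {
    return Object.keys(value)
      .sort()
      .reduce((acc, key) => {
        acc[key] = sortKeysDeep(value[key]);
        return acc;
      }, {});
  }
  return value;
}

function specsAreEqual(a, b) {
  return JSON.stringify(sortKeysDeep(a)) === JSON.stringify(sortKeysDeep(b));
}
```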
cli/src/cli/builder/schemaUtils.mjs (1)

506-513: Consider deriving browser actions from schema instead of hardcoding.

The hardcoded list may become stale if new browser-requiring actions are added to the schema. Consider deriving this from schema metadata if available.

```js
// Current hardcoded approach works but requires manual updates
export function getBrowserActions() {
  return ['type', 'find', 'click', 'dragAndDrop', 'screenshot', 'record'];
}
```
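A hedged sketch of what schema-driven derivation could look like; it assumes the step schemas expose a per-action annotation (here a hypothetical `x-requiresBrowser` flag), which may not exist in the current schema:

```js
// Hypothetical sketch: derive browser-requiring actions from schema annotations
// instead of maintaining a hardcoded list. getStepTypes/getStepTypeInfo are the
// utilities referenced elsewhere in this review; x-requiresBrowser is assumed.
export function getBrowserActions() {
  return getStepTypes().filter((type) => {
    const info = getStepTypeInfo(type);
    return Boolean(info?.schema?.['x-requiresBrowser']);
  });
}
```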
cli/src/cli/builder/sourceFileUtils.js (1)

451-467: Hash length check is fragile for MD5 detection.

The check `originalContent.length === 32 && /^[a-f0-9]+$/i.test(originalContent)` assumes MD5 hash format. Consider using a more explicit marker or storing metadata about the content type.

```js
// Current approach works but could match non-hash strings of exactly 32 hex chars
if (originalContent.length === 32 && /^[a-f0-9]+$/i.test(originalContent)) {
```
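One way to make the intent explicit is to store structured metadata rather than a bare string and branch on that; the field names below are illustrative assumptions, not the module's current format:

```js
const crypto = require('crypto');

// Hypothetical sketch: record what the stored content represents instead of
// inferring it from string shape.
function storeOriginalContent(content, { asHash = false } = {}) {
  return asHash
    ? { kind: 'md5', value: crypto.createHash('md5').update(content).digest('hex') }
    : { kind: 'text', value: content };
}

function matchesOriginal(stored, currentText) {
  if (stored.kind === 'md5') {
    return stored.value === crypto.createHash('md5').update(currentText).digest('hex');
  }
  return stored.value === currentText;
}
```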
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (11)
- `cli/package-lock.json` is excluded by `!**/package-lock.json`
- `common/dist/schemas/config_v3.schema.json` is excluded by `!**/dist/**`
- `common/dist/schemas/report_v3.schema.json` is excluded by `!**/dist/**`
- `common/dist/schemas/resolvedTests_v3.schema.json` is excluded by `!**/dist/**`
- `common/dist/schemas/sourceLocation_v3.schema.json` is excluded by `!**/dist/**`
- `common/dist/schemas/spec_v3.schema.json` is excluded by `!**/dist/**`
- `common/dist/schemas/step_v3.schema.json` is excluded by `!**/dist/**`
- `common/dist/schemas/test_v3.schema.json` is excluded by `!**/dist/**`
- `common/package-lock.json` is excluded by `!**/package-lock.json`
- `core/package-lock.json` is excluded by `!**/package-lock.json`
- `resolver/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (48)
- `cli/.github/workflows/auto-dev-release.yml` (1 hunks)
- `cli/.github/workflows/npm-test.yaml` (2 hunks)
- `cli/.github/workflows/update-core.yaml` (1 hunks)
- `cli/dev/dev.config.json` (1 hunks)
- `cli/dev/dev.spec.json` (1 hunks)
- `cli/package.json` (2 hunks)
- `cli/samples/http.spec.yaml` (1 hunks)
- `cli/samples/kitten-search.spec.json` (1 hunks)
- `cli/samples/tests.spec.json` (0 hunks)
- `cli/scripts/bump-sync-version-core.js` (2 hunks)
- `cli/src/cli/App.mjs` (1 hunks)
- `cli/src/cli/ResultsSummary.mjs` (1 hunks)
- `cli/src/cli/TestRunner.mjs` (1 hunks)
- `cli/src/cli/builder/DebugRunner.mjs` (1 hunks)
- `cli/src/cli/builder/FieldEditor.mjs` (1 hunks)
- `cli/src/cli/builder/SpecSelector.mjs` (1 hunks)
- `cli/src/cli/builder/StepEditor.mjs` (1 hunks)
- `cli/src/cli/builder/TestBuilder.mjs` (1 hunks)
- `cli/src/cli/builder/TestEditor.mjs` (1 hunks)
- `cli/src/cli/builder/builderRunner.js` (1 hunks)
- `cli/src/cli/builder/components.mjs` (1 hunks)
- `cli/src/cli/builder/index.js` (1 hunks)
- `cli/src/cli/builder/schemaUtils.mjs` (1 hunks)
- `cli/src/cli/builder/sourceFileUtils.js` (1 hunks)
- `cli/src/cli/runner.js` (1 hunks)
- `cli/src/index.js` (2 hunks)
- `cli/src/utils.js` (6 hunks)
- `cli/test/builder/DebugRunner.test.mjs` (1 hunks)
- `cli/test/builder/SpecSelector.test.mjs` (1 hunks)
- `cli/test/builder/StepEditor.test.mjs` (1 hunks)
- `cli/test/builder/TestBuilder.test.mjs` (1 hunks)
- `cli/test/builder/TestEditor.test.mjs` (1 hunks)
- `cli/test/builder/editor.test.mjs` (1 hunks)
- `cli/test/builder/fixtures.mjs` (1 hunks)
- `cli/test/builder/inlineSourceUpdates.test.mjs` (1 hunks)
- `cli/test/builder/schemaUtils.test.mjs` (1 hunks)
- `cli/test/builder/sourceFileUtils.test.mjs` (1 hunks)
- `cli/test/resolvedTests.test.js` (1 hunks)
- `cli/test/server/index.js` (2 hunks)
- `cli/test/utils.test.js` (7 hunks)
- `common/dev/dev.js` (1 hunks)
- `common/package.json` (2 hunks)
- `common/src/ai.js` (1 hunks)
- `common/src/index.js` (1 hunks)
- `common/src/ollama.js` (1 hunks)
- `common/src/refineStep.js` (1 hunks)
- `common/src/schemas/dereferenceSchemas.js` (1 hunks)
- `common/src/schemas/output_schemas/sourceLocation_v3.schema.json` (1 hunks)
💤 Files with no reviewable changes (1)
- cli/samples/tests.spec.json
🧰 Additional context used
🧬 Code graph analysis (20)
cli/src/cli/builder/StepEditor.mjs (2)
cli/src/cli/builder/schemaUtils.mjs (6)
getStepTypes (15-24), getStepTypeInfo (418-428), getStepTypeVariants (50-63), detectVariantIndex (90-123), getCommonStepProperties (40-43), validateStep (343-349)
cli/src/cli/builder/FieldEditor.mjs (1)
FieldEditor(61-431)
cli/test/builder/StepEditor.test.mjs (2)
cli/src/cli/builder/StepEditor.mjs (1)
StepEditor (128-642)
cli/test/builder/fixtures.mjs (8)
getMockGoToStep (56-58), getMockGoToStep (56-58), getMockClickStep (66-68), getMockClickStep (66-68), getMockFindStep (76-78), getMockFindStep (76-78), getMockTypeStep (86-88), getMockTypeStep (86-88)
cli/test/resolvedTests.test.js (1)
cli/test/server/index.js (3)
path (3-3), server (22-22), server (167-172)
cli/src/cli/builder/DebugRunner.mjs (3)
cli/src/cli/builder/schemaUtils.mjs (8)
require (8-8), require (9-9), step (437-437), stepRequiresBrowser (531-534), i (97-97), i (111-111), actionType (532-532), getStepActionType (520-524)
common/src/ai.js (1)
detectProvider (86-134)
common/src/refineStep.js (1)
refineStep(220-282)
cli/src/cli/TestRunner.mjs (4)
cli/src/cli/ResultsSummary.mjs (1)
h (2-2)
cli/src/cli/App.mjs (1)
h (2-2)
cli/src/cli/runner.js (1)
React (11-15)
cli/src/cli/builder/TestBuilder.mjs (1)
currentTest(394-394)
cli/src/cli/runner.js (3)
cli/src/index.js (4)
config (358-358), results (66-66), results (375-377), error (89-89)
cli/src/cli/App.mjs (1)
App (8-37)
cli/src/cli/ResultsSummary.mjs (1)
results(12-12)
cli/test/builder/SpecSelector.test.mjs (1)
cli/test/builder/editor.test.mjs (3)
require (14-14), __filename (17-17), __dirname (18-18)
cli/src/cli/builder/FieldEditor.mjs (2)
cli/src/cli/builder/schemaUtils.mjs (4)
getFieldVariants (70-82), detectVariantIndex (90-123), validatePattern (383-391), describePattern (398-411)
resolver/src/utils.js (2)
input (365-365), key (250-250)
cli/src/cli/builder/TestEditor.mjs (4)
cli/src/cli/builder/components.mjs (12)
React (6-6), StatusBar (370-409), StatusBar (370-409), items (261-264), ScrollableSelect (56-155), ScrollableSelect (56-155), DescriptiveItem (13-35), DescriptiveItem (13-35), NoIndicator (40-40), NoIndicator (40-40), JsonPreview (325-365), JsonPreview (325-365)
cli/src/cli/builder/schemaUtils.mjs (4)
validateTest (356-362), createDefaultStep (435-479), getCommonStepProperties (40-43), getStepTypes (15-24)
cli/src/cli/builder/FieldEditor.mjs (1)
FieldEditor (61-431)
cli/src/cli/builder/StepEditor.mjs (1)
StepEditor(128-642)
cli/src/index.js (4)
cli/src/cli/runner.js (2)
require (1-1), results (60-62)
cli/src/utils.js (9)
require (2-2), require (3-3), require (6-6), path (4-4), results (351-351), argv (40-69), outputDir (287-287), config (194-194), resolvedTests (80-80)
resolver/src/utils.js (11)
require (7-7), require (8-13), path (6-6), yaml (307-307), spec (1038-1038), outputDir (440-440), i (75-75), i (249-249), i (1330-1330), i (1338-1338), specs (961-961)
cli/src/cli/builder/builderRunner.js (1)
process(15-15)
cli/test/builder/DebugRunner.test.mjs (2)
cli/test/builder/fixtures.mjs (11)
options (180-180), getMockTest (27-34), getMockTest (27-34), getMockGoToStep (56-58), getMockGoToStep (56-58), getMockStepResult (227-233), getMockStepResult (227-233), getMockClickStep (66-68), getMockClickStep (66-68), getMockFindStep (76-78), getMockFindStep (76-78)
cli/src/cli/builder/DebugRunner.mjs (2)
DebugRunner (98-894), runStep (107-107)
cli/src/cli/App.mjs (3)
cli/src/cli/ResultsSummary.mjs (2)
h (2-2), ResultsSummary (5-106)
cli/src/cli/TestRunner.mjs (2)
h (2-2), TestRunner (6-39)
cli/src/cli/runner.js (1)
React(11-15)
cli/src/cli/ResultsSummary.mjs (3)
cli/src/cli/App.mjs (1)
h (2-2)
cli/src/cli/TestRunner.mjs (1)
h (2-2)
cli/src/cli/runner.js (1)
React(11-15)
cli/src/utils.js (2)
cli/src/index.js (5)
require (3-12), require (13-13), require (17-17), require (329-329), require (373-373)
common/src/ai.js (1)
validate(291-291)
cli/src/cli/builder/sourceFileUtils.js (1)
cli/src/cli/builder/TestBuilder.mjs (2)
require (12-12), require (26-36)
common/src/index.js (3)
common/dev/dev.js (2)
require (1-1), require (2-2)
common/src/ai.js (7)
require (1-1), require (2-2), require (3-3), require (4-4), require (5-5), require (6-6), require (9-9)
common/test/refineStep.test.js (3)
require (2-8), require (9-9), require (379-379)
common/src/ollama.js (1)
common/src/ai.js (7)
require (1-1), require (2-2), require (3-3), require (4-4), require (5-5), require (6-6), require (9-9)
cli/test/builder/fixtures.mjs (2)
cli/test/builder/schemaUtils.test.mjs (8)
actionType (502-502), actionType (509-509), actionType (516-516), actionType (527-527), stepTypes (55-55), stepTypes (62-62), stepTypes (75-75), stepTypes (440-440)
cli/src/cli/builder/SpecSelector.mjs (1)
testCount(109-109)
cli/src/cli/builder/components.mjs (2)
cli/src/cli/builder/TestBuilder.mjs (7)
React (6-6), description (450-450), description (1002-1002), items (449-456), items (554-558), items (597-600), value (1096-1096)
cli/src/cli/builder/TestEditor.mjs (4)
React (6-6), items (187-191), items (230-233), value (435-435)
common/dev/dev.js (4)
common/src/ai.js (12)
require (1-1), require (2-2), require (3-3), require (4-4), require (5-5), require (6-6), require (9-9), modelMap (17-38), result (263-263), result (372-372), result (744-744), result (834-834)
common/src/index.js (5)
require (1-1), require (2-2), require (3-3), require (4-4), require (5-5)
common/test/refineStep.test.js (10)
originalStep (239-242), originalStep (266-269), originalStep (285-300), refinedStep (244-252), refinedStep (271-274), refinedStep (302-304), refinedStep (315-322), refinedStep (333-341), refinedStep (357-364), refinedStep (381-387)
common/src/refineStep.js (2)
refinedStep (271-271), refineStep (220-282)
🪛 Biome (2.1.2)
cli/src/cli/builder/sourceFileUtils.js
[error] 120-120: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
[error] 135-135: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
cli/src/cli/builder/TestBuilder.mjs
[error] 1128-1130: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
@copilot Create a new PR to address all comments on this PR.
@hawkeyexl I've opened a new pull request, #74, to work on those changes. Once the pull request is ready, I'll request review from you. |
Actionable comments posted: 7
♻️ Duplicate comments (6)
cli/scripts/bump-sync-version-core.js (1)
22-28: Good addition of CI guard addressing previous review feedback.

The CI environment check properly prevents accidental local execution of this script, which performs destructive git operations. The error messages clearly explain the risk and provide an escape hatch for advanced users.
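For readers outside the diff, the guard being described is roughly of this shape (a sketch only; the script's actual variable names and escape-hatch flag may differ):

```js
// Hypothetical sketch: refuse to run destructive git operations outside CI
// unless the user explicitly opts in.
if (!process.env.CI && process.env.FORCE_VERSION_BUMP !== "true") {
  console.error(
    "This script commits, tags, and pushes version changes and is intended for CI.\n" +
      "Set FORCE_VERSION_BUMP=true to run it locally anyway."
  );
  process.exit(1);
}
```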
common/src/schemas/output_schemas/sourceLocation_v3.schema.json (1)
8-8: Incomplete required array—coordinate fields should be required.

The `required` array only includes `["file", "startLine", "endLine"]`, but both examples populate all coordinate fields (`startColumn`, `endColumn`, `startOffset`, `endOffset`, `originalText`). Since this schema is read-only and populated during test detection, incomplete data could break downstream inline editing features that expect full location information.

Apply this diff to require all coordinate fields:

```diff
-  "required": ["file", "startLine", "endLine"],
+  "required": ["file", "startLine", "endLine", "startColumn", "endColumn", "startOffset", "endOffset", "originalText"],
```

Note: `isInline` and `isAutoDetected` have defaults and can remain optional.

common/src/refineStep.js (1)
236-239: Previous issue addressed: `aiConfig` now reads from `config.ai`.

The dead code issue from the past review has been fixed. The configuration is now properly extracted from `config?.ai || {}`.

common/src/ai.js (1)
40-78: Naming appears consistent: `openAi` is used throughout.

The past review flagged inconsistency between `openAi` and `openai`, but the current code consistently uses `openAi` (camelCase) in this function and throughout the file.

cli/src/cli/runner.js (1)
71-77: Double `setImmediate` correctly replaces `requestAnimationFrame`.

The past review comment about `requestAnimationFrame` not being available in Node.js has been properly addressed. The current implementation uses nested `setImmediate` calls, which is the correct Node.js pattern to yield to the event loop twice and allow React/Ink to flush pending updates before unmounting.
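For reference, the general shape of that pattern is shown below (a generic sketch, not the exact code under review):

```js
// Yield to the event loop twice so queued React/Ink renders can flush
// before the app is unmounted.
function flushBeforeUnmount(unmount) {
  setImmediate(() => {
    setImmediate(() => {
      unmount();
    });
  });
}
```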
cli/src/index.js (1)
360-365: Log function argument order has been corrected. The past review comment flagged that these log calls had incorrect argument order. The current code correctly passes arguments as `log(config, "debug", message)`, matching the expected function signature.
🧹 Nitpick comments (8)
cli/scripts/bump-sync-version-core.js (1)
70-71: Minor: Inconsistent log message casing. "core version" should be "Core version" for consistency with the line above.
  console.log(`Project version: ${projMajor}.${projMinor}.${projPatch}`);
- console.log(`core version: ${coreMajor}.${coreMinor}.${corePatch}`);
+ console.log(`Core version: ${coreMajor}.${coreMinor}.${corePatch}`);
cli/src/cli/builder/TestEditor.mjs (2)
158-177: Previous concern addressed; minor variable shadowing. The approach of passing an empty object to let `StepEditor` handle type selection is valid, addressing the earlier concern about `createDefaultStep()`. However, the callback parameter `newStep` on line 167 shadows the outer `newStep` const from line 161, which reduces readability. Consider renaming for clarity:
  if (view === 'addStep') {
    // Create new step - pass empty object so StepEditor will prompt for type selection
-   const newStep = {};
+   const emptyStep = {};
    return React.createElement(StepEditor, {
-     step: newStep,
+     step: emptyStep,
      stepIndex: (localTest.steps || []).length,
      onChange: () => {},
-     onSave: (newStep) => {
-       const newSteps = [...(localTest.steps || []), newStep];
+     onSave: (savedStep) => {
+       const newSteps = [...(localTest.steps || []), savedStep];
282-294: Consider truncating validation error display. The validation errors are rendered using `JSON.stringify` without any length limit. For tests with many errors, this could overflow the terminal and hurt readability. The `JsonPreview` component already handles truncation; consider applying similar logic here or limiting displayed errors.
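A minimal sketch of the kind of truncation meant here (the constant and helper name are illustrative, not from the codebase):

```js
const MAX_ERROR_CHARS = 500; // assumed display budget

const formatValidationErrors = (errors) => {
  const text = JSON.stringify(errors, null, 2);
  // Cap the rendered error text so it cannot flood the terminal.
  return text.length > MAX_ERROR_CHARS
    ? `${text.slice(0, MAX_ERROR_CHARS)}… (truncated)`
    : text;
};
```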
common/src/ai.js (3)
133-133: Inconsistent return object structure. Other return statements include an `apiKey` property, but this fallback case omits it. While this may not cause runtime errors (property access will return `undefined`), it's inconsistent with the documented return type.
- return { provider: null, model: null };
+ return { provider: null, model: null, apiKey: null };
282-303: Consider caching the Ajv instance for better performance. A new `Ajv` instance is created on every validation call. Since Ajv compilation can be expensive for complex schemas, consider caching the instance at module level.
+ // Module-level Ajv instance for reuse
+ const ajv = new Ajv({
+   allErrors: true,
+   useDefaults: true,
+   coerceTypes: true,
+   strict: false,
+ });
+ addFormats(ajv);
+
  const validateAgainstJsonSchema = (object, schema) => {
-   const ajv = new Ajv({
-     allErrors: true,
-     useDefaults: true,
-     coerceTypes: true,
-     strict: false,
-   });
-   addFormats(ajv);
-
    const validate = ajv.compile(schema);
575-579: JSDoc type annotation is incomplete. The `@param {("openai" | "anthropic")} provider` type doesn't include `"google"`, but the function handles it at lines 600-608.
- * @param {"openai" | "anthropic"} provider - The provider name.
+ * @param {"openai" | "anthropic" | "google"} provider - The provider name.
cli/src/index.js (2)
50-206: Consider splitting this complex function into smaller helpers. The `matchSpecsToSourceFiles` function is 157 lines long and handles 5 different matching strategies. While the logic is sound and systematic, the function would be more maintainable if split into smaller, focused helper functions for each matching priority level. Example structure:
  function matchSpecsToSourceFiles(detectedSpecs, inputPaths) {
    const results = [];
    const usedInputPaths = new Set();
    const context = buildMatchingContext(inputPaths);

    for (const spec of detectedSpecs) {
      const match =
        trySourcePathMatch(spec, context) ||
        tryContentPathMatch(spec, context) ||
        tryBasenameMatch(spec, context, usedInputPaths) ||
        tryContentHashMatch(spec, context, usedInputPaths) ||
        trySingleRemainingMatch(spec, detectedSpecs, results, context, usedInputPaths);

      results.push(formatMatchResult(spec, match, usedInputPaths));
    }

    return results;
  }
This would make each matching strategy easier to test and understand in isolation.
234-271: Consider extracting input path parsing to a helper function. The input path extraction logic (lines 236-271) handles multiple flag formats and fallback cases. While functional, extracting this into a dedicated function would improve readability and testability.
Example:
  function parseEditorInputPaths(rawArgs) {
    const inputPaths = [];
    for (let i = 0; i < rawArgs.length; i++) {
      const arg = rawArgs[i];
      if (arg === '--editor' || arg === '-e') continue;
      if (arg === '--input' || arg === '-i') {
        if (i + 1 < rawArgs.length) {
          addCommaSeparatedPaths(inputPaths, rawArgs[i + 1]);
          i++;
        }
        continue;
      }
      if (arg.startsWith('--input=') || arg.startsWith('-i=')) {
        addCommaSeparatedPaths(inputPaths, arg.split('=')[1]);
        continue;
      }
      if (arg.startsWith('-')) continue;
      const resolved = path.resolve(arg);
      if (fs.existsSync(resolved)) {
        inputPaths.push(resolved);
      }
    }
    return inputPaths;
  }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- cli/.github/workflows/npm-test.yaml (0 hunks)
- cli/scripts/bump-sync-version-core.js (2 hunks)
- cli/src/cli/builder/TestEditor.mjs (1 hunks)
- cli/src/cli/runner.js (1 hunks)
- cli/src/index.js (2 hunks)
- common/src/ai.js (1 hunks)
- common/src/refineStep.js (1 hunks)
- common/src/schemas/output_schemas/sourceLocation_v3.schema.json (1 hunks)
💤 Files with no reviewable changes (1)
- cli/.github/workflows/npm-test.yaml
🧰 Additional context used
🧬 Code graph analysis (3)
cli/src/cli/runner.js (3)
cli/src/index.js (4)
config (358-358), results (66-66), results (375-377), error (89-89)
cli/src/cli/App.mjs (1)
App (8-37)
cli/src/cli/ResultsSummary.mjs (1)
results (12-12)
cli/src/index.js (3)
resolver/src/utils.js (11)
require (7-7), require (8-13), path (6-6), yaml (307-307), spec (1038-1038), outputDir (440-440), i (75-75), i (249-249), i (1330-1330), i (1338-1338), specs (961-961)
cli/src/utils.js (7)
require (2-2), require (3-3), require (6-6), path (4-4), argv (40-69), outputDir (287-287), config (194-194)
cli/src/cli/builder/builderRunner.js (1)
process (15-15)
cli/src/cli/builder/TestEditor.mjs (2)
cli/src/cli/builder/components.mjs (12)
React (6-6), StatusBar (370-409), items (261-264), ScrollableSelect (56-155), DescriptiveItem (13-35), NoIndicator (40-40), JsonPreview (325-365)
cli/src/cli/builder/FieldEditor.mjs (1)
FieldEditor (61-431)
🔇 Additional comments (29)
cli/scripts/bump-sync-version-core.js (5)
31-32: LGTM! The destructive git operations are now properly guarded by the CI check above.
81-85: LGTM! The patch version bump logic is correct, and the logging clearly indicates the action being taken.
88-92: Good defensive validation. Validating the computed version before setting it prevents invalid versions from being committed and tagged.
94-95: LGTM! The `--no-git-tag-version` flag correctly defers tagging to the explicit `git tag` command below, and the version value is validated before use.
98-98: LGTM! The `[skip ci]` tag in the commit message correctly prevents CI loops when pushing version bumps.
common/src/schemas/output_schemas/sourceLocation_v3.schema.json (1)
79-91: Conditional logic correctly addresses previous feedback. The `allOf` conditional properly requires `commentFormat` when `isInline` is `true`, which addresses the previous review suggestion. This ensures inline tests always include serialization format information.
cli/src/cli/builder/TestEditor.mjs (8)
30-46: LGTM - Good defensive initialization and change tracking. The safe fallback for undefined test (line 34), deep clone for original state (line 38), and JSON-based change detection (line 42) are appropriate for this CLI context.
51-68: Good escape key handling with unsaved changes protection. The flow correctly prompts for confirmation when there are unsaved changes and properly resets editing state when navigating back.
70-117: Good error handling for unknown fields. The defensive check at line 74 with a user-friendly warning message and escape path is a solid approach for handling potentially missing field definitions.
119-156: Well-structured step editing with proper immutable updates. The step modification callbacks correctly use spread operators to maintain immutability, and the delete handler properly uses `splice` on a copy of the array.
346-361: Step type detection logic is well-designed. Good approach using `getStepTypes()` and `getCommonStepProperties()` to correctly identify the action type of each step. The display truncation and unknown type fallback are properly handled.
385-417: Good UX: confirmation dialog protects against accidental data loss. The unsaved changes confirmation flow with clear options is a user-friendly approach.
444-494: Menu selection handling is well-structured. The pattern-based parsing for `editMeta:` and `editStep:` prefixes, combined with the switch statement for other actions, provides clean and maintainable routing logic. The intentional fall-through for `none_*` items is documented appropriately.
22-29: Well-structured component with clear prop interface. The component's prop contract (`test`, `testIndex`, `onChange`, `onSave`, `onCancel`, `onDelete`) is clear and the multi-view state machine approach provides good organization for the complex editing workflow.
common/src/refineStep.js (4)
10-36: LGTM! Well-structured system prompt. The system prompt is comprehensive, covering step refinement guidelines and all available action types. This will help the AI model produce more accurate and contextually appropriate refined steps.
49-167: LGTM! Comprehensive prompt construction. The function effectively builds a structured prompt with appropriate content truncation. The markdown formatting with code blocks will help the AI model parse the context correctly.
175-180: LGTM! Simple and effective truncation utility with clear truncation indicator.
290-302: LGTM! The post-processing logic correctly preserves essential metadata from the original step when the AI doesn't include them.
common/src/ai.js (5)
17-38: LGTM! Clean model mapping structure. The model map provides a maintainable way to translate internal identifiers to provider-specific names.
144-173: LGTM! Clean provider factory implementation with proper handling of optional configuration parameters.
183-220: LGTM! Good handling of multiple image data formats with appropriate conversion for provider compatibility.
338-382: Verify handling of circular schema references. The recursive `dereferenceSchema` function doesn't track visited references. Circular references (e.g., `{ "$ref": "#" }` or mutually recursive definitions) would cause infinite recursion and stack overflow. If the step schemas may contain circular references, consider adding a `visited` Set to track resolved paths:
- const dereferenceSchema = (schema, rootSchema) => {
+ const dereferenceSchema = (schema, rootSchema, visited = new Set()) => {
    // ...
    if (schema.$ref) {
      const refPath = schema.$ref;
+     if (visited.has(refPath)) {
+       return {}; // Break circular reference
+     }
+     visited.add(refPath);
      // ...
-     return dereferenceSchema(resolved, rootSchema);
+     return dereferenceSchema(resolved, rootSchema, visited);
    }
453-478: LGTM! Good handling of discriminated unions for Ollama. The approach of merging all anyOf options with optional properties is a reasonable simplification for providers with limited schema support.
cli/src/cli/runner.js (2)
1-15: LGTM! Clean imports and function signature. The dynamic ESM imports are correctly used to load Ink and React into this CommonJS module, and the JSDoc documentation clearly describes the function's purpose and parameters.
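For context, the pattern being praised looks roughly like this (a sketch, not the file's exact code):

```js
// Load ESM-only packages from a CommonJS module via dynamic import().
async function loadInk() {
  const { render } = await import("ink");
  const React = (await import("react")).default;
  return { render, React };
}
```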
79-102: LGTM! Error handling is robust. The error handling properly:
- Normalizes errors to Error objects (line 90)
- Updates the UI to show the error state
- Provides a brief delay for the user to see the error
- Cleans up by unmounting before re-throwing
cli/src/index.js (4)
3-18: LGTM! Imports support new spec matching and editor features. The additional imports (crypto, doc-detective-resolver, js-yaml) are appropriate for the new functionality: content-based file matching, spec detection, and YAML parsing.
20-32: LGTM! MD5 hashing is appropriate for content-based matching. The function correctly uses MD5 for non-cryptographic content comparison. Returning `null` on read errors provides a clean fallback for the matching logic.
212-227: LGTM! Precise flag detection. The `hasEditorFlag` function correctly identifies exact `--editor` or `-e` flags without false positives from similar flag names. It properly handles both standalone flags and assignment formats (`--editor=value`).
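A sketch of the described behavior (illustrative only; the real `hasEditorFlag` may be written differently):

```js
// True only for exact editor flags or their assignment forms.
const hasEditorFlag = (args) =>
  args.some(
    (arg) =>
      arg === "--editor" ||
      arg === "-e" ||
      arg.startsWith("--editor=") ||
      arg.startsWith("-e=")
  );

// hasEditorFlag(["--editors"]) === false; hasEditorFlag(["-e=true"]) === true
```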
367-386: LGTM! Environment-aware test execution is well-structured. The code correctly:
- Checks for resolved tests from the environment (DOC_DETECTIVE_API)
- Conditionally passes `resolvedTests` to `runTests` only when available
- Selects the appropriate reporter based on whether `apiConfig` is present
This design supports both standalone CLI usage and integration with external test management systems.
    const [state, setState] = React.useState({
      phase: 'initializing',
      results: null,
      error: null,
      currentSpec: null,
      currentTest: null,
      progress: {
        specs: { current: 0, total: 0 },
        tests: { current: 0, total: 0 },
        steps: { current: 0, total: 0 },
      },
    });

    // Store the state updater for external use
    React.useEffect(() => {
      updateState = setState;
      resolve();
    }, []);

    return React.createElement(App, {
      config,
      resolvedTests: options.resolvedTests,
      state,
    });
  };

  appInstance = render(React.createElement(AppWrapper));
});

// Wait for the app to mount
await mountPromise;

try {
  // Update to running phase
  updateState((prev) => ({ ...prev, phase: 'running' }));

  // Run tests
  const results = options.resolvedTests
    ? await runTests(config, { resolvedTests: options.resolvedTests })
    : await runTests(config);
Progress tracking fields are initialized but never updated during test execution.
The state structure includes currentSpec, currentTest, and progress fields (lines 27-33), and these are passed to the TestRunner component. However, there's no mechanism to update these fields during test execution (lines 59-62). The runTests function doesn't provide progress callbacks, so the UI will show the initial "0 of 0" state throughout execution rather than real-time progress.
Consider one of these solutions:
- If `runTests` supports progress callbacks, wire them to `updateState`
- Poll test state during execution if the core exposes it
- Update the UI design to remove real-time progress if it's not feasible
Without this, the TestRunner component won't display meaningful progress information during test execution.
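If the first option ever becomes available, the wiring could look roughly like this (hypothetical: `runTests` does not currently accept an `onProgress` option, so this is a sketch of the idea, not working code against the current API):

```js
const results = await runTests(config, {
  resolvedTests: options.resolvedTests,
  // Hypothetical callback shape; not part of the current runTests API.
  onProgress: ({ specs, tests, steps, currentSpec, currentTest }) =>
    updateState((prev) => ({
      ...prev,
      currentSpec,
      currentTest,
      progress: { specs, tests, steps },
    })),
});
```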
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cli/.github/workflows/auto-dev-release.yml (1 hunks)
- cli/.github/workflows/npm-test.yaml (1 hunks)
🔇 Additional comments (1)
cli/.github/workflows/auto-dev-release.yml (1)
93-93: LGTM with documented rationale. The change to `npm install` is properly documented with an inline comment explaining it's intentional to fix CI, and aligns with the maintainer's explicit decision in past review discussions.
      cache-dependency-path: package-lock.json

      - run: npm ci
      - run: npm install // Intentionally changed from 'npm ci' to 'npm install' to fix CI
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if the workflow file exists and examine the relevant lines
if [ -f "cli/.github/workflows/npm-test.yaml" ]; then
echo "=== Workflow file content (lines 35-70) ==="
sed -n '35,70p' "cli/.github/workflows/npm-test.yaml"
else
echo "File not found at cli/.github/workflows/npm-test.yaml"
# Try alternative path
find . -name "npm-test.yaml" -type f
fi
Repository: doc-detective/.github
Length of output: 1087
🏁 Script executed:
#!/bin/bash
# Search git history for changes related to npm install/ci in the workflow
git log --all --oneline --follow -- "**/npm-test.yaml" | head -20
Repository: doc-detective/.github
Length of output: 359
🏁 Script executed:
#!/bin/bash
# Search for any npm install vs npm ci discussions in recent commits
git log --all -p -- "**/npm-test.yaml" | grep -A 5 -B 5 "npm install\|npm ci" | head -80
Repository: doc-detective/.github
Length of output: 2324
The npm command inconsistency requires clarification.
The test job uses npm install (line 43) while the publish-npm job uses npm ci (line 60). The inline comment indicates the change to npm install was intentional to fix CI, but this leaves the publish job unchanged. Either update the publish-npm job to use npm install for consistency, or add a clear comment explaining why npm ci is retained there (e.g., if exact version matching is critical for publishing).
🤖 Prompt for AI Agents
In cli/.github/workflows/npm-test.yaml around line 43, the workflow uses 'npm
install' while the publish-npm job (around line 60) still uses 'npm ci', causing
an inconsistency; either make both steps use the same command or document the
rationale. Update the publish-npm job to use 'npm install' to match the test
job, or if 'npm ci' must remain for deterministic installs during publish, add a
clear inline comment next to line 60 explaining why 'npm ci' is required (e.g.,
for reproducible lockfile-based installs during publishing). Ensure both places
are consistent or clearly documented.
- Replace Text components with Box for better layout in TestEditor.
- Add progress tracking comments in runWithUI function for future enhancements.
- Implement sortKeys function for consistent object key comparison.
- Update generate function to use detected baseURL if provided.
- Change commentFormat in sourceLocation schema from "htmlComment" to "linkReference".
Actionable comments posted: 2
♻️ Duplicate comments (3)
cli/src/cli/runner.js (1)
29-38: Progress tracking fields explained for future use. The comment on lines 29-33 addresses the previous review concern about these fields not being updated during execution. Since `runTests()` currently doesn't provide progress callbacks, the fields remain at their initial state. Optional: If the progress callback support isn't imminent, consider removing these fields now and re-adding them when the functionality is actually implemented (YAGNI principle). This would simplify the current state structure.
common/src/ai.js (2)
672-677: Past comment already addressed: baseURL fallback is implemented. The past review comment flagged that `detected.baseURL` wasn't used as a fallback, but line 676 correctly implements `baseURL: baseURL || detected.baseURL`, ensuring the detected base URL is used when the function argument is undefined.
857-859: Past comment already addressed: lastObject stores unwrapped object. The past review comment suggested storing `validationObject` (unwrapped) instead of `result.object`, and line 859 correctly implements this: `lastObject = validationObject;`.
🧹 Nitpick comments (5)
cli/src/index.js (2)
25-32: Consider logging file read failures. The function silently returns `null` on any error, which could hide issues like permission errors or encoding problems. While the null return is checked by callers, logging failures would improve debuggability.
  function computeFileHash(filePath) {
    try {
      const content = fs.readFileSync(filePath, 'utf-8');
      return crypto.createHash('md5').update(content).digest('hex');
-   } catch {
+   } catch (err) {
+     // File read failed - likely missing, permission denied, or encoding issue
+     log(config, 'debug', `Unable to compute hash for ${filePath}: ${err.message}`);
      return null;
    }
  }
Note: This would require passing `config` to the function.
309-316: Consider collecting all matching errors before exiting. The code exits on the first matching error, preventing the user from working with successfully matched specs. For better UX, collect all errors and either:
- Display all errors together, then exit if any failed
- Allow the builder to run with successfully matched specs and display warnings for failed matches
+ const errors = [];
  for (const matched of matchedSpecs) {
    const { spec: detectedSpec, filePath, matchMethod, error } = matched;
    if (error) {
-     console.error(`\x1b[31mError: ${error}\x1b[0m`);
-     console.error('\x1b[33mPlease ensure each spec file is provided explicitly as input.\x1b[0m');
-     process.exit(1);
+     errors.push(error);
+     continue;
    }
    const ext = filePath ? path.extname(filePath).toLowerCase() : '.json';
    // ... rest of spec processing
  }
+
+ if (errors.length > 0) {
+   errors.forEach(err => console.error(`\x1b[31mError: ${err}\x1b[0m`));
+   console.error('\x1b[33mPlease ensure each spec file is provided explicitly as input.\x1b[0m');
+   process.exit(1);
+ }
  }
cli/src/cli/runner.js (1)
42-45: Consider more idiomatic React pattern for state updates. The current approach exposes the component's `setState` to the outer scope via closure (lines 42-45, used at lines 62, 70, 92). While this works correctly, it's unconventional for React. More idiomatic alternatives:
- Use `useImperativeHandle` with a ref to expose an update method
- Pass state update callbacks as props to the component
- Use an external state management solution
Example with `useImperativeHandle`:
  const stateRef = React.useRef();
  const AppWrapper = () => {
    const [state, setState] = React.useState({ /* ... */ });
    React.useImperativeHandle(stateRef, () => ({ updateState: setState }), []);
    React.useEffect(() => { resolve(); }, []);
    return React.createElement(App, { config, resolvedTests: options.resolvedTests, state });
  };
  appInstance = render(React.createElement(AppWrapper));
  await mountPromise;
  // Later: stateRef.current.updateState(...)
However, given the specialized nature of this CLI runner, the current pattern is acceptable.
common/src/ai.js (2)
533-540: Potential information loss when converting patternProperties. When converting `patternProperties` to `additionalProperties`, the code uses only the first pattern's schema (line 537). If the original schema has multiple patterns with different schemas, the additional patterns are silently discarded. Consider whether this simplification is acceptable for your use cases. If schemas commonly use multiple patterns with different types, you might want to:
- Merge the schemas from all patterns, or
- Log a warning when multiple patterns are present, or
- Document this limitation
Based on learnings about Ollama's limited schema support, this simplification may be necessary.
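If the warning route is taken, a minimal sketch could look like this (assumes the simplification function has the original schema object in scope; names are illustrative):

```js
// Warn when more than one pattern is present, since only the first is converted.
const patterns = Object.keys(schema.patternProperties || {});
if (patterns.length > 1) {
  console.warn(
    `simplifySchemaForOllama: ${patterns.length} patternProperties patterns found; only "${patterns[0]}" is used.`
  );
}
```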
781-797: Review schema transformation order: simplification before wrapping. The schema is simplified for Ollama (lines 781-784) and then potentially wrapped if it has top-level `allOf`/`anyOf`/`oneOf` (lines 787-797). However, `simplifySchemaForOllama` is designed to flatten these constructs. Verify whether wrapping a simplified schema is necessary or if the order should be reversed. Consider checking if wrapping should happen before simplification, or if the simplification already makes wrapping unnecessary:
  // Option 1: Wrap first, then simplify (if simplification can handle wrapped schemas)
  if (!isZodSchema(schema) && (schema.allOf || schema.anyOf || schema.oneOf)) {
    schema = {
      type: "object",
      properties: { object: schema },
      required: ["object"],
      additionalProperties: false,
    };
    wrappedSchema = true;
  }
  if (provider === "ollama" && !isZodSchema(schema)) {
    schema = simplifySchemaForOllama(schema);
  }

  // Option 2: Don't wrap simplified schemas (if simplification eliminates the need)
  if (provider === "ollama" && !isZodSchema(schema)) {
    schema = simplifySchemaForOllama(schema);
    // Skip wrapping check since schema is already simplified
  } else if (!isZodSchema(schema) && (schema.allOf || schema.anyOf || schema.oneOf)) {
    // Only wrap non-Ollama schemas
    schema = { /* wrapper */ };
    wrappedSchema = true;
  }
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cli/src/cli/builder/TestEditor.mjs (1 hunks)
- cli/src/cli/runner.js (1 hunks)
- cli/src/index.js (2 hunks)
- common/src/ai.js (1 hunks)
- common/src/schemas/output_schemas/sourceLocation_v3.schema.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
cli/src/cli/runner.js (3)
cli/src/index.js (4)
config (376-376), results (84-84), results (393-395), error (107-107)
cli/src/cli/App.mjs (1)
App (8-37)
cli/src/cli/ResultsSummary.mjs (1)
results (12-12)
common/src/ai.js (1)
common/src/ollama.js (3)
isOllamaAvailable (15-34), DEFAULT_OLLAMA_BASE_URL (37-37), ensureModelAvailable (269-387)
🔇 Additional comments (21)
common/src/schemas/output_schemas/sourceLocation_v3.schema.json (2)
1-91: Schema definition is well-structured and addresses previous feedback. The minimal `required` array and conditional `commentFormat` requirement via `allOf` are appropriate for the sourceLocation contract. The readOnly semantics and field descriptions are clear.
92-120: Examples are now consistent. Example 2 correctly uses `"commentFormat": "linkReference"` for the auto-detected markdown link, addressing the previous review comment.
cli/src/cli/builder/TestEditor.mjs (3)
158-177: Empty step object correctly delegates type selection to StepEditor. The intentional empty object `{}` allows `StepEditor` to prompt for type selection, properly addressing the previous review feedback.
205-213: Margin spacing correctly implemented. The `Text` component is properly wrapped in a `Box` with `marginBottom`, addressing the previous review comment.
255-263: Margin spacing correctly implemented. The `Text` component is properly wrapped in a `Box` with `marginBottom`, addressing the previous review comment.
cli/src/index.js (5)
39-50: Recursive key sorting correctly implemented. The implementation now recursively sorts all nested object keys, addressing the previous review comment about only normalizing top-level keys.
58-66: LGTM! The canonicalization logic correctly removes volatile metadata and applies deep key sorting for consistent content comparison.
237-244: LGTM! The flag detection logic correctly handles all variations: `--editor`, `-e`, `--editor=<value>`, and `-e=<value>`.
385-405: LGTM! The environment-aware test execution and conditional result reporting logic is well-structured. Using `resolvedTests` when provided by the API environment and falling back to standard detection maintains flexibility.
16-18: Dependencies properly declared. Both `doc-detective-resolver` and `js-yaml` are included in package.json dependencies. `crypto` is a Node.js built-in module.
cli/src/cli/runner.js (1)
76-82: Correct fix for Node.js environment. The double `setImmediate` pattern correctly addresses the previous `requestAnimationFrame` issue (which is browser-only). This allows React/Ink to flush updates to the terminal by yielding to the event loop twice.
common/src/ai.js (10)
40-78: LGTM! Provider detection with Ollama availability check. The cascading provider detection logic correctly checks environment variables and config, with proper async handling for Ollama availability verification. The consistent use of `openAi` (camelCase) aligns with the resolved naming convention.
86-134: LGTM! Robust provider detection with model availability. The provider detection correctly maps model identifiers to providers, ensures Ollama models are available before proceeding, and consistently extracts API keys from environment variables or config.
144-173: LGTM! Clean provider factory implementation. The provider factory correctly instantiates each supported provider with only the necessary options, avoiding undefined values and providing clear error messaging for unsupported providers.
183-245: LGTM! Comprehensive file and message handling. The file-to-image conversion handles multiple input formats (Buffer, Uint8Array, URL, base64) correctly, with appropriate base64 conversion for Ollama compatibility. Message building efficiently handles both text-only and multimodal content.
252-329: LGTM! Robust schema validation with both Zod and JSON Schema support. The validation layer correctly handles both Zod and JSON schemas with appropriate error formatting. AJV configuration (allErrors, useDefaults, coerceTypes, strict: false) provides good balance between strictness and flexibility.
338-382: LGTM! Recursive schema dereferencing with appropriate fallback. The dereferencing logic correctly resolves `$ref` pointers for both JSON Schema and OpenAPI formats, with recursive handling and a safe empty-object fallback for unresolvable references.
581-611: LGTM! Consistent API key extraction. API key extraction correctly checks environment variables first, then falls back to config, with consistent use of `openAi` (camelCase) across all code paths.
17-38: The model names in `modelMap` are valid identifiers for currently available models across all providers (Anthropic, OpenAI, Google). No action required: the mappings are correct and match the actual APIs.
1-8: All specified npm package versions are valid and secure. Verified that all eight dependencies (ai@5.0.109, @ai-sdk/openai@2.0.80, @ai-sdk/anthropic@2.0.54, @ai-sdk/google@3.0.0-beta.72, ollama-ai-provider-v2@1.2.0, zod@4.1.13, ajv@8.17.1, and ajv-formats@3.0.1) are correctly specified in package.json. Security checks confirm no known vulnerabilities in the verified packages.
703-716: Verify handling when messages array contains no user messages. When `findLastIndex` finds no match, it returns -1, so the condition `index === lastUserIndex` will never be true if no user message exists. This means files won't be attached to any message, and the operation silently fails. Either add validation to ensure messages contain at least one user message, or document that files are only attached when a user message is present.
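A sketch of the suggested validation (variable names assumed from the comment, not verified against the file):

```js
// Fail loudly instead of silently skipping file attachments.
const lastUserIndex = messages.findLastIndex((m) => m.role === "user");
if (lastUserIndex === -1 && files.length > 0) {
  throw new Error("Cannot attach files: the prompt contains no user message.");
}
```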
if (!matchedPath && !error) {
  const unmatchedInputs = inputPaths.filter(p => !usedInputPaths.has(p));
  const remainingSpecs = detectedSpecs.filter(s =>
    !results.some(r => r.spec === s && r.filePath)
  );

  // Only use this fallback if there's exactly one unmatched input and one remaining spec
  if (unmatchedInputs.length === 1 && remainingSpecs.length === 1 && remainingSpecs[0] === detectedSpec) {
    matchedPath = path.resolve(unmatchedInputs[0]);
    matchMethod = 'singleRemaining';
  }
}
Fix path comparison in fallback matching logic.
Line 193 filters inputPaths (which may contain relative paths) against usedInputPaths (which contains absolute paths from line 212). This comparison will fail when relative paths are used, causing the fallback logic to incorrectly identify "unmatched" inputs.
// Priority 5: Single remaining unmatched input for single unmatched spec
if (!matchedPath && !error) {
- const unmatchedInputs = inputPaths.filter(p => !usedInputPaths.has(p));
+ const unmatchedInputs = inputPaths.filter(p => !usedInputPaths.has(path.resolve(p)));
const remainingSpecs = detectedSpecs.filter(s =>
!results.some(r => r.spec === s && r.filePath)
  );
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (!matchedPath && !error) {
  const unmatchedInputs = inputPaths.filter(p => !usedInputPaths.has(p));
  const remainingSpecs = detectedSpecs.filter(s =>
    !results.some(r => r.spec === s && r.filePath)
  );
  // Only use this fallback if there's exactly one unmatched input and one remaining spec
  if (unmatchedInputs.length === 1 && remainingSpecs.length === 1 && remainingSpecs[0] === detectedSpec) {
    matchedPath = path.resolve(unmatchedInputs[0]);
    matchMethod = 'singleRemaining';
  }
}

if (!matchedPath && !error) {
  const unmatchedInputs = inputPaths.filter(p => !usedInputPaths.has(path.resolve(p)));
  const remainingSpecs = detectedSpecs.filter(s =>
    !results.some(r => r.spec === s && r.filePath)
  );
  // Only use this fallback if there's exactly one unmatched input and one remaining spec
  if (unmatchedInputs.length === 1 && remainingSpecs.length === 1 && remainingSpecs[0] === detectedSpec) {
    matchedPath = path.resolve(unmatchedInputs[0]);
    matchMethod = 'singleRemaining';
  }
}
🤖 Prompt for AI Agents
In cli/src/index.js around lines 192 to 203, the fallback matching filters
inputPaths (which can be relative) against usedInputPaths (which contains
absolute paths), causing the comparison to fail; fix it by resolving/normalizing
inputPaths to absolute paths before filtering against usedInputPaths and use the
resolved path when assigning matchedPath (keep the original logic but compare
and select using path.resolve or path.normalize so relative inputs match
correctly).
const validationObject = wrappedSchema
  ? result.object.object
  : result.object;
Potential undefined access when unwrapping schema.
If wrappedSchema is true but the AI-generated object doesn't include the expected object property, line 837 will fail with Cannot read property 'object' of undefined. Add a safety check or provide a fallback.
const validationObject = wrappedSchema
- ? result.object.object
+ ? result.object?.object || result.object
  : result.object;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const validationObject = wrappedSchema
  ? result.object.object
  : result.object;

const validationObject = wrappedSchema
  ? result.object?.object || result.object
  : result.object;
🤖 Prompt for AI Agents
In common/src/ai.js around lines 836 to 838, the unwrapping assumes
result.object.object exists when wrappedSchema is true which can throw if
result.object is undefined; add a safety check and fallback so you don't access
.object of undefined — e.g. compute validationObject using optional chaining and
nullish coalescing (use result.object?.object ?? result.object ?? {}), and if
that yields an empty/fallback object, handle it appropriately (log or throw a
clear validation error) instead of letting a TypeError surface.
- Create a new test file for the parsers module.
- Implement tests for detecting file formats including markdown, MDX, reStructuredText, AsciiDoc, DITA, XML, and HTML.
- Add tests for the `getParser` function to ensure correct parser retrieval based on format.
- Write tests for each parsing function: `parseMarkdown`, `parseMdx`, `parseAsciidoc`, `parseDitaXml`, `parseHtml`, and `parseNaive`.
- Validate chunk structure, ensuring properties like content and source location are present.
- Include edge case tests for long documents, special characters, Unicode content, and mixed line endings.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @common/src/ai.js:
- Around line 132-133: The fallback return in common/src/ai.js currently returns
{ provider: null, model: null } and must include the missing apiKey property to
match other code paths; update that return to include apiKey: null so the
returned object shape is consistent with other returns (e.g., the same shape
used elsewhere in the function) to avoid callers breaking on a missing apiKey.
- Around line 42-48: The code dereferences config.integrations.anthropic.apiKey
without ensuring anthropic is an object; update the provider creation and other
occurrences in detectProvider and getApiKey to safely access nested properties
(e.g., use config.integrations?.anthropic?.apiKey or check typeof
config.integrations?.anthropic === 'object' before reading apiKey) and provide a
sensible fallback (undefined) when apiKey is missing so a truthy integrations
flag like { anthropic: true } won't cause errors.
🧹 Nitpick comments (6)
common/src/ai.js (6)
135-142: JSDoc is incomplete: missing "google" provider. The `@param` type for `options.provider` lists `"openai" | "anthropic" | "ollama"` but the function also supports `"google"`. Suggested fix:
  /**
   * Creates a provider instance based on the provider name.
   * @param {Object} options
-  * @param {"openai" | "anthropic" | "ollama"} options.provider - The provider name.
+  * @param {"openai" | "anthropic" | "google" | "ollama"} options.provider - The provider name.
   * @param {string} [options.apiKey] - Optional API key override.
   * @param {string} [options.baseURL] - Optional base URL override.
   * @returns {Function} The provider factory function.
   */
281-302: Performance: Ajv instance created on every validation call. Creating a new `Ajv` instance and calling `addFormats` on every validation is expensive. Consider caching the Ajv instance at module level. Suggested fix:
+ // Module-level cached Ajv instance
+ const ajvInstance = new Ajv({
+   allErrors: true,
+   useDefaults: true,
+   coerceTypes: true,
+   strict: false,
+ });
+ addFormats(ajvInstance);
+
  /**
   * Validates an object against a JSON schema.
   * @param {Object} object - The object to validate.
   * @param {Object} schema - The JSON schema.
   * @returns {{ valid: boolean, errors: string | null, object: Object }} Validation result.
   */
  const validateAgainstJsonSchema = (object, schema) => {
-   const ajv = new Ajv({
-     allErrors: true,
-     useDefaults: true,
-     coerceTypes: true,
-     strict: false,
-   });
-   addFormats(ajv);
-
-   const validate = ajv.compile(schema);
+   const validate = ajvInstance.compile(schema);
    const valid = validate(object);
Note: If schema caching is also needed, consider using `ajvInstance.getSchema()` with schema IDs or a WeakMap cache for compiled validators.
574-579: JSDoc is incomplete: missing "google" provider. The `@param` type for `provider` lists `"openai" | "anthropic"` but the function also handles `"google"`. Suggested fix:
  /**
   * Extracts the API key for a provider from a Doc Detective config object.
   * @param {Object} config - The Doc Detective configuration object.
-  * @param {"openai" | "anthropic"} provider - The provider name.
+  * @param {"openai" | "anthropic" | "google"} provider - The provider name.
   * @returns {string | undefined} The API key if found.
   */
622-622: JSDoc comment inconsistent with actual default. The comment states `default: "anthropic/claude-haiku-4.5"` but `DEFAULT_MODEL` on line 11 is `"ollama/qwen3:4b"`. Suggested fix:
- * @param {string} [options.model] - Model identifier (default: "anthropic/claude-haiku-4.5").
+ * @param {string} [options.model] - Model identifier (default: "ollama/qwen3:4b").
665-669: Error message lists incomplete provider options. The error mentions only "openai" or "anthropic" but the module also supports "google" and "ollama". Suggested fix:
  if (!detected.provider) {
    throw new Error(
-     `Cannot determine provider for model "${model}". Please specify a 'provider' option ("openai" or "anthropic").`
+     `Cannot determine provider for model "${model}". Please ensure a valid model is specified or an API key/Ollama server is available.`
    );
  }
865-877: Fragile error handling based on string matching. Checking `error.message.includes("Schema validation failed after")` is brittle and will break if the message text changes. Consider using a custom error class or error code instead. Suggested fix:
+ class SchemaValidationError extends Error {
+   constructor(message, attempts, lastErrors) {
+     super(message);
+     this.name = 'SchemaValidationError';
+     this.attempts = attempts;
+     this.lastErrors = lastErrors;
+   }
+ }

  // In the retry loop (line 860-864):
  if (attempt === MAX_SCHEMA_VALIDATION_RETRIES) {
-   throw new Error(
-     `Schema validation failed after ${MAX_SCHEMA_VALIDATION_RETRIES} attempts. Last errors: ${validation.errors}`
+   throw new SchemaValidationError(
+     `Schema validation failed after ${MAX_SCHEMA_VALIDATION_RETRIES} attempts. Last errors: ${validation.errors}`,
+     MAX_SCHEMA_VALIDATION_RETRIES,
+     validation.errors
    );
  }

  // In the catch block (lines 865-877):
  } catch (error) {
-   if (
-     error.message.includes("Schema validation failed after") ||
-     attempt === MAX_SCHEMA_VALIDATION_RETRIES
-   ) {
+   if (error instanceof SchemaValidationError || attempt === MAX_SCHEMA_VALIDATION_RETRIES) {
      throw error;
    }
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
resolver/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (13)
- common/src/ai.js
- common/src/index.js
- common/src/ollama.js
- common/test/ai.test.js
- core/test/temp-dom-timeout-test.json
- core/test/temp-screenshot-regression-test.json
- resolver/local-llm/llama.cpp
- resolver/package.json
- resolver/src/analyze.js
- resolver/src/analyze.test.js
- resolver/src/index.js
- resolver/src/parsers.js
- resolver/src/parsers.test.js
🚧 Files skipped from review as they are similar to previous changes (2)
- common/src/ollama.js
- common/src/index.js
🧰 Additional context used
🧬 Code graph analysis (1)
common/src/ai.js (1)
common/src/ollama.js (3)
isOllamaAvailable (15-34), DEFAULT_OLLAMA_BASE_URL (37-37), ensureModelAvailable (269-387)
🔇 Additional comments (6)
common/src/ai.js (6)
1-12: LGTM! Imports are appropriate for the multi-provider AI generation module, and constants are reasonable defaults.
14-37: LGTM! The model mapping structure is clean and easily extensible for adding new providers or models.
174-219: LGTM! The function properly handles different input formats (Buffer, Uint8Array, URL, base64 string) with appropriate conversions for the AI SDK.
221-244: LGTM! Clean implementation that correctly handles both text-only and multimodal content building.
383-399: LGTM! The schema simplification logic is well-documented and handles the complexities of converting JSON Schema features to Ollama-compatible formats appropriately.
885-893: LGTM! Exports match the documented public API surface.
if (process.env.ANTHROPIC_API_KEY || config.integrations?.anthropic) {
  return {
    provider: "anthropic",
    model: "claude-haiku-4-5",
    apiKey:
      process.env.ANTHROPIC_API_KEY || config.integrations.anthropic.apiKey,
  };
Potential null dereference when accessing nested config properties.
If config.integrations?.anthropic is truthy but not an object with an apiKey property (e.g., { anthropic: true }), accessing config.integrations.anthropic.apiKey will fail or return undefined unexpectedly.
Suggested fix
if (process.env.ANTHROPIC_API_KEY || config.integrations?.anthropic) {
return {
provider: "anthropic",
model: "claude-haiku-4-5",
apiKey:
- process.env.ANTHROPIC_API_KEY || config.integrations.anthropic.apiKey,
+ process.env.ANTHROPIC_API_KEY || config.integrations?.anthropic?.apiKey,
};Apply the same pattern to lines 53, 63-64, and similar occurrences in detectProvider and getApiKey.
🤖 Prompt for AI Agents
In @common/src/ai.js around lines 42 - 48, The code dereferences
config.integrations.anthropic.apiKey without ensuring anthropic is an object;
update the provider creation and other occurrences in detectProvider and
getApiKey to safely access nested properties (e.g., use
config.integrations?.anthropic?.apiKey or check typeof
config.integrations?.anthropic === 'object' before reading apiKey) and provide a
sensible fallback (undefined) when apiKey is missing so a truthy integrations
flag like { anthropic: true } won't cause errors.
  return { provider: null, model: null };
};
Inconsistent return shape: missing apiKey property.
Other return statements include apiKey: null, but this fallback return omits it, which could cause issues for callers expecting a consistent shape.
Suggested fix
- return { provider: null, model: null };
+ return { provider: null, model: null, apiKey: null };
📝 Committable suggestion
+ return { provider: null, model: null, apiKey: null };📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| return { provider: null, model: null }; | |
| }; | |
| return { provider: null, model: null, apiKey: null }; | |
| }; |
🤖 Prompt for AI Agents
In @common/src/ai.js around lines 132 - 133, The fallback return in
common/src/ai.js currently returns { provider: null, model: null } and must
include the missing apiKey property to match other code paths; update that
return to include apiKey: null so the returned object shape is consistent with
other returns (e.g., the same shape used elsewhere in the function) to avoid
callers breaking on a missing apiKey.
const dereferenceSchema = (schema, rootSchema) => {
  if (!schema || typeof schema !== "object") {
    return schema;
  }

  // Handle arrays
  if (Array.isArray(schema)) {
    return schema.map((item) => dereferenceSchema(item, rootSchema));
  }

  // Handle $ref
  if (schema.$ref) {
    const refPath = schema.$ref;
    let resolved = null;

    // Parse the reference path
    if (refPath.startsWith("#/")) {
      const pathParts = refPath.slice(2).split("/");
      resolved = rootSchema;
      for (const part of pathParts) {
        resolved = resolved?.[part];
        if (!resolved) break;
      }
    }

    if (resolved) {
      // Recursively dereference the resolved schema
      return dereferenceSchema(resolved, rootSchema);
    }
    // If we can't resolve, return an empty object
    return {};
  }
Potential infinite recursion with circular $ref references.
The function doesn't track visited refs, so circular references (e.g., A → B → A) will cause a stack overflow. Consider adding a visited set to detect cycles.
Suggested fix
-const dereferenceSchema = (schema, rootSchema) => {
+const dereferenceSchema = (schema, rootSchema, visited = new Set()) => {
if (!schema || typeof schema !== "object") {
return schema;
}
// Handle arrays
if (Array.isArray(schema)) {
- return schema.map((item) => dereferenceSchema(item, rootSchema));
+ return schema.map((item) => dereferenceSchema(item, rootSchema, visited));
}
// Handle $ref
if (schema.$ref) {
const refPath = schema.$ref;
+
+ // Detect circular reference
+ if (visited.has(refPath)) {
+ console.warn(`Circular $ref detected: ${refPath}`);
+ return {};
+ }
+ visited.add(refPath);
+
let resolved = null;
// ... rest of resolution logic ...
if (resolved) {
- return dereferenceSchema(resolved, rootSchema);
+ return dereferenceSchema(resolved, rootSchema, visited);
}
return {};
}
// Recursively process all properties
const result = {};
for (const [key, value] of Object.entries(schema)) {
if (typeof value === "object" && value !== null) {
- result[key] = dereferenceSchema(value, rootSchema);
+ result[key] = dereferenceSchema(value, rootSchema, visited);
} else {
result[key] = value;
}
}
return result;
};
- Implement function to check for AI integrations in environment variables and config files.
- Add state management for analysis errors and analyzed specifications.
- Create effect to handle document analysis when the phase changes to 'analyzing'.
- Introduce UI elements for displaying analysis results and errors.
- Update menu options to include document analysis based on AI integration availability.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cli/src/cli/builder/TestBuilder.mjs`:
- Around line 600-607: The render path checks for phase === 'debugRun' and
debugTestIndex but calls setPhase and setDebugTestIndex during render when
testToDebug is missing, which causes React update-in-render warnings; move that
conditional state update into a useEffect that depends on phase and
debugTestIndex (and spec if needed), detect the missing test (testToDebug falsy)
inside the effect and then call setPhase('menu') and setDebugTestIndex(null)
there so the render remains pure; reference the phase variable, debugTestIndex
variable, testToDebug lookup, and the setPhase/setDebugTestIndex setters when
implementing the effect.
- Around line 1368-1378: The switch case for 'debugTest' declares block-scoped
constants (runnableTests, runnableIndex) which can leak into other cases; to
fix, wrap the entire 'debugTest' case body in its own block { ... } so the const
declarations are scoped locally, i.e., replace the case 'debugTest': ... break;
with case 'debugTest': { const runnableTests = tests.filter(...); if (...) {
const runnableIndex = ...; setDebugTestIndex(runnableIndex);
setPhase('debugRun'); } else { setPhase('selectDebugTest'); } break; } ensuring
references to runnableTests, runnableIndex, setDebugTestIndex, setPhase and
tests remain unchanged.
🧹 Nitpick comments (2)
cli/src/cli/builder/TestBuilder.mjs (2)
530-546: Consider renaming callback parameter to avoid shadowing. The `onSave` callback parameter `newTest` (line 538) shadows the outer `newTest` const (line 532). While functionally correct, this can be confusing during maintenance. Suggested fix:
  if (phase === 'addTest') {
    const newTest = createDefaultTest();
    return React.createElement(TestEditor, {
      test: newTest,
      testIndex: (spec.tests || []).length,
      onChange: () => {},
-     onSave: (newTest) => {
-       const newTests = [...(spec.tests || []), newTest];
+     onSave: (savedTest) => {
+       const newTests = [...(spec.tests || []), savedTest];
        setSpec({ ...spec, tests: newTests });
        setPhase('menu');
      },
1124-1128: TODO: Improve save error handling. The error is logged to console but the user receives no feedback; they're simply returned to the menu. Consider adding an error state/phase to display the failure reason to the user.
Would you like me to help implement an error display phase for save failures, or open an issue to track this improvement?
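One possible shape for that improvement (hypothetical state and phase names; none of this exists in the component yet):

```js
try {
  await saveSpec(spec);
  setPhase('menu');
} catch (err) {
  setSaveError(err.message); // assumed new state hook for the failure reason
  setPhase('saveError');     // assumed new phase that renders the message
}
```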
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cli/samples/kitten-search-analyze.md
- cli/src/cli/builder/TestBuilder.mjs
✅ Files skipped from review due to trivial changes (1)
- cli/samples/kitten-search-analyze.md
🧰 Additional context used
🪛 Biome (2.1.2)
cli/src/cli/builder/TestBuilder.mjs
[error] 1373-1375: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🔇 Additional comments (2)
cli/src/cli/builder/TestBuilder.mjs (2)
282-306: Good pattern: useEffect hooks before conditional returns. The placement of all `useEffect` hooks before any conditional `return` statements (lines 284-402) correctly maintains React's hook call order. This is important for avoiding "Rendered more/fewer hooks than expected" errors.
309-355: Well-implemented async effect cleanup. Both the `inlineUpdate` and `analyzing` useEffects properly implement cancellation via a `cancelled` flag and cleanup function. This prevents state updates after unmount or phase changes.
Also applies to: 358-402
if (phase === 'debugRun' && debugTestIndex !== null) {
  const testToDebug = spec.tests?.[debugTestIndex];

  if (!testToDebug) {
    setPhase('menu');
    setDebugTestIndex(null);
    return null;
  }
Avoid state updates during render.
Calling setPhase and setDebugTestIndex directly in the render body (lines 604-605) is a React anti-pattern that can cause "Cannot update a component while rendering" warnings and unpredictable behavior. State updates should be performed in event handlers or effects, not during render.
Proposed fix using useEffect
Add a useEffect to handle this edge case:
+ // Handle invalid debug test index
+ useEffect(() => {
+ if (phase === 'debugRun' && debugTestIndex !== null) {
+ const testToDebug = spec.tests?.[debugTestIndex];
+ if (!testToDebug) {
+ setDebugTestIndex(null);
+ setPhase('menu');
+ }
+ }
+ }, [phase, debugTestIndex, spec.tests]);
+
// Debug run phase
if (phase === 'debugRun' && debugTestIndex !== null) {
const testToDebug = spec.tests?.[debugTestIndex];
- if (!testToDebug) {
- setPhase('menu');
- setDebugTestIndex(null);
+ if (!testToDebug) {
return null;
  }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (phase === 'debugRun' && debugTestIndex !== null) {
  const testToDebug = spec.tests?.[debugTestIndex];
  if (!testToDebug) {
    setPhase('menu');
    setDebugTestIndex(null);
    return null;
  }

// Handle invalid debug test index
useEffect(() => {
  if (phase === 'debugRun' && debugTestIndex !== null) {
    const testToDebug = spec.tests?.[debugTestIndex];
    if (!testToDebug) {
      setDebugTestIndex(null);
      setPhase('menu');
    }
  }
}, [phase, debugTestIndex, spec.tests]);

// Debug run phase
if (phase === 'debugRun' && debugTestIndex !== null) {
  const testToDebug = spec.tests?.[debugTestIndex];
  if (!testToDebug) {
    return null;
  }
🤖 Prompt for AI Agents
In `@cli/src/cli/builder/TestBuilder.mjs` around lines 600 - 607, The render path
checks for phase === 'debugRun' and debugTestIndex but calls setPhase and
setDebugTestIndex during render when testToDebug is missing, which causes React
update-in-render warnings; move that conditional state update into a useEffect
that depends on phase and debugTestIndex (and spec if needed), detect the
missing test (testToDebug falsy) inside the effect and then call
setPhase('menu') and setDebugTestIndex(null) there so the render remains pure;
reference the phase variable, debugTestIndex variable, testToDebug lookup, and
the setPhase/setDebugTestIndex setters when implementing the effect.
case 'debugTest':
  // If only one test with steps, go directly to debug
  const runnableTests = tests.filter((t) => (t.steps || []).length > 0);
  if (runnableTests.length === 1) {
    const runnableIndex = tests.findIndex((t) => (t.steps || []).length > 0);
    setDebugTestIndex(runnableIndex);
    setPhase('debugRun');
  } else {
    setPhase('selectDebugTest');
  }
  break;
Wrap switch case declaration in a block.
The const runnableTests declaration inside the debugTest case (line 1370) can be erroneously accessed by other switch clauses. This is flagged by Biome's noSwitchDeclarations rule.
Proposed fix
case 'debugTest':
+ {
// If only one test with steps, go directly to debug
const runnableTests = tests.filter((t) => (t.steps || []).length > 0);
if (runnableTests.length === 1) {
const runnableIndex = tests.findIndex((t) => (t.steps || []).length > 0);
setDebugTestIndex(runnableIndex);
setPhase('debugRun');
} else {
setPhase('selectDebugTest');
}
break;
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
case 'debugTest':
  // If only one test with steps, go directly to debug
  const runnableTests = tests.filter((t) => (t.steps || []).length > 0);
  if (runnableTests.length === 1) {
    const runnableIndex = tests.findIndex((t) => (t.steps || []).length > 0);
    setDebugTestIndex(runnableIndex);
    setPhase('debugRun');
  } else {
    setPhase('selectDebugTest');
  }
  break;

case 'debugTest':
  {
    // If only one test with steps, go directly to debug
    const runnableTests = tests.filter((t) => (t.steps || []).length > 0);
    if (runnableTests.length === 1) {
      const runnableIndex = tests.findIndex((t) => (t.steps || []).length > 0);
      setDebugTestIndex(runnableIndex);
      setPhase('debugRun');
    } else {
      setPhase('selectDebugTest');
    }
    break;
  }
🧰 Tools
🪛 Biome (2.1.2)
[error] 1373-1375: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🤖 Prompt for AI Agents
In `@cli/src/cli/builder/TestBuilder.mjs` around lines 1368 - 1378, The switch
case for 'debugTest' declares block-scoped constants (runnableTests,
runnableIndex) which can leak into other cases; to fix, wrap the entire
'debugTest' case body in its own block { ... } so the const declarations are
scoped locally, i.e., replace the case 'debugTest': ... break; with case
'debugTest': { const runnableTests = tests.filter(...); if (...) { const
runnableIndex = ...; setDebugTestIndex(runnableIndex); setPhase('debugRun'); }
else { setPhase('selectDebugTest'); } break; } ensuring references to
runnableTests, runnableIndex, setDebugTestIndex, setPhase and tests remain
unchanged.
Introduce support for additional CommonMark syntax variations, improve JSON parsing, and enhance the TestBuilder with unsaved changes tracking. Update dependencies and streamline the build process while refactoring components for better maintainability. Implement AI generation capabilities and integrate Ollama support for improved functionality. Bump version to 3.7.0-preview.0.
Summary by CodeRabbit
New Features
Tests
Chores