From 5914a01870cc36352848214d279639d361fdc234 Mon Sep 17 00:00:00 2001
From: adk-bot
Date: Fri, 6 Feb 2026 19:06:58 +0000
Subject: [PATCH] Update ADK doc according to issue #1256 - 2

---
 docs/evaluate/index.md | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/docs/evaluate/index.md b/docs/evaluate/index.md
index f676ce2b8..8fe34cce2 100644
--- a/docs/evaluate/index.md
+++ b/docs/evaluate/index.md
@@ -382,6 +382,16 @@ generated by an AI model.
 For details on how to set up an eval with user simulation, see
 [User Simulation](./user-sim.md).
 
+### Tool Simulation / Agent Simulator
+
+When evaluating agents, it is often useful to simulate tool outputs rather than calling real APIs. This allows for:
+
+- Deterministic testing
+- Fault injection (errors, latency)
+- Cost savings
+
+The [Agent Simulator](./agent-simulator.md) provides a flexible way to mock tools and inject behaviors.
+
 ## How to run Evaluation with the ADK
 
 As a developer, you can evaluate your agents using the ADK in the following ways:
@@ -507,4 +517,4 @@ Here are the details for each command line argument:
   * For example: `sample_eval_set_file.json:eval_1,eval_2,eval_3`
     `This will only run eval_1, eval_2 and eval_3 from sample_eval_set_file.json`
 * `CONFIG_FILE_PATH`: The path to the config file.
-* `PRINT_DETAILED_RESULTS`: Prints detailed results on the console.
+* `PRINT_DETAILED_RESULTS`: Prints detailed results on the console.
\ No newline at end of file