How to use the verify-samples tool to run, verify, and manage sample definitions in the Agent Framework repository. Use this when adding, updating, or running sample verification.
The verify-samples project (dotnet/eng/verify-samples/) is an automated tool that runs sample projects and verifies their output using deterministic checks and AI-powered verification.
Important: By default, samples must be pre-built before running verify-samples. Build the solution first, or pass --build to build samples during the run:
```shell
cd dotnet
dotnet build agent-framework-dotnet.slnx -f net10.0
```
Then run verify-samples:
```shell
# Run all samples across all categories
dotnet run --project eng/verify-samples -- --log results.log --csv results.csv

# Run a specific category
dotnet run --project eng/verify-samples -- --category 02-agents --log results.log

# Run specific samples by name
dotnet run --project eng/verify-samples -- Agent_Step02_StructuredOutput Agent_Step09_AsFunctionTool

# Control parallelism (default 8)
dotnet run --project eng/verify-samples -- --parallel 8 --log results.log

# Build samples during the run (skips the need for a prior build step).
# Building multiple samples in parallel may cause build conflicts, so use with caution.
dotnet run --project eng/verify-samples -- --build --log results.log

# Combine options
dotnet run --project eng/verify-samples -- --category 03-workflows --parallel 4 --log results.log --csv results.csv --md results.md
```
The tool itself needs:
- `AZURE_OPENAI_ENDPOINT` — for the AI verification agent
- `AZURE_OPENAI_DEPLOYMENT_NAME` — optional, defaults to `gpt-5-mini`

Individual samples require their own env vars (e.g., `AZURE_AI_PROJECT_ENDPOINT`). The tool automatically checks for these and skips samples with missing env vars.
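For example, the verification agent's variables can be exported in the shell before running the tool (the endpoint URL below is a placeholder, not a real resource):

```shell
# Placeholder values for illustration; substitute your own Azure OpenAI resource.
export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com/"
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-5-mini"  # optional; gpt-5-mini is the default
```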
- `--log results.log` — detailed per-sample log with stdout/stderr, AI reasoning, and a summary
- `--csv results.csv` — tabular summary with Sample, ProjectPath, Status, FailedChecks, and Failures columns
- `--md results.md` — Markdown summary with a results table and collapsible failure details (suitable for GitHub PR comments)

Sample definitions are in the `dotnet/eng/verify-samples/` directory:
| Category | Config File | Registered Key |
|---|---|---|
| 01-get-started | GetStartedSamples.cs | 01-get-started |
| 02-agents | AgentsSamples.cs | 02-agents |
| 03-workflows | WorkflowSamples.cs | 03-workflows |
Categories are registered in `VerifyOptions.cs` in the `s_sampleSets` dictionary.
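The registration is a mapping from category key to that category's sample list. A minimal sketch of what the dictionary might look like, assuming each config class exposes its samples via an `All` list as described below (the exact types and member names in `VerifyOptions.cs` may differ):

```csharp
// Illustrative sketch only; consult VerifyOptions.cs for the actual declaration.
private static readonly Dictionary<string, IReadOnlyList<SampleDefinition>> s_sampleSets = new()
{
    ["01-get-started"] = GetStartedSamples.All,
    ["02-agents"] = AgentsSamples.All,
    ["03-workflows"] = WorkflowSamples.All,
};
```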
Each sample is defined as a SampleDefinition in the appropriate config file. Key properties:
```csharp
new SampleDefinition
{
    // Required: Display name for the sample
    Name = "Agent_Step02_StructuredOutput",

    // Required: Relative path from dotnet/ to the sample project directory
    ProjectPath = "samples/02-agents/Agents/Agent_Step02_StructuredOutput",

    // Environment variables the sample requires (the sample throws if missing)
    RequiredEnvironmentVariables = ["AZURE_OPENAI_ENDPOINT"],

    // Environment variables that have defaults but would prompt on the console if unset
    OptionalEnvironmentVariables = ["AZURE_OPENAI_DEPLOYMENT_NAME"],

    // Skip this sample with a reason (for structural issues only)
    SkipReason = null, // or "Requires external service X."

    // Deterministic checks: substrings that must appear in stdout
    MustContain = ["=== Section Header ==="],

    // Substrings that must NOT appear in stdout
    MustNotContain = [],

    // If true, only MustContain checks are used (no AI verification)
    IsDeterministic = false,

    // AI verification: natural-language descriptions of expected output.
    // Each entry describes one aspect to verify independently.
    ExpectedOutputDescription =
    [
        "The output should show structured person information with Name, Age, and Occupation fields.",
        "The output should not contain error messages or stack traces.",
    ],

    // Stdin inputs to feed to the sample (for interactive samples)
    Inputs = ["Y", "Y", "Y"],

    // Delay between stdin inputs in ms (default 2000; increase for LLM calls between inputs)
    InputDelayMs = 3000,
}
```
Check the sample's Program.cs to understand:
- Which environment variables it reads (e.g., via `GetEnvironmentVariable`)
- Whether it expects interactive input (`Console.ReadLine`, `Application.GetInput`)
- Whether it loops until an exit command (e.g., `EXIT` patterns in YAML workflows)

Choose the right verification strategy:
- **Deterministic** (`IsDeterministic = true`): Use `MustContain` for samples with fixed output strings. No AI verification.
- **AI verification**: Use `ExpectedOutputDescription` with semantic descriptions. Write expectations that are flexible enough for non-deterministic LLM output.
- **Hybrid**: Use `MustContain` for fixed markers AND `ExpectedOutputDescription` for LLM-generated content.

Set `SkipReason` only for structural issues (e.g., the sample requires an external service, or runs as a server that never exits).
For interactive samples, provide Inputs:
- Samples using `Application.GetInput(args)` need one initial input
- `Console.ReadLine()` approval loops need `"Y"` inputs
- Samples with an external loop need `"EXIT"` as the last input
- Increase `InputDelayMs` to 3000-8000 ms for samples with LLM calls between inputs

Add the definition to the appropriate config file (e.g., `AgentsSamples.cs`) in the `All` list.
Register new categories (if needed) in the `s_sampleSets` dictionary in `VerifyOptions.cs`.
"The output should not contain error messages or stack traces." as the last entry"The output should say 'The weather in Amsterdam is cloudy with a high of 15°C'""The output should contain weather information about Amsterdam mentioning cloudy weather with a high of 15°C."new SampleDefinition
{
Name = "Agent_With_AzureOpenAIChatCompletion",
ProjectPath = "samples/02-agents/AgentProviders/Agent_With_AzureOpenAIChatCompletion",
RequiredEnvironmentVariables = ["AZURE_OPENAI_ENDPOINT"],
OptionalEnvironmentVariables = ["AZURE_OPENAI_DEPLOYMENT_NAME"],
ExpectedOutputDescription =
[
"The output should contain a joke about a pirate.",
"The output should not contain error messages or stack traces.",
],
},
new SampleDefinition
{
Name = "Workflow_Declarative_GenerateCode",
ProjectPath = "samples/03-workflows/Declarative/GenerateCode",
IsDeterministic = true,
MustContain = ["WORKFLOW: Parsing", "WORKFLOW: Defined"],
ExpectedOutputDescription = ["The output should show a YAML workflow being parsed and C# code being generated from it."],
},
new SampleDefinition
{
Name = "FoundryAgent_Hosted_MCP",
ProjectPath = "samples/02-agents/ModelContextProtocol/FoundryAgent_Hosted_MCP",
RequiredEnvironmentVariables = ["AZURE_AI_PROJECT_ENDPOINT"],
OptionalEnvironmentVariables = ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
Inputs = ["Y", "Y", "Y", "Y", "Y"],
InputDelayMs = 5000,
ExpectedOutputDescription = ["The output should show an agent using the Microsoft Learn MCP tool with approval prompts."],
},
new SampleDefinition
{
Name = "Workflow_Declarative_FunctionTools",
ProjectPath = "samples/03-workflows/Declarative/FunctionTools",
RequiredEnvironmentVariables = ["AZURE_AI_PROJECT_ENDPOINT"],
OptionalEnvironmentVariables = ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
Inputs = ["What are today's specials?", "EXIT"],
InputDelayMs = 8000,
ExpectedOutputDescription = ["The output should show a workflow calling function tools to answer a question about restaurant specials."],
},
new SampleDefinition
{
Name = "Agent_MCP_Server",
ProjectPath = "samples/02-agents/ModelContextProtocol/Agent_MCP_Server",
RequiredEnvironmentVariables = ["AZURE_OPENAI_ENDPOINT"],
OptionalEnvironmentVariables = ["AZURE_OPENAI_DEPLOYMENT_NAME"],
SkipReason = "Runs as an MCP stdio server that does not exit on its own.",
},