How to build, run and verify the .NET sample projects in the Agent Framework repository. Use this when a user wants to verify that the samples still function as expected.
We should only support verifying samples that:
Always report to the user which samples were run and which were not, and why.
Verify each sample to confirm it actually works as intended and that its output matches what is expected. For every sample that is run, produce output that shows the result and explains what was expected, what was actually produced, and, when they differ, why the actual output did not match.
Report the result of each sample in the following format:
On success:
[Sample Name] Succeeded

On failure:
[Sample Name] Failed
Actual Output:
[What the sample produced]
Expected Output:
[What was expected and why the actual output did not match]
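The report format above can be produced with a small helper. This is only a sketch; `report_sample` is a hypothetical name, not something from the repository.

```shell
# Hypothetical helper (the name report_sample is illustrative) that prints a
# verification report in the format above.
report_sample() {
  local name="$1" status="$2" actual="$3" expected="$4"
  if [ "$status" = "ok" ]; then
    echo "[$name] Succeeded"
  else
    echo "[$name] Failed"
    echo "Actual Output:"
    echo "$actual"
    echo "Expected Output:"
    echo "$expected"
  fi
}

# Example (sample name and messages are placeholders):
# report_sample "Agent_Step01" fail "connection refused" "a greeting from the agent"
```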
Most samples read their configuration from environment variables, for example:
var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT") ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME") ?? "gpt-4o-mini";
Before running a sample, set its environment variables: check whether each environment variable used by the sample has a value, then give the user a list of any that still need to be set.
You can show the user how to set the variables, for example:
export AZURE_OPENAI_ENDPOINT="https://my-openai-instance.openai.azure.com/"
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o-mini"
To check whether a variable has a value, use:
echo $AZURE_OPENAI_ENDPOINT
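The per-variable checks can also be scripted. A minimal bash sketch, where `check_vars` is a hypothetical helper and the two variable names are just the examples used above:

```shell
# Sketch of a pre-run check: report which of the given environment variables
# are unset or empty (check_vars is an illustrative name, not a repo script).
check_vars() {
  local missing=""
  for var in "$@"; do
    # ${!var} is bash indirect expansion: the value of the variable named $var
    if [ -z "${!var}" ]; then
      missing="$missing $var"
    fi
  done
  if [ -n "$missing" ]; then
    echo "Please set:$missing"
  fi
}

check_vars AZURE_OPENAI_ENDPOINT AZURE_OPENAI_DEPLOYMENT_NAME
```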
To run a sample, change into its directory and run it:
cd dotnet/samples/<category>/<sample-dir>
dotnet run
For multi-targeted projects (e.g., Durable console apps), specify the framework:
dotnet run --framework net10.0
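The run step and the success/failure check can be combined. A sketch assuming a bash shell; `run_and_report` is a hypothetical wrapper, and the path and sample name are placeholders:

```shell
# Hypothetical wrapper: run a command, capture its combined stdout/stderr, and
# report success or failure based on the exit code (run_and_report is an
# illustrative name, not part of the repository).
run_and_report() {
  local name="$1"; shift
  local output
  if output=$("$@" 2>&1); then
    echo "[$name] Succeeded"
  else
    echo "[$name] Failed"
    echo "Actual Output:"
    echo "$output"
  fi
}

# Typical use (path and sample name are placeholders):
# cd dotnet/samples/<category>/<sample-dir>
# run_and_report "MySample" dotnet run --framework net10.0
```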