Expand MetalLB/OpenShift high-level test cases into a detailed manual test plan with copy-paste YAML manifests and oc/kubectl commands, grounded in Jira, design docs, and metallb-operator, metallb, and frr-k8s code. Use when the user asks for a detailed test plan, step-by-step test instructions, executable test cases, or YAML/commands for manual QA.
Turn each high-level test case into operator-ready manual steps: concrete Kubernetes/OpenShift YAML (as fenced blocks) and oc / kubectl commands the tester runs in order, with expected observations and cleanup where needed.
Inputs:
- a Jira Epic key (e.g., NET-1234), and/or
- KUBECONFIG (path or env) for a dedicated test OpenShift cluster.

If neither the Epic nor approved high-level plan context is available, ask for the Epic key and whether Phase 1 is approved before proceeding (see `.cursor/rules/metallb-qe-lifecycle.mdc`).
**Execute on the cluster (when KUBECONFIG is provided):** run all proposed test cases on the cluster; capture observed results and align `Expected:` lines with reality. Note per-step outcomes for early defect detection.

**Triage failures:** use `.cursor/workspaces/metallb-repo-analysis/` to decide product bug vs. procedure error. Bug: file a Jira issue with project = OCPBUGS, component = Networking / Metal LB, and attach logs/evidence. Procedure error: fix the steps/YAML/commands and re-run on the cluster before publishing/updating the Doc.

**Publish to Polarion:** publish to CNF (default Polarion space CNF) only after the user approves the detailed Google Doc. Use the metallb-polarion-test-publish and metallb-polarion-livedoc-workflow rules. Do not skip full home-page HTML embedding.

**Hold later phases:** do not start Phase 3 (metallb-manual-test-execution) or Phase 4 (metallb-e2e-automation) until the user explicitly approves moving on (and Polarion is done or explicitly deferred).

**Align with high-level coverage:** map each detailed case back to the high-level test case IDs (TC-01, TC-02, …) and their intents.

**Collect Jira and doc context:**
Use the Atlassian MCP tools (searchAtlassian, getJiraIssue, getJiraIssueRemoteIssueLinks, and the Confluence links from the Epic). Fall back to `adapters/jira_adapter.py` only if MCP is unavailable.

**Refresh analysis repos (same as the high-level skill):**
Under `.cursor/workspaces/metallb-repo-analysis/`, for each of metallb-operator, metallb, and frr-k8s: run `git pull`; analyze `api/`, the controllers, validation, and feature gates so the YAML and commands match real CRDs and field names.

**Author executable steps:**
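Before authoring steps, the repo refresh described above can be sketched as a small shell loop (the workspace path and repo names come from this document; `--ff-only` is an assumption to keep the analysis clones clean):

```shell
#!/usr/bin/env bash
# Refresh the analysis clones used to verify CRD and field names.
# Skips any repo that is not cloned yet instead of failing.
refresh_repos() {
  local ws="${1:-.cursor/workspaces/metallb-repo-analysis}"
  local repo
  for repo in metallb-operator metallb frr-k8s; do
    if [ -d "$ws/$repo/.git" ]; then
      git -C "$ws/$repo" pull --ff-only
    else
      echo "skip (not cloned): $repo"
    fi
  done
}

refresh_repos "$@"
```

An optional workspace path can be passed as the first argument; with no argument, the default path from this document is used.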
Number steps as `#### Step 1`, `#### Step 2`, …. In each step use:
- `Manifest (YAML):` (plain label, not bold), followed by a fenced yaml block the tester can save and apply.
- `Run:` (plain label), followed by a fenced bash block using oc or kubectl (apply, get, describe, logs, wait, delete, debug).
- An `Expected:` line (plain label, not bold).

**Namespace, literals, and Google Docs–friendly formatting (mandatory for detailed plans):**
- Use `metallb-system` in every manifest and command unless the Epic explicitly targets a different downstream layout; if so, state that once under Prerequisites and still avoid ALL_CAPS variables (see `.cursor/rules/metallb-qe-lifecycle.mdc`).
- Keep `apiVersion`/`kind`/`metadata` consistent with the analyzed repos.
- Use `oc` for OpenShift, with a one-line note that `kubectl` works where equivalent.
- Keep anything the tester must save under `/tmp` (manifest or script); do not depend on `.env` files.

**Match template.md sections:**
- `# Detailed Test Plan: <Feature> (<JIRA_KEY>)`
- `## JIRA Reference` (ticket key + URL lines per the validator)
- `## Related High-Level Test Plan` (link, or "Generated from Epic …" if none)
- `## Prerequisites and Environment`
- `## Placeholders` (fixed-literals table; see step 6 above)
- `## Detailed Test Cases` with `### TC-NN: …` headings and per-step YAML + `Run:` bash blocks
- `## References`

Provide a minimum of three detailed test cases (TC-01 …), each with at least one yaml fence and one bash/sh fence containing oc or kubectl, per `scripts/validate_detailed_test_plan.py`.
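Under the template above, a single step inside a `### TC-NN` section might look like the following skeleton (illustrative only: the TC title, file name, and addresses are placeholders written in the document's literal-first style, not values from any Epic):

````markdown
### TC-01: Baseline IPAddressPool is accepted

#### Step 1
**Purpose:** Create the baseline pool and confirm the operator reconciles it.
Manifest (YAML):
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cnf20333-baseline-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.0/24
```
Run:
```bash
oc apply -f /tmp/cnf20333-baseline-pool.yaml
oc get ipaddresspool cnf20333-baseline-pool -n metallb-system
```
Expected: the pool is created and listed with the configured address range.
````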
Avoid ALL_CAPS placeholder variables such as METALLB_NS, TEST_POOL_NAME, or SPEAKER_STATE_NAME; use fixed literals instead, e.g. `cnf20333-baseline-pool`, `cnf20333-peer-bfd-missing`, `192.0.2.0/24`, `Configuration`, `State`. When a value must be computed at run time, show the command inside the bash fence, for example:

```bash
NODE=$(oc get pod -n metallb-system -l app.kubernetes.io/component=speaker -o jsonpath='{.items[0].spec.nodeName}')
```

and then reference `speaker-${NODE}`; never embed shell substitutions in yaml blocks. Put instructions in normal sentences above the fence; keep fenced YAML strictly valid and copy-pasteable. Start every step with `**Purpose:**` (required by the validator). Use non-bold labels for `Manifest (YAML):`, `Run:`, and `Expected:` unless the high-level template demands otherwise.

**Placeholders section (required heading, literal-first content):**
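As one example of the literal-first convention, a negative-test manifest matching the `cnf20333-peer-bfd-missing` literal above might look like this (a sketch only: the ASNs, peer address, and the missing-profile name are illustrative assumptions, and the `bfdProfile` deliberately references a profile that does not exist, as the test name implies):

```yaml
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: cnf20333-peer-bfd-missing
  namespace: metallb-system
spec:
  myASN: 64512          # illustrative ASN, not from the Epic
  peerASN: 64512        # illustrative ASN, not from the Epic
  peerAddress: 192.0.2.1
  bfdProfile: cnf20333-missing-profile  # hypothetical name; intentionally absent
```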
Include the `## Placeholders` heading (validator requirement). Format: a **Group title** followed by `-` bullets with backticked values. Example groups: Namespace, Baseline pool, BGP/BFD test objects, Cleanup targets.

**Prerequisites:**
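A quick probe for the required CRDs can be sketched as follows (the CRD names follow upstream MetalLB; the script degrades to a clear message when `oc` is not on PATH instead of failing):

```shell
#!/usr/bin/env bash
# Probe for the MetalLB CRDs the detailed test cases rely on.
check_crds() {
  local crd
  for crd in ipaddresspools.metallb.io bgppeers.metallb.io bfdprofiles.metallb.io; do
    if ! command -v oc >/dev/null 2>&1; then
      echo "oc not found, cannot check: $crd"
    elif oc get crd "$crd" >/dev/null 2>&1; then
      echo "ok: $crd"
    else
      echo "MISSING: $crd"
    fi
  done
}

check_crds
```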
In `## Prerequisites and Environment`, state the cluster type, MetalLB/FRR operator install assumptions, required CRDs, and feature gates.

**Render output with the exact template:**
Follow `template.md` in this folder. Write generated files to `/tmp` only, unless the user explicitly asks for a local file.

**Validate and publish:**
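The validator gate can be sketched as the following minimal checks — an assumption of what `scripts/validate_detailed_test_plan.py` enforces, inferred from this document's description (the real script may check more):

```shell
#!/usr/bin/env bash
# Minimal stand-in for the validator described above: a plan passes if it
# has >=3 "### TC-" sections, at least one yaml fence, at least one
# bash/sh fence, and at least one oc/kubectl command.
validate_plan() {
  local f="$1" fails=0 tcs
  tcs=$(grep -c '^### TC-' "$f")
  [ "$tcs" -ge 3 ] || { echo "FAIL: need >=3 test cases, found $tcs"; fails=1; }
  grep -q "^\`\`\`yaml" "$f" || { echo "FAIL: no yaml fence"; fails=1; }
  grep -Eq "^\`\`\`(bash|sh)" "$f" || { echo "FAIL: no bash/sh fence"; fails=1; }
  grep -Eq '(^| )(oc|kubectl) ' "$f" || { echo "FAIL: no oc/kubectl command"; fails=1; }
  [ "$fails" -eq 0 ] && echo "PASS: $f"
}
```

Usage: `validate_plan /tmp/detailed-plan.md` prints `PASS: …` or one `FAIL:` line per unmet requirement.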
Run `scripts/validate_and_publish_detailed_test_plan.sh "Detailed Test Plan - <JIRA_KEY> - <Feature Name>"`, then use `adapters/google_docs_adapter.py` for final publishing.

**Response to the user:**