Generate and run a self-contained bash verification script for an MTV/Forklift bug or feature. Use this skill when the user asks to "create a verification script", "write a test script", "generate a bash test", "create an e2e test script", or "write a verification bash script" for an MTV/Forklift Jira ticket.
Generate self-contained bash e2e verification scripts for MTV/Forklift bugs and features. Scripts follow a standard pattern: create namespace → create providers → run test steps → verify result → cleanup.
Follow these steps in order. Never skip a step. Always wait for user confirmation before proceeding.
Ask the user for the Jira ticket number (e.g. MTV-4911).
WebFetch url: "https://redhat.atlassian.net/browse/MTV-<number>"
Extract the key details from the ticket: the summary, the description of the bug or feature, and the expected behavior after the fix.
If the ticket is not publicly accessible or the fetch fails, ask the user to paste the relevant description.
Search GitHub for pull requests in kubev2v/forklift that reference this ticket:
WebFetch url: "https://github.com/kubev2v/forklift/pulls?q=MTV-<number>"
For each PR found, fetch its detail page to extract the title, the description, and what the change does:
WebFetch url: "https://github.com/kubev2v/forklift/pull/<pr-number>"
Use the PR information to determine what changed and what observable behavior confirms the fix.
If no PRs are found via the search page, also try:
WebFetch url: "https://api.github.com/search/issues?q=MTV-<number>+repo:kubev2v/forklift+type:pr"
Summarize what you learned from the Jira ticket and PRs before proceeding to Step 2.
Ask the user for any information not found in the ticket. Collect only what is needed:
| Information | When needed |
|---|---|
| Provider type (vsphere / ovirt / openstack / ova / openshift / hyperv / ec2) | Always |
| Source provider URL | Always (except OVA and EC2) |
| Credentials (username / password / token) | Always (except OVA) |
| VM name(s) to migrate | When the test involves a migration plan |
| TLS mode (cacert or insecure-skip-tls) | Always (except OVA) |
| Any custom image or setting to override | When the ticket references a fix image |
Do not ask for information that has a clear default (e.g. namespace name, plan name, provider name — these can be derived from the ticket number).
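These defaults can be derived mechanically from the ticket number; a small sketch (MTV-4911 and the vsphere provider type are illustrative values only):

```shell
# Derive default resource names from the ticket number.
TICKET=4911
NS="mtv-${TICKET}-test"
PROVIDER="vsphere-test"
PLAN="mtv-${TICKET}-plan"
echo "${NS} ${PROVIDER} ${PLAN}"
```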
Inform the user which environment variables they should set:
- vSphere: `GOVC_URL`, `GOVC_USERNAME`, `GOVC_PASSWORD`
- oVirt/RHV: `RHV_URL`, `RHV_USERNAME`, `RHV_PASSWORD`
- OpenStack: `OSP_URL`, `OSP_USERNAME`, `OSP_PASSWORD`, `OSP_DOMAIN_NAME`, `OSP_PROJECT_NAME`, `OSP_REGION_NAME`
- OVA: `OVA_URL`
- OpenShift (remote source): `SOURCE_OCP_URL`, `SOURCE_OCP_TOKEN`
- Hyper-V: `HV_URL`, `HV_USERNAME`, `HV_PASSWORD`, `HV_SMB_URL`
- EC2: `EC2_REGION`, `EC2_ACCESS_KEY_ID`, `EC2_SECRET_ACCESS_KEY`, `EC2_TARGET_AZ`, `EC2_TARGET_REGION`

Naming convention for OCP-to-OCP: The remote OpenShift cluster is the source (where VMs live), and the local cluster (running MTV) is the target. Use the SOURCE_ prefix to make this clear — SOURCE_OCP_URL and SOURCE_OCP_TOKEN refer to the remote source cluster, not the local cluster the script runs on.
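For example, a vSphere run might export something like the following (the endpoint and credentials are placeholder values, not real defaults):

```shell
# Placeholder values; substitute your own vCenter endpoint and
# credentials before running the script.
export GOVC_URL="https://vcenter.example.com"
export GOVC_USERNAME="administrator@vsphere.local"
export GOVC_PASSWORD="changeme"
echo "Source: ${GOVC_URL}"
```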
Write a test plan markdown file named tests/scenarios/test-mtv-<number>.md (relative to
the repo root). Create the tests/scenarios/ directory if it does not exist.
Note: tests/scenarios/ is gitignored — scripts and docs are private per developer
and are not committed to the repository.
The plan must include:
# Test Plan: MTV-<number> — <summary>
## Objective
<What this test verifies, in one paragraph>
## Prerequisites
- oc / kubectl with mtv plugin installed
- MTV installed on the cluster
- Environment variables set: <list>
- <Any other prereqs: VM name, VDDK image, custom controller image, etc.>
## Test Steps
1. <Step description>
2. …
## Pass Criteria
- <Specific observable outcome that confirms the fix/feature works>
## Fail Criteria
- <What indicates the bug is still present or the feature does not work>
## Cleanup
- Namespace `<ns>` deleted
- Providers deleted
- Any settings overrides reverted
Present the plan to the user and ask for review. Wait for explicit approval ("looks good", "approved", etc.) or for requested changes before proceeding.
After plan approval, generate a bash script named tests/scenarios/test-mtv-<number>.sh.
#!/bin/bash
#
# E2E test for <summary> (MTV-<number>).
#
# Prerequisites: oc, oc mtv plugin, <env vars>,
# and a cluster with MTV installed (controller in konveyor-forklift by default).
set -euo pipefail
# --- Constants ---
NS="mtv-<number>-test"
PROVIDER="<type>-test"
PLAN="mtv-<number>-plan"
SKIP_CLEANUP="${SKIP_CLEANUP:-false}"
POLL=15
# <other ticket-specific constants>
# ===================================================================
# Preflight: verify MTV is installed and provider prerequisites
# ===================================================================
echo "============================================================="
echo "PREFLIGHT: Checking MTV installation"
echo "============================================================="
if ! command -v oc &>/dev/null; then
echo "ERROR: 'oc' CLI not found in PATH."
exit 1
fi
if ! oc mtv settings get --setting vddk_image &>/dev/null; then
echo "ERROR: Cannot read MTV settings. Is MTV installed on this cluster?"
exit 1
fi
echo "MTV controller found."
VDDK_IMAGE=$(oc mtv settings get --setting vddk_image 2>/dev/null \
| tail -1 | awk '{print $NF}')
if [[ -n "${VDDK_IMAGE}" && "${VDDK_IMAGE}" != "<none>" && "${VDDK_IMAGE}" != "VALUE" ]]; then
echo "VDDK image configured: ${VDDK_IMAGE}"
else
echo "ERROR: VDDK image not configured. Required for migrations."
echo "Set it with: oc mtv settings set --setting vddk_image --value <image>"
exit 1
fi
echo "Preflight passed."
echo ""
# ===================================================================
# Cleanup (also runs on error via trap)
# ===================================================================
cleanup() {
if [[ "${SKIP_CLEANUP}" == "true" ]]; then
echo "SKIP_CLEANUP=true — preserving resources in namespace '${NS}' for forensic inspection."
return 0
fi
echo "Cleaning up..."
oc mtv delete plan --name "${PLAN}" -n "${NS}" 2>/dev/null || true
oc mtv delete provider --name "${PROVIDER}" -n "${NS}" 2>/dev/null || true
oc mtv delete provider --name host -n "${NS}" 2>/dev/null || true
oc delete namespace "${NS}" --ignore-not-found 2>/dev/null || true
# <revert any settings overrides>
echo "Cleanup done."
}
trap cleanup EXIT
# Run once up-front to clear leftovers from previous runs
cleanup
# ===================================================================
# STEP 1: Create namespace
# ===================================================================
echo "STEP 1: Creating namespace '${NS}'"
oc create namespace "${NS}" --dry-run=client -o yaml | oc apply -f -
# ===================================================================
# STEP 2: Create provider(s)
# ===================================================================
# <use oc mtv create provider with appropriate flags>
# For vSphere (insecure):
# oc mtv create provider --name "${PROVIDER}" --type vsphere \
# --url "${GOVC_URL}/sdk" --username "${GOVC_USERNAME}" \
# --password "${GOVC_PASSWORD}" --provider-insecure-skip-tls -n "${NS}"
#
# For vSphere (with CA cert):
# oc mtv create provider --name "${PROVIDER}" --type vsphere \
# --url "${GOVC_URL}/sdk" --username "${GOVC_USERNAME}" \
# --password "${GOVC_PASSWORD}" \
# --cacert "$(fetch_ca_cert "${GOVC_URL}")" -n "${NS}"
echo "Creating host (local OpenShift) provider"
oc mtv create provider --name host --type openshift -n "${NS}"
echo "Waiting for providers Ready..."
oc wait "provider.forklift.konveyor.io/${PROVIDER}" -n "${NS}" \
--for=condition=Ready --timeout=300s
oc wait "provider.forklift.konveyor.io/host" -n "${NS}" \
--for=condition=Ready --timeout=300s
# ===================================================================
# STEP 3: Create migration plan (if needed for this test)
# ===================================================================
# oc mtv create plan --name "${PLAN}" --source "${PROVIDER}" \
# --vms "${VM}" -n "${NS}"
# oc wait "plan.forklift.konveyor.io/${PLAN}" -n "${NS}" \
# --for=condition=Ready --timeout=300s
# ===================================================================
# STEP <N>: <Ticket-specific test steps>
# ===================================================================
# <Insert the core test logic here — this is ticket-specific>
# ===================================================================
# Summary
# ===================================================================
echo ""
echo "TEST PASSED: <what this confirms>"
exit 0
Include this function when TLS verification is needed (non-insecure providers):
fetch_ca_cert() {
local hostport
hostport=$(echo "$1" | sed -E 's|https?://||; s|/.*||')
if ! echo "${hostport}" | grep -q ':'; then
hostport="${hostport}:443"
fi
openssl s_client -showcerts -connect "${hostport}" </dev/null 2>/dev/null \
| awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/'
}
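The first lines of the function normalize any provider URL to a bare host:port for openssl; the same logic can be seen in isolation (the URL is an illustrative placeholder):

```shell
# Strip the scheme and path, then default the port to 443: the same
# normalization fetch_ca_cert applies to its argument.
url="https://vcenter.example.com/sdk"
hostport=$(echo "${url}" | sed -E 's|https?://||; s|/.*||')
if ! echo "${hostport}" | grep -q ':'; then
  hostport="${hostport}:443"
fi
echo "${hostport}"
```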
After creating a plan and waiting for condition=Ready, always check for critical
conditions before starting the migration:
check_plan_health() {
local plan_name="$1"
echo "Checking plan health..."
local critical
critical=$(oc get plan.forklift.konveyor.io/"${plan_name}" -n "${NS}" \
-o jsonpath='{range .status.conditions[*]}{.type}={.status}={.category}={.message}{"\n"}{end}' 2>/dev/null || echo "")
if echo "${critical}" | grep -q "=True=Critical="; then
echo "ERROR: Plan '${plan_name}' has critical issues:"
echo "${critical}" | grep "=True=Critical=" | while IFS='=' read -r ctype cstatus ccat cmsg; do
echo " ${ctype}: ${cmsg}"
done
return 1
fi
echo "Plan health OK."
return 0
}
Common critical conditions include VMStorageNotMapped (storage mapping missing) and
VMNetworkNotMapped (network mapping missing). Fix these by providing explicit
--storage-pairs or --network-pairs when creating the plan.
By default, oc mtv create plan auto-generates storage and network mappings from
provider inventory. Omit --storage-pairs unless the auto-mapping picks the wrong
target storage class. When you do need to override, use explicit --storage-pairs:
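A hypothetical sketch of the override (the datastore, network, and storage-class names below are placeholders for your environment; check `oc mtv create plan --help` for the exact pair syntax on your plugin version):

```shell
# Hypothetical example; pair values are "source:target" placeholders.
# oc mtv create plan --name "${PLAN}" --source "${PROVIDER}" \
#   --vms "${VM}" \
#   --storage-pairs "datastore1:ocs-storagecluster-ceph-rbd" \
#   --network-pairs "VM Network:pod" \
#   -n "${NS}"
```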
Every generated script must use:
- `set -euo pipefail`
- `trap cleanup EXIT` and call cleanup at the start
- idempotent deletes (`2>/dev/null || true` on all delete commands)
- `oc wait --for=condition=Ready` with explicit timeouts after each resource creation.
When a test expects a resource to NOT be Ready (e.g. a plan that should be
blocked by a validation condition), do not wait for Ready — it will never come.
Instead, write a polling loop that races the expected condition against Ready,
returning whichever appears first. The test then asserts which one won:
wait_for_plan_condition() {
local plan_name="$1"
local target_condition="$2" # e.g. "VMCriticalConcerns"
local elapsed=0
while [[ ${elapsed} -lt ${MAX_WAIT} ]]; do
local target_status ready_status
target_status=$(oc get plan.forklift.konveyor.io/"${plan_name}" -n "${NS}" \
-o jsonpath="{.status.conditions[?(@.type==\"${target_condition}\")].status}" 2>/dev/null || echo "")
ready_status=$(oc get plan.forklift.konveyor.io/"${plan_name}" -n "${NS}" \
-o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null || echo "")
if [[ "${target_status}" == "True" ]]; then echo "${target_condition}"; return 0; fi
if [[ "${ready_status}" == "True" ]]; then echo "Ready"; return 0; fi
sleep "${POLL}"; elapsed=$((elapsed + POLL))
done
echo "TIMEOUT"; return 1
}
# Usage: expect blocked plan
result=$(wait_for_plan_condition "${PLAN}" "VMCriticalConcerns")
[[ "${result}" == *"VMCriticalConcerns"* ]] && echo "PASS" || echo "FAIL"
# Usage: expect ready plan
result=$(wait_for_plan_condition "${PLAN}" "VMCriticalConcerns")
[[ "${result}" == *"Ready"* ]] && echo "PASS" || echo "FAIL"
Print `echo "TEST PASSED/FAILED/INCONCLUSIVE: <reason>"` before each exit.

When a test covers multiple scenarios, do not `exit 1` on the first failure. Instead, record failures in a variable (e.g. `FAILURES+="..."`), clean up the scenario, and continue to the next one. Print a summary at the end showing each scenario's pass/fail status and exit 1 only if any scenario failed. This gives the user full visibility into which scenarios pass and which fail in a single run.
Since the script uses set -euo pipefail, wrap each scenario's risky operations (plan
creation, migration start, wait) in a subshell so errors are caught without aborting:
FAILURES=""
SCENARIO_X_PASS=false
set +e
(
set -euo pipefail
create_plan "${PLAN_X}" --some-flag
oc mtv start plan --name "${PLAN_X}" -n "${NS}"
wait_for_plan "${PLAN_X}"
)
SCENARIO_X_RC=$?
set -e
if [[ ${SCENARIO_X_RC} -ne 0 ]]; then
echo "SCENARIO X FAIL: plan creation or migration failed"
FAILURES="${FAILURES} X: plan creation or migration failed\n"
else
# ... verify PVC names or other assertions ...
fi
cleanup_scenario "${PLAN_X}"
At the end, print a summary table and exit based on whether FAILURES is empty:
echo "RESULTS:"
echo " A: ${SCENARIO_A_PASS}"
echo " B: ${SCENARIO_B_PASS}"
if [[ -z "${FAILURES}" ]]; then
echo "TEST PASSED: All scenarios verified successfully"
exit 0
else
echo "TEST FAILED: One or more scenarios failed:"
echo -e "${FAILURES}"
exit 1
fi
Additional script conventions:
- `STEP N:` echo banners so logs are easy to follow
- `SKIP_CLEANUP=true` to skip cleanup and preserve all resources for forensic inspection
- Echo `oc` commands before executing them, prefixed with `>>>`, so users can follow along, reproduce steps manually, and debug failures. Key commands to echo:
  - provider creation (`oc mtv create provider ...`)
  - inventory queries (`oc mtv get inventory storage ...`)
  - plan creation (`oc mtv create plan ...`) — show the full command with all flags
  - migration start (`oc mtv start plan ...`)
  - resource checks (`oc get pvc -n ...`) — show the full table output, not just names
  - never echo secrets (echo `${VAR_NAME}` instead of the value)

When a test has multiple scenarios (e.g. testing different flag combinations on the same provider), share a single namespace and set of providers across all scenarios:
- Keep the single `trap cleanup EXIT` for final teardown

Between-scenario cleanup helper pattern:
cleanup_scenario() {
local plan_name="$1"
oc mtv delete plan --name "${plan_name}" -n "${NS}" 2>/dev/null || true
oc delete vm "${VM_NAME}" -n "${NS}" --ignore-not-found 2>/dev/null || true
sleep 5
local pvcs
pvcs=$(oc get pvc -n "${NS}" --no-headers -o custom-columns=NAME:.metadata.name 2>/dev/null || true)
for pvc in ${pvcs}; do
oc delete pvc "${pvc}" -n "${NS}" --ignore-not-found 2>/dev/null || true
done
local dvs
dvs=$(oc get dv -n "${NS}" --no-headers -o custom-columns=NAME:.metadata.name 2>/dev/null || true)
for dv in ${dvs}; do
oc delete dv "${dv}" -n "${NS}" --ignore-not-found 2>/dev/null || true
done
sleep 5
}
Present the script to the user and ask for permission to run it. Do not run it automatically.
After the user grants permission, run the script:
bash tests/scenarios/test-mtv-<number>.sh 2>&1 | tee tests/scenarios/test-mtv-<number>.log
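One caveat with the tee pipeline: in a shell without pipefail, `$?` reports tee's exit status, not the script's. A minimal sketch of guarding against that (`false` stands in for a failing test script):

```shell
# With pipefail set, a failing command's status survives the pipe to tee.
set -o pipefail
false 2>&1 | tee /tmp/mtv-test.log >/dev/null
rc=$?
echo "exit=${rc}"
```

Alternatively, in bash `${PIPESTATUS[0]}` gives the first command's status without enabling pipefail.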
After each run, review the log output with the user and fix any issues found in the script or the environment. Repeat until the user is satisfied with the result or declares the test complete.
oc mtv create provider --name "${PROVIDER}" --type vsphere \
--url "${GOVC_URL}/sdk" \
--username "${GOVC_USERNAME}" \
--password "${GOVC_PASSWORD}" \
--provider-insecure-skip-tls \
-n "${NS}"
oc mtv create provider --name "${PROVIDER}" --type vsphere \
--url "${GOVC_URL}/sdk" \
--username "${GOVC_USERNAME}" \
--password "${GOVC_PASSWORD}" \
--cacert "$(fetch_ca_cert "${GOVC_URL}")" \
-n "${NS}"
oc mtv create provider --name "${PROVIDER}" --type ovirt \
--url "${RHV_URL}" \
--username "${RHV_USERNAME}" \
--password "${RHV_PASSWORD}" \
--cacert "$(fetch_ca_cert "${RHV_URL}")" \
-n "${NS}"
oc mtv create provider --name "${PROVIDER}" --type openstack \
--url "${OSP_URL}" \
--username "${OSP_USERNAME}" \
--password "${OSP_PASSWORD}" \
--provider-domain-name "${OSP_DOMAIN_NAME}" \
--provider-project-name "${OSP_PROJECT_NAME}" \
--provider-region-name "${OSP_REGION_NAME}" \
--cacert "$(fetch_ca_cert "${OSP_URL}")" \
-n "${NS}"
oc mtv create provider --name "${PROVIDER}" --type ova \
--url "${OVA_URL}" \
-n "${NS}"
oc mtv create provider --name "${PROVIDER}" --type openshift \
--url "${SOURCE_OCP_URL}" \
--provider-token "${SOURCE_OCP_TOKEN}" \
--provider-insecure-skip-tls \
-n "${NS}"
The remote OpenShift cluster is typically the source (where VMs live). Use the
SOURCE_ prefix for its env vars to distinguish from the local cluster running MTV.
Use with the local OpenShift provider (below) which serves as the target.
oc mtv create provider --name "${PROVIDER}" --type hyperv \
--url "${HV_URL}" \
--username "${HV_USERNAME}" \
--password "${HV_PASSWORD}" \
--smb-url "${HV_SMB_URL}" \
--provider-insecure-skip-tls \
-n "${NS}"
oc mtv create provider --name "${PROVIDER}" --type ec2 \
--ec2-region "${EC2_REGION}" \
--target-access-key-id "${EC2_ACCESS_KEY_ID}" \
--target-secret-access-key "${EC2_SECRET_ACCESS_KEY}" \
--target-az "${EC2_TARGET_AZ}" \
--target-region "${EC2_TARGET_REGION}" \
-n "${NS}"
oc mtv create provider --name host --type openshift -n "${NS}"
The local OpenShift provider auto-detects the current cluster and needs no URL or token. It is typically the target but can serve as the source when the remote OpenShift is the target.