Use this skill when the user wants to configure or generate artifacts for Oracle Kubernetes Engine multi-homed pods with OCI VCN-Native CNI, Multus CNI, ipvlan, and Generic VNIC Attachment (GVA), including discovery-backed customer input collection, node pool JSON, NADs, cloud-init, test pod manifests, and verification steps.
Use this skill for end-to-end OKE multi-homed pod workflows that combine:

- OCI VCN-Native CNI for the default pod network
- Multus CNI for attaching additional pod interfaces
- ipvlan for the secondary interface
- Generic VNIC Attachment (GVA) node pools

This skill supports two modes:

- `existing-gva-node-pool` — use an already provisioned GVA node pool
- `create-gva-node-pool` — create a new GVA node pool

Do not use this skill for generic GVA Application Resource scheduling workflows. For that case, prefer the existing oke-gva-deployer skill.
Read only what you need:

- `references/guide.md` for the technical workflow and example manifests
- `assets/customer-input-template.yaml` for required fields, prompting metadata, and defaults

Use these scripts instead of reconstructing the workflow from scratch:

- `scripts/check_multus.sh`
- `scripts/discover_customer_inputs.py`
- `scripts/render_artifacts.py`
- `scripts/scaffold_workspace.sh`

Confirm the target cluster explicitly. Do not run discovery against an implicit or default cluster.
If the user wants discovery or does not know cluster metadata, run:

```bash
bash scripts/scaffold_workspace.sh
python3 scripts/discover_customer_inputs.py --cluster <cluster-name-or-ocid>
```

This creates `customer-input-template.yaml` in the current working directory if it is missing, then writes `customer-input.discovered.yaml` with discovered cluster metadata and cached subnet/NSG choices.
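For orientation, a hedged sketch of the shape of `customer-input.discovered.yaml`. Only `deployment.mode` and the `interfaces.*` keys are referenced elsewhere in this skill; every other key, and all values, are illustrative assumptions:

```yaml
cluster:
  name: <cluster-name>            # assumed key; populated by discovery
  ocid: <cluster-ocid>            # assumed key; populated by discovery
deployment:
  mode: existing-gva-node-pool    # or create-gva-node-pool
interfaces:
  default_network:
    interface_name: eth0          # verify against the worker node
  secondary_network:
    interface_name: ens340        # illustrative; discover via ip -br addr
```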
Then verify Multus is installed:

```bash
bash scripts/check_multus.sh --context <kubectl-context>
```

If `multus_installed` is false, stop and ask the user whether they want to install Multus before continuing.
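A minimal sketch of the kind of check `scripts/check_multus.sh` performs; the real script's logic may differ. It reads a `kubectl get daemonset` style listing on stdin and reports whether a Multus daemonset appears:

```shell
# Hedged sketch; not the actual check_multus.sh implementation.
multus_installed() {
  # Any line mentioning "multus" counts as an installed Multus daemonset.
  if grep -qi multus; then
    echo "multus_installed=true"
  else
    echo "multus_installed=false"
  fi
}

# Example against a canned listing; a live check would pipe in
# `kubectl --context <kubectl-context> -n kube-system get daemonset`.
printf 'kube-multus-ds   3   3   3\n' | multus_installed
```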
Use the `x_codex_prompting` block in the customer input YAML to ask for missing values, then branch on `deployment.mode`:

- If it is `existing-gva-node-pool`, do not require create-only node pool fields and do not render `nodepool.json`.
- If it is `create-gva-node-pool`, require the node pool creation inputs and render `nodepool.json`.

If discovery was used:

```bash
python3 scripts/render_artifacts.py --input customer-input.discovered.yaml
```
If discovery was not used:

```bash
python3 scripts/render_artifacts.py --input customer-input-template.yaml
```

Add `--strict` for stricter input validation:

```bash
python3 scripts/render_artifacts.py --input <input-yaml> --strict
```
The renderer writes `rendered/` in the current working directory:

- `nodepool.json`
- `cloud-init.sh`
- `oci-vcn-native-network-nad.yaml`
- `ipvlan-network.yaml`
- `sleep-forever-pod.yaml`
- `verification.md`

Before handing off the artifacts, confirm:

- `eth0` is pinned to the intended default interface.
- `net1` is pinned to the intended secondary interface.
- If the cluster's pod networking is not `OCI_VCN_IP_NATIVE`, stop and explain the mismatch.

When interface names are still unknown, prefer `kubectl debug node/...` against a worker in the target node pool.
Primary command:

```bash
kubectl --context <kubectl-context> debug node/<node-name> -it --image=nicolaka/netshoot --profile=sysadmin -- chroot /host ip -br addr
```

Fallback command:

```bash
kubectl --context <kubectl-context> debug node/<node-name> -it --image=nicolaka/netshoot --profile=sysadmin -- chroot /host ifconfig -a
```

Useful follow-up:

```bash
kubectl --context <kubectl-context> debug node/<node-name> -it --image=nicolaka/netshoot --profile=sysadmin -- chroot /host ip route
```
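The `ip -br addr` output can be matched against the candidate subnets with a quick parse. A sketch against hypothetical output (real interface names and CIDRs will differ per cluster):

```shell
# Hypothetical `ip -br addr` output from a GVA worker node.
sample='lo               UNKNOWN        127.0.0.1/8
eth0             UP             10.0.10.5/24
ens340           UP             10.0.20.7/24'

# Print "<interface> <cidr>" pairs (skipping loopback) so each interface
# can be matched against the default and secondary subnet CIDRs.
printf '%s\n' "$sample" | awk '$1 != "lo" && $3 != "" {print $1, $3}'
```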
Use these results to identify:

- the `net1` subnet
- `interfaces.default_network.interface_name` and `interfaces.secondary_network.interface_name`

Run each script at the right time:

- `scripts/scaffold_workspace.sh` when the working directory does not yet have the template files.
- `scripts/check_multus.sh` immediately after the cluster is identified and before generating final multi-home pod artifacts.
- `scripts/discover_customer_inputs.py` when the user has provided an explicit cluster name, context, or OCID and wants you to auto-populate cluster metadata.
- `scripts/render_artifacts.py` once the customer input YAML is complete enough to render outputs.

Keep operational responses short and ordered like this:
1. Status
2. Missing Inputs or Generated Files
3. Commands to Run
4. Validation
5. Risks
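For orientation, a hedged sketch of the kind of ipvlan NAD and test pod the renderer emits; the `master` interface, subnet, IPAM choice, and image here are illustrative assumptions, and the files under `rendered/` are authoritative:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-network
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ipvlan",
      "master": "ens340",
      "mode": "l2",
      "ipam": { "type": "host-local", "subnet": "10.0.20.0/24" }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: sleep-forever
  annotations:
    # Multus attaches the NAD above as the pod's net1 interface;
    # eth0 stays on the OCI VCN-Native default network.
    k8s.v1.cni.cncf.io/networks: ipvlan-network
spec:
  containers:
    - name: sleep
      image: busybox
      command: ["sh", "-c", "while true; do sleep 3600; done"]
```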