visionOS patterns, APIs, and spatial computing conventions. Use when building apps for Apple Vision Pro — provides scene types, RealityKit ECS, ARKit providers, gesture handling, and visionOS 26 updates.
When working with visionOS, follow these patterns and conventions.
WindowGroup → 2D SwiftUI window in shared space
.windowStyle(.volumetric) → bounded 3D container
ImmersiveSpace → full environment control (.mixed / .progressive / .full)
Only one ImmersiveSpace can be open system-wide. Handle .error from openImmersiveSpace.
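A minimal sketch of this scene setup, assuming placeholder scene IDs ("main", "immersive") and a stub ImmersiveView:

```swift
import SwiftUI
import RealityKit

@main
struct SpatialApp: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            ContentView()
        }

        ImmersiveSpace(id: "immersive") {
            ImmersiveView()
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed, .progressive, .full)
    }
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { _ in /* build entities here */ }
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter immersive space") {
            Task {
                // Only one immersive space can be open system-wide,
                // so the request can fail — check the result.
                let result = await openImmersiveSpace(id: "immersive")
                if case .error = result {
                    print("Failed to open immersive space")
                }
            }
        }
    }
}
```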
RealityView { content in
    // make closure — runs once to build the initial content
    if let model = try? await ModelEntity(named: "robot.usdz") {
        content.add(model)
    }
} update: { content in
    // Runs when SwiftUI state changes
}
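A sketch of how the update closure reacts to SwiftUI state — here a hypothetical `scale` property drives the entity (the "robot" name lookup is an assumption):

```swift
import SwiftUI
import RealityKit

struct ScaledModel: View {
    var scale: Float  // SwiftUI state that drives the entity

    var body: some View {
        RealityView { content in
            if let model = try? await ModelEntity(named: "robot.usdz") {
                model.name = "robot"
                content.add(model)
            }
        } update: { content in
            // Re-runs whenever `scale` changes.
            content.entities.first(where: { $0.name == "robot" })?
                .setScale([scale, scale, scale], relativeTo: nil)
        }
    }
}
```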
Gesture prerequisites — InputTargetComponent and collision shapes are required for hit-testing (without them gestures won't reach the entity); HoverEffectComponent adds the system gaze highlight:
entity.components.set(InputTargetComponent())
entity.components.set(HoverEffectComponent())
entity.generateCollisionShapes(recursive: true)
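Wiring a gesture to an entity prepared this way might look like the following sketch (the box mesh and quarter-turn rotation are illustrative choices):

```swift
import SwiftUI
import RealityKit

struct TapToSpin: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(mesh: .generateBox(size: 0.2))
            cube.components.set(InputTargetComponent())
            cube.components.set(HoverEffectComponent())
            cube.generateCollisionShapes(recursive: true)
            content.add(cube)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Spin the tapped entity a quarter turn around its y-axis.
                    value.entity.transform.rotation *= simd_quatf(
                        angle: .pi / 2, axis: [0, 1, 0]
                    )
                }
        )
    }
}
```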
SpatialTapGesture — eye focus + pinch
DragGesture — move entities
MagnifyGesture — scale
RotateGesture3D — rotate

let session = ARKitSession()
try await session.run([
WorldTrackingProvider(),
HandTrackingProvider(),
PlaneDetectionProvider()
])
Providers: WorldTracking, HandTracking, PlaneDetection, SceneReconstruction, ImageTracking, CameraFrame (enterprise/visionOS 26).
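Once the session is running, each provider streams changes as an async sequence. A hedged sketch of consuming hand anchors (tracking only the right hand here is an arbitrary choice):

```swift
import ARKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()

func trackHands() async {
    do {
        try await session.run([handTracking])
        // anchorUpdates is an AsyncSequence of hand-anchor changes.
        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.isTracked, anchor.chirality == .right else { continue }
            // originFromAnchorTransform places the hand in world space.
            _ = anchor.originFromAnchorTransform
        }
    } catch {
        print("ARKit session failed: \(error)")
    }
}
```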
let anchor = WorldAnchor(originFromAnchorTransform: transform)
try await worldTracking.addAnchor(anchor)
// Persists across sessions automatically
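Because world anchors persist, an app typically observes anchor updates at launch to re-attach content. A sketch, where `restoreEntity(for:)` is a hypothetical helper keyed on the anchor's `id`:

```swift
import ARKit

let worldTracking = WorldTrackingProvider()

func observeWorldAnchors() async {
    for await update in worldTracking.anchorUpdates {
        switch update.event {
        case .added:
            // Anchors persisted in a previous session reappear here.
            restoreEntity(for: update.anchor)
        case .updated, .removed:
            break
        }
    }
}

func restoreEntity(for anchor: WorldAnchor) {
    // Hypothetical: look up saved content by anchor.id and re-place it
    // at anchor.originFromAnchorTransform.
}
```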
AmbientAudioComponent — non-directional
SpatialAudioComponent — positional 3D
ChannelAudioComponent — stereo/surround
CoordinateSpace3D — unified coordinate conversion (visionOS 26)
SurfaceAlignment — snap to detected surfaces (visionOS 26)
contentCaptureProtected — DRM content protection (visionOS 26)
For deeper information: rlm_search(query="...", project="visionos-development")
Full expertise: ~/.claude/research/visionos-development/expertise.md