Use Netryx to index street-view panoramas and geolocate any street-level photo to GPS coordinates using a CosPlace + LightGlue computer vision pipeline.
Skill by ara.so — Daily 2026 Skills collection.
Netryx is a locally-hosted geolocation engine that identifies GPS coordinates from any street-level photograph. It indexes Google Street View panoramas into a searchable fingerprint database and uses a three-stage CV pipeline (CosPlace → ALIKED/DISK + LightGlue → RANSAC refinement) to match a query image to a precise location. Sub-50m accuracy. No landmarks required. Runs entirely on local hardware.
```bash
git clone https://github.com/sparkyniner/Netryx-OpenSource-Next-Gen-Street-Level-Geolocation.git
cd Netryx-OpenSource-Next-Gen-Street-Level-Geolocation
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt

# Required: LightGlue (must be installed from GitHub)
pip install git+https://github.com/cvg/LightGlue.git

# Optional: LoFTR dense matcher for Ultra Mode
pip install kornia
```
GPU support is auto-detected (CUDA, Apple MPS, or CPU fallback).
Optional Gemini API key for the AI Coarse geolocation mode:

```bash
export GEMINI_API_KEY="your_key_here"
```
Launch the app:

```bash
python test_super.py
```
macOS blank GUI fix:

```bash
brew install [email protected]
```
The index stores 512-dim CosPlace fingerprints for every crawled panorama in a geographic area.
Via GUI:
Raw embedding chunks are written to `cosplace_parts/` as the crawl progresses.

Index size reference:
| Radius | ~Panoramas | Build Time (M2 Max) | Storage |
|---|---|---|---|
| 0.5 km | ~500 | 30 min | ~60 MB |
| 1 km | ~2,000 | 1–2 hrs | ~250 MB |
| 5 km | ~30,000 | 8–12 hrs | ~3 GB |
| 10 km | ~100,000 | 24–48 hrs | ~7 GB |
Interrupted builds resume automatically on next run.
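As a rough cross-check of the table above, the raw descriptor payload is tiny: one 512-dim float32 fingerprint is only 2 KB. A back-of-envelope estimate (assuming one descriptor per panorama, which understates a real index that may store several headings per panorama) shows the quoted on-disk sizes must be dominated by metadata and chunk files, not the descriptors themselves:

```python
def descriptor_bytes(n_panoramas: int, dims: int = 512, dtype_bytes: int = 4) -> int:
    """Raw size of the stacked descriptor matrix: n * dims * 4 bytes (float32)."""
    return n_panoramas * dims * dtype_bytes

# The 1 km tier (~2,000 panoramas): descriptors alone are ~4 MB,
# far below the ~250 MB quoted in the table above.
print(descriptor_bytes(2_000) / 1e6)  # → 4.096
```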
Auto-build of the compiled index (runs after Create, or manually):

```
cosplace_parts/*.npz → index/cosplace_descriptors.npy
                     → index/metadata.npz
```
Via GUI (the AI Coarse mode requires `GEMINI_API_KEY`):

```
Query Image
  │
  ├─ CosPlace 512-dim descriptor
  ├─ Flipped descriptor (handles mirrored perspectives)
  │
  ▼
Cosine similarity search → radius filter (haversine) → Top 500–1000 candidates
  │  (<1 second)
  ▼
Download panoramas → Rectilinear crops at 3 FOVs (70°, 90°, 110°)
  │
  ├─ ALIKED (CUDA) or DISK (MPS/CPU) keypoint extraction
  ├─ LightGlue deep feature matching
  ├─ RANSAC geometric verification
  │  (2–5 min)
  ▼
Heading refinement: ±45° at 15° steps, top 15 candidates
  │
  ├─ Spatial consensus clustering (50m cells)
  ├─ Confidence scoring (uniqueness ratio)
  │
  ▼
📍 GPS Coordinates + Confidence Score
```
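The final two stages above can be sketched in isolation. Assuming each verified candidate carries a lat/lon and a RANSAC inlier count, spatial consensus bins candidates into roughly 50 m grid cells, and confidence is a uniqueness ratio (the winning cell's support over the runner-up's). The exact cell hashing and weighting Netryx uses are assumptions here:

```python
import numpy as np

def consensus_location(lats, lons, inliers, cell_m=50.0):
    """Vote candidates into ~cell_m grid cells; return the winning cell's
    inlier-weighted centroid and a uniqueness-ratio confidence."""
    lats, lons, inliers = map(np.asarray, (lats, lons, inliers))
    # Degrees per cell: ~111,320 m per degree of latitude; longitude is
    # scaled by cos(latitude) so cells stay roughly square.
    dlat = cell_m / 111_320.0
    dlon = dlat / np.cos(np.radians(lats.mean()))
    cells = np.stack([np.floor(lats / dlat), np.floor(lons / dlon)], axis=1)
    keys, inv = np.unique(cells, axis=0, return_inverse=True)
    votes = np.bincount(inv, weights=inliers)  # inlier support per cell
    order = np.argsort(votes)[::-1]
    best = order[0]
    runner_up = votes[order[1]] if votes.size > 1 else 0.0
    confidence = float(votes[best] / max(runner_up, 1e-9))
    mask = inv == best
    lat = float(np.average(lats[mask], weights=inliers[mask]))
    lon = float(np.average(lons[mask], weights=inliers[mask]))
    return lat, lon, confidence
```

With three well-matched candidates clustered near one spot and one weak outlier, the outlier's cell loses the vote and the returned coordinate is the weighted centroid of the dominant cell.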
Enable the Ultra Mode checkbox for night shots, motion blur, and low-texture scenes. It adds three extra matching passes (the optional LoFTR dense matcher among them, if kornia is installed) and is significantly slower; use it when the standard pipeline returns low confidence.
```
netryx/
├── test_super.py        # Main app: GUI + indexing + search pipeline
├── cosplace_utils.py    # CosPlace model loading + descriptor extraction
├── build_index.py       # Standalone high-performance index builder
├── requirements.txt
├── cosplace_parts/      # Raw .npz embedding chunks (created during indexing)
└── index/
    ├── cosplace_descriptors.npy  # All 512-dim descriptors (stacked)
    └── metadata.npz              # lat, lon, heading, panoid per descriptor
```
```python
from cosplace_utils import load_cosplace_model, extract_descriptor
from PIL import Image
import torch

device = torch.device("cuda" if torch.cuda.is_available() else
                      "mps" if torch.backends.mps.is_available() else "cpu")

model = load_cosplace_model(device=device)
img = Image.open("query_photo.jpg").convert("RGB")
descriptor = extract_descriptor(model, img, device=device)
# descriptor.shape → (512,)
```
```python
import numpy as np

# Load compiled index
descriptors = np.load("index/cosplace_descriptors.npy")  # (N, 512)
meta = np.load("index/metadata.npz")
lats = meta["lats"]          # (N,)
lons = meta["lons"]          # (N,)
headings = meta["headings"]  # (N,)
panoids = meta["panoids"]    # (N,)

# Query descriptor (from extract_descriptor above), L2-normalised
query_vec = descriptor / np.linalg.norm(descriptor)

# Cosine similarity — single matrix multiply
norms = np.linalg.norm(descriptors, axis=1, keepdims=True)
normed = descriptors / (norms + 1e-8)
scores = normed @ query_vec  # (N,)

# Haversine radius filter (vectorised over the whole index)
def haversine_km(lat1, lon1, lat2, lon2):
    R = 6371.0
    dlat = np.radians(lat2 - lat1)
    dlon = np.radians(lon2 - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(np.radians(lat1)) * np.cos(np.radians(lat2)) * np.sin(dlon / 2) ** 2
    return R * 2 * np.arcsin(np.sqrt(a))

center_lat, center_lon = 48.8566, 2.3522  # Paris example
radius_km = 2.0
distances = haversine_km(lats, lons, center_lat, center_lon)
mask = distances <= radius_km
filtered_scores = scores.copy()
filtered_scores[~mask] = -1.0

top_k = 500
top_idx = np.argsort(filtered_scores)[::-1][:top_k]

print("Top match:")
print(f"  lat={lats[top_idx[0]]:.6f}, lon={lons[top_idx[0]]:.6f}")
print(f"  heading={headings[top_idx[0]]}, panoid={panoids[top_idx[0]]}")
print(f"  score={filtered_scores[top_idx[0]]:.4f}")
```
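For city-scale indexes (the 10 km tier already holds ~100,000 descriptors), the full `np.argsort` above does more work than needed. `np.argpartition` selects the top-k in linear time and only sorts those k; a drop-in variant, not part of the Netryx codebase:

```python
import numpy as np

def top_k_indices(scores: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k highest scores, highest first, without a full sort."""
    k = min(k, len(scores))
    part = np.argpartition(scores, -k)[-k:]        # top-k, unordered
    return part[np.argsort(scores[part])[::-1]]    # order just those k descending
```

This is equivalent to `np.argsort(filtered_scores)[::-1][:top_k]` but runs in O(N + k log k) instead of O(N log N), which matters once N reaches the hundreds of thousands.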
```python
import torch
import numpy as np
from PIL import Image
from lightglue import LightGlue, ALIKED, DISK
from lightglue.utils import load_image, rbd

device = torch.device("cuda" if torch.cuda.is_available() else
                      "mps" if torch.backends.mps.is_available() else "cpu")

# Select extractor based on device: ALIKED on CUDA, DISK on MPS/CPU
if device.type == "cuda":
    extractor = ALIKED(max_num_keypoints=1024).eval().to(device)
else:
    extractor = DISK(max_num_keypoints=1024).eval().to(device)
```