11 Virtual Machines · 4 Environments · 136 GB GPU VRAM · 80 Gbps Fabric Capacity

Network Topology

                              INTERNET
                                  │
                            ┌─────┴─────┐
                            │  vulkan   │
                            │  Firewall │
                            │  OPNsense │
                            └─────┬─────┘
                                  │ 20 Gbps LACP
                                  ▼
                    ┌─────────────────────────────┐
                    │       spine-crs309          │
                    │    10G Spine Switch         │
                    │    MikroTik CRS309-1G-8S+   │
                    └──┬────────┬────────┬────────┘
                       │        │        │
              20 Gbps  │   10G  │  10G   │  20 Gbps
                       │        │        │
                       ▼        ▼        ▼
               ┌───────────┐ ┌─────┐ ┌─────┐ ┌─────────────┐
               │  saturn   │ │theia│ │dios-│ │access-crs326│
               │  Proxmox  │ │Womb │ │curi │ │Access Switch│
               │Hypervisor │ │96GB │ │40GB │ │  24+2 ports │
               └───────────┘ └─────┘ └─────┘ └──────┬──────┘
                    │                               │
            ┌───────┴───────┐               ┌───────┴───────┐
            │ Saturn's Moons│               │  Bare Metal   │
            │  (11 VMs)     │               │   Services    │
            └───────────────┘               └───────────────┘
            

This is the foundation. See what runs on it.

The Nimmerverse Sensory Network — architecture, protocols, and the path to Young Nyx.


Fleet Architecture

Four Environments

Complete isolation from experiment to production

We are a lab that reaps its own fruit. Every environment is a complete stack — PostgreSQL, ChromaDB, NATS — isolated and independently deployable.

Environment  Port Block   Purpose                                   Can Break?
dev          30000-39999  Experimental, WIP, new features           Yes
staging      40000-49999  Pre-prod validation, integration testing  Minimal
prod         50000-59999  Live, stable, serving                     Never
training     60000-69999  Active learning, GRPO, model training     Yes

Golden Rule: VM ID = last IP octet. VM 120 → 10.0.20.120. No mental math needed.
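Both conventions fit in a few lines; a minimal sketch, assuming the 10.0.20.0/24 subnet from the example above (function names are illustrative, and the service offsets are inferred from the fleet table):

```python
# Sketch of the fleet's addressing conventions as described above.
# ENV_BLOCKS comes from the environment table; SERVICE_OFFSETS are
# inferred from the fleet table (NATS listens at the block base itself).

ENV_BLOCKS = {"dev": 30000, "staging": 40000, "prod": 50000, "training": 60000}
SERVICE_OFFSETS = {"postgresql": 5432, "chromadb": 5000, "nats": 0}

def service_port(env: str, service: str) -> int:
    """Port = environment block + the service's offset within the block."""
    return ENV_BLOCKS[env] + SERVICE_OFFSETS[service]

def vm_ip(vm_id: int, subnet: str = "10.0.20") -> str:
    """Golden rule: the VM ID is the last octet (subnet is illustrative)."""
    return f"{subnet}.{vm_id}"

print(service_port("dev", "postgresql"))  # 35432
print(vm_ip(120))                         # 10.0.20.120
```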

The Fleet

11 VMs across 4 environments

Service     Dev                Staging                Prod                Training
PostgreSQL  phoebe-dev :35432  phoebe-staging :45432  phoebe-prod :55432  phoebe-training :65432
ChromaDB    iris-dev :35000    iris-staging :45000    iris-prod :55000    iris-training :65000
NATS        nats-dev :30000    nats-staging :40000    nats-prod :50000    —

All PostgreSQL instances run version 17.8. All NATS instances have JetStream enabled. Full fleet monitoring is wired to Prometheus.

Identity & Userspaces

FreeIPA identities, systemd isolation

Every daemon has an identity. No local system users — everything runs as FreeIPA-managed service accounts with consistent UIDs across all hosts.

  • Service accounts: svc-nats-dev, svc-chromadb-prod, etc.
  • Group: nimmerverse-services (GID 20003)
  • Shell: /sbin/nologin — no interactive access
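Provisioning such an account with the FreeIPA CLI might look like the following sketch (the account name is taken from the list above; the name fields are illustrative, and the GID matches the nimmerverse-services group):

```shell
# Hypothetical provisioning fragment: run on an enrolled host with a
# valid admin Kerberos ticket (kinit admin).
ipa user-add svc-nats-dev \
    --first=Service --last=nats-dev \
    --shell=/sbin/nologin \
    --gidnumber=20003   # nimmerverse-services
```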

Bare metal philosophy: On GPU workstations (theia, dioscuri), workloads run in systemd userspaces. Each user owns their services:

User           Host      Services
nyx-cognitive  theia     vLLM inference
nyx-training   theia     GRPO, LoRA training
nyx-organs     dioscuri  STT, TTS, Vision
nyx-nervous    dioscuri  Math cells, sensor cells

Unix permissions ARE the RBAC. A compromised cell can't touch vLLM — different user, no access.
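The userspace pattern can be sketched with a systemd user unit; everything here (unit name, paths, model location) is illustrative:

```ini
# ~/.config/systemd/user/vllm.service for the nyx-cognitive user (sketch).
# Enable lingering first so the unit survives logout:
#   loginctl enable-linger nyx-cognitive
[Unit]
Description=vLLM inference server (runs only as nyx-cognitive)

[Service]
ExecStart=/home/nyx-cognitive/.venv/bin/vllm serve /models/current
Restart=on-failure

[Install]
WantedBy=default.target
```

Because the unit runs in the user's own systemd instance, its files, sockets, and processes are owned by that user alone — the isolation described above falls out of ordinary Unix semantics.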

"We are a lab. We always have production running AND something developing. The streams must never cross accidentally."

— Development Conventions

Network Fabric

Spine-Leaf Architecture

80 Gbps total fabric capacity

Professional-grade spine-leaf topology rebuilt December 2025. Every critical path is redundant, and every critical uplink is bonded.

  • Spine: MikroTik CRS309-1G-8S+ — 8x 10G SFP+ ports
  • Access: MikroTik CRS326-24G-2S+RM — 24x 1G + 2x 10G SFP+
  • Bonding: LACP 802.3ad on all critical uplinks
  • Hardware VLAN filtering: Offloaded to switch silicon
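On the MikroTik side, an 802.3ad bond is a single RouterOS command; a sketch with illustrative port and bond names:

```
# RouterOS fragment (illustrative names): bond two SFP+ ports with LACP
/interface bonding add name=bond-saturn mode=802.3ad \
    slaves=sfp-sfpplus1,sfp-sfpplus2 transmit-hash-policy=layer-3-and-4
```

The layer-3-and-4 hash policy spreads flows across member links by IP and port, which is what lets two 10G members behave as a 20 Gbps uplink for many parallel flows.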

Firewall & Security

OPNsense on dedicated hardware

Vulkan guards the perimeter. Named after Vulcan, the Roman god of fire — fitting for a firewall.

  • Platform: HP Z620 workstation (repurposed enterprise hardware)
  • OS: OPNsense (FreeBSD-based, open source)
  • Uplink: 20 Gbps LACP bond to spine
  • Zones: 7 VLANs with full inter-zone firewall rules

Network Segmentation

7 isolated security zones

Zone        Purpose
Management  Infrastructure devices, IPMI, switches
LAN         User workstations, development machines
Data        Storage, databases, Git repositories
Kubernetes  Container orchestration, AI workloads
Lab         Testing, experiments, GPU management
Wireless    WiFi devices, IoT
DMZ         Public-facing services

Compute Infrastructure

The Womb — theia

Primary AI training workstation

Named for the Greek Titaness of sight and shining light. Also the protoplanet that collided with Earth to form the Moon — a cosmic womb event.

Component  Specification
Platform   Lenovo ThinkStation P8
CPU        AMD Threadripper PRO 7955WX (32c/64t)
RAM        128 GB ECC DDR5 (8-channel)
GPU        NVIDIA RTX PRO 6000 Blackwell — 96 GB VRAM
Network    10 GbE dedicated to K8s workloads
Role       Kubernetes GPU worker, primary training

The Divine Twins — dioscuri

Inference and parallel workloads

Named for Castor and Pollux, the divine twins who became the Gemini constellation. Two souls in one body — two GPUs in one machine.

Component  Specification
Platform   Lenovo ThinkStation P8
CPU        AMD Threadripper PRO 7955WX (32c/64t)
RAM        128 GB ECC DDR5 (8-channel)
GPU        2x NVIDIA RTX 4000 Ada — 40 GB VRAM total
Network    10 GbE dedicated to K8s workloads
Role       Kubernetes GPU worker, inference cluster

Saturn — Hypervisor

Proxmox VE with 11 virtual machines

The gas giant at the center of the system. Its moons orbit in service of the greater whole — now across four complete environments.

  • Platform: Custom workstation, AMD Ryzen 9 3900X
  • RAM: 128 GB
  • Hypervisor: Proxmox VE
  • Network: 20 Gbps LACP bond to spine
  • VMs: 11 total — K8s control plane, 4x PostgreSQL, 4x ChromaDB, 3x NATS, monitoring, identity

Kubernetes Cluster

Container orchestration with GPU scheduling

  • Control Plane: Saturn VM (kubeadm, 1 master)
  • Workers: theia + dioscuri (bare metal GPU nodes)
  • CNI: Flannel
  • Load Balancer: MetalLB
  • Ingress: Traefik
  • GPU Plugin: NVIDIA device plugin for K8s
  • Monitoring: Prometheus + DCGM exporter
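With the NVIDIA device plugin installed, a workload claims a GPU through the nvidia.com/gpu extended resource. A minimal pod sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test            # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: theia # pin to the training node
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1       # scheduled via the NVIDIA device plugin
  restartPolicy: Never
```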

Saturn's Moons — Services

Infrastructure Services

The foundation that supports the fleet

Moon        Service               Purpose
k8s-master  Kubernetes            Container orchestration control plane
tethys      Prometheus + Grafana  Monitoring — all fleet targets wired
athena      FreeIPA               Identity management — all service accounts
dione       Gitea                 Self-hosted Git — code sovereignty
rhea        Caddy                 Reverse proxy — TLS termination

Fleet VMs (phoebe-*, iris-*, nats-*) are detailed in the Fleet Architecture section above.

Bare Metal Services

Dedicated hardware for specific roles

Host             Hardware             Role
remus & romulus  Raspberry Pi (pair)  DNS twins — Technitium, redundant resolution
ceres            Raspberry Pi         Vaultwarden — password management
bennu            ThinkCentre          Downloads, utilities
chronos          GPS receiver         Stratum-1 NTP — time truth from satellites
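A stratum-1 host like chronos typically disciplines chrony with the GPS receiver's NMEA time plus its PPS pulse; a sketch of the relevant /etc/chrony.conf fragment (device path and offsets are illustrative):

```
# /etc/chrony.conf fragment (sketch): NMEA time via gpsd's shared-memory
# segment, disciplined by the PPS pulse for high-precision timekeeping.
refclock SHM 0 refid NMEA offset 0.2
refclock PPS /dev/pps0 refid PPS lock NMEA
allow 10.0.0.0/8   # serve time to the fleet
```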

"Sovereignty through ownership. The backbone awaits its neural tissue."

— Infrastructure Philosophy