Site Reliability Engineer (SRE) - AI Infrastructure
- $300,000 gross per year
- San Francisco, California, United States
- Permanent
- Artificial Intelligence
- AI Network
- AI Software
Are you looking for an exciting new opportunity?
Join a stealth-mode hyperscale data center startup building a next-generation AI and cloud platform designed for startups and advanced research, powered by thousands of H100, H200, and B200 GPUs available on demand. The platform supports everything from rapid experimentation to full-scale model training and inference, with flexible orchestration via Slurm, Kubernetes, or direct SSH access.
This is a rare opportunity to work at the intersection of hyperscale infrastructure and AI, shaping the operational backbone of one of the largest privately deployed GPU clusters. If you want to build and operate infrastructure for frontier AI workloads, automate systems at petascale, and be part of a founding engineering team, this is the place to do it.
Responsibilities:
- Design, deploy, and maintain large-scale GPU clusters (H100/H200/B200) for training and inference workloads.
- Build automation pipelines for provisioning, scaling, and monitoring compute resources across Slurm and Kubernetes environments.
- Develop observability, alerting, and auto-healing systems for high-availability GPU workloads.
- Collaborate with ML, networking, and platform teams to optimize resource scheduling, GPU utilization, and data flow.
- Implement infrastructure-as-code, CI/CD pipelines, and reliability standards across thousands of nodes.
- Diagnose performance bottlenecks and drive continuous improvements in reliability, latency, and throughput.
Skills / Must Have:
- 7+ years of experience in SRE, DevOps, or Infrastructure Engineering roles supporting large-scale compute environments.
- Strong hands-on experience with Kubernetes and Slurm for cluster orchestration and workload management.
- Deep knowledge of Linux systems, networking, and GPU infrastructure (NVIDIA H100/H200/B200 preferred).
- Proficiency in Python, Go, or Bash for automation, tooling, and performance tuning.
- Experience with observability stacks (Prometheus, Grafana, Loki) and incident response frameworks.
- Familiarity with high-performance computing (HPC) or AI/ML training infrastructure at scale.
- Background in reliability engineering, distributed systems, or hardware acceleration environments is a strong plus.
Benefits:
- Equity