Cerebras builds the largest chips ever made for generative AI.
Optimize deployment and ensure the reliability, scalability, security, and observability of distributed infrastructure.
Drive adoption of the AI inference API through engineering and go-to-market strategies.
Design, deploy, and operate a network observability platform for real-time visibility.
Automate configuration, upgrades, monitoring, and failure handling for large Cerebras clusters.
Deploy and manage clusters in distributed environments, troubleshoot networking issues, and automate operational tasks.
Build data pipelines, analyze data, develop models, and create visualizations.
Lead corporate security, design security capabilities, and manage risk assessments.
Drive the vision and strategy for Cerebras’ ML training ecosystem.
Develop novel ML algorithms and network architectures, and improve training dynamics.