
DevOps Engineer - ML & Data Infrastructure (Remote - US)

Remote, USA · Full-time · Posted 2025-11-24
Please note that this role is open to US-based candidates only.

We’re looking for a DevOps Engineer to help design, build, and optimize the cloud infrastructure powering our machine learning operations. You’ll play a key role in scaling AI models from research to production — ensuring smooth deployments, real-time monitoring, and rock-solid reliability across our Google Cloud Platform (GCP) environment. You’ll work hand-in-hand with data scientists, ML engineers, and other DevOps experts to automate workflows, enhance performance, and keep our AI systems running seamlessly for millions of players worldwide.

What You’ll Do

• Manage, configure, and automate cloud infrastructure using tools such as Terraform and Ansible.
• Implement CI/CD pipelines for ML models and data workflows, focusing on automation, versioning, rollback, and monitoring with tools like Vertex AI, Jenkins, and Datadog.
• Build and maintain scalable data and feature pipelines for both real-time and batch processing using BigQuery, Bigtable, Dataflow, Composer, Pub/Sub, and Cloud Run.
• Set up infrastructure for model monitoring and observability — detecting drift, bias, and performance issues using Vertex AI Model Monitoring and custom dashboards.
• Optimize inference performance, improving latency and cost-efficiency of AI workloads.
• Ensure overall system reliability, scalability, and performance across the ML/Data platform.
• Define and implement infrastructure best practices for deployment, monitoring, logging, and security.
• Troubleshoot complex issues affecting ML/Data pipelines and production systems.
• Ensure compliance with data governance, security, and regulatory standards, especially for real-money gaming environments.

What We’re Looking For

• 3+ years of experience as a DevOps Engineer, ideally with a focus on ML and Data infrastructure.
• Strong hands-on experience with Google Cloud Platform (GCP) — especially BigQuery, Dataflow, Vertex AI, Cloud Run, and Pub/Sub.
• Proficiency with Terraform (and bonus points for Ansible).
• Solid grasp of containerization (Docker, Kubernetes) and orchestration platforms like GKE.
• Experience building and maintaining CI/CD pipelines, preferably with Jenkins.
• Strong understanding of monitoring and logging best practices for cloud and data systems.
• Scripting experience with Python, Groovy, or Shell.
• Familiarity with AI orchestration frameworks (LangGraph or LangChain) is a plus.
• Bonus points if you’ve worked in gaming, real-time fraud detection, or AI-driven personalization systems.

Apply to this Job
