MLOps Engineer (Remote) at PairSoft



Job Info:
  • Company: PairSoft
  • Position: MLOps Engineer (Remote)
  • Location: Remote, India
  • Source: EchoJobs
  • Published: March 01, 2026
  • Category: Engineer
  • Type: Full-Time


Job Description

About the Role

We are looking for an MLOps Engineer to design, build, and operate production-grade infrastructure and pipelines for Machine Learning (ML), Deep Learning, and Generative AI (GenAI) solutions. Your primary focus is ensuring these systems are reliable, scalable, observable, and secure across the full lifecycle—training, deployment, monitoring, and retraining.

This is a hands-on role at the intersection of ML engineering and cloud/platform reliability. While MLOps is the top priority, there is room to contribute to AI solution implementation (model integration/experimentation) if you’re interested and bandwidth allows. You will work closely with AI engineers, DevOps, and the Data & AI Architect to standardize and scale repeatable production patterns.

Key Responsibilities

  • Design, implement, and maintain end-to-end pipelines covering training, validation, deployment, monitoring, and retraining.
  • Build and operate production ML infrastructure using Infrastructure as Code (IaC).
  • Implement and manage CI/CD for ML, including artifact/model versioning, promotion, rollout/rollback, and dev/test/prod parity.
  • Deploy and run ML/GenAI workloads on Azure using Azure App Service and Azure Container Apps, with monitoring via Application Insights.
  • Implement model observability: performance monitoring, data quality checks, drift detection (where applicable), alerting, and dashboards.
  • Optimize compute and cost for training and inference (scaling policies, capacity planning, cost/performance tradeoffs).
  • Support GenAI operational needs, including LLM inference patterns, embeddings, and retrieval pipelines; enable hooks for evals/guardrails where required.
  • Ensure ML systems meet security and governance requirements (RBAC/least privilege, secrets management, audit logging, encryption, secure access patterns).
  • Partner with the Data & AI Architect to translate architecture standards into reusable pipeline templates and operational controls.
  • Partner with AI engineers to productionize solutions and improve reliability and scalability; contribute to model development/experimentation as capacity allows.

Requirements

  • 3+ years of experience in MLOps, ML engineering, platform engineering, or a closely related role.
  • Strong proficiency in Python for ML workflows, automation, and pipeline development.
  • Hands-on experience building and operating ML systems on Azure (OCI exposure is a plus).
  • Proven experience building production-grade MLOps pipelines end-to-end (training → deployment → monitoring → retraining).
  • Strong experience with Infrastructure as Code (Terraform or equivalent).
  • Experience with MLOps tooling such as MLflow (or equivalent experiment tracking) and CI/CD pipelines.
  • Experience containerizing services using Docker in production environments.
  • Hands-on experience deploying and monitoring services on Azure using Azure App Service, Azure Container Apps, and Application Insights.
  • Familiarity with GenAI/LLM-based systems (inference workflows, embeddings, retrieval/RAG components) and operational considerations.
  • Strong communication skills and ability to collaborate in a fast-paced, cross-functional environment.

Nice to Have

  • Experience with orchestration tools such as Apache Airflow (open-source) or Azure-native alternatives (Azure Data Factory / Azure ML pipelines).
  • Experience with feature stores and/or real-time inference patterns.
  • Exposure to multi-cloud architectures.
