Role Overview
We are seeking an experienced AI Quality Engineer to own and drive the end-to-end quality strategy for our API and AI/ML platforms. This role focuses on building scalable automation frameworks, CI/CD-integrated validation pipelines, and robust AI testing standards that ensure accuracy, fairness, performance, and production reliability across services.
Key Responsibilities
- Own the end-to-end testing strategy for API and AI/ML capabilities across multiple squads, working closely with QA Leads, Tech Leads, and Engineering teams.
- Design, build, and maintain scalable automated test suites for APIs, covering functional, integration, regression, performance, and security testing, fully integrated into CI/CD pipelines with defined quality gates.
- Establish and manage baseline evaluation datasets, golden tests, and acceptance thresholds for pre-deployment and post-deployment validation of APIs and AI models.
- Define and execute AI/ML testing strategies, including model validation, output quality assessment, bias and fairness checks, drift detection, edge-case testing, and continuous model monitoring.
- Ensure CI/CT pipeline reliability through proactive monitoring, alerting, rapid triage, root cause analysis, environment/data management, and performance optimization.
- Expand and standardize validation frameworks and test coverage across API and AI services, ensuring consistency across environments and compliance with engineering and governance standards.
- Drive shift-left testing by collaborating during solution design, sprint execution, and code reviews.
- Report on test health, risks, and quality metrics, contributing to governance forums and providing visibility into release readiness and quality trends.
Required Skills & Experience
- Bachelor’s degree in Computer Science or equivalent practical experience.
- 5+ years of hands-on experience in API test automation, including functional, integration, performance, and security testing, with strong CI/CD integration.
- Proficiency with modern test automation frameworks and tools for API testing, performance testing, and CI/CD orchestration.
- Strong understanding of AI/ML quality validation concepts, including model evaluation, fairness, drift, and monitoring (hands-on experience preferred).
- Excellent cross-team collaboration and communication skills, with the ability to influence testing strategy and support multiple squads.
- Strong documentation, debugging, and problem-solving skills, with a proven record of improving test reliability and engineering quality maturity.
Nice to Have
- Experience testing LLMs, ML inference APIs, or data-driven platforms.
- Exposure to cloud-native CI/CD pipelines and distributed systems.
- Familiarity with governance, compliance, or enterprise QA standards.