About the job
Company Description
Tenhance is a rapidly growing startup focused on building adaptive, human-centric, and agentic AI systems that go beyond traditional static AI interactions. We specialize in developing AI infrastructure and products that help organizations build intelligent workflows, deliver personalized experiences, and drive higher AI adoption across enterprise and consumer ecosystems.
Our ecosystem includes:
- Sukhi AI - a consumer-focused AI companion designed for personalized engagement, reminders, and intelligent interactions
- Navihyr - an AI-powered interview and hiring automation platform helping companies scale recruitment workflows using AI-driven interviews and automation
- Enterprise AI copilots, multi-agent systems, workflow automation infrastructure, and AI orchestration platforms
Backed by expertise in AI engineering, product innovation, and enterprise integration, we aim to make AI systems more proactive, adaptive, reliable, and meaningful.
If you are excited about building and testing next-generation AI systems in a fast-moving startup environment, we’d love to hear from you.
Role Description
This is a full-time remote role for a QA Automation Engineer specializing in GenAI systems and AI testing.
This role goes beyond traditional software QA.
You will work on testing AI-native products, conversational systems, LLM-powered workflows, agentic AI systems, and enterprise automation platforms. The ideal candidate understands both modern QA practices and the evolving challenges of testing GenAI systems — including contextual understanding, hallucinations, conversational consistency, prompt reliability, memory handling, workflow accuracy, and response quality.
You’ll collaborate closely with founders, product teams, AI engineers, and developers to improve system reliability, scalability, usability, and performance across the Tenhance ecosystem.
Key Responsibilities
QA Automation & System Testing
- Develop and maintain automated testing frameworks and test suites
- Build regression, smoke, integration, API, and end-to-end testing workflows
- Perform manual, exploratory, and usability testing across platforms
- Validate frontend, backend, APIs, and enterprise workflow integrations
- Support CI/CD pipelines and release validation processes
- Identify bugs, edge cases, workflow failures, and system inconsistencies
- Create detailed bug reports, test cases, and QA documentation
GenAI & AI Systems Testing
- Test AI-driven systems for contextual understanding and conversational continuity
- Validate prompt reliability, memory handling, and response consistency
- Identify hallucinations, inaccurate outputs, unstable reasoning, and AI failure cases
- Evaluate AI responses across multiple user personas and edge-case scenarios
- Test RAG pipelines, retrieval quality, and grounding accuracy
- Validate multi-step AI agent workflows and automation systems
- Monitor AI system behavior including latency, fallback handling, and reliability
- Create structured evaluation datasets and benchmark testing scenarios for AI workflows
Product Ownership & Startup Execution
- Work directly with founders and core engineering teams
- Operate effectively in a fast-paced startup environment with evolving priorities
- Balance speed, experimentation, and quality during rapid product iterations
- Take ownership of QA processes from early-stage development through production
- Proactively suggest improvements in usability, workflows, and product quality
- Contribute beyond narrowly defined responsibilities when required
Qualifications
- Strong understanding of Quality Assurance principles and testing methodologies
- Hands-on experience with manual testing and automation testing frameworks
- Familiarity with tools such as Playwright, Selenium, Cypress, and Postman
- Experience creating and executing detailed test cases and QA workflows
- Strong analytical, debugging, and problem-solving skills
- Attention to detail with the ability to identify edge cases and workflow gaps
- Understanding of software development lifecycle and CI/CD practices
- Ability to work collaboratively in a remote and agile environment
- Comfortable working in ambiguity and rapidly changing startup workflows
- Strong communication and documentation skills
Preferred / Good to Have
- Familiarity with AI systems, GenAI workflows, or conversational AI platforms
- Understanding of prompt engineering and LLM evaluation concepts
- Experience testing AI agents, copilots, or RAG-based systems
- Exposure to OpenAI APIs, LangChain, vector databases, or agent frameworks
- Basic scripting knowledge in Python or JavaScript
- Experience with performance, load, or reliability testing
- Previous startup experience is highly valued
Startup Reality & Expectations
This is not a traditional enterprise-only QA role. We are building rapidly, shipping continuously, and solving real-world AI problems in production environments.
The ideal candidate should be comfortable with:
- Fast-paced execution
- Rapid experimentation and iteration
- High ownership and accountability
- Ambiguity and evolving product requirements
- Working across multiple systems and workflows
- Learning new technologies and AI paradigms quickly
If you are excited about solving challenging AI quality problems and contributing meaningfully in an early-stage environment, this role will offer significant growth and impact.
Why Join Tenhance?
- Work directly with founders and AI engineering teams
- Build and test next-generation AI systems and products
- Exposure to enterprise AI agents, copilots, and automation infrastructure
- Opportunity to define QA standards for AI-native platforms
- High-impact role with rapid ownership and learning opportunities
- Be part of a team building practical AI products with real-world adoption
Requirements added by the job poster
- Bachelor's Degree