Mistral AI

Software Engineer, QA

Hybrid · Mid-level · Full-time · Paris
First indexed 17 Apr 2026

Description

We are seeking a skilled and proactive QA Engineer to join our team and ensure the reliability, accuracy, and robustness of our AI-powered products.

In this role, you will design and execute test strategies for our applications, APIs and machine learning models. Your work will involve automated testing, edge-case analysis, and collaboration with cross-functional teams to deliver seamless, high-quality user experiences.

Key responsibilities include:

  • Test Automation: Develop automated test suites to validate app features, APIs, and model integrations, ensuring end-to-end reliability and user experience.
  • Edge Case Analysis: Collaborate with PMs and other stakeholders to identify and rigorously test edge cases, improving the robustness of both platform features and models.
  • Quality Platform Development: Contribute to building tools and frameworks that enable more efficient and scalable quality testing processes across the organization.
  • Release Readiness Validation: Implement pre-release quality gates to validate models, APIs, and platform updates, providing a green light for production releases.
  • Systematic QA Campaigns: Design and lead comprehensive quality assurance campaigns, including functional, stress, and performance testing, to proactively identify potential issues.
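As a rough sketch of the kind of automated pre-release check described above, a quality gate can be a small test suite run in CI before a release is approved. The function and thresholds here are purely illustrative, not taken from the posting:

```python
import unittest


def normalize_score(raw: float) -> float:
    """Illustrative function under test: clamp a model score into [0, 1]."""
    return min(1.0, max(0.0, raw))


class ReleaseGateTests(unittest.TestCase):
    """Toy example of the automated checks a pre-release gate might run."""

    def test_nominal_values_pass_through(self):
        # Functional check: in-range scores are returned unchanged.
        self.assertEqual(normalize_score(0.5), 0.5)

    def test_edge_cases_are_clamped(self):
        # Edge-case analysis: out-of-range inputs must not leak downstream.
        self.assertEqual(normalize_score(-3.2), 0.0)
        self.assertEqual(normalize_score(7.0), 1.0)
```

Run with `python -m unittest`; a failing test blocks the release, a clean run is the "green light" for production.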

About you:

  • You have a proven ability to create and execute comprehensive test strategies covering functional, regression, and exploratory testing for AI products.
  • You are proficient in QA tools like Playwright, Postman, or similar platforms for API and functional testing.
  • You are skilled at identifying and documenting issues, and at collaborating with developers to resolve them efficiently.
  • You are autonomous and a self-starter.
  • You are a proactive problem-solver with a continuous improvement mindset.
  • You are proficient in Python or TypeScript.

Additionally, it would be ideal if you have:

  • Experience testing Machine Learning models.
  • Understanding of the Machine Learning lifecycle.
  • Experience with various types of testing: performance, load, accessibility, or others.
  • Strong debugging skills.

This listing is enriched and indexed by YubHub. To apply, use the employer's original posting: https://jobs.lever.co/mistral/03918386-cb56-4e4d-afa8-1f8b7676a4a6