Description
As a Staff Machine Learning Research Scientist on the LLM Evals team, you will lead the development of novel evaluation methodologies, metrics, and benchmarks to measure the capabilities and limitations of frontier LLMs.
Your primary responsibilities will include:
- Driving research on the effectiveness and limitations of existing LLM evaluation techniques.
- Designing and developing novel evaluation benchmarks for large language models, covering areas such as instruction following, factuality, robustness, and fairness.
- Communicating and building relationships with clients and peer teams to facilitate cross-functional projects.
- Collaborating with internal teams and external partners to refine metrics and create standardized evaluation protocols.
- Implementing scalable and reproducible evaluation pipelines using modern ML frameworks.
- Publishing research findings in top-tier AI conferences and contributing to open-source benchmarking initiatives.
- Mentoring and guiding research scientists and engineers, providing technical leadership across cross-functional projects.
- Staying deeply engaged with the ML research community, tracking emerging work and contributing to the advancement of LLM evaluation science.
The ideal candidate will have 5+ years of hands-on experience with large language models, NLP, and Transformer-based modeling, spanning both research and engineering development.
You will thrive in a high-energy, fast-paced startup environment and be ready to dedicate the time and effort needed to drive impactful results.
Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.