Magnet.me  -  The smart network where students and professionals find their internship or job.

Senior MLOps Engineer

Posted 21 Jan 2026
Work experience: 5 to 10 years
Full-time / part-time: Full-time
Required language: English (Fluent)

In this role as a Senior MLOps Engineer, you will support Elsevier’s large-scale research platforms by turning experimental NLP, search, and GenAI models into secure, reliable, and scalable production services.

About the team

Elsevier’s mission is to help researchers, clinicians, and life sciences professionals advance discovery and improve health outcomes through trusted content, data, and analytics. As the landscape of science and healthcare evolves, we are pioneering intelligent discovery experiences — from Scopus AI and LeapSpace to ClinicalKey AI, PharmaPendium, and next-generation life sciences platforms. These products leverage retrieval-augmented generation (RAG), semantic search, and generative AI to make knowledge more discoverable, connected, and actionable across disciplines.

About the role

This role supports Elsevier’s large-scale research platforms by turning experimental NLP, search, and GenAI models into secure, reliable, and scalable production services. It focuses on ML and LLM engineering across cloud platforms, including building end-to-end ML pipelines, MLOps infrastructure, and CI/CD for models used in search, recommendations, and RAG-based systems. The position involves designing and operating retrieval, ranking, and evaluation pipelines, including IR metrics, LLM quality metrics, and A/B testing, while optimizing cost and performance at scale. You will collaborate closely with product managers, domain experts, data scientists, and operations engineers to deliver high-quality, responsible AI features over a massive scholarly corpus. The role suits an experienced ML engineer with strong cloud, search, and NLP expertise who wants to work at the intersection of GenAI, research content, and production-grade systems.

Key responsibilities

ML & LLM engineering, search and recommendation engines

  • Automate and orchestrate machine learning workflows across major cloud and AI platforms (AWS, Azure, Databricks, and foundation model APIs such as OpenAI)

  • Maintain and version model registries and artifact stores to ensure reproducibility and governance

  • Develop and manage CI/CD for ML, including automated data validation, model testing, and deployment

  • Implement ML engineering solutions using popular MLOps platforms such as AWS SageMaker, MLflow, and Azure ML

  • Build end-to-end custom SageMaker pipelines for recommendation systems

  • Design and implement the engineering components of GAR+RAG systems (e.g., query interpretation and reflection, chunking, embeddings, hybrid retrieval, semantic search), and manage prompt libraries, guardrails, and structured output for LLMs hosted on Bedrock/SageMaker or self-hosted

  • Design and implement ML pipelines that utilize Elasticsearch/OpenSearch/Solr, vector DBs, and graph DBs

  • Build evaluation pipelines: offline IR metrics (e.g., NDCG, MAP, MRR), LLM quality metrics (e.g., faithfulness, grounding), and A/B testing (a brief sketch of these offline metrics follows this list)

  • Optimize infrastructure costs through monitoring, scaling strategies, and efficient resource utilization

  • Stay current with the latest GenAI, NLP, and RAG research and apply the state of the art in our experiments and systems
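
As context for the offline IR metrics named in the evaluation bullet above, here is a minimal Python sketch of NDCG@k and MRR computed from graded relevance judgments. The function names and the sample judgments are illustrative only and are not part of Elsevier's stack.

  import math

  def dcg_at_k(relevances, k):
      # Discounted cumulative gain over the top-k ranked results (linear gain variant).
      return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

  def ndcg_at_k(relevances, k):
      # NDCG = DCG of the observed ranking divided by DCG of the ideal (sorted) ranking.
      ideal = dcg_at_k(sorted(relevances, reverse=True), k)
      return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

  def mrr(batched_relevances):
      # Mean reciprocal rank: average of 1 / (rank of the first relevant result) per query.
      total = 0.0
      for relevances in batched_relevances:
          rank = next((i + 1 for i, rel in enumerate(relevances) if rel > 0), None)
          total += 1.0 / rank if rank else 0.0
      return total / len(batched_relevances)

  # Hypothetical graded relevance judgments for the top results of two queries.
  judged = [[3, 2, 0, 1], [0, 0, 2, 1]]
  print([round(ndcg_at_k(r, k=4), 3) for r in judged])
  print(round(mrr(judged), 3))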

Collaboration

  • Partner with Subject-Matter Experts, Product Managers, Data Scientists, and Responsible AI experts to translate business problems into cutting-edge data science solutions

  • Collaborate and interface with Operations Engineers who deploy and run production infrastructure

Required qualifications

  • 5+ years in ML engineering and MLOps, shipping ML, search, or GenAI systems to production

  • Strong Python, Java, and/or Scala engineering

  • Experience with statistical analysis, machine learning theory and natural language processing

  • Hands-on experience with major cloud vendor solutions (AWS, Azure, and/or Google Cloud)

  • Search/vector/graph technologies (e.g., Elasticsearch, OpenSearch, Solr, Neo4j)

  • Experience evaluating LLMs

  • Background with scholarly publishing workflows, bibliometrics, or citation graphs

  • A strong understanding of the Data Science Life Cycle including feature engineering, model training, and evaluation metrics

  • Familiarity with ML frameworks, e.g., PyTorch, TensorFlow, PySpark

  • Experience with large scale data processing systems, e.g., Spark

Why join us?

Join our team and contribute to a culture of innovation, collaboration, and excellence.

Work in a way that works for you

We promote a healthy work/life balance across the organization and offer appealing working conditions for our people. With numerous wellbeing initiatives, shared parental leave, study assistance, and sabbaticals, we will help you meet your immediate responsibilities and your long-term goals.

  • Flexible working hours - adjust the times you work during the day to fit everything in and work when you are most productive

Working for you

  • Comprehensive Pension Plan

  • Home, office, or commuting allowance

  • Generous vacation entitlement and option for sabbatical leave

  • Maternity, Paternity, Adoption and Family Care leave

  • Flexible working hours

  • Personal Choice budget

  • Internal communities and networks

  • Various employee discounts

  • Recruitment introduction reward

  • Employee Assistance Program (global)

Elsevier is a world-leading provider of information solutions that enhance the performance of science, health, and technology professionals, empowering them to make better decisions and deliver better care.

IT
Amsterdam
10,000 employees