Magnet.me  -  The smart network where students and professionals find their internship or job.

Senior Data Engineer

Posted 7 May 2026
Work experience
4 to 10 years
Full-time / part-time
Full-time
Position
Salary
€5,700 - €7,000 per month
Education level
Language requirement
English (Fluent)

At bol, you’ll build the data pipelines that power attribution reporting and insights.

How do you make our customers happy?

By building the data pipelines that turn raw advertising and interaction data into the attribution datasets our business runs on. Attribution is how we answer the question every advertiser cares about: which touchpoints — which ads, channels, and campaigns — actually drove a customer outcome. Getting this right depends entirely on the data underneath. Every impression, click, visit, and conversion has to be captured, joined, and modeled consistently before any attribution logic can run on top. That data layer is what you build. Your pipelines feed both advertising auctions and the dashboards, reports, and analyses that advertisers, product managers, and internal teams use every day. When the data is clean, timely, and well-modeled, the whole organization can trust the numbers.
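The capture-join-model flow described above can be sketched in plain Python. This is an illustrative last-touch attribution model with invented field names (`user_id`, `ts`, `channel`, `order_value`), not bol's actual attribution logic:

```python
from datetime import datetime, timedelta

# Hypothetical minimal event records: ad touchpoints and customer conversions.
touchpoints = [
    {"user_id": "u1", "ts": datetime(2026, 5, 1, 10), "channel": "sponsored_search"},
    {"user_id": "u1", "ts": datetime(2026, 5, 2, 9),  "channel": "display"},
    {"user_id": "u2", "ts": datetime(2026, 5, 1, 12), "channel": "display"},
]
conversions = [
    {"user_id": "u1", "ts": datetime(2026, 5, 2, 11), "order_value": 40.0},
]

def last_touch(touchpoints, conversions, window=timedelta(days=7)):
    """Credit each conversion to the user's most recent touchpoint
    inside the lookback window (None if there is no eligible touchpoint)."""
    results = []
    for conv in conversions:
        candidates = [
            t for t in touchpoints
            if t["user_id"] == conv["user_id"]
            and conv["ts"] - window <= t["ts"] <= conv["ts"]
        ]
        winner = max(candidates, key=lambda t: t["ts"], default=None)
        results.append({
            "user_id": conv["user_id"],
            "order_value": conv["order_value"],
            "credited_channel": winner["channel"] if winner else None,
        })
    return results

attributed = last_touch(touchpoints, conversions)
# u1's conversion is credited to the later "display" touchpoint.
```

In production this join would live in SQL/dbt over far larger volumes, but the semantics to preserve are the same: window boundaries, tie-breaking, and users with no eligible touchpoint.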

The biggest challenge

Attribution data comes from many sources, each with its own quirks, schema drift, and edge cases. Volumes are large, expectations keep growing, and pipelines have to be correct, observable, and fast enough to support both responsive reporting and heavier analytical workloads. The work is rarely straightforward: every step has to preserve semantics, lineage, and performance. You’ll design dbt models that encode this logic clearly, orchestrate workflows in Airflow that handle failures gracefully, and write Python that holds up under scrutiny. Data quality, testing, and documentation are part of the deliverable, not an afterthought.

The role works best for someone who enjoys collaborating across teams. You’ll partner with data scientists, analysts, and product teams to turn their reporting and modeling needs into robust, reusable pipelines. Influence here comes from the quality of the systems you build and your ability to make complex data problems tractable for the rest of the team.
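Handling schema drift across sources often boils down to normalizing every source into one shared contract before common logic runs, and failing loudly when the contract is violated. A minimal sketch, with invented source names and field mappings:

```python
# Hypothetical per-source field renames: two versions of a clickstream feed
# whose field names have drifted over time.
RENAMES = {
    "clickstream_v1": {"uid": "user_id", "event_time": "ts"},
    "clickstream_v2": {"userId": "user_id", "timestamp": "ts"},
}
REQUIRED = {"user_id", "ts"}  # the shared contract every source must satisfy

def normalize(record, source):
    """Map a source-specific record onto the shared contract, raising on
    missing required fields rather than silently dropping data."""
    renames = RENAMES.get(source, {})
    out = {renames.get(k, k): v for k, v in record.items()}
    missing = REQUIRED - out.keys()
    if missing:
        raise ValueError(f"{source}: missing required fields {sorted(missing)}")
    return out

row = normalize({"uid": "u1", "event_time": "2026-05-01T10:00:00"}, "clickstream_v1")
```

Raising instead of coercing is a deliberate choice here: a pipeline that stops on a contract breach is easier to trust than one that quietly ships incomplete attribution data.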

What you'll do

You own the engineering side of the attribution data platform: pipelines, models, orchestration, and the conventions that keep it all maintainable. Reporting needs and attribution logic change often, and the pipelines that feed them need to change with them without breaking trust in the numbers. A lot of the job is working with people, not just code. You’ll talk to analysts, data scientists, and business stakeholders to understand what they need, explain how the data flows, and push back when trade-offs matter. Day to day, you’ll:

  • design, build, and maintain dbt models that transform raw interaction data into clean, documented attribution datasets
  • orchestrate workflows in Airflow with proper failure handling and observability
  • write clean, well-tested Python for ingestion, transformation, and tooling
  • model the data so it serves both operational reporting (dashboards, business KPIs) and analytical use cases (attribution modeling, ad-hoc investigation)
  • keep data quality high through testing, monitoring, and clear data contracts with upstream and downstream teams
  • optimize pipelines for cost, latency, and scalability on cloud data warehouses (BigQuery)
  • partner with data scientists and analysts to productionize attribution logic and ship insights faster
  • contribute to team conventions — CI/CD, code review, documentation — that lift everyone’s work
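The data quality bullet above can take the shape of lightweight checks that run after each pipeline step, roughly in the spirit of dbt's built-in `not_null` and `unique` tests. A sketch with invented column names:

```python
def check_not_null(rows, column):
    """Flag rows where the given column is null."""
    bad = [i for i, r in enumerate(rows) if r.get(column) is None]
    return {"check": f"not_null:{column}", "passed": not bad, "bad_rows": bad}

def check_unique(rows, column):
    """Flag duplicate values in the given column."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return {"check": f"unique:{column}", "passed": not dupes, "duplicates": sorted(dupes)}

# Hypothetical attribution output with two deliberate defects.
rows = [
    {"conversion_id": "c1", "credited_channel": "display"},
    {"conversion_id": "c2", "credited_channel": None},
    {"conversion_id": "c2", "credited_channel": "search"},
]
results = [check_not_null(rows, "credited_channel"), check_unique(rows, "conversion_id")]
```

In a real pipeline these checks would be declared in dbt or wired into the Airflow DAG so a failed contract blocks downstream consumers instead of feeding them bad numbers.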

Expect a mix of hands-on engineering, design discussions, code review, and stakeholder conversations.

Why you can make a difference

Attribution is a shared capability within bol. When the attribution layer is solid, advertisers can set an objective, trust the measurement, and let the system optimize against it. When it’s shaky, every downstream team reinvents its own version of the truth. Hence, you’re not building a pipeline for a dashboard — you’re building the foundation that enables advertisers to reach their goals.

3 reasons why this is (not) for you

Not for you if:

  • You want to spend most of your time on novel data modeling problems. A lot of this role is unglamorous plumbing: backfills, schema migrations, chasing down why a metric moved 0.3%.
  • You find stakeholder conversations draining and would rather be handed a spec. Here, the spec usually doesn't exist yet — you help write it.
  • You're looking for a greenfield rebuild. This is an existing platform with existing decisions, some of which you'll inherit, live with, and gradually improve.

For you if:

  • You care more about a pipeline still running correctly six months from now than about shipping it today. When requirements shift, your first instinct is to reshape the model, not bolt on a patch.
  • You're comfortable being the person who says "that metric is wrong, and here's why" — and equally comfortable being told the same about your own work.
  • You get a small kick out of making other people's jobs easier. A clean dataset that an analyst can trust feels like a win, even when nobody notices.

Where you'll be working

You’ll join the Reliable product group within Marketing & Advertising, working closely with Engineering, Product, Analytics and Business teams. The team is one of the main consumers of interaction data and plays a business-critical role in driving decision making in our advertising and marketing ecosystem.

To be successful in this role, you need:

  • Strong proficiency in dbt for modeling and transforming analytical data
  • Solid proficiency in Python for data engineering, tooling, and custom pipelines
  • Experience building and operating workflows in Airflow (or similar orchestrators)
  • Strong SQL and experience with cloud data warehouses (BigQuery a plus)
  • Experience designing data models for reporting and analytical consumption (star schemas, marts, semantic layers)
  • Experience with testing, CI/CD, and version control in data engineering contexts
  • Familiarity with MLOps
  • Familiarity with data quality, data contracts, and observability practices
  • Ability to explain data architecture and trade-offs to non-technical stakeholders
  • Familiarity with Kotlin and Java

Perks of having a blue heart

Bonus

The bonus is calculated at the end of the year, and we always close the year with a fun party!

Flexible working

We bring the best of both worlds together by working 50% at the office and 50% at home. This way, we find a balance between organisational and individual needs.

On and off

At bol, we understand better than anyone that you have to take care of yourself first, then your environment, and then bol. In that order. That is why everyone at bol receives 29 days of vacation.

At bol, our colleagues make a unique contribution to making everyday life easier. Freedom and responsibility allow us to shape the next step for bol, the team, and ourselves together. By pioneering we take bol further; together, we are responsible for this shared mission.

Retail
Utrecht
Active in 2 countries
3,000 employees
50% men - 50% women
Average age: 33