
Company Description

We’re Adevinta, a global leader in digital marketplaces. Our household name brands, including marktplaats in the Netherlands, Mobile.de in Germany and leboncoin in France, reach hundreds of millions of people every month across 16 countries. 

We’re all about matchmaking, and our sites help people find whatever they’re looking for in their local communities – whether it’s a car, an apartment, a sofa or a new job. Every connection made or item found makes a difference by creating a world where people share more and waste less.

Our brands are supported by global Tech Hubs in Barcelona, Amsterdam, Paris and Berlin. Their goal is to develop common global products and innovation platforms which all of our brands can use. This means using cutting-edge technology to create highly scalable, customisable and secure products and components that free up development time and leverage our access to global data.

Job Description

We are currently looking for an experienced Backend Engineer (Scala/Kotlin) to join our team of data and software engineers in Barcelona.

As a member of the Data team, you will join our journey to build state-of-the-art data processing solutions for Adevinta, with emphasis on volume, velocity and privacy. The data we collect from all Adevinta sites around the world is essential to our business, feeding marketplaces with low-latency (“realtime”) updates, visitor insights analysis and targeted advertising. We solve exciting problems at scale, gathering up to 900 million events per day worldwide while keeping users’ privacy and data security in mind.

You will help develop a modern, large-scale data processing pipeline for Adevinta sites around the world, serving a variety of purposes such as classification, insights, and understanding and modelling user behavior. As part of the global Adevinta organization, you will also have the opportunity to learn from and share knowledge with data scientists and engineers across Adevinta. We encourage a diverse, collaborative and creative work environment where you will push the state of the art in big data processing while building reliable and highly scalable services.

Join us to create an amazing data platform team.



  • Create: lead the creation of data processing services and exploration tools. The team you’ll join builds tools that enable data sharing (batch/stream) across marketplaces, data discovery (with catalog and lineage systems) and privacy compliance (GDPR data takeout and deletion)

  • Protect: improve the reliability and availability of Adevinta systems by gathering hard data, designing systems and creating or adapting code for increased service reliability and performance

  • Improve: identify hidden areas of improvement in any process or system, including changing established rules and procedures

  • Observe: implement monitoring solutions so that production systems can be watched 24/7

  • Share: share knowledge and provide expert advice to Adevinta engineers on how best to use the data platform tools, answer their engineering needs, and foster a mixed development and operations culture

  • Engineer: install, configure, fine-tune and optimise a wide range of technology solutions

Qualifications

  • A degree in Computer Science or a related field, or equivalent experience

  • Strong analytical and problem-solving skills

  • Experience working with cloud platforms (AWS, GCP or Azure) and Linux-based systems

  • A good understanding of software development best practices (version control, CI/CD, automation, pair programming, documentation, testing, …)

  • Experience with Agile methodologies

  • Proven ability and experience developing well-structured programs (preferably in Scala or Kotlin), and an interest in learning more

  • A desire to be constantly learning, and to see upcoming challenges as opportunities to keep growing

Additional Information


  • Experience with batch and streaming data processing tools (Spark, Kafka, Kafka Streams, Luigi, etc.) is a plus

  • Containerisation (e.g. Docker, LXD) and Kubernetes (K8s) experience

  • Experience with any AMQP technology

  • Domain-driven design

  • Experience in building and maintaining systems at scale: service discovery, load balancing, secret management, dynamic request routing, circuit breakers and deployment schemes (rolling updates, canary, etc.)
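
For candidates less familiar with the resilience patterns listed above, here is a minimal sketch in Scala of the circuit-breaker idea: after a number of consecutive failures the breaker "opens" and fails fast instead of calling a possibly unhealthy downstream service. All names are illustrative and not part of any Adevinta system.

```scala
import scala.util.{Failure, Success, Try}

// Minimal circuit breaker: tracks consecutive failures and, once the
// threshold is reached, rejects calls immediately ("fails fast").
final class CircuitBreaker(maxFailures: Int) {
  private var consecutiveFailures = 0

  // The breaker is "open" once too many calls in a row have failed.
  def isOpen: Boolean = consecutiveFailures >= maxFailures

  def call[A](op: => A): Try[A] =
    if (isOpen)
      Failure(new IllegalStateException("circuit open: failing fast"))
    else
      Try(op) match {
        case s @ Success(_) => consecutiveFailures = 0; s // success resets the count
        case f @ Failure(_) => consecutiveFailures += 1; f
      }
}
```

Production implementations (e.g. in Akka or resilience4j) add a "half-open" state and timed recovery, but the failure-counting core is the same.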