What you will be doing:

  • Looking after the development, maintenance and expansion of our large multi-tenant data platform

  • Prototyping and developing data services and pipelines in production

  • Providing data engineering capabilities to our awesome product teams

  • Solving the most difficult data challenges, hand-in-hand with other teams in the company

  • Clearly and creatively documenting your achievements, tools and pipelines, so they’re useful and easy to grasp for your team members and tenants

  • Promoting and embedding the Core data stack for further use within the organization

  • Researching new business intelligence and big data technologies and techniques

Who we’re looking for:

  • You are driven by curiosity and have a passion for data

  • You enjoy helping people push the limits of the business insights they can extract from our data

  • You are a fan of big data and have hands-on experience with large data sets

  • You feel comfortable with Python and Scala

  • You are proficient in SQL (any variant)

  • You have at least 1 year of experience working on AWS

  • You know how to build secure, efficient, cloud-enabled data pipelines

  • You strive to write beautiful code, and you’re comfortable learning new technologies

Nice to have:

  • You have some familiarity with the following technologies/tools: Spark, Delta Lake, Hive, Presto/Trino, Docker

  • You have some experience with, and enjoy working on, serverless applications

  • You have experience working across time zones

What we offer:

  • Friendly office environment in Berlin-Charlottenburg

  • Free drinks and fresh fruit

  • Competitive salary

If this sounds like you, please send your CV and cover letter to hello@dataleyk.com