Intermediate Big Data Engineer (Azure, Databricks, Splunk) | IT - Datalab, Data Engineering Big Data, Splunk (m/w/d)

Cabo Personal GmbH

Hamburg, Germany
Published Feb 17, 2026
Full-time
Permanent

Job Summary

This role involves the continuous development of an Azure-based data integration platform, with a primary focus on Databricks and Unity Catalog. As an Intermediate Data Engineer, you will be responsible for the design, implementation, and operation of ETL and transformation pipelines. You will connect various source systems and provide technical support for migrating data into a unified data model. A key part of your day-to-day work will involve collaborating with other engineers to enforce coding standards, promote code reuse, and keep transformation logic consistent. You will translate complex business requirements into robust, scalable technical solutions and document technical concepts for stakeholders.

The position offers an attractive hybrid working model with only 2-3 required on-site days per month, 30 days of vacation, and a competitive salary structure under the GVP collective agreement, making it a strong opportunity for professionals seeking work-life balance and technical growth in a Big Data environment.
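To illustrate the kind of pipeline work described above, here is a minimal PySpark sketch of an extract-transform-load step on Databricks that writes to a Unity Catalog table. All paths, column names, and the main.sales.orders target are illustrative assumptions; the posting does not name concrete source systems or the target data model.

```python
# Minimal ETL sketch for Databricks with Unity Catalog.
# All names (paths, catalog, schema, table, columns) are hypothetical
# placeholders; the posting does not specify the actual source systems
# or the unified data model.
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is provided automatically;
# getOrCreate() simply reuses it.
spark = SparkSession.builder.getOrCreate()

# Extract: read raw records from a hypothetical landing zone.
raw = spark.read.format("json").load("/Volumes/landing/orders/")

# Transform: normalize names and types toward a unified data model
# and deduplicate on the business key.
orders = (
    raw.select(
        F.col("order_id").cast("string").alias("order_id"),
        F.col("order_ts").cast("timestamp").alias("ordered_at"),
        F.col("amount").cast("decimal(18,2)").alias("amount_eur"),
    )
    .dropDuplicates(["order_id"])
)

# Load: write to a Unity Catalog table using its three-level
# namespace (catalog.schema.table).
orders.write.mode("overwrite").saveAsTable("main.sales.orders")
```

In practice such a step would be parameterized per source system and versioned in Git, with CI/CD promoting the pipeline code across environments, in line with the experience requirements listed below.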

Required Skills

Education

Not specified

Experience

  • Several years of professional experience in Python programming
  • Proven experience in SQL and Apache Spark
  • Professional experience in creating and managing ETL pipelines
  • Practical experience with Azure Databricks, CI/CD, and Git
  • Experience in technical documentation and translating business requirements into technical solutions

Languages

English (Basic)

Additional

  • Hybrid work model with 2-3 days of on-site presence per month at various locations
  • 38-hour work week
  • Employment is governed by the GVP (formerly BAP) collective agreement