Who We Are
Role Description
Responsibilities
- Build and maintain ETL pipelines from various data sources
- Work with Databricks, PySpark and Azure cloud infrastructure
- Design and optimize robust and scalable data pipelines for large datasets
- Handle real-time data processing (e.g. Kafka, streaming)
- Connect and manage data flows to target systems (e.g. Power BI, SharePoint)
- Work with Infrastructure as Code (Terraform)
- Ensure data quality, performance and governance compliance
- Act as a bridge between business stakeholders and technical teams
Profile
- Fluent in German & English
- Hands-on experience in data engineering with Python
- Strong knowledge of Databricks/Spark and building transformation pipelines
- Experience with DevOps methodology and software-engineering-focused projects
- Cloud expertise (e.g. AWS, Google Cloud, or Azure)
- Excellent communication and interpersonal skills, including collaboration with business stakeholders
- Experience in requirements engineering
- Industry knowledge in the energy sector is a plus
Benefits
- Interesting tasks in a multinational environment
- Option to work from home