Who We Are
Role Description
Responsibilities
- Design, build, and maintain scalable data pipelines using Python, Spark, and Databricks
- Develop and optimize transformation pipelines for batch and streaming data
- Apply software engineering and DevOps practices (CI/CD, testing, version control)
- Implement and operate data solutions on cloud platforms (AWS, Azure, or GCP)
- Collaborate closely with business stakeholders to gather, clarify, and translate requirements
Profile
- Hands-on experience in Data Engineering with Python
- Strong knowledge of Databricks / Spark and of building transformation pipelines
- Experience with DevOps practices and software-engineering-driven projects
- Cloud expertise (AWS, Google Cloud, or Azure)
- Excellent communication and interpersonal skills, including collaboration with business stakeholders
- Experience in requirements engineering
- Industry knowledge in the energy sector (grid operations, local substations) is a plus
Benefits
- Option to work from home
- Varied tasks in a renowned company
- An international working environment