
PySpark Data Engineer – Delivery
Job description
We are the Microsoft Data Platform Competence Center – leading experts in Microsoft data technologies. Our team has hands-on experience with the full spectrum of modern data solutions, including Azure Data Factory, Synapse Analytics, Microsoft Fabric, Power BI, Purview, Azure SQL Database, Databricks, Palantir, and more.
We constantly learn, experiment, and refine our engineering practices, implementing innovations across the data ecosystem.
Our team has grown to 40 specialists operating in both the Czech Republic and Switzerland.
What you will do as a PySpark Data Engineer
Design, build, and optimize scalable data pipelines using Python & PySpark
Work with modern data platforms (Palantir Foundry / Databricks / similar)
Integrate data from finance and insurance source systems
Support data delivery for analytics and reporting solutions (e.g. Power BI)
Ensure the quality, reliability, and performance of data pipelines
Collaborate with Business, Analytics and Engineering teams
Support deployment, monitoring and production operations
What we expect
Strong experience in Python & PySpark
Experience with Spark-based data platforms (Palantir Foundry is a big plus)
Solid experience in data engineering and building ETL/ELT pipelines
Strong knowledge of SQL and large-scale data processing
Experience in financial services (insurance preferred)
Experience with Git and CI/CD pipelines
Good communication skills in English
What we offer you
Budget for personal development (courses, conferences, certifications)
Opportunity to work remotely on international projects
Work from the comfort of your home or from our newly designed office
We value our time – no overtime
Access to our company library for inspiration
Attractive mobile plan, Multisport card, and fuel card
Barbecues on the terrace and plenty of other team events