

Databricks Engineer
London / Hybrid working
6-month contract
We are seeking a skilled and experienced Databricks Engineer for a contract role to contribute to the development and implementation of our unified data platform. You will be instrumental in building and optimizing data solutions within the Databricks Lakehouse Platform, enabling seamless data integration and advanced analytics.
Responsibilities:
• Design, develop, and maintain data pipelines and ETL/ELT processes within the Databricks Lakehouse Platform.
• Build and optimize data solutions on Databricks to support a unified data platform architecture.
• Implement data ingestion, transformation, and storage strategies within Databricks.
• Work with Delta Lake to ensure data reliability, performance, and scalability.
• Integrate Databricks with other components of the data platform (e.g., data warehousing, data streaming).
• Collaborate with data architects, data engineers, and data scientists to ensure data solutions meet business needs.
• Optimize data processing and query performance within Databricks.
• Implement data quality checks and data validation processes to ensure data integrity.
• Contribute to best practices for data engineering and platform development.
Skills and Experience:
• Extensive experience as a Databricks Engineer, with a focus on building unified data platforms.
• Deep expertise in Apache Spark and the Databricks Lakehouse Platform.
• Strong proficiency in SQL and data warehousing principles.
• Hands-on experience with Delta Lake for reliable data storage.
• Proficiency in Python for data engineering and processing.
• Experience integrating Databricks with other data technologies.
• Solid understanding of data modeling, data warehousing, and data lake concepts.
• Proven ability to optimize data pipelines and query performance.
• Strong problem-solving, analytical, and communication skills.