
Data Engineer

This role is for a Data Engineer on a contract or permanent basis, offering competitive pay. Key skills include Azure Databricks, Apache Spark, SQL, Python, and Terraform, plus experience with Big Data and ETL. Locations include London, Birmingham, Manchester, and Newcastle upon Tyne.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date discovered
February 24, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Data Processing #Azure #Big Data #PySpark #Data Lake #Delta Lake #Synapse #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #GIT #Scala #Infrastructure as Code (IaC) #AI (Artificial Intelligence) #Spark (Apache Spark) #Cloud #Azure Stream Analytics #Terraform #Azure Databricks #Data Engineering #Python #Azure Data Factory #Kafka (Apache Kafka) #Databricks #Apache Spark #DevOps #Azure DevOps #ADF (Azure Data Factory)
Role description

This role is open on a contract or permanent basis.

The role can be based in several UK locations: London, Birmingham, Manchester, or Newcastle upon Tyne.

Required skills

Experience as a Data Engineer, working with Big Data, ETL, and cloud-based solutions.

Strong expertise in Azure Databricks and Apache Spark (PySpark, Scala, or SQL).

Familiarity with Azure data services (Azure Data Factory, Azure Data Lake, Azure Synapse, etc.).

Proficiency in SQL, Python, and/or Scala for data processing and pipeline development.

Experience with Terraform for infrastructure as code (IaC).

Experience with Delta Lake for data versioning, optimization, and transactions (a PySpark sketch follows this list).

Understanding of streaming technologies (e.g., Kafka, Event Hubs, Azure Stream Analytics); a streaming sketch also follows the list.

Familiarity with CI/CD pipelines for data engineering using Terraform, Git, and Azure DevOps.

Strong problem-solving skills and the ability to optimize Spark jobs for performance.

Knowledge of generative AI (Gen AI) in the software development lifecycle.
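
To give a feel for the Databricks and Delta Lake side of this stack, here is a minimal PySpark batch sketch: read raw CSV from a data lake, clean it, and write a Delta table. The storage paths, column names, and app name are hypothetical; on Azure Databricks a `spark` session is already provided and Delta Lake support is built in, so the session builder below matters only off-platform.

```python
# Minimal PySpark batch ETL sketch: raw CSV -> cleaned Delta table.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Read raw landing-zone CSV files (hypothetical ADLS Gen2 path).
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://landing@examplelake.dfs.core.windows.net/orders/")
)

# Basic cleaning: dedupe, parse timestamps, drop non-positive amounts.
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount").cast("double") > 0)
)

# Delta Lake provides ACID writes and time travel over the data lake.
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .save("abfss://silver@examplelake.dfs.core.windows.net/orders/")
)
```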
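For the streaming requirement, a minimal Spark Structured Streaming sketch reading from a Kafka-compatible endpoint (Azure Event Hubs exposes one). The broker address, topic, checkpoint location, and sink path are hypothetical, and the SASL authentication options Event Hubs requires are omitted for brevity.

```python
# Minimal Structured Streaming sketch: Kafka-compatible source -> Delta sink.
# Broker, topic, and paths are hypothetical; Event Hubs auth config omitted.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "examplehub.servicebus.windows.net:9093")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings for downstream parsing.
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Checkpointing gives exactly-once delivery into the Delta sink.
query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "abfss://chk@examplelake.dfs.core.windows.net/orders/")
    .start("abfss://bronze@examplelake.dfs.core.windows.net/orders_stream/")
)
query.awaitTermination()
```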