
Data Engineer

This role is for a Data Engineer (Spark, Kafka) in Windsor, offering up to £500 per day for an initial 6-month contract. Key skills include Kafka, Spark SQL, Python, and data pipeline management, with energy sector experience preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Up to 500
🗓️ - Date discovered
February 21, 2025
🕒 - Project duration
6 months (initial term, with a view to extend)
🏝️ - Location type
Hybrid
📄 - Contract type
Inside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
Windsor, England, United Kingdom
🧠 - Skills detailed
#JDBC (Java Database Connectivity) #Databases #Python #Data Security #Data Processing #NoSQL #ETL (Extract, Transform, Load) #Data Storage #Security #MySQL #Spark SQL #Compliance #Data Integration #ADF (Azure Data Factory) #Consul #PostgreSQL #S3 (Amazon Simple Storage Service) #Data Engineering #Spark (Apache Spark) #MongoDB #PySpark #Cloud #Apache Kafka #Programming #Data Ingestion #Big Data #Data Privacy #Data Pipeline #Azure #Storage #SQL (Structured Query Language) #Scala #Hadoop #Documentation #Kafka (Apache Kafka) #Apache Spark
Role description

We are partnered with a leading global consultancy that is searching for contractors with the following skillsets to work on a LONG-TERM contract within the ENERGY sector:

ROLE 1:

Role: Data Engineer (Spark, Kafka)

Location: Windsor

Style: Hybrid

Rate: up to £500 per day (inside IR35)

Duration: 6 months (initially – with a view to extend)

Key responsibilities:

  1. Design, implement, and manage Kafka-based data pipelines and messaging solutions to support critical business operations and enable real-time data processing.

  2. Configure, deploy, and maintain Kafka clusters, ensuring high availability and scalability to maximize uptime and support business growth.

  3. Monitor Kafka performance and troubleshoot issues to minimize downtime and ensure uninterrupted data flow, enhancing decision-making and operational efficiency.

  4. Collaborate with development teams to integrate Kafka into applications and services.

  5. Develop and maintain Kafka connectors (e.g. JDBC, MongoDB, and S3 connectors), along with topics and schemas, to streamline data ingestion from databases, NoSQL data stores, and cloud storage, enabling faster data insights (a minimal sketch of registering such a connector follows this list).

  6. Implement security measures to protect Kafka clusters and data streams, safeguarding sensitive information and maintaining regulatory compliance.
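
Connector work of this kind is usually driven through the standard Kafka Connect REST interface. Below is a minimal, hedged sketch of registering a JDBC source connector that way; the connector class is Confluent's JDBC source, and the host names, credentials, table, and topic details are illustrative placeholders rather than anything specified in this role.

```python
# Hedged sketch: register a JDBC source connector via the Kafka Connect
# REST API. All connection details below are invented placeholders.
import requests

connector = {
    "name": "postgres-source",  # hypothetical connector name
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db:5432/energy",  # placeholder
        "connection.user": "etl_user",                         # placeholder
        "connection.password": "********",
        "mode": "incrementing",                # poll new rows by a key column
        "incrementing.column.name": "id",
        "topic.prefix": "pg-",                 # topics become pg-<table>
        "tasks.max": "1",
    },
}

# POST to the Connect worker's default REST endpoint (port 8083).
resp = requests.post("http://localhost:8083/connectors", json=connector, timeout=10)
resp.raise_for_status()
print(resp.json())
```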

Key Skills:

  1. Design, build, and maintain reliable, scalable data pipelines, with a focus on data integration, data security, and compliance.

  2. Monitor and manage the performance of data systems and troubleshoot issues.

  3. Strong knowledge of data engineering tools and technologies (e.g. SQL, ETL, data warehousing); experience with tools such as Azure ADF, Apache Kafka, and Apache Spark SQL; proficiency in programming languages such as Python and PySpark (see the streaming sketch after this list).

  4. Good written and verbal communication skills

  5. Experience managing business stakeholders to clarify requirements
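
To make the Kafka + Spark SQL + PySpark combination above concrete, here is a minimal Structured Streaming sketch that consumes a Kafka topic and writes parsed events to Parquet. It assumes the spark-sql-kafka integration package is on the classpath, and every topic, schema, and path name is an invented placeholder, not a detail of this role.

```python
# Hedged sketch: PySpark Structured Streaming job that reads JSON events
# from a Kafka topic and lands them as Parquet files.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = (SparkSession.builder
         .appName("energy-readings-stream")  # hypothetical app name
         .getOrCreate())

# Invented event schema for illustration.
schema = StructType([
    StructField("meter_id", StringType()),
    StructField("reading_kwh", DoubleType()),
    StructField("ts", StringType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "meter-readings")             # placeholder topic
          .load()
          # Kafka values arrive as bytes; cast and parse the JSON payload.
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/meter_readings")             # placeholder path
         .option("checkpointLocation", "/chk/meter_readings")
         .start())
query.awaitTermination()
```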

ROLE 2:

Role: Hadoop Big Data Developer

Location: Windsor

Style: Hybrid

Rate: up to £400 per day (inside IR35)

Duration: 6 months (initially – with a view to extend)

Key responsibilities:

  1. Work closely with the development team to assess existing Big Data infrastructure

  2. Design and code Hadoop applications to analyze data compilations

  3. Create data processing frameworks

  4. Extract and isolate data clusters

  5. Test scripts to analyze results and troubleshoot bugs

  6. Create data tracking programs and documentation

  7. Maintain security and data privacy

Key Skills:

  1. Build, schedule, and maintain data pipelines, with strong expertise in PySpark, Spark SQL, Hive, Python, and Kafka (a minimal batch sketch follows this list).

  2. Strong experience in data collection and integration, scheduling, data storage and management, and ETL (Extract, Transform, Load) processes

  3. Knowledge of relational and non-relational databases (e.g., MySQL, PostgreSQL, MongoDB).

  4. Good written and verbal communication skills

  5. Experience managing business stakeholders to clarify requirements
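
As a concrete instance of skills 1–3, the following is a small, hedged PySpark batch sketch that reads a Hive table, aggregates with Spark SQL, and writes a partitioned result. The table and column names are invented for illustration, and it assumes a configured Hive metastore.

```python
# Hedged sketch: PySpark batch ETL over Hive with Spark SQL.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("daily-usage-etl")  # hypothetical job name
         .enableHiveSupport()         # assumes a Hive metastore is available
         .getOrCreate())

# Aggregate raw readings into a daily total per meter (names invented).
daily = spark.sql("""
    SELECT meter_id,
           to_date(ts) AS usage_date,
           SUM(reading_kwh) AS total_kwh
    FROM raw.meter_readings          -- hypothetical source Hive table
    GROUP BY meter_id, to_date(ts)
""")

# Persist as a partitioned managed table (hypothetical target).
(daily.write
      .mode("overwrite")
      .partitionBy("usage_date")
      .saveAsTable("curated.daily_usage"))
```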

If you are interested and have the relevant experience, please apply promptly and we will contact you to discuss it further.

Yilmaz Moore

Senior Delivery Consultant

London | Bristol | Amsterdam