

Data Engineer
We have partnered with a leading global consultancy searching for contractors with the following skill sets to work on a LONG-TERM contract within the ENERGY sector:
ROLE 1:
Role: Data Engineer (Spark, Kafka)
Location: Windsor
Style: Hybrid
Rate: up to £500 per day (inside IR35)
Duration: 6 months (initially, with a view to extend)
Key responsibilities:
Design, implement, and manage Kafka-based data pipelines and messaging solutions to support critical business operations and enable real-time data processing.
Configure, deploy, and maintain Kafka clusters, ensuring high availability and scalability to maximize uptime and support business growth.
Monitor Kafka performance and troubleshoot issues to minimize downtime and ensure uninterrupted data flow, enhancing decision-making and operational efficiency.
Collaborate with development teams to integrate Kafka into applications and services.
Develop and maintain Kafka connectors (e.g. JDBC, MongoDB, and S3 connectors), along with topics and schemas, to streamline data ingestion from databases, NoSQL data stores, and cloud storage, enabling faster data insights.
Implement security measures to protect Kafka clusters and data streams, safeguarding sensitive information and maintaining regulatory compliance.
Key Skills:
Design, build, and maintain reliable, scalable data pipelines; experience with data integration, data security, and compliance
Monitor and manage the performance of data systems and troubleshoot issues.
Strong knowledge of data engineering tools and technologies (e.g. SQL, ETL, data warehousing)
Experience with tools such as Azure ADF, Apache Kafka, and Apache Spark SQL
Proficiency in programming languages such as Python and PySpark
Good written and verbal communication skills
Experience managing business stakeholders to clarify requirements
ROLE 2:
Role: Hadoop Big Data Developer
Location: Windsor
Style: Hybrid
Rate: up to £400 per day (inside IR35)
Duration: 6 months (initially, with a view to extend)
Key responsibilities:
Work closely with the development team to assess existing Big Data infrastructure
Design and code Hadoop applications to analyze large data sets
Create data processing frameworks
Extract and isolate data clusters
Test scripts to analyze results and troubleshoot bugs
Create data tracking programs and documentation
Maintain security and data privacy
Key Skills:
Build, schedule, and maintain data pipelines. Strong expertise in PySpark, Spark SQL, Hive, Python, and Kafka.
Strong experience in data collection and integration, scheduling, data storage and management, and ETL (Extract, Transform, Load) processes
Knowledge of relational and non-relational databases (e.g., MySQL, PostgreSQL, MongoDB).
Good written and verbal communication skills
Experience managing business stakeholders to clarify requirements
If you are interested and have the relevant experience, please apply promptly and we will contact you to discuss it further.
Yilmaz Moore
Senior Delivery Consultant
London | Bristol | Amsterdam