

Data Engineer
Here are the job details-
Role – Kafka Engineer / Data Engineer
Location – Leeds, UK
Mode of Work – Hybrid (3 days per week in the office)
Job type – Contract (Inside IR35)
Job Description:
A Kafka Real-Time Architect is responsible for designing and implementing scalable, real-time data processing systems on Kafka. The role involves architecting Kafka clusters, ensuring high availability, and integrating with other data processing tools and platforms.
As part of the CTO Data Ingestion Service, the incumbent will be responsible for:
• Designing and architecting scalable, real-time systems in Kafka.
• Configuring, deploying, and maintaining Kafka clusters to ensure high availability and scalability.
• Integrating Kafka with other data processing tools and platforms such as Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink and Beam.
• Collaborating with cross-functional teams to understand data requirements and design solutions that meet business needs.
• Implementing security measures to protect Kafka clusters and data streams.
• Monitoring Kafka performance and troubleshooting issues to ensure optimal performance.
• Providing technical guidance and support to development and operations teams.
• Staying updated with the latest Kafka features, updates and industry practices.
Required Skills & Experience
• Extensive experience with Apache Kafka and real-time architecture, including event-driven frameworks.
• Strong knowledge of Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink and Beam.
• Experience with cloud platforms such as GCP Pub/Sub.
• Excellent problem-solving skills.
Knowledge & Experience / Qualifications:
• Knowledge of Kafka data pipelines and messaging solutions to support critical business operations and enable real-time data processing.
• Monitoring Kafka performance to enhance decision-making and operational efficiency.
• Collaborating with development teams to integrate Kafka applications and services.
• Maintaining an architectural library of Kafka deployment models and patterns.
• Helping developers maintain Kafka connectors such as the JDBC, MongoDB and S3 connectors, along with topic schemas, to streamline data ingestion from databases, NoSQL data stores and cloud storage, enabling faster data insights.
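For illustration, the connector work described above is typically driven by JSON configuration submitted to the Kafka Connect REST API. The sketch below shows a minimal JDBC source connector config; the connector name, connection URL, table, and topic prefix are placeholder assumptions, not details from this role:

```json
{
  "name": "example-jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db-host:5432/example_db",
    "connection.user": "example_user",
    "connection.password": "********",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "table.whitelist": "orders",
    "topic.prefix": "jdbc-"
  }
}
```

In incrementing mode the connector polls the `orders` table for rows with an `id` greater than the last one seen and publishes them to the `jdbc-orders` topic, which is one common way such pipelines feed downstream real-time processing.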
Thanks & Regards,