Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with an unspecified contract length and pay rate, working fully remotely in the UK. Key skills include Azure Databricks, Apache Spark, and SQL, with experience in data warehousing and live data streaming solutions required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date discovered
April 15, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Debugging #SAP #Qlik #Cloud #SAP Hana #Code Reviews #Azure #BI (Business Intelligence) #Scala #SQL (Structured Query Language) #Databricks #Apache Spark #Spark (Apache Spark) #Oracle #PySpark #Data Engineering #Azure Databricks
Role description

Insight Global is seeking multiple Data Engineers to join a prestigious energy client based in London. The successful candidates will be responsible for designing, implementing, and managing live data streaming pipelines for the client’s Energy Trading team.

Key Responsibilities:

   • Pipeline Management: Design, implement, and manage live data streaming pipelines using Azure Databricks to ensure seamless data flow and real-time processing (see the sketch after this list).

   • Process Evaluation: Assess and optimize on-premise to cloud data exchange processes for accuracy, efficiency, and scalability.

   • Code Review and Debugging: Conduct thorough code reviews and debugging sessions, providing guidance and mentorship to junior data engineers to ensure high-quality code and best practices.

   • Problem Solving: Develop innovative solutions to compute and cost challenges, leveraging advanced technologies and methodologies.

   • Remote Collaboration: Work fully remotely within the UK, maintaining effective communication and collaboration with US teams during overlapping working hours.
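For context on the kind of pipeline the first responsibility describes, here is a minimal PySpark Structured Streaming sketch that reads trade events from Kafka and appends them to a Delta table. The broker address, topic, schema, checkpoint path, and table name are all hypothetical illustrations, not details from the role; the Kafka connector is assumed to be available, as it is on Databricks runtimes.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("live-trades").getOrCreate()

# Hypothetical event schema; the actual trading payload is not specified in the posting.
schema = StructType([
    StructField("trade_id", StringType()),
    StructField("instrument", StringType()),
    StructField("price", DoubleType()),
    StructField("ts", TimestampType()),
])

# Read the live event stream and parse the JSON payload into columns.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "trades")                     # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append the parsed stream to a Delta table; checkpoint path and table name are placeholders.
(
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/trades")
    .outputMode("append")
    .toTable("bronze.trades")
)
```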

Must Haves:

  1. Deep expertise with Azure Databricks (DLT, data streaming, Unity Catalog, etc.).

  2. Proven experience designing high-volume, live data streaming solutions using Delta Live Tables (DLT) on Azure Databricks (see the sketch after this list).

  3. Expert in Apache Spark and PySpark (able to review code quality and debug issues).

  4. Experience with Qlik Replicate for moving data from on-premise systems to the cloud.

  5. Background in data warehousing (SAP HANA, SAP BI/BW, Oracle, etc.).

  6. Proficiency in SQL.
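As an illustration of must-haves 1 and 2, below is a minimal Delta Live Tables sketch in Python: a streaming bronze table ingested with Auto Loader and a cleaned silver table guarded by a data-quality expectation. The landing path, table names, and columns are hypothetical; the code is meant to run inside a DLT pipeline, where the `spark` session is provided by the runtime.

```python
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw trade events streamed in with Auto Loader (hypothetical landing path).")
def raw_trades():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/trades")
    )

@dlt.table(comment="Validated trades; rows failing the expectation are dropped.")
@dlt.expect_or_drop("positive_price", "price > 0")
def clean_trades():
    return dlt.read_stream("raw_trades").select(
        col("trade_id"), col("instrument"), col("price"), col("ts")
    )
```

In a Unity Catalog-enabled workspace, both tables would be registered under a catalog and schema configured on the pipeline, which is where the governance piece of must-have 1 comes in.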