Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer, remote for 6 months, with a pay rate of £450 - £500 per day. Key skills required include Python, AWS, PySpark or Scala, SQL, NoSQL, and container experience. Consulting experience is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
£450 - £500
🗓️ - Date discovered
April 17, 2025
🕒 - Project duration
6 months
🏝️ - Location type
Remote
📄 - Contract type
Outside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
Greater London, England, United Kingdom
🧠 - Skills detailed
#Python #Data Lake #Distributed Computing #PySpark #Data Lakehouse #Version Control #SQL (Structured Query Language) #NoSQL #Scrum #Data Storage #Kubernetes #Git #Consulting #Spark (Apache Spark) #AWS (Amazon Web Services) #Docker #Storage #ACID (Atomicity, Consistency, Isolation, Durability) #Data Engineering #Cloud #Batch #Databases #Scala #API (Application Programming Interface) #Agile
Role description

Data Engineer – HIRING ASAP

Start date: ASAP

Duration: 6 Months

Location: Remote

Rate: £450 - £500 per day, Outside IR35

Summary

We are currently working with a new-generation consultancy, based across the UK and EU, founded on the principles of engineering excellence and empowering people to make an impact. All their consultants have equity in the company, genuinely love what they do, and are very good at it.

They work with all modern tech stacks and typically run agile Scrum on all their projects.

Responsibilities

   • We expect you to work closely with the business to understand its current data problems and to know how to analyse and cleanse its data. We also expect you to design data storage solutions and to communicate well with your team.

Key Skills

   • Python at a software-engineering level, including unit and integration testing experience.

   • Distributed computing knowledge with PySpark or Scala, including the ability to debug jobs in the Spark UI and optimise them accordingly.

   • AWS experience

   • Good understanding of data modelling, change data capture, and/or ACID-compliant table structures.

   • Good understanding of data lakes and data lakehouses, and a very good understanding of traditional data platforms.

   • Experience with different aspects of ingestion: API calls, batch, and/or streaming, by pulling and/or pushing data.

   • Good understanding of SQL and NoSQL databases

   • Git version control and CI/CD experience.

   • Experience with at least one cloud provider, including a solid, extensive data platform build (AWS preferred).

   • Container experience with either Docker or Kubernetes.

   • Consulting experience – working on projects with other consultancies and multiple stakeholders.