Data Engineer 4 #: 25-09369

This role is a "Data Engineer 4" position on a 2-year remote W2 contract; the pay rate is not disclosed. Key skills include 5+ years in data pipelines, Spark, SQL, Big Data environments, and Python. Experience with AWS and data lake solutions is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 9, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Salem, OR
🧠 - Skills detailed
#SQL (Structured Query Language) #Scripting #Snowflake #AWS EMR (Amazon Elastic MapReduce) #Stories #Spark (Apache Spark) #Data Engineering #Scrum #Data Lake #Big Data #Datasets #Databricks #Batch #Data Science #Data Analysis #Scala #Python #AWS (Amazon Web Services) #Cloud #GitHub #Documentation #Airflow #Agile #Data Pipeline #Kafka (Apache Kafka)
Role description

Job Title: Data Engineer 4

Job Location: Remote

Job Duration: 2 years on W2

Job Description

Responsibilities

The Senior Data Engineer will collaborate with product owners, developers, database architects, data analysts, visual developers, and data scientists on data initiatives, and will ensure that data delivery and architecture remain optimal and consistent throughout ongoing projects.

Must be self-directed and comfortable supporting the data needs of the product roadmap.

The right candidate will be excited by the prospect of optimizing and building integrated and aggregated data objects to architect and support our next generation of products and data initiatives.

Create and maintain optimal data pipeline architecture, and assemble large, complex data sets that meet functional and non-functional business requirements (a sketch of such a pipeline follows below).
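
For illustration only, here is a minimal PySpark sketch of the kind of batch pipeline and data-set assembly described above. The bucket paths, column names, and aggregation are hypothetical placeholders, not details from this posting.

# Hypothetical batch pipeline: read raw order events, aggregate them into a
# curated daily summary, and write the result out. All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

orders = spark.read.parquet("s3://raw-bucket/orders/")  # hypothetical source

daily_totals = (
    orders.groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

daily_totals.write.mode("overwrite").parquet("s3://curated-bucket/daily_totals/")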

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing for greater scalability. Provide comprehensive documentation and knowledge transfer to Production Support, and work with Production Support to analyze and fix production issues.

Participate in an Agile/Scrum methodology to deliver high-quality software releases every 2 weeks; refine and plan stories in sprint ceremonies and deliver them on time.

Analyze requirement documents and source-to-target mappings (a sketch of applying such a mapping in code follows below).
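
As a hedged illustration of turning a source-to-target mapping into pipeline code, the sketch below renames raw source columns to their target names. The mapping, paths, and column names are hypothetical.

# Hypothetical source-to-target mapping applied in PySpark: rename source
# columns to the target names a mapping spec defines. All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stm-demo").getOrCreate()

raw = spark.read.parquet("s3://raw-bucket/orders/")  # hypothetical source

source_to_target = {  # in practice, driven by the mapping document
    "cust_id": "customer_id",
    "ord_dt": "order_date",
    "amt": "amount",
}

mapped = raw.select(*[F.col(src).alias(tgt) for src, tgt in source_to_target.items()])
mapped.write.mode("overwrite").parquet("s3://curated-bucket/orders_mapped/")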

Must Have Skills

5+ years of experience designing, developing and supporting complex data pipelines.

5+ years of Spark experience in batch and streaming mode

5+ years of advanced SQL experience for analyzing and interacting with data

5+ years of experience in Big Data stack environments such as Databricks, AWS EMR, etc.

3+ years of experience in scripting using Python

3+ years of experience working in cloud environments such as AWS.

Strong understanding of solution and technical design.

Experience building scalable, high-performance cloud data lake solutions

Experience with relational SQL & tools like Snowflake

Awareness of data warehouse concepts

Performance tuning with large datasets

Experience with source control tools such as GitHub and related dev processes

Experience with workflow scheduling tools like Airflow or Databricks Workflows (see the sketch after this list)

Strong problem-solving and analytical mindset

Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
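
For the workflow scheduling item above, here is a minimal Airflow sketch. The DAG id, schedule, and job script path are hypothetical, and the SparkSubmitOperator comes from the apache-airflow-providers-apache-spark package.

# Hypothetical Airflow DAG submitting a nightly Spark job. DAG id, schedule,
# and application path are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_orders_pipeline",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    transform_orders = SparkSubmitOperator(
        task_id="transform_orders",
        application="/opt/jobs/transform_orders.py",  # hypothetical job script
        conn_id="spark_default",
    )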

Good to Have Skills

Experience in building streaming solutions using Spark Structured Streaming and Kafka (see the sketch after this list).

Experience with and knowledge of Databricks.

Experience in semantic modelling and cube solutions like AAS or AtScale.
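
As a hedged sketch of the Spark Structured Streaming and Kafka item above: the broker, topic, and sink paths are hypothetical, and the job needs the spark-sql-kafka connector package on the Spark classpath.

# Hypothetical streaming job: read events from Kafka and persist them as
# Parquet. Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka delivers the payload as bytes; cast to string before downstream parsing.
parsed = events.select(col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/tmp/events")                    # hypothetical sink
    .option("checkpointLocation", "/tmp/events_chk")  # required for recovery
    .start()
)
query.awaitTermination()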