
Data Engineer (Databricks Preferred)

This role is for a Data Engineer (Databricks Preferred) with a 3-year contract in Edison, NJ, paying $80-100/hour. Requires 8+ years in Python/SQL, finance experience, and expertise in Snowflake, Databricks, and data pipeline development.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
800
🗓️ - Date discovered
February 15, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Edison, NJ
🧠 - Skills detailed
#Big Data #Python #Code Reviews #Data Warehouse #Data Governance #Databricks #Spark (Apache Spark) #Snowflake #Datasets #Data Processing #Synapse #Data Engineering #Migration #BigQuery #Scala #Cloud #PySpark #Data Modeling #Data Migration #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Redshift #Compliance #Security #ADF (Azure Data Factory) #Delta Lake #Data Pipeline
Role description

Onsite contract in Edison NJ area (4x per week)

Long term contract: 3 year commitment

Rate: $80-100/Hour USD

Previous experience working in Finance, ideally Capital Markets, required.

We’re looking for an experienced Data Engineer to join our client's team and drive the development of cutting-edge data solutions. If you have a passion for optimizing big data pipelines, working with cloud platforms, and delivering high-impact insights, this role is for you!

What You’ll Do:
• Design and build scalable data pipelines for seamless ingestion, transformation, and integration across diverse sources.
• Develop and optimize Spark applications in Databricks to process and analyze large datasets efficiently.
• Implement Delta Lake, data modeling, and cloud-based warehousing solutions.
• Write clean, efficient code in Python, SQL, and PySpark to support data processing needs.
• Work with structured, semi-structured, and unstructured data in event-driven and streaming environments.
• Ensure high performance and scalability of Databricks workloads, while troubleshooting and optimizing jobs.
• Enforce best practices in data governance, security, and compliance across cloud-based platforms.
• Conduct code reviews to ensure optimal execution and adherence to industry standards.
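To make the ingestion/transformation/load responsibilities above concrete, here is a minimal sketch in plain Python of the three pipeline stages. This is illustrative only: a real implementation for this role would use PySpark on Databricks, and all names and data here are hypothetical.

```python
# Toy ETL pipeline sketch: ingest -> transform -> load.
# A production version would run as a Spark job on Databricks;
# the record fields and function names below are illustrative assumptions.

import json


def ingest(raw_lines):
    """Parse raw JSON lines into records, skipping malformed rows."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # a real pipeline would quarantine/log bad rows
    return records


def transform(records):
    """Normalize fields and drop records missing a ticker."""
    out = []
    for r in records:
        if not r.get("ticker"):
            continue
        out.append({
            "ticker": r["ticker"].upper(),
            "price": float(r.get("price", 0.0)),
        })
    return out


def load(rows, warehouse):
    """Append transformed rows to an in-memory 'warehouse' table."""
    warehouse.setdefault("prices", []).extend(rows)
    return len(rows)


raw = ['{"ticker": "aapl", "price": "189.5"}', 'not json', '{"price": 1}']
wh = {}
loaded = load(transform(ingest(raw)), wh)
print(loaded, wh["prices"][0]["ticker"])  # → 1 AAPL
```

The same shape (parse, validate, normalize, append) carries over directly to PySpark DataFrames, where each stage becomes a distributed transformation instead of a Python loop.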

Must-Have Skills:
• Expertise in Snowflake & Databricks
• Advanced SQL & Python development
• Hands-on experience with ADF & PySpark
• Strong Data Warehouse architecture & modeling knowledge

Preferred Experience:
• 8+ years of Python and SQL coding experience
• ETL development using Databricks and PySpark
• Familiarity with cloud data warehouses (Synapse, BigQuery, Redshift, Snowflake)
• OLTP & OLAP, Dimensional Data Modeling expertise
• Experience leading cloud data migrations and architecting scalable solutions
• Cloud certifications are a plus!
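As a concrete illustration of the dimensional data modeling expertise listed above, here is a minimal star-schema sketch using Python's built-in sqlite3: one fact table joined to a dimension table, with an OLAP-style rollup. The tables, tickers, and figures are hypothetical, chosen to echo the capital-markets context of the role.

```python
# Minimal star schema: a fact table (trades) joined to a dimension (security),
# then aggregated the OLAP way. All table names and data are illustrative.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_security (security_id INTEGER PRIMARY KEY,
                           ticker TEXT, sector TEXT);
CREATE TABLE fact_trades  (trade_id INTEGER PRIMARY KEY,
                           security_id INTEGER, qty INTEGER, price REAL);
INSERT INTO dim_security VALUES (1, 'AAPL', 'Tech'), (2, 'JPM', 'Finance');
INSERT INTO fact_trades  VALUES (10, 1, 100, 190.0),
                                (11, 2, 50, 200.0),
                                (12, 1, 10, 191.0);
""")

# OLAP-style rollup: total notional traded per sector.
rows = con.execute("""
    SELECT d.sector, SUM(f.qty * f.price) AS notional
    FROM fact_trades f
    JOIN dim_security d USING (security_id)
    GROUP BY d.sector
    ORDER BY d.sector
""").fetchall()
print(rows)  # → [('Finance', 10000.0), ('Tech', 20910.0)]
```

In an OLTP system the same trades would be modeled for fast single-row writes; the fact/dimension split shown here is the OLAP-side design that makes aggregations like this cheap at warehouse scale.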