
ADF/Databricks Lead

This role is for an ADF/Databricks Lead on a remote contract for 6+ months at $60/hour. Requires 6+ years in data engineering, 3+ years with ADF/Databricks, expertise in Azure, PySpark, SQL, and strong data governance skills.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$480
🗓️ - Date discovered
February 10, 2025
🕒 - Project duration
6+ months
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Chicago, IL
🧠 - Skills detailed
#ADF (Azure Data Factory) #Data Ingestion #Data Modeling #Data Processing #BI (Business Intelligence) #ETL (Extract, Transform, Load) #Python #Leadership #Data Pipeline #Infrastructure as Code (IaC) #DevOps #Scala #Azure #Data Analysis #Azure SQL #Data Science #Databricks #PySpark #Data Security #Data Engineering #Big Data #Azure DevOps #SQL (Structured Query Language) #Data Governance #ADLS (Azure Data Lake Storage) #Data Transformations #GIT #Compliance #Security #Cloud #Delta Lake #Spark (Apache Spark) #Azure Data Factory #Synapse #Computer Science #Deployment #Storage
Role description

Chicago, US

Job Type: Contract

Work location: Remote

Salary: $60/hour

Description:
We are seeking a highly skilled ADF/Databricks Lead to design, develop, and optimize data pipelines using Azure Data Factory (ADF) and Databricks. The ideal candidate will have deep expertise in data engineering, cloud-based data processing, and ETL workflows to support business intelligence and analytics initiatives.
Responsibilities:

Lead the design, development, and implementation of scalable data pipelines using Azure Data Factory (ADF) and Databricks.
Architect and optimize ETL/ELT workflows, ensuring efficient data ingestion, transformation, and storage.
Work closely with data analysts, data scientists, and business stakeholders to understand data needs and deliver high-quality solutions.
Develop PySpark-based data transformations and integrate structured and unstructured data from various sources (see the sketch after this list).
Optimize data pipelines for performance, scalability, and cost-efficiency within the Azure ecosystem.
Ensure data governance, security, and compliance best practices are followed.
Monitor, troubleshoot, and resolve performance bottlenecks in ADF and Databricks workloads.
Collaborate with cloud architects and DevOps teams to automate deployments and manage infrastructure as code.
Provide technical leadership, mentorship, and best practices for junior data engineers.
Stay up to date with Azure and Databricks advancements to continuously improve data engineering capabilities.
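To make the pipeline responsibilities concrete, here is a minimal sketch of the kind of PySpark transformation this role involves: a Databricks job (typically triggered from an ADF pipeline) that reads raw CSV from ADLS, cleans it, and writes a partitioned Delta table. All paths, container names, and column names are illustrative assumptions, not details from this posting.

```python
# Minimal sketch, assuming a Databricks runtime (Delta support built in).
# Paths and column names ("amount", "order_ts") are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adf-triggered-transform").getOrCreate()

# Raw-zone input; in practice this would arrive as an ADF pipeline parameter
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/"

df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

# Example transformation: normalize types, derive a partition column, drop bad rows
cleaned = (
    df.withColumn("amount", F.col("amount").cast("double"))
      .withColumn("order_date", F.to_date("order_ts"))
      .filter(F.col("amount").isNotNull())
)

# Curated-zone output as a partitioned Delta table
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/sales/")
)
```

In an ADF-orchestrated setup, a Databricks activity would run this as a notebook or job, passing the input path and run date as parameters.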

Qualifications & Skills:

Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
6+ years of experience in data engineering with at least 3 years of hands-on experience in ADF and Databricks.
Strong expertise in the Azure ecosystem, including Azure Synapse, Azure SQL, ADLS, and Azure Functions.
Proficiency in PySpark, Python, SQL, and Scala for data processing in Databricks.
Experience with CI/CD pipelines for data engineering solutions (Azure DevOps, Git).
Deep understanding of data warehousing, data modeling (Kimball/Inmon), and big data processing.
Strong knowledge of performance tuning, partitioning, caching, and cost optimization in Databricks.
Hands-on experience with Delta Lake, Lakehouse architecture, and data versioning (see the sketch after this list).
Familiarity with data security, access control, and compliance frameworks.
Excellent problem-solving skills and ability to work independently in a fast-paced environment.
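Where Delta Lake and data versioning come up above, day-to-day work often looks like the following minimal sketch: a MERGE-based upsert into an existing Delta table, plus a time-travel read of an earlier version. The table paths and the order_id join key are assumptions for illustration.

```python
# Minimal sketch, assuming a Databricks runtime with Delta Lake available.
# Table paths and the "order_id" key are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert-demo").getOrCreate()

table_path = "abfss://curated@examplelake.dfs.core.windows.net/sales/"

# Upsert (MERGE) a batch of updates into the existing Delta table
target = DeltaTable.forPath(spark, table_path)
updates = spark.read.format("delta").load(
    "abfss://staging@examplelake.dfs.core.windows.net/sales_updates/"
)
(
    target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Data versioning: read the table as it existed at an earlier version
v0 = spark.read.format("delta").option("versionAsOf", 0).load(table_path)
```

MERGE is also where the partitioning and performance-tuning skills listed above apply: constraining the merge condition to the affected partitions limits how many files Delta must rewrite.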

Required Skill Set

Databricks