

ADF/Databricks Lead
Chicago, US
Job Type: Contract
Work Location: Remote
Salary: $60/Hour
Description:
We are seeking a highly skilled ADF/Databricks Lead to design, develop, and optimize data pipelines using Azure Data Factory (ADF) and Databricks. The ideal candidate will have deep expertise in data engineering, cloud-based data processing, and ETL workflows to support business intelligence and analytics initiatives.
Responsibilities:
Lead the design, development, and implementation of scalable data pipelines using Azure Data Factory (ADF) and Databricks.
Architect and optimize ETL/ELT workflows, ensuring efficient data ingestion, transformation, and storage.
Work closely with data analysts, data scientists, and business stakeholders to understand data needs and deliver high-quality solutions.
Develop PySpark-based data transformations and integrate structured and unstructured data from various sources.
Optimize data pipelines for performance, scalability, and cost-efficiency within the Azure ecosystem.
Ensure data governance, security, and compliance best practices are followed.
Monitor, troubleshoot, and resolve performance bottlenecks in ADF and Databricks workloads.
Collaborate with cloud architects and DevOps teams to automate deployments and manage infrastructure as code.
Provide technical leadership, mentorship, and best practices for junior data engineers.
Stay up to date with Azure and Databricks advancements to continuously improve data engineering capabilities.
Qualifications & Skills:
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
6+ years of experience in data engineering with at least 3 years of hands-on experience in ADF and Databricks.
Strong expertise in the Azure ecosystem, including Azure Synapse, Azure SQL, ADLS, and Azure Functions.
Proficiency in PySpark, Python, SQL, and Scala for data processing in Databricks.
Experience with CI/CD pipelines for data engineering solutions (Azure DevOps, Git).
Deep understanding of data warehousing, data modeling (Kimball/Inmon), and big data processing.
Strong knowledge of performance tuning, partitioning, caching, and cost optimization in Databricks.
Hands-on experience with Delta Lake, Lakehouse architecture, and data versioning.
Familiarity with data security, access control, and compliance frameworks.
Excellent problem-solving skills and ability to work independently in a fast-paced environment.
Required Skill Set
Databricks