
Data Engineer

This role is for a Data Engineer in Pittsburgh for 12 months; the pay rate is not specified. Candidates should have 8-11 years of experience with Azure Databricks, MSSQL, and Python, focusing on data pipeline development and ETL processes.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 17, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Pittsburgh, PA
🧠 - Skills detailed
#Spark (Apache Spark) #PySpark #Data Lake #Scala #Data Warehouse #Data Quality #Azure #Data Engineering #Azure SQL #Azure SQL Data Warehouse #Data Pipeline #Data Governance #Data Management #Databricks #Oracle #Python #Data Modeling #Azure ADLS (Azure Data Lake Storage) #Data Processing #Spark SQL #SQL Queries #SQL (Structured Query Language) #Storage #ETL (Extract, Transform, Load) #ADLS (Azure Data Lake Storage) #Azure Databricks
Role description

Job Post: Data Engineer

Experience: 8 - 11 Years

Location: Pittsburgh

Duration: 12 Months

Notice Period: 0 - 30 Days

Job Purpose

Design, build, and maintain scalable data management systems using Azure Databricks, ensuring they meet end-user expectations. Oversee the upkeep of existing data infrastructure workflows and build data processing pipelines using Databricks Notebooks, Spark SQL, Python, and other tools.
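
As a rough illustration of the kind of pipeline step this role involves, here is a minimal PySpark sketch of a Databricks-style ingestion job. It is a sketch only: the ADLS path, column names, and table name are hypothetical, not taken from the posting, and it assumes a Databricks (or Delta-enabled Spark) environment.

```python
# Minimal sketch of a Databricks-style ingestion step.
# The ADLS path, columns, and table name below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Read raw CSV files landed in Azure Data Lake Storage (path is hypothetical).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Light transformation: normalize a column name and stamp the load time.
clean = (raw
         .withColumnRenamed("Order ID", "order_id")
         .withColumn("loaded_at", F.current_timestamp()))

# Persist as a Delta table for downstream consumers
# (assumes the "analytics" schema already exists).
clean.write.format("delta").mode("overwrite").saveAsTable("analytics.orders")
```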

Key Responsibilities
• Interpret business requirements and collaborate with internal resources and application vendors.
• Design, develop, and maintain Databricks solutions and data quality rules.
• Troubleshoot and resolve data-related issues.
• Configure and create data models and quality rules to meet customer needs.
• Work with multiple database platforms, including MSSQL and Oracle.
• Optimize PySpark/Python code and SQL queries to improve performance (see the tuning sketch after this list).
• Design and implement ETL pipelines and effective data models.
• Document data pipeline architecture and processes.
• Communicate effectively with business and technology stakeholders.
• Deliver results under tight deadlines while adhering to quality standards.
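
To make the optimization bullet above concrete, here is a small sketch of a common PySpark tuning pattern: filtering early and broadcasting a small dimension table so the large fact table is not shuffled. Table and column names are illustrative assumptions, not from the posting.

```python
# Illustrative join-tuning sketch; table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-optimization").getOrCreate()

orders = spark.table("analytics.orders")    # large fact table (assumed)
regions = spark.table("analytics.regions")  # small dimension table (assumed)

# Filter early so less data flows into the join, and broadcast the small
# dimension so Spark avoids shuffling the large side.
recent = orders.where(F.col("loaded_at") >= "2025-01-01")
joined = recent.join(broadcast(regions), on="region_id", how="left")

joined.groupBy("region_name").agg(F.sum("amount").alias("total")).show()
```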

Key Competencies
• Experience: 8+ years in data engineering with expertise in Azure Databricks, MSSQL, LakeFlow, and Python.
• Proficient in creating and optimizing data pipelines using Databricks Notebooks, Spark SQL, and PySpark.
• Knowledge of Azure services like Azure Data Lake Storage and Azure SQL Data Warehouse.
• Expertise in data warehousing, ETL pipeline development, and data governance.
• Hands-on experience with data quality rules using Databricks and platforms like IDQ (see the sketch after this list).
• Strong problem-solving, analytical, and organizational skills.
• Ability to work independently and collaboratively in cross-functional teams.
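
For the data-quality competency above, here is a minimal sketch of rule-style checks written in plain PySpark. It is not the posting's actual IDQ or Databricks rule definitions; the rules, table, and column names are hypothetical.

```python
# Sketch of simple data quality rules in plain PySpark.
# Rules, table, and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("analytics.orders")  # hypothetical source table

# Rule 1: order_id must never be null.
null_ids = df.where(F.col("order_id").isNull()).count()

# Rule 2: amount must be non-negative.
bad_amounts = df.where(F.col("amount") < 0).count()

# Quarantine failing rows instead of failing the whole load.
valid = df.where(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
valid.write.format("delta").mode("overwrite") \
     .saveAsTable("analytics.orders_validated")

print(f"null ids: {null_ids}, negative amounts: {bad_amounts}")
```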

Skills & Requirements
• Technical Skills: Azure Databricks, PySpark, SQL (MSSQL, Spark SQL), Azure Data Lake Storage, ETL, Data Modeling and Governance, Data Warehousing, Python.
• Soft Skills: Strong communication, problem-solving, and attention to detail.