
Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unspecified. Candidates must have expertise in Azure Synapse, Python, PySpark, SQL, and Azure Data Factory, along with experience in data lake and data warehouse architecture.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
March 28, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Dallas-Fort Worth Metroplex
🧠 - Skills detailed
#Data Warehouse #Data Engineering #Data Pipeline #REST API #PySpark #Apache Airflow #Visualization #Synapse #Scala #Compliance #Databricks #Data Science #Storage #Azure Blob Storage #Azure Synapse Analytics #Microsoft Power BI #Data Modeling #Spark SQL #Git #DevOps #Spark (Apache Spark) #SQL (Structured Query Language) #Data Governance #Security #Version Control #Cloud #REST (Representational State Transfer) #Data Processing #Delta Lake #Azure #Schema Design #Azure DevOps #Data Quality #ADF (Azure Data Factory) #Azure Data Factory #ETL (Extract, Transform, Load) #Data Lake #Airflow #BI (Business Intelligence) #Python
Role description

We are seeking a skilled Data Engineer to join our growing team to design, develop, and maintain scalable data pipelines and architectures. The ideal candidate will have hands-on experience with Azure Synapse, Python, PySpark, SQL, and Azure Data Factory, along with a deep understanding of cloud-based data engineering practices.

Key Responsibilities:

   • Design, build, and manage scalable and efficient data pipelines using Azure Synapse and Azure Data Factory

   • Develop and optimize ETL/ELT workflows using PySpark and SQL

   • Work with stakeholders to understand data requirements and deliver solutions that support data-driven decision-making

   • Collaborate with data scientists, analysts, and business users to ensure data quality, governance, and integrity

   • Implement data transformation, cleansing, and validation processes

   • Monitor and troubleshoot data workflows and optimize performance

   • Ensure secure data practices and compliance with company policies and regulations
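As a rough illustration of the transformation, cleansing, and validation work described above, the following is a minimal sketch in plain Python (the field names `id`, `amount`, and `region` are hypothetical; in practice this logic would typically run as a PySpark job or an Azure Data Factory data flow):

```python
# Sketch of a cleanse-and-validate step over a batch of records.
# Schema is assumed for illustration: each record has "id", "amount",
# and "region". A production pipeline would express this in PySpark/ADF.

def cleanse(record: dict) -> dict:
    """Trim whitespace and normalize the region code to upper case."""
    cleaned = dict(record)
    if isinstance(cleaned.get("region"), str):
        cleaned["region"] = cleaned["region"].strip().upper()
    return cleaned

def is_valid(record: dict) -> bool:
    """Reject records missing an id or carrying a non-positive amount."""
    return record.get("id") is not None and (record.get("amount") or 0) > 0

def transform(records: list[dict]) -> list[dict]:
    """Cleanse every record, then keep only the valid ones."""
    return [r for r in (cleanse(rec) for rec in records) if is_valid(r)]

raw = [
    {"id": 1, "amount": 25.0, "region": " us-east "},
    {"id": None, "amount": 10.0, "region": "eu"},   # dropped: missing id
    {"id": 3, "amount": -5.0, "region": "apac"},    # dropped: bad amount
]
clean = transform(raw)
```

The same cleanse/validate split maps naturally onto PySpark (`withColumn` for cleansing, `filter` for validation) when the data volume outgrows a single machine.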

Required Skills & Qualifications:

   • Strong proficiency in Azure Synapse Analytics and Azure Data Factory

   • Expertise in Python and PySpark for data processing and transformation

   • Advanced SQL skills for data querying, transformation, and performance tuning

   • Experience with data lake and data warehouse architecture

   • Familiarity with Delta Lake, Azure Blob Storage, and Azure Data Lake Gen2

   • Solid understanding of CI/CD pipelines and version control using Git

   • Experience with data modeling and schema design

   • Good grasp of data governance, security, and compliance standards

Preferred Qualifications:

   • Experience with Azure DevOps, Databricks, or other cloud-based data platforms

   • Knowledge of Apache Airflow or other orchestration tools

   • Familiarity with Power BI or other data visualization tools

   • Experience in handling large-scale, real-time data processing pipelines

   • Understanding of REST APIs and integrating external data sources
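The last point, integrating external data sources over REST, often comes down to flattening a JSON response into rows for downstream loading. A hedged sketch (the payload shape and field names are hypothetical; a real pipeline would fetch the payload with `urllib.request` or `requests` and land the rows in the data lake):

```python
import json

# Assumed response shape for illustration: {"results": [{"order_id": ..., "total": ...}, ...]}
SAMPLE_RESPONSE = json.dumps({
    "results": [
        {"order_id": "A-100", "total": 42.50},
        {"order_id": "A-101", "total": 13.75},
    ]
})

def parse_orders(payload: str) -> list[tuple[str, float]]:
    """Flatten the JSON body into (order_id, total) rows ready to load."""
    body = json.loads(payload)
    return [(item["order_id"], float(item["total"])) for item in body["results"]]

rows = parse_orders(SAMPLE_RESPONSE)
```

Keeping the parse step a pure function of the raw payload makes it easy to unit-test and to replay against archived responses when the upstream API changes.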