Sr. Azure Data Engineer

This role is for a Sr. Azure Data Engineer on a long-term, 100% remote contract based out of San Francisco. It requires strong SQL, Azure, and PySpark skills plus experience with Databricks or Synapse. US Citizens/GC holders only.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 15, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
California, United States
🧠 - Skills detailed
#Big Data #ADLS (Azure Data Lake Storage) #Python #Azure Databricks #Databricks #Spark (Apache Spark) #Data Processing #Data Lake #Data Engineering #Synapse #Scala #Azure Data Factory #Azure Blob Storage #Storage #PySpark #Programming #Azure SQL #Azure #Azure Cosmos DB #Azure ADLS (Azure Data Lake Storage) #Data Management #Azure SQL Database #ETL (Extract, Transform, Load) #Data Architecture #Data Integration #SQL (Structured Query Language) #Databases #ADF (Azure Data Factory) #Database Management #Data Pipeline
Role description

Role: Sr. Azure Data Engineer

Work location: 100% Remote - San Francisco

Duration: Long Term Contract

Visa Restriction: US Citizens/GC

Skills Needed:

Strong in SQL, Azure, and PySpark; Databricks or Synapse is essential. Good to have: MS Fabric experience.

Job Description:
• We are seeking a highly experienced Senior Azure Data Engineer with over 12 years of expertise in data warehousing.
• The ideal candidate will have extensive experience with the Azure platform, developing Azure Data Factory (ADF) pipelines, and working with PySpark.
• Knowledge of Databricks or Synapse is essential. MS Fabric experience is good to have.

Responsibilities

Design and Build Data Pipelines: Develop and manage modern data pipelines and data streams using Azure Data Factory.

Database Management: Develop and maintain databases, data systems, and processing systems.

Data Transformation: Transform complex raw data into actionable business insights using PySpark (a sketch of this pattern follows this list).

Technical Support: Collaborate with stakeholders and teams to assist with data-related technical issues.

Data Architecture: Ensure data architecture supports business requirements and scalability.

Process Improvements: Identify, design, and implement process improvements, such as automating manual processes and optimizing data delivery.

Big Data Solutions: Utilize Databricks or Synapse for big data processing and analytics.
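
To make the Data Transformation responsibility above concrete, here is a minimal, hypothetical PySpark sketch of turning raw data into an aggregated business view. It assumes a Spark session on Databricks or Synapse with ADLS Gen2 access already configured; the storage account, container, and column names are invented for illustration, not taken from the posting.

```python
# Hypothetical sketch only: a minimal PySpark transformation of the kind this
# role describes. Storage account, container, and column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue-rollup").getOrCreate()

# Read raw order events from Azure Data Lake Storage Gen2 (abfss:// path).
raw = spark.read.parquet(
    "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
)

# Keep completed orders and roll revenue up by day and region.
daily_revenue = (
    raw.filter(F.col("status") == "completed")
       .withColumn("order_date", F.to_date("order_timestamp"))
       .groupBy("order_date", "region")
       .agg(
           F.sum("amount").alias("revenue"),
           F.countDistinct("order_id").alias("orders"),
       )
)

# Write the curated result back to the lake, partitioned for downstream use.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@examplestorage.dfs.core.windows.net/daily_revenue/")
)
```

The same pattern (filter, derive, aggregate, write partitioned output) is what an ADF pipeline would typically orchestrate on a schedule.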

Skills and Qualifications

Data Management and Storage: Proficiency with Azure SQL Database, Azure Data Lake Storage, Azure Cosmos DB, Azure Blob Storage, etc. (a connectivity sketch follows this list).

Data Integration and ETL: Extensive experience with Azure Data Factory for data integration and ETL processes.

Big Data and Analytics: Knowledge of big data technologies like Azure Databricks and Synapse.

Programming Languages: Proficiency in SQL, Python, and PySpark.

Analytical Skills: Strong analytical and problem-solving skills.

Communication: Excellent communication and teamwork skills.
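
As a companion to the Data Management and Storage bullet, here is a hypothetical continuation of the sketch above that lands the curated DataFrame in Azure SQL Database through Spark's built-in JDBC writer. Server, database, table, and credential values are placeholders, not from the posting.

```python
# Hypothetical sketch only, continuing from the daily_revenue DataFrame above:
# load the curated result into Azure SQL Database via Spark's JDBC writer.
jdbc_url = (
    "jdbc:sqlserver://example-server.database.windows.net:1433;"
    "database=analytics;encrypt=true"
)

(
    daily_revenue.write
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.daily_revenue")
    .option("user", "etl_user")              # placeholder; use a secret store
    .option("password", "<from-key-vault>")  # never hard-code real credentials
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .mode("append")
    .save()
)
```

On Databricks, the credentials would normally come from a secret scope backed by Azure Key Vault rather than string literals.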