
Azure Data Engineer with Databricks Expertise

This role is for an Azure Data Engineer with Databricks expertise, offering a 3-year contract in Iselin, NJ. Key skills include Databricks, Azure Cloud Services, and PySpark. An Azure Data Engineer Associate certification is optional. In-person interviews required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 11, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Jersey City, NJ
🧠 - Skills detailed
#Datasets #Databases #GitLab #Spark (Apache Spark) #Data Security #Security #Data Engineering #Azure cloud #BI (Business Intelligence) #ETL (Extract, Transform, Load) #Storage #Azure Blob Storage #Databricks #Data Pipeline #PySpark #Version Control #Azure #Data Processing #Cloud #SQL (Structured Query Language) #Compliance #Deployment #Big Data #Delta Lake #ADLS (Azure Data Lake Storage) #Scala #Triggers
Role description

Dice is the leading career destination for tech experts at every stage of their careers. Our client, Thronus Group LLC, is seeking the following. Apply via Dice today!

Title: Azure Data Engineer with Databricks Expertise.

Location: Iselin, NJ (hybrid; onsite 3 days per week).

Duration: 3-year contract position.

In-person (face-to-face) interview required.

Certifications:
• Azure Data Engineer Associate or Databricks Certified Data Engineer Associate certification (optional).

We are seeking a highly skilled Azure Data Engineer with strong expertise in Databricks to join our data team. The ideal candidate will design, implement, and optimize large-scale data pipelines, ensuring scalability, reliability, and performance. This role involves working closely with multiple teams and business stakeholders to deliver cutting-edge data solutions.

Key Responsibilities:
• Data Pipeline Development:
  • Build and maintain scalable ETL/ELT pipelines using Databricks.
  • Leverage PySpark/Spark and SQL to transform and process large datasets.
  • Integrate data from multiple sources, including Azure Blob Storage, ADLS, and other relational/non-relational systems.
• Collaboration & Analysis:
  • Work closely with multiple teams to prepare data for dashboards and BI tools.
  • Collaborate with cross-functional teams to understand business requirements and deliver tailored data solutions.
• Performance & Optimization:
  • Optimize Databricks workloads for cost efficiency and performance.
  • Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
• Governance & Security:
  • Implement and manage data security, access controls, and governance standards using Unity Catalog.
  • Ensure compliance with organizational and regulatory data policies.
• Deployment:
  • Leverage Databricks Asset Bundles for seamless deployment of Databricks jobs, notebooks, and configurations across environments.
  • Manage version control for Databricks artifacts and collaborate with the team to maintain development best practices.

Technical Skills:
• Strong expertise in Databricks (Delta Lake, Unity Catalog, lakehouse architecture, table triggers, Delta Live Tables pipelines, Databricks Runtime, etc.).
• Proficiency in Azure Cloud Services.
• Solid understanding of Spark and PySpark for big data processing.
• Experience with relational databases.
• Knowledge of Databricks Asset Bundles and GitLab.
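For reference, the Databricks Asset Bundles item above centers on a `databricks.yml` file that declares jobs and deployment targets. A minimal sketch follows; the bundle name, job, notebook path, and workspace host are all hypothetical:

```yaml
# Hypothetical minimal Databricks Asset Bundle definition.
bundle:
  name: etl-pipelines

resources:
  jobs:
    daily_purchases_job:
      name: daily-purchases
      tasks:
        - task_key: transform
          notebook_task:
            notebook_path: ./notebooks/daily_purchases.py

targets:
  dev:
    mode: development
    workspace:
      host: https://adb-1234567890.0.azuredatabricks.net  # hypothetical workspace URL
  prod:
    mode: production
```

A bundle like this is deployed per target with `databricks bundle deploy -t dev`, which pairs naturally with a GitLab CI pipeline as the deployment step.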

Preferred Experience:
• Familiarity with Databricks Runtimes and advanced configurations.
• Knowledge of streaming frameworks like Spark Streaming.
• Experience in developing real-time data solutions.