Databricks Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Engineer, 100% remote, on a 12-month contract paying $106k-$115k. It requires 7+ years in data engineering and expertise in Delta Lake, Databricks SQL, Python, SQL, and CI/CD tools. A Bachelor's degree is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 16, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#MLflow #DevOps #Cloud #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #Data Engineering #Marketo #Azure cloud #Data Extraction #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #AWS (Amazon Web Services) #Data Pipeline #BitBucket #API (Application Programming Interface) #Agile #Vault #Athena #SSIS (SQL Server Integration Services) #Data Management #Delta Lake #Security #Azure #SQL Server #Workday #Data Vault #Computer Science #Databricks #Data Ingestion #CLI (Command-Line Interface) #GitHub #Python
Role description

Title: Databricks Engineer

Location: 100% remote

Duration: 12 months

Must have: Delta Lake, Databricks SQL, MLflow, data warehousing, SSIS, SQL Server, and strong API experience; Databricks on AWS/Azure cloud infrastructure and functions; experience with CI/CD on Databricks using tools such as BitBucket, GitHub Actions, and the Databricks CLI; strong proficiency in Python and SQL; and excellent oral and written communication skills.

Requirements and Responsibilities:

   • 7+ years of hands-on experience in data engineering/ETL using Databricks on AWS/Azure cloud infrastructure and functions.

   • 5+ years of PBI and data warehousing experience, performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

   • Experience with AWS services (e.g., S3, Athena, Glue, Lambda) preferred.

   • Deep understanding of data warehousing concepts (dimensional/star schema, SCD2, Data Vault, denormalized, OBT) and experience implementing highly performant data ingestion pipelines from multiple sources.

   • Strong proficiency in Python and SQL.

   • Deep understanding of Databricks platform features (Delta Lake, Databricks SQL, MLflow)

   • Experience with CI/CD on Databricks using tools such as BitBucket, GitHub Actions, and Databricks CLI

   • Integrating end-to-end Databricks pipelines to move data from source systems to target data repositories, ensuring data quality and consistency are maintained at all times.

   • Working within an Agile delivery/DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.

   • Experience with Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), MLflow.

   • Basic working knowledge of API- or stream-based data extraction processes, such as the Salesforce API and Bulk API.

   • Understanding of Data Management principles (quality, governance, security, privacy, life cycle management, cataloguing)

   • Excellent problem-solving and analytical skills

   • Able to work independently

   • Excellent oral and written communication skills

   • Nice to have: Databricks certifications and the AWS Solutions Architect certification.

   • Nice to have: experience building data pipelines from business applications such as Salesforce, Marketo, NetSuite, and Workday.

Required: Bachelor's degree in Computer Science or a related field

This role is open to W2 candidates or those seeking Corp-to-Corp employment. The salary range for this role is $106k-$115k; for Corp-to-Corp rates, please contact the recruiter. In addition to other benefits, Accion Labs offers a comprehensive benefits package, with Accion covering 65% of medical, dental, and vision premiums for employees, their spouses, and dependent children enrolled in the Accion-provided plans.