
Databricks Engineer

This role is for a Databricks Engineer with 6+ years in Data Engineering and 4+ years in Python. Contract length and pay rate are unspecified (the posting states "NO CTC W2 ONLY"), and work is onsite in Beaverton, Oregon, three days a week.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 14, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Beaverton, OR
🧠 - Skills detailed
#ADF (Azure Data Factory) #Databases #Databricks #SQL (Structured Query Language) #Data Warehouse #AWS (Amazon Web Services) #PySpark #Data Processing #ETL (Extract, Transform, Load) #Delta Lake #Talend #AWS Glue #Airflow #Azure DevOps #MySQL #Apache Airflow #Snowflake #Python #Jira #DevOps #RDBMS (Relational Database Management System) #Triggers #Apache Spark #Compliance #Data Engineering #GIT #Schema Design #Computer Science #Azure Data Factory #Cloud #Data Migration #Spark (Apache Spark) #Agile #Migration #Collibra #JSON (JavaScript Object Notation) #Pandas #Libraries #Azure #GitLab #NoSQL #NumPy #MongoDB #Redis #DynamoDB #Alteryx #Jenkins
Role description

NO CTC W2 ONLY!

MUST HAVE DATABRICKS MIGRATION EXPERIENCE!

ONSITE IN BEAVERTON, OREGON 3 DAYS A WEEK!

Requirements:
• Bachelor’s degree or equivalent education, experience, and training in Computer Science.
• 6+ years of experience in Data Engineering.
• 4+ years working with Python for data processing, with proficiency in libraries such as pandas, NumPy, PySpark, pyodbc, pymssql, Requests, boto3, simple-salesforce, and the standard-library json module (a minimal sketch follows this list).
• 3+ years of experience with Data Warehouse technologies, including Databricks and Snowflake.
• Strong fundamentals in Data Engineering (ETL, modeling, lineage, governance, partitioning & optimization, migration).
• Expertise in Databricks, including Apache Spark, Databricks SQL, Delta Lake, Delta Sharing, Notebooks, Workflows, RBAC, Unity Catalog, and encryption & compliance.
• Proficient in SQL, with a focus on performance, stored procedures, triggers, and schema design; experience with both RDBMS (MSSQL/MySQL) and NoSQL databases (DynamoDB/MongoDB/Redis).
• Cloud platform expertise in AWS and/or Azure.
• Experience with ETL tools such as Apache Airflow, AWS Glue, Azure Data Factory, Talend, or Alteryx.
• Strong coding and architectural design pattern knowledge.
• Passion for troubleshooting, investigation, and root-cause analysis.
• Excellent written and verbal communication skills.
• Ability to multitask in a high-energy environment.
• Familiar with Agile methodologies and tools such as Git, Jenkins, GitLab, Azure DevOps, Jira, and Confluence.
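The Python bullet above mixes single-node and distributed tooling. Here is a minimal sketch of that style of pipeline, assuming a Spark session with the delta-spark package available; the input path, schema, and output location are hypothetical.

```python
# Minimal sketch only: pandas for a small lookup, PySpark for the heavy
# lifting, Delta Lake for the partitioned output. All names hypothetical.
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("orders-etl")
    # Delta Lake extensions, assuming delta-spark is installed.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Small reference data prepared with pandas, promoted to a Spark DataFrame.
regions = spark.createDataFrame(
    pd.DataFrame({"region_id": [1, 2], "region": ["NA", "EMEA"]})
)

orders = (
    spark.read.json("orders.json")                 # hypothetical input
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .join(regions, on="region_id", how="left")
)

# Partitioned Delta write, per the Delta Lake and partitioning requirements.
(orders.write.format("delta")
    .mode("overwrite")
    .partitionBy("region")
    .save("/tmp/delta/orders"))                    # hypothetical output
```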

Preferred Skills:
• Experience with tools like Collibra and Hackolade.
• Familiarity with data migration strategies and tooling.
• Experience with data migration tools or custom-built solutions for moving data from Snowflake to Databricks.
• Experience with post-migration testing and validation, including strategies like checksums, row counts, and query performance benchmarks (see the sketch below).
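
Per the validation bullet above, a minimal sketch of row-count and checksum comparison, assuming the Snowflake source and the Databricks target have already been loaded as Spark DataFrames (connector configuration omitted); function names are hypothetical.

```python
from pyspark.sql import functions as F

def table_checksum(df):
    # Order-independent checksum: hash every row, then sum the hashes,
    # so the result does not depend on row ordering or partitioning.
    hashed = df.select(F.xxhash64(*df.columns).alias("h"))
    # Cast to decimal so the aggregate cannot overflow a 64-bit long.
    return hashed.agg(F.sum(F.col("h").cast("decimal(38,0)"))).first()[0]

def validate_migration(source_df, target_df):
    # Row-count parity is the cheapest first check.
    src_n, tgt_n = source_df.count(), target_df.count()
    assert src_n == tgt_n, f"row counts differ: {src_n} vs {tgt_n}"
    # Checksum parity catches value-level drift that counts alone miss.
    assert table_checksum(source_df) == table_checksum(target_df), \
        "checksums differ"

# Hypothetical usage: compare one migrated table.
# validate_migration(snowflake_orders_df, databricks_orders_df)
```

Query performance benchmarks, the third check named above, are workload-specific and so are left out of the sketch.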