
Data Engineer

This role is for a Data Engineer on a 6-month contract, offering a pay rate of $X/hour. It requires strong Python, advanced SQL, and Databricks skills, along with experience in Azure, data visualization tools, and big data processing. Hybrid work in Sunnyvale, CA.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 22, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Sunnyvale, CA
🧠 - Skills detailed
#Looker #Databases #Microsoft Power BI #Datasets #Scala #ETL (Extract, Transform, Load) #Azure #Metadata #SQL (Structured Query Language) #Programming #BI (Business Intelligence) #Big Data #Azure Cloud #Visualization #Python #Cloud #Databricks #Deployment #Data Engineering #Docker #Tableau #Delta Lake
Role description

Note: Only candidates who can work on W2 can apply.

Qualifications for Data Engineer:
• Strong Python programming skills; expert-level use of Python to process big data.
• Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL) and familiarity with a variety of database systems.
• Extensive experience with Databricks on the Azure cloud platform, with a deep understanding of Delta Lake and lakehouse architecture.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
• Strong analytic skills related to data visualization dashboards and metrics; experience with Tableau, Power BI, or Looker.
• Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
• A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
• Working knowledge of message queuing, stream processing, and highly scalable "big data" data stores.
• Familiarity with deployment tools such as Docker and with building CI/CD pipelines.
• Experience supporting and working with cross-functional teams in a dynamic environment.
• Experience in software development and data engineering.
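As a rough illustration of the kind of Python-driven SQL query authoring the qualifications above describe (a minimal sketch only, using the standard-library sqlite3 module and invented table and column names, not anything specific to this role):

```python
# Illustrative sketch: authoring an aggregate SQL query from Python.
# The "events" table, user_id/amount columns, and sample rows are
# hypothetical, chosen only to demonstrate the pattern.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 10.0), (1, 5.0), (2, 7.5)],
)

# Aggregate spend per user, highest total first.
rows = conn.execute(
    """
    SELECT user_id, SUM(amount) AS total
    FROM events
    GROUP BY user_id
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # [(1, 15.0), (2, 7.5)]
```

In a Databricks setting the same query shape would typically run through Spark SQL against Delta Lake tables rather than sqlite3, but the query-authoring skill being asked for is the same.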

Top 3 Skills Needed or Required: Python, advanced SQL, and Databricks

This is a hybrid role in Sunnyvale, CA, with 3 days onsite and 2 days remote.