
Data Engineer

This is a contract-to-hire Data Engineer position based in Draper, UT, offering $100K - $125K. It requires 5+ years in data engineering, 3+ years in Python, and experience with ETL/ELT, Azure, and relational/NoSQL databases.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$560 (estimated)
🗓️ - Date discovered
February 20, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Salt Lake County, UT
🧠 - Skills detailed
#Python #Datasets #Kubernetes #Data Pipeline #ETL (Extract, Transform, Load) #MySQL #Data Warehouse #BI (Business Intelligence) #NoSQL #Snowflake #PostgreSQL #Redshift #SQL Server #SQL (Structured Query Language) #MongoDB #Azure #Data Engineering
Role description

Job Title: Data Engineer

Location: Onsite (Draper, UT)

Job Type: Contract-to-Hire

Salary Range: $100K - $125K

Referral Fee: $1,000

Employment Eligibility: Gravity cannot transfer or sponsor a work visa for this position. Applicants must be eligible to work in the U.S. for any employer directly (we are not open to contract or “corp to corp” agreements).

Position Overview:

In this position, you will be involved in all things data-related. In a small but growing company, you will make an impact by leading data pipeline development: ensuring seamless integration of multiple data sources, designing and optimizing ETL processes, and structuring data. In addition to leading the data engineering efforts, you will mentor other developers, share best practices, and support the BI and reporting needs of the organization.
Duties & Responsibilities:
• Participate in daily stand-ups, discuss assignments, plan, and prioritize tasks.
• Build data pipelines and ETL/ELT processes to structure and move data from various sources (a brief illustrative sketch follows this list).
• Organize and structure data into standardized formats used to build reports that are easily accessible to stakeholders.
• Work with large datasets and complex business logic.
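
To give a flavor of the pipeline work described above, here is a minimal extract-transform-load sketch. It uses Python's standard-library sqlite3 module as a stand-in for the relational sources and warehouse named in this posting; every table, column, and value is hypothetical and purely illustrative, not part of the actual role.

```python
# Minimal ETL sketch: extract raw rows, standardize them, and load them
# into a report-ready table. sqlite3 stands in for the real databases;
# all names here are hypothetical.
import sqlite3


def extract(source: sqlite3.Connection) -> list[tuple]:
    """Pull raw order rows from the source system."""
    return source.execute(
        "SELECT id, amount_cents, region FROM raw_orders"
    ).fetchall()


def transform(rows: list[tuple]) -> list[tuple]:
    """Standardize: convert cents to dollars, normalize region codes."""
    return [(oid, cents / 100.0, region.upper()) for oid, cents, region in rows]


def load(target: sqlite3.Connection, rows: list[tuple]) -> None:
    """Write standardized rows into a reporting table."""
    target.execute(
        "CREATE TABLE IF NOT EXISTS orders_clean "
        "(id INTEGER, amount_usd REAL, region TEXT)"
    )
    target.executemany("INSERT INTO orders_clean VALUES (?, ?, ?)", rows)
    target.commit()


if __name__ == "__main__":
    src = sqlite3.connect(":memory:")
    src.execute(
        "CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER, region TEXT)"
    )
    src.executemany(
        "INSERT INTO raw_orders VALUES (?, ?, ?)",
        [(1, 1999, "west"), (2, 450, "east")],
    )
    dst = sqlite3.connect(":memory:")
    load(dst, transform(extract(src)))
    print(dst.execute("SELECT * FROM orders_clean").fetchall())
```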
Required Experience & Skills:
• 3+ years’ experience developing in Python
• 5+ years’ experience in data development, integration, and engineering roles
• Experience in ETL/ELT development using Python
• Experience working in Azure and/or Salesforce environments
• Experience with relational DBs (PostgreSQL, MySQL, SQL Server)
• Experience with NoSQL DBs such as MongoDB
• Experience with Data Warehouse solutions such as Snowflake or Redshift
• Must have excellent communication skills and be a self-starter

Nice to Have Experience:
• Understanding of CI/CD processes
• Experience with containerization and orchestration (Kubernetes or similar tools)