
Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a contract basis, focusing on enterprise data platform projects. Key skills include SQL, Python, and experience with Databricks or Snowflake. The position requires expertise in data architecture and CI/CD pipelines.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 1, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Lake #AWS (Amazon Web Services) #Data Architecture #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Integration #Azure SQL #Data Lakehouse #SQL Server #Snowflake #Azure #Data Pipeline #Deployment #Database Systems #GitHub #RDS (Amazon Relational Database Service) #Datasets #MS SQL (Microsoft SQL Server) #Airflow #Data Warehouse #DevOps #AWS RDS (Amazon Relational Database Service) #dbt (data build tool) #Data Processing #Aurora #EDW (Enterprise Data Warehouse) #Data Engineering #Oracle #Databricks
Role description

We are seeking a skilled Data Engineer to contribute to transformative enterprise data platform projects, developing data pipelines and logic engines that manage ingest, staging, and multi-tier data product modeling and enrichment across OEM-specific data warehouse and data lakehouse platform implementations, for consumption by analytics clients. The role requires full-lifecycle design, build, deployment, and optimization of data products for multiple large enterprise, industry-vertical-specific implementations, processing datasets through a defined series of logically conformed layers, models, and views.

Role & Responsibilities:

   • Collaborate in defining the overall architecture of the solution, including knowledge of modern Enterprise Data Warehouse and Data Lakehouse architectures that implement Medallion or Lambda architectures.

   • Design, develop, test, and deploy processing modules to implement data-driven rules using SQL, Stored Procedures, and Python.

   • Understand and own data product engineering deliverables relative to a CI/CD pipeline and standard DevOps practices and principles.

   • Build and optimize data pipelines on one or more of these platforms - Databricks, Snowflake, SQL Server, or Azure Data Fabric.
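
The Medallion architecture named above layers data from raw ingest to consumption-ready products. As a rough illustration only (using SQLite as a stand-in for a platform such as Databricks or Snowflake, with invented table and column names), the bronze/silver/gold flow might look like:

```python
# Hypothetical Medallion-style (bronze/silver/gold) pipeline sketch.
# SQLite stands in for the warehouse platform; all names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Bronze: raw ingest, landed as-is (duplicates and bad records included).
cur.execute("CREATE TABLE bronze_orders (order_id TEXT, amount TEXT, region TEXT)")
cur.executemany(
    "INSERT INTO bronze_orders VALUES (?, ?, ?)",
    [("1", "100.0", "east"), ("1", "100.0", "east"),   # duplicate row
     ("2", "250.5", "west"), ("3", None, "east")],     # missing amount
)

# Silver: conformed layer -- deduplicate, cast types, drop invalid records.
cur.execute("""
    CREATE TABLE silver_orders AS
    SELECT DISTINCT order_id, CAST(amount AS REAL) AS amount, region
    FROM bronze_orders
    WHERE amount IS NOT NULL
""")

# Gold: enriched data product ready for analytics consumption.
cur.execute("""
    CREATE TABLE gold_region_revenue AS
    SELECT region, SUM(amount) AS revenue, COUNT(*) AS order_count
    FROM silver_orders
    GROUP BY region
""")

for row in cur.execute("SELECT * FROM gold_region_revenue ORDER BY region"):
    print(row)
```

In practice each layer would be a managed table or view in the chosen platform, with the transformations expressed in SQL, stored procedures, or Python per the responsibilities above.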

Hard Skills - Must have:

   • Current knowledge of, and experience using, one or more of the following modern data tools: Databricks, Snowflake, SQL Server, or Azure Data Fabric

   • Core experience with data architecture, data integrations, data warehousing, and ETL processes

   • Applied experience in SQL, Stored Procedures, and Python based on area of data platform specialization.

   • Strong knowledge of one or more of the following relational database systems: MS SQL Server, PostgreSQL, Oracle, Snowflake, Azure SQL, AWS RDS, Aurora, etc.

   • Experience with CI/CD pipelines to support deployment and integration workflows, including trunk-based development using GitHub Enterprise

Hard Skills - Nice to have/It's a plus:

   • Proficiency in Python analytics tooling for advanced data processing tasks.

   • Experience with dbt and Airflow for rapid model prototyping and collaboration

Soft Skills / Business Specific Skills:

   • Ability to identify, troubleshoot, and resolve complex data issues effectively.

   • Strong teamwork and communication skills, and the intellectual curiosity to work collaboratively and effectively with cross-functional teams.

   • Commitment to delivering high-quality, accurate, and reliable data product solutions.

   • Willingness to embrace new tools, technologies, and methodologies.

   • Innovative thinker with a proactive approach to overcoming challenges.