Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "X months" and a pay rate of "$X/hour." Required skills include Snowflake, DBT, IBM DataStage, SQL, and Python. Experience with cloud platforms and data orchestration tools is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 3, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Cincinnati, OH
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Compliance #DataOps #Data Lake #Airflow #Data Processing #Spark (Apache Spark) #Data Modeling #DataStage #Schema Design #Agile #Azure #AWS (Amazon Web Services) #Python #Security #dbt (data build tool) #Data Warehouse #Scala #Data Orchestration #Data Analysis #GCP (Google Cloud Platform) #Automation #Data Pipeline #Git #Data Science #SQL (Structured Query Language) #Data Integration #Microservices #Data Governance #Data Quality #ML (Machine Learning) #Snowflake #Version Control #Kafka (Apache Kafka) #Cloud #Data Engineering #Deployment
Role description
Job Summary:

We are seeking a highly skilled Data Engineer with strong expertise in Snowflake, DBT (Data Build Tool), and IBM DataStage to join our data team. The ideal candidate will be responsible for designing, building, and optimizing scalable data pipelines while ensuring high data quality and performance. In this role, you will collaborate closely with data analysts, data scientists, and business teams to support data-driven decision-making.

Key Responsibilities:

   • Design, develop, and maintain scalable and efficient ETL/ELT pipelines using Snowflake, DBT, and DataStage.

   • Optimize data warehouse performance, including query tuning and cost management.

   • Develop and implement data transformation models using DBT.

   • Manage and orchestrate data workflows and schedules to ensure seamless data movement.

   • Integrate various data sources (structured and unstructured) into the data platform.

   • Implement and enforce data governance, security, and compliance best practices.

   • Collaborate with stakeholders to understand business needs and translate them into data solutions.

   • Monitor and troubleshoot data pipelines, ensuring reliability and accuracy.

   • Work on CI/CD pipelines for data integration and deployment automation.

Required Qualifications:

   • Experience in Data Engineering or a related field.

   • Strong experience in Snowflake, including schema design, performance tuning, and cost optimization.

   • Proficiency in DBT for data modeling, transformations, and testing.

   • Hands-on experience with IBM DataStage for ETL development and management.

   • Experience with SQL and Python for data processing and automation.

   • Familiarity with cloud platforms (AWS, Azure, or GCP) and data orchestration tools (Airflow preferred).

   • Knowledge of data warehouse best practices, data lakes, and data modeling techniques.

   • Experience working with version control systems (Git) and CI/CD pipelines.

   • Strong problem-solving and communication skills.

Preferred Qualifications:

   • Experience with Kafka, Spark, or other streaming technologies.

   • Knowledge of APIs and microservices architecture for data integration.

   • Exposure to Machine Learning pipelines and analytics frameworks.

   • Experience with DataOps practices and Agile methodologies.