
AWS Data Engineer

This role is for an AWS Data Engineer with a contract length of over 6 months, offering $55.00 - $58.00 per hour. Key skills include SQL, Python, DBT, and AWS services (S3, Glue, Athena); 5+ years of data engineering experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$440 - $464 (derived from $55.00 - $58.00 per hour over an 8-hour shift)
🗓️ - Date discovered
February 13, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Contract (Full-time)
🔒 - Security clearance
Unknown
📍 - Location detailed
Phoenix, AZ
🧠 - Skills detailed
#Snowflake #dbt (data build tool) #Data Exploration #Athena #Data Modeling #REST (Representational State Transfer) #Airflow #REST API #Data Engineering #AWS (Amazon Web Services) #Python #BigQuery #GIT #S3 (Amazon Simple Storage Service) #Scala #Redshift #Data Science #ETL (Extract, Transform, Load) #Data Accuracy #SQL (Structured Query Language) #Version Control #Data Pipeline
Role description

Role: AWS Data Engineer
Type: Hybrid
Rate: Disclosed on call
Client: Disclosed on call
Job Description: We are seeking a skilled Data Engineer to join our team. The ideal candidate will have strong experience in SQL, Python, DBT, data modeling, and AWS services. You will be responsible for building and maintaining scalable data pipelines, transforming raw data into meaningful insights, and optimizing data workflows.
Key Responsibilities:

Explore, transform, and manipulate data using SQL.
Pull and integrate data from REST APIs using Python (see the sketch after this list).
Develop and maintain DBT models and implement DBT tests.
Design and implement star schema data models on transactional tables.
Work with AWS services such as S3, Glue, and Athena to manage and process data.
Optimize query performance and ensure data accuracy.
Collaborate with analysts, data scientists, and business stakeholders to understand data needs.
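
A minimal sketch of the REST-to-S3 ingestion step described above, assuming a paginated JSON API; the endpoint URL, bucket name, and key layout are hypothetical placeholders, not details from this posting.

```python
import json

import boto3
import requests

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint
BUCKET = "my-raw-data-bucket"                  # hypothetical bucket name

def pull_and_land(page: int = 1) -> str:
    """Pull one page from the REST API and land the raw JSON in S3."""
    resp = requests.get(API_URL, params={"page": page}, timeout=30)
    resp.raise_for_status()  # fail loudly on HTTP errors
    key = f"raw/orders/page={page}.json"
    boto3.client("s3").put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(resp.json()).encode("utf-8"),
    )
    return key
```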

Required Qualifications:

Proficiency in SQL for data exploration, transformation, and optimization.
Strong Python skills, particularly in integrating data via REST APIs.
Hands-on experience with DBT (Data Build Tool) and writing DBT tests.
Expertise in data modeling, especially designing star schemas on transactional tables.
Working knowledge of AWS services, including S3, Glue, and Athena (a query sketch follows this list).
Experience in building scalable data pipelines and ETL processes.
Strong analytical and problem-solving skills.
Ability to work independently and in a team-oriented environment.
5+ years of relevant experience in data engineering.
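
For the Athena item above, a minimal Python (boto3) sketch of running an ad-hoc SQL check against landed data; the database name and results bucket are hypothetical placeholders.

```python
import time

import boto3

athena = boto3.client("athena")

def run_query(sql: str, database: str = "raw") -> list:
    """Start an Athena query, poll to completion, and return the result rows."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(
            QueryExecutionId=qid
        )["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)  # poll until the query reaches a terminal state
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {qid} ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```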

Preferred Qualifications:

Experience with data warehousing technologies (Redshift, Snowflake, or BigQuery).
Knowledge of orchestration tools like Airflow (a minimal DAG sketch follows this list).
Familiarity with version control tools like Git.
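
For the Airflow item above, a minimal DAG sketch assuming Airflow 2.x; the DAG id and task callables are hypothetical placeholders that wire together the steps sketched earlier.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders() -> None:
    """Placeholder for the REST-to-S3 ingestion step sketched earlier."""
    ...

def run_dbt_models() -> None:
    """Placeholder for triggering dbt (e.g. via a subprocess call)."""
    ...

with DAG(
    dag_id="orders_pipeline",          # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = PythonOperator(task_id="run_dbt_models", python_callable=run_dbt_models)
    extract >> transform  # run ingestion before dbt transformations
```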

Job Types: Full-time, Contract
Pay: $55.00 - $58.00 per hour
Schedule:

8-hour shift
Monday to Friday

Work Location: Remote