Senior AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AWS Data Engineer; the contract length and pay rate are currently unknown. Key skills include AWS services, Python, SQL, and data lake implementation. AWS Solutions Architect or Developer certification is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 4, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Newark, NJ
🧠 - Skills detailed
#Lambda (AWS Lambda) #SQL (Structured Query Language) #Aurora #IAM (Identity and Access Management) #Kafka (Apache Kafka) #Python #Storage #Data Ingestion #Data Engineering #Data Lake #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Scripting #Programming #Elasticsearch #Databases #API (Application Programming Interface) #DynamoDB #Redshift #Computer Science #Athena #Amazon RDS (Amazon Relational Database Service) #RDS (Amazon Relational Database Service) #Strategy #Unit Testing #Shell Scripting #Database Systems #Cloud #AWS (Amazon Web Services) #AWS Lambda #Data Warehouse #SQS (Simple Queue Service)
Role description

AWS Data Engineer:

Qualifications:

   • Bachelor's degree in Computer Science, Software Engineering, MIS, or an equivalent combination of education and experience

   • Experience implementing and supporting data lakes, data warehouses, and data applications on AWS for large enterprises

   • Programming experience with Python, shell scripting, and SQL

   • Solid experience with AWS services such as CloudFormation, S3, Athena, Glue, EMR/Spark, RDS, Redshift, DynamoDB, Lambda, Step Functions, IAM, KMS, SM, etc.

   • Solid experience implementing solutions on AWS-based data lakes

   • Good experience with AWS services including API Gateway, Lambda, Step Functions, SQS, DynamoDB, S3, and Elasticsearch

   • Serverless application development using AWS Lambda (an illustrative sketch follows this list)

   • Experience in AWS data lake/data warehouse/business analytics

   • Experience in system analysis, design, development, and implementation of data ingestion pipelines in AWS

   • Knowledge of ETL/ELT

   • End-to-end data solutions (ingestion, storage, integration, processing, access) on AWS

   • Architect and implement a CI/CD strategy for EDP

   • Implement high-velocity streaming solutions using Amazon Kinesis, SQS, and Kafka (preferred)

   • Migrate data from traditional relational database systems, file systems, and NAS shares to AWS relational databases such as Amazon RDS, Aurora, and Redshift

   • Migrate data from APIs to the AWS data lake (S3) and to relational databases such as Amazon RDS, Aurora, and Redshift

   • Implement POCs for any new technologies or tools to be adopted on EDP and onboard them for real use cases

   • AWS Solutions Architect or AWS Developer Certification preferred

   • Good understanding of Lakehouse/data cloud architecture
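
For illustration only, here is a minimal sketch of the kind of serverless ingestion work described in the bullets above. The bucket name, key layout, and event shape are assumptions made for this example and are not taken from the posting.

```python
import json
import os
from datetime import datetime, timezone

import boto3

# Hypothetical landing bucket for the data lake; not specified in the posting.
LANDING_BUCKET = os.environ.get("LANDING_BUCKET", "example-data-lake-landing")

s3 = boto3.client("s3")


def handler(event, context):
    """Write an incoming API payload to the S3 landing zone as raw JSON.

    Assumes an API Gateway proxy integration event whose body holds a JSON
    record; an SQS, Kinesis, or S3 trigger would need a different event shape.
    """
    record = json.loads(event.get("body") or "{}")
    now = datetime.now(timezone.utc)
    key = f"raw/api/{now:%Y/%m/%d}/{context.aws_request_id}.json"

    s3.put_object(
        Bucket=LANDING_BUCKET,
        Key=key,
        Body=json.dumps(record).encode("utf-8"),
        ContentType="application/json",
    )
    return {"statusCode": 200, "body": json.dumps({"stored": key})}
```

In practice the trigger could equally be SQS, Kinesis, or an S3 event, in line with the services listed above.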

Responsibilities:

   • Design, build, and maintain efficient, reusable, and reliable architecture and code

   • Build reliable and robust data ingestion pipelines (within AWS, on-prem to AWS, etc.)

   • Ensure the best possible performance and quality of high-scale data engineering projects

   • Participate in the architecture and system design discussions

   • Independently perform hands-on development and unit testing of the applications (see the sketch after this list)

   • Collaborate with the development team and build individual components into complex enterprise web systems.

   • Work in a team environment with product, production operations, QE/QA, and cross-functional teams to deliver projects throughout the whole software development cycle

   • Identify and resolve any performance issues

   • Keep up to date with new technology development and implementation

   • Participate in code reviews to ensure standards and best practices are met
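
As a small illustration of the hands-on development and unit testing expectation above, here is a sketch using Python's built-in unittest; the transform function and its field names are hypothetical, not part of the role description.

```python
import unittest


def normalize_record(record: dict) -> dict:
    """Hypothetical transform: trim an ID string and keep only the date part."""
    return {
        "customer_id": str(record["customer_id"]).strip(),
        "signup_date": record["signup_date"][:10],  # keep YYYY-MM-DD
    }


class NormalizeRecordTest(unittest.TestCase):
    def test_trims_and_truncates(self):
        raw = {"customer_id": " 42 ", "signup_date": "2025-04-04T10:15:00Z"}
        self.assertEqual(
            normalize_record(raw),
            {"customer_id": "42", "signup_date": "2025-04-04"},
        )


if __name__ == "__main__":
    unittest.main()
```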