

Senior Data Engineer - AWS, Python, Data Lake
Our major Sports client is seeking a Senior Data Engineer to join their growing team. Below is an overview of what they are seeking.
Due to client requirements, applicants must be willing and able to work on a W2 basis. For our W2 consultants, we offer a great benefits package that includes Medical, Dental, and Vision benefits, 401k with company matching, and life insurance.
Rate: $70 - $80 / hr. W2
Primary Responsibilities:
• Design, implement, document, and automate scalable, production-grade, end-to-end data pipelines, including API ingestion, transformation, processing, monitoring, and analytics capabilities, while adhering to best practices in software development.
• Work as part of the data engineering team, building data integrations for optimal extraction, transformation, and loading of data from a wide variety of data sources.
• Deploy AWS Lake Formation for data governance through Lake Formation database- and table-level permissioning.
• Successfully introduce relevant technical solutions that improve the productivity, scalability, quality, and reliability of the data platform.
• Design and implement data platform features that generally span multiple components and affect the work of one's own and several other team members.
• Write clear and concise documentation for our most complex technical solutions.
• Collaborate with cross-functional teams to understand data platform infrastructure needs and translate them into effective and user-friendly solutions.
• Implement best practices for data infrastructure designs, ensuring efficient utilization of resources, and minimizing latency in data-related tasks.
• Identify and address bottlenecks in existing data infrastructure to improve overall system performance.
• Design and build observability solutions that monitor resource utilization, cost, quotas, etc., and trigger alerts as needed.
• Communicate project status, issues, and solutions effectively to stakeholders and team members.
Required Qualifications & Experience
• 8+ years of related experience with a track record of building production software.
• 3+ years of solid experience working with the Medallion Lakehouse architecture (Bronze, Silver, Gold).
• Proficiency in building and delivering AWS-native data solutions:
• Spark, Athena, Trino/Presto
• Lambda, ECS, EKS, containerization, serverless components
• Glue Catalog and schema evolution
• Lakehouse open table formats
• Working experience with distributed processing systems, including Apache Spark, is a must.
• Proficiency in lakehouse architecture, open table formats such as Hudi, orchestration frameworks such as Airflow, real-time streaming with Apache Kafka, and container technology.
• Solid understanding of InfoSec best practices for data engineering: data encryption, secure data exchange methods, and data privacy.
• Solid understanding of data science and machine learning workflows and frameworks.
• Ability to work independently and to collaborate with cross-functional teams to complete projects.
• Ability to lead integration of technical components with other teams as necessary.
Programming Languages and Tech Requirements:
• AWS (EMR, Lake Formation, ECS, ECR, containerization, serverless assets, EKS, Glue, Lambda, Flink, Kinesis, S3)
• Airflow DAGs, ephemeral creation, Amazon EventBridge
• Python, PySpark
• SQL (Spark SQL)
• Jupyter Notebooks
• GitLab and CI/CD
• Hudi and/or Iceberg lakehouse architecture
Education Requirements
• Bachelor’s degree in computer science or a related field required.