Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a long-term contract in St. Louis, MO, offering a competitive pay rate. Key skills required include AWS EMR, Spark, PySpark, Python, SQL, and Apache Airflow, with telecommunications industry experience preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 12, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
St. Louis, MO
🧠 - Skills detailed
#Data Governance #DevOps #Python #Data Warehouse #Batch #Compliance #Data Engineering #GIT #Data Pipeline #Apache Airflow #S3 (Amazon Simple Storage Service) #Automation #Big Data #Jira #Athena #PySpark #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #SQL Queries #Scripting #Datasets #Redshift #Data Modeling #Data Transformations #Observability #Data Lineage #SQL (Structured Query Language) #AWS EMR (Amazon Elastic MapReduce) #AWS (Amazon Web Services) #Monitoring #Presto #Spark (Apache Spark) #Data Processing #Airflow #Data Lake #Security #Scala #Computer Science #Infrastructure as Code (IaC) #Agile #Scrum #Terraform #Data Quality #Cloud
Role description

Data Engineer

Hybrid: St. Louis, MO

Long Term Contract

We are seeking a highly motivated and experienced Data Engineer to support the development and optimization of Outage Management Data Products for a leading enterprise telecommunications company. This individual will play a key role in designing, building, and maintaining scalable data pipelines and analytics solutions that power real-time and historical insights into network outages and service disruptions.

The ideal candidate brings deep expertise in AWS big data tools (EMR, Spark, PySpark), Python, SQL, and workflow orchestration using Apache Airflow, with a strong understanding of data engineering best practices in a production environment.

Key Responsibilities:

   • Design, develop, and maintain scalable, reliable ETL/ELT pipelines for outage detection, tracking, and resolution data products.

   • Leverage AWS EMR, Spark, and PySpark to process large-scale datasets related to network performance and outage events.

   • Collaborate with cross-functional teams including network operations, analytics, and product to define and implement data models and analytical solutions.

   • Implement and manage Airflow DAGs to orchestrate workflows and ensure data availability across environments (see the DAG sketch after this list).

   • Develop and optimize SQL queries and data transformations for analytics dashboards and reporting tools.

   • Monitor and troubleshoot pipeline issues, ensuring high data quality, availability, and observability.

   • Partner with stakeholders to translate business requirements into technical designs and data strategies.

   • Assist in building a data lake/data warehouse architecture that supports both real-time and batch workloads.

   • Ensure security and compliance for sensitive data, implementing data governance best practices.
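
To illustrate the orchestration work described above, here is a minimal Airflow DAG sketch, assuming a recent Airflow 2.x release (2.4+) with the standard BashOperator. The DAG id, S3 paths, and validation script are hypothetical placeholders, not the actual Outage Management product.

```python
# Minimal sketch of a daily outage-events DAG: submit a PySpark aggregation
# job, then run a lightweight data-quality check. Names and paths are
# hypothetical placeholders for whatever the real environment uses.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="outage_events_daily",          # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
    tags=["outage-management"],
) as dag:

    # Submit the Spark job; the script location on S3 is a placeholder.
    aggregate_outages = BashOperator(
        task_id="aggregate_outage_events",
        bash_command=(
            "spark-submit --deploy-mode cluster "
            "s3://example-bucket/jobs/aggregate_outage_events.py "
            "--run-date {{ ds }}"
        ),
    )

    # Post-load check; a real pipeline might use a dedicated data-quality
    # framework instead of a standalone script.
    validate_output = BashOperator(
        task_id="validate_output",
        bash_command=(
            "python /opt/pipelines/checks/validate_outage_counts.py "
            "--run-date {{ ds }}"
        ),
    )

    aggregate_outages >> validate_output
```

In a real deployment the BashOperator steps would likely be replaced by EMR-specific operators from Airflow's Amazon provider package or another managed submission mechanism.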

Required Qualifications:

   • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field (or equivalent industry experience).

   • 4+ years of experience in data engineering, analytics, or big data development roles.

   • Strong hands-on experience with AWS EMR, Spark, and PySpark for large-scale data processing (see the PySpark sketch after this list).

   • Proficiency in Python for scripting, automation, and data transformation tasks.

   • Advanced knowledge of SQL and data modeling for both OLTP and OLAP environments.

   • Experience with Apache Airflow for data pipeline orchestration and monitoring.

   • Familiarity with modern data lake and data warehouse architectures (e.g., S3, Redshift, Athena, Hive, Presto).

   • Proven ability to write clean, scalable, and production-grade code.

   • Experience working in an Agile/Scrum environment with tools like Jira, Git, and CI/CD systems.
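
As a rough illustration of the EMR/Spark/PySpark processing referenced above, the following is a minimal PySpark sketch. The S3 paths and the event schema (event_id, market, started_at, resolved_at, event_date) are assumptions for illustration only.

```python
# Minimal PySpark sketch: aggregate raw outage events into daily counts and
# durations per market, then write the result back to the data lake
# partitioned by event date. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("aggregate_outage_events").getOrCreate()

# Raw outage events from the (placeholder) landing zone.
events = spark.read.parquet("s3://example-bucket/raw/outage_events/")

daily_summary = (
    events
    .withColumn(
        "outage_minutes",
        (F.col("resolved_at").cast("long") - F.col("started_at").cast("long")) / 60.0,
    )
    .groupBy("event_date", "market")
    .agg(
        F.countDistinct("event_id").alias("outage_count"),
        F.sum("outage_minutes").alias("total_outage_minutes"),
        F.avg("outage_minutes").alias("avg_outage_minutes"),
    )
)

(
    daily_summary.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/outage_daily_summary/")
)

spark.stop()
```

A production job would also need to handle late-arriving events, schema evolution, and data-quality checks before publishing to the curated layer.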

Preferred Qualifications:

   • Experience working with telecommunications data, network outages, or performance monitoring systems.

   • Familiarity with real-time data processing frameworks (e.g., Kafka, Flink, Kinesis).

   • Experience building or supporting data products or analytics dashboards for business stakeholders.

   • Understanding of DevOps and data infrastructure as code using tools like Terraform or CloudFormation.

   • Exposure to data governance frameworks (e.g., data lineage, quality checks, cataloging).

ABOUT EIGHT ELEVEN:

At Eight Eleven, our business is people. Relationships are at the center of what we do. A successful partnership is only as strong as the relationship built. We’re your trusted partner for IT hiring, recruiting and staffing needs.

For over 16 years, Eight Eleven has established and maintained relationships that are designed to meet your IT staffing needs. Whether it’s contract, contract-to-hire, or permanent placement work, we customize our search based upon your company's unique initiatives, culture and technologies. With our national team of recruiters placed at 21 major hubs around the nation, Eight Eleven finds the people best-suited for your business. When you work with us, we work with you. That’s the Eight Eleven promise.

Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.