
Principal Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Principal Data Engineer with 11+ years of experience, offering a hybrid contract in Fort Mill or Austin. Key skills include AWS, Python, Spark, ETL, SQL, and Pytest. Strong data engineering background is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
March 29, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Fort Mill, SC
🧠 - Skills detailed
#Spark (Apache Spark) #Lambda (AWS Lambda) #Pytest #Data Science #Data Ingestion #Big Data #AWS S3 (Amazon Simple Storage Service) #Python #Security #Data Governance #Apache Kafka #Data Pipeline #Airflow #Data Integrity #Git #SQL (Structured Query Language) #ML (Machine Learning) #S3 (Amazon Simple Storage Service) #Docker #Apache Spark #Data Engineering #Databases #Terraform #Version Control #Redshift #Kafka (Apache Kafka) #AWS (Amazon Web Services) #Scala #Storage #Automation #Data Accuracy #Data Processing #ETL (Extract, Transform, Load) #Kubernetes #BI (Business Intelligence) #SQL Queries #Cloud
Role description

Open roles:

   • Mid-Level Data Engineer, 7+ years experience

   • Senior Data Engineer, 9+ years experience

   • Principal Data Engineer, 11+ years experience

Location: Fort Mill or Austin (Hybrid)

We are looking for a skilled Data Engineer to join our team and help build robust, scalable, and efficient data pipelines. The ideal candidate will have strong expertise in AWS, Python, Spark, ETL pipelines, SQL, and Pytest. This role involves designing, implementing, and optimizing data pipelines to support analytics, business intelligence, and machine learning initiatives.

Key Responsibilities

   • Design, develop, and maintain ETL pipelines using AWS services, Python, and Spark.

   • Optimize data ingestion, transformation, and storage processes for high-performance data processing.

   • Work with structured and unstructured data, ensuring data integrity, quality, and governance.

   • Develop SQL queries to extract and manipulate data efficiently from relational databases.

   • Implement data validation and testing frameworks using Pytest to ensure data accuracy and reliability.

   • Collaborate with data scientists, analysts, and software engineers to build scalable data solutions.

   • Monitor and troubleshoot data pipelines to ensure smooth operation and minimal downtime.

   • Stay up-to-date with industry trends, tools, and best practices for data engineering and cloud technologies.
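The pipeline work described above follows a standard extract-transform-load shape. A stdlib-only miniature of that shape is sketched below; in the role itself the extract step would read from AWS S3 and the transforms would run as Spark jobs, and the field names and data-quality rule here are purely hypothetical.

```python
import csv
import json
from io import StringIO

# Hypothetical raw input; in practice this would be read from S3 (e.g. via boto3 or Spark).
RAW_CSV = """order_id,amount,currency
1001,25.00,USD
1002,,USD
1003,40.50,usd
"""

def extract(raw: str) -> list[dict]:
    """Parse raw CSV into row dicts."""
    return list(csv.DictReader(StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Drop rows with missing amounts and normalize types and casing."""
    cleaned = []
    for row in rows:
        if not row["amount"]:
            continue  # example data-quality rule: amount is required
        cleaned.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "currency": row["currency"].upper(),
        })
    return cleaned

def load(rows: list[dict]) -> str:
    """Serialize to JSON lines, the shape a warehouse loader might ingest."""
    return "\n".join(json.dumps(r) for r in rows)

if __name__ == "__main__":
    print(load(transform(extract(RAW_CSV))))
```

The same extract/transform/load boundaries carry over directly when each stage is swapped for its distributed equivalent (Spark DataFrames, Glue jobs, Redshift loads), which is why keeping them as separate, individually testable functions matters.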

Required Skills & Qualifications

   • Experience in Data Engineering or a related field.

   • Strong proficiency in AWS (S3, Glue, Lambda, EMR, Redshift, etc.) for cloud-based data processing.

   • Hands-on experience with Python for data processing and automation.

   • Expertise in Apache Spark for distributed data processing.

   • Solid understanding of ETL pipeline design and data warehousing concepts.

   • Proficiency in SQL for querying and managing relational databases.

   • Experience writing unit and integration tests using Pytest.

   • Familiarity with CI/CD pipelines and version control systems (e.g., Git).

   • Strong problem-solving skills and ability to work in a fast-paced environment.
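The Pytest requirement above usually means data-quality tests of the kind sketched below: plain assert-based functions that Pytest discovers and runs by name. The validation rules, column names, and sample rows are illustrative assumptions, not taken from the posting.

```python
# Minimal data-validation checks in the style Pytest collects (functions named test_*).
# Rules and column names here are hypothetical examples.

ROWS = [
    {"order_id": 1001, "amount": 25.0, "currency": "USD"},
    {"order_id": 1003, "amount": 40.5, "currency": "USD"},
]

def test_no_missing_amounts():
    # Every row must carry an amount.
    assert all(row["amount"] is not None for row in ROWS)

def test_amounts_positive():
    # Amounts must be strictly positive.
    assert all(row["amount"] > 0 for row in ROWS)

def test_order_ids_unique():
    # order_id acts as a key, so duplicates indicate an upstream bug.
    ids = [row["order_id"] for row in ROWS]
    assert len(ids) == len(set(ids))
```

Running `pytest` in the project directory discovers and executes these functions; wired into a CI/CD pipeline, they gate the ETL job before data reaches the warehouse.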

Preferred Qualifications

   • Experience with Terraform, Docker, or Kubernetes.

   • Knowledge of big data tools such as Apache Kafka or Airflow.

   • Exposure to data governance and security best practices.