
Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 12+ years of experience, offering a long-term W2 contract in Malvern, PA (Hybrid). Key skills include Python, AWS serverless architecture, and data pipeline development. Familiarity with data warehousing and CI/CD is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 1, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Malvern, PA
🧠 - Skills detailed
#Agile #AWS (Amazon Web Services) #Data Architecture #Python #ETL (Extract, Transform, Load) #NoSQL #Data Pipeline #BI (Business Intelligence) #Visualization #Microsoft Power BI #Redshift #Security #S3 (Amazon Simple Storage Service) #DynamoDB #Lambda (AWS Lambda) #Tableau #Data Extraction #Data Security #AWS Glue #Databases #Data Processing #Data Integrity #API (Application Programming Interface) #Data Engineering #Automation
Role description

Job Title: Data Engineer (Hybrid) – Long-Term Contract

Location: Malvern, PA (Hybrid)

Contract Length: Long-term

Contract Type: W2 only (no C2C)

Experience: 12+ Years

We are seeking an experienced Data Engineer with 12+ years of expertise to join our team on a long-term contract basis. The ideal candidate will have extensive experience in building data pipelines, working with serverless architecture in AWS, and writing Python code for data processing and automation. As a Data Engineer, you will play a critical role in ensuring the integrity and efficiency of our data systems and will be responsible for transforming and optimizing complex data patterns and structures.

Key Responsibilities:

   • Develop, implement, and optimize data pipelines to handle large-scale data processing tasks efficiently.

   • Design and maintain serverless solutions in AWS (e.g., Lambda, S3, DynamoDB) for data workflows.

   • Utilize Python to automate data extraction, transformation, and loading (ETL) processes.

   • Collaborate with data architects and stakeholders to define and design data models, and ensure data is processed and structured according to requirements.

   • Ensure data integrity, quality, and consistency by applying rigorous testing and validation methods.

   • Provide expertise in data patterns, structures, and logic, ensuring data is appropriately formatted for analysis and business consumption.

   • Troubleshoot and resolve performance issues and data discrepancies in pipeline systems.

   • Work in an agile, fast-paced environment, collaborating with cross-functional teams to meet business goals and deliver insights.
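The ETL work described above can be sketched in plain Python. This is a hedged illustration of the extract/transform/load pattern the posting mentions, not code from the employer; the record fields (`id`, `amount`) and the in-memory store are hypothetical stand-ins for the AWS services (e.g., S3 input, DynamoDB output) a real pipeline here would use.

```python
# Minimal ETL sketch (hypothetical example): extract raw rows, validate
# and transform them, then "load" into an in-memory store. In the real
# role the load step would target an AWS service such as DynamoDB.

def extract(raw_rows):
    """Parse raw CSV-style rows into dicts (stand-in for reading from S3)."""
    return [dict(zip(("id", "amount"), row.split(","))) for row in raw_rows]

def transform(records):
    """Cast types and drop malformed records to preserve data integrity."""
    clean = []
    for rec in records:
        try:
            clean.append({"id": int(rec["id"]), "amount": float(rec["amount"])})
        except (KeyError, ValueError):
            continue  # discard rows that fail validation
    return clean

def load(records, store):
    """Write validated records keyed by id (stand-in for a DynamoDB put)."""
    for rec in records:
        store[rec["id"]] = rec
    return store

store = load(transform(extract(["1,9.50", "2,abc", "3,4.25"])), {})
```

Packaged as an AWS Lambda handler, each stage would stay a small pure function like this, which keeps the pipeline testable without deploying to AWS.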

Qualifications:

   • 12+ years of experience as a Data Engineer or in a similar role.

   • Expertise in Python for data processing and pipeline automation.

   • Extensive experience working with serverless AWS technologies (e.g., Lambda, API Gateway, S3, DynamoDB).

   • Strong understanding of data patterns, structures, and transformation processes to ensure optimal data architecture.

   • Proven ability to build, optimize, and maintain data pipelines at scale.

   • Familiarity with AWS Glue, Redshift, or other AWS services for data warehousing and analytics is a plus.

   • Solid understanding of relational and NoSQL databases.

   • Strong problem-solving skills and attention to detail.

   • Excellent communication and teamwork skills, with the ability to work independently and collaboratively in a hybrid work environment.

Preferred Skills:

   • Experience with CI/CD pipelines for data infrastructure.

   • Familiarity with data visualization tools like Tableau, Power BI, or similar.

   • Understanding of data security and privacy practices.