Data Engineer - TX

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Irving, TX, lasting 6+ months at $57/hr. Key skills include SQL, Python, Spark, ETL/ELT processes, and cloud platforms (GCP preferred). Experience with Delta Lake and legacy ETL tools is advantageous.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$456 (equivalent to $57/hr over an 8-hour day)
🗓️ - Date discovered
March 27, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Irving, TX
🧠 - Skills detailed
#Data Security #Airflow #Data Quality #Data Warehouse #Distributed Computing #SQL (Structured Query Language) #Azure #AWS (Amazon Web Services) #Data Engineering #Python #Spark (Apache Spark) #Data Lake #Scala #Security #Apache Airflow #Data Pipeline #Datasets #Dataflow #SSIS (SQL Server Integration Services) #Big Data #ETL (Extract, Transform, Load) #Ab Initio #GCP (Google Cloud Platform) #BigQuery #dbt (data build tool) #Compliance #SAS #Cloud #Delta Lake
Role description

Experis IT ManpowerGroup has partnered with a leading financial services company to fill a Data Engineer role supporting their team. This is an on-site role.

Industry: Financial services

Title: Data Engineer

Location: Irving, TX

Duration: 6+ months

Pay: $57/hr (W2)

Job Description

We seek a skilled Data Engineer with expertise in ETL/ELT, data pipeline architecture, and cloud-based data solutions. The ideal candidate will have strong experience with SQL, Python/Spark, and a range of data pipeline tools. This role involves designing and optimizing data pipelines for reporting and operational analytics, working with cloud-based platforms, and integrating structured and unstructured data from multiple sources.
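
As a minimal sketch of the pipeline work this describes, the PySpark job below reads a structured source, aggregates it, and writes a partitioned reporting table. The bucket paths, column names, and aggregation logic are hypothetical placeholders, not the employer's actual stack.

```python
# Hypothetical ETL job: extract a raw table, transform it into a daily
# summary, and load a partitioned reporting table. Paths and columns are
# illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("reporting_etl").getOrCreate()

# Extract: structured source data (assumed path and schema).
orders = spark.read.parquet("gs://example-bucket/raw/orders/")

# Transform: filter, derive a date column, and aggregate for reporting.
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Load: write a date-partitioned analytical table.
(
    daily_totals.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("gs://example-bucket/curated/daily_totals/")
)
```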

Key Responsibilities

   • Design, develop, and optimize ETL/ELT data pipelines for operational and analytical applications.

   • Work with SQL, Python, and Spark to process large datasets efficiently.

   • Architect and implement data pipelines using open-source and cloud-based tools.

   • Manage and optimize Delta Lake for both operational and analytical data stores (see the upsert sketch after this list).

   • Translate and modernize legacy ETL code from tools like SSIS, Ab Initio, and SAS into cloud-native solutions.

   • Ensure data quality, integrity, and security while working with large-scale datasets.

   • Collaborate with business and technical teams to develop reporting and downstream applications.

   • Leverage cloud platforms (preferably GCP) to build scalable data solutions.

   • Troubleshoot and optimize data pipelines to ensure high performance and reliability.

   • Stay updated with the latest big data technologies and cloud advancements.
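
Since the responsibilities above call out managing Delta Lake for operational and analytical stores, here is a hedged sketch of a routine upsert (merge) into a Delta table. The table path, staging source, and customer_id key are illustrative assumptions, not the team's actual design.

```python
# Hypothetical Delta Lake upsert: merge staged changes into an operational
# table. Requires the delta-spark package; paths and keys are assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delta_upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

updates = spark.read.parquet("gs://example-bucket/staging/customers/")
target = DeltaTable.forPath(spark, "gs://example-bucket/delta/customers")

# Update rows that match on the key, insert the rest.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```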

Required Qualifications

   • Strong proficiency in SQL, Python, and Spark.

   • Experience with ETL/ELT processes and data pipeline orchestration tools.

   • Hands-on experience with cloud platforms (GCP preferred, AWS/Azure a plus).

   • Solid understanding of data warehouse concepts, Delta Lake, and reporting solutions.

   • Experience with legacy ETL tools like SSIS, Ab Initio, and SAS is a plus.

   • Strong problem-solving skills, critical thinking, and the ability to adapt to changes.

   • Experience working with large-scale datasets in a distributed computing environment.

   • Familiarity with CI/CD for data pipelines and best practices in data security & compliance (a small quality-gate sketch follows this list).
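
To make the CI/CD point concrete, a small data-quality gate like the sketch below can run as a pipeline step before curated data is published. The table path, key column, and checks are assumptions for illustration only.

```python
# Hypothetical data-quality gate for a CI step: fail the run if the curated
# table is empty or contains null partition keys. Path and column names are
# illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("gs://example-bucket/curated/daily_totals/")

# Fail fast so bad data never reaches downstream reports.
assert df.count() > 0, "curated table is empty"

null_keys = df.filter(F.col("order_date").isNull()).count()
assert null_keys == 0, f"{null_keys} rows have a null order_date"
```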

Preferred Qualifications

   • Experience with BigQuery, Dataflow, Dataproc, and Pub/Sub on Google Cloud Platform (GCP).

   • Knowledge of modern data lake architectures and real-time data streaming.

   • Experience with Apache Airflow, dbt, or other workflow orchestration tools (a minimal DAG sketch follows).
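
To illustrate the orchestration point, below is a minimal Apache Airflow DAG wiring extract, transform, and load tasks in sequence. The DAG id, schedule, and task bodies are placeholders, not a prescribed design; it assumes Airflow 2.4+ for the `schedule` parameter.

```python
# Hypothetical daily pipeline DAG; task bodies are stand-ins for real
# extract/transform/load logic. Assumes Apache Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull from source systems")


def transform():
    print("run Spark/dbt transformations")


def load():
    print("publish curated tables")


with DAG(
    dag_id="reporting_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```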