Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Texas, on a contract-to-hire basis, offering competitive pay. Required skills include expertise in AI/ML, proficiency in Python/Java/Scala, and experience with data tools like Apache Spark and cloud platforms (AWS, Azure, GCP).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
960
🗓️ - Date discovered
April 7, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Texas, United States
🧠 - Skills detailed
#Computer Science #Terraform #Docker #Data Quality #Infrastructure as Code (IaC) #Redshift #"ETL (Extract, Transform, Load)" #Programming #Apache Spark #Git #Spark (Apache Spark) #Mathematics #Airflow #AWS (Amazon Web Services) #Python #Data Lake #Java #Scala #Security #GCP (Google Cloud Platform) #Data Governance #Compliance #S3 (Amazon Simple Storage Service) #Scripting #Kafka (Apache Kafka) #Leadership #Data Science #Cloud #BigQuery #Storage #DevOps #Azure #Version Control #AI (Artificial Intelligence) #Data Engineering #Data Pipeline #Code Reviews #Data Modeling #ML (Machine Learning) #Databases #SQL (Structured Query Language) #Data Architecture
Role description
Job Title: Senior Data Engineer

Location: Texas

Industry: Financial Services

Employment Type: Contract-To-Hire

About Us:

We are working with an investment business focused on innovation and data-driven financial solutions. We are seeking a Senior Data Engineer with expertise in AI/ML to shape and implement advanced technologies that drive business growth and optimize investment strategies.

Key Responsibilities:

   • Data Architecture Design - Design and maintain scalable, reliable, and secure data pipelines and architectures to support business analytics, machine learning, and operational needs.

   • ETL/ELT Pipeline Development - Build and optimize complex Extract, Transform, Load (ETL) or ELT workflows for ingesting data from diverse sources into data lakes, warehouses, or other storage systems.

   • Data Quality and Governance - Implement data validation, quality checks, and ensure compliance with data governance policies, security protocols, and best practices.

   • Collaboration with Cross-Functional Teams - Work closely with data scientists, analysts, and software engineers to understand data needs and deliver reliable data solutions that align with business goals.

   • Mentorship and Leadership - Mentor junior engineers, contribute to code reviews, set engineering standards, and help shape the team’s technical roadmap.

Required Qualifications:

   • Educational Background - Bachelor's or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.

   • Proficiency in Programming Languages - Strong experience with languages like Python, Java, or Scala, especially for data pipeline development and scripting.

   • Expertise in Data Tools & Technologies - Deep knowledge of tools such as Apache Spark, Kafka, Airflow, SQL, and experience with cloud platforms like AWS, Azure, or GCP (e.g., BigQuery, Redshift, S3).

   • Database & Data Modeling Skills - Strong understanding of relational and non-relational databases, data modeling, and performance tuning.

   • Experience with CI/CD and DevOps Practices - Familiarity with version control (e.g., Git), containerization (Docker), infrastructure as code (e.g., Terraform), and CI/CD workflows.

If this looks like something you are interested in, please apply with your CV and we will be in touch.