
Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 12+ month contract at $90/hour W2, hybrid in Glendale, CA. Requires 7+ years of data engineering experience, proficiency in Python or Java, strong SQL skills, and familiarity with AWS and data pipeline orchestration.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
720
🗓️ - Date discovered
March 29, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Glendale, CA
🧠 - Skills detailed
#Datasets #Spark (Apache Spark) #Data Science #Computer Science #Python #Data Governance #Agile #Data Pipeline #Airflow #Infrastructure as Code (IaC) #Java #SQL (Structured Query Language) #Scrum #Snowflake #Data Modeling #Delta Lake #Data Engineering #Data Quality #Kafka (Apache Kafka) #AWS (Amazon Web Services) #Documentation #Scala #Programming #Databricks #Data Processing #ETL (Extract, Transform, Load) #Cloud
Role description

Senior Data Engineer

Long-term contract role (12+ months)

$90/hour W2

   • Hybrid (2 days/week on-site) in Glendale, CA; candidates who are not local to the greater Los Angeles area are not eligible for consideration

As a Senior Data Engineer, you will play a pivotal role in the transformation of data into actionable insights. Collaborate with our dynamic team of technologists to develop cutting-edge data solutions that drive innovation and fuel business growth. Your responsibilities will include managing complex data structures and delivering scalable and efficient data solutions. Your expertise in data engineering will be crucial in optimizing our data-driven decision-making processes. If you're passionate about leveraging data to make a tangible impact, we welcome you to join us in shaping the future of our organization.

Key Responsibilities:

   • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines

   • Build tools and services to support data discovery, lineage, governance, and privacy

   • Collaborate with other software/data engineers and cross-functional teams

   • Build and maintain APIs to expose data to downstream applications

   • Develop real-time streaming data pipelines

   • Tech stack includes Airflow, Spark, Databricks, Delta Lake, Snowflake, Graph Database, Kafka

   • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform

   • Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more

   • Ensure high operational efficiency and quality of the Core Data platform datasets so that our solutions meet SLAs and deliver reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)

   • Be an active participant and advocate of agile/scrum ceremonies to collaborate and improve processes for our team

   • Maintain detailed documentation of your work and changes to support data quality and data governance requirements

   • Address urgent production issues in a timely manner, including during non-standard working hours

   • Experience working with studio production data a plus

Qualifications:

   • 7+ years of data engineering experience developing large data pipelines

   • Proficiency in at least one major programming language (e.g., Python, Java, Kotlin)

   • Strong SQL skills and ability to create queries to analyze and extract complex datasets

   • Hands-on production environment experience with distributed processing systems such as Spark

   • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines

   • Deep understanding of AWS or other cloud providers, as well as infrastructure as code

   • Familiarity with Data Modeling techniques and Data Warehousing standard methodologies and practices

   • Strong algorithmic problem-solving expertise

   • Excellent written and verbal communication

   • Advanced understanding of OLTP vs. OLAP environments

   • Willingness and ability to learn and pick up new skill sets

   • Self-starting problem solver with an eye for detail and excellent analytical and communication skills

   • Strong background in at least one of the following: distributed data processing or software engineering of data services, or data modeling

   • Graph Database experience a plus

   • Real-time event streaming experience a plus

   • Familiarity with Scrum and Agile methodologies

Education: Bachelor’s degree in Computer Science or Information Systems, or equivalent industry experience