
Sr Data Engineer

This role is for a Sr Data Engineer in Glendale, CA, on an 18-month W2 contract. It requires 7+ years of experience; expertise in Airflow, Apache Spark, and Snowflake or Databricks; and a background in data pipeline development.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
680
🗓️ - Date discovered
February 22, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Glendale, CA
🧠 - Skills detailed
#BigQuery #Data Science #Snowflake #PySpark #Data Modeling #Graph Databases #Databases #Datasets #AWS (Amazon Web Services) #Scala #Apache Spark #Storytelling #Data Pipeline #Programming #Java #Airflow #Documentation #Spark (Apache Spark) #Python #API (Application Programming Interface) #Data Access #Cloud #GraphQL #Consul #Databricks #Data Engineering #Delta Lake
Role description

Join the Team at YO HR CONSULTANCY!

Senior Data Engineer

Experience: 7-25 years

Location: Glendale, CA – onsite 3-4 days a week (W2)

Contract Duration: 18 months (W2)

Essential Skills Required
• Airflow
• Apache Spark
• Snowflake OR Databricks
• Data Modeling
• In-depth knowledge of and hands-on experience with PySpark and Databricks (see the sketch after this list)
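
For illustration only (not part of the original posting): a minimal PySpark sketch of the kind of batch transformation these skills cover, reading raw events and writing a partitioned Delta table. All paths and column names are invented, and it assumes a Databricks or otherwise Delta-enabled Spark session.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-batch-pipeline").getOrCreate()

# Read raw events (hypothetical path), deduplicate, and stamp a processing date.
events = (
    spark.read.format("delta").load("/mnt/raw/events")
    .dropDuplicates(["event_id"])
    .withColumn("processed_date", F.current_date())
)

# Write the cleaned result back as a date-partitioned Delta table (hypothetical path).
(
    events.write.format("delta")
    .mode("overwrite")
    .partitionBy("processed_date")
    .save("/mnt/curated/events")
)
```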

About Us

Based in Los Angeles, our company is a frontrunner in the Entertainment & Media sector, dedicated to providing top-notch storytelling and experiences for a worldwide audience. Our technology teams prioritize innovation and leverage advanced technology to deliver the best entertainment.

Technology Stack

Your work will involve technologies such as Python, AWS, Snowflake, Databricks, and Airflow.
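
As a hedged sketch of how this stack fits together (not from the posting): a minimal Airflow DAG with two placeholder tasks, assuming Airflow 2.4+ for the `schedule` argument. The DAG id and task bodies are invented for the example.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task bodies; real pipelines would extract and load actual datasets.
def extract():
    print("pull source data")

def load():
    print("write to the warehouse")

with DAG(
    dag_id="example_core_data_pipeline",  # invented name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
):
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```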

Your Role as a Senior Data Engineer
• Assist in maintaining, updating, and enhancing current Core Data platform data pipelines
• Design and manage APIs to make data accessible for downstream applications
• Build and maintain real-time streaming data pipelines (see the streaming sketch after this list)
• Utilize a tech stack that includes Airflow, Spark, Databricks, Delta Lake, and Snowflake
• Work alongside product managers, architects, and fellow engineers to ensure the effectiveness of the Core Data platform
• Help create and document internal and external standards and best practices for pipeline configurations and naming conventions
• Ensure high operational efficiency and quality for Core Data platform datasets, so that solutions meet SLAs and remain reliable and accurate for all stakeholders (Engineering, Data Science, Operations, and Analytics teams)
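
A hedged sketch of the real-time streaming work described above, using Spark Structured Streaming to read from Kafka and land events in a Delta table. The broker, topic, and paths are invented, and it assumes the spark-sql-kafka connector and Delta Lake are available to the session.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example-streaming-pipeline").getOrCreate()

# Subscribe to a Kafka topic (hypothetical broker and topic names).
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Continuously append to a Delta table; the checkpoint tracks progress
# so the query can restart without reprocessing (hypothetical paths).
query = (
    stream.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .start("/mnt/streaming/events")
)
query.awaitTermination()
```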

Qualifications

We would love to have you on board if you possess:
• A minimum of 7 years in data engineering, specifically developing large-scale data pipelines
• Expertise in at least one major programming language (e.g., Python, Java, Scala)
• Practical production experience with distributed processing frameworks such as Spark
• Production experience using data pipeline orchestration tools like Airflow for designing and managing data pipelines
• Experience with at least one primary Massively Parallel Processing (MPP) or cloud database solution (Snowflake, Databricks, BigQuery)
• Background in API development with GraphQL (a minimal sketch follows this list)
• Strong understanding of OLTP versus OLAP systems (e.g., a transactional application database versus an analytical warehouse like Snowflake)
• Experience with Graph Databases is a plus
• Knowledge in Real-time Event Streaming is an added advantage
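
For illustration, a minimal GraphQL API sketch in Python using the graphene library (one common choice; the posting does not name a framework). The schema, field, and lookup data are invented.

```python
import graphene

# Invented stand-in for a real dataset catalog.
OWNERS = {"core_events": "data-platform-team"}

class Query(graphene.ObjectType):
    # One field: look up a dataset's owning team by name.
    dataset_owner = graphene.String(dataset_name=graphene.String(required=True))

    def resolve_dataset_owner(root, info, dataset_name):
        return OWNERS.get(dataset_name)

schema = graphene.Schema(query=Query)

# graphene exposes snake_case names as camelCase in the schema.
result = schema.execute('{ datasetOwner(datasetName: "core_events") }')
print(result.data)  # {'datasetOwner': 'data-platform-team'}
```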

Skills: core data, aws, apis, data, apache spark, python, pyspark, api development, data modeling, snowflake, spark, airflow, pipelines, database, delta lake, graphql, databricks