Senior AI & Data Engineer (Scala, Spark, GCP)

This role is for a Senior AI & Data Engineer (Scala, Spark, GCP) in Sunnyvale, CA, on a W2 contract. Requires strong skills in Scala, Spark, GCP, ETL processes, and AI/ML integration, with a focus on data engineering and next-gen AI strategies.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 21, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Sunnyvale, CA
🧠 - Skills detailed
#Compliance #ETL (Extract, Transform, Load) #ML (Machine Learning) #Big Data #Data Storage #Java #Strategy #Data Science #Storage #Data Processing #Data Pipeline #Data Engineering #Cloud #Scala #GCP (Google Cloud Platform) #Spark (Apache Spark) #AI (Artificial Intelligence) #Data Quality
Role description

Role: Senior AI & Data Engineer (Scala, Spark, GCP).

Location: Sunnyvale, CA (hybrid, 2 days onsite)

Contract: W2.

Skills Required: Scala, Spark, GCP (preferred), LLM, AI.

Job Description:

We are seeking a highly skilled Senior AI & Data Engineer to join our team and work on a cutting-edge Marketing Technology Platform.

In this role, you will focus on data engineering while also exploring AI-driven innovations.

You will work with structured and unstructured data, perform ETL processes using Scala, Spark, and Java, and operate in a GCP-based environment leveraging Hive tables.

Additionally, you will play a key role in Next-Gen AI strategy, helping integrate LLM models into the platform. This role combines data engineering and data science, with a stronger emphasis on engineering.

Key Responsibilities:

Extract, transform, and load (ETL) data from structured and unstructured sources.

Develop scalable and efficient data pipelines using Scala, Spark, and Java.

Work within a GCP environment, leveraging Hive tables and cloud-based big data solutions.

Optimize and maintain high-performance data processing workflows.

Collaborate with AI/ML teams to integrate Large Language Models (LLMs) into the platform.

Explore and implement Next-Gen AI strategies for marketing technology applications.

Ensure data quality, governance, and compliance across all engineering tasks.
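To give a concrete sense of the day-to-day work, the ETL responsibilities above can be sketched as a minimal Spark job in Scala, assuming a Hive-enabled GCP environment (for example, Dataproc). The database, table, and column names here are hypothetical placeholders, not taken from the job description.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CampaignEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("campaign-etl")
      .enableHiveSupport() // read/write Hive tables in the GCP environment
      .getOrCreate()

    // Extract: read a structured Hive table (hypothetical name)
    val events = spark.table("marketing.raw_events")

    // Transform: basic cleaning and daily aggregation per campaign
    val daily = events
      .filter(col("event_ts").isNotNull)
      .groupBy(to_date(col("event_ts")).as("day"), col("campaign_id"))
      .agg(count("*").as("events"))

    // Load: write back to a curated Hive table
    daily.write.mode("overwrite").saveAsTable("marketing.daily_campaign_events")

    spark.stop()
  }
}
```

In practice a pipeline like this would be scheduled by an orchestrator and extended with data-quality checks, which is where the governance and compliance responsibilities above come in.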

Required Skills & Experience:

Strong experience in Scala, Spark, and Java for big data processing.

Hands-on experience working in a Google Cloud Platform (GCP) environment.

Proficiency in ETL development and data pipeline orchestration.

Experience handling structured and unstructured data.

Familiarity with Hive tables and distributed data storage solutions.

Understanding of AI/ML concepts, with experience in integrating LLMs preferred.

Strong problem-solving skills and ability to work in a fast-paced environment.