
Lead Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer on a 1-year contract, remote (PST time zone), offering competitive pay. Required skills include MapReduce, Spark (PySpark/Scala), ETL, and SQL, plus experience with large-scale data pipelines and real-time processing.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 1, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#AWS (Amazon Web Services) #HDFS (Hadoop Distributed File System) #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #MongoDB #NoSQL #Data Pipeline #Database Systems #Cloud #Scala #PySpark #S3 (Amazon Simple Storage Service) #DynamoDB #Lambda (AWS Lambda) #Datadog #Kafka (Apache Kafka) #Apache Spark #PostgreSQL #Computer Science #Big Data #Java #API (Application Programming Interface) #Data Engineering #Athena
Role description

Lead Data Engineer

Remote (PST Time zone)

1 Year

Primary Skills

MapReduce, HDFS, Spark (PySpark), ETL Fundamentals, SQL (Basic + Advanced), Spark (Scala), Python, Data Warehousing, Hive, Modern Data Platform Fundamentals, Data Modelling Fundamentals, PL/SQL, T-SQL, Stored Procedures, Oozie
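As a quick refresher on the MapReduce fundamentals named above, here is a minimal word-count sketch in plain Python (no Hadoop cluster involved; the function names `map_phase` and `reduce_phase` are illustrative, not part of any framework):

```python
from itertools import groupby

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input record.
    for record in records:
        for word in record.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle + reduce: sort pairs by key, group them, and sum each group.
    counts = {}
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        counts[key] = sum(v for _, v in group)
    return counts

records = ["spark and kafka", "spark and flink"]
print(reduce_phase(map_phase(records)))
# {'and': 2, 'flink': 1, 'kafka': 1, 'spark': 2}
```

The same map/shuffle/reduce shape scales out in Hadoop or Spark, where the framework handles partitioning and distribution.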

Job Description:

As a Lead Data Engineer, you'll be responsible for building high-performance, scalable data solutions that meet the needs of millions of agents, brokers, home buyers, and sellers.

You’ll design, develop, and test robust, scalable data platform components.

You'll work with a variety of teams and individuals, including product engineers, to understand their data pipeline needs and design innovative solutions.

You'll work with a team of talented engineers and collaborate with product managers and designers to help define new data products and features.

Skills, accomplishments, and interests you should have:

BS/MS in Computer Science, Engineering, or related technical discipline or equivalent combination of training and experience.

7+ years of core Scala/Java experience building business logic layers and high-volume, low-latency big data pipelines.

5+ years of experience in large-scale real-time stream processing using Apache Flink or Apache Spark, with messaging infrastructure such as Kafka or Pulsar.
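The core idea behind the windowed stream processing mentioned above can be shown without Flink or Spark: assign each event to a fixed-size tumbling window by its timestamp, then aggregate per window. This is a conceptual sketch in plain Python; the function name and event format are invented for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Assign (timestamp, key) events to fixed-size tumbling windows
    and count occurrences of each key within each window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
print(tumbling_window_counts(events, 5))
# {0: {'click': 2}, 5: {'view': 1}, 10: {'click': 1}}
```

Engines like Flink and Spark Structured Streaming add what this sketch omits: distributed state, event-time watermarks, and late-data handling.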

7+ years of experience in data pipeline development, ETL, and processing of structured and unstructured data.
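A compact example of the extract/transform/load pattern referenced above, using only the Python standard library (the inline CSV data and table name are made up for illustration):

```python
import csv
import io
import sqlite3

RAW = "id,price\n1, 10.5 \n2,\n3,7.0\n"  # messy extract: whitespace, missing value

def etl(raw_csv):
    # Extract: parse the raw CSV text.
    rows = csv.DictReader(io.StringIO(raw_csv))
    # Transform: drop rows with a missing price and cast fields to proper types.
    clean = [(int(r["id"]), float(r["price"])) for r in rows if r["price"].strip()]
    # Load: write the cleaned rows into a relational table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE prices (id INTEGER, price REAL)")
    db.executemany("INSERT INTO prices VALUES (?, ?)", clean)
    return db.execute("SELECT COUNT(*), SUM(price) FROM prices").fetchone()

print(etl(RAW))
# (2, 17.5)
```

Production pipelines swap the in-memory pieces for real sources and sinks (S3, Kafka, PostgreSQL) and add scheduling, retries, and monitoring, but the three stages stay the same.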

5+ years of experience with NoSQL systems such as MongoDB and DynamoDB, relational database systems such as PostgreSQL, and Athena.

Experience with technologies such as Lambda, API Gateway, AWS Fargate, ECS, CloudWatch, S3, and Datadog.

Experience owning and implementing technical/data solutions or pipelines.

Excellent written and verbal communication skills in English.

Strong work ethic and entrepreneurial spirit.