
Data Warehouse Engineer

This role is for a Data Warehouse Engineer; the contract length and pay rate are unspecified, and the role is listed as remote. It requires 5+ years of experience with DBMS, RDBMS, and ETL methodologies, advanced SQL, Python, and familiarity with Hadoop.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 13, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Princeton, NJ
🧠 - Skills detailed
#Datasets #Kafka (Apache Kafka) #Database Design #EDW (Enterprise Data Warehouse) #Computer Science #Data Pipeline #Data Modeling #Programming #Unix #Airflow #BI (Business Intelligence) #Spark (Apache Spark) #Data Warehouse #Microsoft Power BI #Tableau #Linux #HDFS (Hadoop Distributed File System) #Scala #ETL (Extract, Transform, Load) #Hadoop #PySpark #SQL (Structured Query Language) #RDBMS (Relational Database Management System) #Python
Role description

The Enterprise Data Warehouse and Business Intelligence team is looking for a proven senior engineer! Our fast-growing, diverse team of engineers is responsible for building scalable ingestion systems that handle and prepare petabytes of data for reporting, dashboards, and advanced analytics.

  • You will collaborate with cross-functional teams to enhance and ingest data from internal and external sources

  • You will identify opportunities to simplify and automate workflows

  • You will translate business requirements into robust and scalable data pipelines for key business metrics

  • You will use technologies such as Hadoop, Spark, Hive, Kafka, and more (see the PySpark sketch after this list)

  • You will work with structured and unstructured datasets
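
For a flavor of the day-to-day work, here is a minimal PySpark ingestion sketch in the spirit of the bullets above. All paths, column names, and the target table are hypothetical, not taken from the posting.

```python
# Minimal, hypothetical PySpark ingestion sketch: read raw landing-zone
# files, apply light cleanup, and write a partitioned warehouse table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("edw-ingest-sketch").getOrCreate()

raw = spark.read.json("hdfs:///landing/events/2025-02-13/")  # hypothetical path

cleaned = (
    raw
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates(["event_id"])            # keep re-runs idempotent
    .filter(F.col("event_id").isNotNull())
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .format("parquet")
    .saveAsTable("edw.events_cleaned")       # hypothetical EDW table
)
```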

You will need to have:

  • 5+ years of experience with DBMS, RDBMS, and ETL methodologies

  • Experience building automated, scalable architectures in an enterprise setting

  • Advanced SQL capabilities are required; knowledge of database design and experience working with extremely large data volumes are a plus

  • Programming experience in Python and PySpark

  • Familiarity with the Hadoop ecosystem (HDFS, Spark, Oozie)

  • Strong understanding of data warehousing methodologies, ETL processing, and dimensional data modeling

  • Strong problem-solving and troubleshooting skills

  • Knowledge of Airflow is a plus (see the DAG sketch after this list)

  • BA, BS, MS, or PhD in Computer Science, Engineering, or a related technology field
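
Since Airflow is called out as a plus, here is a minimal sketch of a daily DAG that kicks off a Spark ingestion job, assuming Airflow 2.x. The DAG id, task id, script path, and spark-submit arguments are all hypothetical.

```python
# Minimal, hypothetical Airflow 2.x DAG: run a daily Spark ingestion job.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="edw_daily_ingest",          # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                  # Airflow 2.4+ spelling
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
):
    BashOperator(
        task_id="spark_ingest",
        bash_command=(
            "spark-submit --master yarn "
            "/opt/jobs/ingest_events.py --ds {{ ds }}"  # hypothetical script
        ),
    )
```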

Nice to Have:

  • Knowledge of MPP systems

  • Knowledge of streaming technologies like Kafka (see the Structured Streaming sketch after this list)

  • Knowledge of business intelligence reporting tools such as Qlik Sense, Tableau, Power BI, or Cognos

  • Experience working in a UNIX or Linux development environment
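
On the streaming side, this is a minimal sketch of consuming a Kafka topic with Spark Structured Streaming and landing it in the warehouse. The broker address, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector is on the Spark classpath.

```python
# Minimal, hypothetical Kafka consumer via Spark Structured Streaming.
# Requires the spark-sql-kafka connector package at submit time.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

query = (
    stream.writeStream
    .format("parquet")
    .option("path", "hdfs:///edw/streams/events/")            # hypothetical sink
    .option("checkpointLocation", "hdfs:///edw/chk/events/")  # recovery state
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```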