

Position: Data Engineer
Contract Duration: 4 months
Location: (preferred locations: Pike & Rose, MD or Scottsdale, AZ; 4 days/week in office, Monday-Thursday)
Job Overview:
A large hospitality company is seeking a highly skilled Data Engineer to join a dynamic team working on an exciting data pipeline project. The ideal candidate will be responsible for building and optimizing data pipelines, working with large data sets, and collaborating with teams to ensure proper data flow and reporting capabilities.
Key Responsibilities:
• Data Pipeline Development: Lead the effort to build and optimize a new data pipeline that will support reporting and analytics for the brand's hotels and other properties.
• Data Integration: Work within an existing enterprise data structure (Redshift) to integrate third-party APIs and other data sources, ensuring data is properly transformed, validated, and usable for reporting.
• Collaboration with Reporting Teams: Understand and work closely with reporting teams to gather requirements and build meaningful datasets that meet business reporting needs.
• Data Validation: Ensure that the data flow is accurate, validated, and aligned with reporting needs. Collaborate with the IDS platform team to integrate pricing and other data that will feed into enterprise systems.
• Data Transformation and Querying: Perform data transformations using complex SQL queries within Redshift to prepare data for consumption by downstream teams.
• Automation and Scheduling: Implement automation using Airflow to schedule data processing and pipeline workflows.
• Continuous Improvement: Collaborate with a small but highly skilled team, contributing to best practices and improvements in data engineering processes.
Skills & Qualifications:
• SQL Expertise: Strong proficiency in SQL (business logic will be implemented in SQL).
• Python: Ability to use Python for data processing, automation, and scripting.
• Redshift: Experience working with Redshift, including managing databases and optimizing queries.
• Airflow: Familiarity with using Airflow to schedule and orchestrate data pipelines.
• Glue: Experience working with AWS Glue for ETL tasks is a plus.
• Data Flow Knowledge: Understanding of how enterprise data layers (e.g., user layer, enterprise layer) work together, and how data is validated and transformed across layers.
• Reporting Collaboration: Experience working with reporting teams to ensure data is structured and validated for accurate reporting.
Team Structure:
• Immediate Team: You will be working alongside a Data Engineering Manager and two other Data Engineers. The Data Visualization team (3-4 members) will support dashboard creation and reporting.
• Project Team: Cross-functional collaboration with the DAP team and business intelligence teams will be key.
Interview Process:
Non-technical Round: A conversation with the hiring manager and one other team member.
Technical Round: An in-depth technical interview with a team of 2-3 data engineers to evaluate your technical skills, problem-solving abilities, and knowledge in data engineering.