

Data Engineer
Job Summary:
We are seeking a highly skilled Data Engineer with strong expertise in Snowflake, DBT (Data Build Tool), and IBM DataStage to join our data team. The ideal candidate will be responsible for designing, building, and optimizing scalable data pipelines while ensuring high data quality and performance. In this role, you will collaborate closely with data analysts, data scientists, and business teams to support data-driven decision-making.
Key Responsibilities:
• Design, develop, and maintain scalable and efficient ETL/ELT pipelines using Snowflake, DBT, and DataStage.
• Optimize data warehouse performance, including query tuning and cost management.
• Develop and implement data transformation models using DBT.
• Manage and orchestrate data workflows and schedules to ensure seamless data movement.
• Integrate various data sources (structured and unstructured) into the data platform.
• Implement and enforce data governance, security, and compliance best practices.
• Collaborate with stakeholders to understand business needs and translate them into data solutions.
• Monitor and troubleshoot data pipelines, ensuring reliability and accuracy.
• Work on CI/CD pipelines for data integration and deployment automation.
Required Qualifications:
• Experience in Data Engineering or a related field.
• Strong experience in Snowflake, including schema design, performance tuning, and cost optimization.
• Proficiency in DBT for data modeling, transformations, and testing.
• Hands-on experience with IBM DataStage for ETL development and management.
• Experience with SQL and Python for data processing and automation.
• Familiarity with cloud platforms (AWS, Azure, or GCP) and data orchestration tools (Airflow preferred).
• Knowledge of data warehouse best practices, data lakes, and data modeling techniques.
• Experience working with version control systems (Git) and CI/CD pipelines.
• Strong problem-solving and communication skills.
Preferred Qualifications:
• Experience with Kafka, Spark, or other streaming technologies.
• Knowledge of APIs and microservices architecture for data integration.
• Exposure to Machine Learning pipelines and analytics frameworks.
• Experience with DataOps practices and Agile methodologies.