

Data Engineer
We are seeking a skilled Data Engineer to join our growing team and design, develop, and maintain scalable data pipelines and architectures. The ideal candidate will have hands-on experience with Azure Synapse, Python, PySpark, SQL, and Azure Data Factory, along with a deep understanding of cloud-based data engineering practices.
Key Responsibilities:
• Design, build, and manage scalable and efficient data pipelines using Azure Synapse and Azure Data Factory
• Develop and optimize ETL/ELT workflows using PySpark and SQL
• Work with stakeholders to understand data requirements and deliver solutions that support data-driven decision-making
• Collaborate with data scientists, analysts, and business users to ensure data quality, governance, and integrity
• Implement data transformation, cleansing, and validation processes
• Monitor and troubleshoot data workflows and optimize performance
• Ensure secure data practices and compliance with company policies and regulations
Required Skills & Qualifications:
• Strong proficiency in Azure Synapse Analytics and Azure Data Factory
• Expertise in Python and PySpark for data processing and transformation
• Advanced SQL skills for data querying, transformation, and performance tuning
• Experience with data lake and data warehouse architectures
• Familiarity with Delta Lake, Azure Blob Storage, and Azure Data Lake Storage Gen2
• Solid understanding of CI/CD pipelines and version control using Git
• Experience with data modeling and schema design
• Good grasp of data governance, security, and compliance standards
Preferred Qualifications:
• Experience with Azure DevOps, Databricks, or other cloud-based data platforms
• Knowledge of Apache Airflow or other orchestration tools
• Familiarity with Power BI or other data visualization tools
• Experience in handling large-scale, real-time data processing pipelines
• Understanding of REST APIs and integrating external data sources