Data Engineer
The Data Engineer will develop high-quality data pipelines and ETL processes, and will be responsible for designing and implementing testable, scalable code.
Key Responsibilities:
• Develop and implement efficient data pipelines and ETL processes to migrate and manage client, investment, and accounting data in Databricks.
• Work closely with the investment management team to understand data structures and business requirements, ensuring data accuracy and quality.
• Monitor and troubleshoot data pipelines, ensuring high availability and reliability of data systems.
• Optimize database performance by designing scalable and cost-effective solutions.
Qualifications:
• Proficiency in Apache Spark and Databricks, including schema design, data partitioning, and query optimization.
• Experience with Azure.
• Exposure to streaming technologies (e.g., Auto Loader, Delta Live Tables streaming).
• Advanced SQL and data modeling skills, plus data warehousing concepts tailored to investment management data (e.g., transaction, accounting, portfolio, and reference data).
• Experience with ETL/ELT tools such as SnapLogic and with programming languages (e.g., Python, Scala, R).
• Familiarity with workload automation and job scheduling tools such as Control-M.
• Familiarity with data governance frameworks and security protocols.