

Data Engineer
Job Title: Data Engineer
Location: Remote
Must-have skills for this role: Databricks, Python
Years of experience: 5
Education required for this position: Professional Degree
Work location: Remote
Must the resource be local: No
Nice-to-have skills: GitHub, Terraform, AWS services
Job Description
Client & Project: We are seeking new talent to join the Data & Integration team, where you will have the opportunity to collaborate on a project specializing in healthcare services.
Responsibilities: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing.
Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Expected to act as a subject matter expert (SME) and collaborate closely with the team.
Engage with multiple teams and contribute to key decisions.
Expert proficiency in the Databricks platform and Python is required. Advanced proficiency in GitHub and Terraform, and intermediate proficiency in AWS cloud computing, are preferred.
Develop and optimize data pipelines to enhance efficiency.
Collaborate with cross-functional teams to integrate data solutions.
Implement best practices for data quality assurance.
Contribute to the continuous improvement of data processes.