
Azure Databricks Architect / Sr. Data Engineer with Salesforce Knowledge and Healthcare Background

This role is for an "Azure Databricks Architect / Sr. Data Engineer" with a contract length of "unknown," offering a pay rate of "unknown." Key skills required include "Azure, Python, SQL, Salesforce API, and healthcare data modeling." A minimum of "six years" experience is necessary.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 19, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Azure cloud #Storage #Microsoft SQL #Linux #Databricks #Data Science #Data Lake #Spark (Apache Spark) #Snowflake #API (Application Programming Interface) #Data Warehouse #Microsoft SQL Server #Data Engineering #Programming #Scala #Data Analysis #Azure #MS SQL (Microsoft SQL Server) #SQL (Structured Query Language) #Agile #ADF (Azure Data Factory) #PySpark #Data Quality #Data Ingestion #Data Modeling #Oracle #Project Management #SQL Server #Kafka (Apache Kafka) #Data Integration #Azure Data Factory #Data Processing #Airflow #DevOps #Azure Databricks #FHIR (Fast Healthcare Interoperability Resources) #Metadata #Microsoft Power BI #BI (Business Intelligence) #Cloud #Data Pipeline #ETL (Extract, Transform, Load) #Python #Security
Role description

Responsibilities

Responsible for developing, implementing, and operating stable, scalable, low-cost solutions that source data from client systems into the data lake, the data warehouse, and end-user-facing BI applications. Responsible for the ingestion, transformation, and integration of data to provide a platform that supports data analysis and enrichment, and for making data operationally available for analysis. The Data Engineer will be a data pipeline builder and data wrangler who supports application developers, database architects, data analysts, and data scientists on data initiatives, ensuring that the data delivery architecture is optimal and consistent across our ongoing projects. Essential Functions:
• Build and maintain scalable automated data pipelines. Support critical data pipelines with a highly scalable distributed architecture
• Knowledge of the Salesforce API to pull data from the Salesforce system into Databricks Delta tables, including data ingestion from Salesforce, data integration, and data curation.
• Build a scalable, metadata-driven framework that reuses notebooks/pipelines to move data from Salesforce to Databricks; the solution should support future Salesforce-to-Databricks data movement with minimal changes.
• Deploy, automate, maintain, and manage Azure cloud-based production systems, ensuring their availability, performance, scalability, and security.
• Strong architectural understanding to ensure customer success when building new solutions and migrating existing data applications to the Azure platform.
• Conduct full technical discovery, identifying pain points, business and technical requirements, "as is" and "to be" scenarios.
• Design scalable, highly available, and fault-tolerant systems on the Azure platform.
• Ownership of and responsibility for the end-to-end design, development, testing, and release of key components.
• Understand and implement best practices in data management, including master data, reference data, metadata, data quality, and lineage. Experience with code versioning tools and a command of configuration management concepts and tools, including CI/CD and DevOps. Other duties as assigned.
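The metadata-driven ingestion framework described above can be sketched in plain Python. This is a minimal illustration, not the employer's actual design: the `IngestionSpec` class, field names, and target table names are all hypothetical. The idea is that each Salesforce object to ingest is described by a metadata record, and one generic routine builds the extraction query, so onboarding a new object means adding metadata rather than writing a new pipeline.

```python
# Hypothetical sketch of a metadata-driven Salesforce-to-Databricks ingestion
# framework. All names (IngestionSpec, target tables) are illustrative only.
from dataclasses import dataclass


@dataclass
class IngestionSpec:
    sf_object: str        # Salesforce object API name, e.g. "Account"
    fields: list          # fields to extract from the object
    target_table: str     # Delta table the data would land in
    watermark_field: str = "SystemModstamp"  # incremental-load column


def build_soql(spec: IngestionSpec, since_iso: str) -> str:
    """Build an incremental SOQL query from a metadata spec."""
    cols = ", ".join(spec.fields)
    return (f"SELECT {cols} FROM {spec.sf_object} "
            f"WHERE {spec.watermark_field} > {since_iso}")


# Adding a new Salesforce object is a one-line metadata change:
specs = [
    IngestionSpec("Account", ["Id", "Name"], "raw.sf_account"),
    IngestionSpec("Contact", ["Id", "Email"], "raw.sf_contact"),
]

for spec in specs:
    soql = build_soql(spec, "2025-02-19T00:00:00Z")
    # A real pipeline would send this query to the Salesforce REST/Bulk API
    # and write the result to spec.target_table as a Delta table.
    print(soql)
```

In a production version, the spec list would typically live in a control table or config file read by a single parameterized notebook, which is what makes the framework reusable for future ingestions.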

Experience
• Expert level SQL knowledge and experience.
• Expert level experience with Python/Pyspark/Scala and Object Oriented programming.
• Experience with streaming integration and cloud-based data processing systems such as Kafka and Databricks.
• Hands-on knowledge of cloud-based data warehouse solutions such as Azure and Snowflake.
• Experience with Azure cloud architecture and solutions.
• Experience with Azure Data Lake Store, Blob Storage, VMs, Data Factory, SQL Data Warehouse, Azure Databricks, HDInsight, etc.
• Experience with data pipeline and workflow management tools: Azure Data Factory, Airflow, etc.
• Experience with Oracle and Microsoft SQL Server database systems.
• Experience working within Agile methodologies.
• Experience with Microsoft Windows and Linux virtual servers. Moderate skill in Power BI.

Other Skills
• Experience with healthcare data modeling standards, such as HL7 or FHIR is preferred.
• Experience in processing healthcare data sets (medical records, claims, clinical data etc.) is preferred.
• Strong project management skills.
• Strong problem-solving, decision-making and analytical skills.
• Excellent interpersonal and organizational skills; a team player who can effectively partner with all levels of the company.
• Detail oriented and organized.
• Ability to handle numerous assignments simultaneously.
• Ability to work independently and as part of a team.
• Bachelor's degree (BA or BS) from an accredited college or university plus a minimum of six (6) years of experience in the specific or related field.