
DataHub Developer

This role is for a DataHub Developer on a 6-month contract at a pay rate of "$X/hour". Key skills include DataHub, Apache Spark, Java, Python, and AWS. It requires 5+ years in metadata management and open-source contribution experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 15, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Austin, TX
🧠 - Skills detailed
#Python #GDPR (General Data Protection Regulation) #Data Lineage #Data Governance #Spark (Apache Spark) #Data Processing #Data Ingestion #Data Lake #Terraform #API (Application Programming Interface) #Java #Collibra #Scala #Documentation #Apache Spark #Anomaly Detection #ML (Machine Learning) #Batch #Data Management #ETL (Extract, Transform, Load) #REST (Representational State Transfer) #Data Catalog #Metadata #Databases #Compliance #REST API #Alation #AWS (Amazon Web Services) #Data Enrichment #Version Control
Role description

Position Overview

We are looking for an experienced DataHub Developer with committer experience to join our team and contribute to the design, development, and optimization of enterprise metadata management and data lineage solutions. The ideal candidate will have strong expertise in data cataloging, data lineage, and data governance, plus hands-on experience with DataHub, Spark-based frameworks, and machine learning for anomaly detection. This role demands a mix of open-source contribution, technical problem-solving, and metadata management expertise.

Key Responsibilities

  1. DataHub Development and Integration
    • Lead projects involving metadata cataloging using the DataHub open-source framework.
    • Design and develop custom APIs to integrate ETL pipelines and enable real-time metadata ingestion.
    • Ingest metadata from multiple systems, including data lakes and upstream and downstream systems, to provide a holistic metadata ecosystem.
    • Customize and extend DataHub to enrich impact analysis by identifying pipelines reading/writing to data assets.
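To give a feel for the first responsibility area, here is a minimal sketch of assembling a metadata change proposal for real-time ingestion. The dataset names, custom properties, and simplified payload schema are illustrative assumptions, not this project's actual API; a production implementation would typically use DataHub's Python emitter SDK to send such proposals to the GMS REST endpoint.

```python
import json

def make_dataset_urn(platform: str, name: str, env: str = "PROD") -> str:
    """Build a DataHub-style dataset URN (this is DataHub's real URN convention)."""
    return f"urn:li:dataset:(urn:li:dataPlatform:{platform},{name},{env})"

def build_properties_proposal(urn: str, description: str, custom: dict) -> dict:
    """Shape a metadata change proposal for the datasetProperties aspect.
    The JSON structure here is simplified for illustration."""
    return {
        "entityType": "dataset",
        "entityUrn": urn,
        "aspectName": "datasetProperties",
        "aspect": {"description": description, "customProperties": custom},
    }

# Hypothetical dataset produced by a Spark pipeline.
urn = make_dataset_urn("spark", "lakehouse.orders_enriched")
proposal = build_properties_proposal(
    urn,
    description="Orders enriched by the nightly Spark pipeline",
    custom={"owner_team": "data-platform"},
)
print(json.dumps(proposal, indent=2))  # payload a REST emitter would send
```

In a real pipeline, an emitter would POST each proposal as the pipeline runs, which is what makes the catalog "real-time" rather than batch-refreshed.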

  2. Data Lineage and Governance Implementation
    • Provide end-to-end data lineage solutions for PII identification, governance, and compliance reporting.
    • Develop and implement processes to enhance impact analysis and ensure seamless data governance practices.
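The impact-analysis half of this work reduces to graph traversal over lineage edges. The sketch below (dataset names invented for illustration) walks a toy lineage graph to find every downstream asset that inherits a PII classification from its source:

```python
from collections import deque

# Toy lineage graph: dataset -> datasets that read from it (downstream).
LINEAGE = {
    "raw.customers": ["staging.customers_clean"],
    "staging.customers_clean": ["marts.customer_360", "ml.churn_features"],
    "marts.customer_360": ["reports.kpi_dashboard"],
}

def downstream_impact(source: str, graph: dict) -> set:
    """Breadth-first walk of the lineage graph: every dataset that
    transitively reads from `source` inherits its PII classification."""
    seen, queue = set(), deque([source])
    while queue:
        node = queue.popleft()
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

impacted = downstream_impact("raw.customers", LINEAGE)
```

The same traversal, run against DataHub's lineage aspect instead of a dict, is the basis for compliance reporting: if `raw.customers` holds PII, all four downstream datasets must be governed accordingly.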

  3. Spark-Based Framework Development
    • Design, develop, and maintain Spark-based custom frameworks for config-as-code mechanisms to facilitate data enrichment and transfer.
    • Improve the performance and scalability of Spark applications to ensure seamless data processing.
    • Provide recommendations and guidance on the design and development of ETL pipelines using Spark.
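"Config-as-code" here means driving transformations from a declarative spec rather than hand-written pipeline code. The sketch below shows the idea with an invented config format; the step names and the string-valued "plan" stand in for the `withColumnRenamed`/`withColumn` calls a real Spark driver would apply to a DataFrame:

```python
import json

# Hypothetical config-as-code spec: which columns to enrich and how.
CONFIG = json.loads("""
{
  "source": "lakehouse.orders",
  "steps": [
    {"op": "rename", "from": "amt", "to": "amount_usd"},
    {"op": "derive", "column": "is_large", "expr": "amount_usd > 100"}
  ]
}
""")

def plan_from_config(config: dict) -> list:
    """Translate declarative steps into the DataFrame operations a Spark
    driver would apply (rendered as strings for illustration)."""
    plan = []
    for step in config["steps"]:
        if step["op"] == "rename":
            plan.append(f'withColumnRenamed("{step["from"]}", "{step["to"]}")')
        elif step["op"] == "derive":
            plan.append(f'withColumn("{step["column"]}", expr("{step["expr"]}"))')
    return plan

plan = plan_from_config(CONFIG)
```

The payoff of this design is that adding a new enrichment becomes a config change reviewed like code, while the framework owns performance and scalability concerns centrally.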

  4. Machine Learning Integration for Anomaly Detection
    • Collaborate with ML engineers to create features from profiled batch data.
    • Develop and integrate machine learning models for anomaly detection in data patterns.
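As a rough illustration of anomaly detection on profiled batch data, the sketch below flags days whose row count deviates sharply from the mean. The z-score rule, threshold, and sample counts are illustrative stand-ins for the ML models the role describes:

```python
import statistics

def flag_anomalies(daily_row_counts: list, threshold: float = 2.0) -> list:
    """Flag indices whose value lies more than `threshold` population
    standard deviations from the mean (simple z-score rule)."""
    mean = statistics.fmean(daily_row_counts)
    stdev = statistics.pstdev(daily_row_counts)
    if stdev == 0:
        return []
    return [i for i, n in enumerate(daily_row_counts)
            if abs(n - mean) / stdev > threshold]

counts = [1000, 1020, 990, 1005, 4000, 1010]  # day 4 is a spike
spikes = flag_anomalies(counts)  # [4]: only the day-4 spike is flagged
```

Features profiled from batch runs (row counts, null rates, distinct counts) feed the same shape of check; a learned model simply replaces the fixed z-score threshold.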

  5. AWS Cost Optimization and Platform Efficiency
    • Lead AWS cost optimization initiatives to enhance platform-wide efficiency.
    • Support Spark version upgrades and ensure the platform's scalability and performance.

  6. Community Engagement and Contributions
    • Act as a committer to the DataHub open-source community by contributing new features, fixing issues, and enhancing documentation.
    • Participate in open-source discussions, propose architectural improvements, and represent the organization in community events.

Required Qualifications
• Experience:
  • 5+ years in metadata management, data lineage, or data governance roles.
  • Proven track record as a committer or active contributor to the DataHub open-source project.
• Technical Skills:
  • Proficiency in Java, Python, and REST API development.
  • Strong experience with Apache Spark for ETL pipeline design and custom framework development.
  • Expertise in metadata ingestion from systems like data lakes, databases, and ETL tools.
  • Hands-on experience with AWS services and cost optimization strategies.
  • Familiarity with machine learning techniques for anomaly detection.
• Other Skills:
  • Strong analytical and problem-solving skills.
  • Excellent communication and collaboration abilities.

Preferred Qualifications
• Knowledge of data governance regulations like GDPR, CCPA, or HIPAA.
• Experience with infrastructure-as-code tools such as Terraform or Helm.
• Familiarity with other metadata management tools like Amundsen, Collibra, or Alation.
• Understanding of version control, CI/CD pipelines, and open-source development practices.