Contract Role: Palantir Data Engineer at Cary, NC or Remote

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a long-term contract for a Palantir Data Engineer based in Cary, NC or remote; the pay rate is unspecified. Key skills required include proficiency in Palantir Foundry, data engineering expertise, and familiarity with cloud and big data ecosystems.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 4, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Cary, NC
🧠 - Skills detailed
#Java #Scala #SQL (Structured Query Language) #Python #Data Engineering #Spark (Apache Spark) #Documentation #ETL (Extract, Transform, Load) #Data Integration #Azure #Oracle #Programming #PostgreSQL #Monitoring #Data Manipulation #GCP (Google Cloud Platform) #Data Governance #Hadoop #MySQL #Data Modeling #Cloud #Data Quality #AWS (Amazon Web Services) #Data Pipeline #Big Data #Palantir Foundry
Role description

Palantir Data Engineer

Cary, NC or Remote

Long Term Contract

Job Description

We are seeking a highly skilled Palantir Data Engineer to design, develop, and optimize data solutions using Palantir Foundry, ensuring seamless data integration, transformation, and analysis. The ideal candidate will have experience with Palantir's platforms, a strong background in data engineering, and expertise in implementing robust and scalable data workflows.

Required Skills & Experience:

Palantir Foundry Proficiency:

   • Hands-on experience developing and managing pipelines, workflows, and applications on the Palantir Foundry platform.

Data Engineering Expertise:

   • Strong programming skills in Python, Java, or Scala for data manipulation and pipeline creation.

   • Proficiency in SQL and database technologies (e.g., PostgreSQL, MySQL, or Oracle).

   • Experience in data modeling, ontology design, and implementing data governance.

Cloud & Big Data Ecosystems:

   • Familiarity with AWS, Azure, or GCP and Big Data tools (e.g., Spark, Hadoop).

   • Experience with APIs for data integration.

Key Responsibilities:

Data Integration & Transformation:

   • Design and implement ETL pipelines in Palantir Foundry to handle diverse data sources, ensuring high data quality and consistency.

   • Optimize and enhance data workflows to improve performance and scalability.
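As a rough illustration of the extract-transform-load pattern described above, here is a minimal sketch in plain Python. In Foundry, logic like this would normally live inside the platform's own transform framework; the column names and the data-quality rule below are invented for illustration only.

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize types and enforce a simple data-quality rule
    (drop rows with a missing, malformed, or non-positive amount)."""
    cleaned = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # data-quality check: skip malformed rows
        if amount > 0:
            cleaned.append({"customer_id": row["customer_id"].strip(),
                            "amount": amount})
    return cleaned

def load(rows: list[dict]) -> dict:
    """Load: aggregate per customer (a stand-in for writing to a dataset)."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["customer_id"]] = totals.get(row["customer_id"], 0.0) + row["amount"]
    return totals

raw = "customer_id,amount\n a1 ,10.5\na2,-3\na1,4.5\na3,oops\n"
totals = load(transform(extract(raw)))
print(totals)  # {'a1': 15.0}
```

The same extract/validate/aggregate shape carries over to Spark or Foundry pipelines; only the I/O and execution engine change.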

Data Modeling & Architecture:

   • Develop data models, ontologies, and schemas that align with business requirements.

   • Implement robust data governance and lineage mechanisms within the Palantir ecosystem.

Maintenance & Monitoring:

   • Monitor data pipelines, troubleshoot issues, and ensure uninterrupted data operations.

   • Maintain documentation for workflows, schemas, and integration processes.

Continuous Improvement:

   • Drive innovation by leveraging the latest Palantir Foundry features and industry best practices.

   • Automate repetitive processes to enhance operational efficiency.