

Palantir Data Engineer
We are seeking a talented Data Engineer to join our team and work on an existing Palantir deployment.
Job Title: Palantir Data Engineer (Remote)
Start Date: Within 1-2 weeks
Duration:
• 6-month contract, with the possibility of extension or conversion to a permanent position
Eligibility:
• U.S. Citizens only
Project Environment:
• Existing Palantir deployment with 4 use cases/projects.
• 90 data connections.
• 7 SAP instances.
• Upcoming scope: Supply Chain Management (SCM) and Estimate at Completion (EAC).
• Scheduled jobs with scheduling collisions that need to be resolved and optimized.
Interview Process:
• Applications will be reviewed, and interviews scheduled promptly.
Responsibilities:
• Data Engineering: Design, develop, and maintain data pipelines within the Palantir platform.
• Data Transformation and Analysis: Write efficient SQL queries and leverage Python and PySpark to transform and analyze large datasets.
• Data Visualization: Work with existing visualizations within the Palantir platform, potentially enhancing or creating new ones.
• Code Management: Contribute to the code repository, ensuring code quality and maintainability.
• Project Support: Assist with the ongoing maintenance and optimization of existing Palantir use cases.
• Future Projects: Contribute to the development of additional projects, including potential work in Supply Chain Management (SCM) and Estimate at Completion (EAC) analysis.
• Ontology Optimization: Refactor and optimize existing ontologies.
• Data Pipeline Management: Manage and optimize data pipelines, including external data sources.
• Scheduled Job Optimization: Analyze and resolve scheduling conflicts within existing scheduled jobs.
• Advisory Role: Provide seasoned advisory support and leadership on Palantir, compensating for the client's limited in-house Palantir expertise and ensuring best practices.
Required Skills:
• Core Skills: Strong proficiency in SQL and Python.
• Hands-on experience with data warehousing and data lakes.
• Key Skills: Experience with big data processing frameworks such as Apache Spark and PySpark.
• Familiarity with data visualization tools and techniques.
• Understanding of data modeling and data warehousing concepts.
• Experience with Snowflake and Databricks.
• Code repository experience.
Preferred Skills:
• Experience with Palantir products (Foundry, Gotham, Foundry Virtual).
• Knowledge of cloud platforms (AWS, Azure, GCP).
• Experience with machine learning and data science.
• Prior experience working at Palantir is a plus.
• SAP experience.
Team Environment:
• Collaborative environment involving multiple parties.
• Total team size: 40-60 people.
• Candidates must be able to work independently and collaboratively.
• Candidates must be politically astute and able to navigate a multi-stakeholder environment.
• Consultants will act as independent advisors to ensure the client's best interests.