
Data Platform Engineer

This role is for a Data Platform Engineer in San Jose, CA for 4 months at $79.58/hour. Key skills include AWS, Azure, Databricks, and data storage technologies. A BS in Computer Science or similar is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 22, 2025
🕒 - Project duration
3 to 6 months
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
San Jose, CA
🧠 - Skills detailed
#Azure ADLS (Azure Data Lake Storage) #Microsoft Azure #Linux #Data Storage #Compliance #Monitoring #Databases #Neo4J #AWS (Amazon Web Services) #ADLS (Azure Data Lake Storage) #Azure #Splunk #Kubernetes #Public Cloud #SQL (Structured Query Language) #Airflow #MySQL #Automation #Computer Science #Spark (Apache Spark) #Data Lake #Storage #S3 (Amazon Simple Storage Service) #Cloud #Security #Prometheus #AWS S3 (Amazon Simple Storage Service) #Collibra #Databricks #Deployment #MongoDB #Web Services #Jira
Role description

Job Title: Data Platform Engineer

Location: San Jose, CA 95110

Duration: 4 months

Contract Type: W2 only

Pay Rate: $79.58/hour

Duties:
• Set up and maintain a production-scale Databricks environment on public clouds such as Microsoft Azure and AWS (Amazon Web Services)
• Set up and maintain production-scale data storage such as ADLS (Azure Data Lake Storage) and AWS S3 for multiple tenant teams using our Data Platform
• Set up and maintain production-scale microservices that support the daily operation of our data platform, including job scheduling, security, financial, and administrative services
• Provide triage and guidance to the team on support issues raised by our tenants
• Develop tools and automation solutions for configuration management, service deployments, monitoring, and alerting to assist with daily RTB (Running the Business) operations
• Budget and monitor cloud spend; continually look for ways to avoid cloud resource wastage, using third-party tools or developing your own to help the team with cost optimization
• Ensure security and privacy compliance and implement Adobe Security & Compliance solutions to lock down data stored in our data lake
• Explore GenAI technologies and find opportunities to integrate them with our data platform to enhance the platform or improve the user experience
• Work with various third-party vendors on troubleshooting, proofs of concept, and other collaborative projects to enhance our product

Skills:
• Cloud infrastructure administration and automation: AWS, Azure
• Proficient with the following storage technologies: ADLS Gen2, AWS S3, Hive or MySQL, MongoDB, vector databases
• Able to set up, troubleshoot, and maintain the following technologies: Databricks Workspace (including but not limited to Unity Catalog, Vector Search, SQL Warehouse, Serverless Compute, and Spark workloads), Airflow and DAGs, Azure Kubernetes Service or Elastic Kubernetes Service, Collibra, Neo4J, Metric Insights
• Able to set up monitoring and alerting with: Databricks System Tables, Prometheus, Splunk, ELK, Power BI
• Familiar with troubleshooting and maintaining Linux servers and Kubernetes environments
• Working knowledge of Jira and ServiceNow

Education: BS in Computer Science, Computer Engineering, or similar