Data Engineer

This is a remote Data Engineer position on a 6-month contract-to-hire basis, requiring expertise in GCP, Vertex AI, and feature engineering. Key skills include data pipeline development, automated testing, and mentoring. Strong data engineering experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 20, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Automated Testing #Datasets #ETL (Extract, Transform, Load) #Kafka (Apache Kafka) #Cloud #Python #Data Ingestion #Documentation #Leadership #NoSQL #Scala #GCP (Google Cloud Platform) #ML (Machine Learning) #Data Engineering #Libraries #SQL (Structured Query Language) #Compliance #BigQuery #Data Pipeline #Data Quality #Databricks #AI (Artificial Intelligence)
Role description

Job Title: Data Engineer

Location: Remote

Duration: 6 months, contract to hire

Key Responsibilities

As a Senior Data Engineer, you will lead the development of innovative data solutions that enable the effective use of data across the organization. You will design, build, and maintain robust data pipelines and platforms that meet business objectives, treating data as a strategic asset. The role involves collaborating with cross-functional teams, leveraging modern technologies, and ensuring scalable, efficient, and secure data engineering practices, with a strong emphasis on expertise in GCP, Vertex AI, and advanced feature engineering techniques.

· Provide Technical Leadership: Offer technical leadership to ensure alignment across ongoing projects, and facilitate collaboration across teams to solve complex data engineering challenges.

· Build and Maintain Data Pipelines: Design, build, and maintain scalable, efficient, and reliable data pipelines to support data ingestion, transformation, and integration across diverse sources and destinations, using tools such as Kafka and Databricks.

· Drive Digital Innovation: Leverage innovative technologies and approaches to modernize and extend core data assets, including SQL-based, NoSQL-based, cloud-based, and real-time streaming data platforms.

· Implement Feature Engineering: Develop and manage feature engineering pipelines for machine learning workflows, using tools such as Vertex AI, BigQuery ML, and custom Python libraries (an illustrative sketch follows this list).

· Implement Automated Testing: Design and implement automated unit, integration, and performance testing frameworks to ensure data quality, reliability, and compliance with organizational standards.

· Optimize Data Workflows: Optimize data workflows for performance, cost efficiency, and scalability across large datasets and complex environments.

· Mentor Team Members: Mentor team members in data principles, patterns, and processes to promote best practices and strengthen team capabilities.

· Draft and Review Documentation: Draft and review architectural diagrams, interface specifications, and other design documents to ensure clear communication of data solutions and technical requirements.

· Cost/Benefit Analysis: Present opportunities with cost/benefit analysis to leadership, guiding sound architectural decisions for scalable and efficient data solutions.
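
Purely for illustration (not part of the original posting): a minimal Python sketch of the kind of feature-engineering work described above, materializing a simple feature table in BigQuery. All project, dataset, table, and column names here are hypothetical placeholders.

    # Illustrative sketch only. Assumes the google-cloud-bigquery client
    # library and application-default credentials are available; every
    # name below is a hypothetical placeholder.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

    # Aggregate raw events into per-user features over a 30-day window.
    sql = """
    CREATE OR REPLACE TABLE `my-gcp-project.features.user_features` AS
    SELECT
      user_id,
      COUNT(*) AS event_count_30d,
      AVG(session_duration_secs) AS avg_session_secs
    FROM `my-gcp-project.raw.events`
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY user_id
    """

    job = client.query(sql)  # submit the query job to BigQuery
    job.result()             # block until the feature table is written
    print(f"Feature table refreshed; job {job.job_id} finished.")

In practice, a job like this would typically be scheduled, version-controlled, and covered by the automated data-quality tests the role calls for, with the resulting table usable as a feature source for Vertex AI.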