Sr. Data Engineer with GCP and Vertex AI

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer specializing in GCP and Vertex AI, offering a contract position with a pay rate of “X” for “Y” months. Key skills include Python, data pipelines, and machine learning, with a focus on cloud technologies.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 2, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Automated Testing #Libraries #Databricks #Python #Data Pipeline #NoSQL #Kafka (Apache Kafka) #BigQuery #Documentation #AI (Artificial Intelligence) #SQL (Structured Query Language) #ML (Machine Learning) #Cloud #GCP (Google Cloud Platform) #Data Ingestion #Scala #Data Engineering #ETL (Extract, Transform, Load) #Leadership
Role description
Role: Sr Data Engineer (GCP and Vertex AI)

Location: Remote (EST candidates preferred)

Job Type: Contract

Job Description:

   • As a Sr. Data Engineer, you will have the opportunity to lead the development of innovative data solutions, enabling the effective use of data across the organization.

   • You will be responsible for designing, building, and maintaining robust data pipelines and platforms to meet business objectives, focusing on data as a strategic asset.

   • Your role will involve collaboration with cross-functional teams, leveraging cutting-edge technologies, and ensuring scalable, efficient, and secure data engineering practices.

   • A strong emphasis will be placed on expertise in GCP, Vertex AI, and advanced feature engineering techniques.

Key Responsibilities:

   • Provide Technical Leadership

   • Build and Maintain Data Pipelines: Design, build, and maintain scalable, efficient, and reliable data pipelines to support data ingestion, transformation, and integration across diverse sources and destinations, using tools such as Kafka, Databricks, and similar toolsets.

   • Drive Digital Innovation: Leverage innovative technologies and approaches to modernize and extend core data assets, including SQL-based, NoSQL-based, cloud-based, and real-time streaming data platforms.

   • Implement Feature Engineering: Develop and manage feature engineering pipelines for machine learning workflows, utilizing tools like Vertex AI, BigQuery ML, and custom Python libraries.

   • Implement Automated Testing

   • Optimize Data Workflows

   • Mentor Team Members

   • Draft and Review Documentation

   • Cost/Benefit Analysis
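To make the feature-engineering responsibility above concrete, here is a minimal sketch of the kind of transform such a pipeline performs: turning raw event records into per-entity, model-ready features. It is written in plain Python for illustration; in the role itself this logic would typically live in Vertex AI or BigQuery ML pipelines. The field names (`user_id`, `amount`) and the derived features are illustrative assumptions, not taken from the posting.

```python
# Illustrative feature-engineering step: aggregate raw transaction events
# into per-user features (transaction count and mean amount).
# Field names and features are hypothetical examples.
from collections import defaultdict
from statistics import mean

def engineer_features(events):
    """Aggregate raw event dicts into a per-user feature dict."""
    amounts = defaultdict(list)
    for event in events:
        amounts[event["user_id"]].append(event["amount"])
    return {
        uid: {"txn_count": len(vals), "mean_amount": mean(vals)}
        for uid, vals in amounts.items()
    }

# Example input: three raw events for two users.
events = [
    {"user_id": "a", "amount": 10.0},
    {"user_id": "a", "amount": 30.0},
    {"user_id": "b", "amount": 5.0},
]
features = engineer_features(events)
# features["a"] -> {"txn_count": 2, "mean_amount": 20.0}
```

In a production setting, a candidate would be expected to express the same aggregation as a streaming or batch job (e.g. over Kafka topics or BigQuery tables) and register the outputs in a feature store so that training and serving read identical values.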