GCP Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a GCP Data Engineer in Charlotte, NC (Hybrid) for a 12-24 month contract. Requires 4-6 years of experience in Python, GCP services, ETL pipelines, and Spark. Financial services, healthcare, or retail industry experience preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
400
🗓️ - Date discovered
April 2, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
1099 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Data Lake #Scripting #Compliance #Python #Deployment #API (Application Programming Interface) #Apache Airflow #DynamoDB #NoSQL #Storage #BigQuery #Django #Microservices #MongoDB #Terraform #Data Transformations #AI (Artificial Intelligence) #Model Deployment #Automation #ML (Machine Learning) #Cloud #Data Security #Spark (Apache Spark) #Distributed Computing #S3 (Amazon Simple Storage Service) #GCP (Google Cloud Platform) #Version Control #DevOps #Scala #Data Engineering #Databases #Security #ETL (Extract, Transform, Load) #Airflow #Kubernetes #Data Processing
Role description
GCP Data Engineer

   • Location: Charlotte, NC (Hybrid)

   • Duration: 12-24 months (Contract)

   • Client: Top Bank

About the Role:

We are seeking a GCP Data Engineer with expertise in ETL pipelines, Python, and Kubernetes to build and optimize scalable data solutions. This role involves working with GCP services (Dataproc, Composer, GCS), Apache Airflow, Spark, and hybrid cloud clusters to enhance data processing capabilities.

Responsibilities:

   • Design, develop, and optimize ETL pipelines for efficient data processing.

   • Work with GCP services (Dataproc, Composer, GCS) to build and manage cloud-based solutions.

   • Develop Python-based solutions for scripting, automation, and API development.

   • Implement Spark-based data processing frameworks for handling large-scale data.

   • Build and manage hybrid cloud clusters using OpenShift and GCP.

   • Deploy and manage GKE clusters for containerized workloads.

   • Automate workflows and orchestrate data jobs using Apache Airflow.

   • Implement CI/CD pipelines for deployment and version control.

   • Ensure data security, governance, and compliance across cloud and on-premise systems.

   • Troubleshoot performance issues, optimize queries, and enhance data processing capabilities.
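To illustrate the extract-transform-load pattern at the heart of these responsibilities, here is a minimal, generic Python sketch. All data and function names are hypothetical; a production pipeline on GCP would typically run the transform step on Dataproc (Spark) and orchestrate the stages with Composer (Airflow):

```python
# Minimal ETL sketch -- illustrative only, not the client's actual stack.
# extract() stands in for reading raw objects from GCS; load() stands in
# for appending validated rows to a warehouse table (e.g. BigQuery).

def extract():
    """Simulate pulling raw rows from a source system."""
    return [
        {"id": 1, "amount": "19.99", "region": "us-east"},
        {"id": 2, "amount": "5.00", "region": "us-west"},
        {"id": 3, "amount": "bad", "region": "us-east"},  # malformed row
    ]

def transform(rows):
    """Cast types and drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # in a real pipeline, route to a dead-letter sink
    return clean

def load(rows, sink):
    """Append validated rows to the sink; return the count loaded."""
    sink.extend(rows)
    return len(rows)

sink = []
loaded = load(transform(extract()), sink)
print(loaded)  # 2 -- the malformed row was dropped during transform
```

In practice each stage would be an Airflow task so that failures, retries, and backfills are handled by the scheduler rather than by hand.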

Qualifications & Skills:

   • 4-6 years of hands-on experience in Python development with strong database expertise.

   • Experience with GCP services (Dataproc, Composer, GCS) and OpenShift environments.

   • Strong expertise in ETL pipeline development and handling large-scale data transformations.

   • Proficiency in Spark, Django, and Microservices architecture.

   • Experience with S3 object storage and handling unstructured data.

   • Familiarity with API development, CI/CD pipelines, and DevOps best practices.

   • Hands-on experience in GKE and container orchestration.

   • Expertise in Apache Airflow for job orchestration and workflow automation.

   • Strong problem-solving skills with expertise in database query optimization.

Preferred Skills:

   • Experience with BigQuery, Terraform, and Cloud Functions.

   • Knowledge of distributed computing frameworks and data lake architectures.

   • Familiarity with ML/AI model deployment in cloud environments.

   • Exposure to NoSQL databases like MongoDB, Cassandra, or DynamoDB.

   • Experience in financial services, healthcare, or retail industries.