
Big Data Engineer

This role is for a Big Data Engineer on a 12-month contract, offering a competitive pay rate. Key skills include GCP, data analysis, programming, and experience with ETL processes. Industry experience in data engineering and Agile practices is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 22, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Nashville, TN
🧠 - Skills detailed
#Big Data #Hadoop #Data Science #Consulting #Deployment #ETL (Extract Transform Load) #Data Engineering #Programming #Agile #GCP (Google Cloud Platform) #Data Analysis #Classification #Cloud #Databases
Role description

Classification: Contract

Contract Length: 12 Months

Position Summary:

The Big Data Engineer (consulting level) serves as a primary development resource for designing, writing, testing, implementing, documenting, and maintaining NextGen solutions for GCP Cloud enterprise data initiatives. The role requires working closely with data teams, frequently in a matrixed environment as part of a broader project team. Because GCP/Hadoop technology and practice are emerging and fast-evolving, the position requires staying well-informed of technological advancements and putting new innovations into effective practice. In addition, the candidate must be able to analyze business requirements and design, build, test, and implement solutions with minimal supervision. The candidate will have a track record of participation in successful projects in a fast-paced, mixed-team environment.

Responsibilities:
• This role will provide application development for specific business environments.
• Responsible for building and supporting a GCP based ecosystem designed for enterprise-wide analysis of structured, semi-structured, and unstructured data.
• Bring new data sources into GCP, transform and load to databases and support regular requests to move data from one cluster to another
• Develop a strong understanding of relevant product area, codebase, and/or systems
• Demonstrate proficiency in data analysis, programming, and software engineering
• Work closely with the Lead Architect and Product Owner to define, design and build new features and improve existing products
• Produce high quality code with good test coverage, using modern abstractions and frameworks
• Work independently, and complete tasks on-schedule by exercising strong judgment and problem-solving skills
• Collaborate closely with team members to execute development initiatives using Agile practices and principles
• Participate in the deployment, change, configuration, administration, and maintenance of deployment processes and systems
• Prioritize workload effectively to meet deadlines and work objectives
• Work in an environment with rapidly changing business requirements and priorities
• Work collaboratively with Data Scientists and business and IT leaders throughout the company to understand their needs and use cases
• Work closely with management, architects, and other teams to develop and implement projects
• Actively participate in technical group discussions and adopt new technologies to improve development and operations
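The core ETL responsibility above (bring in a new data source, transform it, and load it to a database) can be sketched minimally. This is an illustrative outline only, not part of the role description: SQLite stands in for a GCP warehouse such as BigQuery, and the field names (`user_id`, `amount`) and table name are hypothetical.

```python
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalize types and drop incomplete records."""
    out = []
    for row in rows:
        if row.get("user_id") and row.get("amount"):
            out.append((int(row["user_id"]), float(row["amount"])))
    return out

def load(records, conn):
    """Load: write the cleaned records into a staging table."""
    conn.execute("CREATE TABLE IF NOT EXISTS events (user_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?)", records)
    conn.commit()

# One incomplete row (user 2 has no amount) is dropped during transform.
raw = "user_id,amount\n1,10.5\n2,\n3,7.0\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM events").fetchone()
```

In a real GCP pipeline the load step would target BigQuery or Cloud Storage rather than SQLite, but the extract/transform/load separation shown here is the pattern the bullets describe.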
