Big Data Specialist

This role is for a Big Data Specialist in Phoenix, AZ; the contract length and pay rate are unspecified. Key skills include Hadoop, Spark, Hive, GCP/Azure/AWS, and proficiency in Java or Scala.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
January 14, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Phoenix, AZ
🧠 - Skills detailed
#GCP (Google Cloud Platform) #Big Data #Java #Cloud #Batch #AWS (Amazon Web Services) #Data Pipeline #Hadoop #Scripting #Spark (Apache Spark) #Data Engineering #Deployment #Scala #Shell Scripting #Migration #Programming #Azure #Security
Role description

Job Description

At Impetus Technologies, we drive innovation and deliver cutting-edge solutions to our clients. We are hiring an experienced Big Data Engineer with a strong focus on GCP/Azure/AWS to join our team in Phoenix, AZ. The ideal candidate will have extensive experience with Hadoop, Spark (Batch/Streaming), Hive, and Shell scripting, along with solid programming skills in Java or Scala. A deep understanding of and hands-on experience with GCP/Azure/AWS are critical for this role.

Qualifications:
• Proven experience with Hadoop, Spark (Batch/Streaming), and Hive.
• Proficiency in Shell scripting and programming languages such as Java and/or Scala.
• Strong hands-on experience with GCP/Azure/AWS and a deep understanding of their services and tools.
• Ability to design, develop, and deploy big data solutions in a GCP/Azure/AWS environment.
• Experience migrating data systems to GCP/Azure/AWS.
• Excellent problem-solving skills and the ability to work independently or as part of a team.
• Strong communication skills to effectively collaborate with team members and stakeholders.

Responsibilities:
• Development: Design and develop scalable big data solutions using Hadoop, Spark, Hive, and GCP/Azure/AWS services.
• Design: Architect and implement big data pipelines and workflows optimized for GCP/Azure/AWS, ensuring efficiency, security, and reliability.
• Deployment: Deploy big data solutions on GCP/Azure/AWS, leveraging best practices for cloud-based environments.
• Migration: Lead the migration of existing data systems to GCP/Azure/AWS, ensuring a smooth transition with minimal disruption and optimal performance.
• Collaboration: Work closely with cross-functional teams to integrate big data solutions with other cloud-based services and align them with business goals.
• Optimization: Continuously optimize big data solutions on GCP/Azure/AWS to improve performance, scalability, and cost-efficiency.

Mandatory Skills

Hadoop, Spark, Hive, GCP/Azure/AWS