
Big Data Developer

This role is for a Senior Big Data Engineer in Phoenix, AZ, on a 12-month W2 contract. It requires 10+ years of experience and expertise in Java/Python, Spark/PySpark, SQL, Shell/Unix scripting, and Hive. An onsite interview is mandatory.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 22, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Phoenix, AZ
🧠 - Skills detailed
#Data Science #PySpark #Datasets #ETL (Extract, Transform, Load) #Scala #Apache Spark #Scripting #SQL (Structured Query Language) #Data Pipeline #Programming #Java #Leadership #Big Data #Documentation #Automation #Computer Science #Spark (Apache Spark) #Unix #Python #Data Architecture #Spark SQL #Data Manipulation #SQL Queries #Data Engineering #Data Processing #Data Extraction
Role description

Job Title: Senior Big Data Engineer

Location: Phoenix, AZ (Locals Only)

Job Type: 12 Months W2

Interview: Onsite

Experience Level: 10+ years

Must-Have Skills: Java/Python, Spark/PySpark, SQL, Shell/Unix Scripting, Hive

Position Overview:

As a Senior Big Data Engineer, you will play a crucial role in designing, developing, and optimizing our big data systems. You will leverage your extensive experience in Java, Python, and big data technologies to build scalable solutions and ensure the efficient processing and analysis of large datasets. Your expertise will help drive key business decisions and strategies.

Key Responsibilities:
• Architecture & Design: Lead the design and implementation of scalable big data architectures using Spark/PySpark, ensuring high performance and reliability.
• Data Processing: Develop and maintain complex data pipelines and workflows, utilizing SQL, Hive, and other relevant technologies to process and analyze large volumes of data.
• Programming: Write efficient and maintainable code in Java and Python for data processing, transformation, and integration tasks.
• Scripting: Create and optimize Shell/Unix scripts to automate data processing tasks and streamline workflows.
• Performance Optimization: Monitor and optimize the performance of big data systems, troubleshoot issues, and implement improvements to enhance processing speed and accuracy.
• Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver actionable insights.
• Documentation: Maintain comprehensive documentation of data processes, workflows, and system configurations.
• Leadership: Mentor and guide junior engineers, sharing best practices and providing technical support.
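To give a concrete sense of the pipeline work described above, here is a minimal, illustrative sketch in Python. Python's stdlib sqlite3 stands in for Hive/Spark SQL so the example is self-contained; the table and column names (`events`, `day`, `amount`) are hypothetical, not taken from the posting.

```python
import sqlite3

def build_daily_counts(rows):
    """Extract raw event rows, transform them via a SQL aggregation,
    and return the summarized results (a toy extract-transform-load step)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (day TEXT, user_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    # The kind of aggregation query the role calls for, written in plain SQL.
    cur = conn.execute(
        """SELECT day, COUNT(*) AS n_events, SUM(amount) AS total
           FROM events
           GROUP BY day
           ORDER BY day"""
    )
    result = cur.fetchall()
    conn.close()
    return result

rows = [
    ("2025-02-20", 1, 10.0),
    ("2025-02-20", 2, 5.5),
    ("2025-02-21", 1, 7.0),
]
print(build_daily_counts(rows))
# → [('2025-02-20', 2, 15.5), ('2025-02-21', 1, 7.0)]
```

In production the same GROUP BY logic would typically run against Hive or Spark SQL over much larger datasets; only the engine changes, not the shape of the query.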

Qualifications:
• Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree preferred.
• Experience: 10+ years of hands-on experience in big data technologies and frameworks.
• Technical Skills:
• Java: Expertise in developing and maintaining applications using Java.
• Python: Proficiency in Python for data manipulation and scripting.
• Spark/PySpark: In-depth experience with Apache Spark and PySpark for big data processing.
• SQL: Advanced skills in writing complex SQL queries for data extraction and manipulation.
• Shell/Unix Scripting: Strong experience with Shell and Unix scripting for automation tasks.
• Hive: Proficiency in Hive for querying and managing large datasets.