
Big Data Consultant

This role is for a Big Data Consultant in Rockville, Maryland, on a long-term contract. Key skills include Hadoop, Spark, Python, and Scala. Candidates should have experience with production data pipelines and AWS; certifications are preferred. Hybrid work is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
560
🗓️ - Date discovered
February 14, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Rockville, MD
🧠 - Skills detailed
#Data Science #AWS (Amazon Web Services) #Data Processing #ETL (Extract, Transform, Load) #Trino #AWS CLI (Amazon Web Services Command Line Interface) #Data Architecture #CLI (Command-Line Interface) #Presto #Quality Assurance #Data Ingestion #Python #Data Pipeline #Storage #Scala #Data Engineering #Data Quality #System Testing #Spark (Apache Spark) #Consul #Hadoop #Big Data #Automated Testing
Role description

Big Data Engineer

Rockville, Maryland (Hybrid)

Long-term contract

NO C2C PLEASE

ConsultNet is seeking a Big Data Engineer to support an ongoing effort in Rockville, Maryland. The role requires a minimum of two days per week onsite in the office.

Responsibilities:
• Design, develop, and maintain large-scale data processing pipelines using Big Data technologies (e.g., Hadoop, Spark, Python, Scala).
• Implement data ingestion, storage, transformation, and analysis solutions that are scalable, efficient, and reliable.
• Stay current with industry trends and emerging Big Data technologies to continuously improve the data architecture.
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Optimize and enhance existing data pipelines for performance, scalability, and reliability.
• Develop automated testing frameworks and implement continuous testing for data quality assurance.
• Conduct unit, integration, and system testing to ensure the robustness and accuracy of data pipelines.
• Work with data scientists and analysts to support data-driven decision-making across the organization.
• Write and maintain automated unit, integration, and end-to-end tests.
• Monitor and troubleshoot data pipelines in production environments to identify and resolve issues.
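The responsibilities above center on ingesting, transforming, and validating data. As a rough, non-authoritative sketch of that pattern (plain Python with made-up record and field names, standing in for the Spark/Scala pipeline code the role would actually involve):

```python
from dataclasses import dataclass

@dataclass
class Event:
    user_id: str
    amount_cents: int

def ingest(rows):
    """Parse raw CSV-like rows into typed records, dropping malformed
    ones (a simple data-quality rule at the ingestion stage)."""
    events = []
    for row in rows:
        parts = row.split(",")
        if len(parts) != 2:
            continue  # malformed row: wrong number of fields
        user_id, amount = parts
        if not amount.strip().isdigit():
            continue  # malformed row: non-numeric amount
        events.append(Event(user_id.strip(), int(amount.strip())))
    return events

def total_by_user(events):
    """Aggregate amounts per user (the 'transform' stage)."""
    totals = {}
    for e in events:
        totals[e.user_id] = totals.get(e.user_id, 0) + e.amount_cents
    return totals

raw = ["u1,100", "u2,250", "u1,50", "bad-row", "u3,abc"]
print(total_by_user(ingest(raw)))  # {'u1': 150, 'u2': 250}
```

The same ingest/validate/aggregate shape is what the automated unit and integration tests mentioned above would exercise, with each stage tested in isolation.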

Preferred:
• Experience with big data tools such as Hadoop, Spark, and Presto/Trino
• Experience with production data pipelines/ETL systems
• Comfortable in the AWS console and with AWS CLI tools
• AWS certifications