Data Engineer

This is a remote contract position for a Data Engineer, focused on data modeling, ETL processes, and data warehousing. Key skills include SQL, Hadoop, Spark, and Kafka, and experience with big data technologies and data governance is required. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 12, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Lake #Data Security #Big Data #Batch #Data Science #Data Storage #SQL Queries #Scala #Data Engineering #Cloud #Data Extraction #NoSQL #Data Pipeline #Kafka (Apache Kafka) #Data Governance #Datasets #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Modeling #Databases #Hadoop #Storage #Security #Spark (Apache Spark)
Role description

Company Description:

Hirein5 is a platform designed to cut the hiring time for technology talent from 6-8 weeks to 5 hours. Our solution integrates every aspect of hiring into a single platform, making the process quick and effortless. We prioritize transparency, efficiency, and speed to empower recruiters, hiring managers, client directors, and sales teams to confidently commit to onboarding resources for client projects.

Role Description:

This is a remote contract role for a Data Engineer. The Data Engineer will be responsible for data modeling, ETL processes, data warehousing, and data analytics. The role involves working closely with the technology team to optimize data pipelines and ensure efficient data flow.

Key Responsibilities:

·    Design and maintain scalable ETL pipelines for large datasets.

·    Work with structured and unstructured data from various sources, including databases, APIs, and flat files.

·    Develop and optimize complex SQL queries for data extraction and transformation.

·    Implement data storage solutions with SQL/NoSQL databases, cloud platforms, and data lakes.

·    Collaborate with data scientists and analysts to ensure data is accessible, accurate, and usable for analytics.

·    Monitor and troubleshoot data pipeline performance to ensure high reliability and quality.

·    Ensure data governance and implement data security practices.

·    Work with big data technologies like Hadoop, Spark, and Kafka for real-time and batch processing.

·    Integrate external data sources and platforms to enhance internal data infrastructure.

·    Automate data workflow processes to improve operational efficiency.