Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 3-5 years of experience, focusing on Databricks, AWS, and Apache Spark. It is a W2 position located in North Quincy, MA, offering a hybrid work environment. Key skills include SQL, Scala/Python, and Hadoop.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 7, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
North Quincy, MA (Boston area)
🧠 - Skills detailed
#Docker #Hadoop #Programming #Apache Spark #Spark (Apache Spark) #Airflow #AWS (Amazon Web Services) #Python #OpenSearch #Java #Scala #BI (Business Intelligence) #Databricks #Kubernetes #RDBMS (Relational Database Management System) #Migration #Big Data #Data Engineering #Database Architecture #ML (Machine Learning) #SQL (Structured Query Language) #Spark SQL
Role description

W2-only role.

Data Engineer (Must have Databricks)

Mid-level (3-5 years) required

Location: North Quincy, MA – hybrid onsite

Key skills: strong Databricks, AWS, Apache Spark, SQL, Scala/Python, Hadoop

Skill Sets Required

   • Good decision-making and problem-solving skills

   • Solid understanding of Databricks fundamentals and architecture, with hands-on experience setting up Databricks clusters and working in Databricks modules (Data Engineering and SQL Warehouse)

   • Solid knowledge of the medallion architecture, Delta Live Tables (DLT), and Unity Catalog within Databricks (see the DLT sketch after this list)

   • Experience migrating data from on-prem Hadoop to Databricks/AWS (see the migration sketch after this list)

   • Understanding of core AWS services, their uses, and AWS architecture best practices

   • Hands-on experience across domains such as database architecture, business intelligence, machine learning, advanced analytics, and big data

   • Solid knowledge of Airflow (see the DAG sketch after this list)

   • Solid knowledge of CI/CD pipelines built on AWS technologies

   • Experience migrating applications: RDBMS workloads, Java/Python applications, model code, OpenSearch, etc.

   • Solid programming background in Scala and/or Python with Spark (see the PySpark sketch after this list)

   • Experience with Docker and Kubernetes is a plus
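
To make the medallion/DLT requirement concrete, here is a minimal bronze/silver/gold Delta Live Tables sketch. It runs only inside a Databricks DLT pipeline (where `spark` and `dlt` are provided); the table names, S3 path, and columns (event_id, event_ts) are illustrative assumptions, not from the posting.

```python
# A minimal medallion-architecture sketch using Delta Live Tables (DLT).
# Assumes a Databricks DLT pipeline runtime; names and paths are placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw JSON events ingested as-is via Auto Loader.")
def bronze_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("s3://example-bucket/raw/events/")  # placeholder source path
    )

@dlt.table(comment="Silver: deduplicated events with a typed timestamp.")
def silver_events():
    return (
        dlt.read_stream("bronze_events")
        .dropDuplicates(["event_id"])
        .withColumn("event_ts", F.to_timestamp("event_ts"))
    )

@dlt.table(comment="Gold: daily event counts for BI consumption.")
def gold_daily_counts():
    return (
        dlt.read("silver_events")
        .groupBy(F.to_date("event_ts").alias("event_date"))
        .count()
    )
```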
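For the Hadoop-to-Databricks migration bullet, a hedged one-table sketch: copy a Parquet dataset from HDFS into a managed Delta table. The HDFS URI, schema, and table name are assumptions; a real migration would also cover Hive metadata, permissions, and validation.

```python
# Hypothetical single-table lift: HDFS Parquet -> Delta table on Databricks.
# Assumes a Databricks notebook (where `spark` is predefined) with network
# access to the on-prem cluster; the path and table name are placeholders.
df = spark.read.parquet("hdfs://namenode:8020/warehouse/sales/")

(
    df.write.format("delta")
    .mode("overwrite")
    .saveAsTable("bronze.sales")  # assumes the `bronze` schema already exists
)
```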
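For the Airflow bullet, a minimal DAG that schedules one spark-submit run per day. The DAG id, schedule, and script path are assumptions; the `schedule` argument requires Airflow 2.4+ (older versions use `schedule_interval`).

```python
# A minimal daily DAG submitting one Spark job (Airflow 2.4+ syntax).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_spark_ingest",      # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit /opt/jobs/ingest.py",  # placeholder script path
    )
```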
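Finally, for the Scala/Python-with-Spark bullet, a self-contained PySpark sketch (Scala would be equally acceptable per the posting): read Parquet, aggregate, write back. The paths and columns (status, order_ts, amount) are illustrative only.

```python
# Minimal PySpark ETL: filter completed orders and compute daily revenue.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")  # placeholder input

daily_revenue = (
    orders.filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

daily_revenue.write.mode("overwrite").parquet(
    "s3://example-bucket/gold/daily_revenue/"  # placeholder output
)

spark.stop()
```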