Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer based in Salt Lake City, Utah, on a contract exceeding 6 months, with an undisclosed pay rate. Key skills include 6+ years of experience with Spark (PySpark), Python, SQL, and Databricks development.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 16, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Salt Lake City, UT
🧠 - Skills detailed
#Cloud #Kubernetes #PySpark #Code Reviews #Data Engineering #SQL (Structured Query Language) #Consul #Spark (Apache Spark) #Kafka (Apache Kafka) #Data Pipeline #Scrum #Agile #Migration #Linux #GCP (Google Cloud Platform) #Data Lake #Azure #Documentation #Docker #Kanban #Consulting #Computer Science #Databricks #GIT #Airflow #Python
Role description

We are NOT working with third parties, C2C, etc. You will be rejected if you apply without reading this message.

Please don't apply if you are not currently living in UT.

Job Title: Sr Data Engineer

Location: Salt Lake City, Utah

Status: Contract

Job Number: 15623

This is a contract with a great organization. As a contractor for Smith Johnson, you are eligible for medical, dental, life, and disability insurance. Smith Johnson pays 70% of your medical and dental premiums and 100% of life and disability. The contract also includes PTO/holiday pay accrued monthly, and you are eligible for a 3% retirement matching plan. Smith Johnson believes in taking care of our contractors.

Do you want to take your career to the next level? Are you ready for the responsibility of working with high-profile clients? You will solve challenging business and technical problems as a full-time consultant serving local enterprise clients, and you will get to work on cutting-edge technologies and cloud-native development. Tired of the same old thing? Take your talents to a world-class consulting firm that inspires personal and professional growth and values your ideas.

Your future duties and responsibilities

How you'll make an impact

   • Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project

   • Actively migrate use cases from our on-premises Data Lake to Databricks on GCP (a minimal illustrative sketch follows this list)

   • Collaborate with Product Management and business partners to understand use case requirements and reporting needs

   • Adhere to internal development best practices/lifecycle (e.g. Testing, Code Reviews, CI/CD, Documentation)

   • Document and showcase feature designs/workflows

   • Participate in team meetings and discussions around product development

   • Stay up to date on the latest industry trends and design patterns
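For illustration only, here is a minimal PySpark sketch of the kind of migration pattern the second bullet describes: reading a Parquet dataset from an on-premises data lake and rewriting it as a Delta table on GCS for Databricks on GCP. All paths, bucket names, and column names are hypothetical placeholders, not details from this posting.

```python
# Illustrative only: one possible on-prem -> Databricks-on-GCP migration step.
# Every path and name below is a hypothetical placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-migration-sketch").getOrCreate()

# Source: assumed on-prem HDFS location in the legacy data lake.
source_df = spark.read.parquet("hdfs://onprem-namenode/warehouse/sales/orders")

# Light cleanup before landing in the modernized lake.
cleaned_df = source_df.dropDuplicates(["order_id"])

# Target: assumed GCS bucket backing Databricks on GCP, written as Delta.
(cleaned_df.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("gs://example-modern-lake/bronze/sales/orders"))
```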

Required qualifications to be successful in this role

   • 6+ years development experience with Spark (PySpark), Python, and SQL

   • Extensive experience building data pipelines

   • Hands-on experience with Databricks development

   • Strong experience developing on Linux

   • Experience with scheduling and orchestration (e.g. Databricks Workflows, Airflow, Prefect, Control-M); see the orchestration sketch after this list

   • Solid understanding of distributed systems, data structures, design principles

   • Comfortable communicating with teams via showcases/demos

   • Agile Development Methodologies (e.g. SAFe, Kanban, Scrum)

   • Bachelor's degree in Computer Science, Computer Engineering, or a related field
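To make the scheduling and orchestration requirement concrete, here is a minimal Airflow DAG sketch. The DAG id, schedule, and task bodies are illustrative assumptions, not details from this posting.

```python
# Illustrative Airflow DAG: two Python tasks run daily in sequence.
# Uses the Airflow 2.x API; the `schedule=` argument requires Airflow 2.4+
# (older versions use `schedule_interval=`).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")  # placeholder body


def load():
    print("write curated data to the lake")    # placeholder body


with DAG(
    dag_id="example_daily_pipeline",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # extract must finish before load starts
```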

Desired qualifications (Nice to Have):

   • 3+ years experience with Git

   • 3+ years experience with CI/CD (e.g. Azure Pipelines)

   • Experience with streaming technologies, such as Kafka and Spark Streaming (see the sketch after this list)

   • Experience building applications on Docker and Kubernetes

   • Cloud experience (e.g. Azure, Google Cloud)
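For the streaming item above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic. The broker address, topic, and checkpoint path are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster.

```python
# Illustrative only: read a Kafka topic with Spark Structured Streaming
# and echo decoded payloads to the console. Requires the
# spark-sql-kafka-0-10 package on the Spark classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # assumed broker
    .option("subscribe", "events")                       # assumed topic
    .load())

# Kafka delivers binary key/value columns; cast the payload to text.
decoded = events.selectExpr("CAST(value AS STRING) AS payload")

query = (decoded.writeStream
    .format("console")  # demo sink; a real pipeline would write Delta
    .option("checkpointLocation", "/tmp/checkpoints/events")  # assumed path
    .start())

query.awaitTermination()
```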