
Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This is a long-term, remote contract (PST hours) for a Senior Data Engineer; the pay rate is unspecified. Requires 10+ years of experience, expertise in Databricks, Snowflake, SQL, AWS, and Python, plus a relevant degree.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 5, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Portland, OR
🧠 - Skills detailed
#Athena #Airflow #Spark (Apache Spark) #Data Management #Apache Spark #S3 (Amazon Simple Storage Service) #Programming #Computer Science #Databricks #Logging #RDS (Amazon Relational Database Service) #AWS S3 (Amazon Simple Storage Service) #Scala #SQS (Simple Queue Service) #Snowflake #Lambda (AWS Lambda) #AWS (Amazon Web Services) #Integration Testing #Big Data #Data Pipeline #Cloud #Data Analysis #Python #Data Processing #Data Catalog #Data Engineering #SQL (Structured Query Language) #Deployment #Libraries #Splunk #Data Quality
Role description

Title: Senior Data Engineer

Location: Portland, OR / Remote PST Hours

Contract

Preference: ex-Nike candidates with long-term project experience.

Responsibilities:

   • Design and implement data products and features in collaboration with product owners, data analysts, and business partners.

   • Work with a variety of teammates to build first-class solutions for Client technology and its business partners, on development projects spanning supply chain, commerce, consumer behavior, and web analytics, among others.

   • Contribute to the overall architecture, frameworks, and patterns for processing and storing large data volumes.

   • Research, evaluate, and utilize new technologies, tools, and frameworks centered around high-volume data processing.

   • Evaluate technical feasibility and risks, and convey that information to the team.

   • Translate backlog items into engineering designs and logical units of work. Profile and analyze data for the purpose of designing scalable solutions.

   • Define and apply appropriate data acquisition and consumption strategies for given technical scenarios.

   • Design and implement distributed data processing pipelines using tools and languages prevalent in the big data ecosystem.

   • Build utilities, user defined functions, libraries, and frameworks to better enable data flow patterns.

   • Implement complex automated routines using workflow orchestration tools. Work with architecture, engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to.

   • Anticipate, identify and solve issues concerning data management to improve data quality.

   • Build and incorporate automated unit tests and participate in integration testing efforts.

   • Utilize and advance continuous integration and deployment frameworks.

   • Troubleshoot data issues and perform root cause analysis.

   • Applicant must have a bachelor's or master's degree in Computer Science, Computer Information Systems, or Information Management, and 10+ years of experience in the job offered or a computer-related occupation.

Experience must include:

   • Databricks

   • Snowflake

   • SQL

   • EMR

   • Apache Spark

   • DynamoDB

   • Data Pipelines

   • Python programming

   • Airflow

   • AWS (S3, SQS, Lambda, Athena, OpenSearch, Glue Data Catalog, CloudWatch)

   • RDS

   • Logging (Splunk, Slack)

   • Hive Metastore