
Data and Analytics - Data Engineer 3 #: 25-10720

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer 3 in Beaverton, OR, with a contract length of "unknown" and a pay rate of "unknown." Requires 8+ years in IT, 6+ years Python expertise, and 4+ years in Data Warehousing (Databricks, Snowflake).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 1, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Beaverton, OR
🧠 - Skills detailed
#Agile #AWS (Amazon Web Services) #IAM (Identity and Access Management) #Data Science #Python #Azure Data Factory #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #MongoDB #NumPy #Talend #Triggers #GIT #Snowflake #Azure #NoSQL #Data Pipeline #Cloud #EC2 #Alteryx #PySpark #Hadoop #S3 (Amazon Simple Storage Service) #MySQL #DynamoDB #Lambda (AWS Lambda) #Pandas #AWS Glue #RDBMS (Relational Database Management System) #REST (Representational State Transfer) #Airflow #Data Warehouse #DevOps #Spark (Apache Spark) #Schema Design #Data Modeling #REST API #Computer Science #Data Processing #Apache Airflow #Redis #ADF (Azure Data Factory) #Libraries #JSON (JavaScript Object Notation) #Data Engineering #Athena #Databricks
Role description

Job Description: Senior Data Engineer

Location: Beaverton, OR

Responsibilities

   • Python programmers/developers who have done extensive hands-on work in the data engineering space.

   • Willingness to quickly learn and adapt.

   • Experience in designing and implementing data pipelines, data curation, data modeling, and data solutions.

   • Strong understanding of different types of data and the data lifecycle.

   • Design, develop, and launch highly efficient and reliable data pipelines using Python frameworks to move data and to provide intuitive analytics to our partner teams (a minimal sketch follows this list).

   • Collaborate with other engineers and Data Scientists at the client to arrive at the best solutions.

   • Diagnose and solve issues in our existing data pipelines and envision and build their successors.
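
A minimal sketch of the kind of Python pipeline step described above, assuming a pandas/boto3 stack; the bucket, key, and column names are hypothetical placeholders, not details from this posting:

```python
# Minimal pipeline-step sketch: pull a raw extract from S3, curate it with
# pandas, and return the frame ready for staging into a warehouse.
import io

import boto3
import pandas as pd


def run_daily_orders_load(bucket: str = "example-raw-bucket",
                          key: str = "orders/2025-04-01.csv") -> pd.DataFrame:
    """Read a raw CSV from S3, apply light curation, and return the frame."""
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    df = pd.read_csv(io.BytesIO(body))

    # Basic curation: normalize column names, drop exact duplicates, and
    # coerce the order date so downstream models see a single type.
    df.columns = [c.strip().lower() for c in df.columns]
    df = df.drop_duplicates()
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

    # In a real pipeline this frame would be written on to Databricks or
    # Snowflake; returning it keeps the sketch self-contained.
    return df


if __name__ == "__main__":
    print(run_daily_orders_load().head())
```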

Required Qualifications

   • Bachelor’s degree in Computer Science or equivalent work experience.

   • 8+ years of experience in IT.

   • 6+ years of proficiency with Python, specifically for data processing, including Python libraries such as Pandas, NumPy, PySpark, PyOdbc, PyMsSQL, Requests, Boto3, SimpleSalesforce, and JSON.

   • 4+ years of experience with Data Warehouse technologies – Databricks and Snowflake.

   • 4+ years of strong SQL skills (query performance, Stored Procedures, Triggers, schema design) and knowledge of one or more RDBMS and NoSQL databases such as MSSQL/MySQL and DynamoDB/MongoDB/Redis.

   • 4+ years in designing, developing, and managing REST APIs.

   • 2+ years of strong AWS skills, including AWS Data Exchange, Athena, CloudFormation, Lambda, S3, the AWS Console, IAM, STS, EC2, and EMR.

   • 2+ years with ETL tools such as Apache Airflow, AWS Glue, Azure Data Factory, Talend, or Alteryx (see the Airflow sketch after this list).

   • 1+ year with Hadoop and Hive.

   • Excellent verbal communication skills.

   • Knowledge of DevOps/Git for agile planning and code repository management.
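
As one illustration of the ETL-orchestration experience listed above, here is a minimal Apache Airflow DAG sketch; it assumes Airflow 2.4+ and the DAG id, schedule, and task bodies are placeholder assumptions rather than anything specified in this posting:

```python
# Minimal Airflow DAG sketch: wire an extract task and a load task so the
# load only runs after the extract succeeds.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder for pulling source data (e.g., from S3 or a REST API).
    print("extracting raw data")


def load(**context):
    # Placeholder for loading curated data into Databricks or Snowflake.
    print("loading curated data")


with DAG(
    dag_id="example_daily_orders",     # hypothetical DAG id
    start_date=datetime(2025, 4, 1),
    schedule="@daily",                 # requires Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # run the load only after the extract succeeds
```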