

Data and Analytics - Data Engineer 3 (#: 25-10720)
Job Description: Senior Data Engineer
Location: Beaverton, OR
Responsibilities
• Python programmers/developers who have done extensive hands-on work in the data engineering space.
• Willingness to quickly learn and adapt.
• Experience in designing and implementing data pipelines, data curation, data modeling, and data solutions.
• Strong understanding of different types of data and the data lifecycle.
• Design, develop, and launch highly efficient and reliable data pipelines using Python frameworks to move data and provide intuitive analytics to our partner teams.
• Collaborate with other engineers and Data Scientists at the Client to find the best solutions.
• Diagnose and solve issues in our existing data pipelines and envision and build their successors.
Required Qualifications
• Bachelor’s degree in Computer Science or equivalent work experience.
• Minimum of 8 years of experience in IT.
• 6+ years of proficiency with Python for data processing, including Python libraries such as Pandas, NumPy, PySpark, PyOdbc, PyMsSQL, Requests, Boto3, SimpleSalesforce, and JSON.
• 4+ years of experience with data warehouse technologies, specifically Databricks and Snowflake.
• 4+ years of strong SQL skills (query performance, stored procedures, triggers, schema design) and knowledge of one or more RDBMS and NoSQL databases such as MSSQL/MySQL and DynamoDB/MongoDB/Redis.
• 4+ years designing, developing, and managing REST APIs.
• 2+ years of strong AWS skills, including AWS Data Exchange, Athena, CloudFormation, Lambda, S3, the AWS Console, IAM, STS, EC2, and EMR.
• 2+ years with ETL tools such as Apache Airflow, AWS Glue, Azure Data Factory, Talend, or Alteryx.
• 1+ year with Hadoop and Hive.
• Excellent verbal communication skills.
• Knowledge of DevOps/Git for agile planning and code repository management.