Senior Data Engineer

This role is for a Senior Data Engineer with a long-term remote contract, requiring US citizenship. Key skills include SQL, ETL development, and experience with large data systems. A bachelor's degree and 7+ years in data engineering are essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
January 17, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Security #Database Administration #ETL (Extract, Transform, Load) #Data Quality #Data Integration #Data Warehouse #DynamoDB #SAP Hana #Qlik #Datasets #Python #SQL (Structured Query Language) #Data Mining #SAP #Data Pipeline #Big Data #Databases #Data Security #Visualization #Hadoop #Data Storage #Scala #Data Science #Storage #Splunk #Databricks #Data Governance #Data Engineering #Data Management #Data Accuracy #Oracle #Spark (Apache Spark)
Role description

We have an urgent requirement for a Senior Data Engineer on a long-term remote project. US citizens only.

Please attach your resume as a Word document and provide the details below for immediate submission:
• First Name:
• Last Name:
• Current Location:
• Contact Number:
• Email ID:
• Hourly Rate:
• Interviews or Offers in Pipeline:
• Work Authorization in USA:
• Interview Availability:
• Start Availability:
• Open to Relocate and Work Onsite:
• LinkedIn:

Senior Data Engineer

Duration: Long Term

Location: Remote (USA)

Note: This is a federal government project, so only US citizens are eligible for this role.

The Senior Data Engineer will work with our data warehousing team to transform transactional data into datasets consumable by a variety of consumers for reporting and analytics. The successful candidate will be flexible and forward-leaning, able to learn new tools and skills, and able to pick up new data domains as the data warehouse grows.

Responsibilities

Design, build, and maintain scalable and reliable data pipelines to support data integration, processing, and analysis.

Collaborate with data scientists, analysts, and other stakeholders to understand data needs and deliver high-quality data solutions.

Implement best practices for data management, including data governance, data quality, and data security.

Optimize and tune data processes for performance and cost-efficiency.

Develop and maintain ETL processes to ingest and transform data from various sources.

Create and manage data models, schemas, and databases to support data storage and retrieval.

Monitor and troubleshoot data pipelines to ensure data accuracy and availability.

Mentor and provide guidance to junior data engineers and other team members.

Stay up to date with the latest technologies and industry trends in data engineering.

Education and Experience

Bachelor’s degree plus at least 7 years in a data engineering role, including ETL development, database development, data integration, data mining, and big data.

Required Skills

Strong experience developing and maintaining complex SQL for ETL and reporting

Strong experience with large complex data systems

Experience performing database administration tasks (manage scheduled jobs, cluster configurations)

Experience performing software administration tasks (deploy applications, manage scheduled tasks)

Ability to learn new tools quickly as needed to provide new ideas for solving problems

Ability and desire to work with other program staff and customers to reach design decisions within given constraints

Excellent diplomacy and communication skills with both clients and technical staff

Desired Skills

Proficiency in Python and Scala

Experience using Spark and Hive

Experience with Qlik or other data visualization administration

Experience completing Databricks development and/or administrative tasks

Familiarity with some of these tools: DB2, Oracle, SAP, Postgres, Elasticsearch, Glacier, Cassandra, DynamoDB, Hadoop, Splunk, SAP HANA, Databricks

Experience working with federal government clients