Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position for a 12+ month contract in Dallas, TX or Charlotte, NC, offering $60.00 - $65.00 per hour. Key skills include big data technologies, Python, ETL processes, and cloud platforms like AWS or Azure.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Up to $520 (derived from the $65.00/hour rate, assuming an 8-hour day)
🗓️ - Date discovered
March 30, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Python #Data Engineering #Cloud #Data Storage #Hadoop #Data Pipeline #ETL (Extract, Transform, Load) #API (Application Programming Interface) #Datasets #Talend #Storage #ML (Machine Learning) #Data Processing #Big Data #Automation #Database Design #Scala #Data Integration #Data Lake #Azure #Data Warehouse #Scripting #Data Access #Data Science #Database Performance #AWS (Amazon Web Services) #Data Manipulation #Agile #Programming #Spark (Apache Spark) #BI (Business Intelligence)
Role description

Client - Wells Fargo

12+ month contract

Location: Dallas, TX or Charlotte, NC

Overview

We are seeking a skilled and motivated Data Engineer to join our dynamic team. In this role, you will be responsible for designing, developing, and maintaining data pipelines that support our analytics and business intelligence initiatives. You will work closely with data scientists, analysts, and other stakeholders to ensure that our data infrastructure is robust, scalable, and efficient. The ideal candidate will have a strong background in big data technologies and a passion for transforming raw data into actionable insights.

Responsibilities

Design and implement scalable data pipelines using big data technologies.

Develop ETL processes to extract, transform, and load data from various sources into data warehouses (see the sketch after this list).

Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.

Optimize database performance and ensure data integrity through effective database design.

Utilize Python for scripting and automation of data processing tasks.

Implement RESTful APIs to facilitate data access for applications and services.

Participate in Agile development processes to deliver high-quality solutions in a timely manner.

Conduct model training and validation to support machine learning initiatives.

Work with cloud platforms such as AWS or Azure Data Lake to manage large datasets effectively.

Utilize tools like Talend for data integration tasks.
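
By way of illustration, here is a minimal Python ETL sketch of the pipeline work described above. The file names, table name, and SQLite target are placeholders assumed for the example, not details from this posting:

import sqlite3
import pandas as pd

def extract(csv_path):
    # Extract: read raw records from a source file (placeholder path)
    return pd.read_csv(csv_path)

def transform(df):
    # Transform: deduplicate rows and normalize column names
    df = df.drop_duplicates()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    return df

def load(df, db_path, table):
    # Load: write the cleaned records into a warehouse table
    with sqlite3.connect(db_path) as conn:
        df.to_sql(table, conn, if_exists="replace", index=False)

if __name__ == "__main__":
    load(transform(extract("raw_orders.csv")), "warehouse.db", "orders")

The same extract/transform/load split carries over to managed tools such as Talend or to cloud warehouses; SQLite simply keeps the sketch self-contained.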

Experience

Proven experience with big data technologies such as Hadoop or Spark is preferred (see the Spark sketch after this list).

Strong proficiency in the Python programming language for data manipulation and analysis.

Familiarity with AWS or Azure Data Lake services for cloud-based data storage solutions.

Experience with RESTful API development for seamless integration between systems.

Knowledge of Agile methodologies and practices in software development.

Understanding of model training processes and analytics techniques is a plus.

Demonstrated ability in database design principles and optimization strategies.
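
As a rough sketch of the Spark experience mentioned above, a small PySpark aggregation job; the input file, column names, and output path are hypothetical:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_totals").getOrCreate()

# Read a headered CSV of transactions (placeholder file)
df = spark.read.option("header", True).csv("transactions.csv")

# Cast the amount column to numeric and total it per day
daily = (
    df.withColumn("amount", F.col("amount").cast("double"))
      .groupBy("date")
      .agg(F.sum("amount").alias("total_amount"))
)

# Persist the result as Parquet for downstream warehouse loads
daily.write.mode("overwrite").parquet("daily_totals.parquet")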

Join us as we leverage the power of data to drive innovation and make informed decisions!

Job Types: Full-time, Contract

Pay: $60.00 - $65.00 per hour

Work Location: On the road