Data Migration Consultant

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Migration Consultant with a contract length of "X months" at a pay rate of "$X per hour." Required skills include ETL frameworks, Oracle databases, and cloud services (AWS, Azure), along with experience in data processing systems.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 15, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Azure Service Bus #Data Science #Data Integration #Scrum #BI (Business Intelligence) #Data Pipeline #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Spark (Apache Spark) #Storage #Visualization #Big Data #Cloud #Data Warehouse #Scripting #Azure #Databases #Data Processing #Oracle #Python #Consul #Agile #Data Storage #Migration #Data Architecture #ETL (Extract, Transform, Load) #ML (Machine Learning) #Data Migration #Data Engineering #Hadoop #Datasets #Computer Science #GIT #Version Control #Data Ingestion
Role description

RESPONSIBILITIES:

   • Develop, construct, test, and maintain data architectures and pipelines.

   • Create best-practice Extract, Transform, Load (ETL) frameworks: repeatable, reliable data pipelines that convert data into powerful signals and features (a minimal sketch of such a pipeline follows this list).

   • Handle raw data (structured, unstructured, and semi-structured) and align it into a more usable, structured format better suited for reporting and analytics.

   • Work with the cloud solutions architect to ensure data solutions are aligned with the company platform architecture and all related infrastructure.

   • Collaborate with business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision-making across the organization.

   • Ensure data pipeline architecture will support the requirements of the business.

   • Document processes and perform periodic system reviews to ensure adherence to established standards and processes.

   • Evaluate and advise on technical aspects of open work requests in the product backlog with the project lead.

   • Define cloud infrastructure reference architectures for common solution archetypes.
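
For context on the ETL bullet above, a repeatable extract-transform-load pipeline of the kind this role builds might reduce to something like the sketch below. This is a minimal illustration, not part of the job requirements: the file names, field names, and output schema are hypothetical, and it uses only the Python standard library.

```python
import csv
import json
from pathlib import Path

RAW_PATH = Path("raw_events.jsonl")       # hypothetical semi-structured input
OUT_PATH = Path("events_structured.csv")  # hypothetical structured output

def extract(path):
    """Yield one parsed record per JSON line, skipping unparseable lines."""
    with path.open() as fh:
        for line in fh:
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                continue  # a production pipeline would route bad rows to a dead-letter store

def transform(records):
    """Align raw records to a fixed schema suited for reporting and analytics."""
    for rec in records:
        if "id" not in rec:
            continue  # drop records missing the key field
        yield {
            "id": rec["id"],
            "user": str(rec.get("user", "")).strip().lower(),
            "amount_usd": round(float(rec.get("amount") or 0.0), 2),
        }

def load(rows, path):
    """Write the structured rows as CSV for downstream BI tools."""
    with path.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["id", "user", "amount_usd"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    load(transform(extract(RAW_PATH)), OUT_PATH)
```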

SKILLS & EXPERIENCE REQUIRED:

   • Proven experience as a data migration engineer or in a similar role, with a track record of manipulating, processing, and extracting value from large, disconnected datasets.

   • Demonstrated technical proficiency with data architecture, databases, and processing large data sets.

   • Proficient in Oracle databases, with a comprehensive understanding of ETL processes, including creating and implementing custom ETL processes.

   • Experience with cloud services (AWS, Azure) and an understanding of distributed systems such as Hadoop/MapReduce, Spark, or equivalent technologies.

   • Knowledge of Kafka, Kinesis, OCI Data Integration, Azure Service Bus, or similar technologies for real-time data processing and streaming (a minimal consumer sketch follows this list).

   • Experience designing, building, and maintaining data processing systems, as well as experience working with either a MapReduce or an MPP system.

   • Strong organizational, critical-thinking, and problem-solving skills, with a clear understanding of high-performance algorithms and Python scripting.

   • Hands-on experience with data warehouses.

   • Demonstrated experience in managing and optimizing data pipelines and architectures.

   • Strong understanding of streaming data platforms and pub-sub models.

   • In-depth knowledge of data warehousing concepts, including data storage, retrieval, and pipeline optimization.

   • Experience with machine learning toolkits, data ingestion technologies, data preparation technologies, and data visualization tools is a plus.

   • Excellent communication and collaboration abilities, with the capacity to work in a dynamic, team-oriented setting and adapt to change in a fast-paced environment.

   • Data-driven mindset, with the ability to translate business requirements into data solutions.

   • Experience with version control systems (e.g., Git) and with Agile methodologies/Scrum.

   • Certifications in a related field are an added advantage (e.g., Google Certified Professional Data Engineer, AWS Certified Big Data, etc.).
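
As a rough illustration of the pub-sub and streaming skills listed above, a minimal Kafka consumer in Python could look like the following. This is a sketch under stated assumptions only: the topic name and broker address are placeholders, it depends on the third-party kafka-python package and a reachable broker, and it says nothing about the actual stack used in this role.

```python
import json

from kafka import KafkaConsumer  # third-party package: kafka-python

# Hypothetical topic and broker; replace with values for a real cluster.
consumer = KafkaConsumer(
    "raw-events",
    bootstrap_servers="localhost:9092",
    group_id="migration-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

# Each message arrives as a deserialized dict; a real pipeline would
# validate and enrich it, then forward it to a warehouse or feature store.
for message in consumer:
    event = message.value
    print(f"partition={message.partition} offset={message.offset} event={event}")
```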

EDUCATION:

   • A bachelor’s degree in Computer Science, Data Science, Software/Computer Engineering, or a related field.