
Senior Azure Data Engineer - Mexico

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Senior Azure Data Engineer contract position for 40 hours per week, paying $17.00 - $25.00 per hour. Key requirements include 7+ years in data engineering, expertise in Azure Databricks, PySpark, and machine learning model development.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$136 - $200 (8-hour day at $17.00 - $25.00 per hour)
🗓️ - Date discovered
April 7, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Remote
🧠 - Skills detailed
#Computer Science #Pandas #PySpark #Model Deployment #Docker #Azure Synapse Analytics #ETL (Extract, Transform, Load) #NoSQL #Datasets #TensorFlow #Git #Data Processing #Spark (Apache Spark) #Mathematics #Python #Data Lake #Data Integration #Visualization #NumPy #Statistics #Azure Blob Storage #Scala #SQL (Structured Query Language) #Agile #BI (Business Intelligence) #Databricks #Azure Machine Learning #Azure Event Hubs #Data Analysis #Libraries #SciPy #Delta Lake #Tableau #Apache Kafka #Azure DevOps #Data Science #Cloud #Storage #DevOps #Azure #Big Data #Azure cloud #Version Control #Data Engineering #Data Pipeline #Azure Databricks #ML (Machine Learning) #Databases #Deployment #Spark SQL #Distributed Computing #Microsoft Power BI #Synapse #Keras #MLflow
Role description

Job Summary

As a Senior Azure Data Engineer, you will play a crucial role in leveraging cutting-edge Azure data technologies to drive data transformation, analytics, and machine learning initiatives. We seek a skilled professional with exceptional technical expertise and a passion for delivering data-driven solutions.

You will work with large, complex datasets and collaborate with cross-functional teams to design, build, and optimize data pipelines, machine learning workflows, and cloud data platform integrations. Your focus will include transforming raw data into actionable insights, ensuring high-performance distributed computing, and staying at the forefront of advancements in Azure and big data technologies. Your efforts will enable the organization to maximize the value of its data assets by implementing scalable and efficient cloud-based solutions.

Main Responsibilities 

Data Engineering & Transformation

Work with large, complex datasets and design scalable data pipelines on Azure Databricks using PySpark and Spark Pools.

Transform raw data into structured, actionable insights for data science and analytics use cases.
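The raw-to-structured step described above can be sketched as follows. This is a hedged, plain-Python stand-in (records as dicts rather than Spark rows) so it runs without a cluster; in Azure Databricks the same logic would typically be expressed as PySpark DataFrame operations (filter, withColumn). The field names and cleaning rules are illustrative assumptions, not the employer's actual schema.

```python
# Sketch: turning raw event records into analytics-ready rows.
# Plain Python stands in for PySpark here; in Databricks this would be
# df.filter(...) / df.withColumn(...) over a Spark DataFrame.
from datetime import datetime

def to_structured(raw_records):
    """Drop malformed events and normalise fields for analytics use."""
    structured = []
    for rec in raw_records:
        # Skip malformed records, as a Spark filter() would.
        if not rec.get("user_id") or "amount" not in rec:
            continue
        structured.append({
            "user_id": str(rec["user_id"]).strip().lower(),
            "amount_usd": round(float(rec["amount"]), 2),
            "event_date": datetime.fromisoformat(rec["ts"]).date().isoformat(),
        })
    return structured

rows = to_structured([
    {"user_id": " Alice ", "amount": "19.991", "ts": "2025-04-07T10:00:00"},
    {"user_id": None, "amount": "5", "ts": "2025-04-07T11:00:00"},
])
# rows keeps only the first record, cleaned and normalised.
```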

Machine Learning Development

Build, deploy, and maintain machine learning models in Azure Databricks using PySpark and frameworks like MLlib or TensorFlow.

Implement end-to-end machine learning workflows, from data collection to model deployment.
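An end-to-end workflow of that shape can be sketched in miniature. This example uses scikit-learn (listed under the role's ML tools) on synthetic data as a stand-in for a Databricks/MLlib job; the data, model choice, and metric are assumptions made purely for illustration.

```python
# Minimal end-to-end ML workflow sketch: collect -> train -> evaluate.
# scikit-learn on synthetic data stands in for a Databricks/MLlib job.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def run_workflow(seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    # "Data collection": synthetic features with a learnable signal.
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=seed
    )
    # "Training" and "evaluation" on a held-out split; model deployment
    # (e.g. MLflow registration) would follow here in a real pipeline.
    model = LogisticRegression().fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))

score = run_workflow()
```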

Cloud Data Platform Integration

Design and optimize solutions for integrating data from various sources such as Azure Blob Storage, Azure Data Lake, and SQL/NoSQL databases.

Leverage Databricks Notebooks and Delta Lake for advanced data processing and analysis.

Performance Optimization

Execute large-scale data processing jobs efficiently using Spark Pools.

Fine-tune Spark configurations and cluster resources to optimize distributed data processing tasks.
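The kinds of knobs involved in such tuning can be sketched as a spark-defaults-style fragment. The values below are illustrative assumptions to be tuned per workload and cluster, not recommendations:

```
# Illustrative Spark settings (values are assumptions, tuned per workload)
spark.sql.shuffle.partitions      400     # match partition count to data volume
spark.sql.adaptive.enabled        true    # let AQE coalesce skewed shuffles
spark.executor.memory             8g
spark.executor.cores              4
spark.dynamicAllocation.enabled   true    # scale executors with the job
```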

Job Requirements 

Education

Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, Statistics, or a related field.

Experience

7+ years of experience in data engineering, focusing on big data technologies and Azure cloud platforms.

3+ years of experience developing data integrations with Azure Databricks, PySpark, and Spark Pools, including large-scale data processing.

Technical Skills

Proficient in Python (specifically PySpark) and data analysis libraries like Pandas, NumPy, and SciPy.

Experience with Spark SQL, DataFrames, and RDDs for data processing and transformation.

Hands-on experience with Azure Data Lake, Azure Blob Storage, and Azure Synapse Analytics.

Familiarity with Databricks Notebooks, Delta Lake, and MLflow for model tracking and management.

Expertise in creating, optimizing, and managing Spark Pools for distributed computing.

Machine Learning

Knowledge of machine learning algorithms, model training, evaluation, and deployment.

Experience with tools like MLlib, TensorFlow, Keras, or scikit-learn for model development.

Soft Skills

Strong problem-solving abilities and analytical thinking.

Excellent communication skills to collaborate effectively with technical and non-technical stakeholders.

Ability to thrive in a fast-paced, agile environment while managing multiple tasks.

Preferred Skills (Nice-to-Have)

Experience with Apache Kafka or Azure Event Hubs for real-time data streaming.

Familiarity with CI/CD pipelines and version control tools (e.g., Git, Azure DevOps).

Knowledge of Docker and containerization for machine learning model deployment.

Experience with additional Azure tools like Azure Machine Learning Studio, Azure Functions, or Power BI for analytics.

Familiarity with Tableau or other data visualization tools to create interactive dashboards.
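Containerised model deployment of the kind mentioned above might look like the following Dockerfile sketch; the base image, file names, and port are illustrative assumptions:

```
# Illustrative Dockerfile for serving a trained model (names/port assumed)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY model.pkl serve.py ./
EXPOSE 8000
CMD ["python", "serve.py"]
```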

Job Type: Contract

Pay: $17.00 - $25.00 per hour

Expected hours: 40 per week

Schedule:

8-hour shift

Work Location: Remote