
Data/ML Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data/ML Engineer in San Francisco, CA, offering a 12-month hybrid contract. Requires 3-7 years of experience, strong Python and SQL skills, expertise in big data frameworks, and cloud experience (AWS/GCP/Azure).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
640
🗓️ - Date discovered
April 4, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
San Francisco, CA
🧠 - Skills detailed
#Lambda (AWS Lambda) #Java #Scala #SQL (Structured Query Language) #Deployment #AWS S3 (Amazon Simple Storage Service) #Kafka (Apache Kafka) #MLflow #Logging #SageMaker #Python #Data Engineering #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #Synapse #ETL (Extract, Transform, Load) #Prometheus #Azure #Docker #Kubernetes #ML (Machine Learning) #Databases #Data Privacy #AI (Artificial Intelligence) #Monitoring #Data Processing #Grafana #GCP (Google Cloud Platform) #TensorFlow #Compliance #Hadoop #PyTorch #Model Deployment #BigQuery #GDPR (General Data Protection Regulation) #Cloud #Data Quality #Data Framework #AWS (Amazon Web Services) #Data Science #NLP (Natural Language Processing) #A/B Testing #Airflow #Big Data #Data Architecture
Role description

Hiring: Data/ML Engineer – San Francisco, CA (Hybrid, 2 Days Onsite) | 12-Month Contract

We are looking for a Data/ML Engineer to join our team in San Francisco, CA. This role is hybrid (onsite 2 days per week, preferred) and offers a 12-month contract opportunity. If you're passionate about big data processing, cloud technologies, and MLOps, we’d love to hear from you!

🔹 What You’ll Do:

✅ Design and develop ETL/ELT pipelines for structured and unstructured data.

✅ Build scalable data architectures on AWS, GCP, or Azure.

✅ Optimize machine learning pipelines for training, validation, and deployment.

✅ Work with data scientists to productionize ML models (MLflow, TensorFlow, PyTorch, Scikit-learn).

✅ Implement MLOps best practices, including CI/CD pipelines for model deployment.

✅ Develop real-time data streaming solutions (Kafka, Kinesis, Flink).

✅ Automate workflows using Airflow, Prefect, or Dagster.

✅ Ensure data quality, governance, and compliance with industry standards.

✅ Monitor model performance and manage retraining pipelines.
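To give a concrete flavor of the ETL/ELT work described above, here is a minimal, library-free sketch of the extract/transform/load pattern. The field names (`user_id`, `amount`) are hypothetical; in practice these steps would run on Spark or similar and load into a warehouse rather than JSON lines:

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Parse raw CSV text into row dicts (the 'extract' step)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Normalize types and drop incomplete rows (the 'transform' step)."""
    clean = []
    for row in rows:
        if row.get("user_id") and row.get("amount"):
            clean.append({"user_id": row["user_id"], "amount": float(row["amount"])})
    return clean

def load(rows: list[dict]) -> str:
    """Serialize to JSON lines, standing in for a warehouse write (the 'load' step)."""
    return "\n".join(json.dumps(r) for r in rows)

raw = "user_id,amount\nu1,10.5\nu2,\nu3,4.0\n"
print(load(transform(extract(raw))))
```

In a production pipeline each step would be a separately schedulable, retryable task (e.g., an Airflow or Prefect task), which is what makes the data-quality checks in `transform` observable and enforceable.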

🔹 What We’re Looking For:

✔ 3-7 years of experience in Data Engineering/ML Engineering.

✔ Strong coding skills in Python and SQL (Scala/Java a plus).

✔ Expertise in big data frameworks (Spark, Hadoop, Dask).

✔ Experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn).

✔ Hands-on with containerization (Docker, Kubernetes).

✔ Familiarity with feature engineering, feature stores, and vector databases (Feast, Pinecone).

✔ Proficiency in MLOps tools (Kubeflow, MLflow, SageMaker, Vertex AI).

✔ Cloud experience (AWS S3, Lambda, SageMaker | GCP BigQuery, Vertex AI | Azure Synapse, ML Studio).

✔ Experience with monitoring/logging tools (Prometheus, Grafana, ELK Stack).

🔹 Bonus Points If You Have:

⭐ Experience in Retail, Finance, Healthcare, or E-commerce.

⭐ Exposure to A/B testing, recommendation systems, or NLP applications.

⭐ Understanding of data privacy regulations (GDPR, CCPA).