Data Scientist II

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Scientist II, a 12-month remote contract position, requiring proficiency in Python, GenAI/ML modeling, and cloud environments. Key skills include CI/CD practices, SQL/NoSQL, and collaboration with cross-functional teams.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
560
🗓️ - Date discovered
April 16, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Cambridge, MA
🧠 - Skills detailed
#Cloud #SQL (Structured Query Language) #Data Engineering #BI (Business Intelligence) #Deep Learning #Spark (Apache Spark) #NoSQL #ML (Machine Learning) #AWS (Amazon Web Services) #Agile #Visualization #GCP (Google Cloud Platform) #R #Snowflake #Microsoft Power BI #AI (Artificial Intelligence) #Data Science #Scala #Tableau #Datasets #Databricks #Plotly #Python
Role description

Job Title: Data Scientist II

Location: Cambridge, MA 02141 (Remote)

Duration: 12-month contract

We are looking for a highly technical AI/ML Engineer or Data Scientist with strong hands-on experience in GenAI/ML modeling and pipeline orchestration. This person should be proficient in Python, experienced in working with cloud and high-performance computing environments, and skilled in developing and deploying production-ready AI/ML solutions using CI/CD practices. The candidate should be collaborative, a strong communicator, and able to explain technical concepts in plain terms for non-technical business stakeholders.

Day-to-Day Responsibilities

Modeling & Development:

Design and implement AI/ML models using complex datasets.

Work on advanced algorithms including supervised, unsupervised, deep learning, and Generative AI techniques.

Coding & Software Engineering:

Write clean, modular, and production-grade Python code.

Use R or Scala for specific modeling tasks as needed.

Pipeline Orchestration & CI/CD:

Build and manage automated ML pipelines using orchestration tools.

Deploy ML models in a production environment with CI/CD practices.

Cloud & Data Engineering:

Work in cloud or high-performance computing environments such as AWS, GCP, Databricks, and Spark.

Query and handle structured/unstructured data using SQL/NoSQL and Snowflake.

Collaboration & Communication:

Work in Agile teams focused on building AI products.

Communicate complex technical solutions to non-technical stakeholders using dashboards and visualizations (Tableau, Power BI, Plotly).

Cross-functional Work:

Collaborate with data engineers, product managers, and domain experts.

Understand business problems and translate them into AI solutions.