
Senior Data Scientist

This role is a Senior Data Scientist on a 12-month contract, paying "competitive rates." Candidates must have 4+ years of experience, strong Python skills, and proficiency in Docker, RabbitMQ, and SQLite. Familiarity with generative AI and cloud platforms is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown (the source listing also shows a figure of 360)
🗓️ - Date discovered
February 15, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Kubernetes #Data Processing #Libraries #Data Engineering #Data Modeling #Scala #Strategy #AWS (Amazon Web Services) #Data Pipeline #ML (Machine Learning) #Pandas #Data Integrity #Database Management #SQL (Structured Query Language) #TensorFlow #NumPy #Datasets #Computer Science #Cloud #Python #AI (Artificial Intelligence) #Visualization #Azure #Docker #Data Science #Data Analysis
Role description

Overview

The Senior Data Scientist plays a critical role in our organization by leveraging advanced analytical techniques to drive strategic decision-making. As an integral member of the data science team, you will be responsible for interpreting complex data sets to provide actionable insights that support business objectives. This position requires a strong foundation in statistical analysis, machine learning, and data modeling. The Senior Data Scientist will collaborate with various departments to identify opportunities for data-driven solutions and measure the impact of those solutions on department performance and overall company goals. Data plays a pivotal role in shaping our business strategy, and your expertise will empower us to remain competitive in an ever-evolving marketplace. By translating data trends into business opportunities, you will influence product development, enhance customer satisfaction, and optimize operational efficiencies.

This role will initially be a 12-month fixed-term contract with the possibility of extension.

We are looking for a highly skilled and versatile Data Scientist to join our team. The ideal candidate will have a strong technical background, be proficient in Python, and have experience managing data pipelines and working with technologies such as Docker, RabbitMQ, and SQLite. You will join a team contributing to GenAI features on our product roadmap.

Data Science and Engineering
• Develop and implement advanced data science models.
• Design and optimize data pipelines for various AI features across our product suite.
• Utilize Python and its major libraries (Pandas, Scikit-learn, NumPy, etc.) to analyze and process large datasets.
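For illustration, here is a minimal sketch of the kind of Pandas / scikit-learn / NumPy workflow these bullets imply. The synthetic dataset, feature transform, and model choice are assumptions made for the example, not details taken from the role:

```python
# Illustrative sketch only -- the dataset, features, and model are assumptions,
# not details from the posting.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large product dataset
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
df = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])
df["target"] = y

# Simple NumPy-based feature transform
df["feature_0_log"] = np.log1p(np.abs(df["feature_0"]))

# Train/evaluate split and a baseline model
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="target"), df["target"], test_size=0.2, random_state=42
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```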

Product Mastery
• Gain deep knowledge of the AI and generative AI features across our products.
• Work closely with the product development team to integrate advanced data science methodologies into our products.

Pipeline Management
• Design, build, and maintain scalable data pipelines that ensure smooth operation across various products.
• Optimize data processing workflows using tools like Docker, RabbitMQ, and SQLite (see the sketch after this list).
• Monitor and troubleshoot data pipelines, ensuring data integrity and performance.
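As a rough sketch of the kind of pipeline worker described above: a RabbitMQ consumer (using the pika client, assumed here) that persists messages to SQLite. The broker host, queue name, and table schema are illustrative assumptions, and in practice such a worker would typically run in its own Docker container alongside the broker:

```python
# Illustrative sketch only: queue name, broker host, and schema are assumptions.
import sqlite3

import pika  # RabbitMQ client; assumes a broker is reachable (e.g. via Docker)

DB_PATH = "pipeline.db"   # hypothetical local store
QUEUE = "feature_events"  # hypothetical queue name

conn = sqlite3.connect(DB_PATH)
conn.execute(
    "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)"
)

def handle_message(channel, method, properties, body):
    """Persist each message, then acknowledge it so it leaves the queue."""
    conn.execute("INSERT INTO events (payload) VALUES (?)", (body.decode(),))
    conn.commit()
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue=QUEUE, durable=True)
channel.basic_consume(queue=QUEUE, on_message_callback=handle_message)
print(f"Consuming from '{QUEUE}'; press Ctrl+C to stop.")
channel.start_consuming()
```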

Stakeholder Communication
• Communicate complex data insights to non-technical stakeholders in a clear and concise manner.

Research
• Stay up to date with the latest data science techniques and technology trends, and implement new methodologies as appropriate.

Qualifications
• Bachelor’s or Master’s degree in Data Science, Computer Science, Engineering, or a related field.
• 4+ years of experience in data science and data engineering roles.
• Strong proficiency in Python and major libraries such as Pandas, Scikit-learn, NumPy, and TensorFlow.
• Proven experience in building and managing data pipelines using Docker, RabbitMQ, and SQLite.
• Familiarity with SQL and database management.
• Strong problem-solving skills and the ability to work both independently and collaboratively.
• Experience with generative AI technologies.
• Familiarity with containerization and orchestration tools like Kubernetes.
• Experience with cloud platforms like AWS, Azure, or Google Cloud.

Skills: statistical modeling,data engineering,data,data visualization,cloud computing,generative ai,python,scikit-learn,sql,cloud,cloud platforms,tensorflow,statistical analysis,team collaboration,problem solving,numpy,sqlite,data analysis,docker,data modeling,azure,data science,rabbitmq,pipelines,pandas,google cloud,aws,machine learning