
Sr. LLM Engineer (Gen AI)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. LLM Engineer (Gen AI) in Dallas, TX, on a long-term contract. Requires 8+ years in ML, 2+ years in LLMs and Generative AI, expert Python and SQL skills, and cloud service knowledge (Azure, GCP, AWS).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 4, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Dallas, TX
🧠 - Skills detailed
#Scala #SQL (Structured Query Language) #Deployment #GCP (Google Cloud Platform) #Azure #Cloud #Data Science #AWS (Amazon Web Services) #Python #ML (Machine Learning) #Programming #AI (Artificial Intelligence)
Role description

Note: We are looking for candidates who can work on W2/1099 only.

Job Title: Sr. LLM Engineer (Machine Learning & Gen AI)

Location: Dallas, TX (3 days onsite Hybrid)

Project Duration: Long Term Contract

Required skills

   • 8+ years of professional experience in building Machine Learning models & systems.

   • 2+ years of hands-on experience with LLMs and generative AI techniques, particularly prompt engineering, retrieval-augmented generation (RAG), and agents.

   • Expert programming proficiency in Python, LangChain/LangGraph, and SQL is a must.

   • Understanding of cloud services, including Azure, GCP, or AWS.

   • Excellent communication skills to effectively collaborate with business SMEs.

Roles & Responsibilities

   • Develop and optimize LLM-based solutions: Lead the design, training, fine-tuning, and deployment of large language models, leveraging techniques like prompt engineering, retrieval-augmented generation (RAG), and agent-based architectures.

   • Codebase ownership: Maintain high-quality, efficient code in Python (using frameworks like LangChain/LangGraph) and SQL, focusing on reusable components, scalability, and performance best practices.

   • Cloud integration: Aid in deployment of GenAI applications on cloud platforms (Azure, GCP, or AWS), optimizing resource usage and ensuring robust CI/CD processes.

   • Cross-functional collaboration: Work closely with product owners, data scientists, and business SMEs to define project requirements, translate technical details, and deliver impactful AI products.

   • Continuous innovation: Stay abreast of the latest advancements in LLM research and generative AI, proposing and experimenting with emerging techniques to drive ongoing improvements in model performance.
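To illustrate the retrieval-augmented generation (RAG) technique this role centers on, here is a minimal sketch: retrieve the documents most relevant to a query, then assemble an augmented prompt for an LLM. The toy corpus, keyword-overlap retriever, and prompt template are illustrative placeholders standing in for a production vector store and model call.

```python
def _words(text: str) -> set[str]:
    """Normalize text to a set of lowercase words, stripping punctuation."""
    return {w.strip("?.,!").lower() for w in text.split()}

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents ranked by word overlap with the query."""
    return sorted(corpus, key=lambda d: len(_words(query) & _words(d)),
                  reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user question with retrieved context before the LLM call."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Toy knowledge base; in practice this would be an embedding-backed index.
corpus = [
    "Invoices are processed within 30 days of receipt.",
    "The cafeteria is open from 8am to 3pm.",
    "Expense reports require manager approval before payment.",
]
print(build_prompt("When are invoices processed?", corpus))
```

A real deployment would swap the keyword scorer for embedding similarity and send the assembled prompt to a hosted model, but the retrieve-then-augment structure is the same.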