

Need: LLM Engineer in Dallas, TX (Hybrid Contract Position)
Dice is the leading career destination for tech experts at every stage of their careers. Our client, VRTek Consulting, is seeking the following. Apply via Dice today!
Title: LLM Engineer
Location: Dallas, TX (3 days onsite Hybrid)
Rate: USD 70/Hr CTC
No H1B
About the role:
Turing is looking for people with LLM experience to join us in solving business problems for our Fortune 500 customers. You will be a key member of the Turing GenAI delivery organization and part of a GenAI project, working with a team of other Turing engineers across different skill sets. The Turing GenAI delivery organization has previously implemented industry-leading multi-agent LLM systems, RAG systems, and open-source LLM deployments for major enterprises.
Required Skills
• 5+ years of professional experience in building Machine Learning models & systems.
• 2+ years of hands-on experience with LLMs and generative AI techniques, particularly prompt engineering, RAG, and agents.
• Expert proficiency in Python, LangChain/LangGraph, and SQL is a must.
• Understanding of cloud services, including Azure, Google Cloud Platform, or AWS.
• Excellent communication skills to effectively collaborate with business SMEs.
Roles & Responsibilities
• Develop and optimize LLM-based solutions: Lead the design, training, fine-tuning, and deployment of large language models, leveraging techniques like prompt engineering, retrieval-augmented generation (RAG), and agent-based architectures.
• Codebase ownership: Maintain high-quality, efficient code in Python (using frameworks like LangChain/LangGraph) and SQL, focusing on reusable components, scalability, and performance best practices.
• Cloud integration: Aid in deployment of GenAI applications on cloud platforms (Azure, Google Cloud Platform, or AWS), optimizing resource usage and ensuring robust CI/CD processes.
• Cross-functional collaboration: Work closely with product owners, data scientists, and business SMEs to define project requirements, translate technical details, and deliver impactful AI products.
• Continuous innovation: Stay abreast of the latest advancements in LLM research and generative AI, proposing and experimenting with emerging techniques to drive ongoing improvements in model performance.
Jitender Sagar
Resource Manager
VRTEK Consulting
Email: