

Sr. Flink Data Streaming Engineer
Please do not apply if you are seeking a C2C/1099 arrangement.
We are considering W2 candidates only. H1B transfer is available if required.
Role: Sr. Big Data Streaming Engineer
Contract: 12+ months
Location options: Concord, CA / Charlotte, NC (Hybrid working model – 3 days each week onsite)
Alternative option: 2 days per week in San Francisco, CA and 1 day in Concord, CA for the right candidate
Job Summary: We are seeking a highly skilled Engineer Level 5 with expertise in Flink stream processing, MongoDB, Kafka, and Big Data technologies. The ideal candidate will have hands-on experience with OpenShift and Kubernetes for deploying and managing containerized applications in cloud environments. This role requires strong problem-solving skills, an analytical mindset, and a passion for working with real-time data processing at scale.
Key Responsibilities:
• Design, develop, and optimize real-time stream processing applications using Apache Flink.
• Develop and maintain MongoDB databases, ensuring high availability and performance.
• Work with Kafka for event-driven data streaming, message brokering, and real-time processing.
• Utilize Big Data frameworks and tools to build scalable and efficient data solutions.
• Deploy and manage applications on OpenShift and Kubernetes, ensuring scalability and resilience.
• Collaborate with cross-functional teams, including data engineers, software developers, and DevOps, to deliver high-performance solutions.
• Optimize application performance, troubleshoot issues, and implement best practices for distributed systems.
Required Qualifications:
• Extensive experience in Flink for stream processing and real-time analytics.
• Strong expertise in MongoDB database administration and development.
• Hands-on experience with Kafka for real-time event processing and messaging.
• Proven background in Big Data technologies and distributed computing.
• Proficiency in OpenShift and Kubernetes for container orchestration and cloud-native deployments.
• Experience working with large-scale, high-performance systems.
• Strong problem-solving skills and ability to work independently in a fast-paced environment.
Preferred Qualifications:
• Experience with additional Big Data frameworks (Hadoop, Spark, etc.).
• Knowledge of cloud platforms (AWS, GCP, or Azure) and cloud-native architectures.
• Familiarity with DevOps tools for CI/CD and infrastructure automation.
• Strong scripting and programming skills in Python, Java, or Scala.
If you are an experienced Big Data Engineer with a specialization in Flink stream processing, MongoDB, Kafka, and containerized deployments, we encourage you to apply!
EEO:
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment based on – Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”