Sr. Flink Data Streaming Engineer

This role is for a Sr. Flink Data Streaming Engineer with a contract length of 12+ months and an unspecified pay rate. It requires expertise in Flink, MongoDB, Kafka, OpenShift, and Kubernetes, with a hybrid work location in Concord, CA or Charlotte, NC.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 11, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#DevOps #Databases #Distributed Computing #Spark (Apache Spark) #Data Engineering #Database Administration #AWS (Amazon Web Services) #MongoDB #Programming #GCP (Google Cloud Platform) #Python #Azure #Automation #Data Framework #Data Processing #Cloud #Kubernetes #Deployment #Java #Big Data #Scripting #Kafka (Apache Kafka) #Scala #Hadoop
Role description

Please don't apply if you are looking for C2C/1099.

We are looking for W2 candidates ONLY. We are also able to support an H1B transfer if required.

Role: Sr. Big Data Streaming Engineer

Contract: 12+ months

Location options: Concord, CA / Charlotte, NC (Hybrid working model – 3 days each week onsite)

Alternative option: 2 days in San Francisco, CA and 1 day in Concord, CA for the right candidate

Job Summary: We are seeking a highly skilled Engineer Level 5 with expertise in Flink stream processing, MongoDB, Kafka, and Big Data technologies. The ideal candidate will have hands-on experience with OpenShift and Kubernetes for deploying and managing containerized applications in cloud environments. This role requires strong problem-solving skills, an analytical mindset, and a passion for working with real-time data processing at scale.

Key Responsibilities:
• Design, develop, and optimize real-time stream processing applications using Apache Flink (see the sketch after this list).
• Develop and maintain MongoDB databases, ensuring high availability and performance.
• Work with Kafka for event-driven data streaming, message brokering, and real-time processing.
• Utilize Big Data frameworks and tools to build scalable and efficient data solutions.
• Deploy and manage applications on OpenShift and Kubernetes, ensuring scalability and resilience.
• Collaborate with cross-functional teams, including data engineers, software developers, and DevOps, to deliver high-performance solutions.
• Optimize application performance, troubleshoot issues, and implement best practices for distributed systems.
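To make the core pattern behind these responsibilities concrete, here is a minimal sketch of a Flink job that consumes events from Kafka, applies a trivial transformation, and prints the result. This is illustrative only and not from the posting: the broker address, topic name, group ID, and class name are hypothetical, and it assumes the KafkaSource API from the flink-connector-kafka module.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EventStreamJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka source: broker address, topic, and group ID are hypothetical placeholders.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("events")
                .setGroupId("flink-consumer")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> events =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events");

        // Trivial stateless transformation; a real job would parse, key,
        // window, and write to a sink such as MongoDB.
        events.map(String::toUpperCase).print();

        env.execute("event-stream-job");
    }
}
```

In production, a job like this would typically be packaged as a container image and deployed on OpenShift or Kubernetes, for example via the Apache Flink Kubernetes Operator.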

Required Qualifications:
• Extensive experience in Flink for stream processing and real-time analytics.
• Strong expertise in MongoDB database administration and development.
• Hands-on experience with Kafka for real-time event processing and messaging.
• Proven background in Big Data technologies and distributed computing.
• Proficiency in OpenShift and Kubernetes for container orchestration and cloud-native deployments.
• Experience working with large-scale, high-performance systems.
• Strong problem-solving skills and ability to work independently in a fast-paced environment.

Preferred Qualifications:
• Experience with additional Big Data frameworks (Hadoop, Spark, etc.).
• Knowledge of cloud platforms (AWS, GCP, or Azure) and cloud-native architectures.
• Familiarity with DevOps tools for CI/CD and infrastructure automation.
• Strong scripting and programming skills in Python, Java, or Scala.

If you are an experienced Big Data Engineer with a specialization in Flink stream processing, MongoDB, Kafka, and containerized deployments, we encourage you to apply!

EEO:

“Mindlance is an Equal Opportunity Employer and does not discriminate in employment based on – Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”