
Site Reliability Engineer

This role is for a Senior Site Reliability Engineer (AWS) in Wilmington, DE, offering a 9-month contract to hire. Requires 7-8 years of experience with AWS, Big Data, and monitoring tools. Must have skills in Python, Shell scripting, and SQL.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 14, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Wilmington, DE
🧠 - Skills detailed
#Java #R #Shell Scripting #SQL Queries #Batch #SQL (Structured Query Language) #Perl #AWS (Amazon Web Services) #Data Migration #Spark (Apache Spark) #Grafana #Python #Migration #Big Data #Monitoring #Scripting
Role description

BCforward is looking for a Senior Site Reliability Engineer in Wilmington, DE.

Job Title: Site Reliability Engineer (AWS) (SRE)

Location: Wilmington, DE (Hybrid)

Duration: 9-month contract to hire

Minimum years of experience: 7-8+
•  AWS – Knowledge is a must. Experience should center on using AWS to store data in pipelines, not actual Java development.
•  Big Data – Basic to intermediate understanding; does not need to be an architect. Should understand data migration, managing data-migration pipelines provisioned in AWS, and managing and monitoring them via scheduler tools such as Control-M and R-Flow.
•  Spark – A basic understanding is good to have; we are not looking for an expert.
•  General Message – We are not trying to be unrealistic by making everything a must-have. The core need is AWS knowledge combined with Big Data migration and management via SQL queries through scheduler tools (Control-M and R-Flow).
•  Batch support and troubleshooting data flows – Needs experience monitoring and troubleshooting batch jobs, and understanding how they work and how data moves between systems.

Must have Skills:
• a. Skillset – AWS, Big Data, Spark, Python, Shell/Perl scripting, Control-M, Autosys, Grafana, AppDynamics, APICA
• b. Experience –
  • At least 5 years of experience in AWS, Big Data, Spark
  • 2-3 years in Python, Shell scripting