
Big Data Developer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Big Data Engineer with a 12-month contract in San Francisco, CA (Hybrid). Key skills include Java or Scala, SQL, NoSQL, Apache Spark, and cloud platforms. Extensive big data experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 2, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
San Francisco, CA
🧠 - Skills detailed
#AWS (Amazon Web Services) #Unit Testing #Teradata #Redshift #Python #Deployment #Data Pipeline #NoSQL #Big Data #Programming #Kafka (Apache Kafka) #BigQuery #Apache Spark #Azure #Data Quality #SQL (Structured Query Language) #Data Modeling #NiFi (Apache NiFi) #Java #Cloud #Spark (Apache Spark) #Impala #Data Warehouse #Scala #Data Engineering #Airflow
Role description

Title: Senior Big Data Engineer

Location: San Francisco, CA - Hybrid

Duration: 12 Months

Mandatory Skills: Either Java or Scala, SQL and NoSQL, Apache Spark, Cloud Platforms

Data Engineer Responsibilities:

   • Perform system development and maintenance activities to meet service level agreements, delivering solutions with innovation, cost effectiveness, high quality, and faster time to market.

   • Support code versioning and code deployments for data pipelines.

   • Ensure test coverage for unit testing and support integration and performance testing.

   • Contribute ideas to help ensure that required standards and processes are in place.

   • Research and evaluate current and upcoming technologies and frameworks.

   • Build data expertise and own data quality for your areas.

Minimum Qualifications:

   • Strong Java/Scala development experience.

   • Strong SQL and NoSQL experience (handling structured and unstructured data).

   • Extensive experience with the Spark processing engine.

   • Experience with big data tools, technologies, and streaming (Hive, Impala, Oozie, Airflow, NiFi, Kafka).

   • Experience with Data Modeling.

   • Experience analyzing data to discover opportunities and address gaps.

   • Experience working with a cloud or on-prem big data platform (e.g., Netezza, Teradata, AWS Redshift, Google BigQuery, Azure Data Warehouse, or similar).

   • Programming experience in Python (nice to have, not mandatory).