
Data Modeler (Hybrid Onsite) - W2 Contract Only

This is a long-term W2 contract for a Data Modeler, hybrid onsite in Boston/Quincy, MA or Hartford, CT. It requires 5+ years of data modeling experience, expertise in Apache Spark, and BFS (Banking and Financial Services) domain experience; familiarity with cloud platforms is desirable.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
February 15, 2025
🕒 - Project duration
Long term
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Boston/Quincy, MA or Hartford, CT
🧠 - Skills detailed
#Big Data #Python #Azure Databricks #HBase #Data Quality #Apache Hive #Data Warehouse #Databricks #Kafka (Apache Kafka) #Physical Data Model #Spark (Apache Spark) #Data Processing #BigQuery #Data Analysis #Scala #Cloud #Azure #Data Modeling #Apache Spark #Data Management #Data Architecture #ERWin #SQL (Structured Query Language) #Redshift #Amazon Redshift #R #AWS (Amazon Web Services) #Hadoop
Role description

Our client, Xoriant Corporation, is seeking the following. Apply via Dice today!

Job Title : Data Modeler

Client Location: Boston/Quincy, MA or Hartford, CT (Hybrid 3 days a week onsite)

Contract: W2

Duration: Long term contract

Job Description:
Key Responsibilities:
• Design and develop robust data models using tools such as Erwin Data Modeler or IBM Data Architect, aligning with business requirements and industry best practices.
• Conduct thorough data analysis using tools like SQL, Python, or R to identify patterns, trends, and insights, ensuring high data quality and integrity.
• Apply hands-on Apache Spark experience for efficient data processing and analytics tasks; proficiency in related tools such as Databricks is a plus.
• Collaborate with cross-functional teams to understand data requirements and contribute to the development of scalable and efficient data solutions.
Key Requirements:
• Minimum of 5 years of proven experience as a Data Modeler, with a strong emphasis on hands-on data analysis.
• Proficient in Apache Spark for large-scale data processing, with a track record of implementing solutions for complex data scenarios.
• Solid understanding of the BFS (Banking and Financial Services) domain, with a focus on data modeling tailored to industry-specific needs.
• Expertise in creating conceptual, logical, and physical data models, ensuring alignment with business objectives and regulatory requirements.
• Familiarity with cloud platforms such as AWS (Amazon Redshift, Glue), Azure (Azure Databricks, SQL Data Warehouse), or Google Cloud (BigQuery) is highly desirable.
• Ability to optimize and fine-tune data models for performance and scalability using tools like Apache Hive or Apache HBase.
• Strong communication skills to articulate complex technical concepts to both technical and non-technical stakeholders.
• Proactive problem-solving attitude and the ability to work in a dynamic, fast-paced environment.
Preferred Qualifications:
• Previous experience in migrating and modernizing data platforms within the BFS domain.
• Certification in relevant technologies such as Apache Spark or cloud platforms is a plus.
• Knowledge of emerging trends in data management and analytics within the financial services industry.
• Familiarity with big data technologies like Hadoop, Kafka, or Flink.