

IBM DataStage Developer
Role: IBM DataStage Developer
Type: Hybrid (Columbus, Ohio)
Job Summary:
We are seeking a highly skilled IBM DataStage Developer to design, develop, and maintain ETL (Extract, Transform, Load) processes for our data integration and warehousing projects. The ideal candidate will have expertise in IBM InfoSphere DataStage, strong knowledge of SQL, database design, and data warehousing concepts, and experience working with large-scale data transformation pipelines.

Key Responsibilities:
• Design, develop, and implement ETL workflows using IBM DataStage to support data integration and warehousing solutions.
• Extract, transform, and load data from multiple sources, including databases, flat files, APIs, and cloud platforms.
• Optimize and enhance existing ETL processes to improve performance and maintainability.
• Work with business analysts, data architects, and database administrators to understand requirements and deliver data solutions.
• Perform unit testing, debugging, and troubleshooting of ETL jobs to ensure accuracy and reliability.
• Develop data validation and quality checks to ensure data integrity.
• Manage job scheduling, automation, and monitoring using DataStage and other scheduling tools.
• Document technical specifications, best practices, and process flows for ETL solutions.
• Collaborate with cross-functional teams to support data governance and compliance requirements.
• Provide production support for ETL processes and resolve data-related issues.

Required Skills & Qualifications:
• 3+ years of hands-on experience with IBM InfoSphere DataStage (8.x/11.x or higher).
• Strong expertise in SQL, PL/SQL, and database design concepts (Oracle, SQL Server, DB2, PostgreSQL, etc.).
• Experience in data warehousing and ETL architecture, including star/snowflake schema design.
• Knowledge of UNIX/Linux shell scripting and experience working in command-line environments.
• Familiarity with performance tuning, optimization, and debugging of ETL jobs.
• Experience with cloud-based ETL solutions (AWS, Azure, GCP) is a plus.
• Exposure to big data technologies (Hadoop, Spark, Hive) is an advantage.
• Strong problem-solving skills and the ability to work independently in a fast-paced environment.
• Excellent verbal and written communication skills.

Preferred Qualifications:
• Experience with data modeling tools such as Erwin or Visio.
• Familiarity with Agile/Scrum methodologies and DevOps practices.
• Knowledge of Python, Java, or other scripting languages for data transformation.
• Prior experience in banking, healthcare, or retail domains is a plus.

Education:
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).