Data Engineer for Scalable Pipelines
The role focuses on designing, building, and maintaining scalable, high-performance data pipelines and architectures to support analytics and data integration. Responsibilities include developing ETL/ELT workflows using tools such as Apache Spark, Kafka, Airflow, and SSIS, and managing both structured and unstructured data.

The position requires close collaboration with data analysts, data scientists, and business stakeholders to deliver reliable data solutions while ensuring data quality, integrity, security, and governance. The role also involves monitoring and optimizing pipeline performance, supporting data migration, replication, and Change Data Capture (CDC) initiatives, and adhering to Agile and DevOps practices.

Strong SQL skills, experience with relational and NoSQL databases, data warehousing concepts, and reporting tools like SSRS are essential, along with strong problem-solving, communication, and teamwork skills.
Apply to this job