We are looking for an experienced Data Engineer to design, build, and maintain robust data pipelines and architectures that enable effective data processing and analysis across the organization. This role involves working closely with data scientists, analysts, and other stakeholders to ensure that the data infrastructure supports data-driven decision-making and business objectives. The ideal candidate will have a strong technical background, experience with big data technologies, and an analytical mindset.

Key Responsibilities:

Data Pipeline Development:
- Design, build, and maintain efficient, scalable ETL (Extract, Transform, Load) pipelines to ingest data from a variety of sources.
- Ensure data accuracy and consistency across pipelines and troubleshoot issues as they arise.

Data Architecture and Storage:
- Develop and optimize data storage solutions, including data lakes, data warehouses, and databases, tailored to specific use cases.
- Collaborate with IT to manage and scale data storage infrastructure.

Data Quality and Integrity:
- Implement data validation checks and monitoring processes to maintain high data quality standards.
- Monitor pipeline performance and troubleshoot issues related to data quality, accuracy, and latency.

Collaboration and Stakeholder Support:
- Work closely with data scientists, data analysts, and other stakeholders to understand data needs and provide infrastructure that supports analytical and machine learning workloads.
- Translate business requirements into data models and schema designs that support reporting, analytics, and business intelligence.

Automation and Optimization:
- Develop tools and frameworks for data automation, reducing manual processes and ensuring repeatability.
- Optimize pipeline performance for efficient processing and minimal downtime.