We are seeking an experienced Data Engineer with 2 years of professional experience building and maintaining data pipelines and working with cloud services. The ideal candidate will have strong SQL skills, experience with cloud platforms (preferably AWS), and the ability to develop and optimize scalable data solutions. Experience with workflow orchestration tools such as Apache Airflow, knowledge of CI/CD practices, and expertise in data ingestion into data marts for analytics and reporting are highly desirable.

Responsibilities
- Design, develop, and maintain robust ETL/ELT pipelines for large-scale data processing.
- Manage and optimize data pipelines using SQL and cloud technologies.
- Develop and manage DAGs (Directed Acyclic Graphs) in Apache Airflow for workflow orchestration and scheduling.
- Work with AWS services such as S3, Lambda, Redshift, and RDS to store, process, and transform data.
- Design and implement efficient data ingestion strategies from diverse sources into data marts to support business intelligence and reporting needs.
- Collaborate with data scientists and analysts to understand data requirements and ensure data is accurate, accessible, and secure.
- Write complex SQL queries to extract, transform, and analyze data for reporting and analytics.
- Implement CI/CD pipelines to automate testing, deployment, and monitoring of data workflows.
- Monitor data pipeline performance and troubleshoot issues to ensure data accuracy and timeliness.
- Optimize data storage and retrieval strategies for performance and scalability.
- Maintain data integrity and compliance with data governance policies.

Qualifications
- 2 years of hands-on experience in data engineering or related fields.
- Strong proficiency in SQL and experience working with relational databases (e.g., PostgreSQL, MSSQL).
- Experience with AWS cloud services (e.g., S3, Redshift, Lambda, RDS).
- Experience developing and managing workflows using Apache Airflow.
- Solid experience in data ingestion, transformation, and loading into data marts or analytical databases.
- Familiarity with CI/CD tools and practices (e.g., Git, GitHub Actions, Jenkins, or similar).
- Familiarity with ETL/ELT processes, data pipeline frameworks, and Medallion Architecture (tiered architecture).
- Experience with data warehousing concepts and tools.
- Strong analytical skills and the ability to work with large datasets.
- Proficiency in Python or another scripting language is a plus.
- Good understanding of data security and governance best practices.
- Bachelor's degree in Computer Science, Engineering, or a related field.

Job Details
Seniority level: Associate
Employment type: Full-time
Job function: Information Technology and Analyst
Industries: Appliances, Electrical, and Electronics Manufacturing
Location: Shah Alam, Malaysia
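For candidates unfamiliar with the terms, the data-mart ingestion and Medallion Architecture items above describe tiered bronze (raw), silver (cleaned), and gold (reporting) layers. Below is a minimal, purely illustrative sketch of such a flow, using Python's built-in SQLite driver in place of Redshift/RDS; all table names, columns, and sample records are hypothetical and not part of this role's actual systems.

```python
import sqlite3

def run_pipeline(raw_rows):
    """Toy medallion flow: land raw rows, clean them, aggregate for reporting."""
    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Bronze layer: land raw records exactly as received (all columns as text).
    cur.execute("CREATE TABLE bronze_orders (order_id TEXT, amount TEXT, region TEXT)")
    cur.executemany("INSERT INTO bronze_orders VALUES (?, ?, ?)", raw_rows)

    # Silver layer: type and standardize the data; drop rows whose amount
    # does not start with a digit (a deliberately simple validity check).
    cur.execute("""
        CREATE TABLE silver_orders AS
        SELECT order_id,
               CAST(amount AS REAL) AS amount,
               UPPER(TRIM(region)) AS region
        FROM bronze_orders
        WHERE amount GLOB '[0-9]*'
    """)

    # Gold layer: aggregate into a data-mart-style table for BI/reporting.
    cur.execute("""
        CREATE TABLE gold_sales_by_region AS
        SELECT region, SUM(amount) AS total_sales, COUNT(*) AS order_count
        FROM silver_orders
        GROUP BY region
    """)
    result = cur.execute(
        "SELECT region, total_sales, order_count FROM gold_sales_by_region ORDER BY region"
    ).fetchall()
    con.close()
    return result

raw = [
    ("A-1", "10.50", " north "),
    ("A-2", "oops", "north"),   # malformed amount, filtered at the silver layer
    ("A-3", "4.50", "South"),
]
print(run_pipeline(raw))  # [('NORTH', 10.5, 1), ('SOUTH', 4.5, 1)]
```

In production this same tiering would typically run as Airflow tasks (one per layer) against S3 and Redshift rather than as a single in-memory function.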