CloudMile
Federal Territory of Kuala Lumpur, Malaysia
We are seeking an experienced Senior Data Engineer to design, build, and maintain modern, scalable data platforms and pipelines. In this role, you will partner with tech leads, solution architects, and clients to deliver high-performance, reliable, and cost-efficient data solutions that power analytics, business intelligence, and AI initiatives. You will also play a key role in mentoring junior engineers and driving engineering best practices across the team.
Key Responsibilities
- Design, build, and optimize ETL/ELT pipelines using GCP services such as Dataflow, Dataproc, Composer (Airflow), Dataform, and Cloud Functions.
- Develop both batch and streaming pipelines to handle structured, semi-structured, and unstructured data (a minimal orchestration sketch follows this list).
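To make the orchestration bullet concrete, here is a minimal sketch of a daily batch ELT DAG on Cloud Composer, assuming Airflow 2.4+ with the Google provider installed. Every project, bucket, dataset, procedure, and task name is a hypothetical placeholder, not something specified by this role:

```python
# A minimal sketch, assuming Airflow 2.4+ on Cloud Composer with the Google
# provider installed. All project, bucket, dataset, and procedure names are
# hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="daily_orders_elt",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Land the day's raw export from Cloud Storage into a staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_orders",
        bucket="example-landing-bucket",
        source_objects=["orders/{{ ds }}/*.json"],
        destination_project_dataset_table="example-project.staging.orders",
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_TRUNCATE",  # safe to retry: each run overwrites
    )

    # Transform staging rows into the analytics model inside BigQuery.
    transform = BigQueryInsertJobOperator(
        task_id="transform_orders",
        configuration={
            "query": {
                "query": "CALL `example-project.analytics.sp_build_orders`('{{ ds }}')",
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform
```

Keeping load and transform as separate tasks means a failed transform can be retried without re-reading from Cloud Storage, and WRITE_TRUNCATE on the staging load keeps reruns idempotent.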
Data Modeling & Governance
- Implement scalable data models in BigQuery to support analytics, BI reporting, and machine learning workloads (see the sketch after this list).
- Apply data governance frameworks, quality checks, and metadata management practices to ensure trusted and compliant data.
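As a purely illustrative example of the modeling work above, the sketch below creates a date-partitioned, clustered fact table with the google-cloud-bigquery client; the project, dataset, table, and column names are assumptions:

```python
# A minimal sketch using the google-cloud-bigquery client; the project,
# dataset, table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table("example-project.analytics.fact_orders", schema=schema)
# Partition by event date and cluster by customer so date-bounded,
# per-customer queries scan only the blocks they need.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_ts"
)
table.clustering_fields = ["customer_id"]

client.create_table(table, exists_ok=True)
```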
Optimization & Reliability
- Optimize queries, pipeline performance, and storage costs while ensuring platform scalability, reliability, and fault tolerance (a cost-estimation sketch follows this list).
- Monitor, troubleshoot, and tune pipelines for continuous improvement.
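One low-risk way to approach the cost side of this work is BigQuery's dry-run mode, which estimates scanned bytes without executing (or billing) the query. A minimal sketch, reusing the hypothetical table from the modeling example above:

```python
# A minimal sketch: dry-run a query to estimate scanned bytes before running
# it for real. Table and column names continue the hypothetical model above.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    """
    SELECT customer_id, SUM(amount) AS total
    FROM `example-project.analytics.fact_orders`
    WHERE event_ts >= TIMESTAMP('2024-01-01')
    GROUP BY customer_id
    """,
    job_config=job_config,
)
# No bytes are billed on a dry run; total_bytes_processed is the estimate.
print(f"Estimated bytes scanned: {job.total_bytes_processed:,}")
```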
Migration & Integration
- Lead data migration projects from on-premises systems (e.g., Oracle, MicroStrategy) to modern cloud environments.
- Integrate data from multiple sources, including APIs, databases, and event streams, into unified data platforms.
- Work closely with analysts, BI developers, and business teams to enable self-service analytics and faster decision-making.
- Mentor and coach junior engineers, setting coding standards, reviewing designs, and sharing best practices.
Requirements
Experience
- More than 5 years of experience in data engineering, with at least 2 years on Google Cloud Platform (GCP).
- Proven track record of building and scaling cloud-native data platforms and pipelines.
Technical Skills
- Strong SQL and Python skills for data transformation, automation, and pipeline orchestration.
- Hands-on expertise with the GCP data stack (BigQuery, Dataflow, Dataproc, Composer, Dataform).
- Experience with workflow orchestration (Airflow/Composer) and CI/CD for data pipelines.
- Familiarity with modern data modeling concepts (relational, dimensional, and schema-on-read approaches).
Nice to Have
- Exposure to the Azure data stack (Synapse, Data Factory, Databricks, Logic Apps).
- Knowledge of data governance, metadata management, and security best practices.
Soft Skills
- Strong problem-solving, analytical, and communication skills.
- Ability to balance technical depth with business impact, influencing both engineering teams and business stakeholders.
Seniority level
Mid-Senior level
Employment type
Contract
Job function
Information Technology
Industries
IT Services and IT Consulting