Senior Consultant | Data & AI | DevOps & SRE | Cybersecurity & Governance | Cloud Engineering
About Our Client
Our client is a fast-growing financial services startup backed by a reputable regional financial institution. With an expanding presence in Malaysia, the company is aggressively scaling its data engineering, analytics, and digital product capabilities to support its next phase of growth.
Built on a modern technology stack and a strong engineering culture, the organization places heavy emphasis on real-time data processing, automation, and data reliability to power its banking and fintech offerings.
Key Responsibilities:
- Design and build scalable data pipelines using technologies such as Scala, PySpark, and Apache Spark to support both batch and streaming data use cases.
- Design end-to-end data solutions covering ingestion, storage, processing, transformation, and consumption, ensuring they are modular, cost-efficient, and aligned with cloud-native best practices.
- Develop and maintain ETL/ELT workflows on modern data platforms (e.g., Databricks or equivalent), ensuring efficient data processing, transformation, and orchestration.
- Work with cross-functional teams (Engineering, Product, Analytics) to understand data requirements and deliver high-quality, well-structured datasets for business and product use.
- Apply strong coding and engineering practices to data pipelines, including version control, testing, documentation, and collaboration through Git-based workflows.
What We’re Looking For (Requirements)
- Minimum 5–8 years of hands-on experience in Data Engineering, preferably within fintech, digital banking, technology startups, or high-growth data environments.
- Strong proficiency in distributed data processing technologies, including Apache Spark, Scala, and/or PySpark, with proven experience building large-scale batch or streaming pipelines.
- Solid experience with modern data platforms, ideally Databricks, including notebook development, Delta Lake usage, job scheduling, and performance tuning.
- Practical expertise in cloud environments such as AWS, Azure, or GCP, including data storage, compute, networking basics, IAM, and cloud-native data tools.
- Strong engineering fundamentals, including version control (Git), environment management, testing practices, and building CI/CD pipelines for data workflows.
- Ability to design end-to-end data solutions, including ingestion, modelling, transformation, and consumption layers, with a good understanding of data architecture patterns (e.g., lakehouse, event-driven, ELT/ETL).
Seniority level
Mid‑Senior level
Employment type
Full‑time
Job function
Information Technology
Industries
Staffing and Recruiting, Financial Services, and Banking