Overview
Boost · Federal Territory of Kuala Lumpur, Malaysia
This role bridges the gap between data science and IT operations, enabling seamless model lifecycle management and scalable ML infrastructure. It is also responsible for designing, developing, and optimizing data pipelines, ensuring reliable and efficient data workflows across the organization for structured and unstructured data related to Boost's use cases, and for supporting Boost in adopting unified data modelling across AI, machine learning, and analytics projects. The role collaborates with cross-functional teams, including data scientists, analysts, software engineers, and DevOps, to optimize the production and deployment of machine learning solutions and to support and enhance data-driven decision-making and analytics.
Responsibilities
- Develop and automate deployment pipelines for machine learning models, ensuring smooth transition from development to production.
- Implement monitoring systems to track model performance and stability, and address model drift or data drift issues.
- Design, build, manage, and maintain end-to-end model lifecycle processes, including pipelines for ML models, CI/CD, performance evaluation and monitoring, experiments, observability, deployments, versioning, and serving.
- Manage and optimize ML infrastructure and cloud platforms (e.g., AWS, GCP, Azure, Databricks, Snowflake).
- Collaborate with data scientists and DevOps to support model scalability, reliability, and reproducibility in production.
- Maintain documentation for model pipelines, infrastructure, and deployment procedures to ensure consistency and compliance.
- Design, develop, manage, and maintain scalable ETL/ELT data pipelines to support ingestion, transformation, and integration of data from multiple sources.
- Design and implement robust data architectures, data models, and storage solutions to ensure efficient data processing and accessibility.
- Optimize and manage data warehouses, ensuring high availability, reliability, and performance.
- Implement data quality checks, monitoring, and governance protocols to maintain data accuracy, consistency, and integrity, including access management.
- Identify and address performance bottlenecks in implemented solutions.
- Maintain thorough documentation of data pipelines, catalogs, workflows and lineage for transparency and reproducibility.
- Support the centralized Data Platform initiative to create a single view of Boost users and merchants.
- Automate data pipelines to reduce manual effort and improve efficiency.
- Conform to best practices when designing or implementing a solution.
Qualifications
- Bachelor's Degree in Computer Science, Data Engineering, Machine Learning, or a related field, with 3+ years of experience in MLOps/Data Engineering for large data warehousing and analytics projects.
- Strong problem-solving, collaboration, adaptability to evolving technology, attention to detail, and the ability to communicate technical concepts to non-technical stakeholders.
- Strong knowledge of cloud platforms for data solutions (AWS/Azure/GCP/Databricks/Snowflake); experience with ETL/ELT tools (e.g., Apache Airflow, AWS Glue, Databricks Jobs/Pipelines); proficiency in data modeling and schema design.
- Proficiency in programming languages such as Python or Bash scripting.
- Proficiency in SQL and data warehouses (e.g., Redshift, BigQuery, Databricks, Snowflake).
- Familiarity with ML/MLOps frameworks (e.g., MLflow, TensorFlow, PyTorch, scikit-learn).
- Familiarity with data governance, data lake architectures, and big data processing frameworks (e.g., Apache Spark, Hadoop).
- Familiarity with Infrastructure as Code (IaC) tools (e.g., Terraform, Terragrunt, Serverless Framework).
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI/CD).
Seniority level: Executive
Employment type: Full-time
Job function: Analyst and Information Technology
Industries: Banking, Financial Services, and Insurance