Razer Inc. | Bangsar South, Federal Territory of Kuala Lumpur, Malaysia
Overview
Senior Data Engineer role at Razer. Responsible for leading technical initiatives in data engineering and AI data infrastructure, building scalable data pipelines, and supporting analytics, product optimization, and AI/ML applications. Collaborate with stakeholders to understand data needs and deliver robust data solutions. Exposure to cutting-edge AI/ML technologies as part of the innovation roadmap.
Responsibilities
- Lead the design and development of robust, scalable data pipelines for both traditional analytics and AI/ML workloads
- Build and maintain data architectures including data warehouses, data lakes, and real-time streaming solutions using tools like Redshift, Spark, Flink, and Kafka
- Implement and optimize data orchestration workflows using Airflow and data transformation processes using DBT (see the orchestration sketch after this list)
- Lead the design and implementation of dimensional modeling solutions
- Develop automated data workflows and integrate with DevOps/MLOps frameworks using Docker, Kubernetes, and cloud infrastructure
- Implement best practices for data governance, including data quality, security, compliance, data lineage, and access control
- Collaborate with data scientists, analysts, and business stakeholders to understand technical requirements and deliver reliable data infrastructure
- Demonstrate strong business sensitivity to ensure data solutions align with business objectives and requirements
- Support AI/ML initiatives by building feature stores, vector databases, and real-time inference pipelines
- Continuously explore and adopt new technologies in the data engineering and AI/ML space
- Proactively drive new initiatives and mentor junior team members
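For context on the Airflow-and-DBT orchestration responsibility above, the following is a minimal sketch of a daily extract/transform/test DAG, assuming Airflow 2.4+ with the dbt CLI available on the worker. The DAG id, script path, and dbt project directory are hypothetical placeholders, not part of the role description.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Minimal daily pipeline: extract raw data, transform with dbt, then validate.
    # All ids, paths, and commands below are illustrative placeholders.
    with DAG(
        dag_id="daily_sales_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+ argument; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract = BashOperator(
            task_id="extract_raw_events",
            bash_command="python /opt/pipelines/extract_events.py",  # hypothetical extraction script
        )
        transform = BashOperator(
            task_id="run_dbt_models",
            bash_command="dbt run --project-dir /opt/dbt/analytics",  # hypothetical dbt project
        )
        validate = BashOperator(
            task_id="run_dbt_tests",
            bash_command="dbt test --project-dir /opt/dbt/analytics",
        )

        # Linear dependency chain: extract, then transform, then test.
        extract >> transform >> validate

In practice the Bash steps would typically be replaced with dedicated provider operators, but the dependency pattern stays the same.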
Qualifications
- Bachelor's degree in Computer Science, Data Engineering, Statistics, or a related field
- 5+ years of experience in data engineering with a focus on scalable data architectures
- Expert proficiency in Python and SQL
- Hands-on experience with AWS Redshift, Apache Airflow, and DBT (Data Build Tool)
- Strong experience with big data frameworks: Apache Spark, Apache Flink, and Apache Kafka
- Solid understanding of Linux, Docker, and Kubernetes for containerization and orchestration
- Experience with at least one cloud platform (AWS preferred; GCP or Azure acceptable)
- Proven experience in dimensional modeling design and implementation
- Strong business acumen with sensitivity to business requirements and the ability to translate them into robust technical data solutions
- Fluent in English (reading, writing, and verbal communication)
- Experience in data governance, including data quality, security, access management, and data lineage
- Foundational knowledge of AI/ML workflows, model deployment pipelines, and LLM integration patterns
- Demonstrated ability to lead technical initiatives and drive adoption of new technologies independently
- Strong analytical and communication skills with experience working across cross-functional teams
Nice To Have
- Experience with OpenMetadata for data catalog and governance
- SQL Server database experience
- Experience in gaming, e-commerce, or fintech industries
Pre-Requisites
Are you game?
Employment type: Full-time | Seniority level: Mid-Senior level | Job function: Information Technology | Industries: Computers and Electronics Manufacturing