Responsibilities:
- Develop and maintain data pipelines using Databricks and Python.
- Perform data transformations, cleansing, and aggregations.
- Work on large-scale distributed data processing (Spark).
- Optimize performance of data pipelines and workflows.
- Collaborate with data scientists and business teams on analytical use cases.
- Ensure data quality, security, and compliance.
Requirements:
- Bachelor's Degree in Computer Science, Data Engineering, or a related field.
- 5+ years of experience in data engineering or development.
- Strong hands-on experience with Databricks, Spark, and Python.
- Experience with cloud data platforms (Azure Data Lake, AWS Glue, GCP BigQuery).
- Strong SQL and ETL development experience.
- Good communication and teamwork skills.
Job Types: Full-time, Contract
Contract length: 12 months
Pay: RM4,… to RM11,787.52 per month
Ability to commute/relocate:
- Kuala Lumpur: Reliably commute or planning to relocate before starting work (Required)
Experience:
- Azure Data Lake: 3 years (Required)
- AWS Glue: 3 years (Required)
Work Location: In person