
About

Data Engineer

Taipei City, Taiwan

Full Time

At SWAG Live, data is the backbone of how we innovate and grow. With our BI system established, the data team is now focused on building the next generation of platforms that power recommendation systems, AI agents, AI pipelines, and campaign-facing services, as well as external-facing data services that enhance our products.

We are seeking a Data Engineer who will design and operate the infrastructure that makes these AI-driven initiatives possible. Your work will center on building cost-efficient, reliable, and maintainable systems that deliver measurable business value. You’ll collaborate closely with data scientists and product teams to turn models into production-ready services, enabling personalization, campaign optimization, and intelligent product features.

This is a role for engineers who want to shape the future of applied AI while keeping efficiency at the core.

Responsibilities

  • Build and maintain cost-efficient data pipelines and warehouses that power analytical tools, recommendation systems, AI agents, and campaign-facing services.

  • Develop data services that integrate with external products, ensuring reliability, maintainability, and clear SLAs.

  • Optimize queries, schemas, and storage to maximize performance and minimize cost across transactional and analytical workloads.

  • Implement and operate event-driven streaming architectures for real-time personalization and campaign insights.

  • Collaborate with data scientists and product teams to move AI models from experimentation to production.

  • Create internal tools and frameworks that accelerate the productivity of analysts, scientists, and engineers.

  • Ensure data quality, governance, and observability across pipelines and services.

Requirements

  • Solid skills in Python and SQL, with experience in data pipelines, queries, and building simple external services (e.g., APIs or campaign systems).

  • Knowledge of transactional databases and data warehouses, including hands-on experience in schema design and query optimization.

  • Practical experience with streaming data pipelines using frameworks such as Apache Spark, Flink, Kafka, or Beam.

  • Familiarity with Google Cloud Platform (GCP) and ability to design or operate cost-efficient data workflows through workload tuning.

  • Self-motivated learner with strong problem-solving ability and willingness to grow in a fast-paced environment.

Good to have

  • Exposure to machine learning workflows (e.g., feature engineering, model serving, or MLOps on GCP such as Vertex AI).

  • Experience building or maintaining campaign-facing or external-facing services (APIs, SDKs, integrations).

  • Hands-on DevOps practices (CI/CD, IaC, monitoring) with a focus on improving developer experience.

  • Familiarity with data science domains such as search, recommendation systems, ads, content understanding/moderation, or anti-fraud.

Salary Range

Negotiable

Application for This Position

Please attach your resume to the email and include your name, contact number, and the best time for us to contact you. Thank you for your application!


Copyright © 2024 SWAG. All rights reserved. 
