Attending this event?
In-person
21-23 August, 2024
Learn More and Register to Attend

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for KubeCon + CloudNativeCon + Open Source Summit + AI_Dev China 2024 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

Please note: This schedule is automatically displayed in Hong Kong Standard Time (UTC +8). To see the schedule in your preferred timezone, please select from the drop-down menu to the right, above "Filter by Date." The schedule is subject to change and session seating is available on a first-come, first-served basis. 

AI_dev: Open Source GenAI & ML Summit Sessions
Wednesday, August 21
 

11:00 HKT

Addressing Challenges of Cross-Architecture Dynamic Migration Over Heterogeneous Acceleration System | 解决异构加速系统上跨架构动态迁移的挑战 - Yanjun Chen, China Mobile
Wednesday August 21, 2024 11:00 - 11:35 HKT
With the surge in application computing demand, the industry has begun to run AI applications on diverse acceleration hardware (GPU, FPGA, NPU...) to gain more processing capability. One key problem in using diverse accelerators is toolchain and vendor lock-in across the application dev-to-run process. Cross-system (multi-arch chips + multi-vendor toolchains) application development and migration is hard to achieve. In this presentation, China Mobile will introduce practices for solving the above challenges, allowing AI applications to migrate smoothly among different accelerators. These include a unified abstraction for diverse accelerators, a middle compiler that uses existing compilers (CUDA, ROCm, oneAPI...) to achieve cross-architecture compilation in the same execution, and a runtime supporting dynamic, replaceable linking. We want to enable applications to migrate freely between diverse accelerators without changing development habits, and we will show the architecture design, open source plans, and a demo.

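For a flavor of what "dynamic and replaceable link" could mean in practice, here is an illustrative sketch (not China Mobile's implementation, which this abstract does not publish): a loader probes the node for whichever vendor runtime is present and falls back to CPU. The candidate library names are assumptions.

    # Illustrative only: pick an accelerator backend at runtime by probing
    # for vendor libraries. Library names are assumptions for this sketch.
    import ctypes
    import ctypes.util

    CANDIDATES = ["cudart", "amdhip64", "ze_loader"]  # NVIDIA, AMD, Intel

    def load_accelerator_runtime():
        """Return (name, handle) for the first vendor runtime found, else (None, None)."""
        for name in CANDIDATES:
            path = ctypes.util.find_library(name)
            if path:
                return name, ctypes.CDLL(path)
        return None, None

    backend, lib = load_accelerator_runtime()
    print(f"selected backend: {backend or 'cpu fallback'}")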
Speakers

Yanjun Chen

Open Source Expert, China Mobile
Yanjun Chen is an open source expert and CNCF delegate at China Mobile. She participates actively in many open source projects and is a TSC member of LF Edge Akraino.
Wednesday August 21, 2024 11:00 - 11:35 HKT
Level 1 | Hung Hom Room 3

11:50 HKT

Ethics in the Cloud: Safeguarding Responsible AI Development in Asia | 云计算中的伦理:在亚洲保障负责任的人工智能发展 - Quiana Berry, Red Hat
Wednesday August 21, 2024 11:50 - 12:25 HKT
Ethics serve as the compass guiding responsible innovation and societal progress. This presentation blends ethics, cloud computing, and AI advancement, spotlighting the imperative of upholding responsible AI practices, particularly within the Asian market. From safeguarding data privacy and fortifying cybersecurity to navigating regulatory compliance and governance, this comprehensive discourse delves into multifaceted dimensions essential for ethical AI development. As Asia, including China, propels the frontier of AI innovation, the imperative of embedding ethics and responsible practices becomes increasingly pronounced. This session is tailored to provide actionable strategies and regulatory insights for Asian leaders. Together, we'll empower attendees to become champions of responsible AI practices, fostering a culture of integrity and innovation in the vibrant and diverse tech landscape of Asia.

Speakers

Quiana Berry

Product Lead, Red Hat
Quiana is a dynamic cloud Product Lead at Red Hat/IBM, dedicated to crafting innovative developer tools and reshaping the future of technology. With an academic foundation encompassing Anthropology, Biology, and Chemistry and a specialty in the fusion of (DEI) and Ethical AI, Quiana... Read More →
Wednesday August 21, 2024 11:50 - 12:25 HKT
Level 1 | Hung Hom Room 3

13:50 HKT

Is Your GPU Really Working Efficiently in the Data Center? N Ways to Improve GPU Usage | 您的GPU在数据中心真的高效工作吗?提高GPU使用率的N种方法 - Xiao Zhang, DaoCloud
Wednesday August 21, 2024 13:50 - 14:25 HKT
AI has penetrated various industries, and companies have purchased many expensive AI GPU devices for training and inference. So what is the reality of how these devices are used? Is the usage rate really high? Are GPU cards monopolized by applications that barely use them? Do these AI devices work efficiently 24/7? This session will draw on our mass-production practices to summarize N ways to improve the utilization rate of AI equipment, such as:
* How to avoid monopolization and improve GPU usage through GPU-sharing technology
* How to improve GPU device usage through co-location in scenarios with obvious tidal patterns
* How to better match training and inference applications through GPU mark-group matching to improve GPU usage
This session builds on the practical production experience of two open source projects, HAMi and Volcano, and aims to give everyone a clearer understanding of how to improve GPU usage.

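To make GPU sharing concrete, the sketch below renders a pod manifest that requests a slice of a GPU using HAMi-style extended resources. The resource names (nvidia.com/gpumem, nvidia.com/gpucores) follow HAMi's public documentation but should be verified against your installed version; the values are illustrative.

    # A minimal sketch of a pod requesting a fractional GPU via HAMi-style
    # extended resources; apply the printed manifest with kubectl.
    import yaml  # pip install pyyaml

    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "shared-gpu-demo"},
        "spec": {
            "containers": [{
                "name": "worker",
                "image": "nvidia/cuda:12.4.0-base-ubuntu22.04",
                "command": ["sleep", "infinity"],
                "resources": {"limits": {
                    "nvidia.com/gpu": 1,        # one virtual GPU slice
                    "nvidia.com/gpumem": 4096,  # ~4 GiB of device memory
                    "nvidia.com/gpucores": 30,  # ~30% of compute time
                }},
            }]
        },
    }
    print(yaml.safe_dump(pod, sort_keys=False))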
Speakers

xiaozhang

Senior Technical Lead, DaoCloud
Xiao Zhang is the leader of the Container team (focused on infra, AI, Multi-Cluster, Cluster LCM, and OCI), an active Kubernetes / Kubernetes-sigs contributor and member, a maintainer of Karmada, kubean, and HAMi, a cloud-native developer, and a CNCF open source enthusiast. GithubID: waw... Read More →
Wednesday August 21, 2024 13:50 - 14:25 HKT
Level 1 | Hung Hom Room 3

14:40 HKT

Self-Hosted LLM Agent on Your Own Laptop or Edge Device | 在自己的笔记本电脑或边缘设备上自托管LLM Agent - Michael Yuan, Second State
Wednesday August 21, 2024 14:40 - 15:15 HKT
As LLM applications evolve from chatbots to copilots to AI agents, there are increasing needs for privacy, customization, cost control, and value alignment. Running open-source LLMs and agents on personal or private devices is a great way to achieve those goals. With the release of a new generation of open-source LLMs, such as Llama 3, the gap between open-source and proprietary LLMs is narrowing fast. In many cases, open-source LLMs already outperform SaaS-based proprietary LLMs. For AI agents, open-source LLMs are not just cheaper and more private. They allow customization through fine-tuning and RAG prompt engineering using private data. This talk shows you how to build a complete AI agent service using an open-source LLM and a personal knowledge base. We will use the open-source WasmEdge + Rust stack for LLM inference, which is fast and lightweight, without complex Python dependencies. It is cross-platform and achieves native performance on any OS, CPU, or GPU.

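To make the agent setup concrete, here is a minimal sketch of a client querying a self-hosted model through an OpenAI-compatible chat endpoint, such as the one the LlamaEdge API server exposes. The URL, port, and model name are assumptions about a local deployment; adjust them to yours.

    # Query a locally hosted, OpenAI-compatible LLM endpoint (stdlib only).
    import json
    import urllib.request

    payload = {
        "model": "llama-3-8b-instruct",  # placeholder local model name
        "messages": [
            {"role": "system", "content": "Answer using the user's private notes."},
            {"role": "user", "content": "Summarize my meeting notes from Monday."},
        ],
    }
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",  # assumed local address
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])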
Speakers

Michael Yuan

Product Manager, Second State
Dr. Michael Yuan is a maintainer of WasmEdge Runtime (a project under CNCF) and a co-founder of Second State. He is the author of 5 books on software engineering published by Addison-Wesley, Prentice-Hall, and O'Reilly. Michael is a long-time open-source developer and contributor... Read More →
Wednesday August 21, 2024 14:40 - 15:15 HKT
Level 1 | Hung Hom Room 3

15:35 HKT

Sit Back and Relax with Fault Awareness and Robust Instant Recovery for Large Scale AI Workloads | 坐和放宽,了解大规模 AI 负载场景下的故障感知和健壮的快速故障恢复 - Fanshi Zhang & Kebe Liu, DaoCloud
Wednesday August 21, 2024 15:35 - 16:10 HKT
Fault tolerance during training, fine-tuning, and even inference is crucial for modern AI workloads when they run at large scale on loads of GPU clusters. For training and fine-tuning tasks, failures of GPUs, storage, or any other hardware often extend training time significantly, to weeks and even months. For inference, when massive loads of requests come in and one of the inference servers goes faulty, we need a policy and a scheduler to perform mitigation and transfer the workloads quickly and efficiently. In this talk, we will introduce a series of mechanisms we have designed to help Kubernetes clusters and the workloads themselves locate and diagnose the root cause, then schedule and perform mitigation for any hardware or CUDA API call failures, reducing the overall operating challenges. But the possibilities do not stop there: the fault-awareness and mitigation scheduler can help any workload mitigate failures.

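The following is an illustrative sketch, not the speakers' implementation: a loop that looks for a GPU-fault signal on each node and cordons faulty nodes so the scheduler steers new workloads away. The GpuFault node condition is hypothetical; in practice a health-check agent would populate such a signal.

    # Cordon nodes that report a (hypothetical) GPU fault condition.
    from kubernetes import client, config  # pip install kubernetes

    config.load_kube_config()  # use load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        faulty = any(
            c.type == "GpuFault" and c.status == "True"
            for c in (node.status.conditions or [])
        )
        if faulty:
            # Mark the node unschedulable; existing pods still need eviction.
            v1.patch_node(node.metadata.name, {"spec": {"unschedulable": True}})
            print(f"cordoned {node.metadata.name}")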
Speakers

Kebe Liu

Senior software engineer, DaoCloud
Member of the Istio Steering Committee, focused on cloud native, Istio, eBPF, and other areas in recent years. Founder of the Merbridge project.

Neko Ayaka

Software Engineer, DaoCloud
Cloud native developer, AI researcher, Gopher with 5 years of experience in loads of development fields across AI, data science, backend, frontend. Co-founder of https://github.com/nolebase
Wednesday August 21, 2024 15:35 - 16:10 HKT
Level 1 | Hung Hom Room 3

16:25 HKT

Simplify AI Infrastructure with Kubernetes Operators | 使用Kubernetes Operators简化AI基础设施 - Ganeshkumar Ashokavardhanan, Microsoft & Tariq Ibrahim US, NVIDIA
Wednesday August 21, 2024 16:25 - 17:00 HKT
ML applications often require specialized hardware and additional configuration to run efficiently and reliably on Kubernetes. However, managing the cluster lifecycle and the diversity and complexity of hardware configuration across nodes can be challenging. How can we simplify and automate this process to ensure a smooth experience for Kubernetes users? Kubernetes Operators offer a great solution. In this session, we will go over operators and demonstrate how they can help automate the installation, configuration, and lifecycle management of AI-ready infra end to end, from cluster provisioning and K8s node configuration to deep learning model deployments. We will demo a fine-tuning LLM workload to showcase how existing operators in the ecosystem, such as the Cluster API Operator, GPU Operator, Network Operator, and the Kubernetes AI Toolchain Operator, can be used to simplify the infra. Finally, we will discuss challenges and best practices of using operators in production.

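To illustrate the reconcile pattern that such operators build on, here is a minimal sketch using kopf, a Python operator framework. The aiworkloads.example.com CRD is hypothetical; production operators like the GPU Operator do far more in their reconcile loops.

    # A toy operator: react to a hypothetical AIWorkload custom resource.
    # Run with: kopf run thisfile.py  (pip install kopf)
    import kopf

    @kopf.on.create("example.com", "v1", "aiworkloads")
    def on_create(spec, name, logger, **kwargs):
        gpus = spec.get("gpus", 1)
        logger.info(f"provisioning {gpus} GPU node(s) for {name}")
        # A real operator would create node pools, install drivers, and roll
        # out model-serving Deployments here.
        return {"phase": "Provisioning"}  # recorded under the resource status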
Speakers

Ganeshkumar Ashokavardhanan

Software Engineer, Microsoft
Ganesh is a Software Engineer on the Azure Kubernetes Service team at Microsoft, working on node lifecycle, and is the lead for the GPU workload experience on this kubernetes platform. He collaborates with partners in the ecosystem like NVIDIA to support operator models for machine... Read More →

Tariq Ibrahim US

Senior Cloud Platform Engineer, NVIDIA
Tariq Ibrahim is a Senior Cloud Platform Engineer on the Cloud Native team at NVIDIA where he works on enabling GPUs in containers and Kubernetes. He is a maintainer of the NVIDIA GPU Operator. He has also contributed to several cloud native OSS projects like kube-state-metrics, Istio... Read More →
Wednesday August 21, 2024 16:25 - 17:00 HKT
Level 1 | Hung Hom Room 3

17:15 HKT

Unlocking Heterogeneous AI Infrastructure K8s Cluster: Leveraging the Power of HAMi | 解锁异构AI基础设施K8s集群:发挥HAMi的力量 - Xiao Zhang, DaoCloud & Mengxuan Li, The 4th Paradigm
Wednesday August 21, 2024 17:15 - 17:50 HKT
With AI's growing popularity, Kubernetes has become the de facto AI infrastructure. However, the increasing number of clusters with diverse AI devices (e.g., NVIDIA, Intel, Huawei Ascend) presents a major challenge. AI devices are expensive: how can we improve resource utilization? How can we integrate better with K8s clusters? Managing heterogeneous AI devices consistently, supporting flexible scheduling policies, and providing observability all bring many challenges. The HAMi project was born for this purpose. This session includes:
* How K8s manages heterogeneous AI devices (unified scheduling, observability)
* How to improve device usage with GPU sharing
* How to ensure the QoS of high-priority tasks in GPU-sharing scenarios
* Flexible scheduling strategies for GPUs (NUMA affinity/anti-affinity, binpack/spread, etc.)
* Integration with other projects (such as Volcano, scheduler-plugins, etc.)
* Real-world case studies from production-level users
* Some remaining challenges and the roadmap

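As a small example of the observability angle, this sketch lists each node's allocatable accelerator resources, one way to see which heterogeneous devices a cluster actually exposes. The resource-name prefixes are examples; actual names depend on the installed device plugins.

    # List accelerator resources advertised by each node's device plugins.
    from kubernetes import client, config  # pip install kubernetes

    ACCEL_PREFIXES = ("nvidia.com/", "intel.com/", "huawei.com/")  # examples

    config.load_kube_config()
    for node in client.CoreV1Api().list_node().items:
        accels = {
            k: v for k, v in (node.status.allocatable or {}).items()
            if k.startswith(ACCEL_PREFIXES)
        }
        if accels:
            print(node.metadata.name, accels)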
Speakers

xiaozhang

Senior Technical Lead, DaoCloud
Xiao Zhang is the leader of the Container team (focused on infra, AI, Multi-Cluster, Cluster LCM, and OCI), an active Kubernetes / Kubernetes-sigs contributor and member, a maintainer of Karmada, kubean, and HAMi, a cloud-native developer, and a CNCF open source enthusiast. GithubID: waw... Read More →

Mengxuan Li

Senior Developer, The 4th Paradigm Co., Ltd
Reviewer in the Volcano community and founder of HAMi, a CNCF Landscape project. Responsible for the development of the GPU virtualization mechanism in Volcano, which has been merged into Volcano's master branch and will be released in v1.8. Speaker at OpenAtom Global Open Source Commit #2023, speaker... Read More →
Wednesday August 21, 2024 17:15 - 17:50 HKT
Level 1 | Hung Hom Room 3
 
Thursday, August 22
 

11:00 HKT

Unlocking the Power of Kubernetes: AI-Driven Innovations for Next-Gen Infrastructure | 释放 Kubernetes 的力量:面向下一代基础设施的 AI 驱动创新 - Brandon Kang, Akamai Technologies
Thursday August 22, 2024 11:00 - 11:35 HKT
My session is about the dynamic synergy between Kubernetes and AI, unveiling a transformative paradigm shift in modern infrastructure management. The presentation shows how Kubernetes serves as an enabler for deploying and scaling AI workloads efficiently, optimizing resource utilization, and ensuring unparalleled scalability. Delving deeper, it explores the realm of AI-powered automation, showcasing how intelligent algorithms enhance auto-scaling, workload optimization, and predictive maintenance within Kubernetes clusters. Moreover, it sheds light on the crucial aspect of security, elucidating how AI-driven measures bolster threat detection and anomaly identification, fortifying Kubernetes environments against potential risks. This presentation beckons organizations to embrace the convergence of Kubernetes and AI, unlocking boundless possibilities to redefine infrastructure management and propel towards unprecedented efficiency and resilience.

Speakers

Brandon Kang

Principal Technical Solutions Architect, Akamai Technologies
Brandon Kang is a Cloud Specialist at Akamai Technologies, where he oversees cloud computing projects across the APJ markets, including China. Before his tenure at Akamai, Brandon was a software engineer at Samsung, a program manager at Microsoft, and a service platform expert at... Read More →
Thursday August 22, 2024 11:00 - 11:35 HKT
Level 1 | Hung Hom Room 3

11:50 HKT

VeScale: A PyTorch Native LLM Training Framework | veScale:一个PyTorch原生LLM训练框架 - Hongyu Zhu, ByteDance
Thursday August 22, 2024 11:50 - 12:25 HKT
The era of giant LLMs calls for distributed training. Despite the countless distributed training frameworks published in the past decade, few have excelled in real industry production, as the quality favored most is often Ease of Use rather than pure Performance. Ease of Use lies in two essentials -- PyTorch and Automatic Parallelism -- because: i) the PyTorch ecosystem dominates, owning 92% of the models on HuggingFace, and ii) giant models cannot be trained without complex nD parallelism. Currently, this Ease of Use is "broken" for industry-level frameworks, as they are either not PyTorch-native (TensorFlow/JAX) or not fully automated (Megatron/DeepSpeed/torch). We propose a novel framework that combines PyTorch nativeness and automatic parallelism for scaling LLM training with ease of use. We only expect developers to write single-device torch code; the framework automatically parallelizes it into nD parallelism, with all the heavy lifting handled transparently.

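For reference, below is the kind of plain single-device PyTorch code the abstract says developers should keep writing; an auto-parallel framework like veScale would then shard it into nD parallelism. The sketch is ordinary torch and runs on one device as-is; the veScale-specific API is deliberately not shown, since it is not described in this abstract.

    # Ordinary single-device PyTorch: one forward/backward/optimizer step.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(8, 1024)       # one micro-batch of activations
    loss = model(x).pow(2).mean()  # dummy objective for the sketch
    loss.backward()
    opt.step()
    opt.zero_grad()
    print(f"loss: {loss.item():.4f}")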
Speakers

Hongyu Zhu

Machine Learning System Software Engineer, ByteDance
Hongyu is a Machine Learning System Engineer in ByteDance AML group, working on systems and compilers for training workloads. He got his PhD degree from University of Toronto, where he worked with Professor Gennady Pekhimenko. He is generally interested in machine learning compilers... Read More →
Thursday August 22, 2024 11:50 - 12:25 HKT
Level 1 | Hung Hom Room 3

13:50 HKT

Model Openness Framework: The Path to Openness, Transparency and Collaboration in Machine Learning Models | 模型开放框架:机器学习模型中开放性、透明度和协作的路径 - Ibrahim Haddad & Cailean Osborne, The Linux Foundation
Thursday August 22, 2024 13:50 - 15:15 HKT
Generative AI (GAI) offers unprecedented opportunities for research and innovation, but its commercialization has raised concerns about transparency, reproducibility, and safety. Many open GAI models lack the components necessary for full understanding and reproducibility, and some use restrictive licenses while claiming to be "open source". To address these concerns, the Generative AI Commons at the LF AI & Data Foundation has proposed the Model Openness Framework (MOF), a ranked classification system that rates machine learning models based on their completeness and openness, following the principles of open science, open source, open data, and open access. The MOF requires specific components of the model development lifecycle to be included and released under appropriate open licenses. This framework aims to prevent misrepresentation of models claiming to be open, guide researchers and developers in providing all model components under permissive licenses, and help individuals and organizations identify models that can be safely adopted without restrictions.

In this talk, we will discuss the MOF, demonstrate the Model Openness Tool (the tool that implements the framework), and discuss the benefits the MOF offers to both model producers and consumers. We strongly believe that wide adoption of the MOF will foster a more open AI ecosystem, benefiting research, innovation, and the adoption of state-of-the-art models.

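To illustrate the general idea of rating a release by which lifecycle components it includes, here is a toy sketch. The component list and the tally are purely illustrative; they are not the MOF's actual component set or class definitions.

    # Toy openness tally: which lifecycle components does a release include?
    RELEASED = {"weights", "inference_code", "model_card"}  # example release

    COMPONENTS = [
        "paper", "model_card", "data_card", "training_code",
        "inference_code", "weights", "training_data", "evaluation_results",
    ]

    included = [c for c in COMPONENTS if c in RELEASED]
    missing = [c for c in COMPONENTS if c not in RELEASED]
    print(f"included {len(included)}/{len(COMPONENTS)}: {included}")
    print(f"missing: {missing}")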
Speakers

Ibrahim Haddad

Executive Director, LF AI & Data Foundation

Cailean Osborne

Researcher, Linux Foundation
Cailean is a Researcher at the Linux Foundation and a PhD Candidate in Social Data Science at the Oxford Internet Institute, University of Oxford. His interests are in OSS, the digital commons, and public interest computing. Previously, Cailean worked as the International Policy Lead... Read More →
Thursday August 22, 2024 13:50 - 15:15 HKT
Level 1 | Hung Hom Room 3

15:35 HKT

Empower Large Language Models (LLMs) Serving in Production with Cloud Native AI Technologies | 利用云原生人工智能技术在生产环境中赋能大型语言模型(LLMs) - Lize Cai, SAP & Yang Che, Alibaba Cloud Intelligence
Thursday August 22, 2024 15:35 - 16:10 HKT
LLMs have heightened public expectations of generative models. However, as noted in the Gartner report, running AI applications in production poses significant challenges. To tackle these challenges, we have redesigned and optimized the software capabilities of cloud-native AI technologies. By extending KServe to handle OpenAI's streaming requests, it can accommodate the inference load of LLMs. With Fluid and Vineyard, we reduced Llama-30B model loading time from 10 minutes to under 25 seconds. The optimizations do not stop there: since LLM loading is not a high-frequency operation, it is crucial to use cronHPA for timed auto-scaling in order to strike a balance between cost and performance, and to evaluate the cost-effectiveness of the scaling process. As reviewers and maintainers of KServe and Fluid, we share our insights on these challenges in this session. We will showcase effective use of cloud-native AI and share our experiences in production.

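For orientation, the sketch below renders a minimal KServe InferenceService with a custom serving container and a GPU limit. The image and resource values are placeholders, and the streaming support and Fluid/Vineyard acceleration discussed in the talk require additional configuration not shown here.

    # A minimal KServe InferenceService manifest, rendered for illustration.
    import yaml  # pip install pyyaml

    isvc = {
        "apiVersion": "serving.kserve.io/v1beta1",
        "kind": "InferenceService",
        "metadata": {"name": "llama-30b"},
        "spec": {
            "predictor": {
                "containers": [{
                    "name": "kserve-container",
                    "image": "registry.example.com/llm-server:latest",  # placeholder
                    "resources": {"limits": {"nvidia.com/gpu": 4}},
                }]
            }
        },
    }
    print(yaml.safe_dump(isvc, sort_keys=False))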
Speakers

Yang Che

Senior Engineer, Alibaba Cloud Intelligence
Yang Che is a senior engineer at Alibaba Cloud. He works on the Alibaba Cloud container service team and focuses on Kubernetes and container-related product development. Yang also works on building an elastic machine learning platform on those technologies. He is an active contributor... Read More →

Lize Cai

Senior Software Engineer, SAP
Lize is a senior software engineer at SAP, based in Singapore. With a strong product mindset, Lize has extensive experience in building enterprise-grade machine learning platforms. A passionate advocate for open source technology, Lize actively contributes to various projects, including... Read More →
Thursday August 22, 2024 15:35 - 16:10 HKT
Level 1 | Hung Hom Room 3

16:25 HKT

Effortless Scalability: Orchestrating Large Language Model Inference with Kubernetes | 无缝扩展性:使用Kubernetes编排大型语言模型推理 - Joinal Ahmed & Nirav Kumar, Navatech Group
Thursday August 22, 2024 16:25 - 17:00 HKT
In the dynamic landscape of AI/ML, deploying and orchestrating large open-source inference models on Kubernetes has become paramount. This talk delves into the intricacies of automating the deployment of heavyweight models like Falcon and Llama 2, leveraging Kubernetes Custom Resource Definitions (CRDs) to manage large model files seamlessly through container images. The deployment is streamlined with an HTTP server facilitating inference calls using the model library. This session will explore eliminating manual tuning of deployment parameters to fit GPU hardware by providing preset configurations. Learn how to auto-provision GPU nodes based on specific model requirements, ensuring optimal utilization of resources. We'll discuss empowering users to deploy their containerized models effortlessly by allowing them to provide a pod template in the workspace custom resource's inference field. The controller, in turn, dynamically creates deployment workloads utilizing all GPU nodes.

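The workspace custom resource described above matches the style of the Kubernetes AI Toolchain Operator (KAITO). The sketch below follows its public examples, but treat the field names and values as approximate and check the project's current API before use.

    # A KAITO-style Workspace: GPU capacity plus a preset inference spec.
    import yaml  # pip install pyyaml

    workspace = {
        "apiVersion": "kaito.sh/v1alpha1",
        "kind": "Workspace",
        "metadata": {"name": "workspace-falcon-7b"},
        "resource": {
            "instanceType": "Standard_NC12s_v3",  # GPU VM size, cloud-specific
            "labelSelector": {"matchLabels": {"apps": "falcon-7b"}},
        },
        "inference": {"preset": {"name": "falcon-7b"}},
    }
    print(yaml.safe_dump(workspace, sort_keys=False))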
Speakers

Joinal Ahmed

AI Architect, Navatech Group
Joinal is a seasoned Data Science expert passionate about rapid prototyping, community involvement, and driving technology adoption. With a robust technical background, he excels in leading diverse teams through ML projects, recruiting and mentoring talent, optimizing workflows, and... Read More →

Nirav Kumar

Head of AI and Engineering, Navatech Group
Nirav Kumar is a leader in the field of Artificial Intelligence with over 13 years of experience in data science and machine learning. As Head of AI and Engineering at Navatech Group, he spearheads cutting-edge research and development initiatives aimed at pushing the boundaries of... Read More →
Thursday August 22, 2024 16:25 - 17:00 HKT
Level 1 | Hung Hom Room 3

17:15 HKT

Navigating the Ethical Horizon: Pioneering Responsible AI with the Generative AI Commons | 穿越伦理地平线:与生成式AI共同开创负责任的AI - Anni Lai, Futurewei
Thursday August 22, 2024 17:15 - 17:50 HKT
Join me to explore Responsible AI's vital role in shaping technology ethically. We'll navigate ethical dilemmas and societal impacts, emphasizing the urgency for frameworks prioritizing human well-being. At the core is the Responsible AI Framework by Generative AI Commons, guiding developers, researchers, and policymakers. Through transparency, fairness, accountability, and inclusivity, it empowers stakeholders to uphold ethical standards across the AI lifecycle. Let's journey towards an AI-powered future that's not just innovative but also ethically responsible.

Speakers

Anni Lai

Head of Open Source Operations, Chair of Generative AI Commons, LF AI & Data, Futurewei
Anni drives Futurewei’s open source (O.S.) governance, process, compliance, training, project alignment, and ecosystem building. Anni has a long history of serving on various O.S. boards such as OpenStack Foundation, LF CNCF, LF OCI, LF Edge, and is on the LF OMF board and LF Europe... Read More →
Thursday August 22, 2024 17:15 - 17:50 HKT
Level 1 | Hung Hom Room 3
 
Friday, August 23
 

10:35 HKT

Breaking Boundaries: TACC as an Unified Cloud-Native Infra for AI + HPC | 打破界限:TACC作为AI + HPC统一云原生基础设施 - Peter Pan, DaoCloud & Kaiqiang Xu, Hong Kong University of Science and Technology
Friday August 23, 2024 10:35 - 11:10 HKT
Large AI models are driving significant investment in GPU clusters. Yet managing these clusters is hard: Slurm-based HPC setups lack management granularity and stability, while Kubernetes poses usability challenges for AI users. This talk introduces TACC, an AI infra management solution that bridges the advantages of both K8s and Slurm setups. It is joint work by computer system researchers at HKUST and leading CNCF contributors at DaoCloud. TACC manages a large-scale cluster at HKUST that has supported over 500 active researchers since 2020. In this talk, we share our five-year journey with TACC, covering:
* [User Experience] A seamless UI for job submission and management, supporting both container and Slurm formats, all on the same backbone
* [Resource Management] Multi-tenant allocation with configurable strategies, using CNCF HAMi and Kueue
* [Performance and Scalability] A robust distributed infrastructure with networked storage and RDMA, via CNCF SpiderPool, Fluid...

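As a hedged illustration of the resource-management building blocks listed above, the sketch below renders a Kueue ClusterQueue carrying a GPU quota for one tenant. The flavor name and quota values are illustrative, not TACC's actual configuration.

    # A Kueue ClusterQueue granting one tenant a bounded share of the cluster.
    import yaml  # pip install pyyaml

    cluster_queue = {
        "apiVersion": "kueue.x-k8s.io/v1beta1",
        "kind": "ClusterQueue",
        "metadata": {"name": "research-group-a"},
        "spec": {
            "namespaceSelector": {},  # admit workloads from all namespaces
            "resourceGroups": [{
                "coveredResources": ["cpu", "memory", "nvidia.com/gpu"],
                "flavors": [{
                    "name": "default-flavor",
                    "resources": [
                        {"name": "cpu", "nominalQuota": 256},
                        {"name": "memory", "nominalQuota": "1Ti"},
                        {"name": "nvidia.com/gpu", "nominalQuota": 32},
                    ],
                }],
            }],
        },
    }
    print(yaml.safe_dump(cluster_queue, sort_keys=False))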
Speakers

Peter Pan

VP of R&D Engineering, DaoCloud
├ DaoCloud R&D Engineering VP
├ CNCF wg-AI (AI Working-Group) member
├ Maintainer of a few CNCF projects (GithubID: panpan0000): CloudTTY, KuBean, HwameiStor
├ Public Tech Events:
└─ 2023 KubeCon SH Speaker (https://sched.co/1PTFI)
└─ 2023 KubeCon EU Program Committee... Read More →

Kaiqiang Xu

Researcher, Hong Kong University of Science and Technology
Friday August 23, 2024 10:35 - 11:10 HKT
Level 1 | Hung Hom Room 3

13:20 HKT

Constructing the 10x Efficiency of Cloud-Native AI Infrastructure | 如何让你的 AI 底座效能提升 10 倍? - Peter Pan, DaoCloud & 秋萍 戴, DaoCloud
Friday August 23, 2024 13:20 - 13:55 HKT
Enterprises keep investing in AI. But once GPUs are installed in a data center, a challenge arises: how to construct an "AI cloud" atop bare metal. Even though K8s is recognized as the foundational infrastructure for AI, K8s is merely the initial step. Organizations may face challenges such as:
- Maximizing GPU utilization
- Unifying multi-arch accelerators/GPUs (k8s DRA)
- Organization quotas and cost management
- Resource isolation among organizations
- Smarter scheduling, tiered GPU allocation, task prioritization...
- Sharing GPU clusters between VMs & containers
- Harnessing the full potential of high-speed networks, storage optimization, and dataset orchestration
Leveraging open source stacks from the Linux Foundation and CNCF, we have experience building AI clouds for IDCs or internal usage. We will share that experience to empower communities' journeys towards constructing 10x-efficiency cloud-native AI. Refer to the `Additional resources` chapter for more details.

Speakers

Peter Pan

VP of R&D Engineering, DaoCloud
├ DaoCloud R&D Engineering VP
├ CNCF wg-AI (AI Working-Group) member
├ Maintainer of a few CNCF projects (GithubID: panpan0000): CloudTTY, KuBean, HwameiStor
├ Public Tech Events:
└─ 2023 KubeCon SH Speaker (https://sched.co/1PTFI)
└─ 2023 KubeCon EU Program Committee... Read More →

秋萍 戴

Product Manager, DaoCloud
QiuPing Dai has been a senior Technology Product Manager at DaoCloud for 5 years, involved in cloud computing (including Kubernetes compute, storage, and network) development work. Before that, QiuPing worked at IBM on cloud computing. QiuPing is interested in Storage, Network, Scheduling... Read More →
Friday August 23, 2024 13:20 - 13:55 HKT
Level 1 | Hung Hom Room 2

13:20 HKT

Write Once Run Anywhere, but for GPUs | GPU 时代的“一次编写,到处运行” - Michael Yuan, Second State
Friday August 23, 2024 13:20 - 13:55 HKT
With the popularity of LLM apps, there is an increasing demand for running and scaling AI workloads in the cloud and on edge devices. Rust and Wasm offer a solution by providing a portable bytecode that abstracts hardware complexities. LlamaEdge is a lightweight, high-performance, cross-platform LLM inference runtime. Written in Rust and built on WasmEdge, LlamaEdge provides a standard WASI-NN API to developers. Developers only need to write against the API and compile to Wasm. The Wasm file can run on any device, where WasmEdge translates and routes Wasm calls to the underlying native libraries such as llama.cpp. This talk will discuss the design and implementation of LlamaEdge and show how it enables cross-platform LLM app development and deployment. We will also walk through several code examples, from a basic sentence-completion app, to a chatbot, to a RAG agent app with external knowledge in vector databases, to a Kubernetes-managed app across a heterogeneous cluster.

Speakers

Michael Yuan

Product Manager, Second State
Dr. Michael Yuan is a maintainer of WasmEdge Runtime (a project under CNCF) and a co-founder of Second State. He is the author of 5 books on software engineering published by Addison-Wesley, Prentice-Hall, and O'Reilly. Michael is a long-time open-source developer and contributor... Read More →
Friday August 23, 2024 13:20 - 13:55 HKT
Level 1 | Hung Hom Room 3

14:10 HKT

Unveiling the Future: Nurturing Openness in AI Development | 揭示未来:培育人工智能开放性发展 - Anni Lai, Futurewei & Mer Joyce, Do Big Good LLC
Friday August 23, 2024 14:10 - 14:45 HKT
In the rapidly evolving landscape of AI, the concept of openness emerges as a cornerstone for ethical, accountable, and sustainable development. This talk delves into the significance of fostering openness in AI endeavors, exploring two groundbreaking efforts: the Open Source AI Definition led by the Open Source Initiative (OSI) and the Model Openness Framework (MOF) introduced by the LF AI & Data Generative AI Commons. Through the lens of the OSI definition's co-design process, we'll navigate the evolving landscape of Open Source AI, deciphering its potential to democratize access to cutting-edge technology while fortifying principles of inclusivity and collaboration. We'll unravel the transformative potential of the MOF to foster transparency and trust in AI models. By elucidating the core tenets of the framework and the definition, we'll illuminate pathways for advancing responsible AI development.

Speakers

Anni Lai

Head of Open Source Operations, Chair of Generative AI Commons, LF AI & Data, Futurewei
Anni drives Futurewei’s open source (O.S.) governance, process, compliance, training, project alignment, and ecosystem building. Anni has a long history of serving on various O.S. boards such as OpenStack Foundation, LF CNCF, LF OCI, LF Edge, and is on the LF OMF board and LF Europe... Read More →

Mer Joyce

Founder, Do Big Good LLC
Mer Joyce (she/her) is the founder of the co-design firm Do Big Good and is the facilitator of the Open Source Initiative's consultative process to co-design the Open Source AI Definition (OSAID). She has over a decade of international experience at the intersection of research, tech... Read More →
Friday August 23, 2024 14:10 - 14:45 HKT
Level 1 | Hung Hom Room 3

15:15 HKT

Detecting and Overcoming GPU Failures During ML Training | 在ML训练过程中检测和克服GPU故障 - Ganeshkumar Ashokavardhanan, Microsoft & Sarah Belghiti, Wayve
Friday August 23, 2024 15:15 - 15:50 HKT
Scaling ML training demands powerful GPU infrastructure, and as model sizes and training scale increase, GPU failures become an expensive risk. From outright hardware faults to subtle performance degradation, undetected GPU problems can sabotage training jobs, inflating costs and slowing development. This talk dives into GPU failure challenges in the context of ML training, particularly distributed training. We will explore the spectrum of GPU issues and why even minor performance drops can cripple large jobs. Learn how observability (leveraging tools like NVIDIA DCGM) enables proactive problem detection through GPU health checks. Understand the principles of fault-tolerant distributed training to mitigate the fallout of GPU failures. Drawing on experience from a cloud provider and an autonomous vehicle company, we will share best practices for efficient identification, remediation, and prevention of GPU failures. We will also explore cutting-edge ideas like CRIU and task pre-emption for GPU workloads.

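As a starting point for the health checks discussed here, the sketch below polls basic GPU signals through NVML (pip install nvidia-ml-py). The talk centers on NVIDIA DCGM, which offers much richer diagnostics; NVML is used only to keep the example small, and the thresholds are illustrative.

    # Poll temperature and uncorrected ECC errors on every visible GPU.
    import pynvml

    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        h = pynvml.nvmlDeviceGetHandleByIndex(i)
        temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
        try:
            ecc = pynvml.nvmlDeviceGetTotalEccErrors(
                h, pynvml.NVML_MEMORY_ERROR_TYPE_UNCORRECTED, pynvml.NVML_VOLATILE_ECC
            )
        except pynvml.NVMLError:
            ecc = 0  # ECC reporting not supported on this device
        status = "OK" if temp < 85 and ecc == 0 else "SUSPECT"  # toy thresholds
        print(f"GPU {i}: {temp}C, uncorrected ECC={ecc} -> {status}")
    pynvml.nvmlShutdown()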
Speakers

Ganeshkumar Ashokavardhanan

Software Engineer, Microsoft
Ganesh is a Software Engineer on the Azure Kubernetes Service team at Microsoft, working on node lifecycle, and is the lead for the GPU workload experience on this kubernetes platform. He collaborates with partners in the ecosystem like NVIDIA to support operator models for machine... Read More →

Sarah Belghiti

ML Platform Engineer, Wayve
Sarah Belghiti is an ML Platform Engineer at Wayve, a leading developer of embodied intelligence for autonomous vehicles. She works on the infrastructure, scheduling and monitoring of ML workloads. With GPUs becoming an increasingly scarce resource, her focus has been on building... Read More →
Friday August 23, 2024 15:15 - 15:50 HKT
Level 1 | Hung Hom Room 3

16:05 HKT

Boosting LLM Development and Training Efficiency: Automated Parallelization with MindSpore | 提升LLM开发和培训效率:MindSpore自动并行化 - Yufeng Lyu, Huawei Technologies Co., Ltd
Friday August 23, 2024 16:05 - 16:40 HKT
With the popularity of LLMs, large-scale pre-training has become an indispensable step in AI research and implementation. However, large-scale distributed parallel training requires developers to consider various factors that affect the efficiency of model development and training, such as partitioning and communication, and then modify the model accordingly. In this presentation, we will demonstrate an automatic parallelization approach that allows developers to focus on algorithm research without intrusive model modifications. Distributed training on a large-scale cluster can be achieved simply by configuring strategies. Developers can also utilize MindSpore's hyperparameter search model to automatically find the best parallelization strategy. The parallel strategy obtained through search can achieve 90%-110% of expert-tuned performance, significantly reducing the time required for model modifications while efficiently accelerating LLM training.

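A configuration sketch of MindSpore automatic parallelism follows. The flags are from MindSpore's documented set_auto_parallel_context API, but exact names and accepted values vary across releases, so treat this as indicative; it must be launched with a distributed runner (e.g., msrun or mpirun) to be meaningful.

    # Enable automatic parallelism; model code below stays single-device.
    import mindspore as ms
    from mindspore.communication import init

    init()  # join the communication group set up by the distributed launcher
    ms.set_auto_parallel_context(
        parallel_mode="auto_parallel",       # let the framework shard the model
        search_mode="sharding_propagation",  # strategy-search algorithm
        device_num=8,
    )
    # Define and train the network as ordinary single-device code from here;
    # the framework searches for and applies a parallelization strategy.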
Speakers

Yufeng Lyu

Senior Engineer, Huawei Technologies Co., Ltd
Lyu Yufeng, a technical architect at MindSpore and maintainer of the MindNLP framework, focuses his research on natural language processing and distributed parallelism for LLM. He possesses extensive experience in the development and implementation of LLM solutions.
Friday August 23, 2024 16:05 - 16:40 HKT
Level 1 | Hung Hom Room 3

16:05 HKT

dora-rs: Dataflow Oriented Robotic AI framework | dora-rs:面向数据流的机器人AI框架 - Philipp Oppermann, Freelancer & Xavier Tao, 1ms.ai
Friday August 23, 2024 16:05 - 16:40 HKT
The dora-rs project (https://github.com/dora-rs) is a dataflow-oriented robotic AI framework that puts ML-powered robots within the reach of anyone. It also leverages the existing legacy robotic ecosystem through its ROS bridge extension.

dora-rs is developed in Rust, surpassing C/C++ in development speed, quality, memory safety, and security. dora-rs offers multiple programming-language APIs, with Python treated as a first-class citizen. Developers can fully utilize the latest ML models from open-source communities to quickly build robot prototypes for research, education, and rapid prototyping. A C/C++/Rust API is also available for high-performance, low-latency production environments.

We will showcase the dora-rs robotic framework for the open-source AI revolution and discuss the design decisions behind it. We will bring physical robots for live demos:

· a robot understanding its surroundings, powered by a vision-language model (VLM)
· a robot programming itself in real time, powered by an LLM
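As a taste of the dataflow model, here is a sketch of a minimal dora-rs node in Python (pip install dora-rs). The input/output IDs are examples that a dataflow YAML file would declare, and the expected payload type (raw bytes vs. Apache Arrow arrays) depends on the dora version, so verify against the project's current examples.

    # A minimal dora node: consume an input event, emit an output.
    from dora import Node

    node = Node()
    for event in node:
        if event["type"] == "INPUT" and event["id"] == "image":
            # A real node would run VLM/LLM inference on the frame here.
            node.send_output("caption", b"a robot looking around")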
Speakers

Philipp Oppermann

Freelancer, Open-source software engineer
I'm a freelance software engineer from Germany. I focus on Rust, open-source software, operating systems, and other system-level software. I currently work on the dora-rs project, a modern robotic framework. I'm also part of the `rust-osdev` organization on GitHub, which maintains... Read More →

Xavier Tao

Founder, 1ms.ai
Xavier Tao is a French software engineer developing practical solutions for ML/AI users and engineers through open-source projects. One such project is dora-rs, which aims to make building AI applications fast and easy.
Friday August 23, 2024 16:05 - 16:40 HKT
Level 1 | Hung Hom Room 7
 
