During the CXL Forum at OCP Global Summit, XConn CEO Gerry Fan provided an overview of the Apollo CXL switch and how it can be used in AI/HPC environments.
4. CXL Enables DRAM Usage Optimization

[Slide diagram: on the left, multiple heterogeneous servers (Host 1, Host 2, Host 3), each with its own dedicated DRAM; on the right, the same hosts connected through a CXL switch to a JBOM for CXL-enabled memory pooling/sharing.]
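To make the slide's comparison concrete, here is a small back-of-the-envelope sketch of why pooling can reduce total DRAM provisioning. The capacities and peak counts below are hypothetical assumptions, not figures from the presentation.

```python
# Illustrative only: dedicated DRAM must cover every server's worst case,
# while a shared pool only needs headroom for the peaks that happen at once.

servers = 8
peak_per_server_gib = 512      # assumed worst-case demand for any one server
typical_per_server_gib = 192   # assumed demand most servers see most of the time
concurrent_peaks = 2           # assumed number of servers peaking simultaneously

# Dedicated DRAM: every server is sized for its own worst case.
dedicated_total = servers * peak_per_server_gib

# Pooled DRAM behind a CXL switch: size for typical use plus headroom for
# the few simultaneous peaks, since any host can borrow from the pool.
pooled_total = (servers * typical_per_server_gib
                + concurrent_peaks * (peak_per_server_gib - typical_per_server_gib))

print(f"dedicated: {dedicated_total} GiB, pooled: {pooled_total} GiB")
# dedicated: 4096 GiB, pooled: 2176 GiB -> roughly half the DRAM for the same peak coverage
```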
5. Scalable Memory Pooling/Sharing Enabled by CXL Switch

[Slide diagram: two configurations, each managed by a Management Host running a CXL Fabric Manager. Left: memory pooling/sharing with CXL 1.1 hosts (H1–H4) attached through a CXL 2.0 switch to single logical devices (D1–D4). Right: memory pooling with CXL 2.0 hosts (H1–H4) and multiple logical devices (D1–D4) behind a CXL 2.0 switch.]
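As a rough illustration of the fabric manager role shown on this slide, the sketch below models a pool of logical devices being bound to and released from hosts. This is not XConn's software or the actual CXL Fabric Manager API; every class and method name is a hypothetical stand-in for the decoder/binding operations a real manager performs through the switch.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LogicalDevice:
    name: str
    capacity_gib: int
    bound_to: Optional[str] = None   # host that currently owns this logical device

@dataclass
class FabricManager:
    devices: List[LogicalDevice]

    def bind(self, host: str, need_gib: int) -> List[LogicalDevice]:
        """Assign free logical devices to a host until the requested capacity is met."""
        granted, remaining = [], need_gib
        for dev in self.devices:
            if remaining <= 0:
                break
            if dev.bound_to is None:
                dev.bound_to = host           # in hardware: program switch/device decoders
                granted.append(dev)
                remaining -= dev.capacity_gib
        if remaining > 0:                     # pool exhausted; roll back the partial grant
            for dev in granted:
                dev.bound_to = None
            raise RuntimeError("not enough free capacity in the pool")
        return granted

    def release(self, host: str) -> None:
        """Return a host's capacity to the pool so other hosts can claim it."""
        for dev in self.devices:
            if dev.bound_to == host:
                dev.bound_to = None

# Example: four 256 GiB logical devices (D1-D4) shared by hosts H1-H4.
fm = FabricManager([LogicalDevice(f"D{i}", 256) for i in range(1, 5)])
print([d.name for d in fm.bind("H1", 512)])   # ['D1', 'D2']
fm.release("H1")                              # capacity goes back to the pool
```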
8. How to Accelerate CXL Adoption

• Industry consensus is that CXL memory pooling and sharing will be the early application that ramps CXL usage in AI/HPC.
• XConn is accelerating adoption of this application with its CXL 2.0/CXL 3.1 switches.
• XConn is helping the CXL ecosystem mature earlier by working with CPU, memory, and software partners and demonstrating interoperability between ecosystem players.
9. XConn Product Timeline (2023–2026)

A CXL switch for memory pooling and a PCIe switch for connectivity, targeting JBOG, JBOA, data center, HPC, and AI/DL/ML.

• Apollo I: XC50256 (CXL 2.0) and XC51256 (PCIe 5.0); customer samples (CS) March 2023, mass production (MP) 2Q24.
• Apollo II: XC60256 (CXL 3.1) and XC61256 (PCIe 6.1); customer samples (CS) 1Q25, mass production (MP) 2Q26.
For product ordering, please visit www.xconn-tech.com