By Nicola Bramante, Senior Principal Engineer, Connectivity Marketing, Marvell
The exponential growth in AI workloads is driving new connectivity requirements for data rate, bandwidth and distance, especially in scale-up applications. With direct attach copper (DAC) cables reaching their limits in bandwidth and reach, a new class of cables, active copper cables (ACCs), is coming to market for short-reach links within a data center rack and between racks. Designed for connections of 2 to 2.5 meters, ACCs can transmit signals farther than traditional passive DAC cables in the 200G/lane fabrics hyperscalers will soon deploy in their rack infrastructures.
At the same time, a 1.6T ACC consumes a relatively minuscule 2.5 watts of power and can be built with fewer and less sophisticated components than longer-reach active electrical cables (AECs) or active optical cables (AOCs). This combination gives ACCs an optimal mix of bandwidth, power and cost for server-to-server or server-to-switch connections within the same rack.
Marvell announced its first ACC linear equalizers for producing ACC cables last month.
Inside the Cable
ACCs effectively integrate technology originally developed for the optical realm into copper cables. The idea is to use optical technologies to extend bandwidth, distance and performance while taking advantage of copper’s economics and reliability. Where ACCs differ from passive cables is in the components added to them and in how they leverage the capabilities of the switch or other device to which they are connected.
ACCs include an equalizer that boosts signals received from the opposite end of the connection. As analog devices, ACC equalizers are relatively inexpensive compared to digital alternatives, consume minimal power and add very little latency.
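To see what this kind of linear boost does, consider a minimal frequency-domain sketch. It models the copper channel as a first-order low-pass filter and the ACC equalizer as an analog zero/pole peaking stage that restores high-frequency content. All numeric values (channel pole, equalizer zero and pole, Nyquist frequency) are illustrative assumptions, not Marvell device specifications.

```python
import numpy as np

def mag_db(h):
    """Magnitude of a complex response in decibels."""
    return 20 * np.log10(np.abs(h))

# Hypothetical first-order models (illustrative values only):
# the copper channel rolls off like a low-pass filter, and a linear
# equalizer adds high-frequency peaking to undo that loss.
f = np.linspace(1e9, 53e9, 500)     # sweep up to ~Nyquist for 106 GBd PAM4
f_ch = 15e9                         # assumed channel pole (cable bandwidth)
f_z, f_p = 15e9, 60e9               # assumed equalizer zero and pole

channel = 1 / (1 + 1j * f / f_ch)                    # passive cable loss
equalizer = (1 + 1j * f / f_z) / (1 + 1j * f / f_p)  # analog peaking boost
combined = channel * equalizer                       # equalized link

loss_raw = mag_db(channel[-1])   # loss at Nyquist, unequalized
loss_eq = mag_db(combined[-1])   # loss at Nyquist, equalized
print(f"Nyquist loss: {loss_raw:.1f} dB raw, {loss_eq:.1f} dB equalized")
```

With these assumed values, roughly 11 dB of channel loss at Nyquist shrinks to a few dB after equalization, illustrating why a simple analog stage can stretch copper reach without the power and latency cost of a digital retimer.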
By Sandeep Bharathi, president, Data Center Group, Marvell
This blog was originally posted at .
Semiconductors have transformed virtually every aspect of our lives. Now, the semiconductor industry is on the verge of a profound transformation itself.
Customized silicon—chips uniquely tailored to meet the performance and power requirements of an individual customer for a particular use case—will increasingly become pervasive as data center operators and AI developers seek to harness the power of AI. Expanded educational opportunities, better decision making and more all become possible if we get the computational infrastructure right.
The turn to custom, in fact, is already underway. The number of GPUs—the merchant chips employed for AI training and inference—produced today is nearly double the number of custom XPUs built for the same tasks. By 2028, custom accelerators will likely pass GPUs in units shipped, with the gap expected to grow.1

By Khurram Malik, Senior Director of Marketing, Custom Cloud Solutions, Marvell
Near-memory compute technologies have always been compelling. They can offload tasks from CPUs to boost utilization and revenue opportunities for cloud providers. They can reduce data movement, one of the primary contributors to power consumption,1 while also increasing memory bandwidth for better performance.
They have also only been deployed sporadically: thermal problems, a lack of standards, cost and other issues have kept these technologies from delivering the goldilocks combination of features needed to jumpstart commercial adoption.2
This picture is now changing with CXL compute accelerators, which leverage open standards, familiar technologies and a broad ecosystem. And, in a demonstration at OCP 2025, Samsung Electronics, software-defined composable solution provider Liqid, and Marvell showed how CXL accelerators can deliver outsized gains in performance.
The demonstration featured the Liqid EX5410C, a CXL memory pooling and sharing appliance capable of scaling up to 20TB of additional memory. Five of the 4RU appliances can then be integrated into a pod for a whopping 100TB of memory and 5.1Tbps of additional memory bandwidth. The CXL fabric is managed by Liqid’s Matrix software, which enables real-time, precise memory deployment based on workload requirements.
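The pod-level numbers are straight aggregation (5 × 20TB = 100TB), and the workload-driven carving can be pictured with a minimal pool-allocator sketch. The class and method names below are hypothetical illustrations of the pooling concept, not Liqid's Matrix API.

```python
# Hypothetical model of a CXL memory pool; names are illustrative,
# not Liqid's Matrix API.
APPLIANCE_CAPACITY_TB = 20   # per appliance, per the demo figures
APPLIANCES_PER_POD = 5

class MemoryPool:
    """Tracks memory carved out of a shared CXL pool per workload."""

    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.allocations = {}            # workload name -> TB granted

    def allocate(self, workload, tb):
        """Grant memory to a workload if the pool has headroom."""
        if tb > self.free_tb():
            raise MemoryError(f"pool exhausted: {workload} wants {tb} TB")
        self.allocations[workload] = self.allocations.get(workload, 0) + tb

    def release(self, workload):
        """Return a workload's memory to the shared pool."""
        self.allocations.pop(workload, None)

    def free_tb(self):
        return self.capacity_tb - sum(self.allocations.values())

pod = MemoryPool(APPLIANCE_CAPACITY_TB * APPLIANCES_PER_POD)  # 100 TB pod
pod.allocate("vector-db", 30)
pod.allocate("kv-cache", 45)
print(pod.free_tb())   # 25
```

The point of the sketch is the operational model: memory is granted and reclaimed at runtime from a fabric-attached pool rather than being fixed per server at provisioning time.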

By Michael Kanellos, Head of Influencer Relations, Marvell
Chiplets—devices made up of smaller, specialized cores linked together to function like a unified device—have dramatically transformed semiconductors over the past decade. Here’s a quick overview of their history and where the design concept goes next.
1. Initially, they went by the name RAMP
In 2006, Dave Patterson, the storied professor of computer science at UC Berkeley, and his lab published a paper describing how semiconductors would shift from monolithic silicon to devices in which different dies are connected and combined into a package that, to the rest of the system, acts like a single device.1
While the paper also coined the term chiplet, the Berkeley team preferred RAMP (Research Accelerator for Multiple Processors).
2. In Silicon Valley fashion, the early R&D took place in a garage
Marvell co-founder and former CEO Sehat Sutardja started experimenting with combining different chips into a unified package in the 2010s in his garage.2 In 2015, he unveiled the MoChi (Modular Chip) concept, often credited as the first commercial platform for chiplets.3
The first products came out a few months later in October.
“The introduction of Marvell’s AP806 MoChi module is the first step in creating a new process that can change the way that the industry designs chips,” wrote Linley Gwennap in Microprocessor Report.4

An early MoChi concept combining CPUs, a GPU and a FLC (final level cache) controller for distributing data across flash and DRAM to optimize power. Credit: Microprocessor Forum.
By Kirt Zimmer, Head of Social Media Marketing, Marvell
The OFC 2025 event in San Francisco was so vast that it would be easy to miss a few stellar demos from your favorite optical networking companies. That’s why we took the time to create videos featuring the latest Marvell technology.
Put them all together and you have a wonderful film festival for technophiles. Enjoy!
Annie Liao — Product Management Director, Connectivity Marketing at Marvell — showcased how PCIe Gen6 signals can be converted to optical, extending trace length beyond traditional electrical limitations. The result? A 10-meter cable reach spanning multiple racks. Whether connecting GPUs, accelerators or storage across racks, this kind of extended PCIe connectivity is critical for building the infrastructure that powers advanced AI workloads.