Archive for the 'Data Center' Category

  • October 07, 2025

    Faster, Farther and Going Optical: How PCIe Is Accelerating the AI Revolution

    By Annie Liao, Product Management Director, ODSP Marketing, Marvell

    For over 20 years, PCIe, or Peripheral Component Interconnect Express, has been the dominant standard for connecting processors, NICs, drives and other components within servers, thanks to the protocol's low latency and high bandwidth as well as the growing PCIe expertise across the technology ecosystem. It will also play a leading role in defining the next generation of computing systems for AI, through continued performance increases and the pairing of PCIe with optics.

    Here’s why:

    PCIe Transitions Are Accelerating

    Seven years passed between the debut of PCIe Gen 3 (8 gigatransfers per second, or GT/s) in 2010 and the release of PCIe Gen 4 (16 GT/s) in 2017.1 Commercial adoption, meanwhile, took closer to a full decade.2

    More XPUs require more interconnects

    Toward a terabit (per second): PCIe standards are being developed and adopted at a faster rate to keep up with the chip-to-chip interconnect speeds needed by system designers.
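    To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python of usable per-direction bandwidth for an x16 link across recent PCIe generations. The raw transfer rates are the widely published PCI-SIG figures cited above; the encoding-efficiency values are standard assumptions, and Gen 6 FLIT/FEC overhead is ignored for simplicity, so treat the output as approximate rather than as specification numbers.

    # Approximate usable one-direction bandwidth of a PCIe x16 link by generation.
    # Raw rates (GT/s) are the published PCI-SIG figures; encoding efficiency is
    # the standard 128b/130b line coding for Gen 3-5. Gen 6 uses PAM4 with FLITs,
    # whose FEC/CRC overhead is ignored here for simplicity.
    GENERATIONS = {
        3: (8.0, 128 / 130),
        4: (16.0, 128 / 130),
        5: (32.0, 128 / 130),
        6: (64.0, 1.0),        # FLIT overhead ignored in this rough estimate
    }

    def x16_bandwidth_gb_s(gen: int, lanes: int = 16) -> float:
        """Approximate one-direction link bandwidth in GB/s."""
        rate_gt_s, efficiency = GENERATIONS[gen]
        return rate_gt_s * efficiency * lanes / 8  # 8 bits per byte

    for gen in sorted(GENERATIONS):
        print(f"PCIe Gen {gen} x16: ~{x16_bandwidth_gb_s(gen):.0f} GB/s per direction")

    At x16, Gen 6 works out to roughly 128 GB/s in each direction, on the order of a terabit per second, which is the trajectory the "toward a terabit" framing above refers to.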

  • September 22, 2025

    Marvell Wins Leading EDGE Award for Ara 1.6T Optical DSP

    By Vienna Alexander, Marketing Content Professional, Marvell

    Marvell Wins Leading EDGE Award


    Marvell is a Leading EDGE Award winner for its Ara product, a 3nm 1.6 Tbps PAM4 optical DSP platform that enables the industry's lowest-power 1.6T optical modules. The engineering community voted to recognize the product as a leader in design innovation this year.

    The EDGE awards celebrate outstanding innovations in product design for the engineering industry that have contributed to the advancement of technology. This award is presented by the Engineering Design & Automation Group, a subset of brands at Endeavor Business Media.

    Ara is the industry’s first 3nm 1.6T PAM4 optical DSP platform. Marvell introduced it to meet growing interconnect bandwidth demands for AI and next-gen cloud data center scale-out networks.

  • August 13, 2025

    Chiplets Turn 10: Here are Ten Things to Know

    By Michael Kanellos, Head of Influencer Relations, Marvell

    Chiplets—devices made up of smaller, specialized cores linked together to function like a unified device—have dramatically transformed semiconductors over the past decade. Here’s a quick overview of their history and where the design concept goes next.

    1. Initially, they went by the name RAMP

    In 2006, Dave Patterson, the storied professor of computer science at UC Berkeley, and his lab published a paper describing how semiconductors would shift from monolithic silicon to devices in which different dies are connected and combined into a package that, to the rest of the system, acts like a single device.1

    While the paper also coined the term chiplet, the Berkeley team preferred RAMP (Research Accelerator for Multiple Processors).

    2. In Silicon Valley fashion, the early R&D took place in a garage

    Marvell co-founder and former CEO Sehat Sutardja started experimenting with combining different chips into a unified package in his garage in the 2010s.2 In 2015, he unveiled the MoChi (Modular Chip) concept, often credited as the first commercial platform for chiplets.3

    The first products came out a few months later in October.

    “The introduction of Marvell’s AP806 MoChi module is the first step in creating a new process that can change the way that the industry designs chips,” wrote Linley Gwennap in Microprocessor Report.4


    An early MoChi concept combining CPUs, a GPU and an FLC (final level cache) controller for distributing data across flash and DRAM to optimize power. Credit: Microprocessor Forum.

  • August 06, 2025

    Three New Technologies for Raising the Performance Ceiling on Custom Compute

    By Michael Kanellos, Head of Influencer Relations, Marvell

    More customers, more devices, more technologies, and more performance: that, ultimately, is where custom silicon is headed. While Moore’s Law is still alive, customization is fast taking over as the engine driving change, innovation and performance in data infrastructure. A growing universe of users and chip designers is embracing the trend, and if you want to see what’s at the cutting edge of custom, the best chips to study are the compute devices for data centers, i.e., the XPUs, CPUs, and GPUs powering AI clusters and clouds. By 2028, custom computing devices are projected to account for $55 billion in revenue, or 25% of the market.1 Technologies developed for this segment will trickle down into others.

    Here are three of the latest innovations from Marvell:

    Multi-Die Packaging with RDL Interposers

    Achieving performance and power gains by shrinking transistors is getting more difficult and expensive. “There has been a pretty pronounced slowing of Moore’s Law. For every technology generation we don’t get the doubling (of performance) that we used to get,” says Marvell’s Mark Kuemerle, Vice President of Technology, Custom Cloud Solutions. “Unfortunately, data centers don’t care. They need a way to increase performance every generation.”

    Instead of shrinking transistors to get more of them into a finite space, chiplets effectively allow designers to stack cores on top of each other with the packaging serving as the vertical superstructure.

    2.5D packaging, debuted by Marvell in May, increases the effective amount of compute silicon for a given space by 2.8 times.2 At the same time, the RDL interposer wires them in a more efficient manner. In conventional chiplets, a single interposer spans the floor space of the chips it connects as well as any area between them. If two computing cores are on opposite sides of a chiplet package, the interposer will cover the entire space.

    Marvell RDL interposers, by contrast, are form-fitted to individual computing die, with six layers of interconnects managing the connections.

    Marvell Multi-Die Packaging with RDL Interposers

    2.5D and multilayer packaging. With current manufacturing technologies, chips can achieve a maximum area of just over 800 sq. mm. By stacking die, the total number of transistors in a given XY footprint can be multiplied. Within these packages, RDL interposers are the elevator shafts, providing connectivity between and across layers in a space-efficient manner.
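    As a rough illustration of the stacking arithmetic described above, the short Python sketch below estimates how much compute silicon fits in a single reticle-limited XY footprint as die are stacked. The ~800 sq. mm reticle limit comes from the caption; the layer count and the utilization discount (for interposer and packaging overhead) are illustrative assumptions, not Marvell figures.

    # Illustrative estimate of compute silicon gained by stacking die in one
    # XY footprint. The reticle limit reflects the "just over 800 sq. mm"
    # figure above; the layer count and utilization are hypothetical assumptions.
    RETICLE_LIMIT_MM2 = 820.0      # approximate maximum single-die area

    def stacked_silicon_mm2(layers: int, utilization: float = 0.93) -> float:
        """Compute silicon area across stacked layers, discounting a fraction
        of each layer for interposer/packaging overhead."""
        return RETICLE_LIMIT_MM2 * layers * utilization

    monolithic = RETICLE_LIMIT_MM2
    stacked = stacked_silicon_mm2(layers=3)
    print(f"Monolithic die: {monolithic:.0f} sq. mm")
    print(f"3-layer stack:  {stacked:.0f} sq. mm (~{stacked / monolithic:.1f}x)")

    With three layers and roughly 93% of each layer usable, the ratio lands near the 2.8x figure quoted above; the real gain depends on how much of each layer is compute versus interposer and packaging overhead.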

  • July 15, 2025

    Sustainable Computing With CXL: How Marvell Structera X Can Help Eliminate Waste, Expand Capacity, Lower Emissions

    By Khurram Malik, Senior Director of Product Marketing, Marvell

    As AI, cloud computing, and high-performance workloads continue to grow rapidly, data centers are accelerating their infrastructure upgrades. Central to this transformation is the migration to DDR5 memory, designed to meet the increasing demands for bandwidth and speed in servers.

    This shift, however, comes with a significant challenge: millions of fully functional DDR4 memory modules, the dominant memory inside today’s servers, could be retired prematurely. This is not because of a performance failure; DDR4 memory modules can operate for a decade or longer. Instead, it is because the latest generation of server CPUs supports only DDR5 memory. Put another way, when hyperscalers replace their current servers with DDR5-based systems over the coming years, they will potentially be throwing away billions of gigabytes of fully functional DDR4 memory if they can’t find ways to use it.

    The result is a looming e-waste problem and an environmental impact that cannot be ignored. Up to 66 billion kilograms of CO2 emissions (approximately the same amount that would be generated by 168 billion miles of driving1) and thousands of tons of e-waste can be avoided by giving DDR4 a second life. Marvell CXL Structera X presents a powerful solution by extending the life of DDR4 memory, enabling data centers to reuse these existing assets, reduce capital expenditures, and minimize their carbon footprint, all while improving the performance of their infrastructure.
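    As a quick sanity check on the driving equivalence above, the arithmetic below divides the avoided emissions by the quoted mileage; the resulting per-mile factor can be compared against a commonly used estimate of roughly 0.4 kg of CO2 per passenger-vehicle mile (that comparison figure is an assumption, not something stated in the post).

    # Back-of-the-envelope check of the CO2-to-driving equivalence cited above.
    co2_avoided_kg = 66e9     # up to 66 billion kg of CO2 (from the post)
    driving_miles = 168e9     # 168 billion miles of driving (from the post)

    implied_kg_per_mile = co2_avoided_kg / driving_miles
    print(f"Implied emissions factor: {implied_kg_per_mile:.2f} kg CO2 per mile")
    # Prints ~0.39 kg/mile, consistent with typical passenger-vehicle estimates.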

    Marvell CXL Structera X: A Pathway for More Memory

    Released last year, Marvell Structera CXL devices effectively allow cloud operators and system designers to add extra memory, memory bandwidth, and/or computing cores to servers by transforming an open PCIe interface into a memory channel. A first-of-its-kind device, the Structera A memory accelerator provides a path for adding up to 16 server CPU cores, 200 GB/s of memory bandwidth and 4TB of memory for offloading the processing of deep learning recommendation models (DLRM) and other tasks from CPUs.

    Structera X, meanwhile, focuses on maximizing capacity. A single Structera X 2404 can support up to 12 additional DDR4 DIMMs, providing 6TB of memory capacity without compression, or up to 12TB with LZ4 inline compression, in a single one- or two-processor server. It is also the first CXL device compatible with both DDR4 and DDR5. As a result, Structera X becomes a conduit for recycling DDR4. The diagram shows more:

    Marvell CXL Structera X
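    As a minimal sketch of the capacity arithmetic, the snippet below works backward from the figures above (12 DDR4 DIMMs, 6TB uncompressed, up to 12TB with LZ4). The per-DIMM size and the 2:1 compression ratio are inferred from those totals, not published Structera X specifications.

    # Illustrative capacity math for a CXL memory-expansion device such as
    # Structera X 2404. The DIMM count and totals come from the post; per-DIMM
    # size and the 2:1 LZ4 ratio are assumptions inferred from those totals.
    ddr4_dimms = 12
    dimm_size_gb = 512            # implied by 6TB spread across 12 DIMMs
    lz4_compression_ratio = 2.0   # assumed best-case inline compression

    raw_tb = ddr4_dimms * dimm_size_gb / 1024
    compressed_tb = raw_tb * lz4_compression_ratio
    print(f"Uncompressed capacity: {raw_tb:.0f} TB")        # 6 TB
    print(f"With LZ4 compression:  {compressed_tb:.0f} TB") # up to 12 TB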

    Financially, reusing memory is a boon.2 Repurposing terabytes of otherwise-to-be-discarded DDR4 instead of buying new DDR5 means thousands of dollars saved per CXL-enhanced server.
