

[Figure: abstract visualization of glowing fiber-optic strands carrying data, with luminous light particles representing digital information flow]

The Boundary of Optical Computing and Optical Interconnect: What Scales First, What’s Still Concept

AI infrastructure is stretching a familiar limit: data movement is becoming as expensive as computation. As model sizes grow and GPU/accelerator clusters sprawl across racks, the industry is starting to obsess over the “power per bit” of interconnects. That obsession fuels a wave of claims about photonics: optical I/O, co-packaged optics (CPO), optical switching, and even photonic compute. But commercialization paths differ drastically by layer. Some pieces are already crossing into volume deployments; others remain research-grade demonstrations or tightly scoped pilots.

This is why the topic matters for WhyChips: AI scale is fueling the industry’s imagination about lower interconnect power, but the routes to product are uneven. A “truth vs. hype” decomposition is exactly what readers need: procurement teams, infrastructure architects, and product planners must decide what to bet on in 2026–2028 versus what to monitor as long-horizon R&D.

This article maps the boundary. It focuses on four tags—silicon photonics, optical interconnect, quantum control electronics, and in-/near-memory computing—because they share one reality: the bottleneck is not a single transistor; it is system-level scaling of energy, packaging, and manufacturability.

What Readers Want from This Analysis

If you publish in Photonics & Emerging Compute, your readers typically have three questions:

  1. What part of “optical” is actually shipping or close to shipping?
  2. What part is technically credible but still blocked by integration, reliability, or economics?
  3. What part is mostly a conceptual promise—useful for research agendas, but not for near-term roadmaps?

We will answer these questions layer by layer.

A Simple Stack Model: Where Photonics Enters the System

To avoid mixing categories, treat the ecosystem as a stack:

  • Links and modules (pluggables): optics outside the package
  • Near-packaged optics / CPO / optical I/O: optics at or inside the package boundary
  • Fabrics: optical switching and topology changes
  • Compute: photonic accelerators and “optical computing” claims

Different layers have different maturity drivers. Links are governed by supply chain and standards. Packaging optics is governed by thermal, reliability, and assembly yield. Fabrics are governed by architecture and operational tooling. Optical compute is governed by algorithm fit and end-to-end system accuracy—not just raw multiply speed.
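For quick reference, the same stack can be written down as a small mapping. This is a minimal sketch using the article’s own shorthand labels, not an industry-standard taxonomy:

```python
# Each layer's maturity is gated by a different constraint.
# Labels are this article's shorthand, not an industry taxonomy.
STACK = {
    "links / pluggables":  "supply chain and standards",
    "near-packaged / CPO": "thermal, reliability, assembly yield",
    "fabrics":             "architecture and operational tooling",
    "photonic compute":    "algorithm fit and end-to-end accuracy",
}

for layer, gate in STACK.items():
    print(f"{layer:20s} -> gated by {gate}")
```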

What Can Scale First (2026–2028): Optical Interconnect Moves Closer to the Chip

1) Silicon Photonics for Data Center Links: Mature Manufacturing, Expanding Role

Silicon photonics is no longer a novelty. The industry has learned how to build photonic integrated circuits (PICs) with CMOS-adjacent manufacturing flows and then pair them with electronics (drivers, TIAs, DSPs). What is changing now is not whether silicon photonics works; it is where it sits in the system.

As electrical SerDes pushes to higher speeds, the system-level penalty grows: equalization, DSP, thermal load, and limited reach at acceptable bit error rates. The result is a gradual migration from “optics as a module” toward “optics as an I/O architecture.”

The “scale-first” point is this: even if the deepest co-packaged vision takes time, the value of silicon photonics in data center I/O is already bankable. Readers should think of it as a continuum: pluggables → near-packaged optics → true CPO.

2) Co-Packaged Optics (CPO): Early Commercialization, Hard Constraints Are Known

CPO is a pragmatic response to a power-and-density ceiling in copper-based interconnects. TrendForce describes the motivation clearly: copper is struggling with both density and energy efficiency at rising data rates. It notes that, at these higher speeds, traditional copper cabling can consume over 10 pJ/bit and drive substantial system power, accelerating the shift toward optics.[1]

TrendForce further claims that micro-LED-based CPO architectures could reduce overall power consumption to about 5% of copper solutions, and illustrates an example where micro-LED CPO could cut an optical communication product’s power substantially (discussing pJ/bit and system-level implications).[2]
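To see how pJ/bit translates into rack-level watts, a back-of-envelope conversion helps. In the sketch below, the 10 pJ/bit and ~5% figures are the source-quoted TrendForce numbers above; the 100 Tb/s aggregate bandwidth is an assumed illustrative value, not a measured system spec:

```python
def interconnect_power_watts(pj_per_bit: float, tbps: float) -> float:
    """Convert an energy-per-bit figure into sustained power.

    1 pJ/bit * 1 Tb/s = 1e-12 J/bit * 1e12 bit/s = 1 W.
    """
    return pj_per_bit * tbps

# 10 pJ/bit is the copper figure quoted by TrendForce [1].
# 100 Tb/s of aggregate I/O is an ASSUMED illustrative number.
copper_w = interconnect_power_watts(pj_per_bit=10.0, tbps=100.0)

# TrendForce's micro-LED CPO claim: roughly 5% of the copper power [2].
cpo_w = copper_w * 0.05

print(f"copper @ 10 pJ/bit, 100 Tb/s: {copper_w:.0f} W")  # 1000 W
print(f"micro-LED CPO claim (~5%):    {cpo_w:.0f} W")     # 50 W
```

The exact wattage is not the point; the point is that energy per bit multiplies directly into sustained power at AI-scale bandwidths, which is what moves the package boundary.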

Two important editorial notes for WhyChips:

  • Treat these numbers as source-quoted claims, not universal constants. Your job is to contextualize: “TrendForce estimates…” rather than “CPO will…”
  • The key story is not the exact multiplier; the key story is why the package boundary is moving: power per bit and bandwidth density.

In terms of maturity, CPO is not “concept-stage,” but it is not “fully commoditized” either. It is in the messy phase where the industry knows the obstacles:

  • Laser integration strategy (external lasers vs. on-package)
  • Thermal stability (PIC behavior under package temperatures)
  • Manufacturability and test (how you test optics at scale, at acceptable cost)
  • Serviceability (replaceable components vs. sealed packages)

CPO is scaling first where it has the clearest ROI: high-bandwidth, short-reach links inside AI systems where copper power is painful.

3) Optical I/O Chiplets and Standards: The Ecosystem Is Trying to De-risk Integration

A major signal of commercialization is when companies push optics into a chiplet form factor and align with standardized electrical interfaces.

Ayar Labs announced what it describes as the industry’s first UCIe optical interconnect chiplet, stating its TeraPHY optical I/O chiplet can achieve 8 Tbps bandwidth and is powered by a 16-wavelength light source (SuperNova), with the goal of easing integration into customer chip designs via a UCIe electrical interface.[3]
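As a quick sanity check on how such aggregate numbers decompose, the toy arithmetic below uses only the announced 8 Tbps and 16 wavelengths; the port count is a hypothetical assumption for illustration, not Ayar Labs’ actual architecture:

```python
# Only the 8 Tbps aggregate and 16-wavelength light source come from
# the Ayar Labs announcement [3]; the port count is a HYPOTHETICAL
# decomposition for arithmetic only.
aggregate_tbps = 8.0
wavelengths = 16
ports = 8  # assumed

gbps_per_port = aggregate_tbps * 1000 / ports
gbps_per_wavelength = gbps_per_port / wavelengths

print(f"{gbps_per_port:.0f} Gb/s per port (assumed {ports} ports)")
print(f"{gbps_per_wavelength:.1f} Gb/s per wavelength per port")
```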

Again, the correct editorial framing is: this does not prove that all optical chiplets are ready for mass adoption. It does show that the ecosystem is converging on a pattern that tends to scale:

  • Separate photonics and electronics into modular components
  • Use packaging as the integration plane
  • Use standard interfaces to reduce “one-off science project” risk

If you want a crisp WhyChips takeaway: the first scalable optical products will be those that look like regular semiconductor products in procurement and integration—standard form factors, predictable test flows, interoperable interfaces, and clear RAS (reliability, availability, serviceability) stories.

What Is Likely to Scale Next (But Not Everywhere): Optical Fabrics, Optical Switching, and “Where the Switch Lives”

The next boundary is not just “copper vs. fiber.” It is where switching happens.

In the data center, switching is an operationally mature art: link monitoring, congestion control, telemetry, failure domains. Optical switching and optical circuit switching can promise lower latency or lower power under certain traffic models, but they introduce new operational complexity:

  • Circuit provisioning vs. packet switching assumptions
  • Failure detection and reroute times
  • Interactions with congestion control and collective communication patterns

The likely path is gradual: optical switching appears first in narrowly defined roles (e.g., reconfigurable high-bandwidth paths) rather than replacing the entire switching fabric overnight.
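A crude amortization model shows why long-lived, high-bandwidth flows are the natural first fit. The timing values below are assumptions for illustration, not measurements of any switch:

```python
def circuit_efficiency(reconfig_s: float, flow_s: float) -> float:
    """Fraction of time the optical path carries traffic, assuming the
    link is idle while the circuit is being (re)configured."""
    return flow_s / (reconfig_s + flow_s)

# ASSUMED timings for illustration only.
reconfig = 10e-3  # 10 ms optical switch reconfiguration
for flow in (1e-3, 100e-3, 10.0):  # 1 ms, 100 ms, 10 s flows
    eff = circuit_efficiency(reconfig, flow)
    print(f"flow {flow * 1e3:8.0f} ms -> path utilization {eff:5.1%}")
```

Under these assumed numbers, short flows waste most of the path on reconfiguration, while long-lived flows amortize it away; that asymmetry is why reconfigurable paths come before full fabric replacement.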

In editorial terms: this layer is pilot-to-early-deployment, not pure concept. But it will be uneven by operator maturity and workload profile.

What’s Still Mostly Concept (or Narrow Research Fit): Optical Computing Claims

The phrase “optical computing” is overloaded. It can mean:

  • Photonics used for communication (interconnect)
  • Photonics used for analog linear algebra (matrix operations)
  • Photonics used as a general-purpose compute substrate (rare)

The strongest evidence for technical credibility in optical compute is in carefully scoped accelerators.

A Nature paper reports an integrated large-scale photonic accelerator with more than 16,000 photonic components, designed to deliver matrix multiply–accumulate functions at speeds up to 1 GHz with very low per-cycle latency, using a co-integrated electronics chip and a 2.5D hybrid packaging approach.[4]

This is real progress. But it is not the same as “photonic GPUs are coming.” The boundary conditions matter:

  • Many photonic compute approaches are naturally analog, which raises calibration and accuracy questions (see the sketch after this list).
  • They can be excellent at specific linear operations, but system value depends on data movement, quantization, and software stack.
  • Packaging and yield can be as hard as the photonic circuit itself.
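To make the calibration and accuracy point concrete, here is a minimal simulation of an analog matrix-vector multiply with additive noise. It is a generic stand-in for any analog substrate, photonic or otherwise, not a model of the Nature device [4]; the noise levels are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(W, x, noise_std=0.01):
    """Ideal y = W @ x, perturbed by additive Gaussian noise as a crude
    stand-in for analog non-idealities (shot noise, drift, crosstalk)."""
    y = W @ x
    return y + rng.normal(0.0, noise_std * np.abs(y).mean(), size=y.shape)

n = 256
W = rng.standard_normal((n, n)) / np.sqrt(n)
x = rng.standard_normal(n)

exact = W @ x
for sigma in (0.001, 0.01, 0.1):  # assumed relative noise levels
    approx = analog_matvec(W, x, noise_std=sigma)
    rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"noise {sigma:5.3f} -> relative error {rel_err:.3%}")
```

Running this shows output error tracking the injected noise level; in a real system, drift and temperature push that level around over time, which is exactly why calibration loops and end-to-end accuracy budgets matter more than raw multiply speed.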

So the correct WhyChips position is:

  • Optical compute is credible research with real prototypes.
  • Broad general-purpose replacement narratives are concept-stage.
  • The commercialization path is likely via specialized accelerators and hybrid systems, not a clean swap of electronic compute.

Quantum Control Electronics: A “Quiet” Scaling Problem That Looks Like a Semiconductor Roadmap

Quantum computing is often discussed as a physics race, but the scaling barrier increasingly resembles a classical semiconductor systems problem: control, wiring, power, and packaging.

One of the clearest “commercialization-near” signals is when leading quantum efforts show integrated control electronics moving toward cryogenic operation and higher integration.

IBM Research describes a cryo-CMOS control system for large-scale superconducting qubit quantum computing, presenting what it calls a large-scale demonstration of cryogenic CMOS control in a hybrid architecture that combines cryogenic flux control ASICs with room-temperature RF electronics, and notes deployment on IBM’s 156-qubit Heron R2 processor.[5]

Why does this matter for the “boundary” theme?

  • Quantum control electronics is not a consumer product today, but its scaling path is becoming engineering-driven.
  • It will scale first in the layers that can be packaged, replicated, and power-budgeted in a predictable way.
  • It remains concept-stage in the sense of “mass market,” but it is not concept-stage in the sense of “only theoretical.”

For WhyChips readers, the honest framing is: quantum computing’s near-term boundary is not “algorithm advantage.” It is “systems integration.” Cryo-CMOS control is one of the practical bridges.
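A toy wiring count illustrates why integrated control changes the scaling picture. The lines-per-qubit figures below are assumptions for illustration; only the 156-qubit count comes from the IBM description [5]:

```python
def control_lines(qubits: int, lines_per_qubit: float) -> int:
    """Naive cabling model: total control lines scale linearly with qubits."""
    return int(qubits * lines_per_qubit)

# ASSUMED: a few discrete lines per qubit with room-temperature control
# vs. a much lower effective figure once control is multiplexed in
# cryo-CMOS. Both values are illustrative, not IBM's numbers.
for qubits in (156, 1_000, 10_000):  # 156 matches Heron R2 [5]
    discrete = control_lines(qubits, lines_per_qubit=3.0)      # assumed
    multiplexed = control_lines(qubits, lines_per_qubit=0.3)   # assumed
    print(f"{qubits:6d} qubits: ~{discrete:6d} lines discrete, "
          f"~{multiplexed:5d} with multiplexed cryo control")
```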

In-/Near-Memory Computing: Where Commercial Reality Is Narrower Than the Hype

“In-memory computing” and “compute-in-memory” (CIM) are another category where the press headline can outrun commercialization.

What can scale earlier:

  • Embedded non-volatile memories (such as MRAM and ReRAM) in certain nodes and applications
  • Near-memory acceleration ideas that reduce data movement in specific pipelines

What remains hard:

  • High-precision, large-scale CIM for mainstream training workloads
  • Tooling and programming models that make CIM broadly usable
  • Device-level variation, endurance, and yield issues for analog crossbars

If you apply the same boundary logic as photonics, CIM scales first where it can hide inside a product as a subsystem: very specific accelerators, edge inference, or niche dataflow engines. The “concept-stage” risk comes when vendors suggest it is ready to replace conventional memory hierarchies broadly.
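Since the CIM/near-memory case is fundamentally a data-movement energy argument, a rough sketch helps. The pJ/byte figures below are assumed order-of-magnitude values of the kind used in textbook discussions; they vary widely by node and design and are not measurements of any product:

```python
def movement_energy_j(bytes_moved: float, pj_per_byte: float) -> float:
    """Energy spent just moving data, ignoring compute itself."""
    return bytes_moved * pj_per_byte * 1e-12

# ASSUMED order-of-magnitude costs per memory tier.
COSTS_PJ_PER_BYTE = {
    "off-chip DRAM": 100.0,
    "on-chip SRAM":   10.0,
    "near-memory":     1.0,
}

gb_moved = 1e9  # move 1 GB of weights/activations
for tier, cost in COSTS_PJ_PER_BYTE.items():
    print(f"{tier:14s}: {movement_energy_j(gb_moved, cost):.3f} J per GB")
```

The tier gap, not any single number, is the pitch: a subsystem that keeps traffic in the cheapest tier can win even if its compute is unremarkable.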

The Real Boundary Lines (A WhyChips “Truth vs. Hype” Checklist)

When you evaluate a photonics or emerging compute claim, use this checklist:

A) Does it fit a standard procurement story?

If it can be bought, integrated, tested, and serviced like a normal semiconductor product, it scales faster.

B) Does packaging dominate the problem?

Many photonics projects fail not because the device physics is wrong, but because assembly yield, thermal stability, and test cost are too high.

C) Is there a credible “power-per-bit” or “power-per-operation” story at system level?

Marketing numbers must map to rack power, not just lab benches.

D) Is the deployment environment controlled?

Technologies often scale first in hyperscale environments, where operators can tolerate new operational tooling and new failure modes.

E) Is the software stack ready?

Optical compute and CIM can be limited by mapping, calibration, and developer ergonomics.

What to Watch in 2026–2028

For daily or weekly coverage, these are the signals that matter:

  • Optical I/O moving from demos to platform announcements (standards, packaging partners, volume test flows)
  • CPO deployments tied to real AI system roadmaps, not only conference talks
  • Reliability metrics becoming part of product specs, not only lab narratives
  • Evidence of serviceability strategies (how failures are handled in the field)
  • For quantum: integration of control electronics, wiring density improvements, and realistic power budgets
  • For CIM: products where CIM is one component of a system, not the whole pitch

FAQ (Voice Search + AI Search Friendly)

What is the difference between silicon photonics and co-packaged optics?

Silicon photonics is a manufacturing and integration approach for photonic components on silicon. Co-packaged optics is a system architecture choice: placing optical engines at or inside the package boundary to reduce electrical reach, power per bit, and density constraints.

What will commercialize first: optical interconnect or optical computing?

Optical interconnect is commercializing first because the value proposition is clear (power per bit, bandwidth density) and it plugs into an existing data center procurement model. Optical computing is advancing rapidly in research and niche accelerators, but broad general-purpose adoption faces bigger software, accuracy, and packaging challenges.

Why is packaging such a big deal for photonics?

Because the optical device can work in a lab, yet fail economically at scale due to thermal stability, alignment tolerance, test time, yield loss, and serviceability.

How do quantum control electronics relate to the photonics “boundary” story?

Both are examples of scaling bottlenecks moving from device physics to system engineering. Cryo-CMOS control electronics attempts to reduce wiring complexity and improve scalability—similar to how optical I/O attempts to reduce electrical I/O power and reach issues.

Conclusion

The boundary is clear: optical interconnect is moving from modules toward the package, and that path is the most scalable near-term “optical” story. Silicon photonics and early CPO/optical I/O efforts have the strongest commercialization vectors because they solve a system-level pain that AI expansion makes unavoidable.

In contrast, optical computing is best treated as “credible but narrow” today: impressive research results and early accelerators, but not a generalized replacement for electronic compute. Quantum control electronics is quietly becoming an engineering roadmap problem with tangible demonstrations, while in-/near-memory compute remains a patchwork of real subsystems and overbroad claims.

For WhyChips, the editorial advantage is precision: separate layers, attribute numbers, and describe the path to scale in operational terms. That is how you turn “AI imagination” into a decision-useful map.
