
Demartek Storage Networking Interface Comparison

Updated 21 November 2014

By the President of Demartek


Interactive PDF Version

Recently, Demartek updated the Storage Networking Interface Comparison and added a new Interactive PDF version. This format lets you view and interact with the information in ways not available with standard PDF readers. To get the most out of this guide, we encourage you to upgrade to the latest version of Adobe Acrobat Reader. If you experience any problems viewing or interacting with this document, please contact us.


Download the Demartek Storage Networking Interface Comparison Interactive PDF

(PDF, 12.6 MB)
(If using Internet Explorer, Firefox, Safari or Opera,
right-click, save the file and open in your PDF reader of choice.)

Because of the number of storage interface types and related technologies used for storage devices, we have compiled this summary document providing basic information for each of the interfaces. The document will be updated periodically and may grow over time. Contact us if you would like to see additional information included.

The interface types listed here are known as “block” interfaces, meaning that they service reads and writes of blocks of data. They simply provide a conduit for blocks of data to be read and written, without regard to file systems, file names or any other knowledge of the data in the blocks. The host requesting the block access provides a starting address and the number of blocks to read or write.
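
As a rough illustration of how little a block interface needs to know, a block read can be modeled as a starting logical block address plus a count. This is a minimal sketch of ours (the class and field names are hypothetical, not taken from any particular driver stack):

```python
from dataclasses import dataclass

@dataclass
class BlockReadRequest:
    """Minimal model of a block-interface read: no file names and no
    file system, just a starting address and a length in blocks."""
    lba: int               # starting logical block address
    num_blocks: int        # number of consecutive blocks to read
    block_size: int = 512  # bytes per block (4096 on many newer devices)

    def byte_range(self):
        start = self.lba * self.block_size
        return start, start + self.num_blocks * self.block_size

# Read 8 blocks starting at LBA 2048 on a 512-byte-block device
req = BlockReadRequest(lba=2048, num_blocks=8)
print(req.byte_range())  # (1048576, 1052672)
```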

We are producing deployment guides for some of the technologies described in this document.


More information

To be notified when we post new deployment guides, evaluation reports and commentaries, sign up for our free monthly newsletter, Demartek Lab Notes. We do not give out, rent or sell our email list.



Acronyms

Network throughput rates are generally measured in bits per second. Storage throughput rates are generally measured in bytes per second.



Storage Networking Interface Comparison Table

Interface | Number of Devices | Maximum Distance (m) | Cable Type | Interface Device | Transfer Rate (MB/sec) | Interface Attributes
FC | 16M | 10 (copper), 10km+ (optical) | Copper, Optical | HBA | 100, 200, 400, 800, 1600 | Dual Port
FCoE | 16M | 10 (copper), very long (optical) | Copper, Optical | CNA, 10GbE NIC | 1150 | Dual Port
IB | 48M | 15 (copper), very long (optical) | Copper, Optical | HCA | 1000, 2000, 4000, 7000 | Full Duplex, Dual Port
iSCSI | Many | Ethernet cable distance | Copper, Optical | NIC, HBA | 100, 1000 |
SAS (passive) | 16K | 10 | Copper | Onboard, HBA | 300, 600, 1200 | Full Duplex, Dual Port
SAS (active) | 16K | 20 | Copper | Onboard, HBA | 300, 600, 1200 | Full Duplex, Dual Port
SAS (active) | 16K | 100 | Optical | Onboard, HBA | 300, 600, 1200 | Full Duplex, Dual Port
SATA | 1 | 1 | Copper | Onboard, HBA | 150, 300, 600 | Half Duplex, Single Port
Thunderbolt | 6 | 4 | Copper | Onboard | 1000, 2000 |
USB | 127 | 5 | Copper, Wireless | Onboard, Adapter card | 0.15, 1.5, 48, 500 | Single Port

PCIe data rates are provided in the PCI Express section below.



Transfer Rate

Transfer rate, sometimes known as transfer speed, is the maximum rate at which data can be transferred across the interface. This is not to be confused with the transfer rate of individual devices that may be connected to this interface. Some interfaces may not be able to transfer data at the maximum possible transfer rate due to processing overhead inherent with that interface. Some interface adapters provide hardware offload to improve performance, manageability and/or reliability of the data transmission across the respective interface. The transfer rates listed are across a single port at half duplex.

Bits vs. Bytes and Encoding Schemes

Transfer rates for storage interfaces and devices are generally listed as MB/sec or MBps (megabytes per second), which is generally calculated as megabits per second (Mbps) divided by 10. Many of these interfaces use “8b/10b” encoding, which maps 8-bit bytes into 10-bit symbols for transmission on the wire, with the extra bits used for command and control purposes. For these interfaces, dividing the raw bit rate by ten (10) to convert to bytes per second is exactly correct. 8b/10b encoding results in a 20 percent overhead, (10-8)/10, on the raw bit rate.

Beginning with 10GbE and 10GbFC (for ISLs), and with the newer speeds emerging in 2010 and beyond, a newer “64b/66b” encoding scheme is being used to improve data transfer efficiency. 64b/66b is the encoding scheme for 16Gb FC and is planned for higher data rates for IB. 64b/66b encoding is not directly compatible with 8b/10b, but the technologies that implement it will be built so that they can work with the older encoding scheme. 16Gb Fibre Channel uses a line rate of 14.025 GBaud, but with the 64b/66b encoding scheme it delivers double the throughput of 8Gb Fibre Channel, which uses a line rate of 8.5 GBaud with the 8b/10b encoding scheme. 64b/66b encoding results in a 3 percent overhead, (66-64)/66, on the raw bit rate.

PCIe versions 1.x and 2.x use 8b/10b encoding. PCIe version 3 uses 128b/130b encoding, resulting in a 1.5 percent overhead on the raw bit rate. Additional PCIe information is provided in the PCI Express section below.

USB 3.1 will use 128b/132b encoding. See Roadmaps section below.
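
To make the encoding arithmetic concrete, here is a small sketch of ours that converts a raw line rate and an encoding scheme into usable throughput. Vendors usually quote rounded nominal figures, so the computed values land near, but not always exactly on, the marketing numbers:

```python
def payload_mb_per_sec(line_rate_gbaud, data_bits, total_bits):
    """Usable throughput in MB/s after encoding overhead:
    line rate * (data bits / total bits) / 8 bits per byte."""
    payload_gbps = line_rate_gbaud * data_bits / total_bits
    return payload_gbps * 1000 / 8  # 1 Gb/s = 1000 Mb/s

print(payload_mb_per_sec(8.5, 8, 10))       # 850.0  -> 8GFC, quoted as 800 MB/s
print(payload_mb_per_sec(14.025, 64, 66))   # 1700.0 -> 16GFC, quoted as 1600 MB/s
print(payload_mb_per_sec(10.3125, 64, 66))  # 1250.0 -> 10GbE, exactly 10 Gb/s of data
```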

Encoding Scheme Table

Encoding | Overhead | Applications
8b/10b | 20% | 1GbE, FC (up to 8Gb), IB (SDR, DDR & QDR), PCIe (1.0 & 2.0), SAS, SATA, USB (up to 3.0)
64b/66b | 3% | 10GbE, 100GbE, FC (10Gb & 16Gb), FCoE, IB (FDR & EDR)
128b/130b | 1.5% | PCIe 3.0, 24Gb SAS (likely)
128b/132b | 3% | USB 3.1 (10 Gbps, see Roadmaps section below)

Fibre Channel Speed Table

Speed | Throughput (MBps) | Line Rate (GBaud) | Encoding | Host Adapter requirements (dual-port cards)
1GFC | 100 | 1.0625 | 8b/10b | PCI-X
2GFC | 200 | 2.125 | 8b/10b | PCI-X
4GFC | 400 | 4.25 | 8b/10b | PCI-X 2.0 or PCIe 1.0 x4
8GFC | 800 | 8.5 | 8b/10b | PCIe 1.0 x8 or PCIe 2.0 x4
16GFC | 1600 | 14.025 | 64b/66b | PCIe 2.0 x8 or PCIe 3.0 x4
32GFC | 3200 | 28.05 | 64b/66b | PCIe 3.0 x8

InfiniBand Speed Table

Rate | 1X Data Rate | 4X Data Rate | 12X Data Rate | Encoding | Host Adapter requirements (dual-port cards)
SDR | 2 Gb/s | 8 Gb/s | 24 Gb/s | 8b/10b | PCIe 1.0 x8
DDR | 4 Gb/s | 16 Gb/s | 48 Gb/s | 8b/10b | PCIe 1.0 x16 or PCIe 2.0 x8
QDR | 8 Gb/s | 32 Gb/s | 96 Gb/s | 8b/10b | PCIe 2.0 x8
FDR-10 ¹ | 10.3125 Gb/s | 41.25 Gb/s | 123.75 Gb/s | 64b/66b | PCIe 3.0 x8
FDR | 13.64 Gb/s | 54.55 Gb/s | 163.64 Gb/s | 64b/66b | PCIe 3.0 x8
EDR | 25 Gb/s | 100 Gb/s | 300 Gb/s | 64b/66b | PCIe 3.0 x16

¹ Mellanox only

InfiniBand connections can be aggregated into 4x (4 lanes) and 12x (12 lanes), depending on the application and connector. QSFP and QSFP+ connectors are used for 4x connections, and CXP connectors are typically used for 12x connections. See the Connector Types section below for more details on the connector types.
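
As a quick sanity check on the 4X and 12X columns, each is simply the 1X rate multiplied by the lane count. A sketch of ours; note that the published FDR figures are rounded from 14.0625 GBaud with 64b/66b encoding:

```python
# 1X data rates in Gb/s from the table above.
# FDR is kept unrounded: 14.0625 GBaud * 64/66 = 13.636... Gb/s
ONE_X_GBPS = {"SDR": 2, "DDR": 4, "QDR": 8, "FDR": 14.0625 * 64 / 66, "EDR": 25}

for rate, gbps in ONE_X_GBPS.items():
    print(f"{rate}: 1X = {gbps:.2f}, 4X = {4 * gbps:.2f}, 12X = {12 * gbps:.2f} Gb/s")
# FDR: 1X = 13.64, 4X = 54.55, 12X = 163.64 Gb/s
```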



History

Products with the interface speeds listed became available during these years. Newer interface speeds are often available in switches and adapters long before they are available in storage devices and storage systems.

PCIe history is provided in the PCI Express section below.



Roadmaps

These roadmaps include the estimated calendar years in which higher speeds may become available and are based on our industry research, which is subject to change. History indicates that several of these interfaces are on a three- or four-year development cycle for the next improvement in speed, and it is reasonable to expect that pace to continue.

It should be noted that it typically takes several months after the specification is complete before products are generally available in the marketplace. Widespread adoption of those new products takes additional time, sometimes years.

Some of the standards groups are now working on “Energy Efficient” versions of these interfaces, adding provisions to their respective standards to reduce power consumption.

See the Connector Types section below for additional roadmap information.

SAS-SATA Connector Compatibility

[Diagram: SAS-SATA Connector Compatibility. Source: SCSI Trade Association]

Express Bay Connector Backplane

[Diagram: Express Bay Connector Backplane. Source: SCSI Trade Association]

SATA Express Connector Mating Matrix

[Diagram: SATA Express Connector Mating Matrix. Source: SATA-IO]



Cables: Fiber Optics and Copper

As interface speeds increase, expect increased usage of fiber optic cables and connectors for most interfaces. At higher Gigabit speeds (10Gb+), copper cables and interconnects generally have too much amplitude loss except for short distances, such as within a rack or to a nearby rack. This amplitude loss is sometimes called a poor signal-to-noise ratio or simply “too noisy”.

Single-mode fiber vs. Multi-mode fiber

There are two general types of fiber optic cables available: single-mode fiber and multi-mode fiber.

Meter-for-meter, single-mode and multi-mode cables are similarly priced. However, some of the other components used in single-mode links are more expensive than their multi-mode equivalents.

When planning datacenter cabling requirements, be sure to consider that fiber optic cabling can be expected to have a service life of 15 to 20 years, so the choices made today need to support legacy, current and emerging data rates. Also note that deploying large amounts of new cable in a datacenter can be labor-intensive, especially in existing environments.

There are different designations for fiber optic cables depending on the bandwidth supported.

OM3 and OM4 are newer multi-mode cables that are “laser optimized” (LOMMF) and support 10 Gigabit Ethernet applications. OM3 and OM4 cables are also the only multi-mode fibers included in the IEEE 802.3ba 40G/100G Ethernet standard that was ratified in June 2010. The 40G and 100G speeds are currently achieved by bundling multiple channels together in parallel with special multi-channel (or multi-lane) connector types. This standard defines an expected operating range of up to 100m for OM3 and up to 150m for OM4 for 40 Gigabit Ethernet and 100 Gigabit Ethernet. These are estimates of distance only and supported distances may differ when 40GbE and 100GbE products become available in the coming years. See the Connector Types section below for additional detail. OM4 cabling is expected to support 32GFC up to 100 meters.

Newer multi-mode OM2, OM3 and OM4 (50 µm) and single-mode OS1 (9 µm) fiber optic cables have been introduced that can handle tight corners and turns. These are known as “bend optimized,” “bend insensitive,” or have “enhanced bend performance.” These fiber optic cables can have a very small turn or bend radius with minimal signal loss or “bending loss.” The term “bend optimized” multi-mode fiber (BOMMF) is sometimes used.

OS1 and OS2 single-mode fiber optics are used for long distances, up to 10,000m (6.2 miles) with the standard transceivers and have been known to work at much longer distances with special transceivers and switching infrastructure.

Each of the multi-mode and single-mode fiber optic cable types includes two wavelengths. The higher wavelengths are used for longer-distance connections.

Update: 24 April 2012 — The Telecommunications Industry Association (TIA) Engineering Committee TR-42 Telecommunications Cabling Systems has approved the publication of TIA-942-A, the revised Telecommunications Infrastructure Standard for Data Centers. A number of changes were made to update the specification with respect to higher transmission speeds, energy efficiency and harmonization with international standards, including important updates for backbone and horizontal cabling and connectors.


Indoor vs. Outdoor cabling

Indoor fiber-optic cables are suitable for indoor building applications. Outdoor cables, also known as outside plant or OSP cables, are suitable for outdoor applications and are water (liquid and frozen) and ultraviolet resistant. Indoor/outdoor cables provide the protections of outdoor cables with a fire-retardant jacket that allows them to be deployed inside the building entrance beyond the OSP maximum distance, which can reduce the number of transition splices and connections needed.

Fiber Optic Cable Characteristics

Type | Mode | Core Diameter | Wavelength | Modal Bandwidth (MHz·km) | Cable Jacket Color
OM1 | multi-mode | 62.5 µm | 850 nm, 1300 nm | 200 | Orange
OM2 | multi-mode | 50 µm | 850 nm, 1300 nm | 500 | Orange
OM3 | multi-mode | 50 µm | 850 nm, 1300 nm | 2000 | Aqua
OM4 | multi-mode | 50 µm | 850 nm, 1300 nm | 4700 | Aqua
OS1 | single-mode | 9 µm | 1310 nm, 1550 nm | n/a | Yellow

Fiber Optic Cable by Distance and Speed

Speed | OM1 | OM2 | OM3 | OM4
1 Gb/s | 300m | 500m | 860m |
2 Gb/s | 150m | 300m | 500m |
4 Gb/s | 70m | 150m | 380m | 400m
8 Gb/s | 21m | 50m | 150m | 190m
10 Gb/s | 33m | 82m | up to 300m | up to 400m ¹
16 Gb/s | 15m ¹ | 35m ¹ | 100m ¹ | 125m ¹

¹ OM1 cable is not recommended for 16Gb/s FC, but is expected to operate up to 15m.

Distances supported in actual configurations are generally less than the distance supported by the raw fiber optic cable. The distances shown above are for 850 nm wavelength multi-mode cables. The 1300 nm wavelength multi-mode cables can support longer distances.
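
When planning a link, the distance table can be treated as a simple lookup: given a cable grade and a speed, check whether the planned run length fits. A minimal sketch of ours, with distances transcribed from the table above (850 nm multi-mode figures):

```python
# Maximum supported distance in meters, by cable grade and speed in Gb/s,
# transcribed from the table above (850 nm multi-mode figures).
MAX_DISTANCE_M = {
    "OM1": {1: 300, 2: 150, 4: 70, 8: 21, 10: 33, 16: 15},
    "OM2": {1: 500, 2: 300, 4: 150, 8: 50, 10: 82, 16: 35},
    "OM3": {1: 860, 2: 500, 4: 380, 8: 150, 10: 300, 16: 100},
    "OM4": {4: 400, 8: 190, 10: 400, 16: 125},
}

def link_fits(cable: str, speed_gbps: int, run_m: float) -> bool:
    """True if a run of run_m meters is within the supported distance."""
    supported = MAX_DISTANCE_M.get(cable, {}).get(speed_gbps)
    return supported is not None and run_m <= supported

print(link_fits("OM3", 16, 80))   # True: 80 m is within the 100 m limit
print(link_fits("OM3", 16, 120))  # False: needs OM4 or single-mode fiber
```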

Active Copper vs. Passive Copper

Passive copper connections are common with many interfaces. The industry is finding that as the transfer rates increase, passive copper does not provide the distance needed and takes up too much physical space. The industry is moving towards an active copper type of interface for higher speed connections, such as 6Gb/s SAS. Active copper connections include components that boost the signal, reduce the noise and work with smaller-gauge cables, improving signal distance, cable flexibility and airflow. These active copper components are expected to be less expensive and consume less electric power than the equivalent components used with fiber optic cables.

Copper: 10GBASE-T and 1000BASE-T

1000BASE-T cabling is commonly used for 1Gb Ethernet traffic in general, and for 1Gb iSCSI storage connections. This is the familiar four-pair copper cable with RJ45 connectors. Cables used for 1000BASE-T are known as Cat5e (Category 5 enhanced) or Cat6 (Category 6) cables.

10GBASE-T cabling supports 10Gb Ethernet traffic, including 10Gb iSCSI storage traffic. The cables and connectors are similar to, but not the same as, those used for 1000BASE-T. 10GBASE-T cables are Cat6a (Category 6 augmented), also known as Class EA cables. These support the higher frequencies required for 10Gb transmission up to 100 meters (330 feet). Cables must be certified to at least 500MHz to ensure 10GBASE-T compliance. Cat7 (Category 7, Class F) cable is also certified for 10GBASE-T compliance, and is typically deployed in Europe. Cat6 cables may work in 10GBASE-T deployments up to 55m, but should be tested first. 10GBASE-T cabling is not expected to be deployed for FCoE applications in the near future. Some newer 10GbE switches support 10GBASE-T (RJ45) connectors.
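
The cable-category limits in the paragraph above lend themselves to a simple planning check; a sketch of ours using the stated distances:

```python
# Maximum 10GBASE-T run length in meters by cable category, per the text above.
# Cat6 runs up to 55 m "may work" and should be tested before relying on them.
MAX_RUN_10GBASET_M = {"Cat6": 55, "Cat6a": 100, "Cat7": 100}

def supports_10gbaset(category: str, run_m: float) -> bool:
    """True if the planned run is within the stated limit for the category."""
    limit = MAX_RUN_10GBASET_M.get(category)
    return limit is not None and run_m <= limit

print(supports_10gbaset("Cat6a", 90))  # True: within the 100 m limit
print(supports_10gbaset("Cat6", 90))   # False: Cat6 tops out near 55 m
```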

10GBASE-CR — Currently, the most common type of copper 10GbE cable is the 10GBASE-CR cable that uses an attached SFP+ connector, also known as a Direct Attach Copper (DAC). This fits into the same form factor connector and housing as the fiber optic cables with SFP+ connectors. Many 10GbE switches accept cables with SFP+ connectors, which support both copper and fiber optic cables. These cables are available in 1m, 3m, 5m, 7m, 8.5m and longer distances. The most commonly deployed distances are 3m and 5m.

10GBASE-CX4 — These cables are older and not very common. This type of cable and connector is similar to cables used for InfiniBand technology.

USB Type-C Cables
Information on the new USB Type-C cables is located in the Roadmaps section above.



Connector Types

Several types of connectors are available with cables used for storage interfaces. This is not an exhaustive list but is intended to show the more common types. Each of the connector types includes the number of lanes (or channels) and the rated speed.

As of early 2011, the fastest generally available connector speeds supported were 10 Gbps per lane. Significantly higher speeds are currently achieved by bundling multiple lanes in parallel, such as 4x10 (40 Gbps), 10x10 (100 Gbps), 12x10 (120 Gbps), etc. Most of the current implementations of 40GbE and 100GbE use multiple lanes of 10GbE and are considered “channel bonded” solutions.

14 Gbps per lane connectors appeared in the last half of 2011. These connectors support 16Gb Fibre Channel (single-lane) and 56Gb (FDR) InfiniBand (multi-lane).

25 Gbps per lane connectors may become available as prototypes in 2012 or 2013. When 25 Gbps per lane connectors are available, higher speeds, such as 100 Gbps, can be achieved by bundling four of these lanes together. Other variations of bundling multiple lanes of 25 Gbps may be possible, such as 10x25 (250 Gbps), 12x25 (300 Gbps) or 16x25 (400 Gbps). It is expected that the 25 Gbps (actually 28 Gbps) connectors will support 32Gb Fibre Channel in single-lane configurations and higher speeds for Ethernet and InfiniBand in multi-lane configurations.
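
The bundling arithmetic is simple lane multiplication; a tiny sketch of ours reproducing the combinations mentioned above:

```python
# Aggregate speed = lanes * per-lane rate, for the bundles named above
bundles = [(4, 10), (10, 10), (12, 10), (4, 25), (10, 25), (12, 25), (16, 25)]
for lanes, per_lane_gbps in bundles:
    print(f"{lanes}x{per_lane_gbps} = {lanes * per_lane_gbps} Gbps")
# 4x10 = 40, 10x10 = 100, 12x10 = 120, 4x25 = 100, ..., 16x25 = 400 Gbps
```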

In calendar Q1 2012, several fiber-optic connector manufacturers demonstrated working prototypes of the “25/28G” connectors. These connectors support speeds up to 28 Gbps per lane and will be used for 100 Gbps Ethernet (100GbE) in a 4x25 configuration. These connector technologies will also be used for other high-speed applications such as the next higher speeds of Fibre Channel (32GFC) and InfiniBand. End-user products with these higher speed technologies were originally estimated to become available in 2013 or 2014, but more work remains before 25Gb products become generally available.

Two of the popular fiber-optic cable connectors are SFP+ and QSFP+ (see diagrams below). SFP+ is used for single-lane high-speed connections, and QSFP+ is used for four-lane high-speed connections. Many in the industry use the four-lane (“quad”) interface to provide increased bandwidth. Currently, the single-lane SFP+ is used for 10Gb Ethernet and 8Gb and 16Gb Fibre Channel. The four-lane QSFP+ is used for 40Gb Ethernet and 40Gb (QDR) and 56Gb (FDR) InfiniBand. The Fibre Channel technical committee is now officially discussing a single-lane and four-lane (“quad”) solution with the 32GFC technology (4x32) for a 128 Gb/sec connection. See the Roadmaps section above.

SFP+ QSFP+ Connector/Interface Table

Technology | SFP | SFP+ | QSFP+
Ethernet | 1GbE | 10GbE | 40GbE
Fibre Channel | 1GFC, 2GFC, 4GFC | 8GFC, 16GFC |
InfiniBand | | | QDR, FDR

See the encoding schemes described above for additional detail on the speeds available for various connector and cable combinations.

Connector Table

Connector | Lanes | Max. Speed per Lane (Gbps) | Max. Speed Total (Gbps) | Cable Type | Usage
Mini SAS | 4 | 6 | 24 | Copper | 3Gb, 6Gb SAS
Mini SAS HD | 4, 8 | 12 | 48, 96 | Copper | 6Gb, 12Gb SAS
CX4 | 4 | 5 | 20 | Copper | 10Gb Ethernet; SDR and DDR InfiniBand
SFP (Small Form-factor Pluggable) | 1 | 4 | 4 | Copper, Optical | 1Gb Ethernet; 1, 2, 4Gb Fibre Channel
SFP+ (Small Form-factor Pluggable, enhanced) | 1 | 16 | 16 | Copper, Optical | 10Gb Ethernet; 8Gb & 16Gb Fibre Channel; 10Gb FCoE
QSFP (Quad Small Form-factor Pluggable) | 4 | 5 | 20 | Copper, Optical | Various
QSFP+ (Quad Small Form-factor Pluggable, enhanced) | 4 | 16 | 64 | Copper, Optical | 40Gb Ethernet; DDR, QDR & FDR InfiniBand; 64Gb Fibre Channel
CXP | 10, 12 | 10 | 100, 120 | Copper | 100Gb Ethernet; 120Gb other
CFP | 10 | 10 | 100 | Optical | 100Gb Ethernet

PCIe data rates and connector types are provided in the PCI Express section.

InfiniBand Data Rates

SDR: Single Data Rate, DDR: Double Data Rate, QDR: Quad Data Rate, FDR: Fourteen Data Rate, EDR: Enhanced Data Rate

Connector Diagrams

[Diagrams: Mini SAS, Mini SAS HD, CX4, SFP/SFP+, and QSFP/QSFP+ connectors]

PCIe connector types are provided in the PCI Express section.

Mini SFP

[Photo: mini-SFP and standard SFP connectors]

In the second half of 2010, a new variant of the SFP/SFP+ connector was introduced to accommodate the Fibre Channel backbone with 64-port blades and the planned increased-density Ethernet core switches. This new connector, known as mSFP, mini-SFP or mini-LC SFP, narrows the optical centerline of a conventional SFP/SFP+ connector from 6.25 mm to 5.25 mm. Although this connector looks very much like a standard SFP style connector, it is narrower and is required for the higher-density devices. The photo shows the difference between the mini-SFP and the standard size.

CXP and CFP

The CXP (copper) and CFP (optical) connectors are expected to be used initially for switch-to-switch connections. These are expected for Ethernet and may also be used for InfiniBand. CFP connectors currently support 10 lanes of 10 Gbps connections (10x10) and consume approximately 35-40 watts. CFP2 is a single-board, smaller version of CFP that also supports 10x10 but uses less power than CFP. During 2013, much of the development activity focused on CFP2. A future CFP4 connector, expected to use the 25/28G connectors and support 4x25, is in the planning stages. CFP4 is expected to handle long-range fiber optic distances.

Mini SAS and Mini SAS HD

The Mini SAS connector is the familiar 4-lane connector available on most SAS cables today. The Mini SAS HD connector provides twice the density of the Mini SAS connector, and is available in 4-lane and 8-lane configurations. The same Mini SAS HD connector is used for passive copper, active copper and optical SAS cables. The diagrams below compare these two types of SAS connectors.

[Diagram: Mini SAS HD receptacle comparison. Source: SCSI Trade Association]



PCI Express (PCIe)

PCI Express®, also known as PCIe®, stands for Peripheral Component Interconnect Express and is the computer industry standard I/O bus for computers introduced in the last few years. The first version of the PCIe specification, 1.0a, was introduced in 2003. Version 2.0 was introduced in 2007 and version 3.0 was introduced in 2010. These versions are often identified by their generation (“gen 1”, “gen 2”, etc.). It can take a year or two between the introduction of a specification version and the general availability of computer systems and devices that use it. The PCIe specifications are developed and maintained by the PCI-SIG (PCI Special Interest Group). PCI Express and PCIe are registered trademarks of the PCI-SIG.

Data rates for different versions of PCIe are shown in the table below. PCIe data rates are expressed in Gigatransfers per second (GT/s) and are a function of the number of lanes in the connection. The number of lanes is expressed with an “x” before the number of lanes, and is often spoken as “by 1”, “by 4”, etc. PCIe supports full-duplex (traffic in both directions). The data rates shown below are in each direction. Note the explanation of encoding schemes described above.

PCIe Data Rate Table

Version | GT/s | Encoding | x1 | x2 | x4 | x8 | x16
PCIe 1.x | 2.5 | 8b/10b | 250 MB/s | 500 MB/s | 1 GB/s | 2 GB/s | 4 GB/s
PCIe 2.x | 5 | 8b/10b | 500 MB/s | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s
PCIe 3.x | 8 | 128b/130b | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 16 GB/s
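
The table values follow directly from the transfer rate, the lane count and the encoding efficiency. A small sketch of ours showing the arithmetic (the table rounds PCIe 3.x x16 up to 16 GB/s):

```python
def pcie_gb_per_sec(gt_per_s, lanes, data_bits, total_bits):
    """Per-direction PCIe bandwidth in GB/s:
    transfers/s * lanes * encoding efficiency / 8 bits per byte."""
    return gt_per_s * lanes * (data_bits / total_bits) / 8

print(pcie_gb_per_sec(2.5, 1, 8, 10))      # 0.25   -> PCIe 1.x x1, 250 MB/s
print(pcie_gb_per_sec(5.0, 8, 8, 10))      # 4.0    -> PCIe 2.x x8, 4 GB/s
print(pcie_gb_per_sec(8.0, 16, 128, 130))  # ~15.75 -> PCIe 3.x x16, ~16 GB/s
```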

Efforts are underway to enable SATA and SAS to be carried over PCIe connections. See the Roadmaps section above.

Mini-PCIe — PCI Express cards are also available in a mini PCIe form factor. This is a special form factor for PCIe that is approximately 30mm x 51mm or 30mm x 26.5mm, designed for laptop and notebook computers, and equivalent to a single-lane (x1) PCIe slot. A variety of devices including WiFi modules, WAN modules, video/audio decoders, SSDs and other devices are available in this form factor.

M.2 — M.2 is the next-generation PCIe connector for ultra-thin tablets and other mobile platforms. Its multiple socket definitions support WWAN, SSD and other applications. M.2 can support the PCIe protocol or the SATA protocol, but not both at the same time on the same device. M.2 supports a variety of board width and length options. M.2 is available as single-sided modules that can be soldered down, or as single-sided and dual-sided modules used with a connector.

SFF-8639 — SFF-8639 is the I/O backplane connector designed for high-density SSD storage devices and is backward compatible with existing storage interfaces. SFF-8639 supports PCIe/NVMe, SAS and SATA devices and enables hot plug and hot swap of devices while the system is running. Revision 0.7 of the SFF-8639 specification was released in March 2014, with Revision 1.0 expected by year-end 2014. The SFF-8639 connector is expected to meet similar electrical requirements as a standard PCIe CEM connector.


Demartek SFF-8639 photo



M-PCIe™ — M-PCIe is the specification that maps PCIe over the MIPI® Alliance M-PHY® technology used in low-power mobile and handheld devices. M-PCIe is optimized for RFI/EMI requirements and supports M-PHY gears 1, 2 and 3 and will be extended to support gear 4.

PCIe 2.0 — Servers that have PCIe 2.0 x8 slots can support two ports of 10GbE or two ports of 16GFC on one adapter.

PCIe 3.0 — On 6 March 2012, the major server vendors announced their next-generation servers supporting PCIe 3.0, which, among other things, doubles the I/O throughput rate of the previous generation. These servers also provide up to 40 PCIe 3.0 lanes per processor socket, at least double that of the previous server generation. Workstation and desktop computer motherboards that support PCIe 3.0 first appeared in late 2011, as did PCIe 3.0 graphics cards. Other types of adapters supporting PCIe 3.0 were announced in 2012 and 2013. The PCIe 3.0 specification itself was completed in November 2010.

PCIe 3.1 — The PCIe 3.1 specification was released in October 2014. It incorporates M-PCIe and consolidates numerous protocol extensions and functionality for ease of access.

PCIe 4.0 — In November 2011, PCI-SIG announced the approval of 16 gigatransfers per second (GT/s) as the bit rate for the next generation of PCIe architecture, known as PCIe 4.0. After technical analysis, it was determined that 16 GT/s could be manufactured and deployed with known technologies, while maintaining backward compatibility with previous generations of PCIe architecture such as PCIe 1.x, 2.x and 3.x. Revision 0.5 of the PCIe 4.0 specification is expected by year-end 2014, while Revision 0.9 is expected to be available in 1H 2016. It may take up to a year or more for products that support PCIe 4.0 technology to become generally available after the final PCIe 4.0 specification is complete.

OCuLINK — OCuLINK is intended to be a low-cost, small cable form factor for PCIe internal and external devices, offering bit rates starting at 8Gbps, with headroom to scale, and new independent cable clock integration. OCuLINK supports x1, x2 and x4 lanes of PCIe 3.0 connectivity. OCuLINK supports passive cables capable of reaching 2 to 3 meters, as well as active copper and optical cables. Active copper cables can reach up to 3 to 10 meters, while active optical cables can reach up to 300 meters in length. The OCuLINK specification will be completed in the first calendar quarter of 2015, with products expected shortly thereafter.

I/O Virtualization — In 2008, the PCI-SIG announced the completion of its I/O Virtualization (IOV) suite of specifications including single-root IOV (SR-IOV) and multi-root IOV (MR-IOV). These technologies can work with system virtualization technologies and can allow multiple operating systems to natively share PCIe devices. SR-IOV is currently supported with several 10GbE NICs and hypervisors. See the recent Demartek I/O Virtualization presentation for additional detail.

The concept of sharing PCIe devices or providing access to PCIe devices that may be physically larger than some smaller form-factor systems can accommodate has led to the development of external connections to some PCIe devices. Cables have been developed for extending the PCIe bus outside of the chassis holding the PCIe slots. These cables are specified by indicating the number of PCIe lanes (x4, x8, etc.) supported. Cables are typically available for x4, x8 and x16 lane configurations. Common cable lengths are 1m and 3m. The photo below shows some PCIe cables and connectors. PCIe can also be carried over fiber-optic cables for longer distances. In the future we will begin to see the shift from PCIe cables and connectors to OCuLINK cables and connectors, also shown in the image below.

[Photo: PCIe and OCuLINK cables and connectors]


The original version of this page is available at www.demartek.com/Demartek_Interface_Comparison.html on the Demartek website.