Demartek Storage Networking Interface Comparison
Updated 21 November 2014
By Dennis Martin, Demartek President
Interactive PDF Version
Recently, Demartek updated the Storage Networking Interface Comparison and added a new Interactive PDF version. This format lets you view and interact with the information in ways not available with standard PDF readers. To get the most out of this guide, we encourage you to upgrade to the latest version of Adobe Acrobat Reader. If you experience any problems viewing or interacting with this document, please contact us.
Download the Demartek Storage Networking Interface Comparison Interactive PDF
(PDF, 12.6 MB)
Because of the number of storage interface types and related technologies used for storage devices, we have compiled this summary document providing basic information about each of the interfaces. This document will be updated periodically and may grow over time. Contact us if you’d like to see additional information included.
The interface types listed here are known as “block” interfaces, meaning that they provide an interface for “block” reads and writes. They simply provide a conduit for blocks of data to be read and written, without regard to file systems, file names or any other knowledge of the data in the blocks. The host requesting the block access provides a starting address and number of blocks to read or write.
We are producing deployment guides for some of the technologies described in this document.
- Demartek Fibre Channel Deployment Guide
- Demartek iSCSI Deployment Guide
- Demartek SSD Deployment Guide
To be notified when we post new deployment guides, evaluation reports and commentaries, sign up for our free monthly newsletter, Demartek Lab Notes. We do not give out, rent or sell our email list.
- Storage Networking Interface Comparison Table
- Transfer Rate, Bits vs. Bytes, and Encoding Schemes
- Cables: Fiber Optics and Copper
- Connector Types
- PCI Express® (PCIe®)
- FC — Fibre Channel (also see Demartek FC Zone)
- FCoE — Fibre Channel over Ethernet (also see Demartek FCoE Zone)
- IB — InfiniBand
- iSCSI — Internet Small Computer System Interface (also see Demartek iSCSI Zone)
- NVMe — NVM Express
- PCIe — PCI Express
- SAS — Serial Attached SCSI
- SATA — Serial ATA
- USB — Universal Serial Bus
- 10GbE — 10 Gigabit Ethernet
- CNA — Converged Network Adapter (used with FCoE)
- HBA — Host Bus Adapter (used with FC, iSCSI, SAS, SATA)
- HCA — Host Channel Adapter (used with IB)
- NIC — Network Interface Controller or Network Interface Card (used with FCoE, iSCSI)
- ISL — Inter-Switch Link
- SAN — Storage Area Network
- Gb — Gigabit
- GB — Gigabyte
- Mb — Megabit
- MB — Megabyte
- Gb/s — Gigabits per second
- Gbit/s — Gigabits per second
- Gbps — Gigabits per second
- GB/s — Gigabytes per second
- GBps — Gigabytes per second
- Mb/s — Megabits per second
- Mbit/s — Megabits per second
- Mbps — Megabits per second
- MB/s — Megabytes per second
- MBps — Megabytes per second
- HDD — Hard Disk Drive
- SSD — Solid State Drive (also see Demartek SSD Zone)
- SSHD — Solid State Hybrid Drive
- SDR — Single Data Rate (InfiniBand)
- DDR — Double Data Rate (InfiniBand)
- QDR — Quad Data Rate (InfiniBand)
- FDR — Fourteen Data Rate (InfiniBand)
- EDR — Enhanced Data Rate (InfiniBand)
PCIe data rates are provided in the PCI Express section below.
Transfer Rate
Transfer rate, sometimes known as transfer speed, is the maximum rate at which data can be transferred across the interface. This is not to be confused with the transfer rate of individual devices that may be connected to the interface. Some interfaces may not be able to transfer data at the maximum possible rate due to processing overhead inherent in that interface. Some interface adapters provide hardware offload to improve performance, manageability and/or reliability of data transmission across the respective interface. The transfer rates listed are across a single port at half duplex.
Bits vs. Bytes and Encoding Schemes
Transfer rates for storage interfaces and devices are generally listed as MB/sec or MBps (Megabytes per second), which is generally calculated as Megabits per second (Mbps) divided by 10. Many of these interfaces use “8b/10b” encoding, which maps 8-bit bytes into 10-bit symbols for transmission on the wire, with the extra bits used for command and control purposes. For these interfaces, converting from bits to bytes by dividing by ten (10) is exactly correct. 8b/10b encoding results in a 20 percent overhead ((10-8)/10) on the raw bit rate.
Beginning with 10GbE and 10Gb FC (for ISLs), some of the newer speeds emerging in 2010 and beyond use a newer “64b/66b” encoding scheme to improve data transfer efficiency. 64b/66b is the encoding scheme for 16Gb FC and is planned for higher data rates for IB. 64b/66b encoding is not directly compatible with 8b/10b, but the technologies that implement it are built so that they can interoperate with the older encoding scheme. 16Gb Fibre Channel uses a line rate of 14.025 Gbps, but with the 64b/66b encoding scheme delivers double the throughput of 8Gb Fibre Channel, which uses a line rate of 8.5 Gbps with the 8b/10b encoding scheme. 64b/66b encoding results in a 3 percent overhead ((66-64)/66) on the raw bit rate.
PCIe versions 1.x and 2.x use 8b/10b encoding. PCIe version 3 uses 128b/130b encoding, resulting in a 1.5 percent overhead on the raw bit rate. Additional PCIe information is provided in the PCI Express section below.
USB 3.1 will use 128b/132b encoding. See Roadmaps section below.
| Encoding | Overhead | Interfaces |
|---|---|---|
| 8b/10b | 20% | 1GbE, FC (up to 8Gb), IB (SDR, DDR & QDR), PCIe (1.0 & 2.0), SAS, SATA, USB (up to 3.0) |
| 64b/66b | 3% | 10GbE, 100GbE, FC (10Gb & 16Gb), FCoE, IB (FDR & EDR) |
| 128b/130b | 1.5% | PCIe 3.0, 24Gb SAS (likely) |
| 128b/132b | 3% | USB 3.1 (10 Gbps, see Roadmaps section below) |
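The overhead figures above follow directly from the encoding ratios. As an illustration (not part of the original document), a minimal Python sketch using the line rates quoted in this document shows how the usable payload rate is derived, and why 16GFC delivers exactly twice the throughput of 8GFC even though its line rate is less than twice 8.5 Gbps:

```python
def payload_gbps(line_rate_gbps, data_bits, total_bits):
    """Usable data rate after encoding overhead: line rate x (data bits / total bits)."""
    return line_rate_gbps * data_bits / total_bits

# 8GFC: 8.5 Gbaud line rate with 8b/10b encoding
fc8 = payload_gbps(8.5, 8, 10)        # 6.8 Gb/s of payload

# 16GFC: 14.025 Gbaud line rate with more efficient 64b/66b encoding
fc16 = payload_gbps(14.025, 64, 66)   # 13.6 Gb/s of payload -- exactly double 8GFC
```

The same function reproduces the overhead percentages in the table: 1 − 8/10 = 20%, 1 − 64/66 ≈ 3%, 1 − 128/130 ≈ 1.5%.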
History
Products became available with the interface speeds listed during these years. Newer interface speeds are often available in switches and adapters long before they are available in storage devices and storage systems.
- FC — 1Gb/s in 1997, 2Gb/s in 2001, 10Gb/s (ISL only) in 2004, 4Gb/s in 2005, 8Gb/s in 2008, 16Gb/s in 2011 (FC is generally backward compatible with the previous two generations)
- FCoE — FC:4Gb/s and Ethernet:10Gb/s in 2008, 10Gb/s in 2009.
(FC-BB-5 was approved in June 2009, INCITS 462-2010 was approved in Spring 2010)
- IB — 10Gb/s in 2002, 20Gb/s in 2005, 40Gb/s in 2008, 56Gb/s in 2011
- iSCSI — 1Gb/s in 2003, 10Gb/s in 2007 (basic 10GbE first appeared in 2002)
- NVMe — Version 1.0 specification published in March 2011. Version 1.1 specification finalized in October 2012. Version 1.2 specification released in November 2014.
- SAS — 3Gb/s in 2005, 6Gb/s in 2009, 12Gb/s in 2H 2013
- SATA — 1.5Gb/s in 2003, 3Gb/s in 2005, 6Gb/s in 2010 (traditional SATA is not expected to extend beyond 6Gb/s, see Roadmaps section below)
- SATA μSSD was introduced in August 2011 as a new implementation of SATA for embedded SSDs. These devices do not have the traditional SATA interface connector but use a single ball grid array (BGA) package that can be surface mounted directly on a system motherboard. These SATA μSSD devices are intended for mobile platforms such as tablets and ultrabooks, and consume less electric power than traditional SATA interface devices.
- SATA Revision 3.2 was ratified, with the announcement made in August 2013. This revision includes new SATA form factors, details for SATA Express, power management enhancements and optimizations for solid state hybrid drives (SSHDs). One of the new SATA form factors is M.2, which enables small form-factor SATA SSDs suitable for thin devices such as tablets and notebooks. M.2, formerly known as NGFF, is defined by the PCI-SIG and supports a variety of applications including WiFi, WWAN, USB, PCIe and SATA. The v3.2 specification standardizes the SATA M.2 connector pin layout. SATA v3.2 also introduces USM Slim, which reduces the thickness of the module from 14.5mm to 9mm.
- Thunderbolt — 10Gb/s in 2011, 20Gb/s in late 2013
- USB — 1.5Mb/s in 1997?, 12Mb/s in 1999?, 480Mb/s in 2001?, 5Gb/s in 2009
PCIe history is provided in the PCI Express section below.
Roadmaps
These roadmaps include the estimated calendar years in which higher speeds may become available and are based on our industry research, which is subject to change. Past history indicates that several of these interfaces are on a three- or four-year development cycle for the next improvement in speed. It is reasonable to expect that pace to continue.
It should be noted that it typically takes several months after the specification is complete before products are generally available in the marketplace. Widespread adoption of those new products takes additional time, sometimes years.
Some of the standards groups are now working on “Energy Efficient” versions of these interfaces to indicate additions to their respective standards to reduce power consumption.
See the Connector Types section below for additional roadmap information.
- Ethernet — In July 2014, two different industry groups announced new work on Ethernet specifications to take advantage of 25Gb PHYs in a single-lane configuration. This would result in a single-lane 25GbE connection similar to the existing 10GbE technology, running 2.5x faster. This has obvious implications for storage applications such as FCoE and iSCSI block protocols as well as other file and object protocols. End-user products using these technologies may become available in late 2015 or 2016. The two industry groups are the 25G/50G Ethernet Consortium and the IEEE 802.3 25Gb/s Ethernet Study Group. The 25G/50G Ethernet Consortium is working on single-lane 25GbE and dual-lane 50GbE solutions. See the Connector Types section below for additional comments on single-lane and multi-lane connection technology.
- 32Gb/s FC (“32GFC”) — Work on the 32Gb/s FC (“32GFC”) standard, FC-PI-6, began in early 2010. In December 2013, the Fibre Channel Industry Association (FCIA) announced the completion of the FC-PI-6 specification. 32GFC products are expected to become available by 2015 or 2016. 32GFC is expected to use the 25/28G SFP+ connector technology as described in the Connector Types section below.
A multi-lane 128GFC interface, known as 128GFCp (parallel, four-lane), is based on the 32GFC work and has been added to the official Fibre Channel roadmap. The T11 committee has accepted this as a project known as FC-PI-6P. This specification is expected to be completed in late 2014 or early 2015, with products possibly available in 2015 or 2016. 128GFCp will probably use QSFP+ connectors and may also support CFP2 or CFP4 connectors.
Some vendors refer to 32GFC and 128GFCp as “Gen 6” Fibre Channel, since this version of Fibre Channel supports two different speeds, in two different configurations (serial and parallel).
- 64Gb/s FC (“64GFC”) — Work has not yet begun in the T11 committee for developing the single-lane 64GFC specifications, but 64GFC is on the FCIA speed roadmap. Each FC revision is expected to be backwards compatible with at least the two previous generations.
- SAN interface — FC will remain viable as a SAN interface for the foreseeable future. There has been a huge investment (billions of US dollars) in FC infrastructure over the years, primarily in enterprise datacenters, which is likely to remain deployed for many years.
- Disk drive interface — FC has reached end-of-life as a disk drive interface, as the disk drive and SSD manufacturers have moved to 6Gb/s and 12Gb/s SAS as the interface for enterprise drives. We expect the FC interface on 3.5-inch disk drives to live on for a while to maintain spare parts availability, due to the relatively large number of 3.5-inch FC disk drives in enterprise disk subsystems. We expect to see relatively few 2.5-inch enterprise disk drives with an FC interface.
- FC-BB-6 — In August 2014, work was completed on the FC-BB-6 standard in the T11 committee. FC-BB-6 includes two enhancements that support Virtual N_Port to Virtual N_Port (VN2VN) connectivity and Domain_ID Scalability. VN2VN enables the establishment of direct point-to-point virtual links between nodes in an FCoE network, enabling simpler configurations for smaller environments. Zoning may not be needed in these network designs, resulting in lower complexity and cost. Domain_ID Scalability helps FCoE fabrics scale to larger SANs.
- 40Gb/s and 100Gb/s — 40Gb/s is a year or two away, possibly in the same time period as 32Gb FC. The IEEE 802.3ba 40Gb/s and 100Gb/s Ethernet standards were ratified in June 2010. Products are expected to follow over time. It is expected that 40Gb FCoE and 100Gb FCoE based on the 2010 standards will be used initially for Inter-Switch Link (ISL) cores, thereby maintaining 10Gb FCoE as the predominant FCoE edge connection through at least 2013. It is expected that future versions of 100GFCoE cables and connectors will be available in 10x10 configurations and later in 4x25 configurations. See the connector types section below for discussion on 40Gb and 100Gb connectors.
- IB — 100Gb/s (Enhanced Data Rate or EDR) is expected to become available by the end of 2014. EDR will use the same 25/28G technology that will be used by other interfaces such as Ethernet and Fibre Channel. See the Connector section below regarding the 25/28G technology. InfiniBand High Data Rate (HDR), supporting twice the speed of EDR, is expected in 2017.
- iSCSI — follows Ethernet roadmap (see FCoE roadmap above).
- NVMe — The first few enterprise NVMe SSDs became available during 2014 and this will continue into 2015. Client NVMe SSDs are expected to become available in 2015. The UEFI 2.4 specification contains updates for NVMe, and full BIOS support of NVMe devices is expected to appear in products beginning in 2014. NVMe is working on a management interface specification that is expected to be available in approximately Q1 2015. Work on a new specification known as NVM Express over Fabrics was announced in September 2014. This specification will extend the benefits of NVMe onto fabrics such as Ethernet, Fibre Channel, InfiniBand and Intel Omni Scale. The first fabric definition will be the RDMA protocol family used with Ethernet (iWARP and RoCE) and InfiniBand, and this addition to the NVMe specification is expected to be completed in 2H 2015. In September 2014, the Fibre Channel Industry Association also announced a new workgroup to align Fibre Channel technology with NVMe over Fabrics. Additional information on NVMe and NVM Express over Fabrics is available on the Demartek IDF2014 Commentary page.
- 12Gb/s SAS — The SAS 3 specification, which includes 12Gb/s SAS, was submitted to INCITS in Q4 2013. End-user 12Gb/s SAS products began to appear in the second half of 2013, including SAS-interface SSDs, SAS HBAs and RAID controllers. 12Gb/s SAS is required to take full advantage of a PCIe 3.0 bus.
- 24Gb/s SAS — Development is currently underway on the 24Gb/s SAS specification. Early estimates suggest that 24Gb/s SAS components may begin to appear in late 2016 or 2017, but these are estimates and are subject to change. The first end-user products are projected to become available in 2018. 24Gb/s SAS is expected to be backward compatible with 12Gb/s and 6Gb/s SAS. 24Gb/s SAS may use a different encoding scheme than previous versions. Drive connectors are expected in Q1 2014. The development and prototypes for 24Gb/s SAS will use PCIe 3.x technology, but it is likely that the final 24Gb/s SAS products will be aligned with the server platforms that use PCIe 4.0 technology. See the PCI Express section below for information on PCIe 4.0.
- SCSI Express — SCSI Express provides the well-known SCSI Protocol over the PCI Express (PCIe) interface, taking advantage of the low latency of PCIe. SCSI Express is designed to meet the increased performance of SSDs. SCSI Express uses SCSI over PCIe (SOP) and the PCIe Queueing Interface (PQI) to form the SOP-PQI protocol. SCSI Express controllers connect to devices via the Express Bay SFF-8639 multifunction connector, which supports multiple protocols and interfaces, such as PCIe, SAS and SATA. SCSI Express handles PCIe devices that use up to x4 lanes. The first version of the SCSI Express specification was published in February 2014. SCSI Express devices and controllers are estimated to become available in 2H 2014. See the Express Bay Connector Backplane diagram below.
- SAS Advanced Connectivity — New SAS cabling options are offering longer distances by using active copper (powered signal) and optical cables. The Mini SAS HD connector can be used for 6Gb/s SAS and will be used for 12Gb/s SAS connections. See the connector types section below for discussion of Mini SAS and Mini SAS HD connectors.
- SATA Express is included in SATA Revision 3.2 (see History section above). SATA Express enables client SATA and PCI Express (PCIe) solutions to coexist. SATA Express will support increased interface speeds up to 2 lanes of PCIe (2GB/s for PCIe 3.0, 1GB/s for PCIe 2.0), compared with current 0.6GB/s (6Gb/s) SATA technology. These increased speeds are suitable for SSD and SSHD technology, while traditional HDD technology can continue to use today’s SATA interface. The SATA Express device connector pins are multiplexed, meaning that only PCIe or SATA (but not both) can be active for that device at one time. A separate signal, driven by the device, tells the host whether the device is SATA or PCIe. SATA Express products may become available in 2014 or 2015. See below for the SATA Express Connector Mating Matrix.
- Thunderbolt — Thunderbolt 2 was introduced in late 2013, and is expected to become available on a wide variety of computer motherboards, display, storage and other peripheral devices throughout 2014 and 2015. In spring 2014, Intel publicly demonstrated the use of Thunderbolt 2 to carry 10Gb Ethernet traffic between two different brands of computers, copying files from one to the other. Thunderbolt 2 controllers currently use PCIe 2.0. Future Thunderbolt controllers are expected to support PCIe 3.0. For additional comments on Thunderbolt, view our Demartek CES2014 Commentary page and our Demartek IDF2014 Commentary page.
- USB data rates — The USB 3.0 Promoter Group announced at the end of July 2013 that the USB 3.1 specification had been completed. USB 3.1 enables USB to operate at 10 Gbps and is backward compatible with existing USB 3.0 and 2.0 hubs and devices. USB 3.1 uses a 128b/132b encoding scheme, with four bits used for command and control of the protocol and cable management. The first public prototype demonstration of USB running at 10 Gbps was made in September 2013. USB 3.1 is expected to support USB Power Delivery (described below). A new “Type-C” connector is expected for USB 3.1 that will be similar in size to the “micro-USB” connectors. USB 3.1 products are expected to become available by the end of 2014. For additional comments on USB 3.1, view our Demartek CES2014 Commentary page and our Demartek IDF2014 Commentary page.
- USB Power Delivery — USB is becoming a power delivery interface, with an increasing number of devices charging or receiving power via USB ports in computers or wall sockets and power strips. The USB Power Delivery (PD) Specification, version 1.0, was introduced in July 2012 to allow an increased amount of power to be carried via USB. This specification proposes to raise the limit from 7.5 watts up to 100 watts of power, depending on cable and connector types. Devices negotiate with each other to determine voltage and current levels for the power transmission, and power can flow in either direction. Devices can adjust their power charging rates while transmitting data. Prototypes began to appear in late 2013. USB PD is expected to be included with the USB 3.1 specification (see above).
- USB Type-C Cable — The specification for the new Type-C USB cable and connector was completed in August 2014. This USB cable has an entirely new design with a smaller connector that can be used easily with a variety of devices. The connector and cable are reversible with respect to both plug orientation and cable direction: Type-C USB cables have the same type of connector on each end, so it will not matter which end of the cable is plugged into a computer, hub, charger or other device. Type-C USB cables are electronically marked so that cable information can be passed to the device. Initial cables will be up to 1m in length and will use passive copper technology. Active copper and optical cables could be produced in the future. A close-up view of this cable is available in the Next Generation Storage Networking presentation from the Storage Developer Conference 2014.
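The SATA Express bandwidth figures quoted in the roadmap above (1GB/s for two lanes of PCIe 2.0, 2GB/s for two lanes of PCIe 3.0) follow from the per-lane PCIe transfer rates and encoding efficiencies. A minimal sketch (not from the original document), assuming the standard PCIe rates of 5 GT/s for Gen 2 (8b/10b) and 8 GT/s for Gen 3 (128b/130b):

```python
def pcie_lane_gbytes(gtransfers_per_s, data_bits, total_bits):
    # Usable GB/s per lane: transfer rate x encoding efficiency, divided by 8 bits/byte
    return gtransfers_per_s * data_bits / total_bits / 8

gen2 = pcie_lane_gbytes(5.0, 8, 10)      # 0.5 GB/s per lane
gen3 = pcie_lane_gbytes(8.0, 128, 130)   # ~0.985 GB/s per lane

# Two lanes, the maximum SATA Express allows:
sata_express_gen2 = 2 * gen2             # 1.0 GB/s  (the PCIe 2.0 figure above)
sata_express_gen3 = 2 * gen3             # ~2.0 GB/s (the PCIe 3.0 figure above)
```

The same arithmetic shows why PCIe 3.0 nearly doubles PCIe 2.0 per-lane throughput despite a transfer rate only 1.6x higher: the move from 8b/10b to 128b/130b encoding recovers most of the remaining gap.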
SAS-SATA Connector Compatibility
Source: SCSI Trade Association
Express Bay Connector Backplane
Source: SCSI Trade Association
SATA Express Connector Mating Matrix
Cables: Fiber Optics and Copper
As interface speeds increase, expect increased usage of fiber optic cables and connectors for most interfaces. At higher Gigabit speeds (10Gb+), copper cables and interconnects generally have too much amplitude loss except for short distances, such as within a rack or to a nearby rack. This amplitude loss is sometimes called a poor signal-to-noise ratio or simply “too noisy”.
Single-mode fiber vs. Multi-mode fiber
There are two general types of fiber optic cables available: single-mode fiber and multi-mode fiber.
- Single-mode fiber (SMF), typically with an optical core of approximately 9 µm (microns), has lower modal dispersion than multi-mode fiber and can support distances of 80-100 km (kilometers) or more, depending on transmission speed, transceivers and the buffer credits allocated in the switches.
- Multi-mode fiber (MMF), with optical core of either 50 µm or 62.5 µm, supports distances up to 600 meters, depending on transmission speeds and transceivers.
When planning datacenter cabling requirements, be sure to consider that a service life of 15 to 20 years can be expected for fiber optic cabling, so the choices made today need to support legacy, current and emerging data rates. Also note that deploying large amounts of new cable in a datacenter can be labor-intensive, especially in existing environments.
There are different designations for fiber optic cables depending on the bandwidth supported.
- Multi-mode: OM1, OM2, OM3, OM4
- Single-mode: OS1 (there is a proposed OS2 standard)
OM3 and OM4 are newer multi-mode cables that are “laser optimized” (LOMMF) and support 10 Gigabit Ethernet applications. OM3 and OM4 cables are also the only multi-mode fibers included in the IEEE 802.3ba 40G/100G Ethernet standard that was ratified in June 2010. The 40G and 100G speeds are currently achieved by bundling multiple channels together in parallel with special multi-channel (or multi-lane) connector types. This standard defines an expected operating range of up to 100m for OM3 and up to 150m for OM4 for 40 Gigabit Ethernet and 100 Gigabit Ethernet. These are estimates of distance only and supported distances may differ when 40GbE and 100GbE products become available in the coming years. See the Connector Types section below for additional detail. OM4 cabling is expected to support 32GFC up to 100 meters.
Newer multi-mode OM2, OM3 and OM4 (50 µm) and single-mode OS1 (9 µm) fiber optic cables have been introduced that can handle tight corners and turns. These are known as “bend optimized,” “bend insensitive,” or have “enhanced bend performance.” These fiber optic cables can have a very small turn or bend radius with minimal signal loss or “bending loss.” The term “bend optimized” multi-mode fiber (BOMMF) is sometimes used.
OS1 and OS2 single-mode fiber optics are used for long distances, up to 10,000m (6.2 miles) with the standard transceivers and have been known to work at much longer distances with special transceivers and switching infrastructure.
Each of the multi-mode and single-mode fiber optic cable types is specified at two operating wavelengths; the longer wavelength is used for longer-distance connections.
Update: 24 April 2012 — The Telecommunications Industry Association (TIA) Engineering Committee TR-42 Telecommunications Cabling Systems has approved the publication of TIA-942-A, the revised Telecommunications Infrastructure Standard for Data Centers. A number of changes were made to update the specification with respect to higher transmission speeds, energy efficiency and harmonizing with international standards. For backbone and horizontal cabling and connectors, the following are some of the important updates:
- Copper cabling — Cat 6 is the minimum requirement, Cat 6a recommended
- Fiber optic cabling — OM3 is the minimum requirement, OM4 is recommended
- Fiber optic connectors — LC is the standard for one or two fiber connectors
10Gb Ethernet Fiber-Optic Cables
- 10GBASE-SR — Currently, the most common type of fiber-optic 10GbE cable is the 10GBASE-SR cable that supports an SFP+ connector with an optical transceiver rated for 10Gb transmission speed. These are also known as “short reach” fiber-optic cables.
- 10GBASE-LR — These are the “long reach” fiber optic cables that support single-mode fiber optic cables and connectors.
Indoor vs. Outdoor cabling
Indoor fiber-optic cables are suitable for indoor building applications. Outdoor cables, also known as outside plant or OSP, are suitable for outdoor applications and are water (liquid and frozen) and ultra-violet resistant. Indoor/outdoor cables provide the protections of outdoor cables with a fire-retardant jacket that allows deployment of these cables inside the building entrance beyond the OSP maximum distance, which can reduce the number of transition splices and connections needed.
| Speed | OM1 | OM2 | OM3 | OM4 |
|---|---|---|---|---|
| 10 Gb/s | 33m | 82m | Up to 300m | Up to 400m ¹ |
| 16 Gb/s | 15m ¹ | 35m ¹ | 100m ¹ | 125m ¹ |

¹ OM1 cable is not recommended for 16Gb/s FC, but is expected to operate up to 15m.
Distances supported in actual configurations are generally less than the distance supported by the raw fiber optic cable. The distances shown above are for 850 nm wavelength multi-mode cables. The 1300 nm wavelength multi-mode cables can support longer distances.
Active Copper vs. Passive Copper
Passive copper connections are common with many interfaces. The industry is finding that as the transfer rates increase, passive copper does not provide the distance needed and takes up too much physical space. The industry is moving towards an active copper type of interface for higher speed connections, such as 6Gb/s SAS. Active copper connections include components that boost the signal, reduce the noise and work with smaller-gauge cables, improving signal distance, cable flexibility and airflow. These active copper components are expected to be less expensive and consume less electric power than the equivalent components used with fiber optic cables.
Copper: 10GBASE-T and 1000BASE-T
1000BASE-T cabling is commonly used for 1Gb Ethernet traffic in general, and 1Gb iSCSI for storage connections. This is the familiar four pair copper cable with the RJ45 connectors. Cables used for 1000BASE-T are known as Cat5e (Category 5 enhanced) or Cat6 (Category 6) cables.
10GBASE-T cabling supports 10Gb Ethernet traffic, including 10Gb iSCSI storage traffic. The cables and connectors are similar to, but not the same as the cables used for 1000BASE-T. 10GBASE-T cables are Cat6a (Category 6 augmented), also known as Class EA cables. These support the higher frequencies required for 10Gb transmission up to 100 meters (330 feet). Cables must be certified to at least 500MHz to ensure 10GBASE-T compliance. Cat7 (Category 7, Class F) cable is also certified for 10GBASE-T compliance, and is typically deployed in Europe. Cat6 cables may work in 10GBASE-T deployments up to 55m, but should be tested first. 10GBASE-T cabling is not expected to be deployed for FCoE applications in the near future. Some newer 10GbE switches support 10GBASE-T (RJ45) connectors.
10GBASE-CR — Currently, the most common type of copper 10GbE cable is the 10GBASE-CR cable that uses an attached SFP+ connector, also known as a Direct Attach Copper (DAC). This fits into the same form factor connector and housing as the fiber optic cables with SFP+ connectors. Many 10GbE switches accept cables with SFP+ connectors, which support both copper and fiber optic cables. These cables are available in 1m, 3m, 5m, 7m, 8.5m and longer distances. The most commonly deployed distances are 3m and 5m.
10GBASE-CX4 — These cables are older and not very common. This type of cable and connector is similar to cables used for InfiniBand technology.
USB Type-C Cables
Information on the new USB Type-C cables is located in the Roadmaps section above.
Several types of connectors are available with cables used for storage interfaces. This is not an exhaustive list but is intended to show the more common types. Each of the connector types includes the number of lanes (or channels) and the rated speed.
As of early 2011, the fastest generally available connector speeds supported were 10 Gbps per lane. Significantly higher speeds are currently achieved by bundling multiple lanes in parallel, such as 4x10 (40 Gbps), 10x10 (100 Gbps), 12x10 (120 Gbps), etc. Most of the current implementations of 40GbE and 100GbE use multiple lanes of 10GbE and are considered “channel bonded” solutions.
14 Gbps per lane connectors appeared in the last half of 2011. These connectors support 16Gb Fibre Channel (single-lane) and 56Gb (FDR) InfiniBand (multi-lane).
25 Gbps per lane connectors may become available in 2012 or 2013 as prototypes. When 25 Gbps per lane connectors are available, then higher speeds, such as 100 Gbps can be achieved by bundling four of these lanes together. Other variations of bundling multiple lanes of 25 Gbps may be possible, such as 10x25 (250 Gbps), 12x25 (300 Gbps) or 16x25 (400 Gbps). It is expected that the 25 Gbps (actually 28 Gbps) connectors will support 32Gb Fibre Channel in single-lane configurations and higher speeds for Ethernet and InfiniBand in multi-lane configurations.
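The multi-lane bundling described above is simple multiplication of lane count by per-lane rate. A quick sketch (illustrative only, using raw per-lane line rates) reproduces the aggregate figures quoted in this section:

```python
def aggregate_gbps(lanes, per_lane_gbps):
    """Raw aggregate line rate for a bundled (channel-bonded) multi-lane connection."""
    return lanes * per_lane_gbps

configs = {
    "40GbE (4x10)":   aggregate_gbps(4, 10),    # 40 Gbps
    "100GbE (10x10)": aggregate_gbps(10, 10),   # 100 Gbps
    "100GbE (4x25)":  aggregate_gbps(4, 25),    # 100 Gbps
    "12x25":          aggregate_gbps(12, 25),   # 300 Gbps
    "16x25":          aggregate_gbps(16, 25),   # 400 Gbps
}
```

Note that 4x25 reaches the same 100 Gbps as 10x10 with fewer lanes, which is why the industry is moving to 25 Gbps lanes for higher-speed connectors.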
In calendar Q1 2012, several fiber-optic connector manufacturers demonstrated working prototypes of the “25/28G” connectors. These connectors support speeds up to 28 Gbps per lane and will be used for 100 Gbps Ethernet (100GbE) in a 4x25 configuration. These connector technologies will also be used for other high-speed applications such as the next higher speeds of Fibre Channel (32GFC) and InfiniBand. End-user products with these higher speed technologies were originally estimated to become available in 2013 or 2014, but more work remains before 25Gb products become generally available.
Two of the popular fiber-optic cable connectors are SFP+ and QSFP+ (see diagrams below). SFP+ is used for single-lane high-speed connections, and QSFP+ is used for four-lane high-speed connections. Many in the industry use the four-lane (“quad”) interface to provide increased bandwidth. Currently, the single-lane SFP+ is used for 10Gb Ethernet and 8Gb and 16Gb Fibre Channel. The four-lane QSFP+ is used for 40Gb Ethernet and 40Gb (QDR) and 56Gb (FDR) InfiniBand. The Fibre Channel technical committee is now officially discussing a single-lane and four-lane (“quad”) solution with the 32GFC technology (4x32) for a 128 Gb/sec connection. See the Roadmaps section above.
| Technology | SFP | SFP+ | QSFP+ |
|---|---|---|---|
| Fibre Channel | 1GFC, 2GFC, 4GFC | 8GFC, 16GFC | — |
See the encoding schemes described above for additional detail on the speeds available with various connector and cable combinations.
PCIe data rates and connector types are provided in the PCI Express section.
InfiniBand Data Rates

| Name | Per-lane Signaling Rate | 4x Link Signaling Rate |
|---|---|---|
| SDR | 2.5 Gbps | 10 Gbps |
| DDR | 5 Gbps | 20 Gbps |
| QDR | 10 Gbps | 40 Gbps |
| FDR | 14 Gbps | 56 Gbps |
| EDR | 25 Gbps | 100 Gbps |

SDR: Single Data Rate, DDR: Double Data Rate, QDR: Quad Data Rate, FDR: Fourteen Data Rate, EDR: Enhanced Data Rate
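The InfiniBand rates named above can be reduced to the same lanes-times-encoding arithmetic. This sketch assumes the standard published per-lane signaling rates (2.5/5/10 Gbps with 8b/10b for SDR/DDR/QDR; 14.0625 and 25.78125 Gbps with 64b/66b for FDR and EDR), which are facts about the InfiniBand roadmap rather than values stated in the text.

```python
# Sketch: usable throughput of an InfiniBand link per data-rate generation.
IB_LANES = {
    # name: (signaling Gbps per lane, payload bits, total bits)
    "SDR": (2.5, 8, 10),       # 8b/10b encoding
    "DDR": (5.0, 8, 10),
    "QDR": (10.0, 8, 10),
    "FDR": (14.0625, 64, 66),  # 64b/66b encoding
    "EDR": (25.78125, 64, 66),
}

def data_rate(name, lanes=4):
    """Usable throughput in Gbps for an n-lane (typically 4x) link."""
    signal, payload, total = IB_LANES[name]
    return lanes * signal * payload / total

for name in IB_LANES:
    print(f"4x {name}: {data_rate(name):.1f} Gbps usable")
```

The marketed speeds (40Gb QDR, 56Gb FDR) refer to the 4x signaling rate; usable throughput is lower, e.g. 4x QDR carries 32 Gbps of data after 8b/10b overhead.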
| Connector | Abbreviation |
|---|---|
| Mini SAS HD | SAS HD |
| Small Form-factor Pluggable | SFP, SFP+ |
| Quad Small Form-factor Pluggable | QSFP, QSFP+ |
PCIe connector types are provided in the PCI Express section.
In the second half of 2010, a new variant of the SFP/SFP+ connector was introduced to accommodate the Fibre Channel backbone with 64-port blades and the planned increased density Ethernet core switches. This new connector, known as mSFP, mini-SFP or mini-LC SFP, narrows the optical centerline of a conventional SFP/SFP+ connector from 6.25 mm to 5.25 mm. Although this connector looks very much like a standard SFP style connector, it is narrower and is required for the higher-density devices. The photo at the right shows the difference between mini-SFP and the standard size.
CXP and CFP
The CXP (copper) and CFP (optical) connectors are expected to be used initially for switch-to-switch connections. These are expected for Ethernet and may also be used for InfiniBand. CFP connectors currently support 10 lanes of 10 Gbps connections (10x10) and consume approximately 35-40 watts. CFP2 is a smaller, single-board version of CFP that also supports 10x10 but uses less power than CFP. During 2013, much of the development activity focused on CFP2. A future CFP4 connector, in the planning stages, is expected to use the 25/28G connectors and support 4x25. CFP4 is expected to handle long-range fiber-optic distances.
Mini SAS and Mini SAS HD
The Mini SAS connector is the familiar 4-lane connector found on most SAS cables today. The Mini SAS HD connector provides twice the density of the Mini SAS connector and is available in 4-lane and 8-lane configurations. The same Mini SAS HD connector is used for passive copper, active copper and optical SAS cables. The diagrams below compare these two types of SAS connectors.
Source: SCSI Trade Association
PCI Express (PCIe)
PCI Express®, also known as PCIe®, stands for Peripheral Component Interconnect Express and is the computer industry standard I/O bus for computers introduced in recent years. The first version of the PCIe specification, 1.0a, was introduced in 2003. Version 2.0 was introduced in 2007 and version 3.0 was introduced in 2010. These versions are often identified by their generation (“gen 1”, “gen 2”, etc.). It can take a year or two from the time a specification version is introduced until computer systems and devices using it become generally available. The PCIe specifications are developed and maintained by the PCI-SIG (PCI Special Interest Group). PCI Express and PCIe are registered trademarks of the PCI-SIG.
Data rates for different versions of PCIe are shown in the table below. PCIe data rates are expressed in Gigatransfers per second (GT/s) and are a function of the number of lanes in the connection. The number of lanes is expressed with an “x” before the number of lanes, and is often spoken as “by 1”, “by 4”, etc. PCIe supports full-duplex (traffic in both directions). The data rates shown below are in each direction. Note the explanation of encoding schemes described above.
| PCIe Version | Data Rate (GT/s) | Encoding | x1 | x2 | x4 | x8 | x16 |
|---|---|---|---|---|---|---|---|
| PCIe 1.x | 2.5 | 8b/10b | 250 MB/s | 500 MB/s | 1 GB/s | 2 GB/s | 4 GB/s |
| PCIe 2.x | 5 | 8b/10b | 500 MB/s | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s |
| PCIe 3.x | 8 | 128b/130b | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 16 GB/s |
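The per-direction figures in the table follow directly from the GT/s rate, the encoding efficiency, and the lane count. A minimal sketch of that calculation, assuming only the rates and encodings already given in the text:

```python
# Sketch: per-direction PCIe throughput in GB/s.
# GT/s per lane x encoding efficiency / 8 bits-per-byte, scaled by lanes.
PCIE = {
    # generation: (GT/s per lane, payload bits, total bits)
    "1.x": (2.5, 8, 10),     # 8b/10b encoding
    "2.x": (5.0, 8, 10),
    "3.x": (8.0, 128, 130),  # 128b/130b encoding
}

def throughput_gbytes(gen, lanes):
    """Per-direction throughput in GB/s for a given link width."""
    gts, payload, total = PCIE[gen]
    return lanes * gts * payload / total / 8

for gen in PCIE:
    print(gen, [round(throughput_gbytes(gen, n), 2) for n in (1, 2, 4, 8, 16)])
```

Note that the PCIe 3.x figures are commonly rounded: an x1 link actually delivers about 0.985 GB/s (8 GT/s x 128/130 / 8), which tables round to 1 GB/s.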
Efforts are underway to enable SATA and SAS to be carried over PCIe connections. See the roadmaps section above.
Mini-PCIe — PCI Express cards are also available in a mini PCIe form factor. This is a special form factor for PCIe that is approximately 30mm x 51mm or 30mm x 26.5mm, designed for laptop and notebook computers, and equivalent to a single-lane (x1) PCIe slot. A variety of devices including WiFi modules, WAN modules, video/audio decoders, SSDs and other devices are available in this form factor.
M.2 — M.2 is the next generation PCIe connector for ultra-thin tablets and other mobile platforms. Its multiple socket definitions support WWAN, SSD and other applications. M.2 can support PCIe protocol or SATA protocol, but not both at the same time on the same device. M.2 supports a variety of board width and length options. M.2 is available in single-sided modules that can be soldered down, or single-sided and dual-sided modules used with a connector.
SFF-8639 — SFF-8639 is the I/O backplane connector designed for high-density SSD storage devices and is backward compatible with existing storage interfaces. SFF-8639 supports PCIe/NVMe, SAS and SATA devices and enables hot plug and hot swap of devices while the system is running. Revision 0.7 of the SFF-8639 specification was released in March 2014, with Revision 1.0 expected by year-end 2014. The SFF-8639 connector is expected to meet similar electrical requirements as a standard PCIe CEM connector.
M-PCIe™ — M-PCIe is the specification that maps PCIe over the MIPI® Alliance M-PHY® technology used in low-power mobile and handheld devices. M-PCIe is optimized for RFI/EMI requirements and supports M-PHY gears 1, 2 and 3 and will be extended to support gear 4.
PCIe 2.0 — Servers that have PCIe 2.0 x8 slots can support two ports of 10GbE or two ports of 16GFC on one adapter.
PCIe 3.0 — On 6 March 2012, the major server vendors announced their next generation servers that support PCIe 3.0, which, among other things, doubles the I/O throughput rate from the previous generation. These servers also provide up to 40 PCIe 3.0 lanes per processor socket, which is also at least double from the previous server generation. Workstation and desktop computer motherboards that support PCIe 3.0 first appeared in late 2011. PCIe 3.0 graphics cards appeared in late 2011. Other types of adapters supporting PCIe 3.0 were announced in 2012 and 2013. The PCIe 3.0 specification was completed in November 2010.
PCIe 3.1 — The PCIe 3.1 specification was released in October 2014. It incorporates M-PCIe and consolidates numerous protocol extensions and functionality for ease of access.
PCIe 4.0 — In November 2011, PCI-SIG announced the approval of 16 gigatransfers per second (GT/s) as the bit rate for the next generation of PCIe architecture, known as PCIe 4.0. After technical analysis, it was determined that 16 GT/s could be manufactured and deployed with known technologies, while maintaining backward compatibility with previous generations of PCIe architecture such as PCIe 1.x, 2.x and 3.x. Revision 0.5 of the PCIe 4.0 specification is expected by year-end 2014, while Revision 0.9 is expected to be available in 1H 2016. It may take up to a year or more for products that support PCIe 4.0 technology to become generally available after the final PCIe 4.0 specification is complete.
OCuLINK — OCuLINK is intended to be a low-cost, small cable form factor for PCIe internal and external devices, offering bit rates starting at 8Gbps, with headroom to scale, and new independent cable clock integration. OCuLINK supports x1, x2 and x4 lanes of PCIe 3.0 connectivity. OCuLINK supports passive copper cables capable of reaching 2 to 3 meters, as well as active copper and optical cables. Active copper cables can reach 3 to 10 meters, while active optical cables can reach up to 300 meters in length. The OCuLINK specification will be completed in the first calendar quarter of 2015, with products expected shortly thereafter.
I/O Virtualization — In 2008, the PCI-SIG announced the completion of its I/O Virtualization (IOV) suite of specifications including single-root IOV (SR-IOV) and multi-root IOV (MR-IOV). These technologies can work with system virtualization technologies and can allow multiple operating systems to natively share PCIe devices. SR-IOV is currently supported with several 10GbE NICs and hypervisors. See the recent Demartek I/O Virtualization presentation for additional detail.
The concept of sharing PCIe devices or providing access to PCIe devices that may be physically larger than some smaller form-factor systems can accommodate has led to the development of external connections to some PCIe devices. Cables have been developed for extending the PCIe bus outside of the chassis holding the PCIe slots. These cables are specified by indicating the number of PCIe lanes (x4, x8, etc.) supported. Cables are typically available for x4, x8 and x16 lane configurations. Common cable lengths are 1m and 3m. The photo below shows some PCIe cables and connectors. PCIe can also be carried over fiber-optic cables for longer distances. In the future we will begin to see the shift from PCIe cables and connectors to OCuLINK cables and connectors, also shown in the image below.
The original version of this page is available at www.demartek.com/Demartek_Interface_Comparison.html on the Demartek website.