Lenovo ThinkSystem SR650 V2 Server
Product Guide
The Lenovo ThinkSystem SR650 V2 is an ideal 2-socket 2U rack server for small businesses up to large
enterprises that need industry-leading reliability, management, and security, as well as maximum
performance and flexibility for future growth. The SR650 V2 is based on the new 3rd generation Intel Xeon
Scalable processor family (formerly codenamed "Ice Lake") and the new Intel Optane Persistent Memory
200 Series.
The SR650 V2 is designed to handle a wide range of workloads, such as databases, virtualization and cloud
computing, virtual desktop infrastructure (VDI), infrastructure security, systems management, enterprise
applications, collaboration/email, streaming media, web, and HPC.
Figure 1. Lenovo ThinkSystem SR650 V2 with 2.5-inch front drive bays (3.5-inch drive configurations also
available)
Feature | SR650 | SR650 V2 | Benefit
Networking | Selectable LOM, 1GbE or 10GbE; optional ML2 and PCIe adapters; 1GbE dedicated management port | Selectable OCP 3.0, 1GbE, 10GbE or 25GbE; optional PCIe adapters; 1GbE dedicated management port | Improved performance & flexibility; OCP slot supports 25GbE
PCIe | Up to 6x PCIe 3.0 slots; 1x dedicated RAID slot | Up to 8x PCIe 4.0 slots; 1x internal bay for cabled RAID/HBA | New PCIe 4.0 support
GPU support | Up to 5x NVIDIA T4 GPUs; up to 2x double-wide 300W GPUs | Up to 8x NVIDIA T4 GPUs; up to 3x double-wide GPUs | More GPUs means more processing power per 2U server
Figure 3. Rear view of the ThinkSystem SR650 V2 (configuration with eight PCIe slots)
The following figure shows the locations of key components inside the server.
System architecture
The following figure shows the architectural block diagram of the SR650 V2, showing the major components
and their connections.
Standard specifications
The following table lists the standard specifications.
The server also supports these drives for OS boot or drive storage:
Two 7mm drives at the rear of the server (in addition to any 2.5-inch or 3.5-inch drive bays)
Internal M.2 module supporting up to two M.2 drives
Storage controller:
12x onboard SATA ports (Intel VROC SATA RAID, formerly known as Intel RSTe RAID)
Up to 12x onboard NVMe ports (Intel VROC NVMe RAID; optional license required for non-Intel NVMe SSDs)
NVMe Retimer Adapter (supports Intel VROC NVMe RAID)
12 Gb SAS/SATA RAID adapters:
RAID 530-8i (cacheless) supports RAID 0, 1, 10, 5, 50
RAID 530-16i (cacheless) supports RAID 0, 1, 10
RAID 930-8i with 2GB flash-backed cache supports RAID 0, 1, 10, 5, 50, 6, 60
RAID 930-16i with 4GB flash-backed cache supports RAID 0, 1, 10, 5, 50, 6, 60
RAID 940-8i with 4GB or 8GB flash-backed cache supports RAID 0, 1, 10, 5, 50, 6, 60
RAID 940-16i with 8GB flash-backed cache supports RAID 0, 1, 10, 5, 50, 6, 60
RAID 940-32i with 8GB flash-backed cache supports RAID 0, 1, 10, 5, 50, 6, 60
12 Gb SAS/SATA non-RAID: 430-8i, 430-16i and 440-16i HBAs
Slots are configured using three riser cards. Riser 1 (slots 1-3) and Riser 2 (slots 4-6) install into
sockets on the system board; Riser 3 (slots 7-8) is cabled to ports on the system board.
A variety of riser cards are available. See the I/O expansion section for details.
For 2.5-inch front drive configurations, the server supports the installation of a RAID adapter or HBA
in a dedicated area that does not consume any of the PCIe slots.
Rear: 3x USB 3.1 G1 (5 Gb/s) ports, 1x VGA video port, 1x RJ-45 1GbE systems management port
for XCC remote management. Optional DB-9 COM serial port (installs in slot 3).
Internal: 1x USB 3.1 G1 connector for operating system or license key purposes
Cooling: 6x (with two processors installed) or 5x (with one processor installed) single-rotor or dual-rotor hot-swap 60 mm fans, configuration dependent. Fans are N+1 redundant, tolerating a single rotor failure. One fan is integrated in each power supply.
Power supply: Up to two hot-swap redundant AC power supplies, 80 PLUS Platinum or 80 PLUS Titanium certification. 500 W, 750 W, 1100 W and 1800 W AC options, supporting 220 V AC. The 500 W, 750 W and 1100 W options also support 110 V input. In China only, all power supply options support 240 V DC. Also available is a 1100 W power supply with a -48 V DC input.
Video: G200 graphics with 16 MB memory and a 2D hardware accelerator, integrated into the XClarity Controller. Maximum resolution is 1920x1200 at 32 bpp at 60 Hz.
Hot-swap parts: Drives, power supplies, and fans.
Systems management: Operator panel with status LEDs. Optional External Diagnostics Handset with LCD display. Models with 8x or 16x 2.5-inch front drive bays can optionally support an Integrated Diagnostics Panel. XClarity Controller (XCC) embedded management, XClarity Administrator centralized infrastructure delivery, XClarity Integrator plugins, and XClarity Energy Manager centralized server power management. Optional XClarity Controller Advanced and Enterprise to enable remote control functions.
Security features: Chassis intrusion switch, power-on password, administrator's password, Trusted Platform Module (TPM) supporting TPM 2.0. In China only, optional Nationz TPM 2.0. Optional lockable front security bezel.
Operating systems supported: Microsoft Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, VMware ESXi. See the Operating system support section for specifics.
Limited warranty: Three-year or one-year (model dependent) customer-replaceable unit and onsite limited warranty with 9x5 next business day (NBD) response.
Service and support: Optional service upgrades are available through Lenovo Services: 4-hour or 2-hour response time, 6-hour fix time, 1-year or 2-year warranty extension, software support for Lenovo hardware and some third-party applications.
Dimensions: Width: 445 mm (17.5 in.), height: 87 mm (3.4 in.), depth: 764 mm (30.1 in.). See the Physical and electrical specifications section for details.
Weight: Maximum: 38.8 kg (85.5 lb)
Models of the SR650 V2 are defined based on whether the server has 2.5-inch drive bays at the front (called
the 2.5-inch chassis) or whether it has 3.5-inch drive bays at the front (called the 3.5-inch chassis). For
models, the feature codes for these chassis bases are as listed in the following table.
AP models: Customers in Australia and New Zealand also have access to the Asia Pacific region
models.
AP models: Customers in Japan also have access to the Asia Pacific region models.
Table 10. Models with a 3-year warranty for Latin American countries (except Brazil)
Model | Processor† | Memory | RAID | Drive bays | OCP | Slots | Power supply | VGA | XCC | Fans

TopSeller models with a 3-year warranty (machine type 7Z73):
7Z73A05PLA | 1x Silver 4309Y 8C 105W 2.8GHz | 1x 16GB | 530-8i | 8x 3.5" SAS, open bay | 4x 1Gb | 3 (x16, x8, x8) Gen4 | 1x 750W | Yes | Std | 5x Std
7Z73A05ULA | 1x Silver 4310 12C 120W 2.1GHz | 1x 16GB | 530-8i | 8x 3.5" SAS, open bay | 4x 1Gb | 3 (x16, x8, x8) Gen4 | 1x 750W | Yes | Std | 5x Std
7Z73A05MLA | 1x Silver 4314 16C 135W 2.4GHz | 1x 32GB 2Rx4 | 930-8i | 8x 2.5" SAS, open bay | 4x 1Gb | 3 (x16, x8, x8) Gen4 | 1x 750W | Yes | Std | 5x Std
7Z73A05SLA | 1x Silver 4316 20C 150W 2.3GHz | 1x 32GB 2Rx4 | 930-8i | 8x 2.5" SAS, open bay | 2x 10GbT | 3 (x16, x8, x8) Gen4 | 2x 750W | Yes | Ent | 5x Std
7Z73A05NLA | 1x Gold 5318Y 24C 165W 2.1GHz | 1x 32GB 2Rx4 | 930-8i | 8x 2.5" SAS, open bay | 2x 10GbT | 3 (x16, x8, x8) Gen4 | 2x 750W | Yes | Ent | 5x Std
† Processor description: Processor model, number of cores, thermal design power (TDP), core frequency
Processor options
The table below lists the processors that are supported.
Supported processors have the following features:
Third-generation Intel Xeon Scalable processors (formerly codenamed "Ice Lake")
10 nm process technology
8x DDR4 memory channels
64x PCIe 4.0 I/O lanes available for PCIe and NVMe devices
1.25 MB L2 cache per core
1.5 MB or more L3 cache per core
Intel Deep Learning Boost, which provides built-in Artificial Intelligence (AI) acceleration with the
Vector Neural Network Instructions (VNNI). DL Boost and VNNI are designed to deliver significantly
more efficient deep learning inference acceleration for high-performance AI workloads.
Intel Hyper-Threading Technology, which boosts performance for multithreaded applications by
enabling simultaneous multithreading within each processor core, up to two threads per core.
Memory tiers: All processors support up to 6TB of memory. There are no L or M suffix processors.
Option part numbers only for second processor: The option part numbers listed in the table are only
for use when adding a second processor. Upgrading or replacing a processor that is already installed is not supported.
Processor features
The following table compares the features of the supported third-generation Intel Xeon processors.
Abbreviations used in the table:
TB: Turbo Boost 2.0
UPI: Ultra Path Interconnect
TDP: Thermal Design Power
SGX: Software Guard Extensions
PMem: Persistent Memory support
One-processor configurations
The SR650 V2 can be used with only one processor installed. Most core functions of the server (including
the XClarity Controller) are connected to processor 1 as shown in the System architecture section.
Memory options
The SR650 V2 uses Lenovo TruDDR4 memory and supports 16 DIMMs per processor or 32 DIMMs with two
processors installed. Each processor has eight memory channels with two DIMMs per channel. With 128
GB 3DS RDIMMs installed, the SR650 V2 supports a total of 4 TB of system memory.
The SR650 V2 also supports Intel Optane Persistent Memory 200 Series, as described in the Persistent
Memory section.
Memory operates at up to 3200 MHz at two DIMMs per channel, depending on the processor model
selected. If the selected processor has a lower memory bus speed, all DIMMs operate at that lower
speed.
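As a quick sanity check, the 4 TB maximum follows directly from the DIMM topology described above. This is an illustrative sketch; the 128 GB 3DS RDIMM is the largest module referenced in this section:

```python
# Maximum system memory for the SR650 V2, from the topology above:
# 2 processors x 8 channels per processor x 2 DIMMs per channel.
PROCESSORS = 2
CHANNELS_PER_PROCESSOR = 8
DIMMS_PER_CHANNEL = 2
LARGEST_RDIMM_GB = 128  # 128 GB 3DS RDIMM

total_dimms = PROCESSORS * CHANNELS_PER_PROCESSOR * DIMMS_PER_CHANNEL
total_gb = total_dimms * LARGEST_RDIMM_GB
print(f"{total_dimms} DIMMs x {LARGEST_RDIMM_GB} GB = {total_gb} GB ({total_gb // 1024} TB)")
# 32 DIMMs x 128 GB = 4096 GB (4 TB)
```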
The following table lists the memory options that are available for the server.
Lenovo TruDDR4 memory uses the highest quality components that are sourced from Tier 1 DRAM
suppliers and only memory that meets the strict requirements of Lenovo is selected. It is compatibility tested
and tuned to maximize performance and reliability. From a service and support standpoint, Lenovo TruDDR4
memory automatically assumes the system warranty, and Lenovo provides service and support worldwide.
Persistent memory
The SR650 V2 server supports Intel Optane Persistent Memory 200 Series, a new class of memory and
storage technology explicitly architected for data center usage. Persistent memory is an innovative
technology that delivers a unique combination of affordable large memory capacity and persistence (non-
volatility). It offers significantly lower latency than fetching data from SSDs, even NVMe SSDs, and offers
higher capacities than system memory.
Persistent memory technology can help boost the performance of data-intensive applications such as in-
memory analytics, databases, content delivery networks, and high performance computing (HPC), as well as
deliver consistent service levels at scale with higher virtual machine and container density. When data is
stored closer to the processor on nonvolatile media, applications can see significant overall improvement in
performance.
The following table lists the ordering information for the supported persistent memory modules.
The following are the requirements for installing persistent memory (PMem) modules in a
two-socket server with third-generation Intel Xeon Scalable processors ("Ice Lake" processors):
App Direct Mode and Memory Mode are supported. Mixed Mode is not supported.
All PMem modules operate at 3200 MHz when the installed processor runs the memory bus at 3200
MHz.
All installed PMem modules must be the same size. Mixing PMem modules of different capacities is
not supported.
Maximum 8 PMem modules per processor (install 1 in each memory channel).
For each memory channel with both a PMem module and a memory DIMM installed, the PMem
module is installed in channel slot 1 (DIMM1, closer to the processor) and the DIMM is installed in
channel slot 0 (DIMM0).
To maximize performance, balance all memory channels.
Both interleaved and non-interleaved modes are supported.
Memory mirroring is not supported with PMem modules installed.
For details, including App Direct Mode and Memory Mode configuration requirements, see the Intel Optane
Persistent Memory 200 Series product guide: https://lenovopress.com/LP1380
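The population rules above can be expressed as a simple validation routine. This is an illustrative sketch, not a Lenovo tool; the function name and rule encoding are this example's own, and it models a single processor:

```python
def check_pmem_rules(module_sizes_gb, modules_per_channel, mode):
    """Check a PMem layout (one processor) against the rules listed above.

    module_sizes_gb:     capacities of all installed PMem modules
    modules_per_channel: PMem module count in each memory channel
    mode:                requested operating mode
    Returns a list of rule violations; an empty list means the layout is valid.
    """
    issues = []
    if mode not in ("App Direct", "Memory"):   # Mixed Mode is not supported
        issues.append(f"mode {mode!r} not supported (Mixed Mode is not supported)")
    if len(set(module_sizes_gb)) > 1:          # no mixed capacities
        issues.append("all PMem modules must be the same size")
    if len(module_sizes_gb) > 8:               # maximum 8 per processor
        issues.append("maximum 8 PMem modules per processor")
    if any(count > 1 for count in modules_per_channel):  # 1 per channel
        issues.append("install at most 1 PMem module per memory channel")
    return issues

# A valid layout: 8x 128 GB modules, one per channel, App Direct Mode
print(check_pmem_rules([128] * 8, [1] * 8, "App Direct"))  # []
# Mixed capacities: flagged as a violation
print(check_pmem_rules([128, 256], [1, 1], "Memory"))
```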
The specifics of these configurations are covered in the Supported drive bay combinations and Controller
selections sections.
The tables in those sections indicate the number of NVMe drives in each configuration plus the subscription
ratio. The subscription ratio compares the number of PCIe lanes from the processor to the number of
lanes to the drives. A ratio of 1:1 means every drive gets the full number of lanes it needs to maximize drive
performance (currently 4 lanes per drive). A ratio of 1:2 means each drive gets only half the bandwidth
from the processor. NVMe drives connected to a RAID adapter with Tri-Mode support have an effective
ratio of 1:4, since they have only a 1-lane connection to the RAID adapter.
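The subscription ratio described above reduces to the host PCIe lanes divided by the lanes the drives collectively want. A minimal sketch (the helper name is this example's own):

```python
from fractions import Fraction

LANES_PER_DRIVE = 4  # a full-bandwidth NVMe connection, per the text above

def subscription_ratio(num_drives, host_lanes):
    """Reduce host lanes vs. lanes the drives want to an m:n string."""
    r = Fraction(host_lanes, num_drives * LANES_PER_DRIVE)
    return f"{r.numerator}:{r.denominator}"

print(subscription_ratio(8, 32))  # 1:1 - every drive gets its full x4 link
print(subscription_ratio(8, 16))  # 1:2 - each drive sees half the bandwidth
print(subscription_ratio(4, 4))   # 1:4 - x1 links, as with Tri-Mode
```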
In addition, the SR650 V2 supports two 7mm NVMe drives for use as boot drives. These two drives are
connected via a separate RAID controller with a single PCIe 3.0 x2 host interface. See the 7mm
drives section for details.
Tri-Mode support
The RAID 940-8i and RAID 940-16i adapters also support NVMe through a feature named Tri-Mode support
(or Trimode support). This feature enables the use of NVMe U.3 drives at the same time as SAS and SATA
drives. Cabling of the controller to the backplanes is the same as with SAS/SATA drives, and the NVMe
drives are connected via a PCIe x1 link to the controller.
NVMe drives connected using Tri-Mode support provide better performance than SAS or SATA drives: a
SATA SSD has a data rate of 6 Gbps and a SAS SSD a data rate of 12 Gbps, whereas an NVMe U.3 Gen 4
SSD with a PCIe x1 link has a data rate of 16 Gbps. NVMe drives typically also have lower latency and
higher IOPS than SAS and SATA drives. Tri-Mode is supported with U.3 NVMe drives in either 2.5-inch
or 3.5-inch form factor and requires an AnyBay backplane.
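The data rates quoted above are line rates; usable bandwidth is lower once line encoding is accounted for. A sketch of the comparison, assuming the standard encodings (8b/10b for SATA and SAS, 128b/130b for PCIe 4.0 at 16 GT/s per lane):

```python
def effective_mb_per_s(line_rate_gbps, payload_bits, total_bits):
    """Usable bandwidth in MB/s after line-encoding overhead."""
    return line_rate_gbps * 1000 / 8 * payload_bits / total_bits

# Encoding: SATA/SAS use 8b/10b; PCIe 4.0 uses 128b/130b.
links = [
    ("SATA SSD, 6 Gbps",             6,   8,  10),
    ("SAS SSD, 12 Gbps",            12,   8,  10),
    ("NVMe U.3 Gen4 x1 (Tri-Mode)", 16, 128, 130),
]
for name, rate, payload, total in links:
    print(f"{name:30s} ~{effective_mb_per_s(rate, payload, total):5.0f} MB/s")
# SATA SSD, 6 Gbps               ~  600 MB/s
# SAS SSD, 12 Gbps               ~ 1200 MB/s
# NVMe U.3 Gen4 x1 (Tri-Mode)    ~ 1969 MB/s
```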
Tri-Mode requires U.3 drives: Only NVMe drives with a U.3 interface are supported. U.2 drives are not
supported. See the Internal drive options section for the U.3 drives supported by the server.
Tip: Configurations with 8x or 16x total drive bays can be configured with or without an Integrated
Diagnostics Panel with a pull-out LCD display. With the Integrated Diagnostics Panel, 8-bay
configurations can be upgraded to 16 bays; however, 16-bay configurations cannot be upgraded to 24 bays.
See the Local management section for details.
Field upgrades: All front backplanes are available as part numbers for field upgrades, along with
required cable option kits, as described in the Field upgrades section below.
The use of front drive bays has the following configuration rules:
If 3.5-inch front drive bays are used, an internal RAID adapter or HBA is not supported because the
adapter and the bays occupy the same physical space.
Any 8x 2.5-inch and 16x 2.5-inch drive configuration (SAS/SATA, AnyBay, NVMe) can optionally be
configured for use with the Integrated Diagnostics Panel. 3.5-inch drive configurations do not support
the Integrated Diagnostics Panel.
M.2 support: When mid drive bays are configured, the M.2 adapter is installed on the mid drive bay
assembly, as shown in the images.
Field upgrades: Backplanes are available as part numbers for field upgrades, along with the required
cable option kits, as described in the Field upgrades section below.
The use of drive bays in the mid-chassis area has the following configuration rules:
All processors are supported. Higher TDP processors will require the performance heatsinks.
Full-length adapter cards are not supported
GPUs (including low profile GPUs such as the T4) are not supported
Riser 3: Rear drive bays and Riser 3 are not supported together, since they occupy the same physical
space.
The use of rear drive bays has the following configuration rules:
Riser 3 is not supported since the rear drive bays occupy the space of this riser.
The use of rear drive bays restricts the number of slots and the choice of risers that are supported.
See the I/O expansion section for details.
The use of rear drive bays requires that Riser 1 be installed, since power for the rear backplane comes
from Riser 1.
The 7mm rear drive kit is supported installed in either slot 3 or slot 6 but not both at the same time.
The 7mm drive enclosure is connected to an onboard port and cannot be connected to any installed
RAID adapter or HBA.
M.2 and 7mm drive support: All 3.5-inch configurations listed in the table support both M.2 and 7mm
drives; however, some specific adapter combinations restrict the use of M.2 or 7mm drives, as listed in the
Controller selections section.
Table 19. Drive bay and backplane combinations with the 3.5-inch chassis (S/S = SAS/SATA, Any = AnyBay)

Cfg | CPUs | Total drives | NVMe drives§ | Front bays (3.5") | Mid bays | Rear bays | Front backplane | Mid backplane | Rear backplane | Riser 3 support
A | 1 or 2 | 8 | 0 | 8x S/S | None | None | 1x 8-S/S | None | None | Yes
B | 1 or 2 | 12 | 0 | 12x S/S | None | None | 1x 12-S/S | None | None | Yes
C | 1 or 2 | 14 | 0 | 12x S/S | None | 2x S/S 3.5" | 1x 12-S/S | None | 1x 2-3.5 | No
D | 1 or 2 | 16 | 0 | 12x S/S | None | 4x S/S 3.5" | 1x 12-S/S | None | 1x 4-3.5 | No
E | 2 | 20 | 0 | 12x S/S | 4x S/S 3.5" | 4x S/S 3.5" | 1x 12-S/S | 1x 4-3.5 | 1x 4-3.5 | No
F | 1 or 2 | 16 | 0 | 12x S/S | None | 4x S/S 2.5" | 1x 12-S/S | None | 1x 4-2.5 | No
G | 2 | 20 | 0 | 12x S/S | 4x S/S 3.5" | 4x S/S 2.5" | 1x 12-S/S | 1x 4-3.5 | 1x 4-2.5 | No
H | 2 | 20 | 8 (1:1) | 12x S/S | 8x NVMe 2.5" | None | 1x 12-S/S | 2x NVMe | None | No
I | 2 | 12 | 12 (1:1) | 12x Any | None | None | 1x 12-Any | None | None | No
J | 2 | 16 | 12 (1:1) | 12x Any | None | 4x S/S 3.5" | 1x 12-Any | None | 1x 4-3.5 | No
K | 2 | 20 | 12 (1:1) | 12x Any | 4x S/S 3.5" | 4x S/S 3.5" | 1x 12-Any | 1x 4-3.5 | 1x 4-3.5 | No
§ The text in parentheses is the subscription ratio. See the NVMe support section for details.
M.2 and 7mm drive support: All 2.5-inch configurations listed in the table support both M.2 and 7mm
drives.
Table 20. Drive bay and backplane combinations with the 2.5-inch chassis (S/S = SAS/SATA, Any = AnyBay)

Cfg | CPUs | Total drives | NVMe drives§ | Front bays (2.5") | Mid bays | Rear bays | Front backplane | Mid backplane | Rear backplane | Riser 3 support
A | 1 or 2 | 8 | 0 | 8x S/S | None | None | 1x 8-S/S | None | None | Yes
B | 1 or 2 | 16 | 0 | 16x S/S | None | None | 2x 8-S/S | None | None | Yes
C | 1 or 2 | 24 | 0 | 24x S/S | None | None | 3x 8-S/S | None | None | Yes
D | 1 or 2 | 28 | 0 | 24x S/S | None | 4x S/S 2.5" | 3x 8-S/S | None | 1x 4-2.5 | No
E | 2 | 36 | 0 | 24x S/S | 8x S/S 2.5" | 4x S/S 2.5" | 3x 8-S/S | 2x 4-2.5 | 1x 4-2.5 | No
F | 2 | 40 | 0 | 24x S/S | 8x S/S 2.5" | 8x S/S 2.5" | 3x 8-S/S | 2x 4-2.5 | 2x 4-2.5 | No
G | 1 or 2† | 8 | 8 (1:1) | 8x NVMe | None | None | 1x 8-NVMe | None | None | No
H | 2 | 16 | 16 (1:1) | 16x NVMe | None | None | 2x 8-NVMe | None | None | No
I | 2 | 24 | 24 (1:1) | 24x NVMe | None | None | 3x 8-NVMe | None | None | Yes*
J | 2 | 32 | 32 (1:2) | 24x NVMe | 8x NVMe 2.5" | None | 3x 8-NVMe | 2x 4-NVMe | None | Yes
K | 1 or 2† | 16 | 8 (1:1) | 8x S/S + 8x NVMe | None | None | 1x 8-S/S + 1x 8-NVMe | None | None | Yes*
L | 1 or 2† | 24 | 8 (1:1) | 16x S/S + 8x NVMe | None | None | 2x 8-S/S + 1x 8-NVMe | None | None | Yes*
M | 1 or 2† | 24 | 16 (1:1) | 8x S/S + 16x NVMe | None | None | 1x 8-S/S + 2x 8-NVMe | None | None | No
N | 1 or 2† | 8 | 8 (1:1) | 8x Any | None | None | 1x 8-Any | None | None | Yes*
O | 2 | 16 | 16 (1:1) | 8x Any + 8x NVMe | None | None | 1x 8-Any + 1x 8-NVMe | None | None | No
P | 1 or 2† | 16 | 8 (1:1) | 8x S/S + 8x Any | None | None | 1x 8-S/S + 1x 8-Any | None | None | Yes*
Q | 1 or 2† | 24 | 8 (1:1) | 16x S/S + 8x Any | None | None | 2x 8-S/S + 1x 8-Any | None | None | Yes*
R | 1 or 2† | 28 | 8 (1:1) | 16x S/S + 8x Any | None | 4x S/S 2.5" | 2x 8-S/S + 1x 8-Any | None | 1x 4-2.5 | No
S | 1 or 2 | 24 | 16 (1:4) | 8x S/S + 16x Any | None | None | 1x 8-S/S + 2x 8-Any | None | None | Yes
T | 1 or 2 | 16 | 16 (1:4) | 16x Any | None | None | 2x 8-Any | None | None | Yes
U | 1 or 2 | 24 | 24 (1:4) | 24x Any | None | None | 3x 8-Any | None | None | Yes
§ The text in parentheses is the subscription ratio. See the NVMe support section for details.
† Only NVMe configurations that use onboard NVMe ports (4 drives) plus 1 retimer (4 drives), or configurations with a Tri-Mode RAID adapter, are supported with 1 CPU. See the specifics in the Controller selections section.
* No support for Riser 3 if 8x or more onboard NVMe ports are used. See the Controller selections section.
Field upgrades
The SR650 V2 is orderable without drive bays, allowing you to add a backplane, cabling and controllers as
field upgrades. The server also supports upgrading some configurations by adding additional front drive bays
(for example, upgrading from 8 to 16x 2.5-inch drive bays).
Upgrade path: The key criterion for upgrade support is that the target configuration must be one of
the supported drive bay configurations listed in the Supported drive bay combinations section.
For example, if you are upgrading a 2.5-inch drive configuration from Config A to Config B, you will need
these additional options:
4XH7A60930, ThinkSystem SR650 V2/SR665 8x2.5" SAS/SATA Backplane Option Kit
4X97A59811, ThinkSystem SR650 V2 2.5" Chassis Front BP2 SAS/SATA Cable Kit
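The upgrade workflow, checking that the target is a supported layout and then collecting the option kits, can be sketched as a simple lookup. Only the Config A to Config B example above is populated here; the structure itself is illustrative, while the config letters and part numbers come from the tables in this guide:

```python
# Supported 2.5-inch front-bay target layouts (subset: Configs A, B, C
# from the drive bay combinations table: 8x, 16x, 24x SAS/SATA bays).
SUPPORTED_CONFIGS = {"A", "B", "C"}

# Option kits per (source, target) upgrade; only the A -> B example
# from the text is populated.
UPGRADE_KITS = {
    ("A", "B"): [
        ("4XH7A60930", 'SR650 V2/SR665 8x2.5" SAS/SATA Backplane Option Kit'),
        ("4X97A59811", 'SR650 V2 2.5" Chassis Front BP2 SAS/SATA Cable Kit'),
    ],
}

def kits_for_upgrade(source, target):
    """Return the option part numbers needed to go from source to target."""
    if target not in SUPPORTED_CONFIGS:
        raise ValueError(f"config {target!r} is not a supported target layout")
    return [part for part, _desc in UPGRADE_KITS.get((source, target), [])]

print(kits_for_upgrade("A", "B"))  # ['4XH7A60930', '4X97A59811']
```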
Table 23. Drive bay field upgrades for the 3.5-inch chassis (S/S = SAS/SATA)

Cfg A: Front 8x S/S 3.5", no mid bays, no rear bays. Required kits (all):
4XH7A60932, ThinkSystem SR650 V2/SR665 8x3.5" SAS/SATA Backplane Option Kit
4X97A59804, ThinkSystem SR650 V2 3.5" Chassis Front Backplane SAS/SATA Cable Kit

Cfg B: Front 12x S/S 3.5", no mid bays, no rear bays. Required kits (all):
4XH7A60929, ThinkSystem SR650 V2/SR665 12x3.5" SAS/SATA Backplane Option Kit
4X97A59804, ThinkSystem SR650 V2 3.5" Chassis Front Backplane SAS/SATA Cable Kit

Cfg C: Front 12x S/S 3.5", no mid bays, rear 2x S/S 3.5". Required kits (all):
4XH7A60929, ThinkSystem SR650 V2/SR665 12x3.5" SAS/SATA Backplane Option Kit
4XH7A60940, ThinkSystem SR650 V2/SR665 Rear 2x3.5" SAS/SATA Backplane Option Kit
4X97A59804, ThinkSystem SR650 V2 3.5" Chassis Front Backplane SAS/SATA Cable Kit
4X97A59806, ThinkSystem SR650 V2 3.5" Chassis Rear Backplane SAS/SATA Cable Kit

Cfg D: Front 12x S/S 3.5", no mid bays, rear 4x S/S 3.5". Required kits (all):
4XH7A60929, ThinkSystem SR650 V2/SR665 12x3.5" SAS/SATA Backplane Option Kit
4XH7A60939, ThinkSystem SR650 V2/SR665 Rear 4x3.5" SAS/SATA Backplane Option Kit
4X97A59804, ThinkSystem SR650 V2 3.5" Chassis Front Backplane SAS/SATA Cable Kit
4X97A59806, ThinkSystem SR650 V2 3.5" Chassis Rear Backplane SAS/SATA Cable Kit

Cfg E: Front 12x S/S 3.5", mid 4x S/S 3.5", rear 4x S/S 3.5". Required kits (all):
4XH7A60929, ThinkSystem SR650 V2/SR665 12x3.5" SAS/SATA Backplane Option Kit
4XH7A61053, ThinkSystem SR650 V2 Middle 4x3.5" SAS/SATA Backplane Option Kit
4XH7A60939, ThinkSystem SR650 V2/SR665 Rear 4x3.5" SAS/SATA Backplane Option Kit
4X97A59804, ThinkSystem SR650 V2 3.5" Chassis Front Backplane SAS/SATA Cable Kit
4X97A59806, ThinkSystem SR650 V2 3.5" Chassis Rear Backplane SAS/SATA Cable Kit
4X97A59807, ThinkSystem SR650 V2 3.5" Chassis Middle Backplane SAS/SATA Cable Kit
Table 24. Drive bay field upgrades for the 2.5-inch chassis (S/S = SAS/SATA)

Cfg A: Front 8x S/S 2.5", no mid bays, no rear bays. Required kits (all):
1x 4XH7A60930, ThinkSystem SR650 V2/SR665 8x2.5" SAS/SATA Backplane Option Kit
1x 4X97A59809, ThinkSystem SR650 V2 2.5" Chassis Front BP1 SAS/SATA Cable Kit

Cfg B: Front 16x S/S 2.5", no mid bays, no rear bays. Required kits (all):
2x 4XH7A60930, ThinkSystem SR650 V2/SR665 8x2.5" SAS/SATA Backplane Option Kit
1x 4X97A59809, ThinkSystem SR650 V2 2.5" Chassis Front BP1 SAS/SATA Cable Kit
1x 4X97A59811, ThinkSystem SR650 V2 2.5" Chassis Front BP2 SAS/SATA Cable Kit

Cfg C: Front 24x S/S 2.5", no mid bays, no rear bays. Required kits (all):
3x 4XH7A60930, ThinkSystem SR650 V2/SR665 8x2.5" SAS/SATA Backplane Option Kit
1x 4X97A59809, ThinkSystem SR650 V2 2.5" Chassis Front BP1 SAS/SATA Cable Kit
1x 4X97A59811, ThinkSystem SR650 V2 2.5" Chassis Front BP2 SAS/SATA Cable Kit
1x 4X97A59813, ThinkSystem SR650 V2 2.5" Chassis Front BP3 SAS/SATA Cable Kit
If you have an existing configuration with an HBA or RAID adapter installed in one of the rear PCIe slots,
and you wish to upgrade to one of the internal storage adapters (RAID 940-16i 8GB Flash PCIe Gen4 12Gb
Internal Adapter or 440-16i SAS/SATA PCIe Gen4 12Gb Internal HBA), you will need to order an additional
cable kit as listed in the following table. The contents of each kit are listed in the next section.
4X97A59810 ThinkSystem SR650 V2 2.5" Chassis Front BP2 NVMe Cable Kit
1x SBB7A40147 - PCIe from SFF to front BP
1x SBB7A24100 - Gen4 Slimline x8(2x) 2 MCIO x8(2x) 850mm
1x SBB7A24134 - Gen4 Slimline x8(2x) 2 MCIO x8(2x) 670mm
1x SBB7A32003 - Gen4 Slimline x8(2x) 2 MCIO x8(2x)
1x SBB7A32004 - Gen4 Slimline x8(2x) 2 MCIO x8(2x)
1x SBB7A32006 - 8x2.5 AnyBay PCIe MB to BP long
1x SBB7A32007 - 8x2.5 AnyBay PCIe MB to BP longest
1x SBB7A32008 - RTM NVMe Cable to Front BP
1x SBB7A23679 - Power MB to Front 2.5 BP
4X97A59811 ThinkSystem SR650 V2 2.5" Chassis Front BP2 SAS/SATA Cable Kit
1x SBB7A23679 - Power MB to Front 2.5 BP
1x SBB7A24243 - SAS/SATA SFF (Gen3) to 8X2.5 BP2
1x SBB7A29305 - RAID to 8x2.5 BP2,CPP exp
4X97A59812 ThinkSystem SR650 V2 2.5" Chassis Front BP3 NVMe Cable Kit
1x SBB7A40147 - PCIe from SFF to front BP
1x SBB7A24070 - Gen4 Slimline x8(2x) to MCIO x8(2x) 700mm
1x SBB7A24103 - Gen4 Slimline x8(2x) to MCIO x8(2x) 150mm
1x SBB7A24140 - Gen4 Slimline x8(2x) to MCIO x8(2x) 460mm
1x SBB7A29306 - PCIe5/6 to 8x2.5 AnyBay BP3 NVME 4-7
1x SBB7A32002 - Gen4 Slimline x8(2x) to MCIO x8(2x)
1x SBB7A32011 - SATA Gen4 Slimline x8 to Slimline x4 signal cable
1x SBB7A34212 - 600mm,PCIe Gen4 signal cable
1x SBB7A23679 - Power MB to Front 2.5 BP
4X97A59813 ThinkSystem SR650 V2 2.5" Chassis Front BP3 SAS/SATA Cable Kit
1x SBB7A24177 - Gen4 SAS Cable to Front BP
1x SBB7A24180 - Gen3 SAS Cable to Front BP
1x SBB7A23679 - Power MB to Front 2.5 BP
4X97A59814 ThinkSystem SR650 V2 2.5" Chassis Rear Backplane SAS/SATA Cable Kit
1x SBB7A21673 - SAS/SATA MB to Rear HDD BP
1x SBB7A21682 - Gen3 SAS Cable to M/R BP
1x SBB7A21683 - Gen3 SAS Cable to rear BP
1x SBB7A21687 - Gen3 SAS Cable to F/R BP
1x SBB7A21697 - Gen4 SAS Cable to F/R BP
1x SBB7A23688 - CFF RAID SAS Cable to rear BP
1x SBB7A23694 - CFF RAID SAS Cable to rear BP
1x SBB7A23715 - Gen4 Rear 4X2.5/4X3.5 BP
1x SBB7A23943 - OB SATA Cable to Rear BP
1x SBB7A24006 - Gen4 SAS Cable to rear BP
1x SBB7A24228 - SFF to HDD 4*2.5/2*3.5
1x SBB7A24237 - Gen3 SFF to RearBP (4*2.5/2*3.5)
1x SBB7A28832 - Rear + MID HDD BP4+5 (x4)*2
1x SBB7A31987 - 840mm,SAS/SATA,signal cable
1x SBB7A31999 - Rear BP SAS1 C5 cable
1x SBB7A21685 - Power YRiser to Rear 4X3.5 BP
1x SBB7A21689 - Power YRiser to Middle 4X3.5 BP
1x SBB7A24204 - Power Y-Cable to Rear BP
4X97A59815 ThinkSystem SR650 V2 2.5" Chassis Middle Backplane SAS/SATA Cable Kit
1x SBB7A21682 - Gen3 SAS Cable to M/R BP
1x SBB7A23712 - EXP SAS Cable to Rear BP
1x SBB7A23721 - EXP SAS Cable to Middle BP
1x SBB7A24170 - Gen4 SAS Cable to M/R BP
1x SBB7A28832 - Rear + MID HDD BP4+5 (x4)*2
1x SBB7A21685 - Power YRiser to Rear 4X3.5 BP
1x SBB7A23664 - Power XRiser -Midd to Midd-Riser
4X97A59816 ThinkSystem SR650 V2 2.5" Chassis Middle Backplane NVMe Cable Kit
1x SBB7A21666 - PCIe MB to middle NVMe
1x SBB7A23916 - Switch NVMe Cable to Front BP
1x SBB7A24158 - Switch NVMe Cable to Front BP
1x SBB7A24213 - MB to Middle NVME BP
1x SBB7A21685 - Power YRiser to Rear 4X3.5 BP
1x SBB7A23664 - Power XRiser -Mid to Mid-Riser
When adding drive bays, you will also need to add the appropriate storage controller(s). Consult the tables
in the Controller selections section to determine which controller selections are supported and which
additional controllers you will need. Controllers are described in the Controllers for internal storage section.
When adding a RAID 930 or 940 adapter as a field upgrade to a configuration with 3.5-inch mid drive bays,
order one supercap holder. Ordering information is in the following table.
Table 28. Supercap holder for 3.5-inch mid drive bay config
Part number | Feature | Description | Maximum supported
4M17A61230 | B8MQ | ThinkSystem 2U Supercap Holder Kit | 1 (holds 2 supercaps)
M.2 drives
The SR650 V2 supports one or two M.2 form-factor SATA or NVMe drives for use as an operating system
boot solution or as additional storage.
The M.2 drives install into an M.2 module which is mounted horizontally in the server:
In servers without mid-chassis drives, the M.2 module is mounted on the air baffle
With a mid-chassis drive cage (2.5-inch or 3.5-inch), the M.2 module is mounted on the drive cage,
as shown in the Mid drive bays section.
There are three different M.2 modules supported, as listed in the following table.
Configurations with 14x 3.5-inch SATA drives: An M.2 adapter is supported in all configurations
except when the server is configured with 12x front 3.5-inch drives plus 2x rear 3.5-inch drives using the
onboard SATA controller. This is because the two rear drives are connected to the same onboard port as
the M.2 adapter. For M.2 support with 14 or more 3.5-inch SATA drives, use a RAID adapter or SAS
HBA.
For further details about M.2 components, see the ThinkSystem M.2 Drives and M.2 Adapters product guide:
https://lenovopress.com/lp0769-thinksystem-m2-drives-adapters
Performance tip: For best performance with VROC NVMe RAID, the drives in an array should all be
connected to the same processor. Spanning processors is possible; however, performance will be
unpredictable and should be evaluated based on your workload.
By default, VROC NVMe RAID support is limited to use with only Intel-branded NVMe drives (feature B9X7).
If you wish to enable RAID support for non-Intel NVMe SSDs, select the VROC Premium license using the
ordering information in the following table. VROC Premium is fulfilled as a Feature on Demand (FoD) license
and is activated via the XCC management processor user interface.
VROC Premium is only needed for non-Intel NVMe drives in a RAID configuration. You do not need the
VROC Premium license upgrade under any of the following conditions:
If you have SATA drives connected to the onboard SATA ports, you do not need VROC Premium
If you have Intel NVMe drives connected to the onboard NVMe ports, you do not need VROC
Premium
If you have non-Intel NVMe drives connected to the onboard NVMe ports, but you don’t want RAID
support, you do not need VROC Premium
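The conditions above reduce to a single predicate: the license is needed only when all three factors line up. An illustrative sketch (the drive and port labels are this example's own):

```python
def needs_vroc_premium(drive, port, want_raid):
    """VROC Premium is required only for RAID on non-Intel NVMe drives
    attached to the onboard NVMe ports, per the conditions above."""
    return drive == "non-intel-nvme" and port == "onboard-nvme" and want_raid

assert not needs_vroc_premium("sata", "onboard-sata", True)        # SATA RAID: no
assert not needs_vroc_premium("intel-nvme", "onboard-nvme", True)  # Intel NVMe: no
assert not needs_vroc_premium("non-intel-nvme", "onboard-nvme", False)  # no RAID: no
assert needs_vroc_premium("non-intel-nvme", "onboard-nvme", True)  # license needed
```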
For specifications about the RAID adapters and HBAs supported by the SR650 V2, see the ThinkSystem
RAID Adapter and HBA Comparison, available from:
https://lenovopress.com/lp1288-lenovo-thinksystem-raid-adapter-and-hba-reference#sr650-v2-support=SR650%2520V2
For details about these adapters, see the relevant product guide:
SAS HBAs: https://lenovopress.com/servers/options/hba
RAID adapters: https://lenovopress.com/servers/options/raid
M.2 drive support: The use of M.2 drives requires an additional adapter as described in the M.2 drives
subsection.
Note: NVMe PCIe SSDs support surprise hot removal and hot insertion, provided the operating system
supports PCIe SSD hot-swap.
Optical drives
The server supports the external USB optical drive listed in the following table.
The drive is based on the Lenovo Slim DVD Burner DB65 drive and supports the following formats: DVD-
RAM, DVD-RW, DVD+RW, DVD+R, DVD-R, DVD-ROM, DVD-R DL, CD-RW, CD-R, CD-ROM.
I/O expansion
Tip: For configurations with 2.5-inch front drive bays, an internal RAID adapter or HBA can be installed
in a dedicated space and cabled to a PCIe 4.0 x8 connector, thereby freeing up a slot for other purposes.
The following figure shows the locations of the rear-accessible slots for each configuration selection. The
OCP slot is located in the lower-left corner.
Tip: It is also possible to not have any slot selections, in which case slot fillers will be derived in the
configurator. Slots can be added later as field upgrades using option part numbers as listed in the Field
upgrades table.
Serial port
The SR650 V2 optionally supports an RS-232 serial port by adding a COM port bracket to either slot 3 or slot
6. Ordering information is shown in the following table.
Tip: If you want to add both a 7mm drive enclosure and PCIe slots in slots 4 and 5, you will need to order
the 7mm drive option (either 4XH7A61057 or 4XH7A61058) plus the x16/x16/E PCIe G4 Riser 1/2 Kit,
4XH7A61081. The latter part number provides the 2-slot riser card.
For more information, including the transceivers and cables that each adapter supports, see the list of
Lenovo Press Product Guides in the Networking adapters category:
https://lenovopress.com/servers/options/ethernet
The following table lists additional supported network adapters that can be installed in the regular PCIe slots.
Use of the Mellanox HDR PCIe Aux Kit: The HDR Aux Kit (4C57A14179) enables a Socket Direct
connection which allows the HDR adapter (4C57A15326) to have direct access to each of the two
processors. Such a configuration ensures extremely low latency and CPU utilization in addition to higher
network throughput. Socket Direct also maximizes AI and ML application performance, as it enables
native GPU-Direct Technologies.
Not supported: The following adapters are not supported due to problems with firmware updates:
ThinkSystem Emulex LPe35000 32Gb 1-port PCIe Fibre Channel Adapter, 4XC7A08250
ThinkSystem Emulex LPe35002 32Gb 2-port PCIe Fibre Channel Adapter, 4XC7A08251
For more information, see the list of Lenovo Press Product Guides in the Host bus adapters category:
https://lenovopress.com/servers/options/hba
For details about these adapters, see the Lenovo Press product guides in the Flash Adapters category:
https://lenovopress.com/servers/options/ssdadapter
Configuration rules
The following configuration requirements must be met when installing flash storage adapters:
GPU adapters are not supported
GPU adapters
The SR650 V2 supports the following graphics processing units (GPUs). All GPUs installed must be
identical.
Table 65. ThinkSystem SR650 V2 GPU Full Length Thermal Option Kit
Part number: 4H47A38666
Description: ThinkSystem SR650 V2 GPU Full Length Thermal Option Kit
Maximum supported: 1
The kit contains:
2x 1U processor performance heatsinks - replace existing 2U heatsinks (SBB7A03313)
1x ThinkSystem 2U GPU air duct - replaces main air baffle (SBB7A14414)
3x GPU extend air ducts - needed in a zone if an A10 or other single-wide GPU > 75W is installed in the upper slot (SBB7A17336)
3x Air duct fillers - needed in each riser zone if no GPU is installed in that zone (SBB7A17338)
3x GPU power cables for double-wide GPUs (SBB7A21691)
3x GPU power cables for single-wide GPUs (SBB7A21686)
3x GPU power Y-cables when 2x single-wide GPUs installed on one riser (SBB7A23757)
The following figure shows the GPU air duct with GPU air duct fillers and GPU extend air ducts installed.
Cooling
The SR650 V2 server has up to six 60 mm hot-swap fans. Five fans are needed when one processor is
installed and six fans are required when two processors are installed. Fans are N+1 redundant, tolerating a
single-rotor failure. The server also has one or two additional fans integrated in each of the two power
supplies.
Depending on the configuration, the server supports one of the following:
Standard fans (single-rotor, 17K RPM, 60x38 mm)
Performance fans (dual-rotor, 19K RPM, 60x56 mm)
The performance fans are dual-rotor counter-rotating units: each fan has two separate propellers, one in
front of the other, rotating in opposite directions.
For factory (CTO) orders, the configurator will automatically select the fans required for the configuration.
For field upgrades, see the Thermal Rules section in the Information Center for the SR650 V2:
https://thinksystem.lenovofiles.com/help/topic/SR650V2/thermal_rules.html?cp=4_11_7_2_1
Ordering information for the fans is listed in the following table.
Power supplies
The SR650 V2 supports up to two redundant hot-swap power supplies.
The power supply choices are listed in the following table. Both power supplies used in the server must be
identical.
Tip: When configuring a server in the DCSC configurator, power consumption is calculated precisely by
interfacing with Lenovo Capacity Planner. You can therefore select the appropriate power supply for
your configuration. However, do consider future upgrades that may require additional power needs.
Dual-voltage power supplies are auto-sensing and support both 110V AC (100-127V 50/60 Hz) and 220V
AC (200-240V 50/60 Hz) power. For China customers, all power supplies support 240V DC.
All supported AC power supplies have a C14 connector. The -48V DC power supply has a Weidmuller TOP
4GS/3 7.6 terminal as shown in the following figure.
110V customers: If you plan to use the ThinkSystem 1100W power supply with a 110V power source,
select a power cable that is rated above 10A. Power cables that are rated at 10A or below are not
supported with 110V power.
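The reason a 10A cable is insufficient can be seen with a back-of-the-envelope calculation: at full load, the supply's input current is its output power divided by input voltage and conversion efficiency. The efficiency figure below is an assumption for illustration, not a Lenovo specification.

```python
def input_current_amps(output_watts: float, input_volts: float,
                       efficiency: float = 0.94) -> float:
    """Estimate AC input current at full load.

    efficiency is an assumed conversion efficiency; actual values vary
    by power supply model and load.
    """
    return output_watts / (input_volts * efficiency)

# An 1100 W supply at the bottom of the 110 V range (100 V) draws well
# over 10 A at full load, which is why cables rated at 10 A or below
# are not supported with 110 V power.
print(round(input_current_amps(1100, 100), 1))  # prints 11.7
```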
For the -48V DC Power Supply, the following power cable is supported.
Local management
The SR650 V2 offers a front operator panel with key LED status indicators, as shown in the following figure.
Tip: The Network LED only shows network activity of the installed OCP network adapter.
Figure 19. Front operator controls are on the left and right side of the server
Light path diagnostics
The server offers light path diagnostics. If an environmental condition exceeds a threshold or if a system
component fails, XCC lights LEDs inside the server to help you diagnose the problem and find the failing
part. The server has fault LEDs next to the following components:
Each processor
Each memory DIMM
Each drive bay
Each system fan
Each power supply
Figure 20. Operator panel choices for the 8x 2.5-inch drive bay configuration
The Integrated Diagnostics Panel allows quick access to system status, firmware, network, and health
information. The LCD display on the panel and the function buttons give you access to the following
information:
Active alerts
Status Dashboard
System VPD: machine type & model, serial number, UUID string
System firmware levels: UEFI and XCC firmware
XCC network information: hostname, MAC address, IP address, DNS addresses
Environmental data: Ambient temperature, CPU temperature, AC input voltage, estimated power
consumption
Active XCC sessions
System reset action
The Integrated Diagnostics Panel can be configured as listed in the following table. It is available only via
configure-to-order (CTO) and cannot be added as a field upgrade.
The front of the server also houses an information pull-out tab (also known as the network access tag). See
Figure 2 for the location. A label on the tab shows the network information (MAC address and other data) to
remotely access the service processor.
There are two XClarity Controller upgrades available for the server, Advanced and Enterprise.
XCC Advanced Upgrade adds the following functions:
Remotely viewing video with graphics resolutions up to 1600x1200 at 75 Hz with up to 23 bits per
pixel, regardless of the system state
Remotely accessing the server using the keyboard and mouse from a remote client
International keyboard mapping support
Syslog alerting
Redirecting serial console via SSH
Component replacement log (Maintenance History log)
Access restriction (IP address blocking)
Lenovo SED security key management
Displaying graphics for real-time and historical power usage data and temperature
XCC Enterprise Upgrade enables the following additional features:
Boot video capture and crash video capture
Virtual console collaboration - ability for up to 6 remote users to log into the remote session
simultaneously
Remote console Java client
Mapping the ISO and image files located on the local client as virtual drives for use by the server
Mounting the remote ISO and image files via HTTPS, SFTP, CIFS, and NFS
Power capping
System utilization data and graphic view
Single sign on with Lenovo XClarity Administrator
Update firmware from a repository
License for XClarity Energy Manager
For systems with XCC Standard or XCC Advanced installed, field upgrades are available as listed in the
following table.
For more information about XClarity Energy Manager, see the following resources:
Lenovo Support page:
https://datacentersupport.lenovo.com/us/en/solutions/lnvo-lxem
Lenovo Information Center:
https://sysmgt.lenovofiles.com/help/topic/LXEM/lxem_overview.html?cp=4
Security
The server offers the following electronic security features:
Administrator and power-on password
Trusted Platform Module (TPM) supporting TPM 2.0 (no support for TPM 1.2)
Optional Nationz TPM 2.0, available only in China (CTO only)
Self-encrypting drives (SEDs) with support for enterprise key managers - see the SED encryption key
management section
The server is NIST SP 800-147B compliant.
The SR650 V2 server also offers the following physical security features:
Optional chassis intrusion switch
Optional lockable front security bezel
The optional lockable front security bezel is shown in the following figure and includes a key that enables
you to secure the bezel over the drives and system controls thereby reducing the chance of unauthorized or
accidental access to the server.
For more information on this offering, see the paper Introduction to Intel Transparent Supply Chain on
Lenovo ThinkSystem Servers, available from https://lenovopress.com/lp1434-introduction-to-intel-transparent-supply-chain-on-thinksystem-servers.
Rack installation
The following table lists the rack installation options that are available for the SR650 V2.
The VGA Upgrade Kit allows you to upgrade your server by adding a VGA video port to the front of the
server (if the server does not already come with a front VGA port). When the front VGA is in use, the rear
VGA port is automatically disabled.
You can download supported VMware vSphere hypervisor images from the following web page and load them
on the M.2 drives or 7mm drives using the instructions provided:
https://vmware.lenovo.com/content/custom_iso/
Operating environment
The SR650 V2 server complies with ASHRAE Class A2 specifications with most configurations, and
depending on the hardware configuration, also complies with ASHRAE Class A3 and Class A4
specifications.
For restrictions to ASHRAE support regarding maximum ambient temperature, see the Thermal Rules
section in the Information Center for the SR650 V2:
https://thinksystem.lenovofiles.com/help/topic/SR650V2/thermal_rules.html?cp=4_11_7_2_1
Temperature and humidity
The server is supported in the following environment:
Air temperature:
Operating:
ASHRAE Class A2: 10°C to 35°C (50°F to 95°F); the maximum ambient temperature
decreases by 1°C for every 300 m (984 ft) increase in altitude above 900 m (2,953 ft).
ASHRAE Class A3: 5°C to 40°C (41°F to 104°F); the maximum ambient temperature
decreases by 1°C for every 175 m (574 ft) increase in altitude above 900 m (2,953 ft).
ASHRAE Class A4: 5°C to 45°C (41°F to 113°F); the maximum ambient temperature
decreases by 1°C for every 125 m (410 ft) increase in altitude above 900 m (2,953 ft).
Server off: 5°C to 45°C (41°F to 113°F)
Shipment/storage: -40°C to 60°C (-40°F to 140°F)
Maximum altitude: 3,050 m (10,000 ft)
Relative Humidity (non-condensing):
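The altitude-derating rules quoted above follow a common pattern: a base maximum ambient per ASHRAE class, reduced by 1°C for each fixed altitude increment above 900 m. The following sketch encodes those rules for quick what-if checks; it is an illustration of the stated rules, not a substitute for the Thermal Rules documentation.

```python
# (base max ambient in degrees C, metres of altitude per 1 degree C of
# derating above 900 m), taken from the ASHRAE class limits stated above
ASHRAE_DERATING = {
    "A2": (35.0, 300.0),
    "A3": (40.0, 175.0),
    "A4": (45.0, 125.0),
}

def max_ambient_c(ashrae_class: str, altitude_m: float) -> float:
    """Maximum supported ambient temperature (degrees C) at a given altitude."""
    base, metres_per_degree = ASHRAE_DERATING[ashrae_class]
    if altitude_m <= 900.0:
        return base
    # Maximum ambient decreases by 1 degree C per metres_per_degree above 900 m
    return base - (altitude_m - 900.0) / metres_per_degree

# Example: Class A2 at the server's 3,050 m altitude limit
print(round(max_ambient_c("A2", 3050), 1))  # prints 27.8
```

At the 3,050 m maximum altitude, a Class A2 configuration is limited to roughly 27.8°C ambient rather than the sea-level 35°C.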
Services
Lenovo Services is a dedicated partner to your success. Our goal is to reduce your capital outlays, mitigate
your IT risks, and accelerate your time to productivity.
Note: Some service options may not be available in all countries. For more information, go to
https://www.lenovo.com/services. For information about Lenovo service upgrade offerings that are
available in your region, contact your local Lenovo sales representative or business partner.
Regulatory compliance
The SR650 V2 conforms to the following standards:
ANSI/UL 62368-1
IEC 62368-1 (CB Certificate and CB Test Report)
FCC - Verified to comply with Part 15 of the FCC Rules, Class A
Canada ICES-003, issue 7, Class A
CSA C22.2 No. 62368-1
CISPR 32, Class A, CISPR 35
Japan VCCI, Class A
Taiwan BSMI CNS13438, Class A; CNS14336-1; Section 5 of CNS15663
CE, UKCA Mark (EN55032 Class A, EN62368-1, EN55024, EN55035, EN61000-3-2, EN61000-3-3,
(EU) 2019/424, and EN50581)
Korea KN32, Class A, KN35
Russia, Belarus and Kazakhstan, TP EAC 037/2016 (for RoHS)
Russia, Belarus and Kazakhstan, EAC: TP TC 004/2011 (for Safety); TP TC 020/2011 (for EMC)
Australia/New Zealand AS/NZS CISPR 32, Class A; AS/NZS 62368.1
UL Green Guard, UL2819
Energy Star 3.0
EPEAT (NSF/ ANSI 426) Bronze
China CCC certificate, GB17625.1; GB4943.1; GB/T9254
China CECP certificate, CQC3135
China CELP certificate, HJ 2507-2011
Japanese Energy-Saving Act
Mexico NOM-019
TUV-GS (EN62368-1, and EK1-ITB2000)
India BIS
Germany GS
For details about supported drives, adapters, and cables, see the following Lenovo Press Product Guides:
Lenovo Storage D1212 and D1224
http://lenovopress.com/lp0512
Lenovo Storage D3284
http://lenovopress.com/lp0513
For more information, see the list of Product Guides in the Backup units category:
https://lenovopress.com/servers/options/backup
The following table lists the external RDX backup options available.
For more information, see the Lenovo RDX USB 3.0 Disk Backup Solution product guide:
https://lenovopress.com/tips0894-rdx-usb-30
For more information, see the Lenovo Press documents in the PDU category:
https://lenovopress.com/servers/options/pdu
Rack cabinets
The following table lists the supported rack cabinets.
For specifications about these racks, see the Lenovo Rack Cabinet Reference, available from:
https://lenovopress.com/lp1287-lenovo-rack-cabinet-reference
For more information, see the list of Product Guides in the Rack cabinets category:
https://lenovopress.com/servers/options/racks
The following table lists the available KVM switches and the options that are supported with them.
For more information, see the list of Product Guides in the KVM Switches and Consoles category:
http://lenovopress.com/servers/options/kvm
LENOVO PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the
information herein; these changes will be incorporated in new editions of the publication. Lenovo may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
The products described in this document are not intended for use in implantation or other life support applications
where malfunction may result in injury or death to persons. The information contained in this document does not
affect or change Lenovo product specifications or warranties. Nothing in this document shall operate as an express or
implied license or indemnity under the intellectual property rights of Lenovo or third parties. All information contained
in this document was obtained in specific environments and is presented as an illustration. The result obtained in
other operating environments may vary. Lenovo may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials
for this Lenovo product, and use of those Web sites is at your own risk. Any performance data contained herein was
determined in a controlled environment. Therefore, the result obtained in other operating environments may vary
significantly. Some measurements may have been made on development-level systems and there is no guarantee
that these measurements will be the same on generally available systems. Furthermore, some measurements may
have been estimated through extrapolation. Actual results may vary. Users of this document should verify the
applicable data for their specific environment.