V1.2.1 - IBM TotalStorage SAN Volume Controller Restrictions

 Flash (Alert)
 
Abstract
This document lists the restrictions specific to SAN Volume Controller V1.2.1. There may be additional restrictions imposed on hardware attached to the SAN Volume Controller, e.g. switches and storage.
 
Content

V1.2.1 IBM TotalStorage SAN Volume Controller Restrictions

Host Limitations
Host Based Multipathing
SAN Fibre Networks
Back-end Storage
Hardware
Functions
SAN Volume Controller Concurrent Code Load (CCL)
GUI Support

Host Limitations

The IBM TotalStorage SAN Volume Controller supports a variety of host types and operating systems as described on the Supported Hardware List.

The SAN Volume Controller requires that a multipathing driver such as SDD be installed on all attached hosts (for information on coexistence of the SDD multipathing driver, please see the Host Based Multipathing section below).

The following constraints are more restrictive than those imposed by the host operating system.

  • IBM eServer pSeries (AIX)
  • A maximum of 512 vdisks per host per SVC cluster is supported.
  • If the configuration includes both SAN Volume Controller vdisks and native ESS LUNs, SDD limits the number of vpaths as follows:
  • On AIX 5.1, SDD allows a total of 600 vpaths between the two products to be configured on a single host.
  • On AIX 5.2, SDD allows a maximum of 1200 ESS vpaths and 512 SVC vpaths to be configured.
  • AIX's native multipath I/O (MPIO) is not supported.

  • Intel Servers (Windows 2000/2003)

  • MSCS clustering is supported for 2-way clusters (Windows 2000 and Windows 2003).
  • MSCS clustering is supported for 4-way clusters (Windows 2003 using QLogic HBAs only).
  • Microsoft software RAID above SVC is not supported.
  • SAN Boot support for hosts outside of the BladeCenter environment is limited to those attached via QLogic 23xx HBAs.
  • SAN Boot support for BladeCenter is limited to BladeCenter servers with integrated Brocade switches or with the Optical Passthru Module (OPM).
  • Microsoft's native multipath I/O (MPIO) is not supported.

  • Intel Servers (Red Hat Linux & SUSE Linux)
  • There is no support for Linux clustering.
  • Also note fabric and CCL restrictions below.

  • Sun (Solaris)
  • There is no support for Sun clustering.
  • Also note CCL restrictions below.
  • SAN Volume Controller supports the 64 bit OS only; 32 bit Solaris is not supported.
  • Sun MPxIO multipathing is not supported.

  • HP (HP-UX)
  • There is no support for HP clustering.
  • HP-UX supports a maximum of 8 vdisks for each SAN Volume Controller node pair (I/O group).
  • SAN Volume Controller supports the 64 bit OS only; 32 bit HP-UX is not supported.
  • HP-UX PVLinks multipathing is not supported.

  • BladeCenter HS20, HS40
  • Some combinations of BladeCenter internal switch modules and external switches are required to run in 'interop' mode. Information on configuring BladeCenter with other switches can be found in the BladeCenter documentation.
  • When BladeCenter integrated switch modules are connected to existing SANs, the fabric may be required to run in 'interop' or 'open' mode. This is supported; see the BladeCenter documentation for information on configuring BladeCenter with other switches.
  • VMware ESX 2.5
  • SVC supports the attachment of VMware hosts running ESX V2.5. The following VMware guest operating systems are supported with VMFS and raw mode disks:
    Windows 2003 Enterprise Edition
    Windows 2000 Advanced Server
  • SVC does not support the attachment of VMware clustered hosts or clustering within the VMware guest operating system.
  • Multipath SAN attachment is supported between the SVC cluster and the VMware host.
  • SDD is not used for multipathing; the native VMware drivers are used.
  • QLogic host bus adapters should be used.

  • VMware ESX 2.1
  • SVC supports the attachment of VMware hosts running ESX V2.1. The following VMware guest operating system is supported with VMFS disks:
    Novell NetWare 6.5, excluding SAN Volume Controller PPRC relationships.

    Note: Windows 2003 Enterprise Edition and Windows 2000 Advanced Server are no longer supported by VMware at this level and are therefore only supported under ESX V2.5.
  • SVC does not support the attachment of VMware clustered hosts or clustering within the VMware guest operating system.
  • Only single-path SAN attachment is supported between the SVC cluster and the VMware host. This means the VMware host must be configured so that there is a single SAN path between the VMware host and the SVC cluster, i.e. a single HBA port on the VMware host must be zoned so that it is the only port in that host which can see the SVC cluster, and that HBA port must be able to see only a single port on one node in the cluster. Different VMware hosts may be zoned to different nodes or different ports within the SVC cluster to provide some level of load balancing. Similarly, different HBA ports on the same VMware host may be zoned to a number of different SVC clusters if required. The rule remains, though, that for any given VMware host and SVC cluster there can be only one SAN path connection (a minimal validation sketch of this rule follows this section).
    The restriction to single path SAN attachment means that the following activities are not supported concurrently with host application I/O:
    1. SAN fabric maintenance
    2. VMware host maintenance
    3. SVC cluster maintenance including SVC cluster code updates
    Note: if a SAN path failure occurs in the single path connection between the VMware host and the SVC cluster, then this may have a serious impact on the host application running within the VMware guest Operating System – and the user may experience application I/O errors, data loss and/or filesystem corruption with a consequent need to perform filesystem recovery or recover data from backups. For example, in Windows, ‘delayed write errors’ may be reported by the operating system if a SAN path failure occurs, and files and folders that are being cached by Windows at the time of the path failure may be damaged.
    This behaviour is an inevitable consequence of using a non-redundant connection between host and storage and would apply to any single path host-storage configuration independent of host operating system or storage subsystem type.
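
    For illustration only, the single-path rule described above can be expressed as a simple check. The Python sketch below is not an SVC or switch tool; the zoning description it takes as input is a hypothetical data structure invented here solely to encode the rule (each VMware host may have exactly one path to any given SVC cluster).

      # Illustrative sketch only: checks the ESX 2.1 single-path rule described above.
      # 'zoning' is a hypothetical description of what each VMware host HBA port can see,
      # expressed as {host: {hba_port: set of (svc_cluster, node_port) pairs visible}}.

      def check_single_path(zoning):
          """For each VMware host and SVC cluster, require exactly one visible path."""
          problems = []
          for host, hba_ports in zoning.items():
              # Collect, per SVC cluster, every (HBA port, node port) pairing this host has.
              paths_per_cluster = {}
              for hba, visible in hba_ports.items():
                  for cluster, node_port in visible:
                      paths_per_cluster.setdefault(cluster, []).append((hba, node_port))
              for cluster, paths in paths_per_cluster.items():
                  if len(paths) != 1:
                      problems.append((host, cluster, paths))
          return problems

      # Example: 'esx1' is zoned correctly to cluster 'svc_a' but has two paths to 'svc_b'.
      zoning = {
          "esx1": {
              "hba0": {("svc_a", "node1_port1"), ("svc_b", "node1_port1")},
              "hba1": {("svc_b", "node2_port1")},
          },
      }
      for host, cluster, paths in check_single_path(zoning):
          print(f"{host} -> {cluster}: {len(paths)} paths (expected 1): {paths}")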


Host Based Multipathing

Co-existence of the SDD multipath driver with DS4000 (FAStT) RDAC is only supported on AIX and Windows platforms. For supported levels of RDAC, please refer to the Recommended Software Levels.

  • Specific levels of the DS4000 (FAStT) multipathing software (RDAC/AVT) have been tested for co-existence with SDD for AIX and Windows platforms. Therefore, DS4000 (FAStT) LUNs and SAN Volume Controller LUNs (vdisks) are supported at the same time on the same host.
  • DS4000 (FAStT) LUNs can be used as back end storage. Any DS4000 (FAStT) disk array may have some LUNs controlled by SAN Volume Controller and other LUNs under the direct control of a host, as long as they are in separate partitions and zoned separately.

  • Currently the DS4000 (FAStT) RDAC driver is not supported with SDD on Windows 2003.

SAN Fibre Networks

The list of switches and firmware levels supported by SAN Volume Controller can be found on the Supported Hardware List.

  • SAN Volume Controller must be set to run at either 1Gbit or 2Gbit on all nodes and ports – auto negotiation is not supported. (Note that this is a statement about the speed at which the SAN Volume Controller will operate, not what devices are supported – both 1Gbit and 2Gbit host and device attachment can be supported in either mode).

  • SAN Volume Controller was tested with a variety of switch configurations including simple redundant and Core-Edge topologies.
  • A maximum of 3 inter-switch link (ISL) hops is supported within a single cluster, in addition to the links connecting SAN Volume Controller, host or back-end storage ports to the fabric.
  • A maximum of 1 hop is supported between clusters involved in remote copy partnerships (multiple ISLs are allowed in this single hop), although 3 hops are still supported within each cluster.

  • Full mesh configurations are not supported.

  • SVC supports the use of PPRC intercluster links in configurations using the Cisco MDS 9506, MDS 9509 and MDS 9516 fabric switches. Successful test results have been obtained with packet latencies of up to 10 ms. Distance will depend on the type of network and number of hops but could typically be 100-150 km per ms.

  • CNT Fibre Channel Extenders are now supported with the following distance limitations:
  • The maximum one-way latency supported is 10 ms when using Brocade fabric, and 34 ms when using McData fabric. The relationship between latency and distance is dependent on the network and number of hops, and is likely to be about 100-150 km per ms.

  • McData Eclipse 1620 SAN Router is supported with the following distance limitation:
  • Intercluster links with latency of up to 20 ms. Distance will depend on the type of network and number of hops but could typically be 100-150 km per ms (a worked example of this conversion follows this list).

  • Use of different manufacturers’ switches in the same SAN fabric is not supported. Hence switch ‘interoperability’ modes that enable different switch types to communicate are also not supported, even when the switch types are from the same manufacturer. (An exception to this is the support for the IBM BladeCenter. See BladeCenter documentation for further information.)

  • A number of maintenance operations in SAN fabrics have been observed to occasionally cause I/O errors for Linux hosts. To avoid these errors, I/O on Linux hosts must be quiesced prior to doing any type of SAN reconfiguration activity, switch maintenance or SAN Volume Controller maintenance (see the later section for Concurrent Code Load restrictions).
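
For reference, the latency limits above can be converted to approximate distances using the 100-150 km per ms relationship quoted in this section. The Python sketch below is illustrative arithmetic only; actual achievable distance depends on the type of network and the number of hops.

    # Illustrative arithmetic only, using the 100-150 km per ms figure quoted above.
    KM_PER_MS_LOW, KM_PER_MS_HIGH = 100, 150

    def distance_range_km(latency_ms):
        """Approximate distance range (km) corresponding to a given latency limit (ms)."""
        return latency_ms * KM_PER_MS_LOW, latency_ms * KM_PER_MS_HIGH

    # Latency limits quoted in this section (ms).
    limits = {
        "Cisco MDS 95xx intercluster PPRC links": 10,
        "CNT Fibre Channel Extender (Brocade fabric)": 10,
        "CNT Fibre Channel Extender (McData fabric)": 34,
        "McData Eclipse 1620 SAN Router": 20,
    }

    for name, ms in limits.items():
        low, high = distance_range_km(ms)
        print(f"{name}: approximately {low}-{high} km at {ms} ms")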


Back-end Storage

For a list of the SAN Volume Controller supported storage and associated firmware levels, see the Supported Hardware List. The following are the storage-related limitations:

  • SAN Volume Controller allows a great deal of flexibility in creating virtual disks and mapping those to the back end storage. It is important that sufficient back end storage be configured to support the anticipated load.

  • Concurrent I/O and download of DS4000 (FAStT) drive and ESM (EXP 100/500/700) microcode is not supported.
  • The DS4000 (FAStT) capability of dynamically expanding controller LUNs is not supported by SAN Volume Controller.

  • When using any supported back-end storage for SAN Volume Controller in single-port attach mode, the following limitations apply:
  • This type of storage must not be used for Quorum disks.
  • This type of storage must not be used as a PPRC target and should not be used for active production data.
  • Recommended usage for this type of storage is as a FlashCopy target and for non-mission-critical data.

  • Concurrent maintenance is not supported on HPQ, HDS, or EMC storage.


  • Specific restrictions for EMC Symmetrix
  • Not supported as a boot device.
  • Only FBA devices can be mapped to SVC.
  • The following EMC special LU types cannot be mapped to SVC: BCV, SRDF targets, DRDF pair, DRV, VCM.
  • The SRDF and TimeFinder functions are not supported.
  • Specific restrictions for HDS Lightning
  • Quorum disk is not supported.
  • Command LUs cannot be mapped to SVC.



Hardware

SAN Volume Controller must be attached to UPSs to allow cached data to be saved in the event of a power loss. These dedicated UPSs must be configured such that each node in a node pair (I/O group) is connected to a different UPS. The SAN Volume Controller controls the operation of the UPS via the serial connections. The following restrictions apply to the use of the UPS (a consistency-check sketch follows the list):

  • UPSs may not be shared across multiple clusters.
  • One UPS supports a maximum of 2 nodes in a cluster. For clusters with more than 4 nodes, a second set of UPSs is required.
  • Only the top 3 serial ports of the UPS may be used.
  • The UPS must only be used to provide power to the SAN Volume Controller nodes.
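
For illustration only, the UPS cabling rules above can be checked with a simple script. The Python sketch below is not an SVC tool; the layout description is a hypothetical data structure invented here to encode the rules in this section (no UPS shared across clusters, at most 2 nodes per UPS, and the two nodes of an I/O group on different UPSs).

    # Illustrative sketch: checks a planned UPS cabling layout against the rules above.
    # 'nodes' is a hypothetical list of (cluster, io_group, node_name, ups_name) tuples.

    def check_ups_layout(nodes):
        problems = []
        ups_clusters, ups_nodes, iogrp_ups = {}, {}, {}
        for cluster, io_group, node, ups in nodes:
            ups_clusters.setdefault(ups, set()).add(cluster)
            ups_nodes.setdefault(ups, set()).add(node)
            iogrp_ups.setdefault((cluster, io_group), []).append(ups)
        # UPSs may not be shared across multiple clusters.
        for ups, clusters in ups_clusters.items():
            if len(clusters) > 1:
                problems.append(f"{ups} is shared by clusters {sorted(clusters)}")
        # One UPS supports a maximum of 2 nodes.
        for ups, members in ups_nodes.items():
            if len(members) > 2:
                problems.append(f"{ups} powers {len(members)} nodes (maximum is 2)")
        # Each node of an I/O group must be connected to a different UPS.
        for (cluster, io_group), ups_list in iogrp_ups.items():
            if len(ups_list) != len(set(ups_list)):
                problems.append(f"{io_group} in {cluster} has both nodes on one UPS")
        return problems

    # Example layout: io_grp0 is cabled correctly; io_grp1 has both nodes on ups2.
    layout = [
        ("cluster1", "io_grp0", "node1", "ups1"),
        ("cluster1", "io_grp0", "node2", "ups2"),
        ("cluster1", "io_grp1", "node3", "ups2"),
        ("cluster1", "io_grp1", "node4", "ups2"),
    ]
    print("\n".join(check_ups_layout(layout)) or "layout OK")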


Functions

  • Vdisk growth/shrinkage. Where a particular host operating system supports disk expansion/shrinkage, SAN Volume Controller also supports these functions through its vdisk growth/shrinkage capability.
    For other operating systems, use of this function may cause data loss.

  • Configuration Backup
  • A background task is scheduled to automatically make a configuration backup on a daily basis (at 1am local time).
  • Cluster maintenance activity (e.g. CCL, removing/replacing nodes) should not be executed whilst a backup is in progress. (On average, a configuration backup should take no more than 15 minutes.)
  • It is important to ensure that a manual configuration backup is run following any configuration changes so that an up to date copy is available for recovery of the cluster.
  • IBM recommends that the CLI, rather than the GUI, be used for configuration restore.
    Note that restoring the SVC configuration will destroy all data. Restoration of the configuration should be coordinated with the IBM support center to preserve data.


SAN Volume Controller Concurrent Code Load (CCL)

  • SAN Volume Controller Concurrent Code Load is now supported on the following operating systems whilst running I/O and advanced function – FlashCopy, data migration and PPRC. Note that in some cases only specific operating system levels are supported in this way – these are detailed where necessary:
    1. AIX
    2. Windows 2000/2003.
    3. Solaris 9 (on PCI based systems only).
    4. HP-UX (the adapter configuration settings must include ‘Physical Volume timeout’ set to 90 seconds to ensure there are no I/O failures).

  • CCL restrictions. I/O errors have occasionally been observed during CCL with hosts running the operating system levels below. All I/O must be quiesced on these systems before CCL is started and must not be restarted until the code load is complete.
    1. Linux RH EL 2.1 AS and 3 AS
    2. Solaris 9 on SBus based systems
    3. SLES 8
  • Prior to starting a CCL, the SAN Volume Controller error log must be checked and any error conditions must be resolved and marked as fixed. All host paths must be online, and the fabric must be fully redundant with no failed paths. If PPRC is being used, the same checks must be made on the remote cluster.

GUI Support

  • Browser support is limited to Internet Explorer 6.2 SP2, Netscape 6.2 and Netscape 7.0 (AIX only).
 
Cross Reference information
Segment Product Component Platform Version Edition
Storage Virtualization IBM TotalStorage SAN Integration Server V1.2.1 N/A V1.2.1