Reproduction in any manner whatsoever without the written permission of Dell Computer Corporation is strictly forbidden. Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and Dell OpenManage are trademarks of Dell Computer Corporation; ClusterX is a registered trademark and VERITAS is a trademark of VERITAS Corporation;...
Preface

This guide provides information about the Dell PowerEdge Cluster FE100/FL100 Datacenter Server solution. This information includes procedures for installing, configuring, and troubleshooting the hardware and software components of PowerEdge Cluster FE100/FL100 Datacenter Server configurations. The chapters and appendixes in this guide are summarized as follows:
Warranty and Return Policy Information

Dell Computer Corporation (“Dell”) manufactures its hardware products from parts and components that are new or equivalent to new in accordance with industry-standard practices. See your Dell PowerEdge System Information document for complete warranty information for your system.
• Dell PowerEdge Expandable RAID Controller Battery Backup Module User's Guide. • The Microsoft Cluster Server Administrator's Guide for the Windows 2000 Cluster Service documentation describes the clustering software used on PowerEdge Cluster FE100/FL100 Datacenter. • The Microsoft Windows 2000 Datacenter Server documentation describes how to install (if necessary), configure, and use the Windows 2000 Datacenter Server operating system.
• Filenames and directory names are presented in lowercase bold. Examples: autoexec.bat and c:\windows
• Syntax lines consist of a command and all its possible parameters. Commands are presented in lowercase bold; variable parameters (those for which you substitute a value) are presented in lowercase italics; constant parameters are presented in lowercase bold.
C H A P T E R 1
Getting Started

This chapter provides an overview of the following information for the Dell™ PowerEdge™ Cluster FE100/FL100 Datacenter Server configuration:
• Microsoft® Windows® 2000 Datacenter Server operating system
•...
Overview of a Dell PowerEdge Cluster FE100/FL100 Datacenter Server Configuration The PowerEdge Cluster FE100/FL100 Datacenter Server is a cluster solution that implements 2-node to 4-node clustering technology based on the Microsoft Windows 2000 Cluster Service (MSCS) software incorporated within the Windows 2000 Datacenter Server operating system.
Failover and Failback Support” in Chapter 6, “Configuring the System Software.”

SAN-Attached Cluster Configuration

A PowerEdge Cluster FE100/FL100 Datacenter Server configuration is a SAN-attached cluster configuration where all cluster nodes are attached to a single PowerVault™ storage system or to multiple PowerVault storage systems through a Dell PowerVault SAN using a redundant Fibre Channel switch fabric.
Figure 1-1. SAN-Attached Cluster Configuration PowerEdge Cluster FE100/FL100 Identification The Dell PowerEdge Fibre Channel clusters are configured and identified by the private network connection (cluster interconnect) that connects the cluster nodes together— FE (Fibre Channel Ethernet) and FL (Fibre Channel Low Latency)—and the type of storage devices in the cluster configuration.
Table 1-1 provides an overview of the differences between PowerEdge Cluster FE100 and FL100 Datacenter Server configurations.

Table 1-1. PowerEdge Cluster FE100/FL100 Configurations

Cluster Solution | Cluster Interconnect Type | Cluster Interconnect Network Interface Controller (NIC)
PowerEdge Cluster FE100 | Fast Ethernet | Broadcom NetXtreme...
Cluster Service.

NOTE: For more information on failover, failback, and cluster groups, see “Configuring Failover and Failback Support” in Chapter 6, “Configuring the System Software.”

PowerEdge Cluster FE100/FL100 Datacenter Server Failover Options

The PowerEdge FE100/FL100 Datacenter Server configuration provides the following failover options:
•...
/active failover solution where running applications from a failed node migrate to multiple nodes in the cluster. This active/active type of failover provides the following features:

Advantage:
• Automatic failover and load-balancing between the cluster nodes.
Disadvantage:
• Must ensure that the failover cluster nodes have ample resources available to handle the additional workload.

Figure 1-3 shows an example of a multiway failover configuration, with cluster nodes 1 through 4 running applications A, B, and C.

Figure 1-3.
(based on cluster node resource availability). This type of solution provides the following features:

Advantages:
• Adjustable resource allocation.
• Added flexibility.

Disadvantage:
• Solution is not automatic.

Figure 1-5 shows an example of an N-way migration solution.
Operating system and system management software

Cluster Nodes

Cluster nodes require the following hardware resources:
• Two to four supported Dell PowerEdge systems, each with at least two microprocessors.
• For each server, a minimum of 2 GB random access memory (RAM) and two HBAs.
For each cluster, a network switch or Giganet cLAN cluster switch to connect the cluster nodes.

NOTE: If you have a two-node PowerEdge Cluster FE100/FL100 Datacenter Server configuration that will not be expanded to a three- or four-node cluster, a crossover cable or cLAN cable can be used to connect the nodes rather than a private network switch.
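The node minimums above can be expressed as a quick validation sketch. Python is used purely for illustration — the guide ships no such tool, and the dictionary layout is a hypothetical representation of a node's inventory:

```python
def meets_node_minimums(nodes):
    """Check a proposed configuration against the cluster-node minimums
    stated above: two to four nodes, each with at least two processors,
    2 GB of RAM, and two HBAs. (Illustrative sketch only.)"""
    if not 2 <= len(nodes) <= 4:
        return False
    return all(
        n["cpus"] >= 2 and n["ram_gb"] >= 2 and n["hbas"] >= 2
        for n in nodes
    )

# Hypothetical two-node inventory that meets the minimums.
nodes = [{"cpus": 2, "ram_gb": 2, "hbas": 2},
         {"cpus": 4, "ram_gb": 4, "hbas": 2}]
print(meets_node_minimums(nodes))       # True
print(meets_node_minimums(nodes[:1]))   # False — a single node is not a cluster
```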
PowerEdge Cluster FE100/FL100 Datacenter Server Support Configuration Requirements

The following tables provide configuration information for the following cluster components and configurations:
• Cluster nodes
• Shared storage systems
• SAN-attached clusters

Configuration Requirements for the PowerEdge Cluster FE100/FL100 Datacenter Server

Table 1-6 provides the cluster component requirements for a PowerEdge Cluster FE100/FL100 Datacenter Server configuration.
Service Pack and hotfixes (one licensed copy per cluster node) Shared Storage Requirements for the PowerEdge Cluster FE100/FL100 Datacenter Server Configuration Table 1-7 provides the clustering requirements for the PowerEdge Cluster FE100/ FL100 Datacenter Server. Table 1-7. PowerEdge Cluster FE100/FL100 Shared Storage Requirements...
SAN: 3.0
HBA: QLogic QLA2200/66 with firmware version 1.45 and driver version 7.04.08.02
HBA failover driver: Dell OpenManage ATF version 2.3.2.5
Fibre Channel switch: PowerVault 51F Fibre Channel switch with firmware version 2.1.7; PowerVault 56F Fibre Channel switch with firmware version 2.1.7...
The following chapter provides an overview of installing Microsoft Windows 2000 Datacenter Server on the PowerEdge Cluster FE100/FL100 Datacenter Server. To install Windows 2000 Datacenter Server on the PowerEdge Cluster FE100/FL100 Datacenter Server cluster, perform the following steps: Add network interface controllers (NICs), host bus adapters (HBAs), redundant
Update the miniport driver for the Fibre Channel HBAs in each node. Install the QLogic Fibre Channel configuration software on each node and reboot. 10. Install Dell OpenManage Application Transparent Failover (ATF) on each node and reboot. 11. Install Dell OpenManage Managed Node (Data Agent) on each node.
WARNING: Hardware installation should be performed only by trained service technicians. Before working inside the computer system, see the safety instructions in your Dell PowerEdge System Information document to avoid a situation that could cause serious injury or death. You may need to add peripheral devices and expansion cards to the system to meet the minimum cluster requirements for a PowerEdge FE100/FL100 Datacenter Server configuration.
Installation and Troubleshooting Guide for your PowerEdge system.

Configuring Fibre Channel HBAs on Separate PCI Buses

Dell recommends configuring Fibre Channel HBAs on separate PCI buses. While configuring the adapters on separate buses improves availability and performance, this recommendation is not a requirement.
• Protecting your cluster from power failure • Cabling your mouse, keyboard, and monitor in a Dell rack Cluster Cabling Components Dell PowerEdge Cluster FE100/FL100 Datacenter Server configurations require cabling for the Fibre Channel storage systems, cluster interconnects, client network connections, and power connections.
Fibre Channel Copper Connectors

To connect a PowerVault storage system to a PowerEdge system (cluster node), Dell uses the DB-9 connector and the high-speed serial data connector (HSSDC). The DB-9 connector, shown in Figure 4-1, attaches to the PowerVault disk-processor enclosure (DPE) and PowerVault disk-array enclosure (DAE).
Using NICs in Your Public Network

Connection to the public LAN is provided by a Broadcom NetXtreme Gigabit Ethernet or Intel PRO/1000 Gigabit Server Adapter installed in each node. Supported NICs running Transmission Control Protocol/Internet Protocol (TCP/IP) may be used to connect the Dell PowerEdge Cluster FE100/FL100 Datacenter Server to the public network.
Figure 4-3. Configuration Using Broadcom NetXtreme Gigabit NICs for the Private Network

Using Giganet cLAN for the Private Network

PowerEdge Cluster FE100/FL100 Datacenter Server systems can be connected to each other using the following Giganet high-speed interconnect products:
•...
Figure 4-4. Configuration Using a Giganet cLAN NIC for the Private Network

Protecting Your Cluster From Power Failure

Dell recommends the following guidelines to protect your cluster configuration from power-related failures:
• Use uninterruptible power supplies (UPS) for each cluster node
•...
Cabling Your Mouse, Keyboard, and Monitor in a Dell Rack

If you are installing a PowerEdge Cluster FE100/FL100 Datacenter Server configuration in a Dell rack, see the Dell PowerEdge rack installation documentation for instructions on cabling each cluster node's mouse, keyboard, and monitor to the mouse/keyboard/monitor switch box in the rack.
C H A P T E R 5
Configuring Storage Systems (Low-Level Configuration)

This chapter provides the necessary steps for configuring the Dell PowerVault shared storage hard-disk drives attached to the PowerEdge Cluster FE100/FL100 Datacenter Server configuration.

NOTES: Prior to installing the operating system, be sure to make the necessary low-level software configurations (if applicable) to your PowerEdge FE100/FL100 Datacenter Server cluster.
See “Cluster Quorum Resource” in Chapter 6, “Configuring the System Software,” for more information on the quorum resource.

NOTICE: Dell recommends that you use a RAID level other than RAID 0 for your PowerVault shared storage system. RAID 0 does not provide the level of availability required for the quorum resource.
Dell OpenManage software and using the Windows 2000 Disk Management tool)
• Verifying cluster readiness
• Configuring the Dell OpenManage Managed Node (Data Agent) for a cluster environment and cluster failover
• Configuring failover and failback support

Preparing for Microsoft Windows 2000
Member servers—A cluster node that is not a domain controller, which usually provides user resources such as file, application, database, or remote access server (RAS) services. The Dell PowerEdge Cluster FE100/FL100 Datacenter Server configuration supports the following domain assignments for each cluster node:
•...
Set the private network to Use for internal communications only.

NOTE: Dell suggests that you rename your private network to avoid confusion.

Set the public network to All communications. This setting provides a redundant path for the cluster-to-cluster communication in the event the private network fails.
The private network must use a different IP subnet or a different network ID than the LAN subnet(s) used for a client connection. Dell recommends using the static IP address assignments in Table 6-1 for the NICs assigned to the private network.
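The subnet rule above — the private interconnect must not share a network ID with the client LAN — can be checked with a short sketch. This is illustrative Python, not part of the Dell tooling; the sample addresses are hypothetical:

```python
import ipaddress

def on_distinct_subnets(private_ip, public_ip, mask="255.255.255.0"):
    """Return True if the two NIC addresses fall on different subnets
    (different network IDs) under the given subnet mask."""
    prefix = ipaddress.IPv4Network(f"0.0.0.0/{mask}").prefixlen
    private_net = ipaddress.ip_interface(f"{private_ip}/{prefix}").network
    public_net = ipaddress.ip_interface(f"{public_ip}/{prefix}").network
    return private_net != public_net

# A private interconnect on 10.0.0.x does not collide with a 192.168.1.x LAN.
print(on_distinct_subnets("10.0.0.1", "192.168.1.1"))    # True
# Putting both NICs on the same subnet violates the rule above.
print(on_distinct_subnets("192.168.1.5", "192.168.1.1")) # False
```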
2000 Datacenter Server Network

The following sections describe an example of using Windows 2000 to configure your PowerEdge Cluster FE100/FL100 Datacenter Server network.

NOTE: The IP addresses for the public network, default gateway, domain name system (DNS) servers, and Windows Internet naming service (WINS) servers used here are examples and are not representative of actual addresses that should be used for your environment.
For example, in Table 6-2, the IP address for Cluster Node 1 is 192.168.1.1. Enter the IP addresses for the primary and secondary DNS servers. For example, in Table 6-2, the IP addresses for the primary and secondary DNS servers are 192.168.1.21 and 192.168.1.22, respectively.
(HBA). Before you configure the shared storage system, you must update this driver with the latest driver version. See “PowerEdge Cluster FE100/FL100 Datacenter Server Support Configuration Requirements” in Chapter 1, “Getting Started,” for information on the correct driver version for your HBA.
To manage and configure the storage systems attached to the Dell PowerEdge Cluster FE100/FL100 cluster nodes, you must install Dell OpenManage storage management software. Table 6-3 lists the Dell OpenManage management software required for the Dell PowerEdge Cluster FE100/FL100 Datacenter configuration and where you should install the software.
After you install the Dell OpenManage management software, you must bind the LUNs in the shared storage system that is attached to the cluster. In some cases, the LUNs may have been preconfigured by Dell. However, you must install the management software and verify that your LUN configuration exists.
For more information on installing Dell OpenManage ATF, Dell OpenManage Managed Node Agent, Dell OpenManage Data Supervisor, or Dell OpenManage Data Administrator, see the PowerVault documentation that came with the storage system.

Configuring Shared Drives Using the Windows 2000 Disk
Select Create Partition, and then click Next. Select Primary partition, and then click Next. In the next dialog box, Dell recommends choosing all available disk space. NOTE: For additional information on partition size recommendations, see “Cluster Quorum Resource,” found later in this chapter.
If one of the cluster nodes fails, any changes to the cluster configuration database are logged to the quorum disk. This logging process occurs to ensure that the node that gains control of the quorum disk can access an up-to-date version of the cluster configuration database.
Adding Additional Applications and Data to the Quorum Disk

The quorum disk, by default, is installed in the “Cluster Group.” Dell recommends that you do not install additional applications into the Cluster Group. The Cluster Group also contains a network name and IP address resource, which is used to manage the cluster.
All recovery groups—and therefore the resources that comprise the recovery groups—must be online (or in a ready state) for the cluster to function properly.
Configuring the Dell OpenManage Managed Node (Data Agent) for a Cluster Environment

To configure the Dell OpenManage Managed Node (Data Agent) in a cluster, perform the following steps:

Open the Agent Configurator. In the Host Description field, type the description of the server.
PowerVault storage system. To install the Data Agent as a cluster resource, perform the following steps: Confirm that Dell OpenManage Managed Node (Data Agent) is installed on all of the cluster nodes and is configured to start manually.
Configuring Failover and Failback Support When an individual application or user resource (also known as a cluster resource) fails on a cluster node, Cluster Service will detect the application failure and try to restart the application on the cluster node. If the restart attempt reaches a preset threshold, Cluster Service brings the running application offline, moves the application and its resources to another cluster node, and restarts the application on the other cluster node(s).
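The restart-threshold-then-failover behavior described above can be sketched conceptually. The names, threshold values, and return shapes below are hypothetical illustrations, not the actual Cluster Service implementation:

```python
def handle_resource_failure(restart_count, restart_threshold, nodes, current):
    """Conceptual sketch of the behavior described above: restart the failed
    application on the same node until the threshold is reached, then take it
    offline and move it to another cluster node."""
    if restart_count < restart_threshold:
        return ("restart", current)      # retry on the same node
    # Threshold reached: fail over to the next available node.
    others = [n for n in nodes if n != current]
    if not others:
        return ("offline", None)         # no surviving node can host it
    return ("failover", others[0])

print(handle_resource_failure(1, 3, ["node1", "node2"], "node1"))  # ('restart', 'node1')
print(handle_resource_failure(3, 3, ["node1", "node2"], "node1"))  # ('failover', 'node2')
```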
Preferred Owners pane are listed in order of failover attempt. Reorder the disk groups by selecting a disk group and clicking the Up and Down arrows on the right side of the window. Click OK. The Properties window appears. Click OK.
• Microsoft Windows 2000 Cluster Administrator
• Dell OpenManage Cluster Assistant With ClusterX

Microsoft Cluster Administrator

Cluster Administrator is a built-in tool in Windows 2000 Datacenter Server for configuring and administering a cluster. The following procedures describe how to run Cluster Administrator locally on a cluster node and how to install Cluster Administrator on a remote console.
Cluster Administrator on a remote client.

NOTE: The Windows NT Server, Enterprise Edition 4.0 Cluster Administrator may generate error messages if it detects Windows 2000 cluster resources. Dell strongly recommends using Windows 2000 clients and the Windows 2000 Administrator Pack for cluster administration and monitoring.
Simple Network Management Protocol (SNMP) enablement for cluster events on cluster nodes See the installation instructions included with Dell OpenManage Cluster Assistant With ClusterX. Contact your Dell representative for more information about Dell OpenManage Cluster Assistant With ClusterX. NOTE: Dell OpenManage Cluster Assistant With ClusterX version 3.0.1 with Service Pack 2 or later is required for Windows 2000 Datacenter Server support.
Upgrading Your PowerEdge System to a PowerEdge Cluster FE100/FL100 Datacenter Server Configuration To properly upgrade your system to a PowerEdge Cluster FE100/FL100 Datacenter Server configuration, perform the following procedures: Ensure that your existing system configuration meets the minimum configuration required for clustering and install the required hardware and software clustering components as needed.
Using non-Dell hardware or software components may lead to data loss or corruption. Install the required hardware and network interface controllers (NICs). Set up and cable the system hardware. Install and configure the Windows 2000 Datacenter Server operating system with the latest Service Pack and hotfixes (if applicable).
C H A P T E R 9
Maintaining the Cluster

This chapter provides information on the following cluster maintenance procedures:
• Connecting to your attached PowerVault storage systems using Dell OpenManage storage management software
• Using the QLogic Fibre Channel Configuration software for PowerVault 65xF storage processor replacement
•...
If the previous two options are not available, specify the cluster name in the Data Administrator's Host Administration window.

NOTE: Do not run Data Administrator if you are using Dell OpenManage Data Management Station.

See the Dell OpenManage Data Agent Installation and Operation Guide, the Dell OpenManage Data Administrator Installation and Operation Guide, and the Dell PowerVault Storage Area Network (SAN) Administrator’s Guide for instructions on...
Connecting to Data Agent Using Data Supervisor To ensure that the Dell OpenManage Data Supervisor can connect to the Data Agent regardless of which node is running Data Agent, perform the following steps: Start Data Supervisor. In the Dell OpenManage Data Supervisor Query dialog box, enter the name of the cluster running Data Agent.
Type atf_restore atf_sp0 and press <Enter>. The failed access path is restored. For more information on using Dell OpenManage ATF, see the Dell OpenManage ATF Operation Guide and the Dell PowerVault Storage Area Network (SAN) Administrator’s Guide.
If you cannot determine the RAID level using Disk Management, you can use the Dell OpenManage Data Agent Configurator to view the RAID configuration of each volume. To view the RAID configuration of a volume using Data Agent Configurator, perform the following steps: Start the Dell OpenManage Data Agent Configurator.
• Cluster Service is installed on all cluster nodes.
• NICs in each cluster node are configured properly.

See Table 6-2 in “IP Addresses,” in Chapter 6, “Configuring the System Software,” for a sample IP configuration scheme of Windows 2000 Datacenter Server.
When the failed node is restarted, the cluster nodes reestablish their connection and the Cluster Administrator changes the failed cluster node icon back to blue to show that the cluster node is back online.
Uninstalling Cluster Service You may need to uninstall the Cluster Service for cluster node maintenance, such as upgrading the node and/or replacing the node. Before you can uninstall MSCS from a node, perform the following steps: Take all resource groups offline or move them to the other node. Stop the Cluster Service running on the node that you want to uninstall.
Server or Internet Information Server [IIS]) on the new cluster node (if a tape backup is not available).
18. Install any additional service packs or hotfixes.
19. Test the failover capabilities of the cluster resources on the new cluster node.
NOTES: See the Dell PowerVault SAN documentation for more information. A PowerEdge Cluster FE100/FL100 Datacenter Server configuration cannot coexist on the Fibre Channel switch fabric with other clusters or stand-alone servers. PowerVault SAN Components for PowerEdge Cluster...
NOTE: See the Dell PowerVault SAN documentation and the appropriate SAN component documentation for configuration information.

SAN-Attached Clusters

SAN-attached clusters are cluster configurations where redundant Fibre Channel HBAs are cabled to a redundant Fibre Channel switch fabric. Connecting the cluster to the storage system is achieved through the switch fabric.
Fibre Channel switches are linked together using interswitch links (ISLs). These ISLs use two high-speed serial data connectors (HSSDC) or two subscriber connectors (SC) to connect the switches. Each ISL is considered a “hop.” While a Fibre Channel
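Counting “hops” between two switches amounts to a shortest-path search over the fabric's ISL topology. The following sketch is illustrative Python with hypothetical switch names, not a Dell utility:

```python
from collections import deque

def hop_count(isls, src, dst):
    """Breadth-first search over inter-switch links (ISLs).
    Each ISL traversed counts as one hop, as described above.
    Returns None if no path exists through the fabric."""
    graph = {}
    for a, b in isls:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        switch, hops = queue.popleft()
        if switch == dst:
            return hops
        for neighbor in graph.get(switch, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return None

# Hypothetical three-switch fabric cascaded sw1—sw2—sw3.
links = [("sw1", "sw2"), ("sw2", "sw3")]
print(hop_count(links, "sw1", "sw3"))  # 2
```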
NOTE: Each segment may vary in components and complexity. Figure 10-2 shows a SAN-attached PowerEdge Cluster FE100/FL100 Datacenter con- figuration using three networking segments: public network, private network, and SAN.
[Figure: cluster nodes (PowerEdge servers) connected through a LAN/WAN interconnect switch, redundant Fibre Channel switches, a Fibre Channel bridge, a PowerVault 130T, and a PowerVault storage system.]

Figure 10-2. SAN-attached Clusters Using a Public, Private, and SAN Network
Using Dell PowerVault Fibre Channel Switches

You can connect cluster nodes to the PowerVault shared storage system by using redundant PowerVault Fibre Channel switches. When cluster nodes are connected to the storage system through Fibre Channel switches, the cluster configuration is technically attached to a SAN.
SCSI bridge to support the PowerVault 130T DLT library on PowerEdge Cluster FE100 Datacenter Server configurations. Figure 10-2 shows a supported PowerEdge Cluster FE100/FL100 Datacenter configu- ration using redundant Fibre Channel switches, Fibre Channel bridge, and PowerVault 130T DLT library. In this configuration, each of the cluster nodes is attached to the backup device and the backup local disk resources, as well as to the owned cluster disk resources.
Cluster FE100/FL100 Datacenter Server configurations.

Using the QLogic Fibre Channel Configuration Utility for Storage Processor Failure

For more information on installing the QLogic Fibre Channel Configuration Utility, see the Dell PowerVault Systems Storage Area Network (SAN) Installation and Troubleshooting Guide.
A P P E N D I X A
Troubleshooting

This appendix provides troubleshooting information for the Dell PowerEdge Cluster FE100/FL100 Datacenter configurations. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem. Table A-2 is specific to Windows 2000 cluster configurations.
System Software” in Chapter 6 of this guide for information about assigning the network IPs. Troubleshooting Windows 2000 This section provides troubleshooting information for Dell PowerEdge Cluster FE100/ FL100 Datacenter Server configurations specific to the Windows 2000 operating system. Table A-2. Windows 2000 Troubleshooting...
In addition, Dell recommends that you have a copy of the form available any time you call Dell for technical support.
Table B-1. PowerEdge Cluster FE100/FL100 Configuration Matrix

Cluster Type: PowerEdge Cluster FE100/FL100 Datacenter Server
Cluster Name:
Installer:
Date Installed:
Applications:
Location:
Notes:

Node | PowerEdge Server Model | Windows 2000 Name
Node 1 |  |
Node 2 |  |
Node 3 |  |
Node 4 |  |

Storage Array Description (Drive letters, RAID types, applications/data installed)
Datacenter Server. Make sure each of the procedures listed below is performed correctly. Pre-Installation Settings Confirm that both nodes and the storage system meet the PowerEdge Cluster FE100/FL100 minimum configuration requirements. Place NICs that support hot-plug peripheral component interconnect (HPPCI) in PCI slots that support HPPCI, if available and if supported for the configuration.
Windows 2000 Datacenter Server Operating System Installation and Configuration Install Windows 2000 Datacenter Server, including: Network name for node 1_________________________________ Network name for node 2_________________________________ Network name for node 3_________________________________ Network name for node 4_________________________________ Select Cluster Service during initial installation. Node 1 network IP configuration: Public network IP Address: ___.______._____.____ Subnet Mask: 255.______._____._____...
Name each of the installed network segments:
Name of network 1 is Public (for local area network [LAN] interconnect).
Name of network 2 is Private (for node-to-node interconnect).
Name of network 3 (for a connection to an additional public network).
Management IP Address: ______.______._____._____ Subnet Mask: 255.______._____._____
Join the cluster.

Post-Microsoft Cluster Service Installation
Reapply the latest Windows 2000 service pack.
Install Dell OpenManage Cluster Assistant With ClusterX on management client (optional).
Install and configure cluster application programs.
Dell PowerEdge Cluster FE100/FL100 Installer Data Sheet and Checklist for an Upgrade Installation to Windows 2000 Datacenter Server Instructions: Before configuring the systems for clustering, use this checklist to gather information and prepare your systems for a successful installation. This data sheet assumes that Windows 2000 Datacenter Server was factory or customer installed on each node.
Page 94
Name of network 2 is Private (for node-to-node interconnect): Cluster IP Address: _____.______._____._____ Subnet Mask: 255.______._____._____

Post-Microsoft Cluster Service Installation
Verify the functionality of the cluster.
Install and set up your cluster application programs.
Install Dell OpenManage Cluster Assistant With ClusterX on management client (optional).