A low-cost automatic failover and disaster recovery scenario for the Microsoft File service without using expensive SAN replication technology

Note before you start: If you plan to implement this solution in a production environment, please visit the “Information about Microsoft support policy for a DFS-R and DFS-N deployment scenario” link (http://support.microsoft.com/kb/2533009) and understand the Microsoft support policy regarding the solution.

The objective of this post is to provide a low-cost, highly available disaster recovery solution for Microsoft file services. Generally, hosting the file service on a Microsoft failover cluster is sufficient to provide high availability of user data. However, for organizations whose data availability is business critical, or that use a VDI solution where user profiles and data are stored on file servers, the file service should remain available even if the cluster itself is offline due to a disaster in the datacenter.

The following design provides file service availability through the disaster recovery site without utilizing any expensive SAN storage replication technique.

In this scenario, we require a two-node Windows failover cluster in both the production and disaster recovery sites to host the file service. Each cluster is connected to its own local SAN storage within its site.

[Figure: FileCluster]

A shared folder configured on a Client Access Point will be used as a target folder for the DFS Namespace. Because the Client Access Point can withstand a cluster node failure, the shared folder remains available even when one of the cluster nodes is offline for maintenance.

[Figure: DFS01]

We need to take this scenario a stage further so that the service remains available even when the whole production datacenter is down. A multi-site (geo) cluster using SAN replication would be very expensive in license cost and complex to implement. By tweaking the built-in replication feature (DFS-R) of the Windows Server operating system, the same requirement can be met without any additional cost.

Step 1: Configure a domain-based DFS namespace and add both the production and DR servers as namespace servers.

[Figure: DFS02]

Step 2: Create a shared folder on the production Client Access Point and link it to the DFS namespace above.

Step 3: Create a shared folder on the DR Client Access Point and add it as a secondary folder target for the production shared folder.

[Figure: DFS04]

Step 4: Run through the New Replicated Folders Wizard to configure the shared folder in the DR site as a full mesh replica.

[Figure: DFS03]

Step 5: Set the target priority by configuring the referral order, and then disable the DR folder target.

If both targets are enabled, users may start writing to different locations despite the target priority, which causes the DFS Replication service to encounter conflicting data and sharing violations.

[Figure: DFS05]

Disabling one of the folder targets leaves only one target enabled in the namespace, ensuring that users always hit that folder target; the disabled target will have no SMB sessions established by end users.

[Figure: DFS06]
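
On Windows Server 2012 and later, this step can also be scripted with the DFSN PowerShell module instead of the management console. A minimal sketch, assuming the namespace path used later in this post and a hypothetical production share \\ACME-CAP01\Data:

# Prefer the production target when clients request referrals
Set-DfsnFolderTarget -Path "\\acme.com\UserData\Data" -TargetPath "\\ACME-CAP01\Data" `
    -ReferralPriorityClass GlobalHigh

# Disable the DR target so end users never establish SMB sessions to it
Set-DfsnFolderTarget -Path "\\acme.com\UserData\Data" -TargetPath "\\ACME-CAP01-DR\Data" `
    -State Offline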

Data availability during disaster:

Under normal working conditions, users are always connected to the folder target in the production site, and the data is replicated to the shared folder in the DR site through the DFS Replication configuration. If the production datacenter goes down, the folder target in the DR site must be enabled; whenever users then access the folder in the DFS namespace, they are redirected to the active folder target in the DR site.

Enabling the standby folder target can be automated using the File Services Management Pack for Operations Manager, giving a smooth, automatic failover of the namespace folder: the management pack monitors the status of the production folder target and, through a remediation task, enables the standby folder target automatically.

The following command sets a folder target’s referral status to “Enabled”:

dfsutil.exe property state online "<UNC of DFS namespace folder>" "<UNC of shared folder>"

Example: dfsutil.exe property state online "\\acme.com\UserData\Data" "\\ACME-CAP01-DR\Data"
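
The same switch has a PowerShell equivalent in the DFSN module on Windows Server 2012 and later; a minimal sketch:

# Bring the standby DR target online (equivalent to the dfsutil command above)
Set-DfsnFolderTarget -Path "\\acme.com\UserData\Data" -TargetPath "\\ACME-CAP01-DR\Data" -State Online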

Converting the unsupported scenario to a supported one:

As per the Microsoft support policy for a DFS-R and DFS-N deployment mentioned above, configuring one namespace folder with multiple folder targets is not supported, even if only one folder target is enabled at a time. In that case, simply delete the secondary folder target (do not delete the replication) and use the dfsutil.exe target add command to recreate the link to the secondary folder target when failover is needed.

The following command adds a folder target to the namespace:

dfsutil.exe target add "<UNC of DFS namespace folder>" "<UNC of shared folder>"

Example: dfsutil.exe target add "\\acme.com\UserData\Data" "\\ACME-CAP01-DR\Data"
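
Again, the DFSN module offers an equivalent cmdlet if you prefer PowerShell; a minimal sketch:

# Recreate the link to the secondary folder target
New-DfsnFolderTarget -Path "\\acme.com\UserData\Data" -TargetPath "\\ACME-CAP01-DR\Data"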

As explained earlier, you can use any system monitoring solution to automate the above process by configuring an auto-remediation task.

Storage Spaces Technology Simplified

Storage Spaces enables the use of multiple disks of different sizes and interfaces by pooling them together so that the operating system sees them as one large disk. USB, SATA, SCSI, iSCSI, and SAS are among the supported interfaces. A group of physical disks is called a storage pool, and one or more storage spaces can be created on top of the pool. Storage spaces appear as virtual disks, which can be further configured into one or more logical volumes and presented to the operating system.

[Figure: SP-Pool]

As shown in the picture above, a storage pool can be created from a set of physical disks connected either to a standalone Windows Server 2012 machine or to Windows Server 2012 failover cluster nodes. Any available physical disk that is not formatted is an eligible candidate for the storage pool. Once the operating system detects eligible directly connected physical disks, it creates a Primordial storage space.

Consider the Primordial storage space a holding pool for all unallocated disks connected to the currently managed server. You must create a new storage pool by grabbing disks from the Primordial pool, because the Primordial pool itself will not allow you to create virtual disks.

From the storage pool, one or more virtual disks can be created, which are later presented to the operating system as one or more volumes. The virtual disks provide different storage layouts depending on the resiliency setting: Simple, Mirror, or Parity. These layouts resemble RAID 0, RAID 1, and RAID 5; however, Storage Spaces does not create a mirror copy on a specific mirror disk as in RAID 1, nor does it write parity in a fixed sequence as in RAID 5. Instead, the technology picks disks dynamically when writing the mirrored or striped blocks.

One or more volumes can be created from the available virtual disk, depending on requirements. Volumes created on a storage space support the NTFS and ReFS formats; however, volumes formatted with ReFS cannot be added to Cluster Shared Volumes (CSV).
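
The whole sequence (pool, virtual disk, volume) can also be driven with the Storage module cmdlets that ship with Windows Server 2012. A minimal sketch, assuming at least two poolable disks; the pool and disk names are just examples:

# Collect all disks that are eligible for pooling (i.e., sitting in the Primordial pool)
$disks = Get-PhysicalDisk -CanPool $true

# Create a new storage pool from those disks
New-StoragePool -FriendlyName "Pool01" -PhysicalDisks $disks `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem)[0].FriendlyName

# Carve a mirrored virtual disk (storage space) out of the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" `
    -ResiliencySettingName Mirror -Size 100GB

# Initialize the new disk, create a partition, and format it as NTFS
Get-VirtualDisk -FriendlyName "VDisk01" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS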

Storage Tiers in Windows 2012 R2 Preview

Storage tiering is a new feature in the Windows Server 2012 R2 Preview that allows the creation of virtual disks comprising two tiers of storage (HDD and SSD). Frequently accessed data is automatically moved to the SSD tier, and less frequently accessed data is moved to the HDD tier. The Storage Spaces technology transparently moves data at a sub-file level between the two tiers based on how frequently the data is accessed.

[Figure: SP-Tier]

As shown in the picture above, a mixture of HDD and SSD disks is picked to create the storage pool. While creating the virtual disk, you specify the capacity required from each tier; for instance, a virtual disk with the Simple layout might take 40 GB from the SSD tier and 200 GB from the HDD tier. The storage technology runs an optimization sequence that monitors block usage and moves the most frequently used blocks to the SSD tier. Storage Spaces also uses the SSD tier for a write-back cache, which buffers random writes to the SSD disks to reduce the latency of writing the data directly to HDDs.
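
In the R2 Preview the same tiered disk can be created with two extra cmdlets; a minimal sketch using the 40 GB / 200 GB split above (the pool and tier names are examples):

# Define an SSD tier and an HDD tier on an existing pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# Create a simple-layout virtual disk that spans both tiers
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredVDisk" `
    -ResiliencySettingName Simple `
    -StorageTiers @($ssdTier, $hddTier) -StorageTierSizes @(40GB, 200GB)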


World Wide Name (WWN) for a fibre channel HBA on Windows Server

There are many ways to find the World Wide Name (WWN) of a fibre channel HBA connected to a server running a Windows Server operating system.

A few model-dependent utilities are:

  1. HBAnyware utility (Emulex)
  2. SANsurfer utility (Qlogic)

For any other HBA model, the Fibre Channel Information Tool (FCINFO.EXE) can be used. The fcinfo tool is available from the Microsoft Download Center and works on Windows Server 2003 and Windows 2000 Server systems. Installing the tool is not mandatory, but its three core files (FCINFO.EXE, HBAAPI.DLL, and HBATAPI.DLL) must be copied into a common folder in order to run it. On Windows Server 2008 R2, you can use Storage Explorer to see detailed information about the Fibre Channel host bus adapters (HBAs) on the server.

Running the Fibre Channel Information Tool at a command line with no arguments will give basic information (including WWN) about all the installed host bus adapters (HBAs):

C:\>fcinfo

There are 2 adapters:

com.qlogic-QLA2300/2310-0: PortWWN: 21:00:00:e0:8b:08:95:df \\.\Scsi2:

com.emulex-LP9002-1: PortWWN: 10:00:00:00:c9:30:d0:17 \\.\Scsi3:

For help, type FCINFO /? or FCINFO /??. You will get a long list of commands, many of which are very task specific and not necessary for simple information gathering.
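
On Windows Server 2008 and later, the port WWNs can also be read straight from WMI without installing anything, provided the HBA driver exposes the standard MSFC classes; a minimal PowerShell sketch:

# Query the Fibre Channel port attributes exposed by the HBA driver
Get-WmiObject -Namespace root\WMI -Class MSFC_FibrePortHBAAttributes |
    ForEach-Object {
        # PortWWN is a byte array; render it as colon-separated hex
        ($_.Attributes.PortWWN | ForEach-Object { "{0:x2}" -f $_ }) -join ":"
    }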

Removing Write Protection from a Disk

In most cases, a VSS-compliant backup program, an unexpected disconnection of a SAN disk, or a malfunctioning RAID controller causes the Windows operating system to mark the disk as write-protected to maintain data integrity. In such cases, the disk is accessible only in read-only mode.

As shown in the screenshot below, the New option is missing when you right-click to create a new folder.

Chkdsk shows “The disk is write protected.”

To clear the read-only attribute:

Open a command prompt by clicking Start -> Run and typing cmd.

Type diskpart.exe. This command opens the DiskPart utility in another window as shown below.

At the prompt, type LIST DISK.

Select the disk you want to modify by typing SELECT DISK 0 (in my case it is disk 0).

Type DETAIL DISK to get the disk details. Notice that Read-only is set to Yes.

To clear the read-only attribute, type ATTRIBUTES DISK CLEAR READONLY.

Type DETAIL DISK again to confirm that the read-only attribute has changed to No.

The write protection is removed, the New option is back, and you can create folders again.
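
On Windows Server 2012 and later, the same fix is a one-liner with the Storage module (disk number 0 is just the example used above):

# Clear the read-only flag on disk 0
Set-Disk -Number 0 -IsReadOnly $false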

Configuring iSCSI Target for virtual SAN

This topic explains how to configure Windows Storage Server 2008 to host a virtual SAN using the iSCSI Software Target. The requirements for the Windows cluster include a domain controller, with both proposed cluster nodes joined to the domain; when using Windows Server 2008 R2, a dedicated heartbeat network is not required. The Windows Storage Server 2008 machine does not need to be a domain member, but it should be in the same network segment, and the iSCSI Software Target must be installed on it.

A step-by-step procedure to configure the iSCSI Target and iSCSI initiator is available here: Microsoft iSCSI
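
For reference, on Windows Server 2012 and later the iSCSI Target Server role ships with PowerShell cmdlets, so the same target can be built without the MMC snap-in. A minimal sketch; the target name, virtual disk path, size, and initiator IQN are all example values:

# Create an iSCSI target and restrict access to one initiator
New-IscsiServerTarget -TargetName "ClusterTarget" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:node1.acme.com"

# Create a virtual disk to back the LUN and map it to the target
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\LUN1.vhdx" -SizeBytes 40GB
Add-IscsiVirtualDiskTargetMapping -TargetName "ClusterTarget" -Path "C:\iSCSIVirtualDisks\LUN1.vhdx"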

IBM DS5020 storage controller default IP address

Default IP Address:

Controller A                                       Controller B

Port1:    192.168.128.101                Port1:    192.168.128.102

Port2:    192.168.129.101                Port2:    192.168.129.102
