Monthly Archives: March 2014

Public to Private Network in Windows 8 – 2012R2

In Windows 8, when you connect to a wireless network, it will register it as either a Public network or a Private network. Private networks are basically home and work, whereas public is anywhere else. Sometimes Windows 8 or Windows Server detects a private network as a public one and vice versa. You can manually make some changes to ensure that you are not accidentally sharing too much on a public network or blocking all sharing on a private network. To change the network location manually, press the Windows key + R to open the Run dialog, type secpol.msc, and press Enter to open the Local Security Policy editor.

run-dialog

Then click on Network List Manager Policies at the left and on the right-hand side you should see a couple of items with descriptions and then something called Network, which is the current network you are connected to. It may also be called something else, but it doesn’t have a description.

network-list-manager-policies

Double-click on it and click on the Network Location tab. Here you can manually change the network location from Private to Public and vice versa.

network-location
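On Windows 8 and Server 2012 or later you can also check and change the network category from PowerShell using the NetConnectionProfile cmdlets. A minimal sketch, assuming the adapter is called "Ethernet" (substitute your own interface alias):

# "Ethernet" is an example alias - run Get-NetConnectionProfile to find yours
Get-NetConnectionProfile
Set-NetConnectionProfile -InterfaceAlias "Ethernet" -NetworkCategory Private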

Server 2012 Extend Volume Error ‘The parameter is incorrect’

In Windows Server 2008 R2 and beyond it has been possible to extend the system volume online, without needing to resort to third-party tools, which often required at least a reboot.

I’ve now used this feature many times, but yesterday I had an issue when extending the volume of a Windows Server 2012 system drive: it returned the horrible-looking error below, ‘The parameter is incorrect’.

DiskGrowParameterIncorrect

This leaves you in a scenario where the Disk Management tool reports that the disk has been increased to the new size, but viewing the disk properties in Explorer shows it has not been increased.

DiskSizeMisMatch1

Nicholas Schoonover has a post detailing the same issue and suggests fixing it by first shrinking the disk, then extending it again. There are also comments on that post suggesting that extending it further might also resolve it. I wasn’t particularly keen to try either of those suggestions immediately, so I did a bit more research.

It turns out this has been an issue within Windows Server for some time, and this Microsoft KB article details how to resolve it with the Diskpart utility: essentially, the disk partition has been extended but the file system has not.

It’s as simple as

DISKPART> select volume #

where # is the number of the affected volume which can be found with list volume.

DISKPART> extend filesystem

Now the file system size should match the new partition size.
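For reference, a complete Diskpart session for this fix might look like the sketch below; the volume number 2 is just an example, so use whatever number list volume reports for the affected drive:

DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend filesystem
DISKPART> exit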

Failover Clustering for Hyper-V

Some veteran IT Pros hear the term ‘Microsoft Clustering’ and their hearts start racing.  That’s because once upon a time Microsoft Cluster Services was very difficult and complicated.  In Windows Server 2008 it became much easier, and in Windows Server 2012 it is now available in all editions of the product, including Windows Server Standard.  Owing to these two factors you are now seeing all sorts of organizations using Failover Clustering that would previously have shied away from it.

The service that we are seeing clustered most frequently in smaller organizations is Hyper-V virtual machines.  That is because virtualization is another feature that is really taking off, and the low cost of virtualizing using Hyper-V makes it very attractive to these organizations.

In this article I am going to take you through the process of creating a failover cluster from two virtualization hosts that are connected to a single SAN (storage area network) device.  However, these are far from the limits in Windows Server 2012: you can actually cluster up to sixty-four servers together in a single cluster.  Once they are joined to the cluster we call them cluster nodes.

Failover Clustering in Windows Server 2012 allows us to create highly available virtual machines using a method called Active-Passive clustering.  That means that your virtual machine is active on one cluster node, and the other nodes are only involved when the active node becomes unresponsive, or if a tool that is used to dynamically balance the workloads (such as System Center 2012 with Performance and Resource Optimization (PRO) Tips) initiates a migration.

In addition to using SAN disks for your shared storage, Windows Server 2012 also allows you to use Storage Pools.  I explained Storage Pools and showed you how to create them in my article Storage Pools.  I also explained how to create a virtual SAN using Windows Server 2012 in my article iSCSI Storage in Windows Server 2012.  For the sake of this article, we will use the simple SAN target that we created together in that article.

Step 1: Enabling Failover Clustering

Failover Clustering is a feature in Windows Server 2012.  In order to enable it we will use the Add Roles and Features wizard.

1. From Server Manager click Manage, and then select Add Roles and Features.

2. On the Before you begin page click Next>

3. On the Select installation type page select Role-based or feature-based installation and click Next>

4. On the Select destination server page select the server onto which you will install the role, and click Next>

5. On the Select server roles page click Next>

6. On the Select features page select the checkbox Failover Clustering.  A pop-up will appear asking you to confirm that you want to install the MMC console and management tools for Failover Clustering.  Click Add Features.  Click Next>

7. On the Confirm installation selections page click Install.

NOTE: You could also add the Failover Clustering feature to your server using PowerShell.  The script would be:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

If you want to install it to a remote server, you would use:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName <servername>

That is all that we have to do to enable Failover Clustering in our hosts.  Remember though, it does have to be done on each server that will be a member of our cluster.

Step 2: Creating a Failover Cluster

Now that Failover Clustering has been enabled on the servers that we want to join to the cluster, we have to actually create the cluster.  This step is easier than it ever was, although you should take care to follow the recommended guidelines.  Always run the Validation Tests (all of them!), and allow Failover Cluster Manager to determine the best cluster configuration (Node Majority, Node and Disk Majority, etc…)

NOTE: The following steps have to be performed only once – not on each cluster node.

1. From Server Manager click Tools and select Failover Cluster Manager from the drop-down list.

2. In the details pane under Management click Create Cluster…

3. On the Before you begin page click Next>

4. On the Select Servers page enter the name of each server that you will add to the cluster and click Add.  When all of your servers are listed click Next>

5. On the Validation Warning page ensure the Yes. When I click Next, run configuration validation tests, and then return to the process of creating the cluster radio is selected, then click Next>

6. On the Before You Begin page click Next>

7. On the Testing Options page ensure the Run all tests (recommended) radio is selected and then click Next>

8. On the Confirmation page click Next> to begin the validation process.

9. Once the validation process is complete you are prompted to name your cluster and assign an IP address.  Do so now, making sure that your IP address is in the same subnet as your nodes.

NOTE: If you are not prompted to provide an IP address it is likely that your nodes have their IP Addresses assigned by DHCP.

10. On the Confirmation page make sure the checkbox Add all eligible storage is selected and click Next>.  The cluster will now be created.

11. Click on Finish.  In a few seconds your new cluster will appear in the Navigation Pane.
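If you prefer PowerShell, the validation and cluster creation can be scripted with the FailoverClusters cmdlets.  A minimal sketch, assuming two nodes named HOST1 and HOST2 and a spare IP address in the nodes' subnet (the node names, cluster name, and address are all examples):

# Node names, cluster name and IP address below are examples - substitute your own
Test-Cluster -Node HOST1, HOST2
New-Cluster -Name HVCLUSTER1 -Node HOST1, HOST2 -StaticAddress 192.168.1.50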

Step 3: Configuring your Failover Cluster

Now that your failover cluster has been created there are a couple of things we are going to verify.  The first is in the main cluster screen.  Near the top it should say the type of cluster you have.

If you created your cluster with an even number of nodes (and at least two shared drives) then the type should be a Node and Disk Majority.  In a Microsoft cluster, health is determined when a majority (50% + 1) of votes are counted.  Every node has a vote.  This means that if you have an even number of nodes (say 10) and half of them (5) go offline, then your cluster goes down.  If you have ten nodes you would have long since taken action, but imagine you have two nodes and one of them goes down… that means your entire cluster would go down.  So Failover Clustering uses Node and Disk Majority – it takes the smallest drive shared by all nodes (I usually create a 1GB LUN) and configures it as the Quorum drive – it gives it a vote… so if one of the nodes in your two-node cluster goes down, you still have a majority of votes, and your cluster stays online.

The next thing that you want to check is your nodes.  Expand the Nodes tree in the navigation pane and make sure that all of your nodes are up.

Once this is done you should check your storage.  Expand the Storage tree in the navigation pane, and then expand Disks.  If you followed my articles you should have two disks – one large one (mine is 140GB) and a small one (mine is 1GB).  The smaller disk should be marked as assigned to Disk Witness in Quorum and the larger disk will be assigned to Available Storage.

Cluster Shared Volumes (CSVs) were introduced in Windows Server 2008 R2.  A CSV creates a consistent namespace for your SAN LUNs on all of the nodes in your cluster.  In other words, rather than having to ensure that all of your LUNs have the same drive letter on each node, CSVs create a link – a portal if you will – on your C: drive under the directory C:\ClusterStorage.  Each LUN gets its own subdirectory – C:\ClusterStorage\Volume1, C:\ClusterStorage\Volume2, and so on.  Using CSVs also means that you are no longer limited to a single VM per LUN, so you will likely need fewer LUNs.

CSVs are enabled by default, and all you have to do is right-click on any drive assigned to Available Storage, and click Add to Cluster Shared Volumes.  It will only take a second to work.

NOTE: While CSVs create directories on your C: drive that are completely navigable, it is never a good idea to use them for anything other than Hyper-V.  No other use is supported.
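If you would rather script it, the FailoverClusters module can do the same thing.  A minimal sketch (the disk name is an example; check the output of Get-ClusterResource for the actual resource name on your cluster):

# List the physical disk resources, then add the chosen one to Cluster Shared Volumes
Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" }
Add-ClusterSharedVolume -Name "Cluster Disk 2"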

Step 4: Creating a Highly Available Virtual Machine (HAVM)

Virtual machines are no different to Failover Cluster Manager than any other clustered role.  As such, that is where we create them!

1. In the navigation pane of Failover Cluster Manager expand your cluster and click Roles.

2. In the Actions Pane click Virtual Machines… and click New Virtual Machine.

3. In the New Virtual Machine screen select the node on which you want to create the new VM and click OK.

The New Virtual Machine Wizard runs just like it would in Hyper-V Manager.  The only thing you would do differently here is change the file locations for your VM and VHDX files.  In the appropriate places ensure they are stored under C:\ClusterStorage\Volume1.

At this point your highly available virtual machine has been created, and can be failed over without delay!
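If you want to script this step instead, the sketch below creates a VM directly on the CSV and then clusters it.  The VM name, memory size, and disk size are examples, and it assumes the Hyper-V and FailoverClusters PowerShell modules are available on the node:

# All names and sizes below are examples
New-VM -Name "TestVM01" -Path "C:\ClusterStorage\Volume1" -MemoryStartupBytes 2GB `
    -NewVHDPath "C:\ClusterStorage\Volume1\TestVM01\TestVM01.vhdx" -NewVHDSizeBytes 60GB
Add-ClusterVirtualMachineRole -VMName "TestVM01"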

 

Step 5: Making an existing virtual machine highly available

In all likelihood you are not starting from the ground up, and you probably have pre-existing virtual machines that you would like to add to the cluster.  No problem… however, before you do, you need to move the VM’s storage onto shared storage.  Because Windows Server 2012 includes live storage migration this is very easy to do:

1. In Hyper-V Manager right-click the virtual machine that you would like to make highly available and click Move

2. In the Choose Move Type screen select the radio Move the virtual machine’s storage and click Next>

3. In the Choose Options for Moving Storage screen select the radio marked Move all of the virtual machine’s data to a single location and click Next>

4. In the Choose a new location for virtual machine screen, type C:\ClusterStorage\Volume1 into the field.  Alternatively, you could click Browse… and navigate to the shared file location.  Then click Next>

5. On the Completing Move Wizard page verify your selections and click Finish.

Remember that moving a running VM’s storage can take a long time.  The VHD or VHDX file could theoretically be huge… depending on the size you selected.  Be patient; it may take a while.  Once it is done you can continue with the following steps.
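As an aside, the same storage move can be scripted with the Hyper-V PowerShell module.  A minimal sketch (the VM name and destination folder are examples):

# VM name and destination path are examples - point this at your own CSV folder
Move-VMStorage -VMName "TestVM01" -DestinationStoragePath "C:\ClusterStorage\Volume1\TestVM01"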

6. In Failover Cluster Manager navigate to the Roles tab.

7. In the Actions Pane click Configure Role…

8. In the Select Role screen select Virtual Machine from the list and click Next>.  This step can take a few minutes… be patient!

9. In the Select Virtual Machine screen select the virtual machine that you want to make highly available and click Next>

NOTE: A great improvement in Windows Server 2012 is the ability to make a VM highly available regardless of its state.  In previous versions you needed to shut down the VM to do this… no more!

10. On the Confirmation screen click Next>

…That’s it! Your VM is now highly available.  You can navigate to Nodes and see which server it is running on.  You can also right-click on it, click Move, select Live Migration, and click Select Node.  Select the node you want to move it to, and you will see it move before your very eyes… without any downtime.
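That live migration can also be triggered from PowerShell.  A minimal sketch, assuming the clustered role carries the VM's name and the destination node is HOST2 (both names are examples):

# Role name and node name are examples
Move-ClusterVirtualMachineRole -Name "TestVM01" -Node HOST2 -MigrationType Live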

What? There’s a Video??

Yes, we wanted you to read through all of this, but we also wrote it as a reference guide that you can refer to when you try to build it yourself.  However, to make your life slightly easier, we also created a video for you and posted it online.  Check it out!

Creating and configuring Failover Clustering for Hyper-V in Windows Server 2012

 

For Extra Credit!

Now that you have added your virtualization hosts as nodes in a cluster, you will probably be creating more of your VMs on Cluster Shared Volumes than not.  In the Hyper-V Settings you can change the default file locations for both your VMs and your VHDX files to C:\ClusterStorage\Volume1.  This will prevent your having to enter them each time.
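If you would rather set those defaults from PowerShell, a quick sketch (run it on each node; the path is the example CSV volume used throughout this article):

# Set the default VM and VHD locations to the CSV
Set-VMHost -VirtualMachinePath "C:\ClusterStorage\Volume1" -VirtualHardDiskPath "C:\ClusterStorage\Volume1"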

As well, the best way to create your VMs will be in the Failover Cluster Manager and not in Hyper-V Manager.  FCM creates your VMs as HAVMs automatically, without your having to perform those extra steps.

Conclusion

Over the last few weeks we have demonstrated how to Create a Storage Pool, perform a Shared Nothing Live Migration, Create an iSCSI Software Target in Windows Server 2012, and finally how to create and configure Failover Clusters in Windows Server 2012.  Now that you have all of this knowledge at your fingertips (or at least the links to remind you of it!) you should be prepared to build your virtualization environment like a pro.  Before you forget what we taught you, go ahead and do it.  Try it out, make mistakes, and figure out what went wrong so that you can fix it.  In due time you will be an expert in all of these topics, and will wonder how you ever lived without them.  Good luck, and let us know how it goes for you!

Convert a Server Core Installation to GUI

Step 1

If the server is a physical computer, then insert the Windows Server 2012 Operating System Disc into the DVD drive (D: in this example)

If the server is a virtual machine, then the first step is to mount a Windows Server 2012 source ISO image to the VM, or insert a Windows Server 2012 DVD in the DVD drive of the host machine and attach it to the VM running Windows Server 2012 Server Core, as shown below:

ServerCoreTOWinGUI-and-back1

Step 2

At the command prompt type:

mkdir c:\mount

Issue the following command and press Enter:

dism.exe /mount-image /ImageFile:d:\sources\install.wim /Index:4 /Mountdir:c:\mount /readonly
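The /Index value used above must match the edition of Windows Server 2012 contained in the WIM.  If you are not sure which index to use, you can list the images in the WIM first:

dism.exe /Get-WimInfo /WimFile:d:\sources\install.wim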

Start Windows PowerShell by typing the following command:

Powershell.exe

From the Windows PowerShell prompt, issue the following commands and press Enter after each:

Import-Module ServerManager
Install-WindowsFeature -IncludeAllSubFeature User-Interfaces-Infra -Source:c:\mount\windows

ServerCoreTOWinGUI-and-back2

Step 3

When prompted, restart the server and logon as Administrator to verify the installation of the GUI components:

Shutdown /r /t 0

Verify that the GUI components appear

ServerCoreTOWinGUI-and-back3

 

ServerCoreTOWinGUI-and-back4

Step 4

To convert a Windows Server 2012 Full Installation back to a Server Core Installation, open Windows PowerShell and issue the following commands:

Import-Module ServerManager
Uninstall-WindowsFeature User-Interfaces-Infra

ServerCoreTOWinGUI-and-back5

Wait for the removal to complete:

ServerCoreTOWinGUI-and-back6

Step 5

Restart by issuing the following command:

Shutdown /r /t 5

The following screen appears:

ServerCoreTOWinGUI-and-back7

 

ServerCoreTOWinGUI-and-back8

 

ServerCoreTOWinGUI-and-back9

 

The server restarts with Server Core features.

Configuring iSCSI MPIO on Windows 2008 Server

We have recently gone through the process of wiping out our lab and rebuilding from scratch on Windows Server 2008 R2 Enterprise.  During this process, I recorded the steps I used to configure MPIO with the iSCSI initiator in R2.  Just to make life more complex, our servers only have 2 NICs, so I am balancing the host traffic, virtual machine traffic, and MPIO across those two NIC devices.  Is this supported?  I seriously doubt it.  🙂  In the real world you would separate out iSCSI traffic on dedicated NICs, cables, and separate switch paths.  The following step-by-step process should be relatively the same though.

Editorial Note: I do not work for the iSCSI team, I’m a field guy.  If you see something you disagree with here don’t be angry, instead comment your point and I will update the article.  Thanks.

Foundation

The workflow I am following assumes that when starting out one NIC is configured for host traffic and the other for a VM network.  On the WSS the secondary NIC was already configured not to register in DNS.  Also, since I am using WSS and the built-in iSCSI Target I don’t have to configure a DSM for the storage device.  If your configuration is different than that, you may have to ignore or add to a few parts of the below instructions.  Sorry about that.  I can only document what I have available for testing…

First I just want to show a screenshot of the iSCSI target on our Windows Storage Server, to indicate that it does have two IPs.  Once again, I am cheating the system here.  These are not dedicated TOE adapters for iSCSI on a separate network.  This is a poor man’s environment with 1 VLAN and minimal network hardware.  My highly available environment is anything but!  To view this information on your own WSS, right-click on the words “Microsoft iSCSI Software Target” and click Properties.

image_2

Enable the MPIO Feature on the initiating servers

Next I needed to enable MPIO on the servers making the iSCSI connections.  MPIO is a Feature in Server 2008 R2 listed as Multipath I/O.  Adding the Feature did not require a reboot on any of my servers.

image_4
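If you prefer to add the feature from PowerShell rather than the Add Features wizard, a quick sketch for Server 2008 R2 (the ServerManager module has to be imported first):

Import-Module ServerManager
Add-WindowsFeature Multipath-IO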

Configuring MPIO to work with iSCSI was simple.  Click Start and type “MPIO”, launch the control panel applet, and you should see the window below.  Click on the Discover Multi-Paths tab, check the box for “Add support for iSCSI devices”, and click Add.  You should immediately be prompted to reboot.  This was consistent across 4 servers where I followed this process.

image_6

 

image_8

After rebooting, if you open the MPIO Control Panel applet again, you should see the iSCSI bus listed as a device.  Note on my servers, the Discover Multi-Paths page becomes grayed out.

image_10 image_12

Check the IP of the existing connection path

Now click Start and type “iSCSI”.  Launch the iSCSI Initiator applet.  Add your iSNS server or Target portal.  There is plenty of documentation on how to do this on TechNet if you need assistance. I want to stay focused on the MPIO configuration.

Once you are connected to the target, click the button labeled “Devices…”.  You should see each of the volumes you have connected listed in the top pane.  Select a Disk and click the MPIO button.  In the Device Details pane you should see information on the current path and session.  If you click the Details button, you can verify the local and remote IPs the current connection is using.  It should be the IPs that resolve from the hostnames of each server.  See my remedial diagram below.

I recommend taking note of this IP, to make life easier later on!

So everything is set up for MPIO, but you are only using a single path, and that’s not really going to accomplish much now, is it?  Since I only have 2 NICs in my test server I need my host to share the second NIC with the VM network.  This is not ideal, but again I am using what I have and this is only a test box.

image_14

Setting a second IP on my hosts

In R2 the host does not communicate by default on a NIC where a virtual network is assigned.  To change this, open the Hyper-V console and click “Virtual Network Manager…”.  Check the box “Allow management operating system to share this network adapter”.

image_16

This will create a third device in the network console (to get there click Start, type “ncpa.cpl”, and launch the applet).  You should see that the name of the new device matches your Virtual Network name.  In my case Local Area Connection 4 has a device name “External1”.  Right-click on the connection and then click Properties.  Select “Internet Protocol Version 4 (TCP/IPv4)” and click the Properties button.  Configure your address and subnet but not the gateway, as it should already be assigned on the first adapter.  You also shouldn’t need to set the DNS addresses on the new adapter.  You will, however, want to click the “Advanced…” button followed by the DNS tab and uncheck the box next to “Register this connection’s address in DNS”.  This really should be the job of your primary adapter; there is no need to have multiple addresses for the same hostname registering and causing confusion unless you have a unique demand for it.

image_thumb_8

Add a second path

Back in the iSCSI Initiator Applet, click the Connect button.  I know you already have a connection.  In this step we are adding an additional connection to the Target to provide a second path.

In the subsequent dialogue make sure you check the box next to “Enable multi-path” and then click the Advanced… button.  In the Advanced Settings dialogue you will need to choose the IP for your second path.  In the drop-down menu next to “Local adapter:” select “Microsoft iSCSI Initiator”.  In the drop-down next to “Initiator IP:” select the IP on your local server you would like the Initiator to use when making a connection for the secondary path.  In the third drop-down, next to “Target portal IP:” select the IP of the iSCSI Target server you would like to connect to.  This should be the opposite IP of the session we observed a few steps back when I mentioned you should take note of the IP.

image_20

Check your work

image_22

Just one more step.  Let’s verify that you now have 2 connections available for each disk, that they are using separate paths, and have the opportunity to choose the types of load balancing available.  Once you have hit OK out of each of the open dialogues from the step above, click on the Devices… button again and check out the top pane.  On each of my servers I see each disk listed twice, once per Target 0 and once per Target 1, as seen below.  If you follow my remedial diagrams one more time and select a disk, then the MPIO button, you should now see two paths.  Select the first path and click the Details button.  It should be using the local and remote IPs we took note of earlier.  Click OK.  Now select the second path and then the Details… button.  You should see it using the other adapter’s IP on BOTH the local and remote hosts.

IMPROVING HYPER-V PERFORMANCE AND THROUGHPUT

GENERAL GUIDELINES TO IMPROVE HYPER-V SPEED AND ACHIEVE HIGH SYSTEM PERFORMANCE

  • Don’t use dynamically expanding VHDs or VHDXs. These are only meant for test systems and are not recommended for production systems by Microsoft.
  • Don’t use Hyper-V snapshots. These are also only for test and development purposes and not recommended by Microsoft for production use.
  • Use large NTFS cluster sizes, such as 64K.
  • Do not use drive compression of any kind.
  • Use a separate drive for the Windows paging file
  • Defragment all drives regularly, including from within the virtual machine operating system
  • Use fixed sized VHDs with plenty of free space for the VM operating system
  • Have at least 10 to 20% free space on every disk on the host. NTFS and VSS quickly become inefficient when disk space is below that limit.
  • Keep at least 1 GB free RAM on the host
  • Increase the VSS storage size allocation limits for each drive to at least 10% of each drive’s size with vssadmin resize shadowstorage (see the example after this list)
  • Increase the Windows paging file size to at least 2.5x the RAM size. Use the same setting for minimum and maximum. Ensure the paging file is not fragmented
  • Make sure your system isn’t clogged with orphaned VSS snapshots. (Command: vssadmin list shadows)
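A quick example of those two vssadmin commands, assuming the C: drive; adjust the drive letters and percentage to suit your own disks:

rem Drive letters and percentage below are examples
vssadmin resize shadowstorage /For=C: /On=C: /MaxSize=10%
vssadmin list shadows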

GENERAL HARDWARE RECOMMENDATIONS TO IMPROVE HYPER-V SPEED

  • Use high RPM drives
  • Use striped RAID for virtual hard drive storage
  • Use USB 3 or eSATA for external backup drives
  • Use 10 Gbit Ethernet if possible for network traffic
  • Isolate backup network traffic from other traffic.
  • Use separate disks for VMs with high I/O requirements
  • Increase the VM’s RAM
  • Increase the host’s RAM. Always keep at least 1 GB available on the host

CLUSTER SHARED VOLUME RECOMMENDATIONS TO IMPROVE HYPER-V SPEED

  • Try all the steps shown above first
  • If using a cluster shared volume, traffic isolation is very important.
  • Use separate NICs for SAN, backup, and cluster management traffic.
  • Use 10 Gbit Ethernet if available
  • Separate busy VMs into separate volumes
  • Add additional nodes to spread the load
  • Pick a time for backup when the network traffic is low.
  • Disable NetBIOS over TCP/IP
  • Enable jumbo packets
  • Use high quality network switches
  • Keep the LANs short and connect only a few nodes to each CSV, i.e. split large setups into separate CSVs
  • Don’t use several switches on an Ethernet bus, because each of them adds latency

BACKUP SETTINGS TO INCREASE HYPER-V BACKUP SPEED

On most systems, administrators generally want to keep the Hyper-V backup process in the background so that it has little, if any, impact on the overall system.  Since most Hyper-V hosts are active 24/7, there is hardly ever a time to shut down virtual machines for maintenance.

However, there are time windows, usually at night, where a backup process could be given additional system resources and finish faster, at the cost of a minor system slowdown.

Install Hyper-V on Server Core

Get your server all set up, with a static IP, fully patched, joined to the domain, etc., and ready to install Hyper-V.

From there, issue the following command to get into Server Config:

sconfig

sconfig

Install Hyper-V with the following command:

dism /online /enable-feature /FeatureName:Microsoft-Hyper-V

DISM_Install

From there, BAM, Hyper-V is installed. Go ahead and reboot, then connect with either SCVMM or Hyper-V Manager from a remote console.
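On Server 2012 you could also skip DISM and install the role from PowerShell instead; a minimal sketch (the -Restart switch reboots the server for you when the install finishes):

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart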

Disable Windows Firewall on Server Core

Most sysadmins will look this up as soon as they get the server installed, or pretty close to it.

Open a PowerShell or command window.

Powershell1

Issue the following command:

netsh advfirewall set allprofiles state off

Powershell2

Once you see the letters OK, you are good to go (remember to reboot; it’s Windows).
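On Server 2012 and later there is also a PowerShell equivalent in the NetSecurity module; a minimal sketch that turns the firewall off for all three profiles:

Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False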

Samba: How To Share Files For Your LAN Without User Authentication

This tutorial will show how to set up Samba to allow read-only file sharing for your LAN computers as guests (without being prompted for a password).
Because users won’t be prompted for a username/password, this setup is only meant for a LAN where all hosts are trusted.

There are many advantages to sharing files on a LAN. For instance, when you have a multimedia box (playing music, movies…) it is great to be able to access the music on that box from any machine in your LAN.

Let’s get started. First of all, you need to have Samba installed.

$ sudo apt-get install samba

Because we are going to relax Samba’s security, make sure only your local network can access the Samba service. To do so, open and edit /etc/samba/smb.conf:

$ sudo vi /etc/samba/smb.conf

and set interfaces to lo and your local network interface. In my case: eth1.

interfaces = lo eth1
bind interfaces only = true
Now, relax Samba’s default security by setting the security parameter to share instead of user, and make sure the guest account is enabled:

security = share
guest account = nobody
Now, we can create a share to be accessible to guest users:

[Guest Share]
comment = Guest access share
path = /path/to/dir/to/share
browseable = yes
read only = yes
guest ok = yes
You can now test that your configuration is good using testparm:

$ testparm

If everything is fine, reload the Samba service so that your new configuration is taken into account:

$ sudo /etc/init.d/samba reload

That’s it, anybody in your LAN can now access your share.

Time services on Domain Controllers

Several times now I have seen issues where local domain controllers lose their time accuracy. I think there are several reasons for this, most notably when the domain controller is a virtual machine.

To fix the time service in Server 2008 R2, 2012 R2, or 2016, run the following from an elevated command prompt:


net stop w32time
w32tm /config /syncfromflags:manual /manualpeerlist:oceania.pool.ntp.org
w32tm /config /reliable:yes
net start w32time
w32tm /query /configuration

This forces the domain controller both to sync against an outside time source and to advertise to the network that it is a reliable time source.
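To confirm the change has taken effect, you can force a resync and then check the status; both are standard w32tm switches:

w32tm /resync
w32tm /query /status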