Virtualization - Cloud

Author: Ram Prasad

MS SQL AlwaysOn Availability Groups Implementation: Step by Step Process

An availability group supports a failover environment for a discrete set of user databases, known as availability databases, that fail over together. An availability group supports one set of primary databases and one to eight sets of corresponding secondary databases. Secondary databases are not backups, so continue to back up your databases and their transaction logs on a regular basis.

AlwaysOn Availability Group technology is based on database mirroring technology; an availability group is an improved version of mirroring. An availability group is both an HA (High Availability) and a DR (Disaster Recovery) solution.

Prerequisites for enabling SQL Server 2012 AlwaysOn Availability Groups

  • A dedicated domain user account should be created for use by the SQL Server service. This should be a regular domain account.
  • Use separate accounts for the SQL Agent service, SSRS, and SSIS. Separate accounts are more secure and resilient, since a problem with one account won't affect all of the SQL Server services.
  • Both SQL Server and OS editions and versions should be at the same level on all participating nodes.
  • AlwaysOn Availability Groups are supported only in Enterprise Edition starting from SQL Server 2012 (except that SQL Server 2016 supports Basic Availability Groups in Standard Edition).
  • The same collation is recommended on all replicas.
  • Create a shared network folder accessible from all participating nodes.
  • Make sure your databases are in the full recovery model, not simple or bulk-logged (see the sketch after this list).
  • Databases included in your AlwaysOn group must be user databases. System databases cannot participate in AlwaysOn Availability Groups.
  • Make sure full backups of each of your databases are taken prior to configuring AlwaysOn.
  • No cluster shared volume is required for AlwaysOn; it can be configured on local disks.
  • Make sure you have separate NICs for public and private communication.
  • An additional NIC is required if you want to isolate AlwaysOn replication traffic on a dedicated NIC.
  • Make sure you have two free IP addresses, one for the Windows cluster IP and one for the AlwaysOn listener IP.
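
As a quick sanity check for the recovery-model and backup prerequisites above, a PowerShell sketch along these lines can be used. It assumes the SqlServer module is installed; the server name SQL1, the database TestDB, and the share \\SQL1\AGShare are placeholder names for this walkthrough:

```powershell
# Hedged sketch: enforce the full recovery model and take an initial full backup.
# Server, database, and share names are placeholders; adjust for your environment.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance "SQL1" -Query @"
IF (SELECT recovery_model_desc FROM sys.databases WHERE name = N'TestDB') <> 'FULL'
    ALTER DATABASE [TestDB] SET RECOVERY FULL;
BACKUP DATABASE [TestDB] TO DISK = N'\\SQL1\AGShare\TestDB.bak' WITH INIT;
"@
```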

Note (additional points to consider):

  • Nodes in the cluster (here, two servers) must have drives of the same size and name, with the same paths inside the drives. The reason is that getting a database into a SQL Server Availability Group is much more convenient when the secondary server has the same drives and paths.
  • The Windows Cluster account (the Windows cluster's computer name) for these two servers needs to be granted the Create Computer Object privilege on the OU (Organizational Unit) where the two servers reside in Active Directory.
  • If you do not give the Create Computer Object permission to the Windows cluster, you will get an error like the following when creating a listener: "The WSFC cluster could not bring the Network Name resource with DNS name 'XXXXX' online. The DNS name may have been taken or have a conflict with existing name services, or the WSFC cluster service may not be running or may be inaccessible. Use a different DNS name to resolve name conflicts, or check the WSFC cluster log for more information."
  • Create a file share for backups and replicas: if you have ever set up log shipping, you know you need a file share on a server, and the same applies to this feature. Create a file share on one of the servers and give read/write access to all your service accounts. Once clustering is set up and SQL Server 2012 is installed and configured, we can create our first Availability Group.

AlwaysOn Availability Groups require a Windows Server Failover Cluster, so we first need to add the Windows Failover Clustering feature to all the nodes running the SQL Server instances that we will configure as replicas.
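
The feature can be added from Server Manager, or remotely with a short PowerShell sketch like the one below (SQL1 and SQL2 are the node names used in this walkthrough):

```powershell
# Install the Failover Clustering feature and its management tools on both nodes
Invoke-Command -ComputerName SQL1, SQL2 -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}
```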

We have a two-node Windows failover cluster, SQL1 & SQL2, already set up as shown in the screenshot below.

Once the failover cluster is installed, we can proceed with enabling the AlwaysOn Availability Groups feature in SQL Server 2012. This needs to be done on all of the SQL Server instances that you will configure as replicas in your Availability Group.

First, we need to enable the AlwaysOn Availability Groups feature on both instances. If you do not enable it, you will receive an error like the following.

"The AlwaysOn feature must be enabled for the server instance before you can create an availability group on this instance."

To enable AlwaysOn Availability Groups, open SQL Server Configuration Manager with Run As Administrator. In SQL Server Services, right-click the instance and click Properties.

In the dialog that opens, select the AlwaysOn High Availability tab, check Enable AlwaysOn Availability Groups and click OK. We need to perform this on both servers. This requires a service restart, so restart your SQL Server services in a controlled manner.
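
If you prefer to script this step, the SqlServer PowerShell module provides Enable-SqlAlwaysOn; a minimal sketch, assuming default-instance names (adjust to your own):

```powershell
# Enable the AlwaysOn Availability Groups feature on both replicas.
# -Force restarts the SQL Server service without prompting, so plan accordingly.
Enable-SqlAlwaysOn -ServerInstance "SQL1" -Force
Enable-SqlAlwaysOn -ServerInstance "SQL2" -Force
```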

After enabling AlwaysOn Availability Groups on both servers, we click New Availability Group Wizard under AlwaysOn High Availability in SSMS, as follows.

On the next screen, we give the AG a name. I named it "IlkAG". We move forward by clicking Next.

On the following screen, we select the databases to include in the AG. Databases that are eligible for an AG show the status "Meets prerequisites". We choose TestDB.

On the next screen, in the Replicas section, click Add to connect to the second instance of the Availability Group. Make sure that the instance names are the same: for example, if your primary server is "Server1\Instance1", your secondary server should be "Server2\Instance1". In other words, the named instance on both servers is Instance1.

After the connection is complete, you should see a screen as follows.

Since we want to set the AG to be synchronous and automatic failover, we mark the required fields as follows.

For now, we leave Readable Secondary as “No”.

After performing the operations on the Replicas tab, we switch to the Endpoints tab and a screen like the one below is displayed.


When setting up an AlwaysOn Availability Group, if you run more than one instance on the same server, you will need to use a different endpoint port for each instance. The default endpoint port is 5022.

For example, suppose you have three instances. When creating the availability group for the first instance, the default port is 5022. You must change the port in the Endpoint URL when you create an availability group for your second instance. You can use 5023 for the second instance and 5024 for the third. We will use port 5023 for the instance in our example.
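
To check which port an instance's mirroring endpoint currently uses, or to create the endpoint on a non-default port yourself, something like the following can be used (the instance name is a placeholder; the port is the example value above):

```powershell
# Show the existing database-mirroring (HADR) endpoint and its port
Invoke-Sqlcmd -ServerInstance "SQL1\Instance2" -Query @"
SELECT name, port, state_desc
FROM sys.tcp_endpoints
WHERE type_desc = 'DATABASE_MIRRORING';
"@

# Create the endpoint on port 5023 for a second instance (example values)
Invoke-Sqlcmd -ServerInstance "SQL1\Instance2" -Query @"
CREATE ENDPOINT [Hadr_endpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5023)
    FOR DATABASE_MIRRORING (ROLE = ALL, ENCRYPTION = REQUIRED ALGORITHM AES);
"@
```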

Then we go to the Backup Preferences tab and see a screen like the following. This screen asks for the preferred replica for backups. You must choose one of the options:

  • Prefer Secondary: if there is an active secondary server, automated backups are performed on the secondary; if there is no active secondary, they are performed on the primary.
  • Secondary only: all automated backups must be performed on a secondary server.
  • Primary: all automated backups must be performed on the primary server.
  • Any Replica: backups can be performed on the primary or any secondary.
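
Backup jobs can honor this preference by asking SQL Server whether the current replica is the preferred one; a rough sketch of such a guard, using the walkthrough's example names:

```powershell
# Run the backup only on the replica that matches the AG backup preference
$pref = (Invoke-Sqlcmd -ServerInstance "SQL1" -Query `
    "SELECT sys.fn_hadr_backup_is_preferred_replica(N'TestDB') AS IsPreferred").IsPreferred

if ($pref -eq 1) {
    Invoke-Sqlcmd -ServerInstance "SQL1" -Query `
        "BACKUP DATABASE [TestDB] TO DISK = N'\\SQL1\AGShare\TestDB.bak' WITH INIT;"
}
```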

I select Primary and move on to the Listener tab without any further changes.

What is Listener?

There must be at least two instances in an AlwaysOn Availability Group architecture. The application must always go to the server where the database is active, and it is the listener that provides this. The listener has a virtual name and a virtual IP. The application does not know the physical names and IPs of the two servers in the architecture; it only knows the listener name or IP.

When the Listener screen opens, we enter a virtual name in the "Listener DNS Name" field, as follows.

In the Port section, I enter the port that applications will use to connect to the databases in this AG. Under Network Mode, select Static IP, click Add at the bottom and enter the virtual IP. Applications will know this IP as their database IP.

You can ask your network team for an IP. If you assign an IP that someone else is using, you will run into conflicts.

I proceed by clicking Next. The next screen asks how to perform the initial synchronization with the secondary database.

If we choose Full:

It will automatically take the full backup and log backup of each database we selected and transfer these backups to the secondary server itself.

This requires a shared folder. Both instances' SQL Server service accounts must have read and write privileges on this shared folder.

If we choose Join only:

We need to manually take a full backup and log backup of each selected database and restore them to the secondary server before passing this step.
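
The manual seeding for Join only is a standard backup/restore with NORECOVERY; roughly as follows (server, database, and share names are placeholders):

```powershell
# Back up the database and log on the primary to the shared folder...
Invoke-Sqlcmd -ServerInstance "SQL1" -Query @"
BACKUP DATABASE [TestDB] TO DISK = N'\\SQL1\AGShare\TestDB.bak' WITH INIT;
BACKUP LOG [TestDB] TO DISK = N'\\SQL1\AGShare\TestDB.trn' WITH INIT;
"@

# ...then restore both on the secondary WITH NORECOVERY so it can join the AG
Invoke-Sqlcmd -ServerInstance "SQL2" -Query @"
RESTORE DATABASE [TestDB] FROM DISK = N'\\SQL1\AGShare\TestDB.bak' WITH NORECOVERY;
RESTORE LOG [TestDB] FROM DISK = N'\\SQL1\AGShare\TestDB.trn' WITH NORECOVERY;
"@
```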

If we choose Skip initial data synchronization:

We need to manually take a full backup and log backup of each selected database and transfer them to the secondary server.

But we can do this later. I have never used this option. We choose Full and click Next.

On the next screen, the necessary checks are performed. If there is a problem, you can solve it and click Re-run Validation, or click Back to correct the setting you made incorrectly and click Next again. There was no problem with our installation.


Click Next and then Finish. In my example, everything except the listener was created correctly.

When we click the Error link next to "Create Availability Group Listener 'TestAG'", we can see the detail of the error as follows.

I usually set the port of the availability group to be the same as the instance’s port.

In our example, I set a different port to see what would happen if we give the availability group a port different from the instance's port.

You can see the error below.

Creating availability group listener resulted in an error.

Although the wizard could not create the listener, it did create the AG. We can define the listener as described above by right-clicking Availability Group Listeners and selecting Add Listener.
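
Adding the listener afterwards can also be done in T-SQL on the primary; a sketch with the example name and port from this article (the IP and subnet mask are placeholders):

```powershell
# Add a listener to the existing availability group (run against the primary)
Invoke-Sqlcmd -ServerInstance "SQL1" -Query @"
ALTER AVAILABILITY GROUP [TestAG]
    ADD LISTENER N'TestAG' (
        WITH IP ((N'10.0.0.50', N'255.255.255.0')),
        PORT = 1435);
"@
```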

The access information you give to application developers (the database access information they put in their connection strings) is as follows: TestAG,1435 or "<IP address you specified when defining the listener>",1435.

You can also connect via SSMS this way. After the process is complete, you can see that the database is synchronized.
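
A quick way to verify the connection through the listener and see which replica answered (listener name and port from the example above):

```powershell
# Connect through the listener and report which replica served the query
Invoke-Sqlcmd -ServerInstance "TestAG,1435" -Query "SELECT @@SERVERNAME AS CurrentReplica;"
```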

Difference Between Always On Failover Cluster, Database Mirroring, Always On Availability Group, Replication and Log Shipping

I wanted to write this section to make it easier for you to choose between SQL Server's technologies for HA (High Availability) and DR (Disaster Recovery) for a Citrix Virtual Apps & Desktops site setup.

Briefly, we will compare the technologies listed below.

  • Always On Failover Cluster
  • Database Mirroring
  • Always ON Availability Group
  • Replication
  • Log Shipping

Always On Failover Cluster

  • You can use it for HA.
  • The servers to be included in the Failover Cluster must be in the same windows cluster.
  • Supports automatic failover. The failover process can occur automatically if the SQL Service stops.
  • There is no disk redundancy, because the database files reside on a shared disk that both servers can see.
  • It works at the instance level (you cannot fail over a single database to the other server; all databases in that instance fail over together, which can be impractical for a DBA).
  • You cannot read from or write to secondary databases.
  • It can be used with Always ON Availability Group, Replication, and Log Shipping.

Database Mirroring

  • You can use it as an HA or DR solution: synchronous mirroring between nodes in the same data center works as HA, and asynchronous mirroring between nodes in different data centers works as DR.
  • It is database-based. If you have many databases, you need to configure mirroring for each database on the instance one by one, but it is more flexible because failover can be done per database.
  • There is disk redundancy; each node uses its own disks.
  • There is automatic failover if you set up a witness server and run synchronously.
  • You cannot read from or write to the secondary database, but you can read from a snapshot of the secondary database.
  • It is deprecated and will not be available in later versions of Microsoft SQL Server; AlwaysOn Availability Groups can be used instead of mirroring.
  • Supports automatic page repair, a nice feature for DBAs because it helps prevent the database from falling into suspect mode.

Always ON Availability Group

  • It can be used as HA or DR solution like Database Mirroring.
  • You can create an availability group by making a group of multiple databases. It is both more flexible and easier to manage than Database Mirroring. For example, an application has 7 databases. You can include these 7 databases into a single availability group. You can manage as you like. Availability Group is an improved version of Database Mirroring.
  • There is disk redundancy. Each node uses its own disk.
  • There is automatic failover if you set it up synchronously; it does not need a witness server.
  • You can read from Secondary databases.
  • Supports automatic page repair, a nice feature for DBAs because it helps prevent the database from falling into suspect mode.
  • With SQL Server 2016, we can now create an Availability Group that spans different Windows clusters (a Distributed Availability Group).

Replication

  • Replication encompasses several technologies, each offering different features, so it is difficult to describe briefly; for details, read dedicated articles on each replication type. It is usually not used for HA. I have always used it for reporting purposes.

Log Shipping

  • It's a DR solution.
  • It's database-based.
  • You can read from the secondary database.
  • There is no automatic failover.

My reasons to choose Always On Availability Group for HA:

  • It is very easy to manage Always On Availability Group.
  • You can include more than one database in an Availability Group.
  • You can use it for both HA and DR.
  • There is disk redundancy. You can read from the secondary database.
  • You can failover your availability group to the other server without anyone feeling the interruption.
  • Because you can group databases, you can get maximum benefit from the resources of both servers by running some of your availability groups on the first server and some on the second.

XenApp/XenDesktop/NetScaler Gateway Communication Workflow

SSL Connection

This is the first step, when the user types the NetScaler Gateway vServer's address into the browser. We need to focus on the SSL handshake between client and server if any issue occurs.



Authentication

Commonly, customers use LDAP domain authentication. In this article, I will use two-factor authentication as an example (LDAP + RADIUS).


Get the App/Desktop List.


Get the ICA file.


Client launches App/Desktop


Ref: https://support.citrix.com/article/CTX227054

Hyper-V VM Integration Services: List of Build Numbers

Hyper-V Integration Services are a bundled set of software which, when installed in the virtual machine, improves integration between the host server and the virtual machine. Integration services (often called integration components) are services that allow the virtual machine to communicate with the Hyper-V host. Hyper-V Integration Services is a suite of utilities in Microsoft Hyper-V, designed to enhance the performance of a virtual machine's guest operating system.

In short, the integration services are a set of drivers that let the virtual machine make use of the synthetic devices provisioned to it by Hyper-V.

Hyper-V Integration Services optimizes the drivers of the virtual environments to provide end users with the best possible user experience. The suite improves virtual machine management by replacing generic operating system driver files for the mouse, keyboard, video, network and SCSI controller components. It also synchronizes time between the guests and host operating systems and can provide file interoperability and a heartbeat.

Below is the list of Integration Services version numbers.

Windows Server 2008

Build Number | KB Article ID | Comment
6.0.6001.17101 | n/a | Windows Server 2008 RTM
6.0.6001.18016 | KB950050 | Windows Server 2008 RTM + KB950050
6.0.6001.22258 | KB956710 | Windows Server 2008 RTM + KB956710
6.0.6001.22352 | KB959962 | Windows Server 2008 RTM + KB959962
6.0.6002.18005 | KB948465 | Windows Server 2008 Service Pack 2
6.0.6002.22233 | KB975925 | Windows Server 2008 RTM + KB975925

Windows Server 2008 R2

Build Number | KB Article ID | Comment
6.1.7600.16385 | n/a | Windows Server 2008 R2 RTM
6.1.7600.20542 | KB975354 | Windows Server 2008 R2 RTM + KB975354
6.1.7600.20683 | KB981836 | Windows Server 2008 R2 RTM + KB981836
6.1.7600.20778 | KB2223005 | Windows Server 2008 R2 RTM + KB2223005
6.1.7601.16562 | n/a | Windows Server 2008 R2 Service Pack 1 Beta
6.1.7601.17105 | n/a | Windows Server 2008 R2 Service Pack 1 RC
6.1.7601.17514 | KB976932 | Windows Server 2008 R2 Service Pack 1 RTM

Windows Server 2012

Build Number | KB Article ID | Comment
6.2.9200.16384 | n/a | Windows Server 2012 RTM
6.2.9200.16433 | KB2770917 | Windows Server 2012 RTM + KB2770917
6.2.9200.20655 | KB2823956 | Windows Server 2012 RTM + KB2823956
6.2.9200.21885 | KB3161609 | June 2016 update rollup for Windows Server 2012

Windows Server 2012 R2

Build Number | KB Article ID | Comment
6.3.9600.16384 | n/a | Windows Server 2012 R2 RTM
6.3.9600.17415 | KB3000850 | Windows Server 2012 R2 RTM + KB3000850
6.3.9600.17831 | KB3063283 | Windows Server 2012 R2 RTM + KB3063283
6.3.9600.18080 | KB3063109 | Windows Server 2012 R2 RTM + KB3063109
6.3.9600.18339 | KB3161606 | June 2016 update rollup for Windows Server 2012 R2
6.3.9600.18398 | KB3172614 | July 2016 update rollup for Windows Server 2012 R2
6.3.9600.18692 | KB4022720 | June 27, 2017 Preview of Monthly Rollup
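
To check which integration services version each guest reports, the Hyper-V PowerShell module can be queried on the host; a minimal sketch:

```powershell
# List every VM with the integration services version reported by its guest
Get-VM | Select-Object Name, State, IntegrationServicesVersion
```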


Hyper-V BIN file removal to retain storage space

In short, the files used by a Hyper-V VM are as follows:

  • .XML: contains the VM configuration details.
  • .VHD and .VHDX: virtual disks that hold the current virtual disk data, including partitions and file systems.
  • .BIN: contains the memory of a virtual machine or snapshot that is in a saved state.
  • .VSV: contains the VM's saved state.
  • .AVHD and .AVHDX: differencing virtual disks, commonly used for snapshots and Hyper-V checkpoints.

The BIN file created in the virtual machine's folder is equal in size to the VM's memory and is a placeholder used to save the virtual machine state in the event that the Hyper-V host shuts down.

The BIN file contains the memory of a VM and is located inside the GUID folder. If the VM is in a powered-off state, no BIN file is present. The file is equal in size to the memory provisioned for the VM in Hyper-V Manager.

In Windows Server 2008 and Windows Server 2008 R2, starting a virtual machine would result in Hyper-V creating a .BIN file matching the size of the memory assigned to the virtual machine. Microsoft did this to ensure that there was always enough disk space available to create a saved state (which is particularly critical if the physical computer is shutting down and the virtual machine is configured to save state when the physical computer shuts down).

The BIN file is simply idle while the virtual machine is powered on; it is pre-allocated so that its space is guaranteed to be available if needed and so a save action responds quickly. However, many people did not like to see their disk space being "wasted" like this, since the BIN file sits idle while the VM is running.

To address this, starting with Windows Server 2012, Microsoft made a simple change: Hyper-V only pre-creates the .BIN file if you choose "Save the virtual machine state" as the Automatic Stop Action for the virtual machine. If you choose "Turn off the virtual machine" or "Shut down the guest operating system", no BIN file equal to the size of RAM is created.

It is still possible to save the state manually as long as there is enough room for the file. The Automatic Stop Action setting above applies only when the physical computer shuts down.

By default, all virtual machines have an Automatic Stop Action of Save, which means the state of the virtual machine is saved to disk. However, once Integration Services are enabled, the best practice is to change the Automatic Stop Action to "Shut down the guest operating system", which performs a clean shutdown and no longer needs the BIN file to hold the memory contents.

Considerations:

  • Keeping the BIN file is not recommended in a cluster environment: since VMs are configured for high availability, a VM fails over to another node if the physical computer shuts down, so there is no advantage to keeping the BIN file.
  • Consider keeping the BIN file if the Hyper-V servers are standalone (not clustered) and there are no storage space constraints.
  • A VM moves into a saved state only when the Hyper-V host is gracefully shut down; it will not move to a saved state if the Hyper-V host shuts down or restarts unexpectedly.
  • Microsoft does not recommend keeping VMs in a saved state for applications like domain controllers, databases, etc. Hence, change the Automatic Stop Action from "Save state" to "Shut down" as per Microsoft recommendations.

Steps to save storage space by removing the BIN file

  • Power off the VM.
  • Go to VM Settings -> Automatic Stop Action -> change the option from "Save the virtual machine state" to "Shut down the guest operating system".
  • Power on the VM.
  • Execute the same steps for each VM (or script it, as sketched below).
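
The same change can be scripted for all VMs on a host; a minimal sketch, assuming the VMs are already powered off (the setting cannot be changed while a VM is running):

```powershell
# Switch the Automatic Stop Action from Save to ShutDown for every powered-off VM
Get-VM | Where-Object { $_.State -eq 'Off' } |
    Set-VM -AutomaticStopAction ShutDown
```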

Note:
The above change was successfully implemented in multiple customer environments, which in turn benefited customers by reclaiming terabytes of storage space.


Local Host Cache Reintroduction: A Long-Awaited Feature

Local Host Cache (LHC) & Evolution

Local Host Cache was a core feature of the Independent Management Architecture (IMA) introduced with Citrix MetaFrame XP 1.0 in 2001. It remained in use through Citrix XenApp 6.5 and has now been reintroduced in XenApp/XenDesktop 7.12.

Technically, the LHC is a simple Access database that stores a subset of the data store on each Presentation (XenApp) server. The IMA service running on each Presentation (XenApp) server refreshes the information every 30 minutes, or whenever a configuration change is made in the farm.

The LHC's primary functions are to permit a server to function in the absence of a connection to the data store and to improve performance by caching application information.

The LHC contains information about servers, published applications, the domain, and licensing. It evolved considerably over the years and, in its last release with XenApp 6.5, allowed SQL downtimes of indefinite length.

If the data store is unreachable, the LHC contains enough information about the farm to allow normal operations for an indefinite period, if necessary. However, no new static information can be published, or added to the farm, until the farm data store is reachable and operational again.

The disappearance of LHC

With the release of the awful version 7.0 of XenApp in 2013 and the move to the XenDesktop FlexCast Management Architecture (FMA), Citrix decided to remove the Local Host Cache feature, along with many others, without offering any alternative. To be fair, Citrix converged XenApp into XenDesktop, which had already been using the FMA design since version 5 without a Local Host Cache equivalent. This decision immediately made the SQL infrastructure a critical piece of any XenApp implementation: any downtime on the SQL infrastructure would immediately cause a downtime for new sessions on the XenApp infrastructure as well. It could also have side effects with the old Citrix Web Interface.

Citrix recommends having a highly available SQL infrastructure to host XenApp and XenDesktop databases. While you can successfully implement HA for your SQL infrastructure, it does not necessarily mean that you will avoid downtimes, as many components are to be considered.

The pseudo rebirth of LHC with Connection Leasing (CL)

Facing a storm of complaints, Citrix finally started to listen to its customers and released XenDesktop 7.6 in September 2014 with the Connection Leasing (CL) feature enabled by default.

Unfortunately, CL was not a full replacement for LHC; it was an alternative offered in its place, limited to frequently used and assigned applications/desktops (up to two weeks by default). For users not using Citrix frequently, or using pooled desktops, CL was completely useless and did not resolve anything. There were also many limitations: load management, workspace control, and power actions are not supported.

The reintroduction of LHC

Citrix reached a milestone with the XenDesktop 7.12 release in December 2016. This time, they claimed to bring back all the Local Host Cache (LHC) features from XenApp 6.5, even adding a few improvements to make it more reliable. The LHC feature is offered for Cloud and on-premises implementations alongside Connection Leasing in 7.12, but it is considered the primary mechanism for brokering connections when connectivity to the site database is disrupted. Surprisingly, the Local Host Cache feature is disabled by default; let us hope Citrix enables it by default in a future version.
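
Enabling it is done from the Broker PowerShell SDK on a Delivery Controller; a minimal sketch, assuming the Citrix snap-ins are installed:

```powershell
# Load the Citrix snap-ins and turn on Local Host Cache
# (Connection Leasing is disabled, as the two cannot be active together in 7.12)
Add-PSSnapin Citrix*
Set-BrokerSite -LocalHostCacheEnabled $true -ConnectionLeasingEnabled $false
```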

When installing XenDesktop 7.12 and later, a SQL Server Express LocalDB instance is installed locally on each Delivery Controller to store the Local Host Cache. The Config Synchronizer Service (CSS) takes care of the synchronization between the remote database and the Local Host Cache (LocalDB). The secondary brokering service (Citrix High Availability Service) takes over from the principal Broker Service when an outage is detected and handles all registration and brokering operations.

There are many limitations to consider with this version of LHC:

  • LocalDB is a runtime version of SQL Server with specific licensing that limits usage to four cores.
  • No support for pooled desktops, which is a huge downside.
  • No changes can be made to the farm (assignments, publications, power actions, etc.); you cannot even open the consoles (Director & Studio) or use PowerShell.
  • There is no control over the LHC election process, and only a single Delivery Controller handles all VDA registrations and brokers sessions for the whole zone during an outage, which limits a zone to 5,000 VDAs (not enforced).
  • Most importantly, communication is one-way only, from the remote SQL database to the LHC.
  • The new version of the Local Host Cache does not assure zero downtime. There is also a delay before users can actually connect: when the remote database goes down, VDAs still have to re-register with the newly (and only) elected Delivery Controller. This can result in users not having icons in StoreFront, or being unable to start new sessions, for a short period.

In conclusion, it took Citrix almost four years to deliver a rough equivalent of the good old Local Host Cache for XenDesktop 7.x. The database is no longer a single point of failure in a XenDesktop/XenApp deployment. However, customers with large deployments are not supported with this version of the Local Host Cache, and some of the huge limitations may discourage you from using the feature.


PVS Streaming Service Abrupt Termination: Cache Mode Change Procedure for a Production vDisk

Issue:

The PVS Stream Service terminated abruptly and intermittently (approximately once a month), causing user sessions to freeze and leaving users unable to launch HSDs.

Environment:

  • 2 Citrix PVS servers (VMs), version 7.6
  • 2,000-3,000 concurrent users
  • 86 HSDs & 6 golden images
  • Microsoft Hyper-V 2012 R2 (15 nodes) on Cisco UCS

Observations:

  • The issue occurred once or twice a month with no common pattern in days or hours, and it recurred on both PVS servers at the same time.
  • There were no changes in the environment.
  • The onsite engineer reported that the issue had existed for three months and was resolved each time by restarting the PVS servers.
  • One day, the same issue repeated but was not resolved by restarting the PVS servers -> the issue was escalated to the support team (me).
  • Observed Event ID 11: "Detected one or more hung threads, DbAccess error: <Record was not found> <-31754> (in ServerStatusSetDeviceCount() called from SSProtocolLogin.cpp:2903)" -> indicates thread hangs under the Stream Service and DB access errors.
  • Observed multiple vDisk retries on the problematic target devices: 11 at boot time and approximately 611 per hour during a session.
  • Observed that the recommended McAfee exclusions were not in place -> stopped the McAfee service and restarted the PVS server -> the PVS Stream Service was stable for some time on one PVS server and then terminated again -> due to time constraints, logged a call with the vendor (Citrix).
  • After 2 hours, Citrix support joined the call and started collecting CDF traces and a procdump of the terminating Stream Service.
  • After a few hours, the issue resolved itself, and Citrix support was unable to find a root cause from the collected logs.
  • Within 2 months the issue repeated twice, and the customer grew frustrated because no root cause had been found for the intermittent abrupt Stream Service terminations.
  • The support team (myself) analyzed the environment and observed that the cache mode was configured as "Cache on server", which is not recommended for a production environment. The best practice is "Cache in device RAM with overflow on hard disk", which reduces load on the PVS server and gives optimal performance -> shared this observation with Citrix support and requested their feedback.

We explained to the customer that missing best practices can lead to these kinds of intermittent issues. Since no root cause was found, and keeping the cache on the server is not a best practice for a production environment, we prepared a plan to change the cache configuration to "Cache in device RAM with overflow on hard disk".

The current PVS storage configuration for the cache is as follows:

PVS1 (VM): 1,700 GB allocated through a virtual HBA (total golden image size is 440 GB; the remainder is for write cache)

PVS2 (VM): 1,700 GB allocated through a virtual HBA (total golden image size is 440 GB; the remainder is for write cache)

The proposed storage configuration change is as follows:

After consulting multiple blogs, the write cache proposed for all images (profiles) is 20 GB per target device. Therefore, for the 86 HSDs, 1,820 GB is required, and it should be presented to the entire Hyper-V cluster, since the HSDs are hosted across the cluster.
