
PoC Guide: Google Cloud Platform (GCP) Zone Selection Support with Citrix DaaS

  • Contributed By: Matthew Harper. Special Thanks To: Josh Penick, Elaine Welch.

Overview

Citrix DaaS supports zone selection on Google Cloud Platform (GCP) to enable sole-tenant node functionality. You specify the zones where you want to create VMs in Citrix Studio. Sole-tenant nodes allow you to group your VMs together on the same hardware or separate them from the VMs of other projects. Sole-tenant nodes also enable you to comply with network access control policies and security and privacy requirements such as HIPAA.

This document covers:

  • Configuring a Google Cloud environment to support zone selection in Citrix DaaS environments.

  • Provisioning Virtual Machines on Sole Tenant nodes.

  • Common error conditions and how to resolve them.

Note: The GCP console screenshots in this article may not be fully up to date, but the functionality shown is unchanged.

Prerequisites

You must have existing knowledge of Google Cloud and Citrix DaaS for provisioning machine catalogs in a Google Cloud Project.

To set up a GCP project for Citrix DaaS, follow the instructions here.

Google Cloud sole tenant

Sole tenancy provides exclusive access to a sole-tenant node, which is a physical Compute Engine server dedicated to hosting only your project's VMs. Sole-tenant nodes allow you to group your VMs together on the same hardware or separate your VMs from those of other projects. These nodes can help you meet dedicated hardware requirements for Bring Your Own License (BYOL) scenarios.

Sole-tenant nodes enable customers to comply with network access control policies and security and privacy requirements such as HIPAA. Customers can create VMs in desired locations where sole-tenant nodes are allocated. This functionality supports Windows 10-based VDI deployments. A detailed description of sole tenancy can be found on the Google documentation site.

Reserving a Google Cloud sole tenant node

1. To reserve a sole-tenant node, access the Google Cloud Console menu, select Compute Engine, and then select Sole-tenant nodes:

poc-guides_gcp-sole-tenant_select-nodes.png

2. Sole tenants in Google Cloud are captured in Node Groups. The first step in reserving a sole tenant platform is to create a node group. In the GCP Console, select Create Node Group:


3. Start by configuring the new node group. Citrix recommends that the Region and Zone selected for your new node group allow access to your domain controller and the subnets utilized for provisioning catalogs. Consider the following:

  • Fill in a name for the node group. In this example, we used mh-sole-tenant-node-group-1.

  • Select a Region. For example, us-east1.

  • Select a Zone where the reserved system resides. For example, us-east1-b.

All node groups are associated with a node template, which indicates the performance characteristics of the systems reserved in the node group. These characteristics include the number of virtual CPUs, the quantity of memory dedicated to the node, and the machine type used for machines created on the node.

Select the drop-down menu for the Node template. Then select Create node template:


4. Enter a name for the new template. For example:

mh-sole-tenant-node-group-1-template-n1

5. Next, select the Node type most applicable to your needs from the drop-down menu.


Note: You can refer to this Google documentation page for more information on different node types.
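If you prefer the command line, you can also list the node types available in a given zone with gcloud. A minimal sketch, assuming the us-east1-b zone used in this example:

gcloud compute sole-tenancy node-types list --zones=us-east1-b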

6. Once you have chosen a node type, click Create.

7. The Create node group screen reappears after creating the node template. Click Create.

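The node template and node group can also be created from Cloud Shell instead of the console. The following sketch assumes the names, region, and zone from this example, and an n1-node-96-624 node type chosen for illustration; substitute the values appropriate to your environment:

gcloud compute sole-tenancy node-templates create mh-sole-tenant-node-group-1-template-n1 --node-type=n1-node-96-624 --region=us-east1

gcloud compute sole-tenancy node-groups create mh-sole-tenant-node-group-1 --node-template=mh-sole-tenant-node-group-1-template-n1 --target-size=1 --zone=us-east1-b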

Creating the VDA master image

To deploy machines on a sole-tenant node, the catalog creation process requires extra steps when creating and preparing the master image for the catalog.

Machine Instances in Google Cloud have a property called Node affinity labels. Instances used as master images for catalogs deployed to sole-tenant environments need to have a Node affinity label that matches the name of the target node group. There are two ways to apply the affinity label:

  1. Set the label in the Google Cloud Console when creating an Instance.

  2. Use the gcloud command line to set the label on an existing instance.

An example of both approaches follows.

Set the node affinity label at instance creation

This section does not cover all the steps necessary for creating a GCP Instance. It provides sufficient information and context to understand the process of setting the Node affinity label. Recall that in the examples above, the node group was named mh-sole-tenant-node-group-1. This is the value we need to apply as the Node affinity label on the Instance.

The new instance screen appears. At the bottom of the screen is a section for managing settings related to management, security, disks, networking, and sole tenancy.

To set the label on a new Instance:

  1. Click the section once to open the Management settings panel.

  2. Then click Sole Tenancy to see the related settings panel.


3. The panel for setting the Node affinity label appears. Click Browse to see the available Node Groups in the currently selected Google Cloud project:


4. The Google Cloud Project used for these examples contains one node group, the one that was created in the earlier example.

To select the node group:

  1. Click the desired node group from the list.

  2. Then click Select at the bottom of the panel.


5. After clicking Select in the previous step, you will be returned to the Instance creation screen. The Node affinity labels field contains the needed value to ensure catalogs created from this master image are deployed to the indicated node group:

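The same result can be achieved from the command line at creation time. A minimal sketch, assuming the node group from this example; the machine type and the Windows Server 2019 public image family are illustrative assumptions:

gcloud compute instances create s2019-vda-base --zone=us-east1-b --machine-type=n1-standard-4 --node-group=mh-sole-tenant-node-group-1 --image-family=windows-2019 --image-project=windows-cloud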

Set the node affinity label for an existing instance

1. To set the Node affinity label for an existing Instance, access the Google Cloud Shell and use the gcloud compute instances command.

More information about the gcloud compute instances command can be found on the Google Developer Tools page.

Include three pieces of information with the gcloud command:

  • Name of the VM. This example uses an existing VM named s2019-vda-base.

  • Name of the Node group. The node group name, previously defined, is mh-sole-tenant-node-group-1.

  • The Zone where the Instance resides. In this example, the VM resides in the us-east1-b zone.

2. The Cloud Shell button is at the top right of the Google Cloud Console window. Click the Cloud Shell button:


3. When the Cloud Shell first opens, it looks similar to the following:


4. Run this command in the Cloud Shell window:

gcloud compute instances set-scheduling "s2019-vda-base" --node-group="mh-sole-tenant-node-group-1" --zone="us-east1-b"

5. Finally, verify the details for the s2019-vda-base instance:

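You can also confirm the label from Cloud Shell. A minimal sketch, assuming the instance name and zone from this example:

gcloud compute instances describe s2019-vda-base --zone=us-east1-b --format="value(scheduling.nodeAffinities)"

The output shows the node affinity key and values applied to the instance; verify that the value matches the node group name.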

Google shared VPCs

If you intend to use Google Sole-tenants with a Shared VPC, refer to the GCP Shared VPC Support with Citrix DaaS document. Shared VPC support requires extra configuration steps for Google Cloud permissions and service accounts.

Create a Machine Catalog

After performing the previous steps in this document, you can create a machine catalog. Access Citrix Cloud, navigate to the Citrix Studio console, and use the following steps.

1. In Citrix Studio, select Machine Catalogs:


2. Select Create Machine Catalog:


3. Click Next to begin the configuration process:


4. Select an operating system type for the machines in the catalog. Click Next:


5. Accept the default setting that the catalog utilizes power-managed machines. Then, select MCS resources. In this example, we use the Resources named GCP1-useast1 (Zone: My Resource Location). Click Next:

 

Note: These resources come from a previously created host connection, representing the network and other resources, like the domain controller and reserved sole tenants, that are used when deploying the catalog. The process of creating the host connection is not covered in this document. More information can be found on the Connections and resources page.

6. The next step is to select the master image for the catalog. Recall that to utilize the reserved Node Group, we must select an image with the Node affinity value set accordingly. For this example, we use the image from the previous example, s2019-vda-base.

7. Click Next:


8. This screen indicates the storage type used for the virtual machines in the machine catalog. For this example, we use the Standard Persistent Disk. Click Next:


9. This screen indicates the number of virtual machines and the zones to which the machines are deployed. In this example, we have specified three machines in the catalog. When using sole-tenant node groups for machine catalogs, you must select only zones containing reserved node groups. In our example, we have a single node group that resides in zone us-east1-b, so that is the only zone selected. Click Next:


10. This screen provides the option to enable Write-back cache. For this example, we are not enabling this setting. Click Next:


11. During the provisioning process, MCS communicates with the domain controller to create hostnames for all the machines being created:

  • Select the Domain into which the machines are created.
  • Specify the Account naming scheme when generating the machine names.

Since the catalog in this example has three machines, we have specified a naming scheme of MySTVms-## for the machines:

  • MySTVms-01

  • MySTVms-02

  • MySTVms-03

12. Click Next:


13. Specify the credentials used to communicate with the domain controller, as mentioned in the previous step:

  • Select Enter Credentials.
  • Supply the credentials, then click OK.


14. This screen displays a summary of the key information gathered during the catalog creation process. The final step is to enter a catalog name and an optional description. In this example, the catalog name is My Sole Tenant Catalog. Enter the catalog name and click Finish:


15. When the catalog creation process finishes, the Citrix Studio Console resembles the following:


16. Use the Google Console to verify that the machines were created on the node group as expected:

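The same verification can be done from Cloud Shell. A minimal sketch, assuming the node group and zone from this example; the output lists each node in the group along with the instances scheduled on it:

gcloud compute sole-tenancy node-groups list-nodes mh-sole-tenant-node-group-1 --zone=us-east1-b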

Note: Migrating machine catalogs from the Google Cloud general/shared space to sole-tenant nodes is currently not supported.

Commonly encountered issues and errors

Working with any complex system containing interdependencies can result in unexpected situations. This section describes a few common issues and errors encountered when setting up and configuring Citrix DaaS and GCP sole-tenants.

The catalog was created successfully, but machines are not provisioned to the reserved node group

If the catalog was created successfully but the machines were not provisioned to the reserved node group, the most likely reasons are:

  • The node affinity label was not set on the master image.

  • The node affinity label value does not match the name of the Node group.

  • Incorrect zones were selected in the Virtual Machines screen during the catalog creation.

Catalog creation fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone.’

This situation presents the following error when View details is selected in the Citrix Studio dialog window:

System.Reflection.TargetInvocationException: One or more errors occurred. Citrix.MachineCreationAPI.MachineCreationException: One or more errors occurred. System.AggregateException: One or more errors occurred. Citrix.Provisioning.Gcp.Client.Exceptions.OperationException: Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone.

One or more of the following are the likely causes of receiving this message:

  • No node group is reserved in one of the zones selected during catalog creation.

  • The reserved node group does not have enough capacity for the number of machines requested.

  • The node affinity label on the master image does not match the name of a node group in the project.

Upgrading an existing catalog fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone’

There are two cases in which this occurs:

  1. You are upgrading an existing sole tenant catalog that has already been provisioned using Sole Tenancy and Zone Selection. The causes of this are the same as those found in the earlier entry Catalog creation fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone’.

  2. You are upgrading an existing non-sole tenant catalog and do not have a sole-tenant node reserved in each zone that is already provisioned with machines for the catalog. This case is considered a migration, intending to move machines from the Google Cloud common/shared runtime space to a sole-tenant node group. As noted earlier in this document, this is not supported.

Unknown errors during catalog provisioning

If you encounter an unknown error dialog when creating the catalog, selecting View details displays the full error text. There are a few things you can check:

  • Ensure that the Machine Type specified in the Node Group Template matches the Machine Type for the master image Instance.

  • Ensure that the Machine Type for the master image has 2 or more CPUs.
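To confirm the CPU count of a machine type from the command line, here is a minimal sketch assuming a hypothetical n1-standard-4 machine type in the example zone:

gcloud compute machine-types describe n1-standard-4 --zone=us-east1-b --format="value(guestCpus)"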

Test plan

This section contains some exercises to help you get a feel for Citrix DaaS support of Google Cloud sole-tenants.

Single tenant catalog

Reserve a node group in a single zone and provision both a persistent and a non-persistent catalog. During the steps below, monitor the node group using the Google Cloud Console to ensure proper behavior:

  1. Power off the machines.

  2. Add machines.

  3. Power all machines on.

  4. Power all machines off.

  5. Delete some machines.

  6. Delete the machine catalog.

  7. Update the catalog.

  8. Update the catalog from the non-sole tenant template to the sole tenant template.

  9. Update the catalog from the sole tenant template to the non-sole tenant template.

Two zone catalog

Like the exercise above, reserve two node groups and provision a persistent catalog in one zone and a non-persistent catalog in the other. During the steps below, monitor the node groups using the Google Cloud Console to ensure proper behavior:

  1. Power off the machines.

  2. Add machines.

  3. Power all machines on.

  4. Power all machines off.

  5. Delete some machines.

  6. Delete the machine catalog.

  7. Update the catalog.

  8. Update the catalog from the non-sole tenant template to the sole tenant template.

  9. Update the catalog from the sole tenant template to the non-sole tenant template.

