  1. Overview Modern Citrix management through WEM tools provides advanced user profile solutions to deploy traditional file-based and container-based profiles in a simplified way through built-in templates and granular configuration for your WEM configuration sets. This guide shows how to deploy a Citrix Profile Management solution using Workspace Environment Management and the WEM Tool Hub. Prerequisites You will use the Workspace Environment Management toolset to deploy your profile solution. These tools can be run from the same location, allowing you to centralize the configuration. WEM Web console: The new Citrix web console for WEM allows you to use the out-of-the-box templates to set up your profile solution. Run Citrix Workspace Environment Management Web Console.exe for on-premises environments to install the web console services. By default, the web console services are installed in the following folder: C:\Program Files (x86)\Citrix\Workspace Environment Management Web Console. WEM Tool Hub: WEM Tool Hub is a collection of tools that aims to simplify the configuration experience for Workspace Environment Management administrators. As part of the profile configuration, you need a storage repository for your profiles. That’s where you will use the WEM Tool Hub to configure everything from the same place. This tool provides a configuration wizard to create the repository on your file server, create the shared folder, and assign it to your users. Go to the WEM Utilities tab in the WEM web console to download it. Profile Configuration The first step is to enable the Profile Management settings in the Workspace Environment Management web console. Go to Configuration Set name > Profiles > Profile Management Settings and click the Enable toggle. This enables the configuration options and the built-in templates. Once enabled, you can access Quick Setup in the top right corner. This is where you access the built-in profile templates. You have two options: Restore a backup of a previous profile configuration. Start a new configuration with a template. We will use Start with Template to configure a fresh profile solution for our deployment. After this, you start your configuration by selecting the type of profile solution you want. In our case, we choose Container-based. The template-based deployment through Workspace Environment Management lets you centralize your profile configuration. Setting up the Profile solution The first step in setting up the profile solution is to set the basic settings. There are two mandatory inputs for the configuration and a few more optional settings: Enable Profile Management: This option enables WEM to manage your profile settings. Set path to user store: This is the location where the profiles will be stored. Note: To configure this, we will use the WEM Tool Hub advanced feature to centralize the configuration. For file-based profiles, use the Basic Settings section to configure the user store path as seen here: For container-based profiles, use the Profile Container section to configure the Profile Container content as seen here: Install and Configure the WEM Tool Hub Find the WEM Tool Hub install file you downloaded earlier. Install and open the WEM Tool Hub and select User Store Creation Tool. This feature starts a configuration wizard to create the store for the profiles. It walks you through the steps where you enter the required information: Where would you like to create the user store? 
Specify whether you want to use an existing machine or another machine, such as a file server. Enter the server name and admin credentials to create the connection. Specify the folder path. You can select an existing folder, or you are notified if you are trying to create a new folder with an existing name. (Optional) Create a shared folder. WEM Tool Hub allows you to create a shared folder without accessing the file server. User Groups: Specify which Active Directory groups will be assigned to this store. Once you enter the required information, a folder is created to use as your user store path. Copy the information and add it to the profile configuration. (A scripted alternative for creating this share is sketched at the end of this guide.) Profile Container Configuration Once you create the profile container folder in the WEM Tool Hub and enter the path in the WEM configuration template, you can continue fine-tuning your profile solution. You have advanced options to manage your profile: Enable folder and file exclusions. Enable file inclusions. Enable VHD auto-expansion: If enabled, when the profile container reaches 90% utilization, it automatically expands by 10 GB, with a maximum capacity of 80 GB. Depending on your needs, you can adjust the default auto-expansion settings using the following options: Auto-expansion trigger threshold (%), Auto-expansion increment (GB), and Auto-expansion limit (GB). Set user and group access to profile containers. Profile Handling These optional settings allow you to specify how to manage the user profile. Depending on your profile needs, you can choose the following options: Delete locally cached profiles on logoff: This setting deletes locally cached profiles when users log off. You can also set a delay before deleting cached profiles; the time is set in seconds, and the value can be between 0 and 600 seconds. Enable Template Profile: Profile Management can use a centrally stored template when creating profiles. The template can be a standard roaming, local, or mandatory profile on any network file share. There are three options: Template overrides local profile. Template overrides roaming profile. Use template profile as Citrix mandatory profile. Note: In Path to the template profile, enter the location of the profile you want to use as a template or mandatory profile. This path is the full path to the folder containing the NTUSER.DAT registry file and any other folders and files required for the template. Advanced Settings Profile configuration through Workspace Environment Management allows you to apply advanced settings. In this section, we set up the following features: Microsoft Applications Profile Containers: This part allows you to configure your profile container for Outlook, OneDrive, and Universal Windows Platform. Microsoft Outlook search index: The user-specific Microsoft Outlook offline folder file (*.ost) and Microsoft search database are roamed along with the user profile, improving the user experience when searching mail in Microsoft Outlook. OneDrive container: Profile Management roams OneDrive folders with users by storing the folders on a VHDX disk. The disk is attached during logins and detached during logoffs. MSFT - UWP: Universal Windows Platform roaming. Advanced VHD settings: Here, you can define the size of the containers and enable features such as disk compaction and multi-session write-back for profile containers. Multi-session write-back for profile containers: If enabled, Profile Management saves changes in multi-session scenarios for FSLogix Profile Container and Citrix Profile Management profile containers. 
If the same user launches multiple sessions on different machines, changes made in each session are synchronized and saved to the user’s profile container disk. Customize storage path for VHDX files. This option lets you specify a separate path to store VHDX files. By default, VHDX files are stored in the user store. VHD disk compaction. If enabled, VHD disks are automatically compacted on user logoff when certain conditions are met. This policy lets you save the storage space consumed by the profile container, OneDrive container, and mirror folder container. You can adjust the default VHD compaction settings and behavior depending on your needs and available resources. Profile Validation Once a user session is initiated, Workspace Environment Management will create the Profile container in the predefined store location created with the WEM Tool Hub. Summary This guide walked you through the installation and configuration of Citrix Profiles for a Citrix DaaS or Citrix Virtual Apps and Desktops deployment, utilizing Workspace Environment Management tools to centralize the configuration and management. During the process, we used the WEM Tool Hub to create the profile store and templates for the Workspace Environment Management Web console. Finally, we validated the creation of the profile.
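If you prefer to stage the user store share outside the WEM Tool Hub wizard, or want to script it for repeatable builds, the following PowerShell sketch shows the general idea under stated assumptions: the server, drive, share name, and Active Directory group are placeholders, the permissions are deliberately simplified, and a production profile share typically needs more granular NTFS ACLs than shown here.

```
# Run on the file server that will host the user store.
# All names below are placeholders - adjust to your environment.
$storePath = 'D:\WEMProfiles'
$shareName = 'WEMProfiles$'                  # hidden SMB share for the user store
$userGroup = 'CONTOSO\WEM-Profile-Users'     # AD group whose members get profiles here

# Create the folder that backs the share
New-Item -Path $storePath -ItemType Directory -Force | Out-Null

# Publish the folder as an SMB share and grant the profile users access
New-SmbShare -Name $shareName -Path $storePath -FullAccess $userGroup | Out-Null

# Grant modify rights on the folder (inherited by subfolders and files)
icacls $storePath /grant "$($userGroup):(OI)(CI)M" | Out-Null

# Example of the resulting user store path to reference in the WEM template:
# \\FILESERVER01\WEMProfiles$
```

The resulting UNC path is what you enter as the store location in the WEM template, typically with a per-user subfolder variable such as %USERNAME% appended.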
  2. Overview The Citrix Global App Configuration Service allows Citrix administrators to centrally manage end users' Citrix client experience. Over 360 settings help streamline experiences like updates and plug-ins, Citrix App Protection, Citrix Enterprise Browser, HDX experiences, and more on a per-operating-system basis. The Citrix Global App Configuration service is available for cloud, hybrid, or on-premises Citrix deployments. This POC guide provides high-level instructions for using the Global App Configuration service and details how to use the service to manage: Citrix Workspace app versions Citrix Secure Access agent Citrix Endpoint Analysis plug-in 3rd party plug-ins (Zoom VDI Plug-in Manager) Citrix Workspace app settings without a Citrix policy or Group Policy Citrix Enterprise Browser Prerequisites Citrix Workspace app must be at or above the following versions: Windows (Current Release - 2106, LTSR - 2203.1) Mac (2203.1) iOS (2104) HTML5 (2111) ChromeOS (2203) Android (2104) A valid Citrix Cloud account. If you need to create a Citrix Cloud account, visit here. The addresses <https://discovery.cem.cloud.us>, <https://gacs-discovery.cloud.com>, and <https://gacs-config.cloud.com> must be reachable (a quick reachability check is shown at the end of this guide). This guide features the configuration and installation of the Zoom VDI plug-in; therefore, the prerequisites found here must also be completed. For on-premises StoreFront deployments, the StoreFront URL must be claimed before you can configure settings. Follow the instructions here to claim the URL. Manage Citrix Workspace app 1. Sign in to Citrix Cloud. 2. Once signed in, click the top left hamburger menu and click Workspace Configuration. 3. Click App Configuration. 4. Select your store and click Configure. 5. Change the Workspace URL channel from Production to Test Channel. 6. Select Mac and Windows, and click Updates and Plug-ins. 7. Expand Citrix Workspace App Version, select Windows, and click Edit. 8. Uncheck Use default settings, select Update type Long Term Service Release (LTSR), and select Fast for Delay group. Click Save Draft. Note: Current Release (CR) and Long-Term Service Release (LTSR) releases are supported. The last three versions are available when using the CR release. 9. Click Yes on the Save Settings pop-up screen. 10. Select Mac, then click Edit. 11. Uncheck Use default settings, select Update type Current Release, and select Fast for Delay group. Click Save Draft. 12. Click Yes on the Save Settings pop-up screen. 13. Click Publish Drafts. 14. Click Save on the Save settings pop-up screen. 15. The Citrix Workspace app version is now configured via the Global App Configuration service. Note: If enabling the VDA auto-update feature in the Citrix Global App Configuration service, you must update the VDA registry manually. Please see this documentation for more information. Manage Plug-ins In our deployment, the Citrix Secure Access agent allows end users to access private apps without a traditional VPN. We will configure the Citrix Global App Configuration service to install and automatically update the agent to the latest version. 1. Expand Secure Access Plug-in in Updates and Plug-ins. 2. Select Windows and click Edit. 3. Uncheck Use default settings, select Install and update, and check Install the plug-in silently after the end user adds the store. Click Save Draft. Note: Learn more about the deployment mode settings used in this step here. 4. Expand Zoom VDI Plug-In Management. 5. Select Windows and click Edit. 6. Uncheck 
Use default settings, select Install and update, and check Install the plug-in before the end user logs in. Click Save Draft. Note: This option works only at the time of store addition. It won't work when the session times out and a new login is made. 7. Click Yes on the Save Settings pop-up. 8. Select Mac and click Edit. 9. Uncheck Use default settings, select Install and update, and check Install the plug-in before the end user logs in. Click Save Draft. 10. Click Yes on the Save Settings pop-up. 11. Click Publish Drafts. 12. Click Save on the Save settings pop-up. The Citrix Secure Access agent and Zoom VDI Plug-in Manager are now configured to install and update with the Citrix Global App Configuration service. Manage Citrix Workspace app Settings With over 360 configuration settings available within the Global App Configuration service, Citrix administrators can reduce the overhead of managing experiences separately for different users and devices across the enterprise. For our deployment, we will use the Global App Configuration service to enable several App Experience, Security, and Session Experience settings that typically must be configured through Citrix policies or Group Policy. 1. Select Mac and Windows from the App Configuration > URL Configuration page. 2. Expand Security and Authentication and select App Protection. 3. Expand Anti Key Logging, select Mac and Windows, and set it to Enable. 4. Expand Anti Screen Capture, select Mac and Windows, and set it to Enable. Note: Starting with the Citrix Workspace app for Windows 2302 or Mac 2301 versions, you can configure App Protection for authentication screens and self-service plug-ins using the Global App Configuration service. Additionally, these configurations do not apply to Virtual Apps and Desktops or to web and SaaS apps. The Delivery Controller and Citrix Secure Private Access continue to control these resources. 5. Select Authentication. 6. Expand Secure Access Auto Login, select Mac and Windows, and switch to Enabled. (This setting allows the Citrix Workspace app user to perform single sign-on to the Citrix Secure Access client using the store configured on the Citrix Workspace app.) 7. Expand Session Experience and select Clipboard. Expand the Clipboard Redirection setting, choose Windows, and switch to Enabled. 8. Click Publish Drafts. 9. Click Save on the Save settings pop-up screen. 10. Your settings are now saved. 11. Click View configured changes to review the changes made. 12. Select the Platform drop-down and choose Windows. Click Apply. 13. Scroll to Citrix Workspace app version and click View changes in full. 14. Review the changes and click Close. 15. Review the other changes made by selecting View changes in full for each. Review your current Citrix and Group policies to see which Citrix Workspace app configuration settings can be moved into the Citrix Global App Configuration service. Manage Citrix Enterprise Browser The Citrix Global App Configuration service can configure Citrix Enterprise Browser settings, allowing Citrix administrators to set various settings or system policies for the Citrix Enterprise Browser. For our deployment, we will set the Citrix Enterprise Browser as the default browser for all web and SaaS apps, remove the ability to save browser history, and change how default cookies are handled. 1. On the App configuration > URL Configuration page, select Mac and Windows, then Enterprise Browser. 2. 
Scroll to and expand Open All SaaS Apps Through Citrix Enterprise Browser, select Mac and Windows, and switch each to Enabled. 3. Expand Saving Browser History Disabled, select Mac and Windows, and switch each to Enabled. 4. Scroll to and expand Default Cookies, select Mac and Windows, and choose Keep cookies for the session duration from the dropdown. 5. Click Publish Drafts. 6. Click Save on the Save settings pop-up. Your Citrix Enterprise Browser settings are now configured. Summary This guide provides high-level instructions for Citrix administrators to test the Citrix Global App Configuration service in a proof-of-concept environment. You learned how to configure the service, deploy and update the Citrix Workspace app, Secure Access agent, and Zoom VDI plug-in Manager, and configure several of the over 360 Citrix Workspace app settings available to be managed through the service. To learn more about the Citrix Global App Configuration service, visit the following: Citrix Global App Configuration service Product Documentation Citrix Global App Configuration service Tech Brief Citrix Global App Configuration service Settings and Behaviors FAQ
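As a quick sanity check for the prerequisite that the Global App Configuration service endpoints are reachable, the following PowerShell snippet tests TCP 443 connectivity to the three addresses listed above from a client network segment. It verifies basic reachability only, not proxy or TLS inspection behavior.

```
# Endpoints taken from the prerequisites section above
$endpoints = 'discovery.cem.cloud.us', 'gacs-discovery.cloud.com', 'gacs-config.cloud.com'

foreach ($endpoint in $endpoints) {
    # Test-NetConnection returns an object with a TcpTestSucceeded boolean
    $result = Test-NetConnection -ComputerName $endpoint -Port 443 -WarningAction SilentlyContinue
    '{0} reachable on 443: {1}' -f $endpoint, $result.TcpTestSucceeded
}
```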
  3. Overview The Citrix Virtual Delivery Agent for macOS (Citrix VDA for macOS) enables HDX access to a macOS remote desktop from any device with the Citrix Workspace app installed. The Citrix VDA for macOS is designed and engineered as “yet another VDA” backed by Citrix's leading HDX technologies within the DaaS/CVAD product family. It adheres to the existing Citrix product architecture. It follows the common roadmap of HDX features and all interfaces defined between critical components in DaaS/CVAD to ensure that our customers' knowledge and experience can be fully reused in this new VDA. This deployment guide provides the steps to install and configure the Citrix VDA for macOS on a non-Active Directory-joined macOS desktop. The following steps are covered in the guide: Prepare the installation of the non-domain-joined VDA. Install the Citrix VDA for macOS. Create the Delivery Group. Connect via Citrix Gateway Service to the Citrix VDA for macOS. Note: The Citrix VDA for macOS is currently in public tech preview. This guide will be updated once this feature is Generally Available. Prerequisites Any Apple Silicon (M1, M2, and M3 families) based macOS device. macOS Ventura 13 or Sonoma 14. The following network ports must be open on the macOS device: 443 (TCP/UDP for outbound traffic to Citrix Cloud). 1494 and 2598 (Optional. TCP/UDP inbound. Not required while CWA connects via Citrix Gateway Service; only required when CWA connects to the Citrix VDA for macOS directly or via an on-premises NetScaler Gateway) An existing Citrix DaaS subscription Citrix Workspace app 2402 or later (Windows, Linux, Mac) Citrix VDA for macOS is downloaded to your macOS device from Citrix Downloads. Note: The EAR for Citrix VDA for macOS should not be used in any production environment under any circumstances. Prepare the Installation 1. Log in to Citrix DaaS, open Web Studio, and select Machine Catalogs. 2. Click Create Machine Catalog. 3. Select Remote PC Access and click Next. (The Single-session OS option is also supported.) 4. Select "I want users to connect to the same (static) desktop each time they log on" and click Next. 5. Select the minimum functional level for this catalog and click Next on the Machine Accounts page. 6. Click Next on the Scopes page. 7. Click Next on the Workspace Environment Management (Optional) page. 8. Leave "Enable VDA upgrade" unchecked and click Next. 9. Name your Machine Catalog and click Finish. 10. The Machine Catalog is now created. 11. Right-click on the Citrix VDA for macOS Machine Catalog and select Manage Enrollment Tokens. 12. Click Generate. 13. Enter your Token name, select Use current date and time for start, enter 100 in the Specify how many times the token can register VDAs field, choose an appropriate end date for the token to allow VDA registrations, and click Generate. 14. Click Copy. 15. Optionally, click Download to download this token for later usage. 16. You will now see your active Enrollment token. Click Close. Install the VDA 1. On your macOS device, download .NET 6.0 from https://dotnet.microsoft.com/en-us/download/dotnet/6.0. 2. Select .NET Runtime v 6.0.29 and choose the macOS Arm64 download link. 3. Install the Arm64 .NET Runtime package for macOS and check the installation directory path using the command: which dotnet 4. To begin the VDA installation, select the Citrix VDA for macOS installer and double-click it. 5. Click Continue. 6. Click Continue on the License Agreement. 7. Click Agree. 8. 
Select Install for all users of this computer and click Continue. 9. Click Install. 10. Enter the administrator password and click Install Software if prompted. 11. The Citrix VDA for macOS installs. 12. Choose the required option on the vdaconfig UI and click Open Screen Recording Preference to enable Citrix Graphics Service. Then click Open System Settings. 13. Enable Citrix Graphics Service and close the window. 14. Click Open Accessibility Preferences. 15. Enable the Citrix Input Service. 16. Validate that .NET 6.0 is installed correctly. 17. Copy the Enrollment Token you generated earlier, paste it into Enroll with Token, then click Enroll. 18. Enter the administrator password if prompted and click OK. 19. Once your enrollment is successful, you will receive the enrolled successfully message as seen here: 20. Click Close. Your Citrix VDA for macOS has been installed. 21. Return to the Citrix DaaS Web Studio console and verify that the macOS device is registered with the Machine Catalog (an optional scripted check is shown after the Summary below). Create Delivery Group 1. Within Citrix DaaS Web Studio, select Delivery Groups, then click Create Delivery Group. 2. Select your Citrix VDA for macOS Machine Catalog and click Next. 3. Select which users can access the Delivery Group and click Next. 4. Add the required Desktop Assignment Rule and click Next. 5. Select Next on the App Protection screen. 6. Click Next on the Scopes window. 7. Select the appropriate License Assignment for your Citrix DaaS deployment and click Next. 8. Click Next on the Policy Set window. 9. Provide a name for your Delivery Group and click Finish. 10. Your Delivery Group is now created and ready for user access. Enable Rendezvous Our deployment will contain non-domain-joined macOS devices. The Rendezvous v2 protocol is used in this scenario, so no Cloud Connectors are required. However, we must enable the Rendezvous Citrix policy for our deployment to work. 1. Within Citrix Web Studio, select Policies and then click Create Policy. 2. Select ICA within View by Category, then scroll to and select Rendezvous Protocol. Click Allow. 3. Click Next. 4. Select Filtered users and computers and expand the Delivery Group. 5. Select Allow in the Mode drop-down menu, then choose your Citrix VDA for macOS Delivery Group, select Enable, and click Save. 6. Review your filters, then click Next. 7. Select Enable policy, provide a Policy name, then click Finish. 8. The Rendezvous policy is now active and enabled. Launch the macOS VDA 1. Open your Citrix Workspace app or browser, enter your Citrix Workspace URL, and log in. Your Citrix VDA for macOS desktop will be available for launch. 2. Launch the Desktop. 3. Your remote session via Citrix HDX to your macOS device will begin. Summary This guide walked you through the installation and configuration of the Citrix VDA for macOS. This consisted of preparing the Citrix DaaS environment by creating a Machine Catalog and capturing an Enrollment Token, installing and registering the Citrix VDA on the macOS device, and creating a Delivery Group with user assignments. As a reminder, the Citrix VDA for macOS is currently in Public Tech Preview. This deployment guide will be updated during the preview period and when the feature goes GA. For more information, please visit the Citrix VDA for macOS product documentation.
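As an optional alternative to checking registration in the Web Studio console, the sketch below uses the Citrix DaaS Remote PowerShell SDK from an admin workstation. It assumes the SDK is installed and that you sign in interactively; the catalog name is a placeholder for whatever you named the Machine Catalog earlier.

```
# Assumes the Citrix DaaS Remote PowerShell SDK is installed on this workstation
Add-PSSnapin Citrix*                 # load the Citrix SDK snap-ins
Get-XdAuthentication                 # interactive sign-in to your Citrix Cloud tenant

# 'macOS VDA Catalog' is a placeholder - use your Machine Catalog name
Get-BrokerMachine -CatalogName 'macOS VDA Catalog' |
    Select-Object MachineName, RegistrationState, AgentVersion
```

A RegistrationState of Registered confirms the macOS device is communicating with the site; Unregistered usually points back to the enrollment token or network prerequisites above.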
  4. Overview This document is intended for Citrix technical professionals, IT decision-makers, partners, and consultants who want to deploy Microsoft OneDrive in a Citrix Virtual Apps environment. The content is relevant for both on-premises and public cloud architectures. The reader should understand the Citrix app and desktop virtualization offerings and Microsoft OneDrive. The document provides best practices for deploying Microsoft OneDrive in a Citrix Virtual Apps environment. The goal is to overcome the challenges of delivering OneDrive in a Citrix environment and provide an optimal user experience. Installing OneDrive in a Citrix virtual application environment, especially for multi-user, non-persistent scenarios, requires careful planning and configuration to ensure seamless user experiences. There are several challenges when deploying OneDrive in a Citrix Virtual Apps environment, including: Users who roam between multiple endpoints require their applications, files, and data to roam with them. Users with multiple applications open from different sessions on hosts require access to the same storage repository. Environments that have unique applications that store data in specific mapped drives, often without user intervention. Microsoft OneDrive syncs data from the cloud upon user login, which causes challenges in a non-persistent environment where data is deleted after user logoff and can consume a significant amount of storage in the data center where the user’s sessions are hosted. Solving these challenges requires careful planning when deploying Microsoft OneDrive. This deployment guide provides the reader with the recommended installation, configuration, and optimizations to resolve the challenges of deploying OneDrive in a Citrix Virtual Apps environment with the Citrix Profile Management OneDrive Container. Conceptual Architecture Citrix Profile Management is a profile solution for Citrix Virtual Apps servers that is installed on each computer where user profiles must be managed. Citrix Profile Management addresses user profile deficiencies in environments where simultaneous domain logins by the same user introduce complexities and consistency issues to the profile, and it optimizes profiles efficiently and reliably. Citrix Profile Management is a crucial component of a well-optimized Citrix Virtual Apps environment. Please review the Citrix Profile Management Quick Start Guide for additional information on deploying the solution. Refer to Profile Management architecture for more details on the folder structure of the user store and the central location for Citrix user profiles. Profile Container The Citrix Profile Management Profile Container is a VHDX-based profile solution that allows you to store the profile folders of your choice or the entire user profile on a VHDX profile disk. A VHDX file is created per user on your profile storage share and mounted to the VDA session(s) when the users log on, resolving any issues with slow logons and improving the logon experience. Once the users have logged into their virtual application, their profile folders are available immediately. Note: Starting with Profile Management 2109, Profile Containers can store the entire user profile in the VHDX profile disk. Microsoft OneDrive Container The Citrix Profile Management OneDrive container is a VHDX-based folder roaming solution. Profile Management creates a VHDX file per user on a file share and stores the users’ OneDrive folders in the VHDX files. 
The VHDX files are attached when users log on and detached when users log off. With Citrix Profile Management OneDrive Containers, end-user OneDrive folders roam with users to allow access to the same OneDrive folders on any computer or virtual session. These containers are VHDX-based and are created per user within a file share. They are then mounted to the virtual session when users log on and detached when users log off. The VHDX files for the OneDrive container are stored on the same storage server as the Citrix Profile Management user store. The VHDX files for Citrix Profile Management, such as the OneDrive container and profile container, can be stored in different locations in a hybrid solution with a container plus a file-based profile. Roaming and simultaneous access from multiple sessions In many cases, end users roam between multiple endpoints in these settings, requiring their applications, files, and data to roam with them. This is common in healthcare settings that deliver Citrix Virtual Applications. Additionally, these users may have multiple applications open from different sessions on different hosts, all requiring access to the same storage repository. Citrix Profile Management is designed to resolve the roaming and multiple-session scenario. The above diagram depicts a scenario where the user has launched multiple virtual applications from multiple Citrix Virtual Delivery Agents (VDAs). The user then logs out of two applications and moves to a second device, where they log onto the open application session. Once they log off from the remaining application session, Citrix Profile Management writes back the specific settings that were changed during the session while leaving other unchanged settings untouched. OneDrive Container for roaming and multiple sessions Introducing OneDrive into an environment where roaming or multiple-session Citrix Virtual Apps environments are common also brings many challenges. Challenges include the OneDrive sync app not being supported with file-based profile roaming, which previously required FSLogix. However, Citrix Profile Management v2311 and the Citrix OneDrive Container resolve these challenges and allow users to roam and open multiple virtual application sessions when using OneDrive. Note: Many organizations require LTSR infrastructure and Virtual Delivery Agents in their environments. If so, Citrix Virtual Apps and Desktops LTSR v2402 includes the Citrix Profile Management OneDrive Container. Alternatively, customers could remain on LTSR v2203 for VDAs and use newer Citrix Profile Management versions, including the OneDrive Container. OneDrive Install Recommendations Installing OneDrive within a Citrix Virtual Apps environment takes careful consideration. The typical OneDrive installation installs into each user's profile, which causes issues in a Citrix Virtual Apps non-persistent environment. OneDrive provides a per-machine install option for this type of environment, which installs OneDrive so that every user who logs on uses the same OneDrive.exe binary. This is recommended when installing OneDrive into a Citrix Virtual Apps non-persistent environment. The following recommendations should be considered when deploying OneDrive in a Citrix Virtual Apps environment. Ensure that the following prerequisites and requirements for OneDrive and Citrix are met. a. Citrix Virtual Apps and Desktops 2311 or later, or Citrix DaaS for the Citrix management plane. b. 
Windows Server 2019 and above for the VDA operating system. c. Citrix Profile Management 2311 or later. d. SMB file share for the VHDX containers. Shellbridge is enabled by default in Citrix VDA 2212 and later versions. If using Citrix VDA 2203, Shellbridge must be enabled manually by adding the following registry value: `HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\Citrix Virtual Desktop Agent` `Name: Shellbridge` `Type: REG_DWORD` `Value: 1` Install OneDrive once at the machine level, making it available to all users who access the virtual environment. This method can save storage space but may result in a less personalized experience. For more information on deploying OneDrive at the machine level, refer to Microsoft’s OneDrive per-machine installation. OneDrive must be added to the following registry location: HKLM\Software\Microsoft\Windows\CurrentVersion\Run. The following command creates the entry required for the per-machine installation to run correctly. `REG ADD HKLM\Software\Microsoft\Windows\CurrentVersion\Run /v OneDrive /t REG_SZ /d "\"C:\Program Files\Microsoft OneDrive\OneDrive.exe\" /background"` Install the OneDrive sync client on the base image of your virtual machines or in the shared application layer. Configure OneDrive settings to match your deployment strategy (shared). Create Group Policy Object (GPO) settings for the following: OneDrive Files On-Demand: This setting prevents files from being downloaded into a user's local cache until the user accesses them. Allow Storage Sense: This setting removes local copies of files that have not been accessed for a defined period and helps control the size of the OneDrive cache. (A consolidated PowerShell sketch of these base-image steps appears at the end of this guide.) Citrix Profile Management Recommended Configuration with OneDrive Integrating Citrix Profile Management with OneDrive requires careful planning and consideration, as several configurations are necessary to ensure optimal performance and user experience. With the Citrix Profile Management OneDrive container enabled in CPM 2311, the OneDrive container can be accessed via concurrent sessions by default. Citrix Profile Management file-based users must enable the OneDrive container to roam OneDrive data; this supports simultaneous access to OneDrive. OneDrive data can also be roamed for full-container users with the profile container, but that approach does not support concurrent access to the OneDrive data. Full-container users must specifically enable the OneDrive container for concurrent access to OneDrive data. Enable Citrix Profile Management. Enable and set the path to the Profile Management user store. When using Citrix Profile Management and OneDrive, administrators need to enable the OneDrive Container so that Citrix Profile Management creates a VHDX file per user on a file share and stores the users’ OneDrive folders in the VHDX files. The VHDX files are attached when users log on and detached when users log off. Enable the OneDrive container – list of OneDrive folders policy and add your OneDrive folders as paths relative to the user profile, then click Save. It is recommended that the Automatically reattach VHDX disks in sessions policy be enabled when deploying containers in the Citrix Profile Management solution (profile container, OneDrive container, Outlook container). Once the Citrix Profile Management OneDrive container is enabled and users are signed into OneDrive, no other sign-in to OneDrive is required if users sign in to a new workstation and launch a new session or connect to a disconnected session. 
Any files stored In OneDrive will also be available to them immediately as they are not downloaded from the cloud but rather synced between the virtual app server they are connected to and the local area network (LAN). Lastly, offline access is still available for kiosks that do not have internet access to their files. Storage, Scale, and Sync Considerations Each OneDrive user is granted 1 TB of storage space for their personal library. Synchronizing the user’s entire library across multiple devices consumes significant storage. Citrix Profile Management VHDX containers are created in the user store per user, ensuring a single copy of the OneDrive VHDX or profile container VHDX, regardless of the number of user sessions. The VHDX base disk and the difference disk are in the user store. No matter how large a user's OneDrive folder is, it won't consume the OS disk space in an MCS scenario or the write cache disk space in a PVS scenario. Additionally, the storage consumed on the VDAs is minimal as the OneDrive VHDX containers are remote-mounted, preventing the entire user library from copying across the network during the login process. If Citrix Profile Containers are used, the default storage size of the VHDX file is 50GB per user. However, if required, you can use the Default capacity of the VHD containers policy to set a smaller default for profile size. Citrix Profile container VHDX files can auto-compact upon user logoff to save space for central or cloud storage locations. Certain conditions must be met for the compaction to take effect. Lastly, to help with scaling the environment, you can limit the ability for OneDrive to sync only when files are required by enabling the Use OneDrive Files On-Demand Group Policy setting. This setting allows OneDrive to download files when they are needed. Users accessing published applications throughout the day typically only require access to a few files within their OneDrive container. Enabling this setting ensures that the OneDrive sync client does not sync unneeded files to the container. Tips and Optimizations Optimizing the environment to make the OneDrive user experience consistent with a physical desktop is essential to healthcare customers. Several settings can be configured to adjust OneDrive settings to improve performance and reduce network traffic. Disable the OneDrive Update service on the master image. The OneDrive Update service keeps the application updated in the per-machine installation of OneDrive. It is recommended that you disable this service in non-persistent image workloads. Disable the OneDrive per machine Standalone Update and Reporting scheduled tasks. The Standalone Update scheduled task updates the OneDrive application service. The Reporting scheduled task audits every file OneDrive and provides up-to-date reports on all user file activity. Limit selective sync to essential folders. Syncing only essential OneDrive folders in the environment helps optimize performance and enhances the end-user experience in the Citrix environment. Setting upload and download limits. Setting limits to uploads and downloads for OneDrive can assist in avoiding overloading the virtual infrastructure. Enable VHD disk compaction. The Citrix Profile Management setting automatically compacts the VHDX file on user logoff when the file exceeds a specified value or the number of logoffs reaches a specified value. Replicate profile containers. 
Replicating user profile containers provides profile redundancy for user logins but not for in-session failovers. However, replicating the containers increases system I/O and may prolong logoffs. Enable Automatically reattach VHDX disks in sessions. This policy enables a high level of stability if a session failover occurs. The connection to the profile container is re-established to the profile store, and the VHDX is automatically re-attached. By default, with Local Caching, the entire profile is cached locally during log-in. To Reduce Login times, enable Profile Streaming, which caches profile folders on demand after login. Exclusive Access. VHD containers allow concurrent access by default. If needed, you can disable concurrent access for the profile and OneDrive. Summary Deploying Microsoft OneDrive in a Citrix Virtual Apps environment to be used within healthcare settings comes with many challenges. Having the right strategy and careful planning and execution will allow healthcare organizations to deploy OneDrive successfully within a Citrix Virtual Apps environment and overcome the difficulties of roaming users, multiple sessions, and cloud synchronization of files. Optimizing the environment will provide a better overall end-user experience within the environment and applications. References Citrix Profile Containers Deployment Guide
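To make the base-image recommendations in this guide concrete (per-machine install, Shellbridge on VDA 2203, and disabling the OneDrive updater service and scheduled tasks), here is a hedged PowerShell sketch. The installer path is a placeholder, the wildcard service and task names are assumptions that may need adjusting to your OneDrive build, and the Shellbridge value is the registry setting listed earlier in this guide.

```
# Illustrative base-image preparation; adjust paths and names to your environment.

# 1. Per-machine OneDrive install (documented /allusers switch); installer path is a placeholder.
Start-Process -FilePath 'C:\Temp\OneDriveSetup.exe' -ArgumentList '/allusers' -Wait

# 2. Enable Shellbridge on Citrix VDA 2203 (value from this guide; not needed on VDA 2212+).
New-ItemProperty -Path 'HKLM:\SOFTWARE\Citrix\Citrix Virtual Desktop Agent' `
    -Name 'Shellbridge' -PropertyType DWord -Value 1 -Force | Out-Null

# 3. Disable the OneDrive updater service and the per-machine update/reporting scheduled tasks.
#    Wildcard matching is used because exact names can vary by OneDrive version.
Get-Service -DisplayName '*OneDrive*' -ErrorAction SilentlyContinue |
    Set-Service -StartupType Disabled
Get-ScheduledTask -TaskName 'OneDrive*' -ErrorAction SilentlyContinue |
    Disable-ScheduledTask | Out-Null
```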
  5. Overview The NetScaler ADC Stack fulfills basic requirements for application availability features (ADC), security features segregation (WAF), scaling of agile application topologies (SSL and GSLB), and proactive observability (Service Graph) into a highly orchestrated Cloud Native Era, environment. Digital Transformation is fueling the need to move modern application deployments to microservice-based architectures. These Cloud-Native architectures leverage application containers, microservices, and Kubernetes. The Cloud-Native approach to modern applications also transformed the development lifecycle, including agile workflows, automation deployment toolsets, and development languages and platforms. The new era of modern application deployment has also shifted the traditional data center business model disciplines including monthly and yearly software releases and contracts, silo compute resources and budget, and the vendor consumption model. While all this modernization is occurring in the ecosystem, there are still basic requirements for application availability features (ADC), security feature segregation (WAF), scaling of agile application topologies (SSL and GSLB), and proactive observability (Service Graph) into a highly orchestrated environment. Why Citrix for Modern Application Delivery The Citrix software approach to modern application deployment requires incorporating an agile workflow across many teams within the organization. One of the advantages of agile application development and delivery is the framework known as CI/CD. CI/CD is a way to provide speed, safety, and reliability into the modern application lifecycle. Continuous Integration (CI) allows for a common code base that can be updated in real-time several times a day and integrate into an automated build platform. The three phases of Continuous Integration are Push, Test, Fix. Continuous Delivery (CD) integrates the deployment pipeline directly into the CI development process, thus optimizing and improving the software delivery model for modern applications. NetScaler ADCs tie into the continuous delivery process through implementing automated canary analysis progressive rollouts. A Solution for All Stakeholders Citrix has created a dedicated software-based solution that addresses the cross-functional requirements when deploying modern applications and integrates the various components of observability stack, security framework, and CI/CD infrastructure. Traditional organizations that adopt CI/CD techniques for deploying modern applications have recognized the need to provide a common delivery and availability framework to all members involved in CI/CD, these resources are generally defined as the business unit “stakeholders,” and while each stakeholder is invested in the overall success of the organization, each stakeholder generally has distinct requirements and differences. 
Some common examples of stakeholders in the modern delivery activity include: Platforms Team—deploy data center infrastructure such as IaaS, PaaS, SDN, ADC, WAF DevOps and Engineering Team—develop and maintain the unified code repository, automation tools, software architecture Service Reliability Engineering (SRE) Team—reduce organizational silos, error management, deployment automation, and measurements Security Operations Team—proactive security policies, incident management, patch deployment, portfolio hardening The Citrix Software Stack Explained Single Code Base - it is all the same code for you - On-premises deployments, Public Cloud deployments, Private cloud deployments, GOV Cloud deployments Choice of Platforms - to meet any agile requirement, choose any NetScaler ADC model CPX – the NetScaler ADC CPX is a NetScaler ADC delivered as a container VPX – the NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms offering performance ranging from 10 Mb/s to 100 Gb/s. MPX – the NetScaler ADC MPX is a hardware-based application delivery appliance offering performance ranging from 500 Mb/s to 200 Gb/s. SDX – the NetScaler ADC SDX appliance is a multitenant platform on which you can provision and manage multiple virtual NetScaler ADC machines (instances). BLX – the NetScaler ADC BLX appliance is a software form-factor of NetScaler ADC. It is designed to run natively on bare-metal-Linux on commercial off-the-shelf servers (COTS) Containerized Environments - create overlays and automatically configure your NetScaler ADC Citrix Ingress Controller - built around Kubernetes Ingress and automatically configures one or more NetScaler ADC based on the Ingress resource configuration Citrix Node Controller – create a VXLAN-based overlay network between the Kubernetes nodes and the Ingress NetScaler ADC Citrix IPAM Controller – automatically assign the load balancing virtual server on a NetScaler ADC with an IP address (virtual IP address or VIP) Pooled Capacity Licensing – one global license Ubiquitous global license pool decouples platforms and licenses for complete flexibility for design and performance Application Delivery Manger – the single pane of glass Manage the fleet, orchestrate policies and applications, monitor and troubleshoot in real-time Flexible Topologies – traditional data center or modern clouds Single tier, two-tier, and service mesh lite The NetScaler ADC Value Kubernetes and CNCF Open Source Tools integration The Perfect Proxy – a proven Layer7 application delivery controller for modern apps High performance ADC container in either a Pod or Sidecar deployment Low Latency access to the Kubernetes cluster using multiple options Feature-Rich API – easily implement and orchestrate security features without limit Advanced Traffic Steering and Canary deployment for CI/CD Proven security through TLS/SSL, WAF, DoS, and API protection Rich Layer 7 capabilities Integrated Monitoring for Legacy and Modern application deployments Actionable Insights and Service Graphs for Visibility The NetScaler ADC Benefits Move legacy apps without having to rewrite them Developers can secure apps with NetScaler ADC policies using Kubernetes APIs (Using CRDs – developer friendly) Deploy high performing microservices for North-South and Service Mesh Use one Application Service Graph for all microservices Troubleshoot microservices problems faster across TCP, UDP, HTTP/S, SSL Secure APIs and configure using Kubernetes APIs Enhance CICD 
process for canary deployments Architecture Components The NetScaler ADC Suite Advantage Citrix is Choice. Whether you are living with legacy data centers and components, or have launched a new cloud native modern application, NetScaler ADC integrates seamlessly into any platform requirement that you may have. We provide cloud native ADC functionality for subscription-based cloud platforms and tools, allow for directing and orchestrating traffic into your Kubernetes cluster on-premises with easy Ingress controller orchestration, and address Service Mesh architectures from simple to complex. Citrix is Validated. Validated design templates and sample applications allow a desired state and business requirement to be referenced and addressed quickly and completely. We have documented and published configuration examples in a central location for ease of reference across DevOps, SecOps, and Platforms teams. Citrix is Agile and Modern. We create a foundational architecture for customers to consume new features of the Citrix Cloud Native Stack with their existing ADC and new modules (CNC, IPAM, and so on). Citrix is Open. We help customers understand our integration with partner ecosystems. In this document we use both open-source CNCF tools and Citrix enterprise-grade products. Partner Ecosystem This topic provides details about various Kubernetes platforms, deployment topologies, features, and CNIs supported in Cloud-Native deployments that include NetScaler ADC and Citrix ingress controller. The Citrix Ingress Controller is supported on the following platforms: Kubernetes v1.10 on bare metal or self-hosted on public clouds such as AWS, GCP, or Azure. Google Kubernetes Engine (GKE) Elastic Kubernetes Service (EKS) Azure Kubernetes Service (AKS) Red Hat OpenShift version 3.11 and later Pivotal Container Service (PKS) Diamanti Enterprise Kubernetes Platform Our Partner Ecosystem also includes the following: Prometheus – monitoring tool for metrics, alerting, and insights Grafana – a platform for analytics and monitoring Spinnaker – a tool for multi-cloud continuous delivery and canary analytics Elasticsearch – an application or site search service Kibana – a visualization tool for Elasticsearch data and an Elastic Stack navigation tool Fluentd – a data collector tool The focus of this next section is design/architecture with OpenShift. OpenShift Overview Red Hat OpenShift is a Kubernetes platform for deployments focusing on using microservices and containers to build and scale applications faster. By automating the installation, upgrade, and management of the container stack, OpenShift streamlines Kubernetes and facilitates day-to-day DevOps tasks. Developers provision applications with access to validated solutions and partners that are pushed to production via streamlined workflows. Operations can manage and scale the environment using the Web Console and built-in logging and monitoring. Figure 1-6: OpenShift high-level architecture. More advantages and components of OpenShift include: Choice of Infrastructure Master and worker nodes Image Registry Routing and Service Layer Developer operation (introduced but beyond the scope of this doc) Use cases for integrating Red Hat OpenShift with the Citrix Cloud Native Stack include: Legacy application support Rewrite/Responder policies deployed as APIs Microservices troubleshooting Day-to-day operations with security patches and feature enhancements In this document we cover how Citrix ADC provides solid Routing/Service layer integration. 
OpenShift Projects The first new concept OpenShift adds is project, which effectively wraps a namespace, with access to the namespace being controlled via the project. Access is controlled through an authentication and authorization model based on users and groups. Projects in OpenShift therefore provide the walls between namespaces, ensuring that users, or applications, can only see and access what they are allowed to. OpenShift Namespaces The primary grouping concept in Kubernetes is the namespace. Namespaces are also a way to divide cluster resources between multiple uses. That being said, there is no security between namespaces in Kubernetes. If you are a “user” in a Kubernetes cluster, you can see all the different namespaces and the resources defined in them. OpenShift Software Defined Networking (SDN) OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by the OpenShift SDN, which configures an overlay network using Open vSwitch (OVS). OpenShift SDN provides three SDN plug-ins for configuring the pod network: The ovs-subnet plug-in is the original plug-in, which provides a "flat" pod network where every pod can communicate with every other pod and service. The ovs-multitenant plug-in provides project-level isolation for pods and services. Each project receives a unique Virtual Network ID (VNID) that identifies traffic from pods assigned to the project. Pods from different projects cannot send packets to or receive packets from pods and services of a different project. However, projects that receive VNID 0 are more privileged in that they are allowed to communicate with all other pods, and all other pods can communicate with them. In OpenShift Container Platform clusters, the default project has VNID 0.This facilitates certain services, such as the load balancer, to communicate with all other pods in the cluster and vice versa. The ovs-networkpolicy plug-in allows project administrators to configure their own isolation policies using NetworkPolicy objects. OpenShift Routing and Plug-Ins An OpenShift administrator can deploy routers in an OpenShift cluster, which enable routes created by developers to be used by external clients. The routing layer in OpenShift is pluggable, and two available router plug-ins are provided and supported by default. OpenShift routers provide external host name mapping and load balancing to services over protocols that pass distinguishing information directly to the router; the host name must be present in the protocol in order for the router to determine where to send it. Router plug-ins assume they can bind to host ports 80 and 443. This is to allow external traffic to route to the host and subsequently through the router. Routers also assume that networking is configured such that it can access all pods in the cluster. The OpenShift router is the ingress point for all external traffic destined for services in your OpenShift installation.OpenShift provides and supports the following router plug-ins: The HAProxy template router is the default plug-in. It uses the openshift3/ose-haproxy-routerimage to run an HAProxy instance alongside the template router plug-in inside a container on OpenShift. It currently supports HTTP(S) traffic and TLS-enabled traffic via SNI. 
The router’s container listens on the host network interface, unlike most containers that listen only on private IPs. The router proxies external requests for route names to the IPs of actual pods identified by the service associated with the route. The Citrix ingress controller can be deployed as a router plug-in in the OpenShift cluster to integrate with NetScaler ADCs deployed in your environment. The Citrix ingress controller enables you to use the advanced load balancing and traffic management capabilities of NetScaler ADC with your OpenShift cluster. See Deploy the Citrix ingress controller as a router plug-in in an OpenShift cluster. OpenShift Routes and Ingress Methods In an OpenShift cluster, external clients need a way to access the services provided by pods.OpenShift provides two resources for communicating with services running in the cluster: routes and Ingress. Routes In an OpenShift cluster, a route exposes a service on a given domain name or associates a domain name with a service. OpenShift routers route external requests to services inside the OpenShift cluster according to the rules specified in routes. When you use the OpenShift router, you must also configure the external DNS to make sure that the traffic is landing on the router. The Citrix ingress controller can be deployed as a router plug-in in the OpenShift cluster to integrate with NetScaler ADCs deployed in your environment. The Citrix ingress controller enables you to use the advanced load balancing and traffic management capabilities of NetScaler ADC with your OpenShift cluster. OpenShift routes can be secured or unsecured. Secured routes specify the TLS termination of the route. The Citrix ingress controller supports the following OpenShift routes: Unsecured Routes: For Unsecured routes, HTTP traffic is not encrypted. Edge Termination: For edge termination, TLS is terminated at the router. Traffic from the router to the endpoints over the internal network is not encrypted. Passthrough Termination: With passthrough termination, the router is not involved in TLS offloading and encrypted traffic is sent straight to the destination. Re-encryption Termination: In re-encryption termination, the router terminates the TLS connection but then establishes another TLS connection to the endpoint. Based on how you want to use NetScaler ADC, there are two ways to deploy the Citrix Ingress Controller as a router plug-in in the OpenShift cluster: as a NetScaler ADC CPX within the cluster or as a NetScaler ADC MPX/VPX outside the cluster. Deploy NetScaler ADC CPX as a router within the OpenShift cluster The Citrix Ingress controller is deployed as a sidecar alongside the NetScaler ADC CPX container in the same pod. In this mode, the Citrix ingress controller configures the NetScaler ADC CPX. See Deploy NetScaler ADC CPX as a router within the OpenShift cluster. Deploy NetScaler ADC MPX/VPX as a router outside the OpenShift cluster The Citrix ingress controller is deployed as a stand-alone pod and allows you to control the NetScaler ADC MPX, or VPX appliance from outside the OpenShift cluster. See Deploy NetScaler ADC MPX/VPX as a router outside the OpenShift cluster. Ingress Kubernetes Ingress provides you a way to route requests to services based on the request host or path, centralizing a number of services into a single entry point. The Citrix Ingress Controller is built around Kubernetes Ingress, automatically configuring one or more NetScaler ADC appliances based on the Ingress resource. 
Routing with Ingress can be done by: Host name based routing Path based routing Wildcard based routing Exact path matching Non-Hostname routing Default back end See Ingress configurations for examples and more information. Deploy the Citrix ingress controller as an OpenShift router plug-in Based on how you want to use NetScaler ADC, there are two ways to deploy the Citrix Ingress Controller as a router plug-in in the OpenShift cluster: As a sidecar container alongside NetScaler ADC CPX in the same pod: In this mode, the Citrix ingress controller configures the NetScaler ADC CPX. See Deploy NetScaler ADC CPX as a router within the OpenShift cluster. As a standalone pod in the OpenShift cluster: In this mode, you can control the NetScaler ADC MPX or VPX appliance deployed outside the cluster. See Deploy NetScaler ADC MPX/VPX as a router outside the OpenShift cluster. Recommended Architectures We recommend the following Architectures for customers when designing their microservices architectures: Citrix Unified Ingress Citrix 2-Tier Ingress Citrix Service Mesh Lite Figure 1-2: The architecture ranges from relatively simple to more complex and feature‑rich. Citrix Unified Ingress In a Unified Ingress deployment, NetScaler ADC MPX or VPX devices proxy North-South traffic from the clients to the enterprise-grade applications deployed as microservices inside the cluster. The Citrix ingress controller is deployed as a pod or sidecar in the Kubernetes cluster to automate the configuration of NetScaler ADC devices (MPX or VPX) based on the changes to the microservices or the Ingress resources. You can begin implementing the Unified Ingress Model while your application is still a monolith. Simply position the NetScaler ADC as a reverse proxy in front of your application server and implement the features described later. You are then in a good position to convert your application to microservices. Communication between the microservices is handled through a mechanism of your choice (kube-proxy, IPVS, and so on). Functionality provided in different categories The capabilities of the Unified Ingress architecture fall into three groups. The features in the first group optimize performance: Load balancing Low‑latency connectivity High availability The features in the second group improve security and make application management easier: Rate limiting SSL/TLS termination HTTP/2 support Health checks The features in the final group are specific to microservices: Central communications point for services API gateway capability Summary Features with the Unified Ingress Model include robust load-balancing to services, a central communication point, Dynamic Service Discovery, low latency connectivity, High Availability, rate limiting, SSL/TLS termination, HTTP/2, and more. The Unified Ingress model makes it easy to manage traffic, load balance requests, and dynamically respond to changes in the back end microservices application. 
Advantages include:
North-South traffic flows scale well, are visible for observation and monitoring, and support continuous delivery with tools such as Spinnaker and Citrix ADM
A single tier unifies the infrastructure team that manages the network and platform services, and reduces hops to lower latency
Suitable for internal applications that do not need Web App Firewall and SSL Offload, although these can be added later
Disadvantages include:
No East-West security with kube-proxy, but Calico can be added for L4 segmentation
Kube-proxy scalability is unknown
There is limited to no visibility of East-West traffic, since kube-proxy does not give visibility, control, or logs, which limits open tool integration and continuous delivery
The platform team must also be network savvy
Citrix 2-Tier Ingress
The 2-Tier Ingress architectural model is a great solution for Cloud Native novices. In this model, the NetScaler ADC in Tier-1 manages incoming traffic, but sends requests to the Tier-2 ADC managed by developers rather than directly to the service instances. The 2-Tier Ingress model applies policies, written by the Platform and Developer teams, to inbound traffic only, and enables cloud scale and multitenancy.
Figure 1-4: Diagram of the Citrix 2-Tier Ingress Model with a tier-1 NetScaler ADC VPX/MPX and tier-2 NetScaler ADC CPX containers.
Functionality provided by Tier-1
The first tier ADC, managed by the traditional Networking team, provides L4 load balancing, Citrix Web App Firewall, SSL Offload, and reverse proxy services. The NetScaler ADC MPX or VPX devices in Tier-1 proxy the traffic (North-South) from the client to NetScaler ADC CPXs in Tier-2. By default, the Citrix Ingress Controller programs the following configurations on Tier-1:
Reverse proxy of applications to users:
Content Switching virtual servers
Virtual server (front-end, user-facing)
Service Groups
SSL Offload
NetScaler Logging / Debugging
Health monitoring of services
Functionality provided by Tier-2
While the first tier ADC provides reverse proxy services, the second tier ADC, managed by the Platform team, serves as a communication point to microservices, providing:
Dynamic service discovery
Load balancing
Visibility and rich metrics
The Tier-2 NetScaler ADC CPX then routes the traffic to the microservices in the Kubernetes cluster. The Citrix ingress controller deployed as a standalone pod configures the Tier-1 devices, and the sidecar controller in one or more NetScaler ADC CPX pods configures the associated NetScaler ADC CPX in the same pod.
Summary
The networking architecture for microservices in the 2-tier model uses two ADCs configured for different roles. The Tier-1 ADC acts as a user-facing proxy server and the Tier-2 ADC as a proxy for the microservices. Splitting different types of functions between two different tiers provides speed, control, and opportunities to optimize for security. In the second tier, load balancing is fast, robust, and configurable. With this model, there is a clear separation between the ADC administrator and the Developer. It is BYOL for Developers.
Advantages include:
North-South traffic flows scale well, are visible for observation and monitoring, and support continuous delivery with tools such as Spinnaker and Citrix ADM
Simplest and fastest deployment for a Cloud Native novice, with limited new learning for Network and Platform teams
Disadvantages include:
No East-West security with kube-proxy, but Calico can be added for L4 segmentation
Kube-proxy scalability is unknown
There is limited to no visibility of East-West traffic, since kube-proxy does not give visibility, control, or logs, which limits open tool integration and continuous delivery.
Citrix Service Mesh Lite
Service Mesh Lite is the most feature-rich of the three models. It is internally secure, fast, efficient, and resilient, and it can be used to enforce policies for inbound and inter-container traffic. The Service Mesh Lite model is suitable for several use cases, which include:
Health and finance apps – Regulatory and user requirements mandate a combination of security and speed for financial and health apps, with billions of dollars in financial and reputational value at stake.
Ecommerce apps – User trust is a huge issue for ecommerce and speed is a key competitive differentiator. So, combining speed and security is crucial.
Summary
Advantages include:
A more robust approach to networking, with a load balancer
CPX applies policies to inbound and inter-container traffic, deploying full L7 policies
Richer observability, analytics, continuous delivery, and security for North-South and East-West traffic
Canary for each container with an embedded NetScaler ADC
A single tier unifies the infrastructure team that manages the network and platform services, and reduces hops to lower latency
Disadvantages include:
More complex model to deploy
Platform team must be network savvy
Summary of Architecture Choices
Citrix Unified Ingress
North-South (NS) Application Traffic - one NetScaler ADC is responsible for L4 and L7 NS traffic, security, and external load balancing outside of the K8s cluster.
East-West (EW) Application Traffic - kube-proxy is responsible for L4 EW traffic.
Security - the ADC is responsible for securing NS traffic and authenticating users. Kube-proxy is responsible for L4 EW traffic.
Scalability and Performance - NS traffic is well-scalable, and clustering is an option. EW traffic and kube-proxy scalability is unknown.
Observability - the ADC provides excellent observability for NS traffic, but there is no observability for EW traffic.
Citrix 2-Tier Ingress
North-South (NS) Application Traffic - the Tier-1 ADC is responsible for SSL offload, Web App Firewall, and L4 NS traffic. It is used for both monolith and CN applications. The Tier-2 CPX manages rapid change of k8s and L7 NS traffic.
East-West (EW) Application Traffic - kube-proxy is responsible for L4 EW traffic.
Security - the Tier-1 ADC is responsible for securing NS traffic. Authentication can occur at either ADC. EW traffic is not secured with kube-proxy. Add Calico for L4 segmentation.
Scalability and Performance - NS traffic is well-scalable, and clustering is an option. EW traffic and kube-proxy scalability is unknown.
Observability - the Tier-1 ADC provides excellent observability for NS traffic, but there is no observability for EW traffic.
Citrix Service Mesh Lite
North-South (NS) Application Traffic - the Tier-1 ADC is responsible for SSL offload, Web App Firewall, and L4 NS traffic. It is used for both monolith and CN applications. The Tier-2 CPX manages rapid change of k8s and L7 NS traffic.
East-West (EW) Application Traffic - the Tier-2 CPX or any open source proxy is responsible for L4 EW traffic. Customers can select which applications use the CPX and which use kube-proxy. Security - the Tier-1 ADC is responsible for securing NS traffic. Authentication can occur at either ADC. The Citrix CPX is responsible for authentication, SSL Offload, and securing EW traffic. Encryption can be applied at the application level. Scalability and Performance - NS and EW traffic is well-scalable, but it adds 1 in-line hop. Observability - the Tier-1 ADC provides excellent observability of NS traffic. The CPX in Tier-2 provides observability of EW traffic, but it can be disabled to reduce the CPX memory or CPU footprint. How to Deploy Citrix Unified Ingress To validate a Citrix Unified Ingress deployment with OpenShift, use a sample "hello-world" application with a NetScaler ADC VPX or MPX. The default namespace for OpenShift, "default", is used for this deployment. A NetScaler ADC Instance is hand built and configured with a NSIP/SNIP.Installing NetScaler ADC on XenServer can be found here. Copy the following YAML file example into an OpenShift directory and name it application.yaml. apiVersion: apps/v1kind: Deploymentmetadata: name: hello-worldspec: selector: matchLabels: run: load-balancer-example replicas: 2 template: metadata: labels: run: load-balancer-example spec: containers: - name: hello-world image: gcr.io/google-samples/node-hello:1.0 ports: - containerPort: 8080 protocol: TCP Deploy the application. oc apply -f application.yaml Ensure pods are running. oc get pods Copy the following YAML file example into an OpenShift directory and name it service.yaml. apiVersion: v1kind: Servicemetadata: name: hello-world-servicespec: type: NodePort ports: - port: 80 targetPort: 8080 selector: run: load-balancer-example Expose the application via NodePort with a service. oc apply -f service.yaml Verify that the service was created. oc get service Copy the following YAML file example into an OpenShift directory and name it ingress.yaml.You must change the annotation "ingress.citrix.com/frontend-ip" to a free IP address to be made the VIP on the NetScaler ADC. apiVersion: extensions/v1beta1kind: Ingressmetadata: name: hello-world-ingress annotations: kubernetes.io/ingress.class: "vpx" ingress.citrix.com/insecure-termination: "redirect" ingress.citrix.com/frontend-ip: "10.217.101.183"spec: rules: - host: helloworld.com http: paths: - path: backend: serviceName: hello-world-service servicePort: 80 Deploy the Ingress YAML file. oc apply -f ingress.yaml Now there are application pods which we have exposed using a service, and can route traffic to them using Ingress. Install the Citrix Ingress Controller (CIC) to push these configurations to our Tier 1 ADC VPX.Before deploying the CIC, deploy an RBAC file that gives the CIC the correct permissions to run. Note: The rbac yaml file specifies the namespace and it will have to be changed, pending which namespace is being used. 
kind: ClusterRoleapiVersion: rbac.authorization.k8s.io/v1beta1metadata: name: cpxrules: - apiGroups: [""] resources: ["services", "endpoints", "ingresses", "pods", "secrets", "nodes", "routes", "routes/status", "tokenreviews", "subjectaccessreviews"] verbs: ["*"] - apiGroups: ["extensions"] resources: ["ingresses", "ingresses/status"] verbs: ["*"] - apiGroups: ["citrix.com"] resources: ["rewritepolicies"] verbs: ["*"] - apiGroups: ["apps"] resources: ["deployments"] verbs: ["*"]kind: ClusterRoleBindingapiVersion: rbac.authorization.k8s.io/v1beta1metadata: name: cpxroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cpxsubjects:- kind: ServiceAccount name: cpx namespace: defaultapiVersion: v1kind: ServiceAccountmetadata: name: cpx namespace: default Deploy the RBAC file. oc apply -f rbac.yaml Before deploying the CIC, edit the YAML file. Under spec, add either the NSIP or the SNIP as long as management is enabled on the SNIP, of the Tier 1 ADC. Notice the argument "ingress-classes" is the same as the ingress class annotation specified in the Ingress YAML file. apiVersion: v1kind: Podmetadata: name: hello-world-cic labels: app: hello-world-cicspec: serviceAccountName: cpx containers: - name: hello-world-cic image: "quay.io/citrix/citrix-k8s-ingress-controller:1.1.3" env: # Set NetScaler NSIP/SNIP, SNIP in case of HA (mgmt has to be enabled) - name: "NS_IP" value: "10.217.101.193" # Set username for Nitro # Set log level - name: "NS_ENABLE_MONITORING" value: "NO" - name: "NS_USER" value: "nsroot" - name: "NS_PASSWORD" value: "nsroot" - name: "EULA" value: "yes" - name: "LOGLEVEL" value: "DEBUG" args: - --ingress-classes vpx - --feature-node-watch false imagePullPolicy: IfNotPresent Deploy the CIC. oc apply -f cic.yaml Verify all pods are running. oc get pods Edit the hosts file on your local machine with an entry for helloworld.com and the VIP on the NetScaler ADC specified in the Ingress YAML file. Navigate to helloworld.com in a browser."Hello Kubernetes!" should appear. Note: The following are delete commands oc delete pods (pod name) -n (namespace name) oc delete deployment (deployment name) -n (namespace name) oc delete service (service name) -n (namespace name) oc delete ingress (ingress name) -n (namespace name) oc delete serviceaccounts (serviceaccounts name) -n (namespace name) Citrix 2-Tier Ingress To validate a Citrix 2-Tier Ingress deployment with OpenShift, use a sample "hello-world" application with a NetScaler ADC VPX or MPX. The default namespace "tier-2-adc", is used for this deployment.**Note: When deploying pods, services, and Ingress, the namespace must be specified using the parameter "-n (namespace name)". A NetScaler ADC Instance is hand built and configured with a NSIP/SNIP.Installing NetScaler ADC on XenServer can be found [here. If the instance was already configured, clear any virtual servers in Load Balancing or Content Switching that were pushed to the ADC from deploying hello-world as a Unified Ingress. Create a namespace called "tier-2-adc". oc create namespace tier-2-adc Copy the following YAML file example into an OpenShift directory and name it application-2t.yaml. apiVersion: apps/v1kind: Deploymentmetadata: name: hello-worldspec: selector: matchLabels: run: load-balancer-example replicas: 2 template: metadata: labels: run: load-balancer-example spec: containers: - name: hello-world image: gcr.io/google-samples/node-hello:1.0 ports: - containerPort: 8080 protocol: TCP Deploy the application in the namespace. 
oc apply -f application-2t.yaml -n tier-2-adc Ensure pods are running. oc get pods Copy the following YAML file example into an OpenShift directory and name it service-2t.yaml. apiVersion: v1kind: Servicemetadata: name: hello-world-service-2spec: type: NodePort ports: -port: 80 targetPort: 8080 selector: run: load-balancer-example Expose the application via NodePort with a service. oc apply -f service-2t.yaml -n tier-2-adc oc apply -f service-2t.yaml -n tier-2-adc Verify that the service was created. oc get service -n tier-2-adc Copy the following YAML file example into an OpenShift directory and name it ingress-2t.yaml. apiVersion: extensions/v1beta1 kind: Ingress metadata: name: hello-world-ingress-2 annotations: kubernetes.io/ingress.class: "cpx" spec: backend: serviceName: hello-world-service-2 servicePort: 80 Deploy the Ingress YAML file. oc apply -f ingress-2t.yaml -n tier-2-adc Deploy an RBAC file that gives the CIC and CPX the correct permissions to run. Note: The rbac yaml file specifies the namespace and it will have to be changed, pending which namespace is being used. kind: ClusterRoleapiVersion: rbac.authorization.k8s.io/v1beta1metadata: name: cpxrules: -apiGroups: [""] resources: ["services", "endpoints", "ingresses", "pods", "secrets", "nodes", "routes", "routes/status", "tokenreviews", "subjectaccessreviews"] verbs: ["*"] -apiGroups: ["extensions"] resources: ["ingresses", "ingresses/status"] verbs: ["*"] -apiGroups: ["citrix.com"] resources: ["rewritepolicies"] verbs: ["*"] -apiGroups: ["apps"] resources: ["deployments"] verbs: ["*"]---kind: ClusterRoleBindingapiVersion: rbac.authorization.k8s.io/v1beta1metadata: name: cpxroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cpxsubjects:- kind: ServiceAccount name: cpx namespace: tier-2-adc---apiVersion: v1kind: ServiceAccountmetadata: name: cpx namespace: tier-2-adc Deploy the RBAC file. oc apply -f rbac-2t.yaml The service account needs elevated permissions to create a CPX. oc adm policy add-scc-to-user privileged system:serviceaccount:tier-2-adc:cpx Edit the CPX YAML file and call it cpx-2t.yaml. This deploys the CPX and the service that exposes it. Notice the argument for Ingress Class matches the annotation in the ingress-2t.yaml file. apiVersion: extensions/v1beta1 kind: Deployment metadata: name: hello-world-cpx-2 spec: replicas: 1 template: metadata: name: hello-world-cpx-2 labels: app: hello-world-cpx-2 app1: exporter annotations: NETSCALER_AS_APP: "True" spec: serviceAccountName: cpx containers: - name: hello-world-cpx-2 image: "quay.io/citrix/citrix-k8s-cpx-ingress:13.0-36.28" securityContext: privileged: true env: - name: "EULA" value: "yes" - name: "KUBERNETES_TASK_ID" value: "" imagePullPolicy: Always # Add cic as a sidecar - name: cic image: "quay.io/citrix/citrix-k8s-ingress-controller:1.1.3" env: - name: "EULA" value: "yes" - name: "NS_IP" value: "127.0.0.1" - name: "NS_PROTOCOL" value: "HTTP" - name: "NS_PORT" value: "80" - name: "NS_DEPLOYMENT_MODE" value: "SIDECAR" - name: "NS_ENABLE_MONITORING" value: "YES" - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace args: - --ingress-classes cpx imagePullPolicy: Always apiVersion: v1 kind: Service metadata: name: lb-service-cpx labels: app: lb-service-cpx spec: type: NodePort ports: - port: 80 protocol: TCP name: http targetPort: 80 selector: app: hello-world-cpx-2 Deploy the CPX. 
oc apply -f cpx-2t.yaml -n tier-2-adc Verify the pod is running and the service was created. oc get pods -n tier-2-adc oc get service -n tier-2-adc Create an Ingress to route from the VPX to the CPX. The front-end IP should be a free IP on the ADC. Give the file a name: ingress-cpx-2t.yaml. apiVersion: extensions/v1beta1kind: Ingressmetadata: name: hello-world-ingress-vpx-2 annotations: kubernetes.io/ingress.class: "helloworld" ingress.citrix.com/insecure-termination: "redirect" ingress.citrix.com/frontend-ip: "10.217.101.183"spec: rules: - host: helloworld.com http: paths: - path: backend: serviceName: lb-service-cpx servicePort: 80 Deploy the Ingress. oc apply -f ingress-cpx-2t.yaml -n tier-2-adc Before deploying the CIC, edit the YAML file. Under spec, add either the NSIP or the SNIP as long as management is enabled on the SNIP, of the Tier 1 ADC. apiVersion: v1kind: Podmetadata: name: hello-world-cic labels: app: hello-world-cicspec: serviceAccountName: cpx containers: - name: hello-world-cic image: "quay.io/citrix/citrix-k8s-ingress-controller:1.1.3" env: # Set NetScaler NSIP/SNIP, SNIP in case of HA (mgmt has to be enabled) - name: "NS_IP" value: "10.217.101.176" # Set username for Nitro # Set log level - name: "NS_ENABLE_MONITORING" value: "NO" - name: "NS_USER" value: "nsroot" - name: "NS_PASSWORD" value: "nsroot" - name: "EULA" value: "yes" - name: "LOGLEVEL" value: "DEBUG" args: - --ingress-classes helloworld - --feature-node-watch false imagePullPolicy: IfNotPresent Deploy the CIC. oc apply -f cic-2t.yaml -n tier-2-adc Verify all pods are running. oc get pods -n tier-2-adc Edit the hosts file on your local machine with an entry for helloworld.com and the VIP on the NetScaler ADC specified in the Ingress YAML file that routes from the VPX to the CPX. Navigate to helloworld.com in a browser."Hello Kubernetes!" should appear. Citrix Service Mesh Lite Service Mesh Lite allows the introduction of CPX (or other NetScaler ADC appliances) as a replacement for the built-in HAProxy functionalities. This enables us to expand upon our N/S capabilities in Kubernetes and provide E/W traffic load balancing, routing, and observability as well. NetScaler ADC (MPX, VPX, or CPX) can provide such benefits for E-W traffic such as: Mutual TLS or SSL offload Content based routing to allow or block traffic based on HTTP or HTTPS header parameters Advanced load balancing algorithms (for example, least connections, least response time and so on.) Observability of east-west traffic through measuring golden signals (errors, latencies, saturation, or traffic volume).Citrix ADM’s Service Graph is an observability solution to monitor and debug microservices. In this deployment scenario we deploy the Bookinfo application and observe how it functions by default. Then we go through to rip and replace the default Kubernetes services and use CPX and VPX to proxy our E/W traffic. Citrix Service Mesh Lite with a CPX To validate a Citrix Unified Ingress deployment with OpenShift, use a sample "hello-world" application with a NetScaler ADC VPX or MPX. The default namespace for OpenShift, "default," is used for this deployment. A NetScaler ADC Instance is hand built and configured with a NSIP/SNIP.Installing NetScaler ADC on XenServer can be found here. Create a namespace for this deployment. In this example, sml is used. oc create namespace sml Copy the following YAML to create the deployment and services for Bookinfo. Name it bookinfo.yaml. 
################################################################################################### Details service##################################################################################################apiVersion: v1kind: Servicemetadata: name: details labels: app: details service: detailsspec: ports: - port: 9080 name: http selector: app: details---apiVersion: extensions/v1beta1kind: Deploymentmetadata: name: details-v1 labels: app: details version: v1spec: replicas: 1 template: metadata: annotations: sidecar.istio.io/inject: "false" labels: app: details version: v1 spec: containers: - name: details image: docker.io/maistra/examples-bookinfo-details-v1:0.12.0 imagePullPolicy: IfNotPresent ports: - containerPort: 9080---################################################################################################### Ratings service##################################################################################################apiVersion: v1kind: Servicemetadata: name: ratings labels: app: ratings service: ratingsspec: ports: - port: 9080 name: http selector: app: ratings---apiVersion: extensions/v1beta1kind: Deploymentmetadata: name: ratings-v1 labels: app: ratings version: v1spec: replicas: 1 template: metadata: annotations: sidecar.istio.io/inject: "false" labels: app: ratings version: v1 spec: containers: - name: ratings image: docker.io/maistra/examples-bookinfo-ratings-v1:0.12.0 imagePullPolicy: IfNotPresent ports: - containerPort: 9080---################################################################################################### Reviews service##################################################################################################apiVersion: v1kind: Servicemetadata: name: reviews labels: app: reviews service: reviewsspec: ports: - port: 9080 name: http selector: app: reviews---apiVersion: extensions/v1beta1kind: Deploymentmetadata: name: reviews-v1 labels: app: reviews version: v1spec: replicas: 1 template: metadata: annotations: sidecar.istio.io/inject: "false" labels: app: reviews version: v1 spec: containers: - name: reviews image: docker.io/maistra/examples-bookinfo-reviews-v1:0.12.0 imagePullPolicy: IfNotPresent ports: - containerPort: 9080--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: reviews-v2 labels: app: reviews version: v2 spec: replicas: 1 template: metadata: annotations: sidecar.istio.io/inject: "false" labels: app: reviews version: v2 spec: containers: - name: reviews image: docker.io/maistra/examples-bookinfo-reviews-v2:0.12.0 imagePullPolicy: IfNotPresent ports: - containerPort: 9080--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: reviews-v3 labels: app: reviews version: v3 spec: replicas: 1 template: metadata: annotations: sidecar.istio.io/inject: "false" labels: app: reviews version: v3 spec: containers: - name: reviews image: docker.io/maistra/examples-bookinfo-reviews-v3:0.12.0 imagePullPolicy: IfNotPresent ports: - containerPort: 9080 --- ################################################################################################## # Productpage services ################################################################################################## apiVersion: v1 kind: Service metadata: name: productpage-service spec: type: NodePort ports: - port: 80 targetPort: 9080 selector: app: productpage --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: productpage-v1 labels: app: productpage version: v1 spec: replicas: 1 template: metadata: annotations: sidecar.istio.io/inject: 
"false" labels: app: productpage version: v1 spec: containers: - name: productpage image: docker.io/maistra/examples-bookinfo-productpage-v1:0.12.0 imagePullPolicy: IfNotPresent ports: - containerPort: 9080 --- Deploy the bookinfo.yaml in the sml namespace. oc apply -f bookinfo.yaml -n sml Copy and deploy the Ingress file that maps to the product page service. This file can be named ingress-productpage.yaml.The front-end IP should be a free VIP on the NetScaler ADC VPX/MPX. apiVersion: extensions/v1beta1 kind: Ingress metadata: name: productpage-ingress annotations: kubernetes.io/ingress.class: "bookinfo" ingress.citrix.com/insecure-termination: "redirect" ingress.citrix.com/frontend-ip: "10.217.101.182" spec: rules: - host: bookinfo.com http: paths: - path: backend: serviceName: productpage-service servicePort: 80oc apply -f ingress-productpage.yaml -n sml Copy the following YAML for the RBAC file in the sml namespace and deploy it. Name the file rbac-cic-pp.yaml as it is used for the CIC in front of the product page microservice. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: cpx rules: - apiGroups: [""] resources: ["services", "endpoints", "ingresses", "pods", "secrets", "routes", "routes/status", "nodes", "namespaces"] verbs: ["*"] - apiGroups: ["extensions"] resources: ["ingresses", "ingresses/status"] verbs: ["*"] - apiGroups: ["citrix.com"] resources: ["rewritepolicies", "vips"] verbs: ["*"] - apiGroups: ["apps"] resources: ["deployments"] verbs: ["*"] - apiGroups: ["apiextensions.k8s.io"] resources: ["customresourcedefinitions"] verbs: ["get", "list", "watch"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: cpx roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cpx subjects: - kind: ServiceAccount name: cpx namespace: sml apiVersion: rbac.authorization.k8s.io/v1 --- apiVersion: v1 kind: ServiceAccount metadata: name: cpx namespace: smloc apply -f rbac-cic-pp.yaml -n sml Elevate the service account privileges to deploy the CIC and CPX. oc adm policy add-scc-to-user privileged system:serviceaccount:sml:cpx Edit the hosts file on the local machine with bookinfo.com mapped to the front end IP specified in the ingress-productpage.yaml. Copy and deploy the product page with a CIC. Name the file cic-productpage.yaml.The NS_IP should be the NS_IP of the Tier 1 ADC. apiVersion: v1 kind: Pod metadata: name: productpage-cic labels: app: productpage-cic spec: serviceAccountName: cpx containers: - name: productpage-cic image: "quay.io/citrix/citrix-k8s-ingress-controller:1.1.3" env: # Set NetScaler NSIP/SNIP, SNIP in case of HA (mgmt has to be enabled) - name: "NS_IP" value: "10.217.101.176" # Set username for Nitro # Set log level - name: "NS_ENABLE_MONITORING" value: "NO" - name: "NS_USER" value: "nsroot" - name: "NS_PASSWORD" value: "nsroot" - name: "EULA" value: "yes" - name: "LOGLEVEL" value: "DEBUG" - name: "NS_APPS_NAME_PREFIX" value: "BI-" args: - --ingress-classes bookinfo - --feature-node-watch false imagePullPolicy: IfNotPresentoc apply -f cic-productpage.yaml -n sml Navigate to bookinfo.com and click normal user. The product page should pull details, reviews, and ratings, which are other microservices. HAProxy is responsible for routing the traffic between microservices (East-West). Delete the service in front of details. Refresh the Bookinfo webpage and notice that the product page could not pull the microservice for details. 
oc delete service details -n sml Copy and deploy a headless service so that traffic coming from the product page to details goes through a CPX. Call this file detailsheadless.yaml. apiVersion: v1 kind: Service metadata: name: details spec: ports: - port: 9080 name: http selector: app: cpxoc apply -f detailsheadless.yaml -n sml Copy and deploy a new details service, that should be names detailsservice.yaml, to sit in front of the details microservice. apiVersion: v1 kind: Service metadata: name: details-service labels: app: details-service service: details-service spec: clusterIP: None ports: - port: 9080 name: http selector: app: detailsoc apply -f detailsservice.yaml -n sml Expose the details-service with an ingress and deploy it. Call this file detailsingress.yaml. apiVersion: extensions/v1beta1 kind: Ingress metadata: name: details-ingress annotations: kubernetes.io/ingress.class: "cpx" ingress.citrix.com/insecure-port: "9080" spec: rules: - host: details http: paths: - path: backend: serviceName: details-service servicePort: 9080oc apply -f detailsingress.yaml -n sml Copy and deploy the CPXEastWest.yaml file. apiVersion: extensions/v1beta1 kind: Deployment metadata: name: cpx labels: app: cpx service: cpx spec: replicas: 1 template: metadata: name: cpx labels: app: cpx service: cpx annotations: NETSCALER_AS_APP: "True" spec: serviceAccountName: cpx containers: - name: reviews-cpx image: "quay.io/citrix/citrix-k8s-cpx-ingress:13.0-36.28" securityContext: privileged: true env: - name: "EULA" value: "yes" - name: "KUBERNETES_TASK_ID" value: "" - name: "MGMT_HTTP_PORT" value: "9081" ports: - name: http containerPort: 9080 - name: https containerPort: 443 - name: nitro-http containerPort: 9081 - name: nitro-https containerPort: 9443 imagePullPolicy: Always # Add cic as a sidecar - name: cic image: "quay.io/citrix/citrix-k8s-ingress-controller:1.2.0" env: - name: "EULA" value: "yes" - name: "NS_IP" value: "127.0.0.1" - name: "NS_PROTOCOL" value: "HTTP" - name: "NS_PORT" value: "80" - name: "NS_DEPLOYMENT_MODE" value: "SIDECAR" - name: "NS_ENABLE_MONITORING" value: "YES" - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace args: - --ingress-classes cpx imagePullPolicy: Alwaysoc apply -f CPXEastWest.yaml -n sml Refresh bookinfo.com and the details should be pulled from the details microservice.A CPX was successfully deployed to proxy EW traffic. Citrix Service Mesh Lite with a VPX/MPX Run the following commands to delete the CPX being used as an EW proxy. New file is deployed to configure the VPX as the EW proxy between product page and the details microservices. oc delete -f detailsheadless.yaml -n sml oc delete -f detailsservice.yaml -n sml oc delete -f detailsingress.yaml -n sml oc delete -f CPXEastWest.yaml -n sml Copy and deploy a service, name the file detailstoVPX.yaml, to send traffic from the product page back to the VPX. The IP parameter should be a free VIP on the NetScaler ADC VPX/MPX. --- kind: "Service" apiVersion: "v1" metadata: name: "details" spec: ports: - name: "details" protocol: "TCP" port: 9080 --- kind: "Endpoints" apiVersion: "v1" metadata: name: "details" subsets: - addresses: - ip: "10.217.101.182" # Ingress IP in MPX ports: - port: 9080 name: "details"oc apply -f detailstoVPX.yaml -n sml Redeploy the detailsservice.yaml in front of the details microservice. 
oc apply -f detailsservice.yaml -n sml
Copy and deploy the Ingress to expose the details microservice to the VPX. This is named detailsVPXingress.yaml. The front-end IP should match the VIP on the Tier 1 ADC.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: details-ingress
  annotations:
    kubernetes.io/ingress.class: "vpx"
    ingress.citrix.com/insecure-port: "9080"
    ingress.citrix.com/frontend-ip: "10.217.101.182"
spec:
  rules:
  - host: details
    http:
      paths:
      - path:
        backend:
          serviceName: details-service
          servicePort: 9080
oc apply -f detailsVPXingress.yaml
Refresh bookinfo.com and the details should be pulled from the details microservice. A VPX was successfully deployed to proxy EW traffic.
Service Migration to NetScaler ADC using Routes or Ingress Classes in OpenShift
Service Migration with Route Sharding
The Citrix Ingress Controller (CIC) acts as a Router and redirects the traffic to various pods to distribute incoming traffic among the available pods. This migration process can also be part of a cluster upgrade from legacy OpenShift topologies to automated deployments with Citrix CNC, CIC, and CPX components for cluster migration and upgrade procedures.
This solution can be achieved by two methods:
CIC Router by Plug-In (Pod)
CPX Router inside OpenShift (Sidecar)
Both of the methods are described below along with migration examples.
Static Routes (Default) - maps the OpenShift HostSubnet to the external ADC via a static route. Static routes are common in legacy OpenShift deployments using HAProxy. The static routes can be used in parallel with Citrix CNC, CIC, and CPX when migrating services from one service proxy to another without disrupting the deployed namespaces in a functioning cluster.
Example static route configuration for NetScaler ADC:
oc get hostsubnet (OpenShift cluster) snippet
oc311-master.example.com 10.x.x.x 10.128.0.0/23
oc311-node1.example.com 10.x.x.x 10.130.0.0/23
oc311-node2.example.com 10.x.x.x 10.129.0.0/23
show route (external Citrix VPX) snippet
10.128.0.0 255.255.254.0 10.x.x.x STATIC
10.129.0.0 255.255.254.0 10.x.x.x STATIC
10.130.0.0 255.255.254.0 10.x.x.x STATIC
Auto Routes - uses the CNC (Citrix Node Controller) to automate the external routes to the defined route shards. You can integrate NetScaler ADC with OpenShift in two ways, both of which support OpenShift router sharding.
Route Types
unsecured - external load balancer to CIC router, HTTP traffic is not encrypted.
secured-edge - external load balancer to CIC router terminating TLS.
secured-passthrough - external load balancer to destination terminating TLS.
secured reencrypt - external load balancer to CIC router terminating TLS. CIC router encrypting to destination using TLS.
Deployment Example #1: CIC deployed as an OpenShift router plug-in
For Routes, we use route sharding concepts. Here, the CIC acts as a Router and redirects the traffic to various pods to distribute incoming traffic among the available pods. The CIC is installed as a router plug-in for NetScaler ADC MPX or VPX, deployed outside the cluster.
Citrix Components:
VPX - The Ingress ADC that is presenting the cluster services to DNS.
CIC - provides the ROUTE_LABELS and NAMESPACE_LABELS to the external NetScaler ADC via the CNC route.
Sample YAML file parameters for route shards:
Citrix OpenShift source files are located in GitHub here
Add the following environment variables, ROUTE_LABELS and NAMESPACE_LABELS, with the value in the Kubernetes label format.
The NAMESPACE_LABELS route sharding expression in the CIC is an optional field. If used, it must match the namespace labels that are mentioned in the route.yaml file.
env:
 - name: "ROUTE_LABELS"
   value: "name=apache-web"
 - name: "NAMESPACE_LABELS"
   value: "app=hellogalaxy"
Only the routes created via route.yaml whose labels match the route sharding expressions in the CIC get configured.
metadata:
  name: apache-route
  namespace: hellogalaxy
  labels:
    name: apache-web
Expose the service with service.yaml.
metadata:
  name: apache-service
spec:
  type: NodePort
  #type=LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: apache
Deploy a simple web application with a selector label matching that in the service.yaml.
Deployment Example #2: NetScaler ADC CPX deployed as an OpenShift router
NetScaler ADC CPX can be deployed as an OpenShift router, along with the Citrix Ingress Controller inside the cluster. For more steps on deploying a CPX or CIC as a router in the cluster, see Enable OpenShift router sharding support with NetScaler ADC.
Citrix Components:
VPX - The Ingress ADC that is presenting the cluster services to DNS.
CIC - provides the ROUTE_LABELS and NAMESPACE_LABELS to the external NetScaler ADC to define the route shards.
CNC - provides the auto-route configuration for shards to the external load balancer.
CPX - provides OpenShift routing within the OpenShift cluster.
Service Migration with Ingress Classes Annotation
The Ingress Classes approach uses the ingress class annotation concept: annotations with ingress class information are added to the Ingress object, which helps redirect the traffic to a particular pod/node from the external ADC.
Sample YAML file parameters for ingress classes:
Citrix Ingress source files are located in GitHub here
env:
args:
  - --ingress-classes vpx
The Ingress config file should also have a kubernetes.io/ingress.class annotation field inside metadata, which is matched against the CIC ingress-classes args field at the time of creation.
Sample Ingress VPX deployment with the "ingress.classes" example
kind: Ingress
metadata:
  name: ingress-vpx
  annotations:
    kubernetes.io/ingress.class: "vpx"
Citrix Metrics Exporter
You can use the NetScaler ADC metrics exporter and Prometheus-Operator to monitor NetScaler ADC VPX or CPX ingress devices and NetScaler ADC CPX (east-west) devices. See View metrics of NetScaler ADCs using Prometheus and Grafana.
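As an illustration of how Prometheus could consume those metrics, the snippet below is a minimal static scrape configuration. It assumes a metrics exporter has been exposed through a Kubernetes Service; the job name, Service name, namespace, and port 8888 are assumptions made for this sketch rather than values taken from this guide, so substitute whatever your exporter deployment actually exposes.
# Minimal Prometheus scrape configuration (sketch; target name and port are assumed)
scrape_configs:
  - job_name: 'citrix-adc-metrics-exporter'      # hypothetical job name
    scrape_interval: 30s
    static_configs:
      - targets: ['exporter-service.default.svc.cluster.local:8888']   # assumed exporter Service and port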
  6. Static and Auto Routes in an OpenShift cluster Static Routes (Default) - maps the OpenShift HostSubnet to the External ADC via a Static route Static routes are common in legacy OpenShift deployments using HAProxy. The static routes can be used in parallel with Citrix node controller (CNC), Citrix ingress controller (CIC), and CPX when migrating services from one Service Proxy to another without disrupting the deployed namespaces in a functioning cluster. Example static route configuration for Citrix ADC: oc get hostsubnet (Openshift Cluster) snippet oc311-master.example.com 10.x.x.x 10.128.0.0/23 oc311-node1.example.com 10.x.x.x 10.130.0.0/23 oc311-node2.example.com 10.x.x.x 10.129.0.0/23 show route (external Citrix VPX) snippet 10.128.0.0 255.255.254.0 10.x.x.x STATIC 10.129.0.0 255.255.254.0 10.x.x.x STATIC 10.130.0.0 255.255.254.0 10.x.x.x STATIC Auto Routes - uses the CNC (Citrix Node Controller) to automate the external routes to the defined route shards You can integrate Citrix ADC with OpenShift in two ways, both of which support OpenShift router sharding. Route Types unsecured - external load balancer to CIC router, HTTP traffic is not encrypted. secured-edge - external load balancer to CIC router terminating TLS. secured-passthrough - external load balancer to destination terminating TLS secured reencrypt - external load balancer to CIC router terminating TLS. CIC router encrypting to destination using TLS. See more about the different route types in Citrix ingress controller deployment solutions. Deploy the Citrix Ingress Controller with OpenShift router sharding support The Citrix Ingress Controller (CIC) acts as a Router and redirects the traffic to various pods to distribute incoming traffic among various available pods. This migration process can also be part of a cluster upgrade process from legacy OpenShift topologies to automated deployments with Citrix CNC, CIC, and CPX components for cluster migration and upgrade procedures. This solution can be achieved by two methods: CIC Router Plug-In (Pod) CPX Router inside OpenShift (Sidecar) Both of the methods are described below along with migration examples. OpenShift router sharding allows distributing a set of routes among multiple OpenShift routers. By default, an OpenShift router selects all routes from all namespaces. In router sharding, labels are added to routes or namespaces and label selectors to routers for filtering routes. Each router shard selects only routes with specific labels that match its label selection parameters. To configure router sharding for a Citrix ADC deployment on OpenShift, a Citrix ingress controller instance is required per shard. The Citrix ingress controller instance is deployed with route or namespace labels or both as environment variables depending on the criteria required for sharding. When the Citrix ingress controller processes a route, it compares the route’s labels or route’s namespace labels with the selection criteria configured on it. If the route satisfies the criteria, the appropriate configuration is applied to Citrix ADC, otherwise it does not apply the configuration. In router sharding, selecting a subset of routes from the entire pool of routes is based on selection expressions. Selection expressions are a combination of multiple values and operations. See more about the expressions, values, and operations in this Citrix blog. Bookinfo deployment The architecture for the Bookinfo application is seen in the figure below. 
A CIC is deployed as an OpenShift router plug-in in the first tier, configuring the Citrix ADC VPX to route North-South traffic to the Product Page. In the second tier, a Citrix ADC CPX is deployed as an OpenShift router, routing East-West traffic between the Details and Product Page microservice, whereas East-West traffic between the Product Page, Reviews, and Ratings microservices are being routed through the default HAProxy router. Citrix Components VPX - The Ingress ADC that is presenting the cluster services to DNS. CIC - provides the ROUTE_LABELS and NAMESPACE_LABELS to the external Citrix ADC via the CNC route. CPX - provides OpenShift routing within the OpenShift cluster. Deployment Steps Create a namespace for the deployment. oc create ns sml Deploy the Bookinfo application into the namespace. oc apply -f bookinfo.yaml ################################################################################################## # Details service ################################################################################################## apiVersion: v1 kind: Service metadata: name: details labels: app: details service: details spec: ports: - port: 9080 name: http selector: app: details --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: details-v1 labels: app: details version: v1 spec: replicas: 1 template: metadata: annotations: sidecar.istio.io/inject: "false" labels: app: details version: v1 spec: containers: "bookinfo.yaml" 224L, 5120C Deploy the route file that maps to our product page service. Specify frontend-ip (This is the content switching vip on the Tier 1 ADC) oc apply -f routes-productpage.yaml apiVersion: v1 kind: Route metadata: name: productpage-route namespace: sml annotations: ingress.citrix.com/frontend-ip: "X.X.X.X" labels: name: productpage spec: host: bookinfo.com path: / port: targetPort: 80 to: kind: Service name: productpage-service Deploy the RBAC file for the sml namespace that gives the CIC the necessary permissions to run. The RBAC file is already namespaced. oc apply -f rbac.yaml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: cpx rules: - apiGroups: [""] resources: ["endpoints", "ingresses", "services", "pods", "secrets", "nodes", "routes", "namespaces","tokenreviews","subjectaccessreview"] verbs: ["get", "list", "watch"] # services/status is needed to update the loadbalancer IP in service status for integrating # service of type LoadBalancer with external-dns - apiGroups: [""] resources: ["services/status"] verbs: ["patch"] - apiGroups: ["extensions"] resources: ["ingresses", "ingresses/status"] verbs: ["get", "list", "watch"] - apiGroups: ["apiextensions.k8s.io"] resources: ["customresourcedefinitions"] verbs: ["get", "list", "watch"] - apiGroups: ["apps"] resources: ["deployments"] verbs: ["get", "list", "watch"] - apiGroups: ["citrix.com"] resources: ["rewritepolicies", "canarycrds", "authpolicies", "ratelimits"] verbs: ["get", "list", "watch"] - apiGroups: ["citrix.com"] resources: ["vips"] verbs: ["get", "list", "watch", "create", "delete"] - apiGroups: ["route.openshift.io"] resources: ["routes"] verbs: ["get", "list", "watch"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: cpx roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cpx subjects: - kind: ServiceAccount name: cpx namespace: sml --- apiVersion: v1 kind: ServiceAccount metadata: "rbac.yaml" 51L, 1513C Deploy your CIC to push the route configs to your VPX. 
Match the parameter ROUTE_LABELS to the label specified in the route-productpage.yaml. For more information on syntax for ROUTE_LABELS, please see this blog. oc apply -f cic-productpage-v2.yaml apiVersion: v1 kind: Pod metadata: name: cic labels: app: cic spec: serviceAccount: cpx containers: - name: cic image: "quay.io/citrix/citrix-k8s-ingress-controller:1.7.6" securityContext: privileged: true env: - name: "EULA" value: "yes" # Set NetScaler NSIP/SNIP, SNIP in case of HA (mgmt has to be enabled) - name: "NS_IP" value: "X.X.X.X" # Set NetScaler VIP that receives the traffic # - name: "NS_VIP" # value: "X.X.X.X" - name: "NS_USER" value: "nsroot" - name: "NS_PASSWORD" value: "nsroot" - name: "NS_APPS_NAME_PREFIX" value: "BOOK" - name: "ROUTE_LABELS" value: "name in (productpage)" # - name: "NAMESPACE_LABELS" # value: "app=hellogalaxy" # Set username for Nitro # - name: "NS_USER" # valueFrom: # secretKeyRef: # name: nslogin # key: nsroot # # Set user password for Nitro # - name: "NS_PASSWORD" # valueFrom: # secretKeyRef: # name: nslogin # key: nsroot args: # - --default-ssl-certificate # default/default-cert imagePullPolicy: Always ~ "cic-productpage-v2.yaml" 48L, 1165C Now we must create a headless service that points users looking for details to our CPX using the DNS pods in our cluster. oc apply -f detailsheadless.yaml ################################################################################################## # Details service ################################################################################################## apiVersion: v1 kind: Service metadata: name: details spec: ports: - port: 9080 name: http selector: app: cpx Deploy a new service to expose the details container. oc apply -f detailsservice.yaml ################################################################################################## # Details service ################################################################################################## apiVersion: v1 kind: Service metadata: name: details-service labels: app: details-service service: details-service spec: clusterIP: None ports: - port: 9080 name: http selector: app: details Deploy a new route definition that sits in front of the details service that we created. Notice the label is “name: details”. oc apply -f detailsroutes.yaml apiVersion: v1 kind: Route metadata: name: details-route namespace: sml annotations: ingress.citrix.com/insecure-port: "9080" labels: name: details spec: host: details path: / port: targetPort: 9080 to: kind: Service name: details-service Deploy CPX for E/W traffic. A CIC is deployed as a sidecar and is configured with a ROUTE_LABEL parameter to match the label in detailsroutes.yaml. oc apply -f cpx.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: cpx labels: app: cpx service: cpx spec: replicas: 1 template: metadata: name: cpx labels: app: cpx service: cpx annotations: NETSCALER_AS_APP: "True" spec: serviceAccountName: cpx containers: - name: cpx image: "quay.io/citrix/citrix-k8s-cpx-ingress:13.0-36.28" securityContext: privileged: true env: - name: "EULA" value: "yes" - name: "KUBERNETES_TASK_ID" value: "" - name: "MGMT_HTTP_PORT" value: "9081" ports: - name: http containerPort: 9080 - name: https containerPort: 443 - name: nitro-http containerPort: 9081 - name: nitro-https containerPort: 9443 # readiness probe? 
imagePullPolicy: Always # Add cic as a sidecar - name: cic image: "quay.io/citrix/citrix-k8s-ingress-controller:1.7.6" env: - name: "EULA" value: "yes" - name: "NS_IP" "cpx.yaml" 75L, 1939C Continuous Delivery choices in a microservices environment Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Continuous Delivery (CD) is the natural extension of Continuous Integration: an approach in which teams ensure that every change to the system is releasable, and that we can release any version at the push of a button. The different CD choices, along with their pros and cons, are: Recreate - Version 1 (V1) is terminated and then Version 2 (V2) is rolled out. Pros Easy to set up. Application state is entirely renewed. Cons High impact on the user. Expect downtime that depends on both shutdown and boot duration. Ramped/Rolling Update - V2 is slowly rolled out and replaces V1. Pros Easy to set up. Version is slowly released across instances. Convenient for stateful applications that can handle rebalancing of the data. Cons Rollout/rollback can take time. Supporting multiple APIs is hard. Little control of traffic. Blue Green - V2 is released alongside V1, then the traffic is switched to V2. Pros Instant rollout/rollback. Avoid version issue since the entire application is changed at once. Cons Expensive as it requires double the resources. Proper test of the entire platform should be done before releasing to production. Handling multiple stateful applications can be hard. Canary - V2 is released to a subset of users and then proceeds to a full rollout. Pros Version released for a subset of users. Convenient for error rate and performance monitoring. Fast rollback. Cons Slow rollout. Handling multiple stateful applications can be hard. A/B Testing - V2 is released to a subset of users under specific conditions. Pros Several versions run in parallel. Full control over the traffic distribution. Cons Requires intelligent load balancer. Hard to troubleshoot errors for a given session, so distributed tracing becomes mandatory. Shadow - V2 receives real-world traffic alongside V11 and does not impact the response. Pros Performance testing of the application with production traffic. No impact on the user. No rollout until the stability and performance of the application meet the requirements. Cons Expensive as it requires double the resources. Not a true user test and can be misleading. Complex to set up. Requires mocking service for certain cases. Reference Materials Citrix GitHub: “OpenShift routes and ingress” Citrix Developer Docs: “Deployment solutions” Citrix Blog: “Enable OpenShift router sharding support with Citrix ADC” OpenShift Route Documentation:
  7. NetScaler BLX is a Linux software form factor of NetScaler ADC, which runs natively on the Linux kernel irrespective of the underlying environment. It is designed to run natively on bare-metal Linux on commercial off-the-shelf (COTS) servers. The following are the benefits of using a NetScaler BLX appliance:
Cloud-ready.
Easy management.
Seamless third-party tools integration.
Coexistence with other applications.
DPDK support.
Why is there a need for a bare-metal version of NetScaler? NetScaler BLX appliances provide simplicity with no virtual machine overhead for better performance. Also, you can run a NetScaler BLX appliance on your preferred server hardware.
Use Cases - High traffic load, mission-critical applications, latency-sensitive workloads, North-South traffic.
Characteristics - Lightweight software package and no VM overhead.
BLX Deployment using Terraform Guide
HashiCorp Terraform is an infrastructure-as-code software tool used to orchestrate and manage IT infrastructure, including networking. Terraform codifies infrastructure into declarative configuration files for easier provisioning, compliance, and management. The Terraform provider CitrixBLX allows users to bring up any number of NetScaler BLX instances in shared and DPDK modes (supporting both Intel and Mellanox interfaces). Along with the Citrix ADC Terraform provider, it allows users to configure BLX instances for various use cases such as global server load balancing, web application firewall policies, and more. With Terraform, you can share and reuse your NetScaler configurations across your environments, a key time saver when migrating applications from your data center to any public cloud.
A. Setting up Requirements - Setting up the Terraform client and installing Go
Terraform (https://www.terraform.io/downloads.html) 0.10.x
Go (https://golang.org/doc/install) 1.11 (to build the provider plugin)
After installing Go, set PATH and GOPATH accordingly:
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go/
B. Terraform plugin to Deploy BLX
The Terraform provider for NetScaler BLX is not available through registry.terraform.io as of now, so users have to install the provider manually.
Clone the repository to $GOPATH/src/github.com/citrix/terraform-provider-blx:
$ git clone git@github.com:citrix/terraform-provider-citrixblx $GOPATH/src/github.com/citrix/terraform-provider-blx
Enter the provider directory and build the provider:
$ cd $GOPATH/src/github.com/citrix/terraform-provider-blx
$ make build
Navigating the repository:
citrixblx folder - Contains the citrixblx resource file and modules leveraged by Terraform.
examples folder - Contains the examples for users to deploy BLX.
2. Create the following directory on your local machine and save the NetScaler Terraform binary there, for example on an Ubuntu machine. Note that the directory structure has to be the same as below; you can change the version 0.0.1 to the NetScaler provider version you downloaded.
mkdir -p /home/user/.terraform.d/plugins/registry.terraform.io/citrix/citrixblx/0.0.1/linux_amd64
Copy the terraform-provider-citrixblx binary to the folder created above as shown below:
cp $GOPATH/bin/terraform-provider-citrixblx /home/user/.terraform.d/plugins/registry.terraform.io/citrix/citrixblx/0.0.1/linux_amd64
C. Get Started on using Terraform to deploy NetScaler BLX
To get familiar with NetScaler BLX deployment through Terraform, let's get started with the basic configuration of setting up a shared mode BLX in Terraform.
Network mode of a NetScaler BLX appliance defines whether the NIC ports of the Linux host are shared or not shared with other Linux applications running on the host. A NetScaler BLX appliance can be configured to run on one of the following network modes: Shared mode - A NetScaler BLX appliance configured to run in shared mode, shares the Linux host NIC ports with other Linux applications. Dedicated mode - A NetScaler BLX appliance configured in dedicated mode has dedicated Linux host NIC ports and it does not share the ports with other Linux applications. In our below Deployment case, we will bring up BLX in Simple Shared mode, similarly we have provider.tf & resources.tf to bring up BLX in – DPDK Mode ( Step inside blx-dedicated directory in examples folder ) DPDK Mode for Mellanox Interfaces ( Step inside blx-mlx directory in examples folder ) Secured way by not disclosing BLX Password ( Step inside blx-sensitive-pass in examples folder ). 1. Now navigate to examples folder as below. Here you can find many ready to use examples for you to get started: cd $GOPATH/src/github.com/citrix/terraform-provider-blx/examples Lets deploy a simple shared mode NetScaler BLX. cd terraform-provider-citrixblx/examples/simple-blx-shared/ 2. Provider.tf contains the details of the target Citrix ADC. Edit the simple-blx-shared/provider.tf as follows. For Terraform version > 0.13 edit the provider.tf as follows - terraform { required_providers { citrixblx = { source = "citrix/citrixblx" } } } provider "citrixblx" { } For terraform version < 0.13, edit the provider.tf as follows – provider "citrixblx" { } 3. Resources.tf contains the desired state of the resources that you want to manage through terraform. Here we want to create a shared mode blx. Edit the simple-blx-shared/resources.tf with your configuration values – source path of BLX packages to be installed, host ip address, host username, host password, blx password as below. resource "citrixblx_adc" "blx_1" { source = "/root/blx-rpm-13.1-27.59.tar.gz" host = { ipaddress = "10.102.174.76" username = "user" password = " DummyHostPass " } config = { worker_processes = "3" } password = DummyPassword}resource "citrixblx_adc" "blx_2" { source = "/root/blx-rpm-13.1-27.59.tar.gz" host = { ipaddress = "10.102.56.25" username = "user" password = " DummyHostPass " } config = { worker_processes = "1" } password = var.blx_password}[/code] 4 . Once the provider.tf and resources.tf is edited and saved with the desired values in the simple-blx-shared folder, you are good to run terraform and configure NetScaler. Initialize the terraform by running terraform-init inside the simple_blx-shared folder as follow: terraform-provider-citrixblx/examples/simple-blx-shared$ terraform init You should see following output if terraform was able to successfully find citrix blx provider and initialize it - Initializing the backend... Initializing provider plugins... - Reusing previous version of hashicorp/citrixblx from the dependency lock file - Installing hashicorp/citrixblx v0.0.1... - Installed hashicorp/citrixblx v0.0.1 (unauthenticated) Terraform has been successfully initialized! You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work. If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary. 5. 
To view the changes that will be done to your NetScaler configurations, run terraform-plan # citrixblx_adc.blx_1 will be created + resource "citrixblx_adc" "blx_1" { + config = { + "worker_processes" = "3" } + host = { + "ipaddress" = "10.102.174.76" + "password" = "freebsd" + "username" = "root" } + id = (known after apply) + password = (sensitive) + source = "/root/blx-rpm-13.1-27.59.tar.gz" } # citrixblx_adc.blx_2 will be created + resource "citrixblx_adc" "blx_2" { + config = { + "worker_processes" = "1" } + host = { + "ipaddress" = "10.102.56.25" + "password" = "freebsd" + "username" = "root" } + id = (known after apply) + password = (sensitive) + source = "/root/blx-rpm-13.1-27.59.tar.gz" } 6. Terraform apply – To apply the Infrastructure end to end – Install & Bring up BLX terrafrom-apply citrixblx_adc.blx_2: Creating... citrixblx_adc.blx_1: Creating... citrixblx_adc.blx_1: Still creating... [10s elapsed] citrixblx_adc.blx_2: Still creating... [10s elapsed] citrixblx_adc.blx_2: Still creating... [20s elapsed] citrixblx_adc.blx_1: Still creating... [20s elapsed] citrixblx_adc.blx_1: Still creating... [30s elapsed] citrixblx_adc.blx_2: Still creating... [30s elapsed] . . citrixblx_adc.blx_1: Creation complete after 2m52s [id=10.102.174.76] citrixadc_nsip.nsip: Creating... citrixadc_service.tf_service: Creating... citrixadc_nsfeature.nsfeature: Creating... citrixadc_lbvserver.tf_lbvserver: Creating... citrixadc_nsfeature.nsfeature: Creation complete after 0s [id=tf-nsfeature-20220810125911768300000001] citrixadc_nsip.nsip: Creation complete after 0s [id=192.168.2.55] citrixadc_service.tf_service: Creation complete after 0s [id=tf_service] citrixadc_lbvserver.tf_lbvserver: Creation complete after 0s [id=tf_lbvserver] citrixadc_lbvserver_service_binding.tf_binding: Creating... citrixadc_lbvserver_service_binding.tf_binding: Creation complete after 0s [id=tf_lbvserver,tf_service] citrixblx_adc.blx_2: Still creating... [3m0s elapsed] citrixblx_adc.blx_2: Still creating... [3m10s elapsed] citrixblx_adc.blx_2: Still creating... [3m20s elapsed] citrixblx_adc.blx_2: Still creating... [3m30s elapsed] citrixblx_adc.blx_2: Still creating... [3m40s elapsed] citrixblx_adc.blx_2: Still creating... [3m50s elapsed] citrixblx_adc.blx_2: Still creating... [4m0s elapsed] citrixblx_adc.blx_2: Creation complete after 4m7s [id=10.102.56.25] D. Configuring BLX for Load Balancing Use Case Citrix ADC Terraform provider allows users to configure ADCs for various use-cases such as global server load balancing, web application firewall policies, and more. Here we will look how to integrate both plugins to configure BLX – Edit the simple-blx-shared/provider.tf as follows and add details of your target adc provider "citrixadc" { endpoint = "http://10.102.174.76:9080" username = "user" password = "DummyPassword "} 2. Add config.tf section which specifies configuration details to be applied on NetScaler BLX. Here notice depends on variable used to apply configuration on a particular BLX Instance. In below example config.tf , LB vserver configurations are applied on BLX Instance blx_1. 
resource "citrixadc_nsip" "nsip" {
  ipaddress  = "192.168.2.55"
  type       = "VIP"
  netmask    = "255.255.255.0"
  icmp       = "ENABLED"
  depends_on = [citrixblx_adc.blx_1]
  state      = "ENABLED"
}

resource "citrixadc_nsfeature" "nsfeature" {
  lb         = true
  depends_on = [citrixblx_adc.blx_1]
}

resource "citrixadc_lbvserver" "tf_lbvserver" {
  ipv46       = "10.10.10.33"
  name        = "tf_lbvserver"
  port        = 80
  depends_on  = [citrixblx_adc.blx_1]
  servicetype = "HTTP"
}

resource "citrixadc_service" "tf_service" {
  name        = "tf_service"
  ip          = "192.168.43.33"
  depends_on  = [citrixblx_adc.blx_1]
  servicetype = "HTTP"
  port        = 80
}

resource "citrixadc_lbvserver_service_binding" "tf_binding" {
  name        = citrixadc_lbvserver.tf_lbvserver.name
  servicename = citrixadc_service.tf_service.name
  weight      = 1
}

3. After adding the configuration above, run terraform plan and terraform apply again so that the LB configuration is pushed to the BLX instance.

4. Run terraform destroy to tear down the infrastructure:

[root@localhost simple-blx-shared]# terraform destroy
citrixblx_adc.blx_2: Refreshing state... [id=10.102.56.25]
citrixblx_adc.blx_1: Refreshing state... [id=10.102.174.76]
citrixadc_nsip.nsip: Refreshing state... [id=192.168.2.55]
citrixadc_nsfeature.nsfeature: Refreshing state... [id=tf-nsfeature-20220810125911768300000001]
citrixadc_service.tf_service: Refreshing state... [id=tf_service]
citrixadc_lbvserver.tf_lbvserver: Refreshing state... [id=tf_lbvserver]
citrixadc_lbvserver_service_binding.tf_binding: Refreshing state... [id=tf_lbvserver,tf_service]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy

citrixblx_adc.blx_2: Destroying... [id=10.102.56.25]
citrixadc_lbvserver_service_binding.tf_binding: Destroying... [id=tf_lbvserver,tf_service]
citrixadc_nsfeature.nsfeature: Destroying... [id=tf-nsfeature-20220810125911768300000001]
citrixadc_nsip.nsip: Destroying... [id=192.168.2.55]
citrixadc_nsfeature.nsfeature: Destruction complete after 0s
citrixadc_lbvserver_service_binding.tf_binding: Destruction complete after 0s
citrixadc_service.tf_service: Destroying... [id=tf_service]
citrixadc_lbvserver.tf_lbvserver: Destroying... [id=tf_lbvserver]
citrixadc_service.tf_service: Destruction complete after 0s
citrixadc_nsip.nsip: Destruction complete after 0s
citrixadc_lbvserver.tf_lbvserver: Destruction complete after 1s
citrixblx_adc.blx_1: Destroying... [id=10.102.174.76]
citrixblx_adc.blx_2: Still destroying... [id=10.102.56.25, 10s elapsed]
citrixblx_adc.blx_2: Destruction complete after 10s
citrixblx_adc.blx_1: Still destroying... [id=10.102.174.76, 10s elapsed]
citrixblx_adc.blx_1: Destruction complete after 10s

Destroy complete! Resources: 7 destroyed.

Conclusion

As seen above, Terraform abstracts the ADC technicalities and makes it easy to codify ADC deployments and integrate the ADC with other applications. You can use the Terraform NetScaler BLX provider and the Citrix ADC Terraform provider as an integrated solution for end-to-end NetScaler BLX deployments as-is, or customize it to your requirements. The Citrix ADC Terraform modules enable an infrastructure-as-code approach and integrate seamlessly with your automation environment to provide self-service infrastructure.

References

BLX documentation - https://docs.citrix.com/en-us/citrix-adc-blx/current-release.html
Terraform provider for BLX - https://github.com/citrix/terraform-provider-citrixblx
Terraform provider to configure ADC - https://registry.terraform.io/providers/citrix/citrixadc/latest/docs
8. Introduction

NetScaler supports one-time passwords (OTPs) without using a third-party server. OTPs are a highly secure option for authenticating to secure servers because the generated number or passcode is random. Previously, OTPs were offered by specialized firms, such as RSA, using dedicated devices that generate random numbers. In addition to reducing capital and operating expenses, this feature enhances the administrator's control by keeping the entire configuration on the NetScaler appliance.

To use the OTP solution, a user must register with a NetScaler virtual server. Registration is required only once per unique device and can be restricted to certain environments. Configuring and validating a registered user is similar to configuring an extra authentication policy.

This POC guide shows how a single UI (logon form) can be used for both the OTP registration and OTP validation flows, instead of asking users to go to different endpoints for each.

NetScaler Configuration

VPN vserver and AAA vserver creation

add vpn vserver test.aaadomain.net SSL 10.106.1.1 443
add authentication vserver aaavserver1 SSL 0.0.0.0

Creating and binding an authnprofile to the VPN vserver (for advanced or nFactor OTP configuration)

add authnprofile authnprof -authnVsName aaavserver1
set vpn vserver test.aaadomain.net -authnprofile authnprof

Creating and binding a single unified login schema for OTP registration and validation

add authentication loginSchema otpregistrationorvalidation -authenticationSchema "/nsconfig/loginschema/LoginSchema/DualAuthOrOTPRegisterDynamic.xml"
add authentication loginSchemaPolicy otpregistrationorvalidation -rule true -action otpregistrationorvalidation
bind authentication vserver aaavserver1 -policy otpregistrationorvalidation -priority 1 -gotoPriorityExpression END

OTP registration flow

add authentication ldapAction ldap -serverIP 10.106.7.50 -serverPort 636 -ldapBase "dc=xyz,dc=com" -ldapBindDn test@xyz.com -ldapBindDnPassword test@123 -ldapLoginName samAccountName
add authentication Policy ldap-registration -rule "aaa.login.VALUE(\"otpregister\").eq(\"true\")" -action ldap
add authentication policylabel otp-registration -loginSchema LSCHEMA_INT
add authentication ldapAction ldap-otp -serverIP 10.106.7.50 -serverPort 636 -ldapBase "dc=xyz,dc=com" -ldapBindDn test@xyz.com -ldapBindDnPassword test@123 -ldapLoginName sAMAccountName -secType SSL -authentication DISABLED -OTPSecret userParameters
add authentication Policy ldap-otp -rule true -action ldap-otp
bind authentication policylabel otp-registration -policyName ldap-otp -priority 1 -gotoPriorityExpression NEXT
bind authentication vserver aaavserver1 -policy ldap-registration -priority 1 -nextFactor otp-registration -gotoPriorityExpression NEXT

OTP validation flow

add authentication Policy ldap -rule true -action ldap
(The same ldap action/profile created for the OTP registration flow can be used for the OTP validation flow as well.)
add authentication policylabel otp-validation -loginSchema LSCHEMA_INT
bind authentication policylabel otp-validation -policyName ldap-otp -priority 1 -gotoPriorityExpression NEXT
(The same ldap-otp policy created for the OTP registration flow can be used for the OTP validation flow as well.)
bind authentication vserver aaavserver1 -policy ldap -priority 2 -nextFactor otp-validation -gotoPriorityExpression NEXT

CLI snippet of the nFactor configuration on the AAA vserver (here, aaavserver1)

> sh authentication vs aaavserver1
	aaavserver1 (10.106.1.1:443) - SSL	IPSet: ???
	Type: CONTENT
	State: UP
	Client Idle Timeout: 180 sec
	Down state flush: DISABLED
	Disable Primary Vserver On Down: DISABLED
	HTTP profile name: nshttp_default_strict_validation
	Network profile name: ???
	Appflow logging: ENABLED
	Authentication: ON
	Device Certificate Check: ???
	CGInfra Homepage Redirect: ???
	Current AAA Sessions: 0
	Current Users: 0
	Dtls: ???
	L2Conn: ???
	RDP Server Profile Name: ???
	Max Login Attempts: 0
	Failed Login Timeout: 0
	Fully qualified domain name: ???
	PCoIP VServer Profile Name: ???
	Listen Policy: NONE
	Listen Priority: 0
	IcmpResponse: ???
	RHIstate: ???
	Traffic Domain: 0
	Probe Protocol: ???

1)	LoginSchema Policy Name: otpregistrationorvalidation	Priority: 1
	GotoPriority Expression: END

1)	Advanced Authentication Policy Name: ldap-registration	Priority: 1
	GotoPriority Expression: NEXT
	NextFactor name: otp-registration

2)	Advanced Authentication Policy Name: ldap	Priority: 2
	GotoPriority Expression: NEXT
	NextFactor name: otp-validation

User Endpoint

Now we test the above configuration.

OTP registration flow with the Citrix SSO app

1. Open a browser and navigate to the domain FQDN managed by the NetScaler Gateway. We use https://test.aaadomain.net.
2. The following login screen appears after your browser is redirected. A user who wants to register a new device selects the "Click to register" checkbox.
3. On the next screen, enter the Username, Password, and the Device Name to be registered.
4. On your mobile device, open the Citrix SSO app and scan the QR code.
5. Select Done, and you will see a confirmation that the device was added successfully. You can also check that the device was added successfully by clicking the "Test" button and entering the OTP from your Citrix SSO app.

OTP validation flow

1. Open a browser and navigate to the domain FQDN managed by the NetScaler Gateway. We use https://test.aaadomain.net.
2. After your browser is redirected to the login screen, enter your Username, Password, and Passcode (the OTP from the Citrix SSO app for the Android1 device) if your device is already registered.
3. On successful authentication, you are logged in to Citrix Gateway.
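Note: test.aaadomain.net and aaavserver1 above are SSL virtual servers and remain DOWN until a server certificate is bound to them. The following is a minimal sketch of that step, assuming a certificate and key have already been uploaded to /nsconfig/ssl/; the certkey name and file names are placeholders, not part of the original configuration:

add ssl certKey aaadomain-cert -cert /nsconfig/ssl/aaadomain.cer -key /nsconfig/ssl/aaadomain.key
bind ssl vserver test.aaadomain.net -certkeyName aaadomain-cert
bind ssl vserver aaavserver1 -certkeyName aaadomain-cert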
9. Overview

This proof of concept guide provides a step-by-step method to deploy an instance of the NetScaler VPX on Nutanix AHV and prepare it for use. NetScaler VPX running on Nutanix AHV is supported through the Citrix Ready Program. This guide assists in deploying a VPX appliance using Prism Element with some basic best practices. This guide does NOT cover the specific needs of every deployment; it is recommended that deployments and testing be conducted to define the best method for a particular need.

Nutanix Acropolis Hypervisor (AHV) is a modern and secure virtualization platform that powers VMs and containers for applications and cloud-native workloads, on-premises and in public clouds, and can run any application at any scale.

Prerequisites

This guide assumes the following prerequisites have been completed:

Nutanix AHV is configured and ready for use
Nutanix Prism Element will be used for the deployment (not Prism Central)
Sufficient resources are available to support the recommended VM configuration
The NetScaler VPX requires a minimum of 2 vCPUs and 2 GB of RAM (4 GB RAM or more is recommended)
At least one vNIC (2 or more vNICs recommended for Management and Production networks)
At least 20 GB of disk space
A basic understanding of Nutanix AHV
A basic understanding of Nutanix Prism Element
Familiarity with the Acropolis Command Line Interface (ACLI)
Familiarity with the initial setup of a NetScaler VPX appliance

Considerations for NetScaler VPX appliances

A proof of concept deployment is set up to try out different functions of the VPX appliance. With a POC deployment, customers can:

Try different features
Familiarize themselves with the environment
Try different configurations to see how they impact performance, usability, and so on

A POC is not intended for production workloads and should only be used for learning and feasibility purposes. Therefore, a virtual appliance running with 2 vCPUs, 4 GB of RAM, and 20 GB of disk should be sufficient. In a production environment, it is recommended to provision the appliance with adequate resources for the expected workload. With a virtual appliance on Nutanix AHV, scaling resources up or down is very easy, which makes the virtual appliance very flexible. To determine the required resources for your workload, use the NetScaler Form Factors Datasheet.

Deploying the NetScaler VPX

Download the VPX virtual appliance (the example below uses the latest 14.1 version of the firmware; other versions are available for AHV should they meet your business requirements)
Download the "Citrix ADC VPX for KVM" file. The first extraction produces a "tar" file; extract that until you see the ".qcow2" and ".xml" files
Log in to Prism Element (not Prism Central)
From Home, select Settings
Choose Image Configuration
Give the image a name
Select the "DISK" image type
Pick a storage container
Choose "Upload a file" and navigate to the NetScaler VPX ".qcow2" file
Choose "Save" to create the image

7. Once the file uploads, you should see the image listed and the status should show as "ACTIVE". This may take some time as Prism Element processes the image file.
8. Navigate to VM and then click Create VM
9. On the Create VM screen, remove the CD-ROM drive
10. Add a new disk
    Select "Clone from the image service" from the drop-down menu
    In the Bus Type, select "SCSI"
    Note: The NetScaler VPX has been deployed with PCI, SCSI, SATA, and IDE bus disks without issue
    Choose the NetScaler image that was uploaded
    Choose "Add"
11. The disk will then be added
12. Add VLANs as necessary. A minimum of two VLANs (Management and LAN) are recommended
13. Do not set affinity now, as it will be set later in this guide
14. Choose "Save"

Once the VM is listed and shows as powered off, we must add a serial port. The VPX appliance will not boot without a serial port connection, and Nutanix AHV does not add a serial port by default.

To add the serial port:

1. SSH into the CVM using the username "nutanix" and the password you set for that account (you can find a list of CVM IP addresses in the "Hardware" section of the Prism Element console)
2. Enter the ACLI

acli

3. Enter the following command to create the serial port, where <vmname> is the name you gave to the VPX appliance

vm.serial_port_create <vmname> type=kServer index=0

At this point, you can snapshot the VM to be used as a template later, should you wish to deploy more instances (an HA pair, for example).

Initial Configuration

1. Power on the VM
2. Launch the VNC console
3. Watch the VM boot
4. Log in with the default credentials of nsroot / nsroot. You will be prompted to change the password; it is recommended that you change it at this time
5. Manually run the "config ns" command from the CLI
    Assign the IP
    Enter the netmask
    Choose "Apply changes and exit"
6. Restart the VM

When the appliance reboots, log back into the CLI and add the default route using the command below, replacing <default_route> with the default route assigned to the network that your NSIP resides on.

route add 0.0.0.0 0.0.0.0 <default_route>

Save the configuration using the command below to ensure the default route persists during a reboot.

save ns config

Now you can connect to the GUI. After this point, the configuration proceeds like any other NetScaler setup.

Additional Considerations

High CPU usage

CPU usage shows as high by default on NetScaler VPX appliances. If you want to enable CPU sharing, enable CPU Yield.

1. From the GUI
    Navigate to Settings and click the "Change VPX Settings" link
    Change "CPU Yield" to Yes
    Save the configuration
2. From the CLI

set ns vpxparam -cpuyield YES

Running a pair of appliances for high availability (HA)

If you are going to run an HA pair of appliances, it is recommended that you set anti-affinity rules so the appliances always run on separate AHV hosts. To accomplish this:

1. Log in to the CVM via SSH
2. Create the VM group, where <vmgroupname> is the name you give to the group of NetScalers you deployed on AHV

vm_group.create <vmgroupname>

3. Add the existing NetScalers to the group, where <vmgroupname> is the name from the previous step, and <vm1name> and <vm2name> are the NetScaler VMs to be added to the group

vm_group.add_vms <vmgroupname> vm_list=<vm1name>,<vm2name>

4. Set the anti-affinity rule, where <vmgroupname> is the name given in step 2 above

vm_group.antiaffinity_set <vmgroupname>

Disaster Recovery and GSLB

Suppose multiple sites are to be used, and Global Server Load Balancing (GSLB) is utilized for access. In that case, it is recommended that an HA pair of NetScalers be deployed on AHV at both locations. You can then use Nutanix technologies such as DR replication to ensure the availability of your NetScaler pair should you experience a cluster outage.
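Each of those HA pairs is formed from the NetScaler CLI once both appliances at a site are reachable on the network. The following is a minimal sketch, assuming hypothetical NSIPs of 192.168.10.11 and 192.168.10.12 for the two nodes at one site; run each block on the node indicated:

On the first appliance (NSIP 192.168.10.11):

add ha node 1 192.168.10.12
save ns config

On the second appliance (NSIP 192.168.10.12):

add ha node 1 192.168.10.11
save ns config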
More information on Nutanix DR replication can be found here. Resources NetScaler Form Factors Datasheet FAQ on Deploying a NetScaler VPX
  10. Overview NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments. As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required. NetScaler VPX The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms: XenServer VMware ESX Microsoft Hyper-V Linux KVM Amazon Web Services Microsoft Azure Google Cloud Platform This deployment guide focuses on NetScaler ADC VPX on Microsoft Azure Microsoft Azure Microsoft Azure is an ever-expanding set of cloud computing services built to help organizations meet their business challenges. Azure gives users the freedom to build, manage, and deploy applications on a massive, global network using their preferred tools and frameworks. With Azure, users can: Be future-ready with continuous innovation from Microsoft to support their development today and their product visions for tomorrow. Operate hybrid cloud seamlessly on-premises, in the cloud, and at the edge—Azure meets users where they are. Build on their terms with Azure’s commitment to open source and support for all languages and frameworks, allowing users to be free to build how they want and deploy where they want. Trust their cloud with security from the ground up—backed by a team of experts and proactive, industry-leading compliance that is trusted by enterprises, governments, and startups. Azure Terminology Here is a brief description of the key terms used in this document that users must be familiar with: Azure Load Balancer – Azure load balancer is a resource that distributes incoming traffic among computers in a network. Traffic is distributed among virtual machines defined in a load-balancer set. A load balancer can be external or internet-facing, or it can be internal. Azure Resource Manager (ARM) – ARM is the new management framework for services in Azure. Azure Load Balancer is managed using ARM-based APIs and tools. Back-End Address Pool – IP addresses associated with the virtual machine NIC to which load is distributed. BLOB - Binary Large Object – Any binary object like a file or an image that can be stored in Azure storage. Front-End IP Configuration – An Azure Load balancer can include one or more front-end IP addresses, also known as a virtual IPs (VIPs). These IP addresses serve as ingress for the traffic. 
Instance Level Public IP (ILPIP) – An ILPIP is a public IP address that users can assign directly to a virtual machine or role instance, rather than to the cloud service that the virtual machine or role instance resides in. This does not take the place of the VIP (virtual IP) that is assigned to their cloud service. Rather, it is an extra IP address that can be used to connect directly to a virtual machine or role instance. Note: In the past, an ILPIP was referred to as a PIP, which stands for public IP. Inbound NAT Rules – This contains rules mapping a public port on the load balancer to a port for a specific virtual machine in the back-end address pool. IP-Config - It can be defined as an IP address pair (public IP and private IP) associated with an individual NIC. In an IP-Config, the public IP address can be NULL. Each NIC can have multiple IP-Configs associated with it, which can be up to 255. Load Balancing Rules – A rule property that maps a given front-end IP and port combination to a set of back-end IP addresses and port combinations. With a single definition of a load balancer resource, users can define multiple load balancing rules, each rule reflecting a combination of a front-end IP and port and back end IP and port associated with virtual machines. Network Security Group (NSG) – NSG contains a list of Access Control List (ACL) rules that allow or deny network traffic to virtual machine instances in a virtual network. NSGs can be associated with either subnets or individual virtual machine instances within that subnet. When an NSG is associated with a subnet, the ACL rules apply to all the virtual machine instances in that subnet. In addition, traffic to an individual virtual machine can be restricted further by associating an NSG directly to that virtual machine. Private IP addresses – Used for communication within an Azure virtual network, and user on-premises network when a VPN gateway is used to extend a user network to Azure. Private IP addresses allow Azure resources to communicate with other resources in a virtual network or an on-premises network through a VPN gateway or ExpressRoute circuit, without using an internet-reachable IP address. In the Azure Resource Manager deployment model, a private IP address is associated with the following types of Azure resources – virtual machines, internal load balancers (ILBs), and application gateways. Probes – This contains health probes used to check availability of virtual machines instances in the back-end address pool. If a particular virtual machine does not respond to health probes for some time, then it is taken out of traffic serving. Probes enable users to track the health of virtual instances. If a health probe fails, the virtual instance is taken out of rotation automatically. Public IP Addresses (PIP) – PIP is used for communication with the Internet, including Azure public-facing services and is associated with virtual machines, internet-facing load balancers, VPN gateways, and application gateways. Region - An area within a geography that does not cross national borders and that contains one or more data centers. Pricing, regional services, and offer types are exposed at the region level. A region is typically paired with another region, which can be up to several hundred miles away, to form a regional pair. Regional pairs can be used as a mechanism for disaster recovery and high availability scenarios. Also referred to generally as location. 
Resource Group - A container in Resource Manager that holds related resources for an application. The resource group can include all resources for an application, or only those resources that are logically grouped. Storage Account – An Azure storage account gives users access to the Azure blob, queue, table, and file services in Azure Storage. A user storage account provides the unique namespace for user Azure storage data objects. Virtual Machine – The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in various sizes. Virtual Network - An Azure virtual network is a representation of a user network in the cloud. It is a logical isolation of the Azure cloud dedicated to a user subscription. Users can fully control the IP address blocks, DNS settings, security policies, and route tables within this network. Users can also further segment their VNet into subnets and launch Azure IaaS virtual machines and cloud services (PaaS role instances). Also, users can connect the virtual network to their on-premises network using one of the connectivity options available in Azure. In essence, users can expand their network to Azure, with complete control on IP address blocks with the benefit of the enterprise scale Azure provides. Use Cases Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on Azure combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the Azure Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same, powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on Azure enables several compelling use cases that not only support the immediate needs of today’s enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers. Global Server Load Balancing (GSLB) Global Server Load Balancing (GSLB) is huge for many of our customers. Those businesses have an on-prem data center presence serving regional customers, but with increasing demand for their business, they now want to scale and deploy their presence globally across AWS and Azure while maintaining their on-prem presence for regional customers. Customers want to do all of this with automated configurations as well. Thus, they are looking for a solution that can rapidly adapt to either evolving business needs or changes in the global market. With NetScaler ADC on the network administrator’s side, customers can use the Global Load Balancing (GLB) StyleBook to configure applications both on-prem and in the cloud, and that same config can be transferred to the cloud with NetScaler ADM. Users can reach either on-prem or cloud resources depending on proximity with GSLB. This allows for a seamless experience no matter where the users are located in the world. Deployment Types Multi-NIC Multi-IP Deployment (Three-NIC Deployment) Use Cases Multi-NIC Multi-IP (Three-NIC) Deployments are used to achieve real isolation of data and management traffic. Multi-NIC Multi-IP (Three-NIC) Deployments also improve the scale and performance of the ADC. 
Multi-NIC Multi-IP (Three-NIC) Deployments are used in network applications where throughput is typically 1 Gbps or higher and a Three-NIC Deployment is recommended. Multi-NIC Multi-IP (Three-NIC) Deployments are also used in network applications for WAF Deployment. Multi-NIC Multi-IP (Three-NIC) Deployment for GSLB Customers would potentially deploy using three-NIC deployment if they are deploying into a production environment where security, redundancy, availability, capacity, and scalability are critical. With this deployment method, complexity and ease of management are not critical concerns to the users. Azure Resource Manager (ARM) Template Deployment Customers would deploy using Azure Resource Manager (ARM) Templates if they are customizing their deployments or they are automating their deployments. Deployment Steps When users deploy a NetScaler ADC VPX instance on a Microsoft Azure Resource Manager (ARM), they can use the Azure cloud computing capabilities and use NetScaler ADC load balancing and traffic management features for their business needs. Users can deploy NetScaler ADC VPX instances on Azure Resource Manager either as standalone instances or as high availability pairs in active-standby modes. But users can deploy a NetScaler ADC VPX instance on Microsoft Azure in either of two ways: Through the Azure Marketplace. The NetScaler ADC VPX virtual appliance is available as an image in the Microsoft Azure Marketplace. NetScaler ADC ARM templates are available in the Azure Marketplace for standalone and HA deployment types. Using the NetScaler ADC Azure Resource Manager (ARM) json template available on GitHub. For more information, see the GitHub repository for NetScaler ADC Azure Templates. How a NetScaler ADC VPX Instance Works on Azure In an on-premises deployment, a NetScaler ADC VPX instance requires at least three IP addresses: Management IP address, called NSIP address Subnet IP (SNIP) address for communicating with the server farm Virtual server IP (VIP) address for accepting client requests For more information, see: Network Architecture for NetScaler ADC VPX Instances on Microsoft Azure. Note: VPX virtual appliances can be deployed on any instance type that has two or more cores and more than 2 GB memory. In an Azure deployment, users can provision a NetScaler ADC VPX instance on Azure in three ways: Multi-NIC multi-IP architecture Single NIC multi IP architecture ARM (Azure Resource Manager) templates Depending on requirements, users can deploy any of these supported architecture types. Multi-NIC Multi-IP Architecture (Three-NIC) In this deployment type, users can have more than one network interfaces (NICs) attached to a VPX instance. Any NIC can have one or more IP configurations - static or dynamic public and private IP addresses assigned to it. Refer to the following use cases: Configure a High-Availability Setup with Multiple IP Addresses and NICs Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands Configure a High-Availability Setup with Multiple IP Addresses and NICs In a Microsoft Azure deployment, a high-availability configuration of two NetScaler ADC VPX instances is achieved by using the Azure Load Balancer (ALB). This is achieved by configuring a health probe on ALB, which monitors each VPX instance by sending health probes at every 5 seconds to both primary and secondary instances. In this setup, only the primary node responds to health probes and the secondary does not. 
Once the primary sends the response to the health probe, the ALB starts sending the data traffic to the instance. If the primary instance misses two consecutive health probes, ALB does not redirect traffic to that instance. On failover, the new primary starts responding to health probes and the ALB redirects traffic to it. The standard VPX high availability failover time is three seconds. The total failover time that might occur for traffic switching can be a maximum of 13 seconds. Users can deploy a pair of NetScaler ADC VPX instances with multiple NICs in an active-passive high availability (HA) setup on Azure. Each NIC can contain multiple IP addresses. The following options are available for a multi-NIC high availability deployment: High availability using Azure availability set High availability using Azure availability zones For more information about Azure Availability Set and Availability Zones, see the Azure documentation: Manage the Availability of Linux Virtual Machines. High Availability using Availability Set A high availability setup using an availability set must meet the following requirements: An HA Independent Network Configuration (INC) configuration The Azure Load Balancer (ALB) in Direct Server Return (DSR) mode All traffic goes through the primary node. The secondary node remains in standby mode until the primary node fails. Note: For a NetScaler VPX high availability deployment on the Azure cloud to work, users need a floating public IP (PIP) that can be moved between the two VPX nodes. The Azure Load Balancer (ALB) provides that floating PIP, which is moved to the second node automatically in the event of a failover. In an active-passive deployment, the ALB front-end public IP (PIP) addresses are added as the VIP addresses in each VPX node. In an HA-INC configuration, the VIP addresses are floating and the SNIP addresses are instance specific. Users can deploy a VPX pair in active-passive high availability mode in two ways by using: NetScaler ADC VPX standard high availability template: use this option to configure an HA pair with the default option of three subnets and six NICs. Windows PowerShell commands: use this option to configure an HA pair according to your subnet and NIC requirements. This section describes how to deploy a VPX pair in active-passive HA setup by using the NetScaler template. If you want to deploy with PowerShell commands, see Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands. Configure HA-INC Nodes by using the NetScaler High Availability Template Users can quickly and efficiently deploy a pair of VPX instances in HA-INC mode by using the standard template. The template creates two nodes, with three subnets and six NICs. The subnets are for management, client, and server-side traffic, and each subnet has two NICs for both of the VPX instances. Complete the following steps to launch the template and deploy a high availability VPX pair, by using Azure Availability Sets. From Azure Marketplace, select and initiate the NetScaler solution template. The template appears. Ensure deployment type is Resource Manager and select Create. The Basics page appears. Create a Resource Group and select OK. The General Settings page appears. Type the details and select OK. The Network Setting page appears. Check the VNet and subnet configurations, edit the required settings, and select OK. The Summary page appears. Review the configuration and edit accordingly. Select OK to confirm. The Buy page appears. 
Select Purchase to complete the deployment. It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group in the Azure portal to see the configuration details, such as LB rules, back-end pools, health probes. The high availability pair appears as ns-vpx0 and ns-vpx1. If further modifications are required for the HA setup, such as creating more security rules and ports, users can do that from the Azure portal. Next, users need to configure the load-balancing virtual server with the ALB’s Frontend public IP (PIP) address, on the primary node. To find the ALB PIP, select ALB > Frontend IP configuration. See the Resources section for more information about how to configure the load-balancing virtual server. Resources: The following links provide additional information related to HA deployment and virtual server (virtual server) configuration: Configuring High Availability Nodes in Different Subnets Set up Basic Load Balancing Related resources: Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands Configure GSLB on an Active-Standby High-Availability Setup High Availability using Availability Zones Azure Availability Zones are fault-isolated locations within an Azure region, providing redundant power, cooling, and networking and increasing resiliency. Only specific Azure regions support Availability Zones. For more information, see: Regions and Availability Zones in Azure. Users can deploy a VPX pair in high availability mode by using the template called “NetScaler 13.0 HA using Availability Zones,” available in Azure Marketplace. Complete the following steps to launch the template and deploy a high availability VPX pair, by using Azure Availability Zones. From Azure Marketplace, select and initiate the NetScaler solution template. Ensure deployment type is Resource Manager and select Create. The Basics page appears. Enter the details and click OK. Note: Ensure that an Azure region that supports Availability Zones is selected. For more information about regions that support Availability Zones, see: Regions and Availability Zones in Azure . The General Settings page appears. Type the details and select OK. The Network Setting page appears. Check the VNet and subnet configurations, edit the required settings, and select OK. The Summary page appears. Review the configuration and edit accordingly. Select OK to confirm. The Buy page appears. Select Purchase to complete the deployment. It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group to see the configuration details, such as LB rules, back-end pools, health probes, in the Azure portal. The high availability pair appears as ns-vpx0 and ns-vpx1. Also, users can see the location under the Location column. If further modifications are required for the HA setup, such as creating more security rules and ports, users can do that from the Azure portal. ARM (Azure Resource Manager) Templates The GitHub repository for NetScaler ADC ARM (Azure Resource Manager) templates hosts NetScaler ADC Azure Templates for deploying NetScaler ADC in Microsoft Azure Cloud Services. All templates in the repository are developed and maintained by the NetScaler ADC engineering team. Each template in this repository has co-located documentation describing the usage and architecture of the template. 
The templates attempt to codify the recommended deployment architecture of the NetScaler ADC VPX, to introduce the user to the NetScaler ADC, or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their particular production and testing needs. Most templates require a sufficient Azure subscription (portal.azure.com) to create resources and deploy templates. NetScaler ADC VPX Azure Resource Manager (ARM) templates are designed to provide an easy and consistent way of deploying standalone NetScaler ADC VPX. These templates increase reliability and system availability with built-in redundancy. The ARM templates support Bring Your Own License (BYOL) or hourly-based licensing; the choice is either mentioned in the template description or offered during template deployment. For more information on how to provision a NetScaler ADC VPX instance on Microsoft Azure using ARM (Azure Resource Manager) templates, visit NetScaler ADC Azure Templates.

NetScaler ADC GSLB and Domain Based Services Back-end Autoscale with Cloud Load Balancer

GSLB and DBS Overview

NetScaler ADC GSLB supports using DBS (Domain Based Services) for cloud load balancers. This allows for the auto-discovery of dynamic cloud services using a cloud load balancer solution. This configuration allows the NetScaler ADC to implement Global Server Load Balancing Domain-Name Based Services (GSLB DBS) in an Active-Active environment. DBS allows the scaling of back-end resources in Microsoft Azure environments through DNS discovery. This section covers the integration of NetScaler ADC with Azure Auto Scaling environments. The final section of the document details the ability to set up an HA pair of NetScaler ADCs that spans two different Availability Zones (AZs) within an Azure region.

Domain-Name Based Services – Azure ALB

GSLB DBS utilizes the FQDN of the user Azure Load Balancer to dynamically update the GSLB Service Groups to include the back-end servers that are being created and deleted within Azure. To configure this feature, users point the NetScaler ADC to their Azure Load Balancer to dynamically route to different servers in Azure. They can do this without having to manually update the NetScaler ADC every time an instance is created or deleted within Azure. The NetScaler ADC DBS feature for GSLB Service Groups uses DNS-aware service discovery to determine the member service resources of the DBS namespace identified in the Autoscale group.

Diagram: NetScaler ADC GSLB DBS Autoscale Components with Cloud Load Balancers

Configuring Azure Components

1. Log in to the user Azure Portal and create a new virtual machine from a NetScaler ADC template
2. Create an Azure Load Balancer
3. Add the created NetScaler ADC back-end pools
4. Create a Health Probe for port 80
5. Create a Load Balancing Rule utilizing the front-end IP created from the Load Balancer
    Protocol: TCP
    Backend Port: 80
    Backend pool: NetScaler ADC created in step 1
    Health Probe: Created in step 4
    Session Persistence: None

Configure NetScaler ADC GSLB Domain Based Service

The following configurations summarize what is required to enable domain-based services for autoscaling ADCs in a GSLB-enabled environment.

Traffic Management Configurations

Note: It is required to configure the NetScaler ADC with either a nameserver or a DNS virtual server through which the ELB/ALB domains are resolved for the DNS Service Groups.
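For example, a name server can be added from the NetScaler CLI as shown below. This is a minimal sketch: 168.63.129.16 (Azure's virtual public DNS IP) is used here only as an assumed resolver; any DNS server that is reachable from the appliance and can resolve the ALB FQDN works equally well.

add dns nameServer 168.63.129.16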
Navigate to Traffic Management > Load Balancing > Servers Click Add to create a server, provide a name and FQDN corresponding to the A record (domain name) in Azure for the Azure Load Balancer (ALB) Repeat step 2 to add the second ALB from the second resource in Azure. GSLB Configurations Click the Add button to configure a GSLB Site Name the Site. Type is configured as Remote or Local based on which NetScaler ADC users are configuring the site on. The Site IP Address is the IP address for the GSLB site. The GSLB site uses this IP address to communicate with the other GSLB sites. The Public IP address is required when using a cloud service where a particular IP is hosted on an external firewall or NAT device. The site should be configured as a Parent Site. Ensure the Trigger Monitors are set to ALWAYS. Also, be sure to check off the three boxes at the bottom for Metric Exchange, Network Metric Exchange, and Persistence Session Entry Exchange. NetScaler recommends that you set the Trigger monitor setting to MEPDOWN, please refer to: Configure a GSLB Service Group. Click Create, repeat steps 3 & 4 to configure the GSLB site for the other resource location in Azure (this can be configured on the same NetScaler ADC) Navigate to Traffic Management > GSLB > Service Groups Click Add to add a service group. Name the Service Group, use the HTTP protocol, and then under Site Name choose the respective site that was created in the previous steps. Be sure to configure autoscale Mode as DNS and check off the boxes for State and Health Monitoring. Click OK to create the Service Group. Click Service Group Members and select Server Based. Select the respective Elastic Load Balancing Server that was configured in the start of the run guide. Configure the traffic to go over port 80. Click Create. The Service group Member Binding should populate with 2 instances that it is receiving from the Elastic Load Balancer. Repeat steps 5 & 6 to configure the Service Group for the second resource location in Azure. (This can be done from the same NetScaler ADC GUI). The final step is to set up a GSLB Virtual Server. Navigate to Traffic Management > GSLB > Virtual Servers. Click Add to create the virtual server. Name the server, DNS Record Type is set as A, Service Type is set as HTTP, and check the boxes for Enable after Creating and AppFlow Logging. Click OK to create the GSLB Virtual Server. Once the GSLB Virtual Server is created, click No GSLB Virtual Server ServiceGroup Binding. Under ServiceGroup Binding use Select Service Group Name to select and add the Service Groups that were created in the previous steps. Next configure the GSLB Virtual Server Domain Binding by clicking No GSLB Virtual Server Domain Binding. Configure the FQDN and Bind, the rest of the settings can be left as the defaults. Configure the ADNS Service by clicking No Service. Add a Service Name, click New Server, and enter the IP Address of the ADNS server. Also, if the user ADNS is already configured users can select Existing Server and then choose the user ADNS from the drop-down menu. Make sure the Protocol is ADNS and the traffic is configured to flow over Port 53. Configure the Method as LEASTCONNECTION and the Backup Method as ROUNDROBIN. Click Done and verify that the user GSLB Virtual Server is shown as Up. 
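The GUI steps above can also be expressed on the NetScaler CLI. The following is a minimal sketch for one Azure resource location; the site name, site IP address, ALB FQDN, GSLB domain, and ADNS IP are placeholders and not part of the original configuration, and the site and service group commands would be repeated for the second resource location:

add gslb site site_azure_1 10.10.1.10
add server alb_1 myapp-alb-1.eastus.cloudapp.azure.com
add gslb serviceGroup sg_azure_1 HTTP -autoScale DNS -siteName site_azure_1
bind gslb serviceGroup sg_azure_1 alb_1 80
add gslb vserver gslb_vs HTTP
set gslb vserver gslb_vs -lbMethod LEASTCONNECTION -backupLBMethod ROUNDROBIN
bind gslb vserver gslb_vs -serviceGroupName sg_azure_1
bind gslb vserver gslb_vs -domainName app.example.com -TTL 30
add service adns_svc 10.10.1.10 ADNS 53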
NetScaler ADC Global Load Balancing for Hybrid and Multi-Cloud Deployments The NetScaler ADC hybrid and multi-cloud global load balancing (GLB) solution enables users to distribute application traffic across multiple data centers in hybrid clouds, multiple clouds, and on-premises deployments. The NetScaler ADC hybrid and multi-cloud GLB solution helps users to manage their load balancing setup in hybrid or multi-cloud without altering the existing setup. Also, if users have an on-premises setup, they can test some of their services in the cloud by using the NetScaler ADC hybrid and multi-cloud GLB solution before completely migrating to the cloud. For example, users can route only a small percentage of their traffic to the cloud, and handle most of the traffic on-premises. The NetScaler ADC hybrid and multi-cloud GLB solution also enables users to manage and monitor NetScaler ADC instances across geographic locations from a single, unified console. A hybrid and multi-cloud architecture can also improve overall enterprise performance by avoiding “vendor lock-in” and using different infrastructure to meet the needs of user partners and customers. With a multiple cloud architecture, users can manage their infrastructure costs better as they now have to pay only for what they use. Users can also scale their applications better as they now use the infrastructure on demand. It also lets you quickly switch from one cloud to another to take advantage of the best offerings of each provider. Architecture of the NetScaler ADC Hybrid and Multi-Cloud GLB Solution The following diagram illustrates the architecture of the NetScaler ADC hybrid and multi-cloud GLB feature. The NetScaler ADC GLB nodes handle the DNS name resolution. Any of these GLB nodes can receive DNS requests from any client location. The GLB node that receives the DNS request returns the load balancer virtual server IP address as selected by the configured load balancing method. Metrics (site, network, and persistence metrics) are exchanged between the GLB nodes using the metrics exchange protocol (MEP), which is a proprietary NetScaler protocol. For more information on the MEP protocol, see: Configure Metrics Exchange Protocol. The monitor configured in the GLB node monitors the health status of the load balancing virtual server in the same data center. In a parent-child topology, metrics between the GLB and NetScaler ADC nodes are exchanged by using MEP. However, configuring monitor probes between a GLB and NetScaler ADC LB node is optional in a parent-child topology. The NetScaler Application Delivery Management (ADM) service agent enables communication between the NetScaler ADM and the managed instances in your data center. For more information on NetScaler ADM service agents and how to install them, see: Getting Started. Note: This document makes the following assumptions: If users have an existing load balancing setup, it is up and running. A SNIP address or a GLB site IP address is configured on each of the NetScaler ADC GLB nodes. This IP address is used as the data center source IP address when exchanging metrics with other data centers. An ADNS or ADNS-TCP service is configured on each of the NetScaler ADC GLB instances to receive the DNS traffic. The required firewall and security groups are configured in the cloud service providers. SECURITY GROUPS CONFIGURATION Users must set up the required firewall/security groups configuration in the cloud service providers. 
For more information about AWS security features, see: AWS/Documentation/Amazon VPC/User Guide/Security. For more information about Microsoft Azure Network Security Groups, see: Azure/Networking/Virtual Network/Plan Virtual Networks/Security. In addition, on the GLB node, users must open port 53 for ADNS service/DNS server IP address and port 3009 for GSLB site IP address for MEP traffic exchange. On the load balancing node, users must open the appropriate ports to receive the application traffic. For example, users must open port 80 for receiving HTTP traffic and open port 443 for receiving HTTPS traffic. Open port 443 for NITRO communication between the NetScaler ADM service agent and NetScaler ADM. For the dynamic round trip time GLB method, users must open port 53 to allow UDP and TCP probes depending on the configured LDNS probe type. The UDP or the TCP probes are initiated using one of the SNIPs and therefore this setting must be done for security groups bound to the server-side subnet. Capabilities of the NetScaler ADC Hybrid and Multi-Cloud GLB Solution Some of the capabilities of the NetScaler ADC hybrid and multi-cloud GLB solution are described in this section: Compatibility with other Load Balancing Solutions The NetScaler ADC hybrid and multi-cloud GLB solution supports various load balancing solutions, such as the NetScaler ADC load balancer, NGINX, HAProxy, and other third-party load balancers. Note: Load balancing solutions other than NetScaler ADC are supported only if proximity-based and non-metric based GLB methods are used and if parent-child topology is not configured. GLB Methods The NetScaler ADC hybrid and multi-cloud GLB solution supports the following GLB methods. Metric-based GLB methods. Metric-based GLB methods collect metrics from the other NetScaler ADC nodes through the metrics exchange protocol. Least Connection: The client request is routed to the load balancer that has the fewest active connections. Least Bandwidth: The client request is routed to the load balancer that is currently serving the least amount of traffic. Least Packets: The client request is routed to the load balancer that has received the fewest packets in the last 14 seconds. Non-metric based GLB methods Round Robin: The client request is routed to the IP address of the load balancer that is at the top of the list of load balancers. That load balancer then moves to the bottom of the list. Source IP Hash: This method uses the hashed value of the client IP address to select a load balancer. Proximity-based GLB methods Static Proximity: The client request is routed to the load balancer that is closest to the client IP address. Round-Trip Time (RTT): This method uses the RTT value (the time delay in the connection between the client’s local DNS server and the data center) to select the IP address of the best performing load balancer. For more information on the load balancing methods, see: Load Balancing Algorithms. GLB Topologies The NetScaler ADC hybrid and multi-cloud GLB solution supports the active-passive topology and parent-child topology. Active-passive topology - Provides disaster recovery and ensures continuous availability of applications by protecting against points of failure. If the primary data center goes down, the passive data center becomes operational. For more information about GSLB active-passive topology, see: Configure GSLB for Disaster Recovery. 
Parent-child topology – Can be used if customers are using the metric-based GLB methods to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance. In a parent-child topology, the LB node (child site) must be a NetScaler ADC appliance because the exchange of metrics between the parent and child site is through the metrics exchange protocol (MEP). For more information about parent-child topology, see: Parent-Child Topology Deployment using the MEP Protocol. IPv6 Support The NetScaler ADC hybrid and multi-cloud GLB solution also supports IPv6. Monitoring The NetScaler ADC hybrid and multi-cloud GLB solution supports built-in monitors with an option to enable the secure connection. However, if LB and GLB configurations are on the same NetScaler ADC instance or if parent-child topology is used, configuring monitors is optional. Persistence The NetScaler ADC hybrid and multi-cloud GLB solution supports the following: Source IP based persistence sessions, so that multiple requests from the same client are directed to the same service if they arrive within the configured time-out window. If the time-out value expires before the client sends another request, the session is discarded, and the configured load balancing algorithm is used to select a new server for the client’s next request. Spillover persistence so that the backup virtual server continues to process the requests it receives, even after the load on the primary falls below the threshold. For more information, see: Configure Spillover. Site persistence so that the GLB node selects a data center to process a client request and forwards the IP address of the selected data center for all subsequent DNS requests. If the configured persistence applies to a site that is DOWN, the GLB node uses a GLB method to select a new site, and the new site becomes persistent for subsequent requests from the client. Configuration by using the NetScaler ADM StyleBooks Customers can use the default Multi-cloud GLB StyleBook on NetScaler ADM to configure the NetScaler ADC instances with hybrid and multi-cloud GLB configuration. Customers can use the default Multi-cloud GLB StyleBook for LB Node StyleBook to configure the NetScaler ADC load balancing nodes which are the child sites in a parent-child topology that handle the application traffic. Use this StyleBook only if users want to configure LB nodes in a parent-child topology. However, each LB node must be configured separately using this StyleBook. Workflow of the NetScaler ADC Hybrid and Multi-Cloud GLB Solution Configuration Customers can use the shipped Multi-cloud GLB StyleBook on NetScaler ADM to configure the NetScaler ADC instances with hybrid and multi-cloud GLB configuration. The following diagram shows the workflow for configuring the NetScaler ADC hybrid and multi-cloud GLB solution. The steps in the workflow diagram are explained in more detail after the diagram. PNG 19 Perform the following tasks as a cloud administrator: Sign up for a Citrix Cloud account. To start using NetScaler ADM, create a Citrix Cloud company account or join an existing one that has been created by someone in your company. After users log on to Citrix Cloud, click Manage on the NetScaler Application Delivery Management tile to set up the ADM service for the first time. Download and install multiple NetScaler ADM service agents. 
Users must install and configure the NetScaler ADM service agent in their network environment to enable communication between the NetScaler ADM and the managed instances in their data center or cloud. Install an agent in each region, so that they can configure LB and GLB configurations on the managed instances. The LB and GLB configurations can share a single agent. For more information on the above three tasks, see: Getting Started. Deploy load balancers on Microsoft Azure/AWS cloud/on-premises data centers. Depending on the type of load balancers that users are deploying on cloud and on-premises, provision them accordingly. For example, users can provision NetScaler ADC VPX instances in a Microsoft Azure Resource Manager (ARM) portal, in an Amazon Web Services (AWS) virtual private cloud and in on-premises data centers. Configure NetScaler ADC instances to function as LB or GLB nodes in standalone mode, by creating the virtual machines and configuring other resources. For more information on how to deploy NetScaler ADC VPX instances, see the following documents: NetScaler ADC VPX on AWS. Configure a NetScaler VPX Standalone Instance. Perform security configurations. Configure network security groups and network ACLs in ARM and AWS to control inbound and outbound traffic for user instances and subnets. Add NetScaler ADC instances in NetScaler ADM. NetScaler ADC instances are network appliances or virtual appliances that users want to discover, manage, and monitor from NetScaler ADM. To manage and monitor these instances, users must add the instances to the service and register both LB (if users are using NetScaler ADC for LB) and GLB instances. For more information on how to add NetScaler ADC instances in the NetScaler ADM, see: Getting Started. Implement the GLB and LB configurations using default NetScaler ADM StyleBooks. Use Multi-cloud GLB StyleBook to execute the GLB configuration on the selected GLB NetScaler ADC instances. Implement the load balancing configuration. (Users can skip this step if they already have LB configurations on the managed instances.) Users can configure load balancers on NetScaler ADC instances in one of two ways: Manually configure the instances for load balancing the applications. For more information on how to manually configure the instances, see: Set up Basic Load Balancing. Use StyleBooks. Users can use one of the NetScaler ADM StyleBooks (HTTP/SSL Load Balancing StyleBook or HTTP/SSL Load Balancing (with Monitors) StyleBook) to create the load balancer configuration on the selected NetScaler ADC instance. Users can also create their own StyleBooks. For more information on StyleBooks, see: StyleBooks. Use Multi-cloud GLB StyleBook for LB Node to configure GLB parent-child topology in any of the following cases: If users are using the metric-based GLB algorithms (Least Packets, Least Connections, Least Bandwidth) to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance If site persistence is required Using StyleBooks to Configure GLB on NetScaler ADC LB Nodes Customers can use the Multi-cloud GLB StyleBook for LB Node if they are using the metric-based GLB algorithms (Least Packets, Least Connections, Least Bandwidth) to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance. Users can also use this StyleBook to configure more child sites for an existing parent site. This StyleBook configures one child site at a time. 
So, create as many configurations (config packs) from this StyleBook as there are child sites. The StyleBook applies the GLB configuration on the child sites. Users can configure a maximum of 1024 child sites. Note: Use Multi-cloud GLB StyleBook found here: Using StyleBooks to Configure GLB to configure the parent sites. This StyleBook makes the following assumptions: A SNIP address or a GLB site IP address is configured. The required firewall and security groups are configured in the cloud service providers. Configuring a Child Site in a Parent-Child Topology by using Multi-cloud GLB StyleBook for LB Node Navigate to Applications > Configuration, and click Create New. The Choose StyleBook page displays all the StyleBooks available for customer use in NetScaler Application Delivery Management (ADM). Scroll down and select Multi-cloud GLB StyleBook for LB Node. The StyleBook appears as a user interface page on which users can enter the values for all the parameters defined in this StyleBook. Note: The terms data center and sites are used interchangeably in this document. Set the following parameters: Application Name. Enter the name of the GLB application deployed on the GLB sites for which you want to create child sites. Protocol. Select the application protocol of the deployed application from the drop-down list box. LB Health Check (Optional) Health Check Type. From the drop-down list box, select the type of probe used for checking the health of the load balancer VIP address that represents the application on a site. Secure Mode. (Optional) Select Yes to enable this parameter if SSL based health checks are required. HTTP Request. (Optional) If users selected HTTP as the health-check type, enter the full HTTP request used to probe the VIP address. List of HTTP Status Response Codes. (Optional) If users selected HTTP as the health check type, enter the list of HTTP status codes expected in responses to HTTP requests when the VIP is healthy. Configuring parent site. Provide the details of the parent site (GLB node) under which you want to create the child site (LB node). Site Name. Enter the name of the parent site. Site IP Address. Enter the IP address that the parent site uses as its source IP address when exchanging metrics with other sites. This IP address is assumed to be already configured on the GLB node in each site. Site Public IP Address. (Optional) Enter the Public IP address of the parent site that is used to exchange metrics, if that site’s IP address is NAT’ed. Configuring child site. Provide the details of the child site. Site name. Enter the name of the site. Site IP Address. Enter the IP address of the child site. Here, use the private IP address or SNIP of the NetScaler ADC node that is being configured as a child site. Site Public IP Address. (Optional) Enter the Public IP address of the child site that is used to exchange metrics, if that site’s IP address is NAT’ed. Configuring active GLB services (optional) Configure active GLB services only if the LB virtual server IP address is not a public IP address. This section allows users to configure the list of local GLB services on the sites where the application is deployed. Service IP. Enter the IP address of the load balancing virtual server on this site. Service Public IP Address. If the virtual IP address is private and has a public IP address NAT’ed to it, specify the public IP address. Service Port. Enter the port of the GLB service on this site. Site Name. 
Enter the name of the site on which the GLB service is located. Click Target Instances and select the NetScaler ADC instances configured as GLB instances on each site on which to deploy the GLB configuration. Click Create to create the LB configuration on the selected NetScaler ADC instance (LB node). Users can also click Dry Run to check the objects that would be created in the target instances. The StyleBook configuration that users have created appears in the list of configurations on the Configurations page. Users can examine, update, or remove this configuration by using the NetScaler ADM GUI. For more information on how to deploy a NetScaler ADC VPX instance on Microsoft Azure, see Deploy a NetScaler ADC VPX Instance on Microsoft Azure. For more information on how a NetScaler ADC VPX instance works on Azure, visit How a NetScaler ADC VPX Instance Works on Azure. For more information on how to configure GSLB on NetScaler ADC VPX instances, see Configure GSLB on NetScaler ADC VPX Instances. For more information on how to configure GSLB on an active-standby high-availability setup on Azure, visit Configure GSLB on an Active-Standby High-Availability Setup.
Prerequisites
Users need some prerequisite knowledge before deploying a NetScaler VPX instance on Azure: Familiarity with Azure terminology and network details. For information, see the Azure terminology in the previous section. Knowledge of a NetScaler ADC appliance. For detailed information about the NetScaler ADC appliance, see: NetScaler ADC 13.0. For knowledge of NetScaler ADC networking, see the Networking topic: Networking.
Azure GSLB Prerequisites
The prerequisites for the NetScaler ADC GSLB Service Groups include a functioning Amazon Web Services / Microsoft Azure environment with the knowledge and ability to configure Security Groups, Linux Web Servers, NetScaler ADCs within AWS, Elastic IPs, and Elastic Load Balancers. GSLB DBS Service integration requires NetScaler ADC version 12.0.57 for AWS ELB and Microsoft Azure ALB load balancer instances.
NetScaler ADC GSLB Service Group Feature Enhancements
GSLB Service Group entity: In NetScaler ADC version 12.0.57, the GSLB Service Group is introduced, which supports autoscale using DBS dynamic discovery. DBS Feature Components (domain-based service) must be bound to the GSLB service group. Example:
> add server sydney_server LB-Sydney-xxxxxxxxxx.ap-southeast-2.elb.amazonaws.com
> add gslb serviceGroup sydney_sg HTTP -autoScale DNS -siteName sydney
> bind gslb serviceGroup sydney_sg sydney_server 80
Limitations
Running the NetScaler ADC VPX load balancing solution on ARM imposes the following limitations: The Azure architecture does not accommodate support for the following NetScaler ADC features: Clustering, IPv6, Gratuitous ARP (GARP), L2 Mode (bridging; transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP), Tagged VLAN, Dynamic Routing, Virtual MAC, USIP, and Jumbo Frames. If you think you might need to shut down and temporarily deallocate the NetScaler ADC VPX virtual machine at any time, assign a static Internal IP address while creating the virtual machine. If you do not assign a static internal IP address, Azure might assign the virtual machine a different IP address each time it restarts, and the virtual machine might become inaccessible. In an Azure deployment, only the following NetScaler ADC VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler ADC VPX Data Sheet.
If a NetScaler ADC VPX instance with a model number higher than VPX 3000 is used, the network throughput might not be the same as specified by the instance’s license. However, other features, such as SSL throughput and SSL transactions per second, might improve. The “deployment ID” that Azure generates during virtual machine provisioning is not visible to the user in ARM. Users cannot use the deployment ID to deploy a NetScaler ADC VPX appliance on ARM. The NetScaler ADC VPX instance supports 20 Mb/s throughput and standard edition features when it is initialized. For a XenApp and XenDesktop deployment, a VPN virtual server on a VPX instance can be configured in the following modes: Basic mode, where the ICAOnly VPN virtual server parameter is set to ON. The Basic mode works fully on an unlicensed NetScaler ADC VPX instance. SmartAccess mode, where the ICAOnly VPN virtual server parameter is set to OFF. The SmartAccess mode works for only 5 NetScaler ADC AAA session users on an unlicensed NetScaler ADC VPX instance. Note: To configure the Smart Control feature, users must apply a Premium license to the NetScaler ADC VPX instance.
Azure-VPX Supported Models and Licensing
In an Azure deployment, only the following NetScaler ADC VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler ADC VPX Data Sheet. A NetScaler ADC VPX instance on Azure requires a license. The following licensing options are available for NetScaler ADC VPX instances running on Azure. Subscription-based licensing: NetScaler ADC VPX appliances are available as paid instances on Azure Marketplace. Subscription-based licensing is a pay-as-you-go option. Users are charged hourly. The following VPX models and license types are available on Azure Marketplace:
VPX10: Standard, Advanced, Premium
VPX200: Standard, Advanced, Premium
VPX1000: Standard, Advanced, Premium
VPX3000: Standard, Advanced, Premium
Bring your own license (BYOL): If users bring their own license (BYOL), they should see the VPX Licensing Guide at: CTX122426/NetScaler VPX and CloudBridge VPX Licensing Guide. Users have to: Use the licensing portal within MyCitrix to generate a valid license. Upload the license to the instance. NetScaler ADC VPX Check-In/Check-Out licensing: For more information, see: NetScaler ADC VPX Check-in and Check-out Licensing. Starting with NetScaler release 12.0 56.20, VPX Express for on-premises and cloud deployments does not require a license file. For more information on NetScaler ADC VPX Express, see the “NetScaler ADC VPX Express license” section in the NetScaler ADC licensing overview, which can be found here: Licensing Overview. Note: Regardless of the subscription-based hourly license bought from Azure Marketplace, in rare cases, the NetScaler ADC VPX instance deployed on Azure might come up with a default NetScaler license. This happens due to issues with the Azure Instance Metadata Service (IMDS). Perform a warm restart before making any configuration change on the NetScaler ADC VPX instance, to enable the correct NetScaler ADC VPX license.
Port Usage Guidelines
Users can configure more inbound and outbound rules in the NSG while creating the NetScaler VPX instance or after the virtual machine is provisioned. Each inbound and outbound rule is associated with a public port and a private port. Before configuring NSG rules, note the following guidelines regarding the port numbers users can use: The NetScaler VPX instance reserves the following ports.
Users cannot define these as private ports when using the Public IP address for requests from the internet. Ports 21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000. However, if users want internet-facing services such as the VIP to use a standard port (for example, port 443) users have to create port mapping by using the NSG. The standard port is then mapped to a different port that is configured on the NetScaler ADC VPX for this VIP service. For example, a VIP service might be running on port 8443 on the VPX instance but be mapped to public port 443. So, when the user accesses port 443 through the Public IP, the request is directed to private port 8443. The Public IP address does not support protocols in which port mapping is opened dynamically, such as passive FTP or ALG. In a NetScaler Gateway deployment, users need not configure a SNIP address, because the NSIP can be used as a SNIP when no SNIP is configured. Users must configure the VIP address by using the NSIP address and some nonstandard port number. For call-back configuration on the back-end server, the VIP port number has to be specified along with the VIP URL (for example, url: port). Note: In Azure Resource Manager, a NetScaler ADC VPX instance is associated with two IP addresses - a public IP address (PIP) and an internal IP address. While the external traffic connects to the PIP, the internal IP address or the NSIP is non-routable. To configure a VIP in VPX, use the internal IP address (NSIP) and any of the free ports available. Do not use the PIP to configure a VIP. For example, if NSIP of a NetScaler ADC VPX instance is 10.1.0.3 and an available free port is 10022, then users can configure a VIP by providing the 10.1.0.3:10022 (NSIP address + port) combination.
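To make the port-mapping example above concrete, the following is a minimal NetScaler CLI sketch; the virtual server name, service name, and back-end address (web_vip, web_svc, 10.1.1.10) are illustrative placeholders, not values from this guide:
> add lb vserver web_vip HTTP 10.1.0.3 10022
> add service web_svc 10.1.1.10 HTTP 80
> bind lb vserver web_vip web_svc
An inbound NSG rule can then map a standard public port (for example, 443) to private port 10022, so that client requests arriving at the Public IP on port 443 are directed to the virtual server.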
11. Introduction
This document provides an overview of the steps, tools, architecture, and considerations for migrating Citrix ADC traffic management and security solutions to the new Citrix App Delivery and Security (CADS) service. This guide is intended for technical engineering and architectural teams who want to migrate applications to AWS. The scope of this guide is limited to Citrix ADC hardware or software-based appliances on product version 13 and later.
What is CADS Service - Citrix Managed?
CADS service – Citrix Managed is a new SaaS offering for application delivery and security. The Citrix App Delivery and Security service removes the complexity from every step of app delivery, including provisioning, securing, on-boarding, and management, empowering IT to deliver a superior experience that keeps users engaged and productive.
Getting Started
There are four key steps for migrating to the new CADS service: Deployment models - Evaluate the current deployment, assess how your applications fit together, and design the architecture for the AWS environment. Use cases and feature mapping - Develop a high-level plan for your migration and make key decisions about what to migrate. Licensing – Identify the right CADS service – Citrix Managed entitlement by converting the current ADC capacity. Traffic flow - Migrate your application users’ traffic to the new site. Follow the Getting Started Guide.
Deployment Models
Customers have designed their application architecture based on requirements such as specific feature needs, performance, high availability, compliance, and so on. When you migrate applications and their associated dependencies to AWS, there is no standard approach. The following table provides an overview of the common use cases for different applications and ADC workloads that are migrated to CADS service – Citrix Managed.
Application Type: Development/Testing/PoC web app with temporary capacity needs. Use Case: Web application utilizing the SSL-offload, load balancing, and content switching capabilities of Citrix ADC. Suggested Action: Depending on the required location of the datacenter, create an environment as described here. Use the CADS service Modern App delivery workflow to deploy your application as documented here. A Trial License can be used; for more details, see the Licensing section.
Application Type: Custom/Commercial, external facing application to be deployed across multiple Availability Zones, high availability (HA). Use Case: You either plan to expand a datacenter or run a mix of self-managed and Citrix managed CADS services. You might have integrated Citrix Application Delivery Controller (ADC) as part of the application’s logic and require the same logic to be ported to CADS. Suggested Action: You can leverage the Cloud Recommendation engine to determine the optimal site location for the application. For details click here. Depending on the required location of the new datacenter, choose multiple availability zones for the region while you create an environment as described here. Review current Citrix ADC configurations (ns.conf) and break them down into the application components that need to be migrated. You can use the app migration workflow as described here. You can refer to the feature mapping in Figure 2 to decide on the modern app workflow or migration.
Application Type: External application across multiple Regions, high availability (HA) with DNS / GSLB. Use Case: Expand application presence globally with the help of the global server load-balancing capability of CADS. Suggested Action: Based on the feature usage, you can either choose the Modern App or Migration (Classic App) workflow for application deployment. Once the applications are deployed in the desired region and availability zones, you can use Multi-Site application delivery to create a GSLBaaS solution with CADS as described here.
Application Type: Internal application across multiple Availability Zones, high availability (HA) but no DNS / GSLB. Use Case: Deploy the application for internal users only. Suggested Action: In the Application creation workflow, while creating endpoints, ensure you select Internal for Access type. This ensures no public IP association is configured for your application.
Application Type: Applications with high compliance or security-related requirements, WAF or IDS/IPS applications. Use Case: These applications require advanced security features such as signatures, bot protections, deep and complex WAF rule sets, and protection from the OWASP Top 10. Suggested Action: You need a CADS Premium license to use these features. Ensure you enable the desired security protection features for your application deployment as described here.
Application Type: Cloud Native applications. Use Case: Use CADS to deploy an application as an Ingress controller to manage and route traffic into your Kubernetes cluster. Suggested Action: Not supported with CADS. However, you can use CADS as the first (relatively static) tier of load balancing to an existing second tier of Citrix ADC CPX.
Use Cases and Feature Mapping
There are many aspects of migration that need to be considered, but before beginning your Citrix ADC workload migration, the following assessments help clarify the migration process. Application and the associated feature dependency to migrate: Assess whether the entire application is moving or only the web (UI) tier. You should also consider additional dependencies around features like the use of caching, compression, authentication, security, and more. Your evaluation needs to determine what would be required from the network topology. Reasons for application migration: You might be migrating your application because you are decommissioning your on-prem datacenter, because you want more elasticity, or because you are creating a disaster recovery site. Assess whether the application is migrating to have a per-application architecture, compared to the shared monolithic patterns common in many datacenters. Destination of the migration: Assess if the application needs to move to a single VPC with one Availability Zone or two Availability Zones. Determine the peer or transit VPC topology, along with the need for multi-Region deployments. These will impact the migration pattern design. You can refer to Deployment types and the Datasheet for the full set of supported features with CADS service – Citrix Managed. The following flow chart in Figure 2 shows the feature list for Modern and Classic App. You can start with the Modern App decision flow and check if all the required functionalities are addressed. If not, then you can validate the Classic App flow.
Licensing
The Citrix App Delivery and Security Service license is based on flexible consumption-based metering, where your applications automatically consume capacity from available entitlements. You get full architectural flexibility to deploy what you need when you need it. Details of the licensing entitlements are available here. The following calculation can be used to determine the consumption.
If your application serves an average throughput of 250 Mbps over the year, then the annual data usage can be calculated as follows.
Average application throughput per year (T) = 250 Mbps
Data usage per second (d) = T x 0.125, i.e. 250 x 0.125 = 31.25 MB per second
Total data usage in TB per year = (d x 365 x 24 x 3600) / 1048576, i.e. (31.25 x 365 x 24 x 3600) / 1048576 = 939.85 TB
For a data usage of ~1000 TB, the preferred license entitlement is Advanced or Premium 1200 TB bandwidth + 100 million DNS queries.
Traffic Flow
With applications deployed with CADS service – Citrix Managed, the final step is to migrate the application traffic from an existing datacenter. For this, use Multi-site application delivery and define the existing site and the new Citrix Managed site. For traffic migration, use weighted Round-Robin as the algorithm. Configure the weights in a 90 (existing site) : 10 (new Citrix Managed site) ratio. Weights are proportional: 90% of the traffic is received by the existing site and 10% by the Citrix Managed site. You can alter this to control the traffic proportions to your datacenters. Finally, perform application tests and complete the migration process with 100% of the traffic going to the Citrix Managed site.
Summary
Following the above pattern enables admins to migrate applications delivered and secured by an ADC to CADS service - Citrix Managed.
  12. Deployment Guide NetScaler ADC VPX on AWS - GSLB Overview NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments. As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required. NetScaler VPX The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms: XenServer Hypervisor VMware ESX Microsoft Hyper-V Linux KVM Amazon Web Services Microsoft Azure Google Cloud Platform This deployment guide focuses on NetScaler ADC VPX on Amazon Web Services. Amazon Web Services Amazon Web Services (AWS) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS) and packaged software as a service (SaaS) offerings. AWS services can offer tools such as compute power, database storage, and content delivery services. AWS offers the following essential services AWS Compute Services Migration Services Storage Database Services Management Tools Security Services Analytics Networking Messaging Developer Tools Mobile Services AWS Terminology Here is a brief description of essential terms used in this document that users must be familiar with: Elastic Network Interface (ENI) - A virtual network interface that users can attach to an instance in a Virtual Private Cloud (VPC). Elastic IP (EIP) address - A static, public IPv4 address that users have allocated in Amazon EC2 or Amazon VPC and then attached to an instance. Elastic IP addresses are associated with user accounts, not a specific instance. They are elastic because users can easily allocate, attach, detach, and free them as their needs change. Subnet - A segment of the IP address range of a VPC with which EC2 instances can be attached. Users can create subnets to group instances according to security and operational needs. Virtual Private Cloud (VPC) - A web service for provisioning a logically isolated section of the AWS cloud where users can launch AWS resources in a virtual network that they define. Here is a brief description of other terms used in this document that users should be familiar with: Amazon Machine Image (AMI) - A machine image, which provides the information required to launch an instance, which is a virtual server in the cloud. Elastic Block Store - Provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Simple Storage Service (S3) - Storage for the Internet. 
It is designed to make web-scale computing easier for developers. Elastic Compute Cloud (EC2) - A web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Elastic Load Balancing (ELB) - Distributes incoming application traffic across multiple EC2 instances, in multiple Availability Zones. This increases the fault tolerance of user applications. Instance type - Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give users the flexibility to choose the appropriate mix of resources for their applications. Identity and Access Management (IAM) - An AWS identity with permission policies that determine what the identity can and cannot do in AWS. Users can use an IAM role to enable applications running on an EC2 instance to securely access their AWS resources. IAM role is required for deploying VPX instances in a high-availability setup. Internet Gateway - Connects a network to the Internet. Users can route traffic for IP addresses outside their VPC to the Internet gateway. Key pair - A set of security credentials with which users prove their identity electronically. A key pair consists of a private key and a public key. Route table - A set of routing rules that controls the traffic leaving any subnet that is associated with the route table. Users can associate multiple subnets with a single route table, but a subnet can be associated with only one route table at a time. Auto Scaling - A web service to launch or terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks. CloudFormation - A service for writing or changing templates that create and delete related AWS resources together as a unit. Use Cases Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on AWS combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the AWS Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same, powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on AWS enables several compelling use cases that not only support the immediate needs of today’s enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers. Global Server Load Balancing (GSLB) Global Server Load Balancing (GSLB) is important for many of our customers. Those businesses have an on-prem data center presence serving regional customers, but with increasing demand for their business, they now want to scale and deploy their presence globally across AWS and Azure while maintaining their on-prem presence for regional customers. Customers want to do all of this with automated configurations as well. Thus, they are looking for a solution that can rapidly adapt to either evolving business needs or changes in the global market. With NetScaler ADC on the network administrator’s side, customers can use the Global Load Balancing (GLB) StyleBook to configure applications both on-prem and in the cloud, and that same config can be transferred to the cloud with NetScaler ADM. 
Users can reach either on-prem or cloud resources depending on proximity with GSLB. This allows for a seamless experience no matter where the users are located in the world. Deployment Types Three-NIC Deployment Typical Deployments GLB StyleBook With ADM With GSLB (Route53 w/domain registration) Licensing - Pooled/Marketplace Use Cases Three-NIC Deployments are used to achieve real isolation of data and management traffic. Three-NIC Deployments also improve the scale and performance of the ADC. Three-NIC Deployments are used in network applications where throughput is typically 1 Gbps or higher and a Three-NIC Deployment is recommended. CFT Deployment Customers would deploy using CloudFormation Templates if they are customizing their deployments or they are automating their deployments. Deployment Steps Three-NIC Deployment for GSLB The NetScaler ADC VPX instance is available as an Amazon Machine Image (AMI) in the AWS marketplace, and it can be launched as an Elastic Compute Cloud (EC2) instance within an AWS VPC. The minimum EC2 instance type allowed as a supported AMI on NetScaler VPX is m4.large. The NetScaler ADC VPX AMI instance requires a minimum of 2 virtual CPUs and 2 GB of memory. An EC2 instance launched within an AWS VPC can also provide the multiple interfaces, multiple IP addresses per interface, and public and private IP addresses needed for VPX configuration. Each VPX instance requires at least three IP subnets: A management subnet A client-facing subnet (VIP) A back-end facing subnet (SNIP) Citrix recommends three network interfaces for a standard VPX instance on AWS installation. AWS currently makes multi-IP functionality available only to instances running within an AWS VPC. A VPX instance in a VPC can be used to load balance servers running in EC2 instances. An Amazon VPC allows users to create and control a virtual networking environment, including their own IP address range, subnets, route tables, and network gateways. Note: By default, users can create up to 5 VPC instances per AWS region for each AWS account. Users can request higher VPC limits by submitting Amazon’s request form here: Amazon VPC Request . Licensing A NetScaler ADC VPX instance on AWS requires a license. The following licensing options are available for NetScaler ADC VPX instances running on AWS: Free (unlimited) Hourly Annual Bring your own license Free Trial (all NetScaler ADC VPX-AWS subscription offerings for 21 days free in AWS marketplace). Deployment Options Users can deploy a NetScaler ADC VPX standalone instance on AWS by using the following options: AWS web console Citrix-authored CloudFormation template AWS CLI Three-NIC Deployment Steps Users can deploy a NetScaler ADC VPX instance on AWS through the AWS web console. The deployment process includes the following steps: Create a Key Pair Create a Virtual Private Cloud (VPC) Add more subnets Create security groups and security rules Add route tables Create an internet gateway Create a NetScaler ADC VPX instance Create and attach more network interfaces Attach elastic IPs to the management NIC Connect to the VPX instance Create a Key Pair Amazon EC2 uses a key pair to encrypt and decrypt logon information. To log on to an instance, users must create a key pair, specify the name of the key pair when they launch the instance, and provide the private key when they connect to the instance. When users review and launch an instance by using the AWS Launch Instance wizard, they are prompted to use an existing key pair or create a new key pair. 
For more information about how to create a key pair, see: Amazon EC2 Key Pairs and Linux Instances.
Create a VPC
A NetScaler ADC VPX instance is deployed inside an AWS VPC. A VPC allows users to define virtual networks dedicated to their AWS account. For more information about AWS VPC, see: Getting Started With IPv4 for Amazon VPC. While creating a VPC for a NetScaler ADC VPX instance, keep the following points in mind. Use the VPC with a Single Public Subnet Only option to create an AWS VPC in an AWS availability zone. Citrix recommends that users create at least three subnets, of the following types: One subnet for management traffic. Place the management IP (NSIP) on this subnet. By default, elastic network interface (ENI) eth0 is used for the management IP. One or more subnets for client-access (user-to-NetScaler ADC VPX) traffic, through which clients connect to one or more virtual IP (VIP) addresses assigned to NetScaler ADC load balancing virtual servers. One or more subnets for the server-access (VPX-to-server) traffic, through which user servers connect to VPX-owned subnet IP (SNIP) addresses. For more information about NetScaler ADC load balancing, virtual servers, virtual IP addresses (VIPs), and subnet IP addresses (SNIPs), see the NetScaler ADC documentation. All subnets must be in the same availability zone.
Add Subnets
When the VPC wizard is used for deployment, only one subnet is created. Depending on user requirements, users may want to create more subnets. For more information about how to create more subnets, see: VPCs and Subnets.
Create Security Groups and Security Rules
To control inbound and outbound traffic, create security groups and add rules to the groups. For more information about how to create groups and add rules, see: Security Groups for Your VPC. For NetScaler ADC VPX instances, the EC2 wizard gives default security groups, which are generated by AWS Marketplace and are based on settings recommended by Citrix. However, users can create more security groups based on their requirements. Note: Ports 22, 80, and 443 must be opened on the security group for SSH, HTTP, and HTTPS access, respectively.
Add Route Tables
Route tables contain a set of rules, called routes, that are used to determine where network traffic is directed. Each subnet in a VPC must be associated with a route table. For more information about how to create a route table, see: Route Tables.
Create an Internet Gateway
An internet gateway serves two purposes: to provide a target in the VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. Create an internet gateway for internet traffic. For more information about how to create an Internet Gateway, see the section: Creating and Attaching an Internet Gateway.
Create a NetScaler ADC VPX Instance by using the AWS EC2 Service
To create a NetScaler ADC VPX instance by using the AWS EC2 service, complete the following steps: From the AWS dashboard, go to Compute > EC2 > Launch Instance > AWS Marketplace. Before clicking Launch Instance, users should ensure their region is correct by checking the note that appears under Launch Instance. In the Search AWS Marketplace bar, search with the keyword NetScaler ADC VPX. Select the version the user wants to deploy and then click Select. For the NetScaler ADC VPX version, users have the following options: A licensed version; NetScaler ADC VPX Express appliance (This is a free virtual appliance, which is available from NetScaler ADC version 12.0 56.20.);
Bring your own device.
The Launch Instance wizard starts. Follow the wizard to create an instance. The wizard prompts users to: Choose Instance Type, Configure Instance, Add Storage, Add Tags, Configure Security Group, and Review.
Create and Attach more Network Interfaces
Create two more network interfaces for the VIP and SNIP. For more information about how to create more network interfaces, see: Creating a Network Interface. After users have created the network interfaces, they must attach the interfaces to the VPX instance. Before attaching the interfaces, shut down the VPX instance, attach the interfaces, and power on the instance. For more information about how to attach network interfaces, see the section: Attaching a Network Interface When Launching an Instance.
Allocate and Associate Elastic IPs
If users assign a public IP address to an EC2 instance, it remains assigned only until the instance is stopped. After that, the address is released back to the pool. When users restart the instance, a new public IP address is assigned. In contrast, an elastic IP (EIP) address remains assigned until the address is disassociated from an instance. Allocate and associate an elastic IP for the management NIC. For more information about how to allocate and associate elastic IP addresses, see these topics: Allocating an Elastic IP Address and Associating an Elastic IP Address with a Running Instance. These steps complete the procedure to create a NetScaler ADC VPX instance on AWS. It can take a few minutes for the instance to be ready. Check that the instance has passed its status checks. Users can view this information in the Status Checks column on the Instances page.
Connect to the VPX Instance
After users have created the VPX instance, users can connect to the instance by using the GUI and an SSH client. GUI: The following are the default administrator credentials to access a NetScaler ADC VPX instance: User name: nsroot. Password: The default password for the nsroot account is set to the AWS instance-ID of the NetScaler ADC VPX instance. SSH client: From the AWS management console, select the NetScaler ADC VPX instance and click Connect. Follow the instructions given on the Connect to Your Instance page. For more information about how to deploy a NetScaler ADC VPX standalone instance on AWS by using the AWS web console, see: Scenario: Standalone Instance.
Configure GSLB in two AWS Locations
Setting up GSLB for the NetScaler ADC on AWS basically consists of configuring the NetScaler ADC to load balance traffic to servers located outside the VPC that the NetScaler ADC belongs to, such as within another VPC in a different Availability Region or an on-premises data center.
Domain-Name Based Services (GSLB DBS) with Cloud Load Balancers
GSLB and DBS Overview
NetScaler ADC GSLB support using DBS (Domain Based Services) for cloud load balancers allows for the automatic discovery of dynamic cloud services using a cloud load balancer solution. This configuration allows the NetScaler ADC to implement Global Server Load Balancing Domain-Name Based Services (GSLB DBS) in an Active-Active environment. DBS allows the scaling of back-end resources in AWS environments through DNS discovery. This section covers the integration of the NetScaler ADC with AWS AutoScaling environments. The final section of the document details the ability to set up an HA pair of NetScaler ADCs that span two different Availability Zones (AZs) specific to an AWS region.
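The sections that follow build this configuration through the GUI. For orientation, here is a minimal, illustrative NetScaler CLI sketch of the objects involved; the DNS server address, site name, IP addresses, ELB FQDN, and application domain (10.10.0.2, site_a, 10.10.1.10, 203.0.113.10, my-elb-a.us-east-1.elb.amazonaws.com, app.example.com) are placeholder values and not taken from this guide:
> add dns nameServer 10.10.0.2
> add gslb site site_a 10.10.1.10 -publicIP 203.0.113.10
> add service adns_svc 10.10.1.10 ADNS 53
> add server elb_a my-elb-a.us-east-1.elb.amazonaws.com
> add gslb serviceGroup sg_a HTTP -autoScale DNS -siteName site_a
> bind gslb serviceGroup sg_a elb_a 80
> add gslb vserver gslb_vs HTTP
> bind gslb vserver gslb_vs -serviceGroupName sg_a
> bind gslb vserver gslb_vs -domainName app.example.com -TTL 5
The same pattern is repeated for the second AWS resource location, as shown step by step in the GUI walkthrough below.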
NetScaler ADC GSLB Service Group Feature Enhancements
GSLB Service Group entity: In NetScaler ADC version 12.0.57, the GSLB Service Group is introduced, which supports autoscale using DBS dynamic discovery. DBS Feature Components (domain-based service) must be bound to the GSLB service group. Example:
> add server sydney_server LB-Sydney-xxxxxxxxxx.ap-southeast-2.elb.amazonaws.com
> add gslb serviceGroup sydney_sg HTTP -autoScale DNS -siteName sydney
> bind gslb serviceGroup sydney_sg sydney_server 80
Domain-Name based Services – AWS ELB
GSLB DBS utilizes the FQDN of the user Elastic Load Balancer to dynamically update the GSLB Service Groups to include the back-end servers that are being created and deleted within AWS. The back-end servers or instances in AWS can be configured to scale based on network demand or CPU utilization. To configure this feature, point the NetScaler ADC to the Elastic Load Balancer to dynamically route to different servers in AWS without having to manually update the NetScaler ADC every time an instance is created and deleted within AWS. The NetScaler ADC DBS feature for GSLB Service Groups uses DNS-aware service discovery to determine the member service resources of the DBS namespace identified in the AutoScale group. Diagram: NetScaler ADC GSLB DBS AutoScale components with Cloud Load Balancers.
Configure AWS Components
Security Groups
Note: It is recommended to create different security groups for the ELB, the NetScaler ADC GSLB instance, and the Linux instance, as the set of rules required for each of these entities is different. This example has a consolidated Security Group configuration for brevity. To ensure the proper configuration of the virtual firewall, see: Security Groups for Your VPC. Step 1: Log in to the user AWS resource group and navigate to EC2 > NETWORK & SECURITY > Security Groups. Step 2: Click Create Security Group and provide a name and description. This security group encompasses the NetScaler ADC and Linux back-end web servers. Step 3: Add the inbound port rules from the following screenshot. Note: Limiting Source IP access is recommended for granular hardening. For more information, see: Web Server Rules.
Amazon Linux Back-end Web Services
Step 4: Log in to the user AWS resource group and navigate to EC2 > Instances. Step 5: Click Launch Instance using the details that follow to configure the Amazon Linux instance. Fill in details about setting up a Web Server or back-end service on this instance.
NetScaler ADC Configuration
Step 6: Log in to the user AWS resource group and navigate to EC2 > Instances. Step 7: Click Launch Instance and use the following details to configure the Amazon AMI instance.
Elastic IP Configuration
Note: NetScaler ADC can also be made to run with a single elastic IP, if necessary to reduce cost, by not having a public IP for the NSIP. Instead, attach an elastic IP to the SNIP, which can cover management access to the box, in addition to the GSLB site IP and ADNS IP. Step 8: Log in to the user AWS resource group and navigate to EC2 > NETWORK & SECURITY > Elastic IPs. Click Allocate new address to create an Elastic IP address. Associate the Elastic IP with the running NetScaler ADC instance within AWS. Configure a second Elastic IP and again associate it with the running NetScaler ADC instance.
Elastic Load Balancer
Step 9: Log in to the user AWS resource group and navigate to EC2 > LOAD BALANCING > Load Balancers. Step 10: Click Create Load Balancer to configure a classic load balancer.
The user Elastic Load Balancers allow users to load balance their back-end Amazon Linux instances while also being able to load balance other instances that are spun up based on demand.
Configuring Global Server Load Balancing Domain-Name Based Services
Traffic Management Configurations
Note: It is required to configure the NetScaler ADC with either a nameserver or a DNS virtual server through which the ELB/ALB domains will be resolved for the DBS Service Groups. Step 1: Navigate to Traffic Management > Load Balancing > Servers. Step 2: Click Add to create a server; provide a name and the FQDN corresponding to the A record (domain name) in AWS for the Elastic Load Balancer (ELB). Repeat step 2 to add the second ELB from the second resource location in AWS.
GSLB Configuration
Step 1: Navigate to Traffic Management > GSLB > Sites. Step 2: Click the Add button to configure a GSLB Site. Name the Site. The Type is configured as Remote or Local based on which NetScaler ADC users are configuring the site on. The Site IP Address is the IP address for the GSLB site. The GSLB site uses this IP address to communicate with the other GSLB sites. The Public IP address is required when using a cloud service where a particular IP is hosted on an external firewall or NAT device. The site should be configured as a Parent Site. Ensure the Trigger Monitors are set to ALWAYS and be sure to check off the three boxes at the bottom for Metric Exchange, Network Metric Exchange, and Persistence Session Entry Exchange. Citrix recommends setting the Trigger monitor setting to MEPDOWN. For more information, see: Configure a GSLB Service Group. Step 3: The following screenshot from the AWS configurations shows where users can find the Site IP Address and Public IP Address. The IPs are found under Network & Security > Elastic IPs. Click Create, then repeat steps 2 and 3 to configure the GSLB site for the other resource location in AWS (this can be configured on the same NetScaler ADC). Step 4: Navigate to Traffic Management > GSLB > Service Groups. Step 5: Click Add to add a service group. Name the Service Group, use the HTTP protocol, and then under Site Name, choose the respective site that was created in the previous steps. Be sure to configure AutoScale Mode as DNS and check off the boxes for State and Health Monitoring. Click OK to create the Service Group. Step 6: Click Service Group Members and select Server Based. Select the respective Elastic Load Balancing Server that was configured at the start of this guide. Configure the traffic to go over port 80. Click Create. Step 7: The Service Group Member Binding should be populated with the two instances it receives from the Elastic Load Balancer. Repeat the steps to configure the Service Group for the second resource location in AWS. (This can be done from the same location.) Step 8: Navigate to Traffic Management > GSLB > Virtual Servers. Click Add to create the virtual server. Name the server, set DNS Record Type to A, set Service Type to HTTP, and check the boxes for Enable after Creating and AppFlow Logging. Click OK to create the GSLB Virtual Server. (NetScaler ADC GUI) Step 9: When the GSLB Virtual Server is created, click No GSLB Virtual Server ServiceGroup Binding.
(NetScaler ADC GUI) Step 10: Under “ServiceGroup Binding” use Select Service Group Name to select and add the Service Groups that were created in the previous steps. Step 11: Next configure the GSLB Virtual Server Domain Binding by clicking No GSLB Virtual Server Domain Binding. Configure the FQDN and Bind, the rest of the settings can be left as the defaults. Step 12: Configure the ADNS Service by clicking No Service. Add a Service Name, click New Server, and enter the IP Address of the ADNS server. Also, if the user ADNS is already configured users can select Existing Server and then choose their ADNS from the menu. Make sure the Protocol is ADNS and the traffic is over Port 53. Configure the Method as LEASTCONNECTION and Backup Method as ROUNDROBIN. NetScaler ADC Global Load Balancing for Hybrid and Multi-Cloud Deployments The NetScaler ADC hybrid and multi-cloud global load balancing (GLB) solution enables users to distribute application traffic across multiple data centers in hybrid clouds, multiple clouds, and on-premises deployments. The NetScaler ADC hybrid and multi-cloud GLB solution helps users to manage their load balancing setup in hybrid or multi-cloud environments without altering the existing setup. Also, if users have an on-premises setup, they can test some of their services in the cloud by using the NetScaler ADC hybrid and multi-cloud GLB solution before completely migrating to the cloud. For example, users can route only a small percentage of their traffic to the cloud, and handle most of the traffic on-premises. The NetScaler ADC hybrid and multi-cloud GLB solution also enables users to manage and monitor NetScaler ADC instances across geographic locations from a single, unified console. A hybrid and multi-cloud architecture can also improve overall enterprise performance by avoiding “vendor lock-in” and using different infrastructure to meet the needs of user partners and customers. With multiple cloud architecture, users can manage their infrastructure costs better as they now have to pay only for what they use. Users can also scale their applications better as they now use the infrastructure on demand. It also provides the ability to quickly switch from one cloud to another to take advantage of the best offerings of each provider. Architecture of the NetScaler ADC Hybrid and Multi-Cloud GLB Solution The following diagram illustrates the architecture of NetScaler ADC hybrid and multi-cloud GLB feature. The NetScaler ADC GLB nodes handle the DNS name resolution. Any of these GLB nodes can receive DNS requests from any client location. The GLB node that receives the DNS request returns the load balancer virtual server IP address as selected by the configured load balancing method. Metrics (site, network, and persistence metrics) are exchanged between the GLB nodes using the metrics exchange protocol (MEP), which is a proprietary Citrix protocol. For more information on the MEP protocol, see: Configure Metrics Exchange Protocol . The monitor configured in the GLB node monitors the health status of the load balancing virtual server in the same data center. In a parent-child topology, metrics between the GLB and NetScaler ADC nodes are exchanged by using MEP. However, configuring monitor probes between a GLB and NetScaler ADC LB node is optional in a parent-child topology. The NetScaler Application Delivery Management (ADM) service agent enables communication between the NetScaler ADM and the managed instances in the user data center. 
For more information on NetScaler ADM service agents and how to install them, see: Getting Started . Note: This document makes the following assumptions: If users have an existing load balancing setup, it is up and running. A SNIP address or a GLB site IP address is configured on each of the NetScaler ADC GLB nodes. This IP address is used as the data center source IP address when exchanging metrics with other data centers. An ADNS or ADNS-TCP service is configured on each of the NetScaler ADC GLB instances to receive the DNS traffic. The required firewall and security groups are configured in the cloud service providers. Security Groups Configuration Users must set up the required firewall/security groups configuration in the cloud service providers. For more information about AWS security features, see: AWS/Documentation/Amazon VPC/User Guide/Security. Also, on the GLB node, users must open port 53 for ADNS service/DNS server IP address and port 3009 for GSLB site IP address for MEP traffic exchange. On the load balancing node, users must open the appropriate ports to receive the application traffic. For example, users must open port 80 for receiving HTTP traffic and open port 443 for receiving HTTPS traffic. Open port 443 for NITRO communication between the NetScaler ADM service agent and NetScaler ADM. For the dynamic round trip time GLB method, users must open port 53 to allow UDP and TCP probes depending on the configured LDNS probe type. The UDP or the TCP probes are initiated using one of the SNIPs and therefore this setting must be done for security groups bound to the server-side subnet. Capabilities of the NetScaler ADC Hybrid and Multi-Cloud GLB Solution Some of the capabilities of the NetScaler ADC hybrid and multi-cloud GLB solution are described in this section. Compatibility with other Load Balancing Solutions The NetScaler ADC hybrid and multi-cloud GLB solution supports various load balancing solutions such as the NetScaler ADC load balancer, NGINX, HAProxy, and other third-party load balancers. Note: Load balancing solutions other than NetScaler ADC are supported only if proximity-based and non-metric based GLB methods are used and if parent-child topology is not configured. GLB Methods The NetScaler ADC hybrid and multi-cloud GLB solution supports the following GLB methods. Metric-based GLB methods. Metric-based GLB methods collect metrics from the other NetScaler ADC nodes through the metrics exchange protocol. Least Connection: The client request is routed to the load balancer that has the fewest active connections. Least Bandwidth: The client request is routed to the load balancer that is currently serving the least amount of traffic. Least Packets: The client request is routed to the load balancer that has received the fewest packets in the last 14 seconds. Non-metric based GLB methods Round Robin: The client request is routed to the IP address of the load balancer that is at the top of the list of load balancers. That load balancer then moves to the bottom of the list. Source IP Hash: This method uses the hashed value of the client IP address to select a load balancer. Proximity-based GLB methods Static Proximity: The client request is routed to the load balancer that is closest to the client IP address. Round-Trip Time (RTT): This method uses the RTT value (the time delay in the connection between the client’s local DNS server and the data center) to select the IP address of the best performing load balancer. 
For more information on the load balancing methods, see: Load Balancing Algorithms . GLB Topologies The NetScaler ADC hybrid and multi-cloud GLB solution supports the active-passive topology and parent-child topology. Active-passive topology - Provides disaster recovery and ensures continuous availability of applications by protecting against points of failure. If the primary data center goes down, the passive data center becomes operational. For more information about GSLB active-passive topology, see: Configure GSLB for Disaster Recovery . Parent-child topology – Can be used if customers are using the metric-based GLB methods to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance. In a parent-child topology, the LB node (child site) must be a NetScaler ADC appliance because the exchange of metrics between the parent and child site is through the metrics exchange protocol (MEP). For more information about parent-child topology, see: Parent-Child Topology Deployment using the MEP Protocol . IPv6 Support The NetScaler ADC hybrid and multi-cloud GLB solution also supports IPv6. Monitoring The NetScaler ADC hybrid and multi-cloud GLB solution supports built-in monitors with an option to enable the secure connection. However, if LB and GLB configurations are on the same NetScaler ADC instance or if parent-child topology is used, configuring monitors is optional. Persistence The NetScaler ADC hybrid and multi-cloud GLB solution supports the following: Source IP based persistence sessions, so that multiple requests from the same client are directed to the same service if they arrive within the configured time-out window. If the time-out value expires before the client sends another request, the session is discarded, and the configured load balancing algorithm is used to select a new server for the client’s next request. Spillover persistence so that the backup virtual server continues to process the requests it receives, even after the load on the primary falls below the threshold. For more information, see: Configure Spillover. Site persistence so that the GLB node selects a data center to process a client request and forwards the IP address of the selected data center for all subsequent DNS requests. If the configured persistence applies to a site that is DOWN, the GLB node uses a GLB method to select a new site, and the new site becomes persistent for subsequent requests from the client. Configuration by using NetScaler ADM StyleBooks Customers can use the default Multi-cloud GLB StyleBook on NetScaler ADM to configure the NetScaler ADC instances with hybrid and multi-cloud GLB configurations. Customers can use the default Multi-cloud GLB StyleBook for the LB Node StyleBook to configure the NetScaler ADC load balancing nodes which are the child sites in a parent-child topology that handle the application traffic. Use this StyleBook only if users want to configure LB nodes in a parent-child topology. However, each LB node must be configured separately using this StyleBook. Workflow of the NetScaler ADC Hybrid and Multi-Cloud GSLB Solution Configuration Customers can use the shipped Multi-cloud GLB StyleBook on NetScaler ADM to configure the NetScaler ADC instances with hybrid and multi-cloud GLB configurations. The following diagram shows the workflow for configuring a NetScaler ADC hybrid and multi-cloud GLB solution. The steps in the workflow diagram are explained in more detail after the diagram. 
Perform the following tasks as a cloud administrator: Sign up for a Citrix Cloud account. To start using NetScaler ADM, create a Citrix Cloud company account or join an existing one that has been created by someone in your company. After users log on to Citrix Cloud, click Manage on the Citrix Application Delivery Management tile to set up the ADM service for the first time. Download and install multiple NetScaler ADM service agents. Users must install and configure the NetScaler ADM service agent in their network environment to enable communication between the NetScaler ADM and the managed instances in their data center or cloud. Install an agent in each region, so that they can configure LB and GLB configurations on the managed instances. The LB and GLB configurations can share a single agent. For more information on the above three tasks, see: Getting Started . Deploy load balancers on Microsoft Azure/AWS cloud/on-premises data centers. Depending on the type of load balancers that users are deploying on cloud and on-premises, provision them accordingly. For example, users can provision NetScaler ADC VPX instances in a Microsoft Azure Resource Manager (ARM) portal, in an Amazon Web Services (AWS) virtual private cloud and in on-premises data centers. Configure NetScaler ADC instances to function as LB or GLB nodes in standalone mode, by creating the virtual machines and configuring other resources. For more information on how to deploy NetScaler ADC VPX instances, see the following documents: NetScaler ADC VPX on AWS. Configure a NetScaler VPX Standalone Instance . Perform security configurations. Configure network security groups and network ACLs in ARM and in AWS to control inbound and outbound traffic for user instances and subnets. Add NetScaler ADC instances in NetScaler ADM. NetScaler ADC instances are network appliances or virtual appliances that users want to discover, manage, and monitor from NetScaler ADM. To manage and monitor these instances, users must add the instances to the service and register both LB (if users are using NetScaler ADC for LB) and GLB instances. For more information on how to add NetScaler ADC instances in the NetScaler ADM, see: Getting Started Implement the GLB and LB configurations using default NetScaler ADM StyleBooks. Use Multi-cloud GLB StyleBook to execute the GLB configuration on the selected GLB NetScaler ADC instances. Implement the load balancing configuration. (Users can skip this step if they already have LB configurations on the managed instances.) Users can configure load balancers on NetScaler ADC instances in one of two ways: Manually configure the instances for load balancing the applications. For more information on how to manually configure the instances, see: Set up Basic Load Balancing . Use StyleBooks. Users can use one of the NetScaler ADM StyleBooks (HTTP/SSL Load Balancing StyleBook or HTTP/SSL Load Balancing (with Monitors) StyleBook) to create the load balancer configuration on the selected NetScaler ADC instance. Users can also create their own StyleBooks. For more information on StyleBooks, see: StyleBooks . Use Multi-cloud GLB StyleBook for LB Node to configure GLB parent-child topology in any of the following cases: If users are using the metric-based GLB algorithms (Least Packets, Least Connections, Least Bandwidth) to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance. If site persistence is required. 
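To illustrate the manual load balancing option mentioned in the workflow above, the following is a minimal, illustrative NetScaler CLI sketch of a basic HTTP load balancing configuration on an LB node; the virtual server name, service names, and addresses (app_vs, app_svc1, app_svc2, 192.0.2.x) are placeholders and not values from this guide:
> add lb vserver app_vs HTTP 192.0.2.10 80
> add service app_svc1 192.0.2.21 HTTP 80
> add service app_svc2 192.0.2.22 HTTP 80
> bind lb vserver app_vs app_svc1
> bind lb vserver app_vs app_svc2
In most deployments, the HTTP/SSL Load Balancing StyleBooks mentioned above generate an equivalent configuration, so manual CLI configuration is only needed when a StyleBook is not used.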
Using StyleBooks to Configure GLB on NetScaler ADC LB Nodes

Customers can use the Multi-cloud GLB StyleBook for LB Node if they are using the metric-based GLB algorithms (Least Packets, Least Connections, Least Bandwidth) to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance. Users can also use this StyleBook to configure more child sites for an existing parent site. This StyleBook configures one child site at a time, so create as many configurations (config packs) from this StyleBook as there are child sites. The StyleBook applies the GLB configuration on the child sites. Users can configure a maximum of 1024 child sites.

Note: Use Multi-cloud GLB StyleBook to configure the parent sites.

This StyleBook makes the following assumptions: A SNIP address or a GLB site IP address is configured. The required firewall and security groups are configured in the cloud service providers.

Configuring a Child Site in a Parent-Child Topology by using Multi-Cloud GLB StyleBook for LB Node

Navigate to Applications > Configuration, and click Create New. The StyleBook appears as a user interface page on which users can enter the values for all the parameters defined in this StyleBook. Note: The terms data center and site are used interchangeably in this document. Set the following parameters:

Application Name. Enter the name of the GLB application deployed on the GLB sites for which you want to create child sites.

Protocol. Select the application protocol of the deployed application from the drop-down list box.

LB Health Check (Optional)

Health Check Type. From the drop-down list box, select the type of probe used for checking the health of the load balancer VIP address that represents the application on a site.

Secure Mode. (Optional) Select Yes to enable this parameter if SSL based health checks are required.

HTTP Request. (Optional) If users selected HTTP as the health-check type, enter the full HTTP request used to probe the VIP address.

List of HTTP Status Response Codes. (Optional) If users selected HTTP as the health check type, enter the list of HTTP status codes expected in responses to HTTP requests when the VIP is healthy.

Configuring parent site. Provide the details of the parent site (GLB node) under which you want to create the child site (LB node).

Site Name. Enter the name of the parent site.

Site IP Address. Enter the IP address that the parent site uses as its source IP address when exchanging metrics with other sites. This IP address is assumed to be already configured on the GLB node in each site.

Site Public IP Address. (Optional) Enter the public IP address of the parent site that is used to exchange metrics, if that site’s IP address is NAT’ed.

Configuring child site. Provide the details of the child site.

Site name. Enter the name of the site.

Site IP Address. Enter the IP address of the child site. Here, use the private IP address or SNIP of the NetScaler ADC node that is being configured as a child site.

Site Public IP Address. (Optional) Enter the public IP address of the child site that is used to exchange metrics, if that site’s IP address is NAT’ed.

Configuring active GLB services (optional). Configure active GLB services only if the LB virtual server IP address is not a public IP address. This section allows users to configure the list of local GLB services on the sites where the application is deployed.

Service IP.
Enter the IP address of the load balancing virtual server on this site. Service Public IP Address. If the virtual IP address is private and has a public IP address NAT’ed to it, specify the public IP address. Service Port. Enter the port of the GLB service on this site. Site Name. Enter the name of the site on which the GLB service is located. Click Target Instances and select the NetScaler ADC instances configured as GLB instances on each site on which to deploy the GLB configuration. Click Create to create the LB configuration on the selected NetScaler ADC instance (LB node). Users can also click Dry Run to check the objects that would be created in the target instances. The StyleBook configuration that users have created appears in the list of configurations on the Configurations page. Users can examine, update, or remove this configuration by using the NetScaler ADM GUI. CloudFormation Template Deployment NetScaler ADC VPX is available as Amazon Machine Images (AMI) in the AWS Marketplace. Before using a CloudFormation template to provision a NetScaler ADC VPX in AWS, the AWS user has to accept the terms and subscribe to the AWS Marketplace product. Each edition of the NetScaler ADC VPX in the Marketplace requires this step. Each template in the CloudFormation repository has collocated documentation describing the usage and architecture of the template. The templates attempt to codify recommended deployment architecture of the NetScaler ADC VPX, or to introduce the user to the NetScaler ADC or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their particular production and testing needs. Most templates require full EC2 permissions in addition to permissions to create IAM roles. The CloudFormation templates contain AMI Ids that are specific to a particular release of the NetScaler ADC VPX (for example, release 12.0-56.20) and edition (for example, NetScaler ADC VPX Platinum Edition - 10 Mbps) OR NetScaler ADC BYOL. To use a different version / edition of the NetScaler ADC VPX with a CloudFormation template requires the user to edit the template and replace the AMI IDs. The latest NetScaler ADC AWS-AMI-IDs are located here: NetScaler ADC AWS CloudFormation Master. CFT Three-NIC Deployment This template deploys a VPC, with 3 subnets (Management, client, server) for 2 Availability Zones. It deploys an Internet Gateway, with a default route on the public subnets. This template also creates a HA pair across Availability Zones with two instances of NetScaler ADC: 3 ENIs associated to 3 VPC subnets (Management, Client, Server) on primary and 3 ENIs associated to 3 VPC subnets (Management, Client, Server) on secondary. All the resource names created by this CFT are prefixed with a tagName of the stack name. 
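Once the prerequisites described below are in place, a template deployment can also be driven from the AWS CLI rather than the console. The following sketch is illustrative only: the stack name, template file name, and parameter keys are hypothetical and depend on the specific CFT chosen, and the Marketplace image name filter is an assumption, so the template's own documentation remains the authority on the parameters it expects:

aws ec2 describe-images --owners aws-marketplace --region us-east-1 \
    --filters "Name=name,Values=*NetScaler*" \
    --query "Images[].{Id:ImageId,Name:Name}"
aws cloudformation create-stack --stack-name adc-ha-3nic --region us-east-1 \
    --template-body file://citrix-adc-3nic-ha.template \
    --parameters ParameterKey=KeyName,ParameterValue=my-keypair \
    --capabilities CAPABILITY_IAM
aws cloudformation describe-stacks --stack-name adc-ha-3nic \
    --query "Stacks[0].Outputs"

The describe-stacks call returns the stack outputs, which for the three-NIC HA template are listed below.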
The output of the CloudFormation template includes:

PrimaryCitrixADCManagementURL - HTTPS URL to the Management GUI of the Primary VPX (uses self-signed cert)
PrimaryCitrixADCManagementURL2 - HTTP URL to the Management GUI of the Primary VPX
PrimaryCitrixADCInstanceID - Instance Id of the newly created Primary VPX instance
PrimaryCitrixADCPublicVIP - Elastic IP address of the Primary VPX instance associated with the VIP
PrimaryCitrixADCPrivateNSIP - Private IP (NS IP) used for management of the Primary VPX
PrimaryCitrixADCPublicNSIP - Public IP (NS IP) used for management of the Primary VPX
PrimaryCitrixADCPrivateVIP - Private IP address of the Primary VPX instance associated with the VIP
PrimaryCitrixADCSNIP - Private IP address of the Primary VPX instance associated with the SNIP
SecondaryCitrixADCManagementURL - HTTPS URL to the Management GUI of the Secondary VPX (uses self-signed cert)
SecondaryCitrixADCManagementURL2 - HTTP URL to the Management GUI of the Secondary VPX
SecondaryCitrixADCInstanceID - Instance Id of the newly created Secondary VPX instance
SecondaryCitrixADCPrivateNSIP - Private IP (NS IP) used for management of the Secondary VPX
SecondaryCitrixADCPublicNSIP - Public IP (NS IP) used for management of the Secondary VPX
SecondaryCitrixADCPrivateVIP - Private IP address of the Secondary VPX instance associated with the VIP
SecondaryCitrixADCSNIP - Private IP address of the Secondary VPX instance associated with the SNIP
SecurityGroup - Security group id that the VPX belongs to

When providing input to the CFT, the * against any parameter in the CFT implies that it is a mandatory field. For example, VPC ID* is a mandatory field.

The following prerequisites must be met. The CloudFormation template requires sufficient permissions to create IAM roles, beyond normal EC2 full privileges. The user of this template also needs to accept the terms and subscribe to the AWS Marketplace product before using this CloudFormation template. The following should also be present:

A key pair
Three unallocated EIPs: Primary Management, Client VIP, and Secondary Management

For more information on provisioning NetScaler ADC VPX instances on AWS, users can visit: Provisioning NetScaler ADC VPX Instances on AWS. For information on how to configure GLB using StyleBooks, see: Using StyleBooks to Configure GLB.

Prerequisites

Before attempting to create a VPX instance in AWS, users should ensure they have the following:

An AWS account to launch a NetScaler ADC VPX AMI in an Amazon Web Services (AWS) Virtual Private Cloud (VPC). Users can create an AWS account for free at www.aws.amazon.com.

An AWS Identity and Access Management (IAM) user account to securely control access to AWS services and resources for users. For more information about how to create an IAM user account, see the topic: Creating IAM Users (Console).

An IAM role is mandatory for both standalone and high availability deployments. The IAM role must have the following privileges:

ec2:DescribeInstances
ec2:DescribeNetworkInterfaces
ec2:DetachNetworkInterface
ec2:AttachNetworkInterface
ec2:StartInstances
ec2:StopInstances
ec2:RebootInstances
ec2:DescribeAddresses
ec2:AssociateAddress
ec2:DisassociateAddress
autoscaling:*
sns:*
sqs:*
iam:SimulatePrincipalPolicy
iam:GetRole

If the Citrix CloudFormation template is used, the IAM role is automatically created. The template does not allow selecting an already created IAM role.

Note: When users log on to the VPX instance through the GUI, a prompt to configure the required privileges for the IAM role appears.
Ignore the prompt if the privileges have already been configured.

AWS CLI is required to use all the functionality provided by the AWS Management Console from the terminal program. For more information, see: What Is the AWS Command Line Interface?. Users also need the AWS CLI to change the network interface type to SR-IOV.

GSLB Prerequisites

The prerequisites for the NetScaler ADC GSLB Service Groups include a functioning AWS / Microsoft Azure environment with the knowledge and ability to configure Security Groups, Linux Web Servers, NetScaler ADCs within AWS, Elastic IPs, and Elastic Load Balancers. GSLB DBS Service integration requires NetScaler ADC version 12.0.57 for AWS ELB and Microsoft Azure ALB load balancer instances.

Limitations and Usage Guidelines

The following limitations and usage guidelines apply when deploying a NetScaler ADC VPX instance on AWS:

Users should be familiar with the AWS terminology listed previously before starting a new deployment.

The clustering feature is supported only when provisioned with NetScaler ADM Auto Scale Groups.

For the high availability setup to work effectively, associate a dedicated NAT device to the management interface or associate an Elastic IP (EIP) to the NSIP. For more information on NAT, in the AWS documentation, see: NAT Instances.

Data traffic and management traffic must be segregated with ENIs belonging to different subnets. Only the NSIP address must be present on the management ENI.

If a NAT instance is used for security instead of assigning an EIP to the NSIP, appropriate VPC level routing changes are required. For instructions on making VPC level routing changes, in the AWS documentation, see: Scenario 2: VPC with Public and Private Subnets.

A VPX instance can be moved from one EC2 instance type to another (for example, from m3.large to an m3.xlarge). For more information, visit: Limitations and Usage Guidelines.

For storage media for VPX on AWS, NetScaler recommends EBS, because it is durable and the data is available even after it is detached from the instance.

Dynamic addition of ENIs to VPX is not supported; restart the VPX instance to apply the update. NetScaler recommends that users stop the standalone or HA instance, attach the new ENI, and then restart the instance.

The primary ENI cannot be changed or attached to a different subnet once it is deployed. Secondary ENIs can be detached and changed as needed while the VPX is stopped.

Users can assign multiple IP addresses to an ENI. The maximum number of IP addresses per ENI is determined by the EC2 instance type; see the section “IP Addresses Per Network Interface Per Instance Type” in: Elastic Network Interfaces. Users must allocate the IP addresses in AWS before they assign them to ENIs. For more information, see: Elastic Network Interfaces.

NetScaler recommends that users avoid using the enable and disable interface commands on NetScaler ADC VPX interfaces.

The NetScaler ADC set ha node <NODE_ID> -haStatus STAYPRIMARY and set ha node <NODE_ID> -haStatus STAYSECONDARY commands are disabled by default.

IPv6 is not supported for VPX.

Due to AWS limitations, these features are not supported: Gratuitous ARP (GARP), L2 mode (bridging), tagged VLAN, dynamic routing, and virtual MAC. Transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP.

For RNAT, routing, and transparent virtual servers to work, ensure that Source/Destination Check is disabled for all ENIs in the data path.
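On AWS the check is turned off per network interface. A hedged AWS CLI sketch, with a placeholder ENI ID, repeated for each client- and server-side ENI that carries data traffic:

aws ec2 modify-network-interface-attribute \
    --network-interface-id eni-0123456789abcdef0 --no-source-dest-check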
For more information, see “Changing the Source/Destination Checking” in: Elastic Network Interfaces.

In a NetScaler ADC VPX deployment on AWS, in some AWS regions, the AWS infrastructure might not be able to resolve AWS API calls. This happens if the API calls are issued through a non-management interface on the NetScaler ADC VPX instance. As a workaround, restrict the API calls to the management interface only. To do that, create an NSVLAN on the VPX instance and bind the management interface to the NSVLAN by using the appropriate commands. For example:

set ns config -nsvlan <vlan id> -ifnum 1/1 -tagged NO
save config

Restart the VPX instance at the prompt. For more information about configuring nsvlan, see: Configuring NSVLAN.

In the AWS console, the vCPU usage shown for a VPX instance under the Monitoring tab might be high (up to 100 percent), even when the actual usage is much lower. To see the actual vCPU usage, navigate to View all CloudWatch metrics. For more information, see: Monitor your Instances using Amazon CloudWatch. Alternatively, if low latency and performance are not a concern, users may enable the CPU Yield feature, allowing the packet engines to idle when there is no traffic. For more details about the CPU Yield feature and how to enable it, visit: Citrix Support Knowledge Center.

AWS-VPX Support Matrix

The following tables list the supported VPX models and AWS regions, instance types, and services.

Supported VPX Models on AWS:
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 200 Mbps
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 1000 Mbps
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 3 Gbps
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 5 Gbps
NetScaler ADC VPX Standard/Advanced/Premium - 10 Mbps
NetScaler ADC VPX Express - 20 Mbps
NetScaler ADC VPX - Customer Licensed

Supported AWS Regions:
US West (Oregon), US West (N. California), US East (Ohio), US East (N. Virginia), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Canada (Central), China (Beijing), China (Ningxia), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), South America (São Paulo), and AWS GovCloud (US-East)

Supported AWS Instance Types:
m3.large, m3.xlarge, m3.2xlarge
c4.large, c4.xlarge, c4.2xlarge, c4.4xlarge, c4.8xlarge
m4.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m4.10xlarge
m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.12xlarge, m5.24xlarge
c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, c5.18xlarge, c5.24xlarge
c5n.large, c5n.xlarge, c5n.2xlarge, c5n.4xlarge, c5n.9xlarge, c5n.18xlarge

Supported AWS Services:
EC2, Lambda, S3, VPC, Route 53, ELB, CloudWatch, AWS Auto Scaling, CloudFormation, Simple Queue Service (SQS), Simple Notification Service (SNS), and Identity & Access Management (IAM)

For higher bandwidth, NetScaler recommends the following instance types:

Instance Type | Bandwidth | Enhanced Networking
M4.10xlarge | 3 Gbps and 5 Gbps | SR-IOV
C4.8xlarge | 3 Gbps and 5 Gbps | SR-IOV
C5.18xlarge / M5.18xlarge | 25 Gbps | ENA
C5n.18xlarge | 30 Gbps | ENA

To remain updated about the current supported VPX models and AWS regions, instance types, and services, visit: VPX-AWS support matrix.
13. Overview

NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments.

As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required.

NetScaler VPX

The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms: XenServer, VMware ESX, Microsoft Hyper-V, Linux KVM, Amazon Web Services, Microsoft Azure, and Google Cloud Platform. This deployment guide focuses on NetScaler ADC VPX on Microsoft Azure.

Microsoft Azure

Microsoft Azure is an ever-expanding set of cloud computing services built to help organizations meet their business challenges. Azure gives users the freedom to build, manage, and deploy applications on a massive, global network using their preferred tools and frameworks. With Azure, users can:

Be future-ready with continuous innovation from Microsoft to support their development today and their product visions for tomorrow.

Operate hybrid cloud seamlessly on-premises, in the cloud, and at the edge; Azure meets users where they are.

Build on their terms with Azure's commitment to open source and support for all languages and frameworks, allowing users to be free to build how they want and deploy where they want.

Trust their cloud with security from the ground up, backed by a team of experts and proactive, industry-leading compliance that is trusted by enterprises, governments, and startups.

Azure Terminology

Here is a brief description of the key terms used in this document that users must be familiar with:

Azure Load Balancer – Azure load balancer is a resource that distributes incoming traffic among computers in a network. Traffic is distributed among virtual machines defined in a load-balancer set. A load balancer can be external or internet-facing, or it can be internal.

Azure Resource Manager (ARM) – ARM is the new management framework for services in Azure. Azure Load Balancer is managed using ARM-based APIs and tools.

Back-End Address Pool – These are IP addresses associated with the virtual machine NIC to which load is distributed.

BLOB - Binary Large Object – Any binary object like a file or an image that can be stored in Azure storage.

Front-End IP Configuration – An Azure Load Balancer can include one or more front-end IP addresses, also known as virtual IPs (VIPs). These IP addresses serve as ingress for the traffic.
Instance Level Public IP (ILPIP) – An ILPIP is a public IP address that users can assign directly to a virtual machine or role instance, rather than to the cloud service that the virtual machine or role instance resides in. This does not take the place of the VIP (virtual IP) that is assigned to their cloud service. Rather, it is an extra IP address that can be used to connect directly to a virtual machine or role instance. Note: In the past, an ILPIP was referred to as a PIP, which stands for public IP. Inbound NAT Rules – This contains rules mapping a public port on the load balancer to a port for a specific virtual machine in the back-end address pool. IP-Config - It can be defined as an IP address pair (public IP and private IP) associated with an individual NIC. In an IP-Config, the public IP address can be NULL. Each NIC can have multiple IP-Configs associated with it, which can be up to 255. Load Balancing Rules – A rule property that maps a given front-end IP and port combination to a set of back-end IP addresses and port combinations. With a single definition of a load balancer resource, users can define multiple load balancing rules, each rule reflecting a combination of a front-end IP and port and back end IP and port associated with virtual machines. Network Security Group (NSG) – NSG contains a list of Access Control List (ACL) rules that allow or deny network traffic to virtual machine instances in a virtual network. NSGs can be associated with either subnets or individual virtual machine instances within that subnet. When an NSG is associated with a subnet, the ACL rules apply to all the virtual machine instances in that subnet. In addition, traffic to an individual virtual machine can be restricted further by associating an NSG directly to that virtual machine. Private IP addresses – Used for communication within an Azure virtual network, and user on-premises network when a VPN gateway is used to extend a user network to Azure. Private IP addresses allow Azure resources to communicate with other resources in a virtual network or an on-premises network through a VPN gateway or ExpressRoute circuit, without using an Internet-reachable IP address. In the Azure Resource Manager deployment model, a private IP address is associated with the following types of Azure resources – virtual machines, internal load balancers (ILBs), and application gateways. Probes – This contains health probes used to check availability of virtual machines instances in the back-end address pool. If a particular virtual machine does not respond to health probes for some time, then it is taken out of traffic serving. Probes enable users to track the health of virtual instances. If a health probe fails, the virtual instance is taken out of rotation automatically. Public IP Addresses (PIP) – PIP is used for communication with the Internet, including Azure public-facing services and is associated with virtual machines, Internet-facing load balancers, VPN gateways, and application gateways. Region - An area within a geography that does not cross national borders and that contains one or more data centers. Pricing, regional services, and offer types are exposed at the region level. A region is typically paired with another region, which can be up to several hundred miles away, to form a regional pair. Regional pairs can be used as a mechanism for disaster recovery and high availability scenarios. Also referred to generally as location. 
Resource Group - A container in Resource Manager that holds related resources for an application. The resource group can include all resources for an application, or only those resources that are logically grouped. Storage Account – An Azure storage account gives users access to the Azure blob, queue, table, and file services in Azure Storage. A user storage account provides the unique namespace for user Azure storage data objects. Virtual Machine – The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in various sizes. Virtual Network - An Azure virtual network is a representation of a user network in the cloud. It is a logical isolation of the Azure cloud dedicated to a user subscription. Users can fully control the IP address blocks, DNS settings, security policies, and route tables within this network. Users can also further segment their VNet into subnets and launch Azure IaaS virtual machines and cloud services (PaaS role instances). Also, users can connect the virtual network to their on-premises network using one of the connectivity options available in Azure. In essence, users can expand their network to Azure, with complete control on IP address blocks with the benefit of the enterprise scale Azure provides. Use Cases Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on Azure combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the Azure Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same, powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on Azure enables several compelling use cases that not only support the immediate needs of today’s enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers. Datacenter Expansion with Autoscale In an application economy where applications are synonymous with business productivity, growth, and customer experience, it becomes indispensable for organizations to stay competitive, innovate rapidly and scale to meet customer demands while minimizing downtime and to prevent revenue losses. When an organization outgrows the on-prem data center capacity, instead of thinking about procuring more hardware and spending their capex budget, they are thinking about expanding their presence in the public cloud. With the move to the public cloud, when selecting the right ADC for the user public cloud deployments, scale and performance are important factors. There is always a need to scale applications in response to fluctuating demand. Under provisioning may lead to lost customers, reduced employee productivity, and lower revenue. Right sizing the user infrastructure on demand is even more important in the public cloud where over provisioning is costly. In response to the need for greater performance and scalability in the public cloud, NetScaler ADC remains the best option. 
The best-in-class solution lets users automatically scale up to 100 Gbps per region and, because of its superior software architecture, it delivers a latency advantage of 100 ms on a typical eCommerce webpage compared to other ADC vendors and cloud provider options.

Benefits of Autoscaling

High availability of applications. Autoscaling ensures that your application always has the right number of NetScaler ADC VPX instances to handle the traffic demands. This keeps your application up and running at all times, regardless of traffic demands.

Smart scaling decisions and zero-touch configuration. Autoscaling continuously monitors your application and adds or removes NetScaler ADC instances dynamically depending on the demand. When demand spikes upward, instances are automatically added. When demand drops, instances are automatically removed. The addition and removal of NetScaler ADC instances happens automatically, with no manual configuration required.

Automatic DNS management. The NetScaler ADM Autoscale feature offers automatic DNS management. Whenever new NetScaler ADC instances are added, the domain names are updated automatically.

Graceful connection termination. During a scale-in, the NetScaler ADC instances are gracefully removed, avoiding the loss of client connections.

Better cost management. Autoscaling dynamically increases or decreases NetScaler ADC instances as needed. This enables users to optimize the costs involved. Users save money by launching instances only when they are needed and terminating them when they are not. Thus, users pay only for the resources they use.

Observability. Observability is essential for application dev-ops or IT personnel to monitor the health of the application. The NetScaler ADM Autoscale dashboard enables users to visualize the threshold parameter values, Autoscale trigger time stamps, events, and the instances participating in Autoscale.

Autoscaling of NetScaler ADC VPX in Microsoft Azure using NetScaler ADM

Autoscaling Architecture

NetScaler ADM handles the client traffic distribution using Azure DNS or Azure Load Balancer (ALB).

Traffic Distribution using Azure DNS

The following diagram illustrates how the DNS based autoscaling occurs using the Azure traffic manager as the traffic distributor:

In DNS based autoscaling, DNS acts as a distribution layer. The Azure traffic manager is the DNS based load balancer in Microsoft Azure. Traffic manager directs the client traffic to the appropriate NetScaler ADC instance that is available in the NetScaler ADM autoscaling group. Azure traffic manager resolves the FQDN to the VIP address of the NetScaler ADC instance.

Note: In DNS based autoscaling, each NetScaler ADC instance in the NetScaler ADM autoscale group requires a public IP address.

NetScaler ADM triggers the scale-out or scale-in action at the cluster level. When a scale-out is triggered, the registered virtual machines are provisioned and added to the cluster. Similarly, when a scale-in is triggered, the nodes are removed and de-provisioned from the NetScaler ADC VPX clusters.

Traffic Distribution using Azure Load Balancer

The following diagram illustrates how the autoscaling occurs using the Azure Load Balancer as the traffic distributor:

Azure Load Balancer is the distribution tier to the cluster nodes. ALB manages the client traffic and distributes it to NetScaler ADC VPX clusters.
ALB sends the client traffic to NetScaler ADC VPX cluster nodes that are available in the NetScaler ADM autoscaling group across availability zones. Note: Public IP address is allocated to Azure Load Balancer. NetScaler ADC VPX instances do not require a public IP address. NetScaler ADM triggers the scale-out or scale-in action at the cluster level. When a scale-out is triggered the registered virtual machines are provisioned and added to the cluster. Similarly, when a scale-in is triggered, the nodes are removed and de-provisioned from the NetScaler ADC VPX clusters. NetScaler ADM Autoscale Group Autoscale group is a group of NetScaler ADC instances that load balance applications as a single entity and trigger autoscaling based on the configured threshold parameter values. Resource Group Resource group contains the resources that are related to NetScaler ADC autoscaling. This resource group helps users to manage the resources required for autoscaling. For more information, see: Manage Azure Resources by using the Azure Portal. Azure Back-end Virtual Machine Scale Set Azure virtual machine scale set is a collection of identical VM instances. The number of VM instances can increase or decrease depending on the client traffic. This set provides high-availability to your applications. For more information, see: What are Virtual Machine Scale Sets?. Availability Zones Availability Zones are isolated locations within an Azure region. Each region is made up of several availability zones. Each availability zone belongs to a single region. Each availability zone has one NetScaler ADC VPX cluster. For more information, see: Regions and Availability Zones in Azure. Availability Sets An availability set is a logical grouping of a NetScaler ADC VPX cluster and application servers. Availability Sets are helpful to deploy ADC instances across multiple isolated hardware nodes in a cluster. With an availability set, users can ensure a reliable ADM autoscaling if there is hardware or software failure within Azure. For more information, see: Tutorial: Create and Deploy Highly Available Virtual Machines with Azure PowerShell. The following diagram illustrates the autoscaling in an availability set: The Azure infrastructure (ALB or Azure traffic manager) sends the client traffic to a NetScaler ADM autoscaling group in the availability set. NetScaler ADM triggers the scale-out or scale-in action at the cluster level. How Autoscaling Works The following flowchart illustrates the autoscaling workflow: The NetScaler ADM collects the statistics (CPU, Memory, and throughput) from the autoscale provisioned clusters for every minute. The statistics are evaluated against the configuration thresholds. Depending on the statistics, scale out or scale in is triggered. Scale-out is triggered when the statistics exceed the maximum threshold. Scale-in is triggered when the statistics are operating below the minimum threshold. If a scale-out is triggered: A new node is provisioned. The node is attached to the cluster and the configuration is synchronized from the cluster to the new node. The node is registered with NetScaler ADM. The new node IP addresses are updated in the Azure traffic manager. If a scale-in is triggered: The node is identified for removal. Stop new connections to the selected node. Waits for the specified period for the connections to drain. In DNS traffic, it also waits for the specified TTL period. 
The node is detached from the cluster, deregistered from NetScaler ADM, and then de-provisioned from Microsoft Azure. Note: When the application is deployed, an IP set is created on clusters in every availability zone. Then, the domain and instance IP addresses are registered with the Azure traffic manager or ALB. When the application is removed, the domain and instance IP addresses are deregistered from the Azure traffic manager or ALB. Then, the IP set is deleted. Example Autoscaling Scenario Consider that users have created an autoscale group named asg_arn in a single availability zone with the following configuration. Selected threshold parameters – Memory usage. Threshold limit set to memory: Minimum limit: 40 Maximum limit: 85 Watch time – 2 minutes. Cooldown period – 10 minutes. Time to wait during de-provision – 10 minutes. DNS time to live – 10 seconds. After the autoscale group is created, statistics are collected from the autoscale group. The autoscale policy also evaluates if any autoscale event is in progress. If autoscaling is in progress, wait for that event to complete before collecting the statistics. The Sequence of Events Memory usage exceeds the threshold limit at T2. However, the scale-out is not triggered because it did not breach for the specified watch time. Scale-out is triggered at T5 after a maximum threshold is breached for 2 minutes (watch time) continuously. No action was taken for the breach between T5-T10 because node provisioning is in progress. The node is provisioned at T10 and added to the cluster. The cooldown period is started. No action was taken for the breach between T10-T20 because of the cooldown period. This period ensures the organic growing of instances of an autoscale group. Before triggering the next scaling decision, it waits for the current traffic to stabilize and average out on the current set of instances. Memory usage drops below the minimum threshold limit at T23. However, the scale-in is not triggered because it did not breach for the specified watch time. Scale-in is triggered at T26 after the minimum threshold is breached for 2 minutes (watch time) continuously. A node in the cluster is identified for de-provisioning. No action was taken for the breach between T26-T36 because NetScaler ADM is waiting to drain existing connections. For DNS based autoscaling, TTL is in effect. Note: For DNS based autoscaling, NetScaler ADM waits for the specified Time-To-Live (TTL) period. Then, it waits for existing connections to drain before initiating node de-provisioning. No action was taken for the breach between T37-T39 because node de-provisioning is in progress. The node is removed and de-provisioned at T40 from the cluster. All the connections to the selected node were drained before initiating node de-provisioning. Therefore, the cooldown period is skipped after the node de-provision. Autoscale Configuration NetScaler ADM manages all the NetScaler ADC VPX clusters in Microsoft Azure. NetScaler ADM accesses the Azure resources using the Cloud Access Profile. The following flow diagram explains the steps involved in creating and configuring an Autoscale group: Set up Microsoft Azure Components Perform the following tasks in Azure before users Autoscale NetScaler ADC VPX instances in NetScaler ADM. Create a Virtual Network. Create Security Groups. Create Subnets. Subscribe to the NetScaler ADC VPX License in Microsoft Azure . Create and Register an Application. Create a Virtual Network Log on to the user Microsoft Azure portal. 
Select Create a resource. Select Networking and click Virtual Network. Specify the required parameters. In Resource group, users must specify the resource group where they want to deploy a NetScaler ADC VPX product. In Location, users must specify a location that supports availability zones, such as: Central US, East US 2, France Central, North Europe, Southeast Asia, West Europe, or West US 2. Note: The application servers are present in this resource group. Click Create. For more information, see Azure Virtual Network here: What is Azure Virtual Network?.

Create Security Groups

Create three security groups in the user virtual network (VNet) - one each for the management, client, and server connections. Create a security group to control inbound and outbound traffic in the NetScaler ADC VPX instance. Create rules for incoming traffic that users want to control in the NetScaler Autoscale groups. Users can add as many rules as they want.

Management: A security group in the user account dedicated for management of NetScaler ADC VPX. NetScaler ADC has to contact Azure services and requires Internet access. Inbound rules are allowed on the following TCP and UDP ports. TCP: 80, 22, 443, 3008–3011, 4001. UDP: 67, 123, 161, 500, 3003, 4500, 7000. Note: Ensure that the security group allows the NetScaler ADM agent to access the VPX.

Client: A security group in the user account dedicated for client-side communication of NetScaler ADC VPX instances. Typically, inbound rules are allowed on the TCP ports 80, 22, and 443.

Server: A security group in the user account dedicated for server-side communication of NetScaler ADC VPX.

For more information on how to create a security group in Microsoft Azure, see: Create, Change, or Delete a Network Security Group.

Create Subnets

Create three subnets in the user virtual network (VNet) - one each for the management, client, and server connections. Specify an address range that is defined in the user VNet for each of the subnets. Specify the availability zone in which users want the subnet to reside.

Management: A subnet in the user Virtual Network (VNet) dedicated for management. NetScaler ADC has to contact Azure services and requires internet access.

Client: A subnet in the user Virtual Network (VNet) dedicated for the client side. Typically, NetScaler ADC receives client traffic for the application via a public subnet from the internet.

Server: A subnet where the application servers are provisioned. All the user application servers are present in this subnet and receive application traffic from the NetScaler ADC through this subnet.

Note: Specify an appropriate security group to the subnet while creating a subnet.
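As a point of reference, the VNet, one of the security groups, and one of the subnets could also be created with the Azure CLI along these lines; the resource names, address ranges, and location are placeholders, and the same pattern is repeated for the client and server security groups and subnets:

az network vnet create -g adc-autoscale-rg -n adc-vnet \
    --address-prefixes 10.10.0.0/16 --location eastus2
az network nsg create -g adc-autoscale-rg -n nsg-mgmt
az network nsg rule create -g adc-autoscale-rg --nsg-name nsg-mgmt -n allow-https \
    --priority 100 --direction Inbound --access Allow --protocol Tcp \
    --destination-port-ranges 443
az network vnet subnet create -g adc-autoscale-rg --vnet-name adc-vnet -n subnet-mgmt \
    --address-prefixes 10.10.1.0/24 --network-security-group nsg-mgmt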
For more information on how to create a subnet in Microsoft Azure, see: Add, Change, or Delete a Virtual Network Subnet. Subscribe to the NetScaler ADC VPX License in Microsoft Azure Log on to the user Microsoft Azure portal. Select Create a resource. In the Search the marketplace bar, search NetScaler ADC and select the required product version. In the Select a software plan list, select one of the following license types: Bring your own license Enterprise Platinum Note: If users choose the Bring your own license option, the Autoscale group checks out the licenses from the NetScaler ADM while provisioning NetScaler ADC instances. In NetScaler ADM, the Advanced and Premium are the equivalent license types for Enterprise and Platinum respectively. Ensure the programmatic deployment is enabled for the selected NetScaler ADC product. Beside Want to deploy programmatically? click Get Started. In Choose the subscriptions, select Enable to deploy the selected NetScaler ADC VPX edition programmatically. Important: Enabling the programmatic deployment is required to Autoscale NetScaler ADC VPX instances in Azure. Click Save. Close Configure Programmatic Deployment. Click Create. Create and Register an Application NetScaler ADM uses this application to Autoscale NetScaler ADC VPX instances in Azure. To create and register an application in Azure: In the Azure portal, select Azure Active Directory. This option displays the user organization’s directory. Select App registrations: In Name, specify the name of the application. Select the Application type from the list. In Sign-on URL, specify the application URL to access the application. Click Create. For more information on App registrations, see: How to: Use the Portal to Create an Azure AD Application and Service Principal that can Access Resources. Azure assigns an application ID to the application. The following is an example application registered in Microsoft Azure: Copy the following IDs and provide these IDs when users are configuring the Cloud Access Profile in NetScaler ADM. For steps to retrieve the following IDs, see: Get Values for Signing in. Application ID Directory ID Key Subscription ID: Copy the subscription ID from the user storage account. Assign the Role Permission to an Application NetScaler ADM uses the application-as-a-service principle to Autoscale NetScaler ADC instances in Microsoft Azure. This permission is applicable only to the selected resource group. To assign a role permission to the user registered application, users have to be the owner of the Microsoft Azure subscription. In the Azure portal, select Resource groups. Select the resource group to which users want to assign a role permission. Select Access control (IAM). In Role assignments, click Add. Select Owner from the Role list. Select the application that is registered for autoscaling NetScaler ADC instances. Click Save. Set up NetScaler ADM Components Perform the following tasks in Azure before users Autoscale NetScaler ADC VPX instances in NetScaler ADM: Provision NetScaler ADM Agent on Azure Create a Site Attach the Site to a NetScaler ADM Service Agent Provision NetScaler ADM Agent on Azure The NetScaler ADM service agent works as an intermediary between the NetScaler ADM and the discovered instances in the data center or on the cloud. Navigate to Networks > Agents. Click Provision. Select Microsoft Azure and click Next. In the Cloud Parameters tab, specify the following: Name - specify the NetScaler ADM agent name. 
Site - select the site users have created to provision an agent and ADC VPX instances.
Cloud Access Profile - select the cloud access profile from the list.
Availability Zone - select the zones in which users want to create the Autoscale groups. Depending on the cloud access profile that users have selected, availability zones specific to that profile are populated.
Security Group - security groups control the inbound and outbound traffic in the NetScaler ADC agent. Users create rules for both incoming and outgoing traffic that they want to control.
Subnet - select the management subnet where users want to provision an agent.
Tags - type the key-value pair for the Autoscale group tags. A tag consists of a case-sensitive key-value pair. These tags enable users to organize and identify the Autoscale groups easily. The tags are applied to both Azure and NetScaler ADM.

Click Finish. Alternatively, users can install the NetScaler ADM agent from the Azure Marketplace. For more information, see: Install NetScaler ADM Agent on Microsoft Azure Cloud.

Create a Site

Create a site in NetScaler ADM and add the VNet details associated with the user Microsoft Azure resource group.

In NetScaler ADM, navigate to Networks > Sites.
Click Add.
In the Select Cloud pane, select Data Center as a Site type.
Choose Azure from the Type list.
Check the Fetch VNet from Azure check box. This option helps users to retrieve the existing VNet information from the user Microsoft Azure account.
Click Next.
In the Choose Region pane, in Cloud Access Profile, select the profile created for the user Microsoft Azure account. If there are no profiles, create a profile.

To create a cloud access profile, click Add.
In Name, specify a name to identify the user Azure account in NetScaler ADM.
In Tenant Active Directory ID / Tenant ID, specify the Active Directory ID of the tenant or the account in Microsoft Azure.
Specify the Subscription ID.
Specify the Application ID/Client ID.
Specify the Application Key Password / Secret.
Click Create.

For more information, see: Install NetScaler ADM Agent on Microsoft Azure Cloud and Mapping Cloud Access Profile to the Azure Application.

In Vnet, select the virtual network containing NetScaler ADC VPX instances that users want to manage.
Specify a Site Name.
Click Finish.

Mapping Cloud Access Profile to the Azure Application

NetScaler ADM Term | Microsoft Azure Term
Tenant Active Directory ID / Tenant ID | Directory ID
Subscription ID | Subscription ID
Application ID / Client ID | Application ID
Application Key Password / Secret | Keys or Certificates or Client Secrets

Attach the Site to a NetScaler ADM Service Agent

In NetScaler ADM, navigate to Networks > Agents.
Select the agent for which users want to attach a site.
Click Attach Site.
Select the site from the list that users want to attach.
Click Save.

Step 1: Initialize Autoscale Configuration in NetScaler ADM

In NetScaler ADM, navigate to Networks > AutoScale Groups.
Click Add to create Autoscale groups. The Create AutoScale Group page appears.
Select Microsoft Azure and click Next.
In Basic Parameters, enter the following details:

Name: Type a name for the Autoscale group.
Site: Select the site that users have created to Autoscale the NetScaler ADC VPX instances on Microsoft Azure. If users have not created a site, click Add to create a site.
Agent: Select the NetScaler ADM agent that manages the provisioned instances.
Cloud Access Profile: Select the cloud access profile. Users can also add or edit a Cloud Access Profile.
Device Profile: Select the device profile from the list. NetScaler ADM uses the device profile when it requires users to log on to the NetScaler ADC VPX instance. Note: Ensure the selected device profile conforms to Microsoft Azure password rules, which can be found here: Password Policies that only Apply to Cloud User Accounts.

Traffic Distribution Mode: The Load Balancing using Azure LB option is selected as the default traffic distribution mode. Users can also choose the DNS using Azure DNS mode for the traffic distribution.

Enable AutoScale Group: Enable or disable the status of the Autoscale groups. This option is enabled by default. If this option is disabled, autoscaling is not triggered.

Availability Set or Availability Zone: Select the availability set or availability zones in which users want to create the Autoscale groups. Depending on the cloud access profile that users have selected, availability zones appear on the list.

Tags: Type the key-value pair for the Autoscale group tags. A tag consists of a case-sensitive key-value pair. These tags enable users to organize and identify the Autoscale groups easily. The tags are applied to both Microsoft Azure and NetScaler ADM.

Click Next.

Step 2: Configure Autoscale Parameters

In the AutoScale Parameters tab, enter the following details. Select one or more of the following threshold parameters whose values must be monitored to trigger a scale-out or a scale-in.

Enable CPU Usage Threshold: Monitor the metrics based on the CPU usage.

Enable Memory Usage Threshold: Monitor the metrics based on the memory usage.

Enable Throughput Threshold: Monitor the metrics based on the throughput.

Note: The default minimum threshold limit is 30 and the maximum threshold limit is 70. However, users can modify the limits. The minimum threshold limit must be equal to or less than half of the maximum threshold limit.

Users can select more than one threshold parameter for monitoring. Scale-out is triggered if at least one of the threshold parameters is above the maximum threshold. However, a scale-in is triggered only if all the threshold parameters are operating below their normal thresholds.

Minimum Instances: Select the minimum number of instances that must be provisioned for this Autoscale group. The default minimum number of instances is equal to the number of zones selected. Users can only increment the minimum instances in multiples of the number of zones. For example, if the number of availability zones is 4, the minimum instances are 4 by default, and users can increase the minimum instances to 8, 12, 16, and so on.

Maximum Instances: Select the maximum number of instances that must be provisioned for this Autoscale group. The maximum number of instances must be greater than or equal to the value of the minimum instances. The maximum number of instances cannot exceed the number of availability zones multiplied by 32. Maximum number of instances = number of availability zones * 32.

Watch-Time (minutes): Select the watch-time duration. This is the time for which the scale parameter's threshold has to stay breached for scaling to happen. If the threshold is breached on all samples collected in this specified time, scaling happens.

Cooldown period (minutes): Select the cooldown period. During scale-out, the cooldown period is the time for which evaluation of the statistics has to be stopped after a scale-out occurs. This period ensures the organic growth of instances of an Autoscale group.
Before triggering the next scaling decision, it waits for the current traffic to stabilize and average out on the current set of instances. Time to wait during Deprovision (minutes): Select the drain connection timeout period. During scale-in action, an instance is identified to de-provision. NetScaler ADM restricts the identified instance from processing new connections until the specified time expires before de-provision. In this period, it allows existing connections to this instance to be drained out before it gets de-provisioned. DNS Time To Live (seconds): Select the time (in seconds). In this period, a packet is set to exist inside a network before a router discards the packet. This parameter is applicable only when the traffic distribution mode is DNS using the Microsoft Azure traffic manager. Click Next. Step 3: Configure Licenses for Provisioning NetScaler ADC Instances Select one of the following modes to license NetScaler ADC instances that are part of the Autoscale Group: Using NetScaler ADM: While provisioning NetScaler ADC instances, the Autoscale group checks out the licenses from the NetScaler ADM. Using Microsoft Azure: The Allocate from Cloud option uses the NetScaler product licenses available in the Azure Marketplace. While provisioning NetScaler ADC instances, the Autoscale group uses the licenses from the marketplace. If users choose to use licenses from Azure Marketplace, specify the product or license in the Cloud Parameters tab. For more information, see: Licensing Requirements. Use Licenses from NetScaler ADM To use this option, ensure that users have subscribed to NetScaler ADC with the Bring your own license software plan in Azure. See: Subscribe to the NetScaler ADC VPX License in Microsoft Azure . In the License tab, select Allocate from ADM. In License Type, select one of the following options from the list: Bandwidth Licenses: Users can select one of the following options from the Bandwidth License Types list: Pooled Capacity: Specify the capacity to allocate for every new instance in the Autoscale group. From the common pool, each ADC instance in the Autoscale group checks out one instance license and only as much bandwidth as is specified. VPX Licenses: When a NetScaler ADC VPX instance is provisioned, the instance checks out the license from the NetScaler ADM. Virtual CPU Licenses: The provisioned NetScaler ADC VPX instance checks out licenses depending on the number of CPUs running in the Autoscale group. Note: When the provisioned instances are removed or destroyed, the applied licenses return to the NetScaler ADM license pool. These licenses can be reused to provision new instances during the next Autoscale. In License Edition, select the license edition. The Autoscale group uses the specified edition to provision instances. Click Next. Step 4: Configure Cloud Parameters In the Cloud Parameters tab, enter the following details: Resource Group: Select the resource group in which NetScaler ADC instances are deployed. Product / License: Select the NetScaler ADC product version that users want to provision. Ensure that programmatic access is enabled for the selected type. For more information, see: Subscribe to the NetScaler ADC VPX License in Microsoft Azure. Azure VM Size: Select the required VM size from the list. Note: Ensure that the selected Azure VM Size has a minimum of three NICs. For more information, see: Autoscaling of NetScaler ADC VPX in Microsoft Azure using NetScaler ADM . 
Cloud Access Profile for ADC: NetScaler ADM logs in to the user Azure account using this profile to provision or de-provision ADC instances. It also configures Azure LB or Azure DNS.

Image: Select the required NetScaler ADC version image. Click Add New to add a NetScaler ADC image.

Security Groups: Security groups control the inbound and outbound traffic in a NetScaler ADC VPX instance. Select a security group for Management, Client, and Server traffic. For more information on management, client, and server security groups, see: Create Security Groups.

Subnets: Users must have three separate subnets (management, client, and server) to autoscale NetScaler ADC instances. These subnets contain the required entities for autoscaling. Select the appropriate subnets. For more information, see: Create Subnets.

Click Finish.

Step 5: Configure an Application for the Autoscale Group

In NetScaler ADM, navigate to Networks > Autoscale Groups.
Select the Autoscale group that users created and click Configure.
In Configure Application, specify the following details:

Application Name - Specify the name of an application.
Domain Name - Specify the domain name of an application.
Zone Name - Specify the zone name of an application. This domain and zone name redirects to the virtual servers in Azure. For example, if users host an application at app.example.com, app is the domain name and example.com is the zone name.
Access Type - Users can use ADM autoscaling for both external and internal applications. Select the required application access type.

Choose the required StyleBook that users want to use to deploy configurations for the selected Autoscale group. If users want to import StyleBooks, click Import New StyleBook. Specify the values for all the parameters. The configuration parameters are pre-defined in the selected StyleBook.

Check the Application Server Group Type CLOUD check box to specify the application servers available in the virtual machine scale set. In Application Server Fleet Name, specify the Autoscale setting name of your virtual machine scale set. Select the Application Server Protocol from the list. In Member Port, specify the port value of the application server. Note: Ensure that AutoDisable Graceful Shutdown is set to No and the AutoDisable Delay field is blank. If users want to specify the advanced settings for the user application servers, check the Advanced Application Server Settings check box. Then, specify the required values listed under Advanced Application Server Settings.

If users have standalone application servers in the virtual network, check the Application Server Group Type STATIC check box: Select the Application Server Protocol from the list. In Server IPs and Ports, click + to add an application server IP address, port, and weight, then click Create.

Click Create.

Modify the Autoscale Groups Configuration

Users can modify an Autoscale group configuration or delete an Autoscale group. Users can modify only the following Autoscale group parameters: the maximum and minimum limits of the threshold parameters, the minimum and maximum instance values, the drain connection period value, the cooldown period value, and the watch duration value.

Users can also delete the Autoscale groups after they are created. When an Autoscale group is deleted, all the domains and IP addresses are deregistered from DNS and the cluster nodes are de-provisioned.

For more detailed information on provisioning NetScaler ADC VPX instances on Microsoft Azure, see Provisioning NetScaler ADC VPX Instances on Microsoft Azure.
ARM (Azure Resource Manager) Templates
The GitHub repository for NetScaler ADC ARM (Azure Resource Manager) templates hosts NetScaler ADC custom templates for deploying NetScaler ADC in Microsoft Azure Cloud Services. The templates in this repository are developed and maintained by the NetScaler ADC engineering team. Each template in this repository has co-located documentation describing the usage and architecture of the template. The templates attempt to codify the recommended deployment architecture of the NetScaler ADC VPX, to introduce the user to the NetScaler ADC, or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their particular production and testing needs. Most templates require a sufficient subscription to portal.azure.com to create resources and deploy templates.
NetScaler ADC VPX Azure Resource Manager (ARM) templates are designed to ensure an easy and consistent way of deploying standalone NetScaler ADC VPX. These templates increase reliability and system availability with built-in redundancy. These ARM templates support Bring Your Own License (BYOL) or Hourly based selections. Choice of selection is either mentioned in the template description or offered during template deployment. For more information on how to provision a NetScaler ADC VPX instance on Microsoft Azure using ARM (Azure Resource Manager) templates, visit NetScaler ADC Azure Templates. For more information on how to add Azure autoscale settings, visit: Add Azure Autoscale Settings. For more information on how to deploy a NetScaler ADC VPX instance on Microsoft Azure, refer to Deploy a NetScaler ADC VPX Instance on Microsoft Azure. For more information on how a NetScaler ADC VPX instance works on Azure, visit How a NetScaler ADC VPX Instance Works on Azure.
Prerequisites
Users need some prerequisite knowledge before deploying a NetScaler ADC VPX instance on Azure:
Familiarity with Azure terminology and network details. For information, see the Azure terminology in the previous section.
Knowledge of a NetScaler ADC appliance. For detailed information about the NetScaler ADC appliance, see: NetScaler ADC 13.0.
Knowledge of NetScaler ADC networking. See: Networking.
Azure Autoscale Prerequisites
This section describes the prerequisites that users must complete in Microsoft Azure and NetScaler ADM before they provision NetScaler ADC VPX instances. This document assumes the following:
Users possess a Microsoft Azure account that supports the Azure Resource Manager deployment model.
Users have a resource group in Microsoft Azure.
For more information on how to create an account and other tasks, see Microsoft Azure Documentation.
Limitations
Running the NetScaler ADC VPX load balancing solution on ARM imposes the following limitations:
The Azure architecture does not accommodate support for the following NetScaler ADC features:
Clustering
IPv6
Gratuitous ARP (GARP)
L2 Mode (bridging). Transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP.
Tagged VLAN
Dynamic Routing
Virtual MAC
USIP
Jumbo Frames
If users think that they might have to shut down and temporarily deallocate the NetScaler ADC VPX virtual machine at any time, they should assign a static internal IP address while creating the virtual machine. If they do not assign a static internal IP address, Azure might assign the virtual machine a different IP address each time it restarts, and the virtual machine might become inaccessible.
In an Azure deployment, only the following NetScaler ADC VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler ADC VPX Data Sheet. If a NetScaler ADC VPX instance with a model number higher than VPX 3000 is used, the network throughput might not be the same as specified by the instance’s license. However, other features, such as SSL throughput and SSL transactions per second, might improve. The “deployment ID” that is generated by Azure during virtual machine provisioning is not visible to the user in ARM. Users cannot use the deployment ID to deploy a NetScaler ADC VPX appliance on ARM. The NetScaler ADC VPX instance supports 20 Mb/s throughput and standard edition features when it is initialized. For a XenApp and XenDesktop deployment, a VPN virtual server on a VPX instance can be configured in the following modes: Basic mode, where the ICAOnly VPN virtual server parameter is set to ON. The Basic mode works fully on an unlicensed NetScaler ADC VPX instance. SmartAccess mode, where the ICAOnly VPN virtual server parameter is set to OFF. The SmartAccess mode works for only 5 NetScaler ADC AAA session users on an unlicensed NetScaler ADC VPX instance. Note: To configure the SmartControl feature, users must apply a Premium license to the NetScaler ADC VPX instance. Azure-VPX Supported Models and Licensing In an Azure deployment, only the following NetScaler ADC VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler ADC VPX Data Sheet. A NetScaler ADC VPX instance on Azure requires a license. The following licensing options are available for NetScaler ADC VPX instances running on Azure. Users can choose one of these methods to license NetScaler ADCs provisioned by NetScaler ADM: Using ADC licenses present in NetScaler ADM: Configure pooled capacity, VPX licenses, or virtual CPU licenses while creating the autoscale group. So, when a new instance is provisioned for an autoscale group, the already configured license type is automatically applied to the provisioned instance. Pooled Capacity: Allocates bandwidth to every provisioned instance in the autoscale group. Ensure users have the necessary bandwidth available in NetScaler ADM to provision new instances. For more information, see: Configure Pooled Capacity. Each ADC instance in the autoscale group checks out one instance license and the specified bandwidth from the pool. VPX licenses: Applies the VPX licenses to newly provisioned instances. Ensure users have the necessary number of VPX licenses available in NetScaler ADM to provision new instances. When a NetScaler ADC VPX instance is provisioned, the instance checks out the license from the NetScaler ADM. For more information, see: NetScaler ADC VPX Check-in and Check-out Licensing. Virtual CPU licenses: Applies virtual CPU licenses to newly provisioned instances. This license specifies the number of CPUs entitled to a NetScaler ADC VPX instance. Ensure users have the necessary number of Virtual CPUs in NetScaler ADM to provision new instances. When a NetScaler ADC VPX instance is provisioned, the instance checks out the virtual CPU license from the NetScaler ADM. For more information, see: NetScaler ADC Virtual CPU Licensing. When the provisioned instances are destroyed or de-provisioned, the applied licenses are automatically returned to NetScaler ADM. To monitor the consumed licenses, navigate to the Networks > Licenses page. 
Using Microsoft Azure subscription licenses: Configure NetScaler ADC licenses available in the Azure Marketplace while creating the autoscale group. So, when a new instance is provisioned for the autoscale group, the license is obtained from Azure Marketplace.
Supported NetScaler ADC Azure Virtual Machine Images for Provisioning
Use an Azure virtual machine image that supports a minimum of three NICs. Provisioning a NetScaler ADC VPX instance is supported only on the Premium and Advanced editions. For more information on Azure virtual machine image types, see: General Purpose Virtual Machine Sizes. The following are the recommended VM sizes for provisioning:
Standard_DS3_v2
Standard_B2ms
Standard_DS4_v2
Port Usage Guidelines
Users can configure more inbound and outbound rules in the NSG while creating the NetScaler ADC VPX instance or after the virtual machine is provisioned. Each inbound and outbound rule is associated with a public port and a private port. Before configuring NSG rules, note the following guidelines regarding the port numbers users can use:
The NetScaler ADC VPX instance reserves the following ports. Users cannot define these as private ports when using the Public IP address for requests from the internet. Ports 21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000. However, if users want internet-facing services such as the VIP to use a standard port (for example, port 443), users have to create a port mapping by using the NSG. The standard port is then mapped to a different port that is configured on the NetScaler ADC VPX for this VIP service. For example, a VIP service might be running on port 8443 on the VPX instance but be mapped to public port 443. So, when the user accesses port 443 through the Public IP, the request is directed to private port 8443.
The Public IP address does not support protocols in which port mapping is opened dynamically, such as passive FTP or ALG.
High availability does not work for traffic that uses a public IP address (PIP) associated with a VPX instance, instead of a PIP configured on the Azure load balancer.
In a NetScaler Gateway deployment, users need not configure a SNIP address, because the NSIP can be used as a SNIP when no SNIP is configured. Users must configure the VIP address by using the NSIP address and some nonstandard port number. For call-back configuration on the back-end server, the VIP port number has to be specified along with the VIP URL (for example, url: port).
Note: In the Azure Resource Manager, a NetScaler ADC VPX instance is associated with two IP addresses - a public IP address (PIP) and an internal IP address. While the external traffic connects to the PIP, the internal IP address or the NSIP is non-routable. To configure a VIP in VPX, use the internal IP address (NSIP) and any of the free ports available. Do not use the PIP to configure a VIP. For example, if the NSIP of a NetScaler ADC VPX instance is 10.1.0.3 and an available free port is 10022, then users can configure a VIP by providing the 10.1.0.3:10022 (NSIP address + port) combination.
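To make the note above concrete, the following is a minimal, hedged NetScaler CLI sketch of exposing a load-balancing virtual server on the NSIP with a free nonstandard port, as in the 10.1.0.3:10022 example. The back-end server address (10.2.0.4) and all object names are hypothetical placeholders, not values from this guide.
# sketch only: VIP = NSIP plus a free nonstandard port, as described above
add lb vserver lb_vip_example HTTP 10.1.0.3 10022
# hypothetical back-end server and service bound to the virtual server
add server web01 10.2.0.4
add service svc_web01 web01 HTTP 80
bind lb vserver lb_vip_example svc_web01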
14. Overview
NetScaler is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments. As an undisputed leader of service and application delivery, NetScaler is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required.
NetScaler VPX
The NetScaler VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms:
Citrix Hypervisor
VMware ESX
Microsoft Hyper-V
Linux KVM
Amazon Web Services
Microsoft Azure
Google Cloud Platform
This deployment guide focuses on NetScaler VPX on Microsoft Azure.
Microsoft Azure
Microsoft Azure is an ever-expanding set of cloud computing services built to help organizations meet their business challenges. Azure gives users the freedom to build, manage, and deploy applications on a massive, global network using their preferred tools and frameworks. With Azure, users can:
Be future-ready with continuous innovation from Microsoft to support their development today and their product visions for tomorrow.
Operate hybrid cloud seamlessly on-premises, in the cloud, and at the edge—Azure meets users where they are.
Build on their terms with Azure's commitment to open source and support for all languages and frameworks, allowing users to be free to build how they want and deploy where they want.
Trust their cloud with security from the ground up—backed by a team of experts and proactive, industry-leading compliance that is trusted by enterprises, governments, and startups.
Azure Terminology
Here is a brief description of the essential terms used in this document that users must be familiar with:
Azure Load Balancer – Azure load balancer is a resource that distributes incoming traffic among computers in a network. Traffic is distributed among virtual machines defined in a load-balancer set. A load balancer can be external or internet-facing, or it can be internal.
Azure Resource Manager (ARM) – ARM is the new management framework for services in Azure. Azure Load Balancer is managed using ARM-based APIs and tools.
Back-End Address Pool – The back-end address pool is the set of IP addresses associated with the virtual machine NICs to which the load is distributed.
BLOB - Binary Large Object – Any binary object, like a file or an image, that can be stored in Azure storage.
Front-End IP Configuration – An Azure Load Balancer can include one or more front-end IP addresses, also known as virtual IPs (VIPs). These IP addresses serve as ingress for the traffic.
Instance Level Public IP (ILPIP) – An ILPIP is a public IP address that users can assign directly to a virtual machine or role instance, rather than to the cloud service that the virtual machine or role instance resides in. The ILPIP does not take the place of the VIP (virtual IP) that is assigned to their cloud service. Rather, it is an extra IP address that can be used to connect directly to a virtual machine or role instance. Note: In the past, an ILPIP was referred to as a PIP, which stands for public IP. Inbound NAT Rules – This contains rules mapping a public port on the load balancer to a port for a specific virtual machine in the back-end address pool. IP-Config - It can be defined as an IP address pair (public IP and private IP) associated with an individual NIC. In an IP-Config, the public IP address can be NULL. Each NIC can have multiple IP-Configs associated with it, which can be up to 255. Load Balancing Rules – A rule property that maps a given front-end IP and port combination to a set of back-end IP addresses and port combinations. With a single definition of a load balancer resource, users can define multiple load balancing rules, each rule reflecting a combination of a front-end IP and port and back end IP and port associated with virtual machines. Network Security Group (NSG) – NSG contains a list of Access Control List (ACL) rules that allow or deny network traffic to virtual machine instances in a virtual network. NSGs can be associated with either subnets or individual virtual machine instances within that subnet. When an NSG is associated with a subnet, the ACL rules apply to all the virtual machine instances in that subnet. In addition, traffic to an individual virtual machine can be restricted further by associating an NSG directly to that virtual machine. Private IP addresses – Used for communication within an Azure virtual network, and user on-premises network when a VPN gateway is used to extend a user network to Azure. Private IP addresses allow Azure resources to communicate with other resources in a virtual network or an on-premises network through a VPN gateway or ExpressRoute circuit, without using an internet-reachable IP address. In the Azure Resource Manager deployment model, a private IP address is associated with the following types of Azure resources – virtual machines, internal load balancers (ILBs), and application gateways. Probes – This contains health probes used to check availability of virtual machines instances in the back-end address pool. If a particular virtual machine does not respond to health probes for some time, then it is taken out of traffic serving. Probes enable users to track the health of virtual instances. If a health probe fails, the virtual instance is taken out of rotation automatically. Public IP Addresses (PIP) – PIP is used for communication with the Internet, including Azure public-facing services and is associated with virtual machines, internet-facing load balancers, VPN gateways, and application gateways. Region - An area within a geography that does not cross national borders and that contains one or more data centers. Pricing, regional services, and offer types are exposed at the region level. A region is typically paired with another region, which can be up to several hundred miles away, to form a regional pair. Regional pairs can be used as a mechanism for disaster recovery and high availability scenarios. Also referred to generally as location. 
Resource Group - A container in Resource Manager that holds related resources for an application. The resource group can include all of the resources for an application, or only those resources that are logically grouped.
Storage Account – An Azure storage account gives users access to the Azure blob, queue, table, and file services in Azure Storage. A user storage account provides the unique namespace for user Azure storage data objects.
Virtual Machine – The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in various sizes.
Virtual Network - An Azure virtual network is a representation of a user network in the cloud. It is a logical isolation of the Azure cloud dedicated to a user subscription. Users can fully control the IP address blocks, DNS settings, security policies, and route tables within this network. Users can also further segment their VNet into subnets and launch Azure IaaS virtual machines and cloud services (PaaS role instances). Also, users can connect the virtual network to their on-premises network using one of the connectivity options available in Azure. In essence, users can expand their network to Azure, with complete control over IP address blocks, with the benefit of the enterprise scale Azure provides.
Use Cases
Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler on Azure combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the Azure Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same, powerful set of tools used to administer on-premises NetScaler deployments. The net result is that NetScaler on Azure enables several compelling use cases that not only support the immediate needs of today's enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers.
Disaster Recovery (DR)
A disaster is a sudden disruption of business functions caused by natural calamities or human-caused events. Disasters affect data center operations, after which resources and the data lost at the disaster site must be fully rebuilt and restored. The loss of data or downtime in the data center is critical and disrupts business continuity.
One of the challenges that customers face today is deciding where to put their DR site. Businesses are looking for consistency and performance regardless of any underlying infrastructure or network faults. Possible reasons many organizations are deciding to migrate to the cloud are:
Usage economics — The capital expense of having a data center on-prem is well documented, and by using the cloud, these businesses can free up time and resources from expanding their own systems.
Faster recovery times — Much of the automated orchestration enables recovery in mere minutes. Also, there are technologies that help replicate data by providing continuous data protection or continuous snapshots to guard against any outage or attack.
Finally, there are use cases where customers need many different types of compliance and security controls, which are already present on the public clouds. These make it easier to achieve the compliance they need rather than building their own.
A NetScaler configured for GSLB forwards traffic to the least-loaded or best-performing data center. This configuration, referred to as an active-active setup, not only improves performance, but also provides immediate disaster recovery by routing traffic to other data centers if a data center that is part of the setup goes down. NetScaler thereby saves customers valuable time and money.
Deployment Types
Multi-NIC Multi-IP Deployment (Three-NIC Deployment)
Typical Deployments: High Availability (HA), Standalone
Use Cases: Multi-NIC Multi-IP Deployments are used to achieve real isolation of data and management traffic. Multi-NIC Multi-IP Deployments also improve the scale and performance of the ADC. Multi-NIC Multi-IP Deployments are used in network applications where throughput is typically 1 Gbps or higher, and a Three-NIC Deployment is recommended.
Single-NIC Multi-IP Deployment (One-NIC Deployment)
Typical Deployments: High Availability (HA), Standalone
Use Cases: Internal Load Balancing. The typical use case of the Single-NIC Multi-IP Deployment is intranet applications requiring lower throughput (less than 1 Gbps).
NetScaler Azure Resource Manager Templates
Azure Resource Manager (ARM) Templates provide a method of deploying ADC infrastructure-as-code to Azure simply and consistently. Azure is managed using the Azure Resource Manager (ARM) API. The resources the ARM API manages are objects in Azure such as network cards, virtual machines, and hosted databases. ARM Templates declare the objects users want, along with their types, names, and properties, in a JSON file that can be understood by the ARM API, checked into source control, and managed like any other code file. ARM Templates give users the ability to roll out Azure infrastructure as code.
Use Cases: Customizing deployment, Automating deployment
Multi-NIC Multi-IP (Three-NIC) Deployment for DR
Customers would potentially deploy using three-NIC deployment if they are deploying into a production environment where security, redundancy, availability, capacity, and scalability are critical. With this deployment method, complexity and ease of management are not critical concerns to the users.
Single NIC Multi IP (One-NIC) Deployment for DR
Customers would potentially deploy using one-NIC deployment if they are deploying into a non-production environment, they are setting up the environment for testing, or they are staging a new environment before production deployment. Another potential reason for using one-NIC deployment is that customers want to deploy directly to the cloud quickly and efficiently. Finally, one-NIC deployment would be used when customers seek the simplicity of a single subnet configuration.
Azure Resource Manager Template Deployment
Customers would deploy using Azure Resource Manager (ARM) Templates if they are customizing their deployments or they are automating their deployments.
Network Architecture
In ARM, a NetScaler VPX virtual machine (VM) resides in a virtual network. A virtual network interface card (NIC) is created on each NetScaler VM. The network security group (NSG) configured in the virtual network is bound to the NIC, and together they control the traffic flowing into the VM and out of the VM. The NSG forwards the requests to the NetScaler VPX instance, and the VPX instance sends them to the servers. The response from a server follows the same path in reverse.
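As an illustration of the NSG's role described above, the following hedged Azure CLI sketch adds a single inbound rule that admits HTTPS traffic toward the VPX NIC. The resource group name, NSG name, and priority are hypothetical placeholders, not values from this guide.
# sketch only: allow inbound HTTPS to the VPX through its associated NSG
az network nsg rule create --resource-group rg-vpx --nsg-name vpx-client-nsg \
  --name allow-https --priority 200 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443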
The NSG can be configured to control a single VPX VM, or, with subnets and virtual networks, it can control traffic in multiple VPX VM deployments. The NIC contains network configuration details such as the virtual network, subnets, internal IP address, and Public IP address.
When working on ARM, it is good to know the following IP addresses that are used to access the VMs deployed with a single NIC and a single IP address:
Public IP (PIP) address is the internet-facing IP address configured directly on the virtual NIC of the NetScaler VM. This allows users to directly access a VM from the external network.
NetScaler IP (NSIP) address is an internal IP address configured on the VM. It is non-routable.
Virtual IP address (VIP) is configured by using the NSIP and a port number. Clients access NetScaler services through the PIP address, and when the request reaches the NIC of the NetScaler VPX VM or the Azure load balancer, the VIP gets translated to the internal IP (NSIP) and internal port number.
Internal IP address is the private internal IP address of the VM from the virtual network's address space pool. This IP address cannot be reached from the external network. By default, this IP address is dynamic unless users set it to static. Traffic from the internet is routed to this address according to the rules created on the NSG. The NSG works with the NIC to selectively send the right type of traffic to the right port on the NIC, which depends on the services configured on the VM.
The following figure shows how traffic flows from a client to a server through a NetScaler VPX instance provisioned in ARM.
Deployment Steps
When users deploy a NetScaler VPX instance on Microsoft Azure Resource Manager (ARM), they can use the Azure cloud computing capabilities and use NetScaler load balancing and traffic management features for their business needs. Users can deploy NetScaler VPX instances on Azure Resource Manager either as standalone instances or as high availability pairs in active-standby modes. Users can deploy a NetScaler VPX instance on Microsoft Azure in either of two ways:
Through the Azure Marketplace. The NetScaler VPX virtual appliance is available as an image in the Microsoft Azure Marketplace.
Using the NetScaler Azure Resource Manager (ARM) json template available on GitHub. For more information, see: NetScaler Azure Templates.
How a NetScaler VPX Instance Works on Azure
In an on-premises deployment, a NetScaler VPX instance requires at least three IP addresses:
Management IP address, called the NSIP address
Subnet IP (SNIP) address for communicating with the server farm
Virtual server IP (VIP) address for accepting client requests
For more information, see: Network Architecture for NetScaler VPX Instances on Microsoft Azure.
Note: VPX virtual appliances can be deployed on any instance type that has two or more cores and more than 2 GB memory.
In an Azure deployment, users can provision a NetScaler VPX instance on Azure in three ways:
Multi-NIC multi-IP architecture
Single NIC multi IP architecture
ARM (Azure Resource Manager) templates
Depending on requirements, users can deploy any of these supported architecture types.
Multi-NIC Multi-IP Architecture (Three-NIC)
In this deployment type, users can have more than one network interface (NIC) attached to a VPX instance. Any NIC can have one or more IP configurations - static or dynamic public and private IP addresses assigned to it.
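The following hedged Azure CLI sketch shows what adding one more IP configuration (a static private IP plus a new static public IP) to an existing NIC can look like. All resource names and addresses are hypothetical placeholders, not values from this guide.
# sketch only: create a static public IP and attach it to a second IP configuration on the NIC
az network public-ip create --resource-group rg-vpx --name vpx-client-pip --allocation-method Static
az network nic ip-config create --resource-group rg-vpx --nic-name vpx-nic0 \
  --name ipconfig2 --private-ip-address 10.0.1.10 --public-ip-address vpx-client-pip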
Refer to the following use cases:
Configure a High-Availability Setup with Multiple IP Addresses and NICs
Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands
Configure a High-Availability Setup with Multiple IP Addresses and NICs
In a Microsoft Azure deployment, a high-availability configuration of two NetScaler VPX instances is achieved by using the Azure Load Balancer (ALB). This is achieved by configuring a health probe on the ALB, which monitors each VPX instance by sending health probes every 5 seconds to both primary and secondary instances. In this setup, only the primary node responds to health probes; the secondary does not. Once the primary sends the response to the health probe, the ALB starts sending the data traffic to the instance. If the primary instance misses two consecutive health probes, the ALB does not redirect traffic to that instance. On failover, the new primary starts responding to health probes and the ALB redirects traffic to it. The standard VPX high availability failover time is three seconds. The total failover time that might occur for traffic switching can be a maximum of 13 seconds.
Users can deploy a pair of NetScaler VPX instances with multiple NICs in an active-passive high availability (HA) setup on Azure. Each NIC can contain multiple IP addresses. The following options are available for a multi-NIC high availability deployment:
High availability using Azure availability set
High availability using Azure availability zones
For more information about Azure Availability Set and Availability Zones, see the Azure documentation: Manage the Availability of Linux Virtual Machines.
High Availability using Availability Set
A high availability setup using an availability set must meet the following requirements:
An HA Independent Network Configuration (INC) configuration
The Azure Load Balancer (ALB) in Direct Server Return (DSR) mode
All traffic goes through the primary node. The secondary node remains in standby mode until the primary node fails.
Note: For a NetScaler VPX high availability deployment on Azure cloud to work, users need a floating public IP (PIP) that can be moved between the two VPX nodes. The Azure Load Balancer (ALB) provides that floating PIP, which is moved to the second node automatically in the event of a failover.
In an active-passive deployment, the ALB front-end public IP (PIP) addresses are added as the VIP addresses in each VPX node. In an HA-INC configuration, the VIP addresses are floating and the SNIP addresses are instance specific.
Users can deploy a VPX pair in active-passive high availability mode in two ways by using:
NetScaler VPX standard high availability template: use this option to configure an HA pair with the default option of three subnets and six NICs.
Windows PowerShell commands: use this option to configure an HA pair according to your subnet and NIC requirements.
This section describes how to deploy a VPX pair in an active-passive HA setup by using the Citrix template. If users want to deploy with PowerShell commands, see: Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands.
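For orientation only, the following hedged sketch pairs the two requirements described above: an ALB health probe that polls every 5 seconds, and HA Independent Network Configuration (INC) mode on the VPX nodes. The resource names, the probe port (9000), and the peer NSIP are hypothetical placeholders; the Citrix template configures equivalent settings automatically.
# Azure side sketch: TCP health probe polling every 5 seconds (names and port are illustrative)
az network lb probe create --resource-group rg-vpx --lb-name vpx-alb \
  --name vpx-probe --protocol Tcp --port 9000 --interval 5 --threshold 2
# NetScaler side sketch: add the HA peer with INC enabled (peer NSIP is illustrative)
add ha node 1 10.1.0.6 -inc ENABLED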
Configure HA-INC Nodes by using the Citrix High Availability Template
Users can quickly and efficiently deploy a pair of VPX instances in HA-INC mode by using the standard template. The template creates two nodes, with three subnets and six NICs. The subnets are for management, client, and server-side traffic, and each subnet has two NICs for both of the VPX instances. Users can get the NetScaler 12.1 HA Pair template at the Azure Marketplace by visiting: Azure Marketplace/NetScaler 12.1 (High Availability).
Complete the following steps to launch the template and deploy a high availability VPX pair by using Azure Availability Sets.
From Azure Marketplace, select and initiate the Citrix solution template. The template appears.
Ensure the deployment type is Resource Manager and select Create. The Basics page appears.
Create a Resource Group and select OK. The General Settings page appears.
Type the details and select OK. The Network Setting page appears.
Check the VNet and subnet configurations, edit the required settings, and select OK. The Summary page appears.
Review the configuration and edit accordingly. Select OK to confirm. The Buy page appears.
Select Purchase to complete the deployment.
It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group in the Azure portal to see the configuration details, such as LB rules, back-end pools, health probes, and so on. The high availability pair appears as ns-vpx0 and ns-vpx1. If further modifications are required for the HA setup, such as creating more security rules and ports, users can do that from the Azure portal.
Next, users need to configure the load-balancing virtual server with the ALB's Frontend public IP (PIP) address, on the primary node. To find the ALB PIP, select ALB > Frontend IP configuration. See the Resources section for more information about how to configure the load-balancing virtual server.
Resources: The following links provide additional information related to HA deployment and virtual server configuration:
Configuring High Availability Nodes in Different Subnets
Set up Basic Load Balancing
Related resources:
Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands
Configure GSLB on an Active-Standby High-Availability Setup
High Availability using Availability Zones
Azure Availability Zones are fault-isolated locations within an Azure region, providing redundant power, cooling, and networking and increasing resiliency. Only specific Azure regions support Availability Zones. For more information, see the Azure documentation: Regions and Availability Zones in Azure.
Users can deploy a VPX pair in high availability mode by using the template called “NetScaler 13.0 HA using Availability Zones,” available in Azure Marketplace.
Complete the following steps to launch the template and deploy a high availability VPX pair by using Azure Availability Zones.
From Azure Marketplace, select and initiate the Citrix solution template. Ensure the deployment type is Resource Manager and select Create. The Basics page appears. Enter the details and click OK.
Note: Ensure that an Azure region that supports Availability Zones is selected. For more information about regions that support Availability Zones, see the Azure documentation: Regions and Availability Zones in Azure.
The General Settings page appears. Type the details and select OK. The Network Setting page appears. Check the VNet and subnet configurations, edit the required settings, and select OK. The Summary page appears. Review the configuration and edit accordingly. Select OK to confirm. The Buy page appears. Select Purchase to complete the deployment.
It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group to see the configuration details, such as LB rules, back-end pools, health probes, and so on, in the Azure portal. The high availability pair appears as ns-vpx0 and ns-vpx1. Also, users can see the location under the Location column. If further modifications are required for the HA setup, such as creating more security rules and ports, users can do that from the Azure portal.
Single NIC Multi IP Architecture (One-NIC)
In this deployment type, one network interface (NIC) is associated with multiple IP configurations - static or dynamic public and private IP addresses assigned to it. For more information, refer to the following use cases:
Configure Multiple IP Addresses for a NetScaler VPX Standalone Instance
Configure Multiple IP Addresses for a NetScaler VPX Standalone Instance by using PowerShell Commands
Configure Multiple IP Addresses for a NetScaler VPX Standalone Instance
This section explains how to configure a standalone NetScaler VPX instance with multiple IP addresses, in Azure Resource Manager (ARM). The VPX instance can have one or more NICs attached to it, and each NIC can have one or more static or dynamic public and private IP addresses assigned to it. Users can assign multiple IP addresses as NSIP, VIP, SNIP, and so on. For more information, refer to the Azure documentation: Assign Multiple IP Addresses to Virtual Machines using the Azure Portal. If you want to deploy using PowerShell commands, see Configure Multiple IP Addresses for a NetScaler VPX Standalone Instance by using PowerShell Commands.
Standalone NetScaler VPX with Single NIC Use Case
In this use case, a standalone NetScaler VPX appliance is configured with a single NIC that is connected to a virtual network (VNET). The NIC is associated with three IP configurations (ipconfig), each serving a different purpose, as shown in the table:
IPConfig | Associated with | Purpose
ipconfig1 | Static public IP address; static private IP address | Serves management traffic
ipconfig2 | Static public IP address; static private IP address | Serves client-side traffic
ipconfig3 | Static private IP address | Communicates with back-end servers
Note: IPConfig-3 is not associated with any public IP address.
In a multi-NIC, multi-IP Azure NetScaler VPX deployment, the private IP associated with the primary (first) IPConfig of the primary (first) NIC is automatically added as the management NSIP of the appliance. The remaining private IP addresses associated with the IPConfigs need to be added in the VPX instance as a VIP or SNIP by using the add ns ip command, according to user requirements (a CLI sketch appears at the end of this subsection).
Before starting Deployment
Before users begin deployment, they must create a VPX instance using the steps that follow. For this use case, the NSDoc0330VM VPX instance is created.
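For reference, once the Azure-side IP configurations in the table above exist, the NetScaler-side assignment in Step 2 below can look like the following hedged CLI sketch. Only 192.0.0.5 appears in this use case; the ipconfig3 address (192.0.0.6) and the netmask are hypothetical placeholders.
# sketch only: private IP of ipconfig2 added as a VIP for client-side traffic
add ns ip 192.0.0.5 255.255.255.0 -type VIP
# sketch only: private IP of ipconfig3 added as a SNIP to reach back-end servers (address is illustrative)
add ns ip 192.0.0.6 255.255.255.0 -type SNIP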
Configure Multiple IP Addresses for a NetScaler VPX Instance in Standalone Mode
Add IP addresses to the VM
Configure NetScaler-owned IP addresses
Step 1: Add IP Addresses to the VM
In the portal, click More services > type virtual machines in the filter box, and then click Virtual machines. In the Virtual machines blade, click the VM you want to add IP addresses to. Click Network interfaces in the virtual machine blade that appears, and then select the network interface. In the blade that appears for the selected NIC, click IP configurations. The existing IP configuration that was assigned when the VM was created, ipconfig1, is displayed. For this use case, make sure the IP addresses associated with ipconfig1 are static. Next, create two more IP configurations: ipconfig2 (VIP) and ipconfig3 (SNIP).
To create more IP configurations, click Add. In the Add IP configuration window, enter a Name, specify the allocation method as Static, enter an IP address (192.0.0.5 for this use case), and enable Public IP address.
Note: Before you add a static private IP address, check for IP address availability and make sure the IP address belongs to the same subnet to which the NIC is attached.
Next, click Configure required settings to create a static public IP address for ipconfig2. By default, public IPs are dynamic. To make sure that the VM always uses the same public IP address, create a static Public IP. In the Create public IP address blade, add a Name, click Static under Assignment, and then click OK.
Note: Even when users set the allocation method to static, they cannot specify the actual IP address assigned to the public IP resource. Instead, it gets allocated from a pool of available IP addresses in the Azure location where the resource is created.
Follow the same steps to add one more IP configuration for ipconfig3. A public IP address is not mandatory.
Step 2: Configure NetScaler-owned IP Addresses
Configure the NetScaler-owned IP addresses by using the GUI or the command add ns ip. For more information, refer to: Configuring NetScaler-owned IP Addresses.
For more information about how to deploy a NetScaler VPX instance on Microsoft Azure, see Deploy a NetScaler VPX Instance on Microsoft Azure. For more information about how a NetScaler VPX instance works on Azure, see How a NetScaler VPX Instance Works on Azure.
ARM (Azure Resource Manager) Templates
The GitHub repository for NetScaler ARM (Azure Resource Manager) templates hosts NetScaler custom templates for deploying NetScaler in Microsoft Azure Cloud Services here: NetScaler Azure Templates. All templates in this repository were developed and are maintained by the NetScaler engineering team. Each template in this repository has co-located documentation describing the usage and architecture of the template. The templates attempt to codify the recommended deployment architecture of the NetScaler VPX, to introduce the user to the NetScaler, or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their particular production and testing needs. Most templates require a sufficient subscription to portal.azure.com to create resources and deploy templates.
NetScaler VPX Azure Resource Manager (ARM) templates are designed to ensure an easy and consistent way of deploying standalone NetScaler VPX. These templates increase reliability and system availability with built-in redundancy. These ARM templates support Bring Your Own License (BYOL) or Hourly based selections.
Choice of selection is either mentioned in the template description or offered during template deployment. For more information about how to provision a NetScaler VPX instance on Microsoft Azure using ARM (Azure Resource Manager) templates, visit: Citrix Azure ADC Templates. Prerequisites Users need some prerequisite knowledge before deploying a NetScaler VPX instance on Azure: Familiarity with Azure terminology and network details. For information, see the Azure terminology in the preceding section. Knowledge of a NetScaler appliance. For detailed information about the NetScaler appliance, see NetScaler 13.0. Knowledge of NetScaler networking. See the Networking topic here: Networking. Limitations Running the NetScaler VPX load balancing solution on ARM imposes the following limitations: The Azure architecture does not accommodate support for the following NetScaler features: Clustering IPv6 Gratuitous ARP (GARP) L2 Mode (bridging). Transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP. Tagged VLAN Dynamic Routing Virtual MAC USIP Jumbo Frames If you think you might have to shut down and temporarily deallocate the NetScaler VPX virtual machine at any time, assign a static Internal IP address while creating the virtual machine. If you do not assign a static internal IP address, Azure might assign the virtual machine a different IP address each time it restarts, and the virtual machine might become inaccessible. In an Azure deployment, only the following NetScaler VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler VPX Data Sheet. If a NetScaler VPX instance with a model number higher than VPX 3000 is used, the network throughput might not be the same as specified by the instance’s license. However, other features, such as SSL throughput and SSL transactions per second, might improve. The “deployment ID” that is generated by Azure during virtual machine provisioning is not visible to the user in ARM. Users cannot use the deployment ID to deploy a NetScaler VPX appliance on ARM. The NetScaler VPX instance supports 20 Mb/s throughput and standard edition features when it is initialized. For a XenApp and XenDesktop deployment, a VPN virtual server on a VPX instance can be configured in the following modes: Basic mode, where the ICAOnly VPN virtual server parameter is set to ON. The Basic mode works fully on an unlicensed NetScaler VPX instance. Smart-Access mode, where the ICAOnly VPN virtual server parameter is set to OFF. The Smart-Access mode works for only 5 NetScaler AAA session users on an unlicensed NetScaler VPX instance. Note: To configure the Smart Control feature, users must apply a Premium license to the NetScaler VPX instance. Azure-VPX Supported Models and Licensing In an Azure deployment, only the following NetScaler VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler VPX Data Sheet. A NetScaler VPX instance on Azure requires a license. The following licensing options are available for NetScaler VPX instances running on Azure. Subscription-based licensing: NetScaler VPX appliances are available as paid instances on Azure Marketplace. Subscription based licensing is a pay-as-you-go option. Users are charged hourly. 
The following VPX models and license types are available on Azure Marketplace:
VPX Model | License Type
VPX10 | Standard, Advanced, Premium
VPX200 | Standard, Advanced, Premium
VPX1000 | Standard, Advanced, Premium
VPX3000 | Standard, Advanced, Premium
Bring your own license (BYOL): If you bring your own license (BYOL), see the VPX Licensing Guide at: CTX122426/NetScaler VPX and CloudBridge VPX Licensing Guide. Users have to:
Use the licensing portal within MyCitrix to generate a valid license.
Upload the license to the instance.
NetScaler VPX Check-In/Check-Out licensing: For more information, see: NetScaler VPX Check-in and Check-out Licensing.
Starting with NetScaler release 12.0 56.20, VPX Express for on-premises and cloud deployments does not require a license file. For more information on NetScaler VPX Express, see the “NetScaler VPX Express license” section in Licensing Overview.
Note: Regardless of the subscription-based hourly license bought from Azure Marketplace, in rare cases, the NetScaler VPX instance deployed on Azure might come up with a default NetScaler license. This happens due to issues with the Azure Instance Metadata Service (IMDS). Do a warm restart before making any configuration change on the NetScaler VPX instance, to enable the correct NetScaler VPX license.
Port Usage Guidelines
Users can configure more inbound and outbound rules in the NSG while creating the NetScaler VPX instance or after the virtual machine is provisioned. Each inbound and outbound rule is associated with a public port and a private port. Before you configure NSG rules, note the following guidelines regarding the port numbers you can use:
The NetScaler VPX instance reserves the following ports. Users cannot define these as private ports when using the Public IP address for requests from the internet. Ports 21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000. However, if users want internet-facing services such as the VIP to use a standard port (for example, port 443), users have to create a port mapping by using the NSG. The standard port is then mapped to a different port that is configured on the NetScaler VPX for this VIP service. For example, a VIP service might be running on port 8443 on the VPX instance but be mapped to public port 443. So, when the user accesses port 443 through the Public IP, the request is directed to private port 8443.
The Public IP address does not support protocols in which port mapping is opened dynamically, such as passive FTP or ALG.
High availability does not work for traffic that uses a public IP address (PIP) associated with a VPX instance, instead of a PIP configured on the Azure load balancer. For more information, see: Configure a High-Availability Setup with a Single IP Address and a Single NIC.
In a NetScaler Gateway deployment, users need not configure a SNIP address, because the NSIP can be used as a SNIP when no SNIP is configured. Users must configure the VIP address by using the NSIP address and some nonstandard port number. For call-back configuration on the back-end server, the VIP port number has to be specified along with the VIP URL (for example, url: port).
Note: In Azure Resource Manager, a NetScaler VPX instance is associated with two IP addresses - a public IP address (PIP) and an internal IP address. While the external traffic connects to the PIP, the internal IP address or the NSIP is non-routable. To configure a VIP in VPX, use the internal IP address (NSIP) and any of the free ports available.
Do not use the PIP to configure a VIP. For example, if the NSIP of a NetScaler VPX instance is 10.1.0.3 and an available free port is 10022, then users can configure a VIP by providing the 10.1.0.3:10022 (NSIP address + port) combination.
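The port-mapping guidance above (public port 443 directed to private port 8443 on the VPX) is commonly expressed on the Azure side as a load balancer inbound NAT rule, with the NSG allowing the traffic. The following Azure CLI sketch is illustrative only; the load balancer, frontend, and resource group names are hypothetical placeholders.
# sketch only: map public port 443 on the load balancer frontend to private port 8443 on the VPX
az network lb inbound-nat-rule create --resource-group rg-vpx --lb-name vpx-alb \
  --name https-to-8443 --protocol Tcp --frontend-port 443 --backend-port 8443 \
  --frontend-ip-name vpx-frontend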
  15. Overview NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments. As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required. NetScaler ADC VPX The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms. This deployment guide focuses on NetScaler ADC VPX on Amazon Web Services. Amazon Web Services Amazon Web Services (AWS) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS) and packaged software as a service (SaaS) offerings. AWS services offer tools such as compute power, database storage, and content delivery services. AWS offers the following essential services: AWS Compute Services Migration Services Storage Database Services Management Tools Security Services Analytics Networking Messaging Developer Tools Mobile Services AWS Terminology Here is a brief description of key terms used in this document that users must be familiar with: Elastic Network Interface (ENI) – A virtual network interface that users can attach to an instance in a Virtual Private Cloud (VPC). Elastic IP (EIP) address – A static, public IPv4 address that users have allocated in Amazon EC2 or Amazon VPC and then attached to an instance. Elastic IP addresses are associated with user accounts, not a specific instance. They are elastic because users can easily allocate, attach, detach, and free them as their needs change. Subnet – A segment of the IP address range of a VPC with which EC2 instances can be attached. Users can create subnets to group instances according to security and operational needs. Virtual Private Cloud (VPC) – A web service for provisioning a logically isolated section of the AWS cloud where users can launch AWS resources in a virtual network that they define. Here is a brief description of other terms used in this document that users should be familiar with: Amazon Machine Image (AMI) – A machine image, which provides the information required to launch an instance, which is a virtual server in the cloud. Elastic Block Store – Provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Simple Storage Service (S3) – Storage for the Internet. It is designed to make web-scale computing easier for developers. Elastic Compute Cloud (EC2) – A web service that provides secure, resizable compute capacity in the cloud. 
It is designed to make web-scale cloud computing easier for developers.
Elastic Kubernetes Service (EKS) – Amazon EKS is a managed service that makes it easy for users to run Kubernetes on AWS without needing to stand up or maintain their own Kubernetes control plane. Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability.
Application Load Balancing (ALB) – Amazon ALB operates at layer 7 of the OSI stack, so it is employed when users want to route or select traffic based on elements of the HTTP or HTTPS connection, whether host-based or path-based. The ALB is context-aware and can direct requests based on any single variable. Applications are load balanced based on their particular behavior, not solely on server (operating system or virtualization layer) information.
Elastic Load Balancing (ALB/ELB/NLB) – Amazon ELB distributes incoming application traffic across multiple EC2 instances, in multiple Availability Zones. This increases the fault tolerance of user applications.
Network Load Balancing (NLB) – Amazon NLB operates at layer 4 of the OSI stack and below and is not designed to consider anything at the application layer such as content type, cookie data, custom headers, user location, or application behavior. It is context-less, caring only about the network-layer information contained within the packets it is directing. It distributes traffic based on network variables such as IP address and destination ports.
Instance type – Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give users the flexibility to choose the appropriate mix of resources for their applications.
Identity and Access Management (IAM) – An AWS identity with permission policies that determine what the identity can and cannot do in AWS. Users can use an IAM role to enable applications running on an EC2 instance to securely access their AWS resources. An IAM role is required for deploying VPX instances in a high-availability setup.
Internet Gateway – Connects a network to the Internet. Users can route traffic for IP addresses outside their VPC to the Internet gateway.
Key pair – A set of security credentials with which users prove their identity electronically. A key pair consists of a private key and a public key.
Route table – A set of routing rules that controls the traffic leaving any subnet that is associated with the route table. Users can associate multiple subnets with a single route table, but a subnet can be associated with only one route table at a time.
Auto Scale Groups – A web service to launch or terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks.
CloudFormation – A service for writing or changing templates that creates and deletes related AWS resources together as a unit.
Web Application Firewall (WAF) – A WAF is defined as a security solution protecting the web application layer in the OSI network model. A WAF does not depend on the application it is protecting. This document focuses on the exposition and evaluation of the security methods and functions provided specifically by NetScaler WAF.
- Bot – An autonomous device, program, or piece of software on a network (especially the internet) that can interact with computer systems or users to run commands, reply to messages, or perform routine tasks. A bot is a software program on the internet that performs repetitive tasks. Some bots can be good, while others can have a huge negative impact on a website or application.

Sample NetScaler WAF on AWS Architecture

The preceding image shows a virtual private cloud (VPC) with default parameters that builds a NetScaler WAF environment in the AWS Cloud. This architecture assumes the use of an AWS CloudFormation Template and an AWS Quick Start Guide, which can be found here: GitHub/AWS-Quickstart/Quickstart-NetScaler-ADC-VPX. In a production deployment, the following components are set up for the NetScaler WAF environment:

- A VPC that spans two Availability Zones, configured with two public and four private subnets, according to AWS best practices, to provide you with your own virtual network on AWS with a /16 Classless Inter-Domain Routing (CIDR) block (a network with 65,536 private IP addresses). *
- Two instances of NetScaler WAF (primary and secondary), one in each Availability Zone.
- Three security groups, one for each network interface (Management, Client, Server), that act as virtual firewalls to control the traffic for their associated instances.
- Three subnets for each instance: one for management, one for client, and one for the back-end server.
- An internet gateway attached to the VPC, and a public subnet route table associated with the public subnets to allow access to the internet. This gateway is used by the WAF host to send and receive traffic. For more information on internet gateways, see: Internet Gateways. *
- Five route tables: one public route table associated with the client subnets of both the primary and secondary WAF, and four private route tables, one linked to each of the four private subnets (the management and server-side subnets of the primary and secondary WAF). *
- AWS Lambda, which in this deployment takes care of the following: configuring the two WAF instances, one in each Availability Zone, in HA mode; and creating a sample WAF profile and pushing the corresponding WAF configuration.
- AWS Identity and Access Management (IAM) to securely control access to AWS services and resources for your users. By default, the CloudFormation Template (CFT) creates the required IAM role. However, users can provide their own IAM role for NetScaler ADC instances.
- In the public subnets, two managed Network Address Translation (NAT) gateways to allow outbound internet access for resources in the public subnets.

Note: The CFT WAF template that deploys the NetScaler WAF into an existing VPC skips the components marked by asterisks and prompts users for their existing VPC configuration. Backend servers are not deployed by the CFT.

Logical Flow of NetScaler WAF on AWS

The Web Application Firewall can be installed as either a Layer 3 network device or a Layer 2 network bridge between customer servers and customer users, usually behind the customer company's router or firewall. It must be installed in a location where it can intercept traffic between the web servers that users want to protect and the hub or switch through which users access those web servers. Users then configure the network to send requests to the Web Application Firewall instead of directly to their web servers, and responses to the Web Application Firewall instead of directly to their users.
The Web Application Firewall filters that traffic before forwarding it to its final destination, using both its internal rule set and the user additions and modifications. It blocks or renders harmless any activity that it detects as harmful, and then forwards the remaining traffic to the web server. The preceding image provides an overview of the filtering process. Note: The diagram omits the application of a policy to incoming traffic. It illustrates a security configuration in which the policy is to process all requests. Also, in this configuration, a signatures object has been configured and associated with the profile, and security checks have been configured in the profile. As the diagram shows, when a user requests a URL on a protected website, the Web Application Firewall first examines the request to ensure that it does not match a signature. If the request matches a signature, the Web Application Firewall either displays the error object (a webpage that is located on the Web Application Firewall appliance and which users can configure by using the imports feature) or forwards the request to the designated error URL (the error page). If a request passes signature inspection, the Web Application Firewall applies the request security checks that have been enabled. The request security checks verify that the request is appropriate for the user website or web service and does not contain material that might pose a threat. For example, security checks examine the request for signs indicating that it might be of an unexpected type, request unexpected content, or contain unexpected and possibly malicious web form data, SQL commands, or scripts. If the request fails a security check, the Web Application Firewall either sanitizes the request and then sends it back to the NetScaler ADC appliance (or NetScaler ADC virtual appliance), or displays the error object. If the request passes the security checks, it is sent back to the NetScaler ADC appliance, which completes any other processing and forwards the request to the protected web server. When the website or web service sends a response to the user, the Web Application Firewall applies the response security checks that have been enabled. The response security checks examine the response for leaks of sensitive private information, signs of website defacement, or other content that should not be present. If the response fails a security check, the Web Application Firewall either removes the content that should not be present or blocks the response. If the response passes the security checks, it is sent back to the NetScaler ADC appliance, which forwards it to the user. Cost and Licensing Users are responsible for the cost of the AWS services used while running AWS deployments. The AWS CloudFormation templates that can be used for this deployment include configuration parameters that users can customize as necessary. Some of those settings, such as instance type, affect the cost of deployment. For cost estimates, users should refer to the pricing pages for each AWS service they are using. Prices are subject to change. A NetScaler ADC WAF on AWS requires a license. To license NetScaler WAF, users must place the license key in an S3 bucket and specify its location when they launch the deployment. Note: When users elect the Bring your own license (BYOL) licensing model, they should ensure that they have an AppFlow feature enabled. For more information on BYOL licensing, see: AWS Marketplace/NetScaler ADC VPX - Customer Licensed . 
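Because the BYOL path expects the license key to be readable from S3, some users stage the file with a short script instead of the console. The following is a minimal, illustrative boto3 sketch; the bucket name and key prefix are placeholders, not values defined by this deployment.

```python
import boto3

# Placeholder names for illustration only; use the bucket and key that will be
# referenced when launching the deployment.
BUCKET = "my-netscaler-licenses"
LICENSE_FILE = "netscaler-waf.lic"

s3 = boto3.client("s3")

# Upload the license key so its S3 location can be specified at launch time.
s3.upload_file(LICENSE_FILE, BUCKET, f"licenses/{LICENSE_FILE}")
print(f"License staged at s3://{BUCKET}/licenses/{LICENSE_FILE}")
```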
The following licensing options are available for NetScaler ADC WAF running on AWS. Users can choose an AMI (Amazon Machine Image) based on a single factor such as throughput.

License model: Pay as You Go (PAYG, for the production licenses) or Bring Your Own License (BYOL, for the Customer Licensed AMI - NetScaler ADC Pooled Capacity). For more information on NetScaler ADC Pooled Capacity, see: NetScaler ADC Pooled Capacity. For BYOL, there are three licensing modes:

- Configure NetScaler ADC Pooled Capacity: Configure NetScaler ADC Pooled Capacity
- NetScaler ADC VPX Check-in and Check-out Licensing (CICO): NetScaler ADC VPX Check-in and Check-out Licensing
  Tip: If users elect CICO licensing with the VPX-200, VPX-1000, VPX-3000, VPX-5000, or VPX-8000 application platform type, they should ensure that the same throughput license is present in their ADM licensing server.
- NetScaler ADC virtual CPU Licensing: NetScaler ADC virtual CPU Licensing

Note: If users want to dynamically modify the bandwidth of a VPX instance, they should elect a BYOL option, for example NetScaler ADC Pooled Capacity, where they can allocate licenses from NetScaler ADM, or they can check out licenses from NetScaler ADC instances according to the minimum and maximum capacity of the instance, on demand and without a restart. A restart is required only if users want to change the license edition.

Throughput: 200 Mbps or 1 Gbps

Bundle: Premium

Deployment Options

This deployment guide provides two deployment options.

The first option is to deploy using a Quick Start Guide format and the following options:

- Deploy NetScaler WAF into a new VPC (end-to-end deployment). This option builds a new AWS environment consisting of the VPC, subnets, security groups, and other infrastructure components, and then deploys NetScaler WAF into this new VPC.
- Deploy NetScaler WAF into an existing VPC. This option provisions NetScaler WAF in the user's existing AWS infrastructure.

The second option is to deploy using WAF StyleBooks using NetScaler ADM.

Deployment Steps using a Quick Start Guide

Step 1: Sign in to the User AWS Account

Create an Amazon account if necessary, then sign in to the account at AWS: AWS with an IAM (Identity and Access Management) user role that has the necessary permissions. Use the region selector in the navigation bar to choose the AWS Region where users want to deploy High Availability across AWS Availability Zones. Ensure that the user AWS account is configured correctly; refer to the Technical Requirements section of this document for more information.

Step 2: Subscribe to the NetScaler WAF AMI

This deployment requires a subscription to the AMI for NetScaler WAF in the AWS Marketplace. Sign in to the user AWS account. Open the page for the NetScaler WAF offering by choosing one of the links in the following table. When users launch the Quick Start Guide to deploy NetScaler WAF in Step 3 below, they use the NetScaler WAF Image parameter to select the bundle and throughput option that matches their AMI subscription. The following list shows the AMI options and corresponding parameter settings. The VPX AMI instance requires a minimum of 2 virtual CPUs and 2 GB of memory.

Note: To retrieve the AMI ID, refer to the NetScaler Products on AWS Marketplace page on GitHub: NetScaler Products on AWS Marketplace.
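If users prefer to look up the AMI ID programmatically rather than from the Marketplace or GitHub page, a boto3 query along the following lines can help. This is only a sketch: the name filter pattern is an assumption and should be adjusted to the exact Marketplace listing name.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# Search AWS Marketplace images whose name mentions the ADC/WAF product.
# The filter values below are illustrative assumptions; confirm the exact
# product name against the Marketplace listing or the GitHub page above.
images = ec2.describe_images(
    Owners=["aws-marketplace"],
    Filters=[{"Name": "name", "Values": ["Citrix ADC*", "NetScaler*"]}],
)["Images"]

# Print newest images first so the current AMI ID is easy to spot.
for image in sorted(images, key=lambda i: i["CreationDate"], reverse=True):
    print(image["ImageId"], image["Name"])
```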
AWS Marketplace AMI

- NetScaler Web Application Firewall (WAF) - 200 Mbps: NetScaler Web App Firewall (WAF) - 200 Mbps
- NetScaler Web Application Firewall (WAF) - 1000 Mbps: NetScaler Web App Firewall (WAF) - 1000 Mbps

On the AMI page, choose Continue to Subscribe. Review the terms and conditions for software usage, and then choose Accept Terms.

Note: Users receive a confirmation page, and an email confirmation is sent to the account owner. For detailed subscription instructions, see Getting Started in the AWS Marketplace Documentation: Getting Started.

When the subscription process is complete, exit out of AWS Marketplace without further action. Do not provision the software from AWS Marketplace—users will deploy the AMI with the Quick Start Guide.

Step 3: Launch the Quick Start Guide to Deploy the AMI

Sign in to the user AWS account, and choose one of the following options to launch the AWS CloudFormation template. For help with choosing an option, see the deployment options earlier in this guide.

- Deploy NetScaler VPX into a new VPC on AWS using one of the AWS CloudFormation templates located here:
  Citrix/Citrix-ADC-AWS-CloudFormation/Templates/High-Availability/Across-Availability-Zone
  Citrix/Citrix-ADC-AWS-CloudFormation/Templates/High-Availability/Same-Availability-Zone
- Deploy NetScaler WAF into a new or existing VPC on AWS using the AWS Quickstart template located here: AWS-Quickstart/Quickstart-Citrix-ADC-WAF

Important: If users are deploying NetScaler WAF into an existing VPC, they must ensure that their VPC spans two Availability Zones, with one public and two private subnets in each Availability Zone for the workload instances, and that the subnets are not shared. This deployment guide does not support shared subnets; see Working with Shared VPCs: Working with Shared VPCs. These subnets require NAT gateways in their route tables to allow the instances to download packages and software without exposing them to the internet. For more information about NAT gateways, see: NAT Gateways. Configure the subnets so that their CIDR ranges do not overlap. Also, users should ensure that the domain name option in the DHCP options is configured as explained in the Amazon VPC documentation found here: DHCP Options Sets: DHCP Options Sets. Users are prompted for their VPC settings when they launch the Quick Start Guide.

Each deployment takes about 15 minutes to complete.

Check the AWS Region that is displayed in the upper-right corner of the navigation bar, and change it if necessary. This is where the network infrastructure for NetScaler WAF will be built. The template is launched in the US East (Ohio) Region by default.

Note: This deployment includes NetScaler WAF, which isn't currently supported in all AWS Regions. For a current list of supported Regions, see the AWS Service Endpoints: AWS Service Endpoints.

On the Select Template page, keep the default setting for the template URL, and then choose Next.

On the Specify Details page, specify a stack name of the user's choosing. Review the parameters for the template. Provide values for the parameters that require input. For all other parameters, review the default settings and customize them as necessary. In the following tables, parameters are listed by category and described for the deployment option: Parameters for deploying NetScaler WAF into a new or existing VPC (Deployment Option 1). When users finish reviewing and customizing the parameters, they should choose Next.
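For users who prefer to drive the same template from code rather than through the console, a minimal boto3 sketch might look like the following. The template URL is a placeholder and the parameter values are examples only; use the parameter names, defaults, and requirements documented in the tables that follow.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-2")

# Placeholder URL; point this at the Quick Start master template in S3.
TEMPLATE_URL = "https://example-bucket.s3.amazonaws.com/quickstart-citrix-adc-waf/templates/master.template"

cfn.create_stack(
    StackName="netscaler-waf-ha",
    TemplateURL=TEMPLATE_URL,
    Parameters=[
        # Example values; the full parameter set is described in the tables below.
        {"ParameterKey": "PrimaryAvailabilityZone", "ParameterValue": "us-east-2a"},
        {"ParameterKey": "SecondaryAvailabilityZone", "ParameterValue": "us-east-2b"},
        {"ParameterKey": "VPCCIDR", "ParameterValue": "10.0.0.0/16"},
        {"ParameterKey": "RestrictedSSHCIDR", "ParameterValue": "203.0.113.0/24"},
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-keypair"},
    ],
    # The console asks users to acknowledge IAM resource creation and macro
    # auto-expansion; these capabilities are the API equivalents (the template
    # may additionally require CAPABILITY_NAMED_IAM).
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
)

# Block until the stack reaches CREATE_COMPLETE, mirroring the console workflow.
cfn.get_waiter("stack_create_complete").wait(StackName="netscaler-waf-ha")
```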
Parameters for Deploying NetScaler WAF into a new VPC

VPC Network Configuration

For reference information on this deployment, refer to the CFT template here: AWS-Quickstart/Quickstart-Citrix-ADC-WAF/Templates.

| Parameter label (name) | Default | Description |
| --- | --- | --- |
| Primary Availability Zone (PrimaryAvailabilityZone) | Requires input | The Availability Zone for the Primary NetScaler WAF deployment |
| Secondary Availability Zone (SecondaryAvailabilityZone) | Requires input | The Availability Zone for the Secondary NetScaler WAF deployment |
| VPC CIDR (VPCCIDR) | 10.0.0.0/16 | The CIDR block for the VPC. Must be a valid IP CIDR range of the form x.x.x.x/x. |
| Remote SSH CIDR IP (Management) (RestrictedSSHCIDR) | Requires input | The IP address range that can SSH to the EC2 instance (port: 22). For example, using 0.0.0.0/0 enables all IP addresses to access the user instance using SSH or RDP. Note: Authorize only a specific IP address or range of addresses, because an open range is unsafe in production. |
| Remote HTTP CIDR IP (Client) (RestrictedWebAppCIDR) | 0.0.0.0/0 | The IP address range that can HTTP to the EC2 instance (port: 80) |
| Primary Management Private Subnet CIDR (PrimaryManagementPrivateSubnetCIDR) | 10.0.1.0/24 | The CIDR block for the Primary Management Subnet located in Availability Zone 1. |
| Primary Management Private IP (PrimaryManagementPrivateIP) | — | Private IP assigned to the Primary Management ENI (last octet must be between 5 and 254), from the Primary Management Subnet CIDR. |
| Primary Client Public Subnet CIDR (PrimaryClientPublicSubnetCIDR) | 10.0.2.0/24 | The CIDR block for the Primary Client Subnet located in Availability Zone 1. |
| Primary Client Private IP (PrimaryClientPrivateIP) | — | Private IP assigned to the Primary Client ENI (last octet must be between 5 and 254), from the Primary Client Subnet CIDR. |
| Primary Server Private Subnet CIDR (PrimaryServerPrivateSubnetCIDR) | 10.0.3.0/24 | The CIDR block for the Primary Server Subnet located in Availability Zone 1. |
| Primary Server Private IP (PrimaryServerPrivateIP) | — | Private IP assigned to the Primary Server ENI (last octet must be between 5 and 254), from the Primary Server Subnet CIDR. |
| Secondary Management Private Subnet CIDR (SecondaryManagementPrivateSubnetCIDR) | 10.0.4.0/24 | The CIDR block for the Secondary Management Subnet located in Availability Zone 2. |
| Secondary Management Private IP (SecondaryManagementPrivateIP) | — | Private IP assigned to the Secondary Management ENI (last octet must be between 5 and 254), allocated from the Secondary Management Subnet CIDR. |
| Secondary Client Public Subnet CIDR (SecondaryClientPublicSubnetCIDR) | 10.0.5.0/24 | The CIDR block for the Secondary Client Subnet located in Availability Zone 2. |
| Secondary Client Private IP (SecondaryClientPrivateIP) | — | Private IP assigned to the Secondary Client ENI (last octet must be between 5 and 254), allocated from the Secondary Client Subnet CIDR. |
| Secondary Server Private Subnet CIDR (SecondaryServerPrivateSubnetCIDR) | 10.0.6.0/24 | The CIDR block for the Secondary Server Subnet located in Availability Zone 2. |
| Secondary Server Private IP (SecondaryServerPrivateIP) | — | Private IP assigned to the Secondary Server ENI (last octet must be between 5 and 254), allocated from the Secondary Server Subnet CIDR. |
| VPC Tenancy attribute (VPCTenancy) | default | The allowed tenancy of instances launched into the VPC. Choose Dedicated tenancy to launch EC2 instances dedicated to a single customer. |
Bastion host configuration

| Parameter label (name) | Default | Description |
| --- | --- | --- |
| Bastion Host required (LinuxBastionHostEIP) | No | By default, no bastion host is configured. If users want to opt for a sandbox deployment, selecting "yes" from the menu deploys a Linux bastion host in the public subnet with an EIP that gives users access to the components in the private and public subnets. |

NetScaler WAF Configuration

| Parameter label (name) | Default | Description |
| --- | --- | --- |
| Key pair name (KeyPairName) | Requires input | A public/private key pair, which allows users to connect securely to the instance after it launches. This is the key pair users created in their preferred AWS Region; see the Technical Requirements section. |
| NetScaler ADC Instance Type (CitrixADCInstanceType) | m4.xlarge | The EC2 instance type to use for the ADC instances. Ensure that the chosen instance type aligns with the instance types available in the AWS Marketplace, or else the CFT might fail. |
| NetScaler ADC AMI ID (CitrixADCImageID) | — | The AWS Marketplace AMI to be used for the NetScaler WAF deployment. This must match the AMI users subscribed to in Step 2. |
| NetScaler ADC VPX IAM role (iam:GetRole) | — | This template: AWS-Quickstart/Quickstart-Citrix-ADC-VPX/Templates creates the IAM role and the instance profile required for NetScaler ADC VPX. If left empty, the CFT creates the required IAM role. |
| Client Public IP (EIP) (ClientPublicEIP) | No | Select "Yes" if users want to assign a public EIP to the client network interface. Otherwise, users still have the option of assigning one later after the deployment, if necessary. |

Pooled Licensing configuration

| Parameter label (name) | Default | Description |
| --- | --- | --- |
| ADM Pooled Licensing | No | If choosing the BYOL option for licensing, select yes from the list. This allows users to upload their already purchased licenses. Before users begin, they should Configure NetScaler ADC Pooled Capacity to ensure ADM pooled licensing is available; see: Configure NetScaler ADC Pooled Capacity. |
| Reachable ADM / ADM Agent IP | Requires input | For the Customer Licensed option, whether users deploy NetScaler ADM on-premises or an agent in the cloud, make sure that a reachable ADM IP is available; it is then used as an input parameter. |
| Licensing Mode | Optional | Users can choose from the three licensing modes: Configure NetScaler ADC Pooled Capacity: Configure NetScaler ADC Pooled Capacity; NetScaler ADC VPX Check-in and Check-out Licensing (CICO): NetScaler ADC VPX Check-in and Check-out Licensing; NetScaler ADC virtual CPU Licensing: NetScaler ADC virtual CPU Licensing |
| License Bandwidth in Mbps | 0 Mbps | Applies only if the licensing mode is Pooled-Licensing. It allocates an initial license bandwidth in Mbps after the BYOL ADCs are created. It must be a multiple of 10 Mbps. |
| License Edition | Premium | License Edition for the Pooled Capacity licensing mode is Premium. |
| Appliance Platform Type | Optional | Choose the required Appliance Platform Type, only if users opt for the CICO licensing mode. The available options are VPX-200, VPX-1000, VPX-3000, VPX-5000, and VPX-8000. |
| License Edition | Premium | License Edition for vCPU-based licensing is Premium. |
AWS Quick Start Guide Configuration

Note: We recommend that users keep the default settings for the following two parameters, unless they are customizing the Quick Start Guide templates for their own deployment projects. Changing the settings of these parameters will automatically update code references to point to a new Quick Start Guide location. For more details, see the AWS Quick Start Guide Contributor's Guide located here: AWS Quick Starts/Option 1 - Adopt a Quick Start.

| Parameter label (name) | Default | Description |
| --- | --- | --- |
| Quick Start Guide S3 bucket name (QSS3BucketName) | aws-quickstart | The S3 bucket users created for their copy of Quick Start Guide assets, if users decide to customize or extend the Quick Start Guide for their own use. The bucket name can include numbers, lowercase letters, uppercase letters, and hyphens, but should not start or end with a hyphen. |
| Quick Start Guide S3 key prefix (QSS3KeyPrefix) | quickstart-citrix-adc-vpx/ | The S3 key name prefix, from Object Key and Metadata: Object Key and Metadata, used to simulate a folder for the user copy of Quick Start Guide assets, if users decide to customize or extend the Quick Start Guide for their own use. This prefix can include numbers, lowercase letters, uppercase letters, hyphens, and forward slashes. |

On the Options page, users can specify a Resource Tag or key-value pair for resources in the stack and set advanced options. For more information on Resource Tags, see: Resource Tag. For more information on setting AWS CloudFormation stack options, see: Setting AWS CloudFormation Stack Options. When users are done, they should choose Next.

On the Review page, review and confirm the template settings. Under Capabilities, select the two check boxes to acknowledge that the template creates IAM resources and that it might require the capability to auto-expand macros.

Choose Create to deploy the stack. Monitor the status of the stack. When the status is CREATE_COMPLETE, the NetScaler WAF instance is ready. Use the URLs displayed in the Outputs tab for the stack to view the resources that were created.

Step 4: Test the Deployment

We refer to the instances in this deployment as primary and secondary. Each instance has different IP addresses associated with it. When the Quick Start has been deployed successfully, traffic goes through the primary NetScaler WAF instance configured in Availability Zone 1. During failover conditions, when the primary instance does not respond to client requests, the secondary WAF instance takes over. The Elastic IP address of the virtual IP address of the primary instance migrates to the secondary instance, which takes over as the new primary instance.

In the failover process, NetScaler WAF does the following:

- NetScaler WAF checks the virtual servers that have IP sets attached to them.
- NetScaler WAF finds the IP address that has an associated public IP address, from the two IP addresses that the virtual server is listening on: one that is directly attached to the virtual server, and one that is attached through the IP set.
- NetScaler WAF reassociates the public Elastic IP address to the private IP address that belongs to the new primary virtual IP address.
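The appliance performs this Elastic IP move itself through the AWS APIs. Purely to illustrate the mechanics, the equivalent calls expressed in boto3 look roughly like the sketch below; the allocation, association, interface, and IP values are placeholders, not outputs of this deployment.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# Placeholders for the existing EIP association and the new primary's client ENI.
ALLOCATION_ID = "eipalloc-0123456789abcdef0"
OLD_ASSOCIATION_ID = "eipassoc-0123456789abcdef0"
NEW_PRIMARY_ENI = "eni-0fedcba9876543210"
NEW_PRIMARY_VIP = "10.0.5.10"  # private IP backing the virtual server on the new primary

# Detach the public EIP from the failed primary ...
ec2.disassociate_address(AssociationId=OLD_ASSOCIATION_ID)

# ... and re-associate it with the private VIP on the new primary instance.
ec2.associate_address(
    AllocationId=ALLOCATION_ID,
    NetworkInterfaceId=NEW_PRIMARY_ENI,
    PrivateIpAddress=NEW_PRIMARY_VIP,
    AllowReassociation=True,
)
```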
To validate the deployment, perform the following:

- Connect to the primary instance, for example by using a proxy server, a jump host (a Linux/Windows/FW instance running in AWS, or the bastion host), or another device reachable in that VPC, or by using Direct Connect if dealing with on-premises connectivity.
- Perform a trigger action to force failover and check whether the secondary instance takes over.

Tip: To further validate the configuration with respect to NetScaler WAF, run the following command after connecting to the primary NetScaler WAF instance: sh appfw profile QS-Profile

Connect to NetScaler WAF HA Pair using Bastion Host

If users opted for the sandbox deployment (for example, as part of the CFT, users opted to configure a bastion host), a Linux bastion host deployed in a public subnet is configured to access the WAF interfaces.

In the AWS CloudFormation console, which is accessed by signing in here: Sign in, choose the master stack, and on the Outputs tab, find the values of the LinuxBastionHostEIP1, PrivateManagementPrivateNSIP, and PrimaryADCInstanceID keys; they are used in the later steps to SSH into the ADC.

Choose Services. On the Compute tab, select EC2. Under Resources, choose Running Instances. On the Description tab of the primary WAF instance, note the IPv4 public IP address. Users need that IP address to construct the SSH command.

To store the key in the user keychain, run the command ssh-add -K [your-key-pair].pem. On Linux, users might need to omit the -K flag.

Log in to the bastion host using the following command, using the value for LinuxBastionHostEIP1 that users noted in step 1: ssh -A ubuntu@[LinuxBastionHostEIP1]

From the bastion host, users can connect to the primary WAF instance by using SSH: ssh nsroot@[Primary Management Private NSIP] Password: [Primary ADC Instance ID]

Now users are connected to the primary NetScaler WAF instance. To see the available commands, users can run the help command. To view the current HA configuration, users can run the show HA node command.
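As an alternative to the interactive SSH session, the same HA status can be read over the NetScaler NITRO REST API. The sketch below is illustrative only: the endpoint path and headers follow common NITRO conventions, but the resource fields should be verified against the NITRO reference for the deployed release, and the NSIP and credentials shown are placeholders.

```python
import requests

NSIP = "10.0.1.10"                 # primary management NSIP (placeholder)
USER = "nsroot"
PASSWORD = "i-0123456789abcdef0"   # the instance ID serves as the initial password in this deployment

# Query HA node status via the NITRO configuration API. HTTP is shown for brevity;
# prefer HTTPS with certificate validation in production.
resp = requests.get(
    f"http://{NSIP}/nitro/v1/config/hanode",
    headers={"X-NITRO-USER": USER, "X-NITRO-PASS": PASSWORD},
    timeout=10,
)
resp.raise_for_status()

# Print whatever node attributes are returned; field names may vary by release.
for node in resp.json().get("hanode", []):
    print(node.get("id"), node.get("ipaddress"), node.get("state"))
```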
Multi-Site Management – Single Pane of Glass for instances across Multi-Site data centers. With the NetScaler ADM Service, users can manage and monitor NetScaler ADCs that are in various types of deployments. Users have one-stop management for NetScaler ADCs deployed on-premises and in the cloud. Operational Efficiency – Optimized and automated way to achieve higher operational productivity. With the NetScaler ADM Service, user operational costs are reduced by saving user time, money, and resources on maintaining and upgrading the traditional hardware deployments. How NetScaler ADM Service Works NetScaler ADM Service is available as a service on the NetScaler Cloud. After users sign up for NetScaler Cloud and start using the service, install agents in the user network environment or initiate the built-in agent in the instances. Then, add the instances users want to manage to the service. An agent enables communication between the NetScaler ADM Service and the managed instances in the user data center. The agent collects data from the managed instances in the user network and sends it to the NetScaler ADM Service. When users add an instance to the NetScaler ADM Service, it implicitly adds itself as a trap destination and collects an inventory of the instance. The service collects instance details such as: Host name Software version Running and saved configuration Certificates Entities configured on the instance, and so on. NetScaler ADM Service periodically polls managed instances to collect information. The following image illustrates the communication between the service, the agents, and the instances: Documentation Guide The NetScaler ADM Service documentation includes information about how to get started with the service, a list of features supported on the service, and configuration specific to this service solution. Deploying NetScaler ADC VPX Instances on AWS using NetScaler ADM When customers move their applications to the cloud, the components that are part of their application increase, become more distributed, and need to be dynamically managed. With NetScaler ADC VPX instances on AWS, users can seamlessly extend their L4-L7 network stack to AWS. With NetScaler ADC VPX, AWS becomes a natural extension of their on-premises IT infrastructure. Customers can use NetScaler ADC VPX on AWS to combine the elasticity and flexibility of the cloud, with the same optimization, security, and control features that support the most demanding websites and applications in the world. With NetScaler Application Delivery Management (ADM) monitoring their NetScaler ADC instances, users gain visibility into the health, performance, and security of their applications. They can automate the setup, deployment, and management of their application delivery infrastructure across hybrid multi-cloud environments. Architecture Diagram The following image provides an overview of how NetScaler ADM connects with AWS to provision NetScaler ADC VPX instances in AWS. Configuration Tasks Perform the following tasks on AWS before provisioning NetScaler ADC VPX instances in NetScaler ADM: Create subnets Create security groups Create an IAM role and define a policy Perform the following tasks on NetScaler ADM to provision the instances on AWS: Create site Provision NetScaler ADC VPX instance on AWS To Create Subnets Create three subnets in a VPC. The three subnets that are required to provision NetScaler ADC VPX instances in a VPC - are management, client, and server. 
Specify an IPv4 CIDR block from the range that is defined in the VPC for each of the subnets. Specify the Availability Zone in which each subnet is to reside, and create all three subnets in the same Availability Zone. The following image illustrates the three subnets created in the customer region and their connectivity to the client system. For more information on VPC and subnets, see VPCs and Subnets.

To Create Security Groups

Create a security group to control inbound and outbound traffic in the NetScaler ADC VPX instance. A security group acts as a virtual firewall for a user instance. Create security groups at the instance level, and not at the subnet level. It is possible to assign each instance in a subnet in the user VPC to a different set of security groups. Add rules for each security group to control the inbound traffic that is passing through the client subnet to instances. Users can also add a separate set of rules that control the outbound traffic that passes through the server subnet to the application servers. Although users can use the default security group for their instances, they might want to create their own groups. Create three security groups, one for each subnet. Create rules for both incoming and outgoing traffic that users want to control. Users can add as many rules as they want. For more information on security groups, see: Security Groups for your VPC.

To Create an IAM Role and Define a Policy

Create an IAM role so that customers can establish a trust relationship between their users and the NetScaler trusted AWS account, and create a policy with NetScaler permissions.

1. In AWS, click Services. In the left-side navigation pane, select IAM > Roles, and click Create role.
2. Users are connecting their AWS account with the AWS account in NetScaler ADM, so select Another AWS account to allow NetScaler ADM to perform actions in the AWS account.
3. Type in the 12-digit NetScaler ADM AWS account ID. The NetScaler ID is 835822366011. Users can also find the NetScaler ID in NetScaler ADM when they create the cloud access profile.
4. Enable Require external ID to connect to a third-party account. Users can increase the security of their roles by requiring an optional external identifier. Type an ID that can be a combination of any characters.
5. Click Permissions.
6. On the Attach permissions policies page, click Create policy. Users can create and edit a policy in the visual editor or by using JSON.

The list of permissions from NetScaler is provided in the following box:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeInstances", "ec2:DescribeImageAttribute", "ec2:DescribeInstanceAttribute",
                    "ec2:DescribeRegions", "ec2:DescribeDhcpOptions", "ec2:DescribeSecurityGroups",
                    "ec2:DescribeHosts", "ec2:DescribeImages", "ec2:DescribeVpcs", "ec2:DescribeSubnets",
                    "ec2:DescribeNetworkInterfaces", "ec2:DescribeAvailabilityZones",
                    "ec2:DescribeNetworkInterfaceAttribute", "ec2:DescribeInstanceStatus",
                    "ec2:DescribeAddresses", "ec2:DescribeKeyPairs", "ec2:DescribeTags",
                    "ec2:DescribeVolumeStatus", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute",
                    "ec2:CreateTags", "ec2:DeleteTags", "ec2:CreateKeyPair", "ec2:DeleteKeyPair",
                    "ec2:ResetInstanceAttribute", "ec2:RunScheduledInstances", "ec2:ReportInstanceStatus",
                    "ec2:StartInstances", "ec2:RunInstances", "ec2:StopInstances",
                    "ec2:UnmonitorInstances", "ec2:MonitorInstances", "ec2:RebootInstances",
                    "ec2:TerminateInstances", "ec2:ModifyInstanceAttribute",
                    "ec2:AssignPrivateIpAddresses", "ec2:UnassignPrivateIpAddresses",
                    "ec2:CreateNetworkInterface", "ec2:AttachNetworkInterface",
                    "ec2:DetachNetworkInterface", "ec2:DeleteNetworkInterface",
                    "ec2:ResetNetworkInterfaceAttribute", "ec2:ModifyNetworkInterfaceAttribute",
                    "ec2:AssociateAddress", "ec2:AllocateAddress", "ec2:ReleaseAddress",
                    "ec2:DisassociateAddress", "ec2:GetConsoleOutput"
                ],
                "Resource": "*"
            }
        ]
    }

Copy and paste the list of permissions into the JSON tab and click Review policy. On the Review policy page, type a name for the policy, enter a description, and click Create policy.
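The same role and policy can also be created with boto3 rather than in the console. The following is a sketch under the assumptions noted in the comments: the role name, policy name, and external ID are examples, and the permission policy is abbreviated here, so the complete EC2 action list shown above should be used in practice.

```python
import json
import boto3

iam = boto3.client("iam")

NETSCALER_ADM_ACCOUNT = "835822366011"   # NetScaler ADM AWS account ID (from the steps above)
EXTERNAL_ID = "my-external-id"           # any string; also entered in the cloud access profile later
ROLE_NAME = "NetScaler-ADM-example"      # role name must start with NetScaler-ADM-

# Trust policy: allow the NetScaler ADM account to assume this role,
# but only when it presents the agreed external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{NETSCALER_ADM_ACCOUNT}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

# Abbreviated permission policy; paste the complete EC2 action list shown above.
permission_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:DescribeInstances", "ec2:RunInstances", "ec2:CreateNetworkInterface"],
        "Resource": "*",
    }],
}

role = iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
policy = iam.create_policy(
    PolicyName="NetScaler-ADM-permissions",
    PolicyDocument=json.dumps(permission_policy),
)
iam.attach_role_policy(RoleName=ROLE_NAME, PolicyArn=policy["Policy"]["Arn"])

# The Role ARN is what gets entered in the NetScaler ADM cloud access profile.
print("Role ARN:", role["Role"]["Arn"])
```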
To Create a Site in NetScaler ADM

Create a site in NetScaler ADM and add the details of the VPC associated with the AWS role.

1. In NetScaler ADM, navigate to Networks > Sites.
2. Click Add.
3. Select the service type as AWS and enable Use existing VPC as a site.
4. Select the cloud access profile. If the cloud access profile does not exist in the field, click Add to create a profile. On the Create Cloud Access Profile page, type the name of the profile with which users want to access AWS, type the ARN associated with the role that users created in AWS, and type the external ID that users provided while creating the Identity and Access Management (IAM) role in AWS (see step 4 in the "To create an IAM role and define a policy" task). Ensure that the IAM role name specified in AWS starts with NetScaler-ADM- and that it appears correctly in the Role ARN.
5. The details of the VPC, such as the region, VPC ID, name, and CIDR block, associated with the IAM role in AWS are imported into NetScaler ADM.
6. Type a name for the site.
7. Click Create.

To Provision NetScaler ADC VPX on AWS

Use the site created earlier to provision the NetScaler ADC VPX instances on AWS. Provide NetScaler ADM service agent details to provision those instances that are bound to that agent.

1. In NetScaler ADM, navigate to Networks > Instances > NetScaler ADC.
2. In the VPX tab, click Provision. This option displays the Provision NetScaler ADC VPX on Cloud page.
3. Select Amazon Web Services (AWS) and click Next.
4. In Basic Parameters, select the Type of Instance from the list:
   - Standalone: This option provisions a standalone NetScaler ADC VPX instance on AWS.
   - HA: This option provisions high availability NetScaler ADC VPX instances on AWS. To provision the NetScaler ADC VPX instances in the same zone, select the Single Zone option under Zone Type.
To provision the NetScaler ADC VPX instances across multiple zones, select the Multi Zone option under Zone type. In the Cloud Parameters tab, make sure to specify the network details for each zone that is created on AWS. Specify the name of the NetScaler ADC VPX instance. In Site, select the site that you created earlier. In Agent, select the agent that is created to manage the NetScaler ADC VPX instance. In Cloud Access Profile, select the cloud access profile created during site creation. In Device Profile, select the profile to provide authentication. NetScaler ADM uses the device profile when it requires to log on to the NetScaler ADC VPX instance. Click Next. In Cloud Parameters, Select the NetScaler IAM Role created in AWS. An IAM role is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. In the Product field, select the NetScaler ADC product version that users want to provision. Select the EC2 instance type from the Instance Type list. Select the Version of NetScaler ADC that users want to provision. Select both Major and Minor version of NetScaler ADC. In Security Groups, select the Management, Client, and Server security groups that users created in their virtual network. In IPs in server Subnet per Node, select the number of IP addresses in server subnet per node for the security group. In Subnets, select the Management, Client, and Server subnets for each zone that are created in AWS. Users can also select the region from the Availability Zone list. Click Finish. The NetScaler ADC VPX instance is now provisioned on AWS. Note: NetScaler ADM doesn’t support deprovisioning of NetScaler ADC instances from AWS. To View the NetScaler ADC VPX Provisioned in AWS From the AWS home page, navigate to Services and click EC2. On the Resources page, click Running Instances. Users can view the NetScaler ADC VPX provisioned in AWS. The name of the NetScaler ADC VPX instance is the same name users provided while provisioning the instance in NetScaler ADM. To View the NetScaler ADC VPX Provisioned in NetScaler ADM In NetScaler ADM, navigate to Networks > Instances > NetScaler ADC. Select NetScaler ADC VPX tab. The NetScaler ADC VPX instance provisioned in AWS is listed here. NetScaler ADC WAF and OWASP Top 10 – 2017 The Open Web Application Security Project: OWASP released the OWASP Top 10 for 2017 for web application security. This list documents the most common web application vulnerabilities and is a great starting point to evaluate web security. Here we detail how to configure the NetScaler ADC Web Application Firewall (WAF) to mitigate these flaws. WAF is available as an integrated module in the NetScaler ADC (Premium Edition) as well as a complete range of appliances. The full OWASP Top 10 document is available at OWASP Top Ten. 
| OWASP Top 10 – 2017 | NetScaler ADC WAF Features |
| --- | --- |
| A1:2017 - Injection | Injection attack prevention (SQL or any other custom injections such as OS Command injection, XPath injection, and LDAP injection), auto-update signature feature |
| A2:2017 - Broken Authentication | NetScaler ADC AAA, Cookie Tampering protection, Cookie Proxying, Cookie Encryption, CSRF tagging, Use SSL |
| A3:2017 - Sensitive Data Exposure | Credit Card protection, Safe Commerce, Cookie Proxying, and Cookie Encryption |
| A4:2017 - XML External Entities (XXE) | XML protection including WS-I checks, XML message validation, and XML SOAP fault filtering check |
| A5:2017 - Broken Access Control | NetScaler ADC AAA, Authorization security feature within the NetScaler ADC AAA module of NetScaler, Form protections, Cookie Tampering protections, StartURL, and ClosureURL |
| A6:2017 - Security Misconfiguration | PCI reports, SSL features, Signature generation from vulnerability scan reports such as Cenzic, Qualys, AppScan, WebInspect, WhiteHat. Also, specific protections such as Cookie encryption, proxying, and tampering |
| A7:2017 - Cross-Site Scripting (XSS) | XSS attack prevention; blocks all OWASP XSS cheat sheet attacks |
| A8:2017 - Insecure Deserialization | XML Security Checks, GWT content type, custom signatures, XPath for JSON and XML |
| A9:2017 - Using Components with Known Vulnerabilities | Vulnerability scan reports, Application Firewall Templates, and Custom Signatures |
| A10:2017 - Insufficient Logging & Monitoring | User-configurable custom logging, NetScaler ADC Management and Analytics System |

A1:2017 - Injection

Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into running unintended commands or accessing data without proper authorization.

ADC WAF Protections

- The SQL Injection prevention feature protects against common injection attacks. Custom injection patterns can be uploaded to protect against any type of injection attack, including XPath and LDAP. This is applicable for both HTML and XML payloads.
- The auto-update signature feature keeps the injection signatures up to date.
- The field format protection feature allows the administrator to restrict any user parameter to a regular expression. For instance, you can enforce that a zip-code field contains integers only, or even 5-digit integers.
- Form field consistency: Validate each submitted user form against the user session form signature to ensure the validity of all form elements.
- Buffer overflow checks ensure that the URL, headers, and cookies are within the right limits, blocking any attempts to inject large scripts or code.

A2:2017 - Broken Authentication

Application functions related to authentication and session management are often implemented incorrectly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities temporarily or permanently.

ADC WAF Protections

- The NetScaler ADC AAA module performs user authentication and provides Single Sign-On functionality to back-end applications. This is integrated into the NetScaler ADC AppExpert policy engine to allow custom policies based on user and group information.
- Using SSL offloading and URL transformation capabilities, the firewall can also help sites to use secure transport layer protocols to prevent stealing of session tokens by network sniffing.
- Cookie Proxying and Cookie Encryption can be employed to completely mitigate cookie stealing.
A3:2017 - Sensitive Data Exposure

Many web applications and APIs do not properly protect sensitive data, such as financial, healthcare, and PII data. Attackers may steal or modify such poorly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data may be compromised without extra protection, such as encryption at rest or in transit, and requires special precautions when exchanged with the browser.

ADC WAF Protections

- The Application Firewall protects applications from leaking sensitive data such as credit card details.
- Sensitive data can be configured as Safe objects in Safe Commerce protection to avoid exposure.
- Any sensitive data in cookies can be protected by Cookie Proxying and Cookie Encryption.

A4:2017 - XML External Entities (XXE)

Many older or poorly configured XML processors evaluate external entity references within XML documents. External entities can be used to disclose internal files using the file URI handler, internal file shares, internal port scanning, remote code execution, and denial of service attacks.

ADC WAF Protections

In addition to detecting and blocking common application threats that can be adapted for attacking XML-based applications (that is, cross-site scripting, command injection, and so forth), ADC Application Firewall includes a rich set of XML-specific security protections. These include schema validation to thoroughly verify SOAP messages and XML payloads, and a powerful XML attachment check to block attachments containing malicious executables or viruses. Automatic traffic inspection methods block XPath injection attacks on URLs and forms aimed at gaining access. ADC Application Firewall also thwarts various DoS attacks, including external entity references, recursive expansion, excessive nesting, and malicious messages containing either long or a large number of attributes and elements.

A5:2017 - Broken Access Control

Restrictions on what authenticated users are allowed to do are often not properly enforced. Attackers can exploit these flaws to access unauthorized functionality and data, such as accessing other users' accounts, viewing sensitive files, modifying other users' data, changing access rights, and so on.

ADC WAF Protections

- The NetScaler ADC AAA feature, which supports authentication, authorization, and auditing for all application traffic, allows a site administrator to manage access controls with the ADC appliance.
- The Authorization security feature within the NetScaler ADC AAA module of the ADC appliance enables the appliance to verify which content on a protected server it should allow each user to access.
- Form field consistency: If object references are stored as hidden fields in forms, then using form field consistency you can validate that these fields are not tampered with on subsequent requests.
- Cookie Proxying and Cookie consistency: Object references that are stored in cookie values can be validated with these protections.
- Start URL check with URL closure: Allows user access to a predefined allow list of URLs. URL closure builds a list of all URLs seen in valid responses during the user session and automatically allows access to them during that session.

A6:2017 - Security Misconfiguration

Security misconfiguration is the most commonly seen issue. This is commonly a result of insecure default configurations, incomplete or improvised configurations, open cloud storage, misconfigured HTTP headers, and verbose error messages containing sensitive information.
Not only must all operating systems, frameworks, libraries, and applications be securely configured, but they must be patched and upgraded in a timely fashion. ADC WAF Protections The PCI-DSS report generated by the Application Firewall, documents the security settings on the Firewall device. Reports from the scanning tools are converted to ADC WAF Signatures to handle security misconfigurations. ADC WAF supports Cenzic, IBM AppScan (Enterprise and Standard), Qualys, TrendMicro, WhiteHat, and custom vulnerability scan reports. A7:2017 - Cross Site Scripting (XSS) XSS flaws occur whenever an application includes untrusted data in a new web page without proper validation or escaping, or updates an existing webpage with user-supplied data using a browser API that can create HTML or JavaScript. Cross-site scripting allows attackers to run scripts in the victim’s browser which can hijack user sessions, deface websites, or redirect the user to malicious sites. ADC WAF Protections Cross-site scripting protection protects against common XSS attacks. Custom XSS patterns can be uploaded to modify the default list of allowed tags and attributes. The ADC WAF uses an allow list of allowed HTML attributes and tags to detect XSS attacks. This is applicable for both HTML and XML payloads. ADC WAF blocks all the attacks listed in the OWASP XSS Filter Evaluation Cheat Sheet. Field format check prevents an attacker from sending inappropriate web form data which can be a potential XSS attack. Form field consistency. A8:2017 - Insecure Deserialization Insecure deserialization often leads to remote code execution. Even if deserialization flaws do not result in remote code execution, they can be used to perform attacks, including replay attacks, injection attacks, and privilege escalation attacks. ADC WAF Protections JSON payload inspection with custom signatures. XML security: protects against XML denial of service (xDoS), XML SQL and Xpath injection and cross site scripting, format checks, WS-I basic profile compliance, XML attachments check. Field Format checks in addition to Cookie Consistency and Field Consistency can be used. A9:2017 - Using Components with Known Vulnerabilities Components, such as libraries, frameworks, and other software modules, run with the same privileges as the application. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications and APIs using components with known vulnerabilities may undermine application defenses and enable various attacks and impacts. ADC WAF Protections NetScaler recommends having the third-party components up to date. Vulnerability scan reports that are converted to ADC Signatures can be used to virtually patch these components. Application Firewall templates that are available for these vulnerable components can be used. Custom Signatures can be bound with the firewall to protect these components. A10:2017 - Insufficient Logging & Monitoring Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems, and tamper, extract, or destroy data. Most breach studies show that the time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring. 
ADC WAF Protections When the log action is enabled for security checks or signatures, the resulting log messages provide information about the requests and responses that the application firewall has observed while protecting your websites and applications. The application firewall offers the convenience of using the built-in ADC database for identifying the locations corresponding to the IP addresses from which malicious requests are originating. Default format (PI) expressions give the flexibility to customize the information included in the logs with the option to add the specific data to capture in the application firewall generated log messages. The application firewall supports CEF logs. Application Security Protection NetScaler ADM NetScaler Application Delivery Management Service (NetScaler ADM) provides a scalable solution to manage NetScaler ADC deployments that include NetScaler ADC MPX, NetScaler ADC VPX, NetScaler Gateway, NetScaler Secure Web Gateway, NetScaler ADC SDX, NetScaler ADC CPX, and NetScaler SD-WAN appliances that are deployed on-premises or on the cloud. NetScaler ADM Application Analytics and Management Features The following features are key to the ADM role in App Security. Application Analytics and Management The Application Analytics and Management feature of NetScaler ADM strengthens the application-centric approach to help users address various application delivery challenges. This approach gives users visibility into the health scores of applications, helps users determine the security risks, and helps users detect anomalies in the application traffic flows and take corrective actions. The most important among these roles for App Security is Application Security Analytics: Application security analytics: Application Security Analytics. The App Security Dashboard provides a holistic view of the security status of user applications. For example, it shows key security metrics such as security violations, signature violations, threat indexes. The App Security dashboard also displays attack related information such as SYN attacks, small window attacks, and DNS flood attacks for the discovered NetScaler ADC instances. StyleBooks StyleBooks simplify the task of managing complex NetScaler ADC configurations for user applications. A StyleBook is a template that users can use to create and manage NetScaler ADC configurations. Here users are primarily concerned with the StyleBook used to deploy the Web Application Firewall. For more information on StyleBooks, see: StyleBooks. Analytics Provides an easy and scalable way to look into the various insights of the NetScaler ADC instances’ data to describe, predict, and improve application performance. Users can use one or more analytics features simultaneously. Most important among these roles for App Security are: Security Insight: Security Insight. Provides a single-pane solution to help users assess user application security status and take corrective actions to secure user applications. Bot Insight For more information on analytics, see Analytics: Analytics. Other features that are important to ADM functionality are: Event Management Events represent occurrences of events or errors on a managed NetScaler ADC instance. For example, when there is a system failure or change in configuration, an event is generated and recorded on NetScaler ADM. 
Following are the related features that users can configure or view by using NetScaler ADM: Creating event rules: Create Event Rules View and export syslog messages: View and Export Syslog Messages For more information on event management, see: Events. Instance Management Enables users to manage the NetScaler ADC, NetScaler Gateway, NetScaler Secure Web Gateway, and NetScaler SD-WAN instances. For more information on instance management, see: Adding Instances. License Management Allows users to manage NetScaler ADC licenses by configuring NetScaler ADM as a license manager. NetScaler ADC pooled capacity: Pooled Capacity. A common license pool from which a user NetScaler ADC instance can check out one instance license and only as much bandwidth as it needs. When the instance no longer requires these resources, it checks them back in to the common pool, making the resources available to other instances that need them. NetScaler ADC VPX check-in and check-out licensing: NetScaler ADC VPX Check-in and Check-out Licensing. NetScaler ADM allocates licenses to NetScaler ADC VPX instances on demand. A NetScaler ADC VPX instance can check out the license from the NetScaler ADM when a NetScaler ADC VPX instance is provisioned, or check back in its license to NetScaler ADM when an instance is removed or destroyed. For more information on license management, see: Pooled Capacity. Configuration Management NetScaler ADM allows users to create configuration jobs that help them perform configuration tasks, such as creating entities, configuring features, replication of configuration changes, system upgrades, and other maintenance activities with ease on multiple instances. Configuration jobs and templates simplify the most repetitive administrative tasks to a single task on NetScaler ADM. For more information on configuration management, see Configuration jobs: Configuration Jobs. Configuration Audit Enables users to monitor and identify anomalies in the configurations across user instances. Configuration advice: Get Configuration Advice on Network Configuration. Allows users to identify any configuration anomaly. Audit template: Create Audit Templates. Allows users to monitor the changes across a specific configuration. For more information on configuration audit, see: Configuration Audit. Signatures provide the following deployment options to help users to optimize the protection of user applications: Negative Security Model: With the negative security model, users employ a rich set of preconfigured signature rules to apply the power of pattern matching to detect attacks and protect against application vulnerabilities. Users block only what they don’t want and allow the rest. Users can add their own signature rules, based on the specific security needs of user applications, to design their own customized security solutions. Hybrid security Model: In addition to using signatures, users can use positive security checks to create a configuration ideally suited for user applications. Use signatures to block what users don’t want, and use positive security checks to enforce what is allowed. To protect user applications by using signatures, users must configure one or more profiles to use their signatures object. 
In a hybrid security configuration, the SQL injection and cross-site scripting patterns, and the SQL transformation rules, in the user signatures object are used not only by the signature rules, but also by the positive security checks configured in the Web Application Firewall profile that is using the signatures object. The Web Application Firewall examines the traffic to user protected websites and web services to detect traffic that matches a signature. A match is triggered only when every pattern in the rule matches the traffic. When a match occurs, the specified actions for the rule are invoked. Users can display an error page or error object when a request is blocked. Log messages can help users to identify attacks being launched against user applications. If users enable statistics, the Web Application Firewall maintains data about requests that match a Web Application Firewall signature or security check. If the traffic matches both a signature and a positive security check, the more restrictive of the two actions are enforced. For example, if a request matches a signature rule for which the block action is disabled, but the request also matches an SQL Injection positive security check for which the action is block, the request is blocked. In this case, the signature violation might be logged as [not blocked], although the request is blocked by the SQL injection check. Customization: If necessary, users can add their own rules to a signatures object. Users can also customize the SQL/XSS patterns. The option to add their own signature rules, based on the specific security needs of user applications, gives users the flexibility to design their own customized security solutions. Users block only what they don’t want and allow the rest. A specific fast-match pattern in a specified location can significantly reduce processing overhead to optimize performance. Users can add, modify, or remove SQL injection and cross-site scripting patterns. Built-in RegEx and expression editors help users configure user patterns and verify their accuracy. Use Cases Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on AWS combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, flexible licensing, and other essential application delivery capabilities in a single VPX instance, conveniently available via the AWS Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same, powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on AWS enables several compelling use cases that not only support the immediate needs of today’s enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers. NetScaler Web Application Firewall (WAF) NetScaler Web Application Firewall (WAF) is an enterprise grade solution offering state of the art protections for modern applications. NetScaler WAF mitigates threats against public-facing assets, including websites, web applications, and APIs. NetScaler WAF includes IP reputation-based filtering, Bot mitigation, OWASP Top 10 application threats protections, Layer 7 DDoS protection and more. Also included are options to enforce authentication, strong SSL/TLS ciphers, TLS 1.3, rate limiting and rewrite policies. 
Using both basic and advanced WAF protections, NetScaler WAF provides comprehensive protection for your applications with unparalleled ease of use. Getting up and running is a matter of minutes. Further, using an automated learning model, called dynamic profiling, NetScaler WAF saves users precious time. By automatically learning how a protected application works, NetScaler WAF adapts to the application even as developers deploy and alter the applications. NetScaler WAF helps with compliance for all major regulatory standards and bodies, including PCI-DSS, HIPAA, and more. With our CloudFormation templates, it has never been easier to get up and running quickly. With auto scaling, users can rest assured that their applications remain protected even as their traffic scales up. Web Application Firewall Deployment Strategy The first step to deploying the web application firewall is to evaluate which applications or specific data need maximum security protection, which ones are less vulnerable, and the ones for which security inspection can safely be bypassed. This helps users in coming up with an optimal configuration, and in designing appropriate policies and bind points to segregate the traffic. For example, users might want to configure a policy to bypass security inspection of requests for static web content, such as images, MP3 files, and movies, and configure another policy to apply advanced security checks to requests for dynamic content. Users can use multiple policies and profiles to protect different contents of the same application. The next step is to baseline the deployment. Start by creating a virtual server and run test traffic through it to get an idea of the rate and amount of traffic flowing through the user system. Then, deploy the Web Application Firewall. Use NetScaler ADM and the Web Application Firewall StyleBook to configure the Web Application Firewall. See the StyleBook section below in this guide for details. After the Web Application Firewall is deployed and configured with the Web Application Firewall StyleBook, a useful next step would be to implement the NetScaler ADC WAF and OWASP Top 10. Finally, three of the Web Application Firewall protections are especially effective against common types of Web attacks, and are therefore more commonly used than any of the others. Thus, they should be implemented in the initial deployment. They are: HTML Cross-Site Scripting. Examines requests and responses for scripts that attempt to access or modify content on a different website than the one on which the script is located. When this check finds such a script, it either renders the script harmless before forwarding the request or response to its destination, or it blocks the connection. HTML SQL Injection. Examines requests that contain form field data for attempts to inject SQL commands into a SQL database. When this check detects injected SQL code, it either blocks the request or renders the injected SQL code harmless before forwarding the request to the Web server. Note: If both of the following conditions apply to the user configuration, users should make certain that your Web Application Firewall is correctly configured: If users enable the HTML Cross-Site Scripting check or the HTML SQL Injection check (or both), and User protected websites accept file uploads or contain Web forms that can contain large POST body data. 
For more information about configuring the Web Application Firewall to handle this case, see Configuring the Application Firewall: Configuring the Web App Firewall. Buffer Overflow. Examines requests to detect attempts to cause a buffer overflow on the Web server. Configuring the Web Application Firewall (WAF) The following steps assume that the WAF is already enabled and functioning correctly. NetScaler recommends that users configure WAF using the Web Application Firewall StyleBook. Most users find it the easiest method to configure the Web Application Firewall, and it is designed to prevent mistakes. Both the GUI and the command line interface are intended for experienced users, primarily to modify an existing configuration or use advanced options. SQL Injection The Application Firewall HTML SQL Injection check provides special defenses against the injection of unauthorized SQL code that might break user Application security. NetScaler Web Application Firewall examines the request payload for injected SQL code in three locations: 1) POST body, 2) headers, and 3) cookies. A default set of keywords and special characters provides known keywords and special characters that are commonly used to launch SQL attacks. Users can also add new patterns, and they can edit the default set to customize the SQL check inspection. There are several parameters that can be configured for SQL injection processing. Users can check for SQL wildcard characters. Users can change the SQL Injection type and select one of the 4 options (SQLKeyword, SQLSplChar, SQLSplCharANDKeyword, SQLSplCharORKeyword) to indicate how to evaluate the SQL keywords and SQL special characters when processing the payload. The SQL Comments Handling parameter gives users an option to specify the type of comments that need to be inspected or exempted during SQL Injection detection. Users can deploy relaxations to avoid false positives. The learning engine can provide recommendations for configuring relaxation rules. The following options are available for configuring an optimized SQL Injection protection for the user application: Block — If users enable block, the block action is triggered only if the input matches the SQL injection type specification. For example, if SQLSplCharANDKeyword is configured as the SQL injection type, a request is not blocked if it contains no key words, even if SQL special characters are detected in the input. Such a request is blocked if the SQL injection type is set to either SQLSplChar, or SQLSplCharORKeyword. Log — If users enable the log feature, the SQL Injection check generates log messages indicating the actions that it takes. If block is disabled, a separate log message is generated for each input field in which the SQL violation was detected. However, only one message is generated when the request is blocked. Similarly, 1 log message per request is generated for the transform operation, even when SQL special characters are transformed in multiple fields. Users can monitor the logs to determine whether responses to legitimate requests are getting blocked. A large increase in the number of log messages can indicate attempts to launch an attack. Stats — If enabled, the stats feature gathers statistics about violations and logs. An unexpected surge in the stats counter might indicate that the user application is under attack. If legitimate requests are getting blocked, users might have to revisit the configuration to see if they need to configure new relaxation rules or modify the existing ones. 
Learn — If users are not sure which SQL relaxation rules might be ideally suited for their applications, they can use the learn feature to generate recommendations based on the learned data. The Web Application Firewall learning engine monitors the traffic and provides SQL learning recommendations based on the observed values. To get optimal benefit without compromising performance, users might want to enable the learn option for a short time to get a representative sample of the rules, and then deploy the rules and disable learning.

Transform SQL special characters — The Web Application Firewall considers three characters, single straight quote ('), backslash (\), and semicolon (;), as special characters for SQL security check processing. The SQL Transformation feature modifies the SQL Injection code in an HTML request to ensure that the request is rendered harmless. The modified HTML request is then sent to the server. All default transformation rules are specified in the /netscaler/default_custom_settings.xml file. The transform operation renders the SQL code inactive by making the following changes to the request:
Single straight quote (') to double straight quote (").
Backslash (\) to double backslash (\\).
Semicolon (;) is dropped completely.
These three characters (special strings) are necessary to issue commands to a SQL server. Unless a SQL command is prefaced with a special string, most SQL servers ignore that command. Therefore, the changes that the Web Application Firewall performs when transformation is enabled prevent an attacker from injecting active SQL. After these changes are made, the request can safely be forwarded to the user protected website. When web forms on the user protected website can legitimately contain SQL special strings, but the web forms do not rely on the special strings to operate correctly, users can disable blocking and enable transformation to prevent blocking of legitimate web form data without reducing the protection that the Web Application Firewall provides to the user protected websites. The transform operation works independently of the SQL Injection Type setting. If transform is enabled and the SQL Injection type is specified as a SQL keyword, SQL special characters are transformed even if the request does not contain any keywords.
Tip: Users normally enable either transformation or blocking, but not both. If the block action is enabled, it takes precedence over the transform action, so enabling transformation as well is redundant.

Check for SQL Wildcard Characters — Wildcard characters can be used to broaden the selections of a SQL (SQL-SELECT) statement. These wildcard operators can be used with the LIKE and NOT LIKE operators to compare a value to similar values. The percent (%) and underscore (_) characters are frequently used as wildcards. The percent sign is analogous to the asterisk (*) wildcard character used with MS-DOS and matches zero, one, or multiple characters in a field. The underscore is similar to the MS-DOS question mark (?) wildcard character; it matches a single number or character in an expression. For example, users can use the following query to do a string search to find all customers whose names contain the D character:
SELECT * from customer WHERE name like "%D%"
The following example combines the operators to find any salary values that have 0 in the second and third place:
SELECT * from customer WHERE salary like '_00%'
Different DBMS vendors have extended the wildcard characters by adding extra operators. The NetScaler Web Application Firewall can protect against attacks that are launched by injecting these wildcard characters. The 5 default wildcard characters are percent (%), underscore (_), caret (^), opening bracket ([), and closing bracket (]). This protection applies to both HTML and XML profiles. The default wildcard characters are a list of literals specified in the *Default Signatures:
<wildchar type="LITERAL">%</wildchar>
<wildchar type="LITERAL">_</wildchar>
<wildchar type="LITERAL">^</wildchar>
<wildchar type="LITERAL">[</wildchar>
<wildchar type="LITERAL">]</wildchar>
Wildcard characters in an attack can be PCRE, like [^A-F]. The Web Application Firewall also supports PCRE wildcards, but the literal wildcard characters shown here are sufficient to block most attacks.
Note: The SQL wildcard character check is different from the SQL special character check. This option must be used with caution to avoid false positives.

Check Request Containing SQL Injection Type — The Web Application Firewall provides 4 options to implement the desired level of strictness for SQL Injection inspection, based on the individual need of the application. The request is checked against the injection type specification for detecting SQL violations. The 4 SQL injection type options are:
SQL Special Character and Keyword — Both a SQL keyword and a SQL special character must be present in the input to trigger a SQL violation. This least restrictive setting is also the default setting.
SQL Special Character — At least one of the special characters must be present in the input to trigger a SQL violation.
SQL Keyword — At least one of the specified SQL keywords must be present in the input to trigger a SQL violation. Do not select this option without due consideration. To avoid false positives, make sure that none of the keywords are expected in the inputs.
SQL Special Character or Keyword — Either the keyword or the special character string must be present in the input to trigger the security check violation.
Tip: If users configure the Web Application Firewall to check for inputs that contain a SQL special character, the Web Application Firewall skips web form fields that do not contain any special characters. Since most SQL servers do not process SQL commands that are not preceded by a special character, enabling this option can significantly reduce the load on the Web Application Firewall and speed up processing without placing the user protected websites at risk.

SQL comments handling — By default, the Web Application Firewall checks all SQL comments for injected SQL commands. Many SQL servers ignore anything in a comment, however, even if preceded by an SQL special character. For faster processing, if your SQL server ignores comments, you can configure the Web Application Firewall to skip comments when examining requests for injected SQL. The SQL comments handling options are:
ANSI — Skip ANSI-format SQL comments, which are normally used by UNIX-based SQL databases. For example:
-- (Two Hyphens): A comment that begins with two hyphens and ends at the end of the line.
Braces: Braces enclose the comment. The { precedes the comment, and the } follows it. Braces can delimit single- or multiple-line comments, but comments cannot be nested.
/**/: C-style comments (does not allow nested comments). Note that a comment that begins with /*! (a slash followed by an asterisk and an exclamation mark) is not treated as a plain comment; MySQL Server supports some variants of C-style comments that enable users to write code that includes MySQL extensions, but is still portable, by using comments of the form /*! MySQL-specific code */.
#: MySQL comments. A comment that begins with the # character and ends at the end of the line.
Nested — Skip nested SQL comments, which are normally used by Microsoft SQL Server. For example: -- (two hyphens), and /**/ (allows nested comments).
ANSI/Nested — Skip comments that adhere to both the ANSI and nested SQL comment standards. Comments that match only the ANSI standard, or only the nested standard, are still checked for injected SQL.
Check all Comments — Check the entire request for injected SQL without skipping anything. This is the default setting.
Tip: In most cases, users should not choose the Nested or the ANSI/Nested option unless their back-end database runs on Microsoft SQL Server. Most other types of SQL server software do not recognize nested comments. If nested comments appear in a request directed to another type of SQL server, they might indicate an attempt to breach security on that server.

Check Request headers — Enable this option if, in addition to examining the input in the form fields, users want to examine the request headers for HTML SQL Injection attacks. If users use the GUI, they can enable this parameter in the Advanced Settings -> Profile Settings pane of the Web Application Firewall profile.
Note: If users enable the Check Request header flag, they might have to configure a relaxation rule for the User-Agent header. Presence of the SQL keyword like and the SQL special character semi-colon (;) might trigger false positives and block requests that contain this header.
Warning: If users enable both request header checking and transformation, any SQL special characters found in headers are also transformed. The Accept, Accept-Charset, Accept-Encoding, Accept-Language, Expect, and User-Agent headers normally contain semicolons (;). Enabling both request header checking and transformation simultaneously might cause errors.
InspectQueryContentTypes — Configure this option if users want to examine the request query portion for SQL Injection attacks for the specific content-types. If users use the GUI, they can configure this parameter in the Advanced Settings -> Profile Settings pane of the Application Firewall profile.

Cross-Site Scripting
The HTML Cross-Site Scripting (cross-site scripting) check examines both the headers and the POST bodies of user requests for possible cross-site scripting attacks. If it finds a cross-site script, it either modifies (transforms) the request to render the attack harmless, or blocks the request. Note: The HTML Cross-Site Scripting (cross-site scripting) check works only for content type, content length, and so forth. It does not work for cookies. Also ensure that the ‘checkRequestHeaders’ option is enabled in the user Web Application Firewall profile. To prevent misuse of the scripts on user protected websites to breach security on user websites, the HTML Cross-Site Scripting check blocks scripts that violate the same origin rule, which states that scripts should not access or modify content on any server but the server on which they are located.
Any script that violates the same origin rule is called a cross-site script, and the practice of using scripts to access or modify content on another server is called cross-site scripting. The reason cross-site scripting is a security issue is that a web server that allows cross-site scripting can be attacked with a script that is not on that web server, but on a different web server, such as one owned and controlled by the attacker. Unfortunately, many companies have a large installed base of JavaScript-enhanced web content that violates the same origin rule. If users enable the HTML Cross-Site Scripting check on such a site, they have to generate the appropriate exceptions so that the check does not block legitimate activity. The Web Application Firewall offers various action options for implementing HTML Cross-Site Scripting protection. In addition to the Block, Log, Stats and Learn actions, users also have the option to Transform cross-site scripts to render an attack harmless by entity encoding the script tags in the submitted request. Users can configure Check complete URLs for the cross-site scripting parameter to specify if they want to inspect not just the query parameters but the entire URL to detect a cross-site scripting attack. Users can configure the InspectQueryContentTypes parameter to inspect the request query portion for a cross-site scripting attack for the specific content-types. Users can deploy relaxations to avoid false positives. The Web Application Firewall learning engine can provide recommendations for configuring relaxation rules. The following options are available for configuring an optimized HTML Cross-Site Scripting protection for the user application: Block — If users enable block, the block action is triggered if the cross-site scripting tags are detected in the request. Log — If users enable the log feature, the HTML Cross-Site Scripting check generates log messages indicating the actions that it takes. If block is disabled, a separate log message is generated for each header or form field in which the cross-site scripting violation was detected. However, only one message is generated when the request is blocked. Similarly, 1 log message per request is generated for the transform operation, even when cross-site scripting tags are transformed in multiple fields. Users can monitor the logs to determine whether responses to legitimate requests are getting blocked. A large increase in the number of log messages can indicate attempts to launch an attack. Stats — If enabled, the stats feature gathers statistics about violations and logs. An unexpected surge in the stats counter might indicate that the user application is under attack. If legitimate requests are getting blocked, users might have to revisit the configuration to see if they must configure new relaxation rules or modify the existing ones. Learn — If users are not sure which relaxation rules might be ideally suited for their application, they can use the learn feature to generate HTML Cross-Site Scripting rule recommendations based on the learned data. The Web Application Firewall learning engine monitors the traffic and provides learning recommendations based on the observed values. To get optimal benefit without compromising performance, users might want to enable the learn option for a short time to get a representative sample of the rules, and then deploy the rules and disable learning. 
Transform cross-site scripts — If enabled, the Web Application Firewall makes the following changes to requests that match the HTML Cross-Site Scripting check:
Left angle bracket (<) to its HTML character entity equivalent (&lt;)
Right angle bracket (>) to its HTML character entity equivalent (&gt;)
This ensures that browsers do not interpret unsafe HTML tags, such as <script>, and thereby run malicious code. If users enable both request-header checking and transformation, any special characters found in request headers are also modified as described above. If scripts on the user protected website contain cross-site scripting features, but the user website does not rely upon those scripts to operate correctly, users can safely disable blocking and enable transformation. This configuration ensures that no legitimate web traffic is blocked, while stopping any potential cross-site scripting attacks.
Check complete URLs for cross-site scripting — If checking of complete URLs is enabled, the Web Application Firewall examines entire URLs for HTML cross-site scripting attacks instead of checking just the query portions of URLs.
Check Request headers — If Request header checking is enabled, the Web Application Firewall examines the headers of requests for HTML cross-site scripting attacks, instead of just URLs. If users use the GUI, they can enable this parameter in the Settings tab of the Web Application Firewall profile.
InspectQueryContentTypes — If Request query inspection is configured, the Application Firewall examines the query of requests for cross-site scripting attacks for the specific content-types. If users use the GUI, they can configure this parameter in the Settings tab of the Application Firewall profile.
Important: As part of the streaming changes, the Web Application Firewall processing of the cross-site scripting tags has changed. In earlier releases, the presence of either an open bracket (<), or a close bracket (>), or both open and close brackets (<>) was flagged as a cross-site scripting violation. The behavior has changed in the builds that include support for request-side streaming: the close bracket character (>) by itself is no longer considered an attack, but requests are blocked and flagged as cross-site scripting attacks whenever an open bracket character (<) is present.

Buffer Overflow Check
The Buffer Overflow check detects attempts to cause a buffer overflow on the web server. If the Web Application Firewall detects that the URL, cookies, or header are longer than the configured length, it blocks the request because it can cause a buffer overflow. The Buffer Overflow check prevents attacks against insecure operating-system or web-server software that can crash or behave unpredictably when it receives a data string that is larger than it can handle. Proper programming techniques prevent buffer overflows by checking incoming data and either rejecting or truncating overlong strings. Many programs, however, do not check all incoming data and are therefore vulnerable to buffer overflows. This issue especially affects older versions of web-server software and operating systems, many of which are still in use.

The Buffer Overflow security check allows users to configure the Block, Log, and Stats actions. In addition, users can also configure the following parameters:
Maximum URL Length. The maximum length the Web Application Firewall allows in a requested URL. Requests with longer URLs are blocked. Possible Values: 0–65535. Default: 1024
Maximum Cookie Length.
The maximum length the Web Application Firewall allows for all cookies in a request. Requests with longer cookies trigger the violations. Possible Values: 0–65535. Default: 4096
Maximum Header Length. The maximum length the Web Application Firewall allows for HTTP headers. Requests with longer headers are blocked. Possible Values: 0–65535. Default: 4096
Query string length. Maximum length allowed for a query string in an incoming request. Requests with longer queries are blocked. Possible Values: 0–65535. Default: 1024
Total request length. Maximum request length allowed for an incoming request. Requests with a longer length are blocked. Possible Values: 0–65535. Default: 24820

Virtual Patching/Signatures
The signatures provide specific, configurable rules to simplify the task of protecting user websites against known attacks. A signature represents a pattern that is a component of a known attack on an operating system, web server, website, XML-based web service, or other resource. A rich set of preconfigured built-in or native rules offers an easy-to-use security solution, applying the power of pattern matching to detect attacks and protect against application vulnerabilities. Users can create their own signatures or use signatures in the built-in templates. The Web Application Firewall has two built-in templates:
Default Signatures: This template contains a preconfigured list of over 1,300 signatures, in addition to a complete list of SQL injection keywords, SQL special strings, SQL transform rules, and SQL wildcard characters. It also contains denied patterns for cross-site scripting, and allowed attributes and tags for cross-site scripting. This is a read-only template. Users can view the contents, but they cannot add, edit, or delete anything in this template. To use it, users must make a copy. In their own copy, users can enable the signature rules that they want to apply to their traffic, and specify the actions to be taken when the signature rules match the traffic. The signatures are derived from the rules published by SNORT: SNORT, which is an open source intrusion prevention system capable of performing real-time traffic analysis to detect various attacks and probes.
*Xpath Injection Patterns: This template contains a preconfigured set of literal and PCRE keywords and special strings that are used to detect XPath (XML Path Language) injection attacks.
Blank Signatures: In addition to making a copy of the built-in Default Signatures template, users can use a blank signatures template to create a signature object. The signature object that users create with the blank signatures option does not have any native signature rules, but, just like the *Default template, it has all the SQL/XSS built-in entities.
External-Format Signatures: The Web Application Firewall also supports external format signatures. Users can import third-party scan reports by using the XSLT files that are supported by the NetScaler Web Application Firewall. A set of built-in XSLT files is available for selected scan tools to translate external format files to native format (see the list of built-in XSLT files later in this section).
While signatures help users to reduce the risk of exposed vulnerabilities and protect mission-critical web servers, they do come at a cost of additional CPU processing. It is important to choose the right signatures for the user application needs. Enable only the signatures that are relevant to the customer application and environment.
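Before moving on to signature categories, the following minimal CLI sketch pulls together the three protections recommended for an initial deployment (HTML SQL Injection, HTML Cross-Site Scripting, and Buffer Overflow) on a single profile. The profile name appfw_prof is a hypothetical placeholder, the limit values simply restate the defaults listed above, and the parameter names should be verified against your ADC release:
set appfw profile appfw_prof -SQLInjectionAction block log stats -SQLInjectionType SQLSplCharANDKeyword -SQLInjectionParseComments checkall
set appfw profile appfw_prof -crossSiteScriptingAction block log stats -crossSiteScriptingCheckCompleteURLs ON
set appfw profile appfw_prof -bufferOverflowAction block log stats -bufferOverflowMaxURLLength 1024 -bufferOverflowMaxCookieLength 4096 -bufferOverflowMaxHeaderLength 4096
Because block is enabled in this sketch, the corresponding transform options are not enabled, in line with the tip in the SQL Injection section above.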
NetScaler offers signatures in more than 10 different categories across platforms, operating systems, and technologies. The signature rules database is substantial, as attack information has built up over the years, so many of the older rules may not be relevant for all networks: software vendors may have patched the underlying vulnerabilities already, or customers may be running a more recent version of the OS.

Signature Updates
NetScaler Web Application Firewall supports both automatic and manual update of signatures. We suggest enabling auto-update for signatures to stay up to date. The signature files are hosted in an AWS environment, so it is important that network firewalls allow the NetScaler appliance outbound access to fetch the latest signature files. Updating signatures on the ADC has no effect on the processing of real-time traffic.

Application Security Analytics
The Application Security Dashboard provides a holistic view of the security status of user applications. For example, it shows key security metrics such as security violations, signature violations, and threat indexes. The Application Security dashboard also displays attack related information such as SYN attacks, small window attacks, and DNS flood attacks for the discovered NetScaler ADC instances. Note: To view the metrics of the Application Security Dashboard, AppFlow for Security Insight should be enabled on the NetScaler ADC instances that users want to monitor.
To view the security metrics of a NetScaler ADC instance on the application security dashboard:
Log on to NetScaler ADM using the administrator credentials.
Navigate to Applications > App Security Dashboard, and select the instance IP address from the Devices list.
Users can further drill down on the discrepancies reported on the Application Security Investigator by clicking the bubbles plotted on the graph.

Centralized Learning on ADM
NetScaler Web Application Firewall (WAF) protects user web applications from malicious attacks such as SQL injection and cross-site scripting (XSS). To prevent data breaches and provide the right security protection, users must monitor their traffic for threats and real-time actionable data on attacks. Sometimes, the attacks reported might be false positives, and those need to be provided as exceptions. Centralized Learning on NetScaler ADM is a repetitive pattern filter that enables WAF to learn the behavior (the normal activities) of user web applications. Based on monitoring, the engine generates a list of suggested rules or exceptions for each security check applied on the HTTP traffic. It is much easier to deploy relaxation rules using the learning engine than to deploy them manually as necessary relaxations. To deploy the learning feature, users must first configure a Web Application Firewall profile (set of security settings) on the user NetScaler ADC appliance. For more information, see Creating Web Application Firewall profiles: Creating Web App Firewall Profiles. NetScaler ADM generates a list of exceptions (relaxations) for each security check. As an administrator, users can review the list of exceptions in NetScaler ADM and decide to deploy or skip.
Using the WAF learning feature in NetScaler ADM, users can:
Configure a learning profile with the following security checks:
Buffer Overflow
HTML Cross-Site Scripting
Note: The cross-site script limitation of location is only FormField.
HTML SQL Injection Note: For the HTML SQL Injection check, users must configure set -sqlinjectionTransformSpecialChars to ON and set -sqlinjectiontype sqlspclcharorkeywords in the NetScaler ADC instance. Check the relaxation rules in NetScaler ADM and decide to take necessary action (deploy or skip) Get the notifications through email, slack, and ServiceNow Use the dashboard to view relaxation details To use the WAF learning in NetScaler ADM: Configure the learning profile: Configure the Learning Profile See the relaxation rules: View Relaxation Rules and Idle Rules Use the WAF learning dashboard: View WAF Learning Dashboard StyleBook NetScaler Web Application Firewall is a Web Application Firewall (WAF) that protects web applications and sites from both known and unknown attacks, including all application-layer and zero-day threats. NetScaler ADM now provides a default StyleBook with which users can more conveniently create an application firewall configuration on NetScaler ADC instances. Deploying Application Firewall Configurations The following task assists you in deploying a load balancing configuration along with the application firewall and IP reputation policy on NetScaler ADC instances in your business network. To Create an LB Configuration with Application Firewall Settings In NetScaler ADM, navigate to Applications > Configurations > StyleBooks. The StyleBooks page displays all the StyleBooks available for customer use in NetScaler ADM. Scroll down and find HTTP/SSL Load Balancing StyleBook with application firewall policy and IP reputation policy. Users can also search for the StyleBook by typing the name as lb-appfw. Click Create Configuration. The StyleBook opens as a user interface page on which users can enter the values for all the parameters defined in this StyleBook. Enter values for the following parameters: Load Balanced Application Name. Name of the load balanced configuration with an application firewall to deploy in the user network. Load balanced App Virtual IP address. Virtual IP address at which the NetScaler ADC instance receives client requests. Load Balanced App Virtual Port. The TCP Port to be used by the users in accessing the load balanced application. Load Balanced App Protocol. Select the front-end protocol from the list. Application Server Protocol. Select the protocol of the application server. As an option, users can enable and configure the Advanced Load Balancer Settings. Optionally, users can also set up an authentication server for authenticating traffic for the load balancing virtual server. Click “+” in the server IPs and Ports section to create application servers and the ports that they can be accessed on. Users can also create FQDN names for application servers. Users can also specify the details of the SSL certificate. Users can also create monitors in the target NetScaler ADC instance. To configure the application firewall on the virtual server, enable WAF Settings. Ensure that the application firewall policy rule is true if users want to apply the application firewall settings to all traffic on that VIP. Otherwise, specify the NetScaler ADC policy rule to select a subset of requests to which to apply the application firewall settings. Next, select the type of profile that has to be applied - HTML or XML. Optionally, users can configure detailed application firewall profile settings by enabling the application firewall Profile Settings check box. 
Optionally, if users want to configure application firewall signatures, enter the name of the signature object that is created on the NetScaler ADC instance where the virtual server is to be deployed. Note: Users cannot create signature objects by using this StyleBook. Next, users can also configure any other application firewall profile settings such as, StartURL settings, DenyURL settings and others. For more information on application firewall and configuration settings, see Application Firewall. In the Target Instances section, select the NetScaler ADC instance on which to deploy the load balancing virtual server with the application firewall. Note: Users can also click the refresh icon to add recently discovered NetScaler ADC instances in NetScaler ADM to the available list of instances in this window. Users can also enable IP Reputation check to identify the IP address that is sending unwanted requests. Users can use the IP reputation list to preemptively reject requests that are coming from the IP with the bad reputation. Tip: NetScaler recommends that users select Dry Run to check the configuration objects that must be created on the target instance before they run the actual configuration on the instance. When the configuration is successfully created, the StyleBook creates the required load balancing virtual server, application server, services, service groups, application firewall labels, application firewall policies, and binds them to the load balancing virtual server. The following figure shows the objects created in each server: To see the ConfigPack created on NetScaler ADM, navigate to Applications > Configurations. Security Insight Analytics Web and web service applications that are exposed to the Internet have become increasingly vulnerable to attacks. To protect applications from attack, users need visibility into the nature and extent of past, present, and impending threats, real-time actionable data on attacks, and recommendations on countermeasures. Security Insight provides a single-pane solution to help users assess user application security status and take corrective actions to secure user applications. How Security Insight Works Security Insight is an intuitive dashboard-based security analytics solution that gives users full visibility into the threat environment associated with user applications. Security insight is included in NetScaler ADM, and it periodically generates reports based on the user Application Firewall and ADC system security configurations. The reports include the following information for each application: Threat index. A single-digit rating system that indicates the criticality of attacks on the application, regardless of whether the application is protected by an ADC appliance. The more critical the attacks on an application, the higher the threat index for that application. Values range from 1 through 7. The threat index is based on attack information. The attack-related information, such as violation type, attack category, location, and client details, gives users insight into the attacks on the application. Violation information is sent to NetScaler ADM only when a violation or attack occurs. Many breaches and vulnerabilities lead to a high threat index value. Safety index. A single-digit rating system that indicates how securely users have configured the ADC instances to protect applications from external threats and vulnerabilities. The lower the security risks for an application, the higher the safety index. 
Values range from 1 through 7. The safety index considers both the application firewall configuration and the ADC system security configuration. For a high safety index value, both configurations must be strong. For example, if rigorous application firewall checks are in place but ADC system security measures, such as a strong password for the nsroot user, have not been adopted, applications are assigned a low safety index value.
Actionable Information. Information that users need for lowering the threat index and increasing the safety index, which significantly improves application security. For example, users can review information about violations, existing and missing security configurations for the application firewall and other security features, the rate at which the applications are being attacked, and so on.

Configuring Security Insight
Note: Security Insight is supported on ADC instances with Premium license or ADC Advanced with AppFirewall license only.
To configure security insight on an ADC instance, first configure an application firewall profile and an application firewall policy, and then bind the application firewall policy globally. Then, enable the AppFlow feature, configure an AppFlow collector, action, and policy, and bind the policy globally. When users configure the collector, they must specify the IP address of the NetScaler ADM service agent on which they want to monitor the reports.

Configure Security Insight on an ADC Instance
Run the following commands to configure an application firewall profile and policy, and bind the application firewall policy globally or to the load balancing virtual server.
add appfw profile <name> [-defaults ( basic or advanced )]
set appfw profile <name> [-startURLAction <startURLAction> ...]
add appfw policy <name> <rule> <profileName>
bind appfw global <policyName> <priority>
or,
bind lb vserver <lb vserver> -policyName <policy> -priority <priority>
Sample:
add appfw profile pr_appfw -defaults advanced
set appfw profile pr_appfw -startURLaction log stats learn
add appfw policy pr_appfw_pol "HTTP.REQ.HEADER(\"Host\").EXISTS" pr_appfw
bind appfw global pr_appfw_pol 1
or,
bind lb vserver outlook -policyName pr_appfw_pol -priority "20"
Run the following commands to enable the AppFlow feature, configure an AppFlow collector, action, and policy, and bind the policy globally or to the load balancing virtual server:
add appflow collector <name> -IPAddress <ipaddress>
set appflow param [-SecurityInsightRecordInterval <secs>] [-SecurityInsightTraffic ( ENABLED or DISABLED )]
add appflow action <name> -collectors <string>
add appflow policy <name> <rule> <action>
bind appflow global <policyName> <priority> [<gotoPriorityExpression>] [-type <type>]
or,
bind lb vserver <vserver> -policyName <policy> -priority <priority>
Sample:
add appflow collector col -IPAddress 10.102.63.85
set appflow param -SecurityInsightRecordInterval 600 -SecurityInsightTraffic ENABLED
add appflow action act1 -collectors col
add appflow action af_action_Sap_10.102.63.85 -collectors col
add appflow policy pol1 true act1
add appflow policy af_policy_Sap_10.102.63.85 true af_action_Sap_10.102.63.85
bind appflow global pol1 1 END -type REQ_DEFAULT
or,
bind lb vserver Sap -policyName af_action_Sap_10.102.63.85 -priority "20"

Enable Security Insight from NetScaler ADM
Navigate to Networks > Instances > NetScaler ADC and select the instance type. For example, VPX.
Select the instance and from the Select Action list, select Configure Analytics.
On the Configure Analytics on virtual server window: Select the virtual servers that you want to enable security insight and click Enable Analytics. The Enable Analytics window is displayed. Select Security Insight Under Advanced Options, select Logstream or IPFIX as the Transport Mode The Expression is true by default Click OK Note: If users select virtual servers that are not licensed, then NetScaler ADM first licenses those virtual servers and then enables analytics For admin partitions, only Web Insight is supported After users click OK, NetScaler ADM processes to enable analytics on the selected virtual servers. Note: When users create a group, they can assign roles to the group, provide application-level access to the group, and assign users to the group. NetScaler ADM analytics now supports virtual IP address-based authorization. Customer users can now see reports for all Insights for only the applications (virtual servers) for which they are authorized. For more information on groups and assigning users to the group, see Configure Groups on NetScaler ADM: Configure Groups on NetScaler ADM . Thresholds Users can set and view thresholds on the safety index and threat index of applications in Security Insight. To set a threshold Navigate to System > Analytics Settings > Thresholds, and select Add. Select the traffic type as Security in the Traffic Type field, and enter required information in the other appropriate fields such as Name, Duration, and entity. In the Rule section, use the Metric, Comparator, and Value fields to set a threshold. For example, “Threat Index” “>” “5” Click Create. To view the threshold breaches Navigate to Analytics > Security Insight > Devices, and select the ADC instance. In the Application section, users can view the number of threshold breaches that have occurred for each virtual server in the Threshold Breach column. Security Insight Use Case The following use cases describe how users can use security insight to assess the threat exposure of applications and improve security measures. Obtain an Overview of the Threat Environment In this use case, users have a set of applications that are exposed to attacks, and they have configured NetScaler ADM to monitor the threat environment. Users need to frequently review the threat index, safety index, and the type and severity of any attacks that the applications might have experienced, so that they can focus first on the applications that need the most attention. The security insight dashboard provides a summary of the threats experienced by the user applications over a time period of user choosing, and for a selected ADC device. It displays the list of applications, their threat and safety indexes, and the total number of attacks for the chosen time period. For example, users might be monitoring Microsoft Outlook, Microsoft Lync, SharePoint, and an SAP application, and users might want to review a summary of the threat environment for these applications. To obtain a summary of the threat environment, log on to NetScaler ADM, and then navigate to Analytics > Security Insight. Key information is displayed for each application. The default time period is 1 hour. To view information for a different time period, from the list at the top-left, select a time period. To view a summary for a different ADC instance, under Devices, click the IP address of the ADC instance. To sort the application list by a given column, click the column header. 
Determine the Threat Exposure of an Application After reviewing a summary of the threat environment on the Security Insight dashboard to identify the applications that have a high threat index and a low safety index, users want to determine their threat exposure before deciding how to secure them. That is, users want to determine the type and severity of the attacks that have degraded their index values. Users can determine the threat exposure of an application by reviewing the application summary. In this example, Microsoft Outlook has a threat index value of 6, and users want to know what factors are contributing to this high threat index. To determine the threat exposure of Microsoft Outlook, on the Security Insight dashboard, click Outlook. The application summary includes a map that identifies the geographic location of the server. Click Threat Index > Security Check Violations and review the violation information that appears. Click Signature Violations and review the violation information that appears. Determine Existing and Missing Security Configurations for an Application After reviewing the threat exposure of an application, users want to determine what application security configurations are in place and what configurations are missing for that application. Users can obtain this information by drilling down into the application’s safety index summary. The safety index summary gives users information about the effectiveness of the following security configurations: Application Firewall Configuration. Shows how many signature and security entities are not configured. NetScaler ADM System Security. Shows how many system security settings are not configured. In the previous use case, users reviewed the threat exposure of Microsoft Outlook, which has a threat index value of 6. Now, users want to know what security configurations are in place for Outlook and what configurations can be added to improve its threat index. On the Security Insight dashboard, click Outlook, and then click the Safety Index tab. Review the information provided in the Safety Index Summary area. On the Application Firewall Configuration node, click Outlook_Profile and review the security check and signature violation information in the pie charts. Review the configuration status of each protection type in the application firewall summary table. To sort the table on a column, click the column header. Click the NetScaler ADM System Security node and review the system security settings and NetScaler recommendations to improve the application safety index. Identify Applications That Require Immediate Attention The applications that need immediate attention are those having a high threat index and a low safety index. In this example, both Microsoft Outlook and Microsoft Lync have a high threat index value of 6, but Lync has the lower of the two safety indexes. Therefore, users might have to focus their attention on Lync before improving the threat environment for Outlook. Determine the Number of Attacks in a Given Period of Time Users might want to determine how many attacks occurred on a given application at a given point in time, or they might want to study the attack rate for a specific time period. On the Security Insight page, click any application and in the Application Summary, click the number of violations. The Total Violations page displays the attacks in a graphical manner for one hour, one day, one week, and one month. The Application Summary table provides the details about the attacks. 
Some of them are as follows: Attack time IP address of the client from which the attack happened Severity Category of violation URL from which the attack originated, and other details. While users can always view the time of attack in an hourly report as seen in the image, now they can view the attack time range for aggregated reports even for daily or weekly reports. If users select “1 Day” from the time-period list, the Security Insight report displays all attacks that are aggregated and the attack time is displayed in a one-hour range. If users choose “1 Week” or “1 Month,” all attacks are aggregated and the attack time is displayed in a one-day range. Obtain Detailed Information about Security Breaches Users might want to view a list of the attacks on an application and gain insights into the type and severity of attacks, actions taken by the ADC instance, resources requested, and the source of the attacks. For example, users might want to determine how many attacks on Microsoft Lync were blocked, what resources were requested, and the IP addresses of the sources. On the Security Insight dashboard, click Lync > Total Violations. In the table, click the filter icon in the Action Taken column header, and then select Blocked. For information about the resources that were requested, review the URL column. For information about the sources of the attacks, review the Client IP column. View Log Expression Details NetScaler ADC instances use log expressions configured with the Application Firewall profile to take action for the attacks on an application in the user enterprise. In Security Insight, users can view the values returned for the log expressions used by the ADC instance. These values include, request header, request body and so on. In addition to the log expression values, users can also view the log expression name and the comment for the log expression defined in the Application Firewall profile that the ADC instance used to take action for the attack. Prerequisites Ensure that users: Configure log expressions in the Application Firewall profile. For more information, see Application Firewall. Enable log expression-based Security Insights settings in NetScaler ADM. Do the following: Navigate to Analytics > Settings, and click Enable Features for Analytics. In the Enable Features for Analytics page, select Enable Security Insight under the Log Expression Based Security Insight Setting section and click OK. For example, users might want to view the values of the log expression returned by the ADC instance for the action it took for an attack on Microsoft Lync in the user enterprise. On the Security Insight dashboard, navigate to Lync > Total Violations. In the Application Summary table, click the URL to view the complete details of the violation in the Violation Information page including the log expression name, comment, and the values returned by the ADC instance for the action. Determine the Safety Index before Deploying the Configuration Security breaches occur after users deploy the security configuration on an ADC instance, but users might want to assess the effectiveness of the security configuration before they deploy it. For example, users might want to assess the safety index of the configuration for the SAP application on the ADC instance with IP address 10.102.60.27. On the Security Insight dashboard, under Devices, click the IP address of the ADC instance that users configured. Users can see that both the threat index and the total number of attacks are 0. 
The threat index is a direct reflection of the number and type of attacks on the application. Zero attacks indicate that the application is not under any threat. Click Sap > Safety Index > SAP_Profile and assess the safety index information that appears. In the application firewall summary, users can view the configuration status of different protection settings. If a setting is set to log or if a setting is not configured, the application is assigned a lower safety index.

Security Violations
View Application Security Violation Details
Web applications that are exposed to the internet have become drastically more vulnerable to attacks. NetScaler ADM enables users to visualize actionable violation details to protect applications from attacks. Navigate to Security > Security Violations for a single-pane solution to:
Access the application security violations based on their categories such as Network, Bot, and WAF
Take corrective actions to secure the applications
To view the security violations in NetScaler ADM, ensure:
Users have a premium license for the NetScaler ADC instance (for WAF and BOT violations).
Users have applied a license on the load balancing or content switching virtual servers (for WAF and BOT). For more information, see Manage Licensing on Virtual Servers.
Users enable more settings. For more information, see the procedure available at the Setting up section in the NetScaler product documentation: Setting up.

Violation Categories
NetScaler ADM enables users to view the following violations:
Network: HTTP Slow Loris, DNS Slow Loris, HTTP Slow Post, NXDomain Flood Attack, HTTP desync attack, Bleichenbacher Attack, Segment smack Attack, Syn Flood Attack
Bot: Excessive Client Connections, Account Takeover**, Unusually High Upload Volume, Unusually High Request Rate, Unusually High Download Volume
WAF: Unusually High Upload Transactions, Unusually High Download Transactions, Excessive Unique IPs, Excessive Unique IPs Per Geo
** - Users must configure the account takeover setting in NetScaler ADM. See the prerequisite mentioned in Account Takeover: Account Takeover.
Apart from these violations, users can also view the following Security Insight and Bot Insight violations under the WAF and Bot categories respectively:
WAF: Buffer Overflow, Content type, Cookie Consistency, CSRF Form Tagging, Deny URL, Form Field Consistency, Field Formats, Maximum Uploads, Referrer Header, Safe Commerce, Safe Object, HTML SQL Inject, Start URL, XSS, XML DoS, XML Format, XML WSI, XML SSL, XML Attachment, XML SOAP Fault, XML Validation, Others, IP Reputation, HTTP DOS, TCP Small Window, Signature Violation, File Upload Type, JSON XSS, JSON SQL, JSON DOS, Command Injection, Infer Content Type XML, Cookie Hijack
Bot: Crawler, Feed Fetcher, Link Checker, Marketing, Scraper, Screenshot Creator, Search Engine, Service Agent, Site Monitor, Speed Tester, Tool, Uncategorized, Virus Scanner, Vulnerability Scanner, DeviceFP Wait Exceeded, Invalid DeviceFP, Invalid Captcha Response, Captcha Attempts Exceeded, Valid Captcha Response, Captcha Client Muted, Captcha Wait Time Exceeded, Request Size Limit Exceeded, Rate Limit Exceeded, Block list (IP, subnet, policy expression), Allow list (IP, subnet, policy expression), Zero Pixel Request, Source IP, Host, Geo Location, URL

Setting up
Users must enable Advanced Security Analytics and set Web Transaction Settings to All to view the following violations in NetScaler ADM:
Unusually High Upload Transactions (WAF)
Unusually High Download Transactions (WAF)
Excessive Unique IPs (WAF)
Account takeover (BOT)
For other violations, ensure that Metrics Collector is enabled. By default, Metrics Collector is enabled on the NetScaler ADC instance. For more information, see: Configure Intelligent App Analytics.

Enable Advanced Security Analytics
Navigate to Networks > Instances > NetScaler ADC, and select the instance type. For example, MPX.
Select the NetScaler ADC instance and from the Select Action list, select Configure Analytics.
Select the virtual server and click Enable Analytics.
On the Enable Analytics window:
Select Web Insight. After users select Web Insight, the read-only Advanced Security Analytics option is enabled automatically. Note: The Advanced Security Analytics option is displayed only for premium licensed ADC instances.
Select Logstream as Transport Mode
The Expression is true by default
Click OK

Enable Web Transaction settings
Navigate to Analytics > Settings. The Settings page is displayed.
Click Enable Features for Analytics.
Under Web Transaction Settings, select All.
Click OK.

Security violations dashboard
In the security violations dashboard, users can view:
Total violations occurred across all ADC instances and applications. The total violations are displayed based on the selected time duration.
Total violations under each category.
Total ADCs affected, total applications affected, and top violations based on the total occurrences and the affected applications.

Violation details
For each violation, NetScaler ADM monitors the behavior for a specific time duration and detects violations for unusual behaviors. Click each tab to view the violation details. Users can view details such as:
The total occurrences, last occurred, and total applications affected
Under event details, users can view:
The affected application. Users can also select the application from the list if two or more applications are affected with violations.
The graph indicating violations. Drag and select on the graph that lists the violations to narrow down the violation search.
Click Reset Zoom to reset the zoom result.

Recommended Actions that suggest how to troubleshoot the issue.

Other violation details, such as the violation occurrence time and detection message.

Bot Insight

Using Bot Insight in NetScaler ADM

After users configure the bot management in NetScaler ADC, they must enable Bot Insight on virtual servers to view insights in NetScaler ADM. To enable Bot Insight:

Navigate to Networks > Instances > NetScaler ADC and select the instance type. For example, VPX.

Select the instance and, from the Select Action list, select Configure Analytics.

Select the virtual server and click Enable Analytics.

On the Enable Analytics window:

Select Bot Insight.

Under Advanced Option, select Logstream.

Click OK.

After enabling Bot Insight, navigate to Analytics > Security > Bot Insight. The dashboard provides:

A time list to view bot details. Drag the slider to select a specific time range and click Go to display the customized results.

Total instances affected by bots.

The virtual server for the selected instance with total bot attacks:

Total Bots – Indicates the total bot attacks (inclusive of all bot categories) found for the virtual server.

Total Human Browsers – Indicates the total human users accessing the virtual server.

Bot Human Ratio – Indicates the ratio between human users and bots accessing the virtual server.

Signature Bots, Fingerprinted Bots, Rate Based Bots, IP Reputation Bots, allow list Bots, and block list Bots – Indicate the total bot attacks that occurred based on the configured bot category. For more information about bot categories, see: Configure Bot Detection Techniques in NetScaler ADC.

Click > to view bot details in a graph format.

View events history

Users can view the bot signature updates in the Events History when:

New bot signatures are added in NetScaler ADC instances.

Existing bot signatures are updated in NetScaler ADC instances.

You can select the time duration on the bot insight page to view the events history. The following diagram shows how the bot signatures are retrieved from the AWS cloud, updated on NetScaler ADC, and how the signature update summary is viewed on NetScaler ADM:

The bot signature auto update scheduler retrieves the mapping file from the AWS URI.

The scheduler checks the latest signatures in the mapping file against the existing signatures in the ADC appliance.

It downloads the new signatures from AWS and verifies the signature integrity.

It updates the existing bot signatures with the new signatures in the bot signature file.

It generates an SNMP alert and sends the signature update summary to NetScaler ADM.

View Bots

Click the virtual server to view the Application Summary. It provides details such as:

Average RPS – Indicates the average bot transaction requests per second (RPS) received on virtual servers.

Bots by Severity – Indicates the highest bot transactions that occurred based on the severity. The severity is categorized as Critical, High, Medium, and Low. For example, if the virtual servers have 11770 high severity bots and 1550 critical severity bots, then NetScaler ADM displays Critical 1.55 K under Bots by Severity.

Largest Bot Category – Indicates the highest bot attacks that occurred based on the bot category. For example, if the virtual servers have 8000 block listed bots, 5000 allow listed bots, and 10000 Rate Limit Exceeded bots, then NetScaler ADM displays Rate Limit Exceeded 10 K under Largest Bot Category.

Largest Geo Source – Indicates the highest bot attacks that occurred based on a region.
For example, if the virtual servers have 5000 bot attacks in Santa Clara, 7000 bot attacks in London, and 9000 bot attacks in Bangalore, then NetScaler ADM displays Bangalore 9 K under Largest Geo Source.

Average % Bot Traffic – Indicates the ratio of bot traffic to human traffic.

The dashboard also:

Displays the severity of the bot attacks based on locations in map view.

Displays the types of bot attacks (Good, Bad, and All).

Displays the total bot attacks along with the corresponding configured actions. For example, if you have configured:

IP address range (192.140.14.9 to 192.140.14.254) as block list bots and selected Drop as the action for this IP address range

IP range (192.140.15.4 to 192.140.15.254) as block list bots and selected to create a log message as the action for this IP range

In this scenario, NetScaler ADM displays:

Total block listed bots

Total bots under Dropped

Total bots under Log

View CAPTCHA bots

In webpages, CAPTCHAs are designed to identify whether the incoming traffic is from a human or an automated bot. To view the CAPTCHA activities in NetScaler ADM, users must configure CAPTCHA as a bot action for the IP reputation and device fingerprint detection techniques in a NetScaler ADC instance. For more information, see: Configure Bot Management.

The following are the CAPTCHA activities that NetScaler ADM displays in Bot Insight:

Captcha attempts exceeded – Denotes the maximum number of CAPTCHA attempts made after login failures.

Captcha client muted – Denotes the number of client requests that are dropped or redirected because these requests were detected as bad bots earlier with the CAPTCHA challenge.

Human – Denotes the CAPTCHA entries performed by human users.

Invalid captcha response – Denotes the number of incorrect CAPTCHA responses received from the bot or human when NetScaler ADC sends a CAPTCHA challenge.

View bot traps

To view bot traps in NetScaler ADM, you must configure the bot trap in the NetScaler ADC instance. For more information, see Configure Bot Management. To identify the bot trap, a script is enabled in the webpage; this script is hidden from humans but not from bots. NetScaler ADM identifies and reports the bot traps when this script is accessed by bots. Click the virtual server and select Zero Pixel Request.

View bot details

For further details, click the bot attack type under Bot Category. Details such as the attack time and the total number of bot attacks for the selected CAPTCHA category are displayed. Users can also drag the bar graph to select the specific time range to be displayed with bot attacks. To get additional information about the bot attack, click to expand:

Instance IP – Indicates the NetScaler ADC instance IP address.

Total Bots – Indicates the total bot attacks that occurred for that particular time.

HTTP Request URL – Indicates the URL that is configured for CAPTCHA reporting.

Country Code – Indicates the country where the bot attack occurred.

Region – Indicates the region where the bot attack occurred.

Profile Name – Indicates the profile name that users provided during the configuration.

Advanced search

Users can also use the search text box and time duration list to view bot details as per the user requirement. When users click the search box, the search box gives them the following list of search suggestions:
Instance IP – NetScaler ADC instance IP address.

Client-IP – Client IP address.

Bot-Type – Bot type, such as Good or Bad.

Severity – Severity of the bot attack.

Action-Taken – Action taken after the bot attack, such as Drop, No action, or Redirect.

Bot-Category – Category of the bot attack, such as block list, allow list, fingerprint, and so on. Based on a category, users can associate a bot action to it.

Bot-Detection – Bot detection types (block list, allow list, and so on) that users have configured on the NetScaler ADC instance.

Location – Region/country where the bot attack occurred.

Request-URL – URL that has the possible bot attacks.

Users can also use operators in their search queries to narrow the focus of the search. For example, to view all bad bots:

Click the search box and select Bot-Type.

Click the search box again and select the operator =.

Click the search box again and select Bad.

Click Search to display the results.

Bot violation details

Excessive Client Connections

When a client tries to access the web application, the client request is processed by the NetScaler ADC appliance instead of connecting to the server directly. Web traffic comprises bots, and bots can perform various actions at a faster rate than a human. Using the Excessive Client Connections indicator, users can analyze scenarios in which an application receives unusually high client connections through bots.

Under Event Details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected by violations.

The graph indicating all violations.

The violation occurrence time.

The detection message for the violation, indicating the total IP addresses transacting with the application.

The accepted IP address range that the application can receive.

Account Takeover

Note: Ensure that users enable the advanced security analytics and web transaction options. For more information, see Setting up: Setting up.

Some malicious bots can steal user credentials and perform various kinds of cyberattacks. These malicious bots are known as bad bots. It is essential to identify bad bots and protect the user appliance from any form of advanced security attack.

Prerequisite: Users must configure the Account Takeover settings in NetScaler ADM.

Navigate to Analytics > Settings > Security Violations.

Click Add.

On the Add Application page, specify the following parameters:

Application – Select the virtual server from the list.

Method – Select the HTTP method type from the list. The available options are GET, PUSH, POST, and UPDATE.

Login URL and Success response code – Specify the URL of the web application and the HTTP status code (for example, 200) for which users want NetScaler ADM to report the account takeover violation from bad bots.

Click Add.

After users configure these settings, using the Account Takeover indicator, users can analyze whether bad bots attempted to take over the user account by sending multiple requests along with credentials.

Under Event Details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected by violations.

The graph indicating all violations.

The violation occurrence time.

The detection message for the violation, indicating the total unusual failed login activity, successful logins, and failed logins.

The bad bot IP address. Click to view details such as time, IP address, total successful logins, total failed logins, and total requests made from that IP address.
Unusually High Upload Volume

Web traffic also comprises data that is processed for uploading. For example, if the average upload volume per day is 500 MB and users upload 2 GB of data, this can be considered an unusually high upload data volume. Bots can also upload data more quickly than humans. Using the Unusually High Upload Volume indicator, users can analyze abnormal scenarios of upload data to the application through bots.

Under Event Details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected by violations.

The graph indicating all violations.

The violation occurrence time.

The detection message for the violation, indicating the total upload data volume processed.

The accepted range of upload data to the application.

Unusually High Download Volume

Similar to high upload volume, bots can also perform downloads more quickly than humans. Using the Unusually High Download Volume indicator, users can analyze abnormal scenarios of download data from the application through bots.

Under Event Details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected by violations.

The graph indicating all violations.

The violation occurrence time.

The detection message for the violation, indicating the total download data volume processed.

The accepted range of download data from the application.

Unusually High Request Rate

Users can control the incoming and outgoing traffic from or to an application. A bot attack can generate an unusually high request rate. For example, if users configure an application to allow 100 requests/minute and observe 350 requests, it might be a bot attack. Using the Unusually High Request Rate indicator, users can analyze the unusual request rate received by the application.

Under Event Details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected by violations.

The graph indicating all violations.

The violation occurrence time.

The detection message for the violation, indicating the total requests received and the percentage of excess requests compared with the expected rate (in the example above, 350 requests against an expected 100 requests/minute is 250 percent above the expected rate).

The accepted request rate range for the application.

Use Cases

Bot

Sometimes the incoming web traffic is composed of bots, and most organizations suffer from bot attacks. Web and mobile applications are significant revenue drivers for businesses, and most companies are under the threat of advanced cyberattacks, such as bots.

A bot is a software program that automatically performs certain actions repeatedly at a much faster rate than a human. Bots can interact with webpages, submit forms, run actions, scan texts, or download content. They can access videos, post comments, and tweet on social media platforms. Some bots, known as chatbots, can hold basic conversations with human users. Bots that perform helpful services, such as customer service, automated chat, and search engine crawling, are good bots. At the same time, bots that scrape or download content from a website, steal user credentials, spam content, or perform other kinds of cyberattacks are bad bots. With a large number of bad bots performing malicious tasks, it is essential to manage bot traffic and protect the user web applications from bot attacks.
By using NetScaler bot management, users can detect the incoming bot traffic and mitigate bot attacks to protect the user web applications. NetScaler bot management helps identify bad bots and protect the user appliance from advanced security attacks. It detects good and bad bots and identifies whether incoming traffic is a bot attack. By using bot management, users can mitigate attacks and protect the user web applications.

NetScaler ADC bot management provides the following benefits:

Defends against bots, scripts, and toolkits. Provides real-time threat mitigation using static signature-based defense and device fingerprinting.

Neutralizes automated basic and advanced attacks. Prevents attacks such as App layer DDoS, password spraying, password stuffing, price scrapers, and content scrapers.

Protects user APIs and investments. Protects user APIs from unwarranted misuse and protects infrastructure investments from automated traffic.

Some use cases where users can benefit from using the NetScaler bot management system are:

Brute force login. A government web portal is constantly under attack by bots attempting brute force user logins. The organization discovers the attack by looking through web logs and seeing specific users being hit over and over again with rapid login attempts and passwords incrementing using a dictionary attack approach. By law, they must protect themselves and their users. By deploying NetScaler bot management, they can stop brute force login using device fingerprinting and rate limiting techniques.

Block bad bots and device fingerprint unknown bots. A web entity gets 100,000 visitors each day. They have to upgrade the underlying footprint and they are spending a fortune. In a recent audit, the team discovered that 40 percent of the traffic came from bots: scraping content, picking news, checking user profiles, and more. They want to block this traffic to protect their users and reduce their hosting costs. Using bot management, they can block known bad bots and fingerprint unknown bots that are hammering their site. By blocking these bots, they can reduce bot traffic by 90 percent.

Permit good bots. "Good" bots are designed to help businesses and consumers. They have been around since the early 1990s, when the first search engine bots were developed to crawl the Internet. Google, Yahoo, and Bing would not exist without them. Other examples of good bots, mostly consumer-focused, include:

Chatbots (a.k.a. chatterbots, smart bots, talk bots, IM bots, social bots, conversation bots) interact with humans through text or sound. One of the first text uses was for online customer service and text messaging apps like Facebook Messenger and iPhone Messages. Siri, Cortana, and Alexa are chatbots; but so are mobile apps that let users order coffee and then tell them when it will be ready, let users watch movie trailers and find local theater showtimes, or send users a picture of the car model and license plate when they request a ride service.

Shopbots scour the Internet looking for the lowest prices on items users are searching for.

Monitoring bots check on the health (availability and responsiveness) of websites. Downdetector is an example of an independent site that provides real-time status information, including outages, of websites and other kinds of services. For more information about Downdetector, see: Downdetector.
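The next section describes how to configure bot management in the NetScaler ADC GUI. The same sequence can also be scripted at the NetScaler command prompt. The following is a minimal, illustrative sketch only: it assumes the standard bot management CLI commands, uses placeholder names (bot_profile1, bot_policy1, and a default signature name), and omits optional bind parameters (such as the bind point type) that some releases require. Verify the exact syntax against the NetScaler CLI reference before use.

Enable the bot management feature:

enable ns feature Bot

Create a bot profile and associate it with a signature file (placeholder signature name):

add bot profile bot_profile1 -signature bot_default_signature

Create a bot policy that evaluates incoming traffic and sends it to the profile:

add bot policy bot_policy1 -rule true -profileName bot_profile1

Bind the bot policy globally so that it applies to incoming requests:

bind bot global -policyName bot_policy1 -priority 100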
Bot Detection

Configuring Bot Management by using the NetScaler ADC GUI

Users can configure NetScaler ADC bot management by first enabling the feature on the appliance. Once the feature is enabled, users can create a bot policy to evaluate the incoming traffic as bot traffic and send the traffic to the bot profile. Users then create a bot profile and bind the profile to a bot signature. As an alternative, users can also clone the default bot signature file and use that signature file to configure the detection techniques. After creating the signature file, users can import it into the bot profile. All these steps are performed in the following sequence:

Enable the bot management feature

Configure bot management settings

Clone the NetScaler bot default signature

Import the NetScaler bot signature

Configure bot signature settings

Create a bot profile

Create a bot policy

Enable Bot Management Feature

On the navigation pane, expand System and then click Settings.

On the Configure Advanced Features page, select the Bot Management check box.

Click OK, and then click Close.

Clone Bot Signature File

Navigate to Security > NetScaler Bot Management > Signatures.

On the NetScaler Bot Management Signatures page, select the default bot signatures record and click Clone.

On the Clone Bot Signature page, enter a name and edit the signature data.

Click Create.

Import Bot Signature File

If users have their own signature file, they can import it as a file, text, or URL. Perform the following steps to import the bot signature file:

Navigate to Security > NetScaler Bot Management > Signatures.

On the NetScaler Bot Management Signatures page, import the file as URL, File, or Text.

Click Continue.

On the Import NetScaler Bot Management Signature page, set the following parameters:

Name. Name of the bot signature file.

Comment. Brief description of the imported file.

Overwrite. Select the check box to allow overwriting of data during a file update.

Signature Data. Modify the signature parameters.

Click Done.

IP Reputation

Configure IP Reputation by using the NetScaler ADC GUI

This configuration is a prerequisite for the bot IP reputation feature. The detection technique enables users to identify whether there is any malicious activity from an incoming IP address. As part of the configuration, different malicious bot categories are defined and a bot action is associated with each of them.

Navigate to Security > NetScaler Bot Management > Profiles.

On the NetScaler Bot Management Profiles page, select a signature file and click Edit.

On the NetScaler Bot Management Profile page, go to the Signature Settings section and click IP Reputation.

In the IP Reputation section, set the following parameters:

Enabled. Select the check box to validate incoming bot traffic as part of the detection process.

Configure Categories. Users can use the IP reputation technique for incoming bot traffic under different categories. Based on the configured category, users can drop or redirect the bot traffic. Click Add to configure a malicious bot category.

In the Configure NetScaler Bot Management Profile IP Reputation Binding page, set the following parameters:

Category. Select a malicious bot category from the list, and associate a bot action based on the category.

Enabled. Select the check box to validate the IP reputation signature detection.

Bot action. Based on the configured category, users can assign no action, drop, redirect, or CAPTCHA action.

Log. Select the check box to store log entries.

Log Message. Brief description of the log.

Comments. Brief description of the bot category.
Click OK.

Click Update.

Click Done.

Auto Update for Bot Signatures

The bot static signature technique uses a signature lookup table with a list of good bots and bad bots. The bots are categorized based on the user-agent string and domain names. If the user-agent string and domain name in incoming bot traffic match a value in the lookup table, a configured bot action is applied. The bot signature updates are hosted on the AWS cloud, and the signature lookup table communicates with the AWS database for signature updates. The auto signature update scheduler runs every hour to check the AWS database and update the signature table in the ADC appliance. The bot signature mapping auto update URL to configure signatures is: Bot Signature Mapping.

Note: Users can also configure a proxy server and periodically update signatures from the AWS cloud to the ADC appliance through the proxy. For proxy configuration, users must set the proxy IP address and port address in the bot settings.

Configure Bot Signature Auto Update

To configure bot signature auto update, complete the following steps:

Enable Bot Signature Auto Update

Users must enable the auto update option in the bot settings on the ADC appliance. At the command prompt, type:

set bot settings -signatureAutoUpdate ON

Configure Bot Signature Auto Update using the NetScaler ADC GUI

Complete the following steps to configure bot signature auto update:

Navigate to Security > NetScaler Bot Management.

In the details pane, under Settings, click Change NetScaler Bot Management Settings.

On the Configure NetScaler Bot Management Settings page, select the Auto Update Signature check box.

Click OK and Close.

For more information on configuring IP Reputation using the CLI, see: Configure the IP Reputation Feature Using the CLI.

References

For information on using SQL Fine Grained Relaxations, see: SQL Fine Grained Relaxations.

For information on how to configure the SQL Injection Check using the command line, see: HTML SQL Injection Check.

For information on how to configure the SQL Injection Check using the GUI, see: Using the GUI to Configure the SQL Injection Security Check.

For information on using the Learn Feature with the SQL Injection Check, see: Using the Learn Feature with the SQL Injection Check.

For information on using the Log Feature with the SQL Injection Check, see: Using the Log Feature with the SQL Injection Check.

For information on Statistics for the SQL Injection violations, see: Statistics for the SQL Injection Violations.

For information on SQL Injection Check Highlights, see: Highlights.

For information about XML SQL Injection Checks, see: XML SQL Injection Check.

For information on using Cross-Site Scripting Fine Grained Relaxations, see: SQL Fine Grained Relaxations.

For information on configuring HTML Cross-Site Scripting using the command line, see: Using the Command Line to Configure the HTML Cross-Site Scripting Check.

For information on configuring HTML Cross-Site Scripting using the GUI, see: Using the GUI to Configure the HTML Cross-Site Scripting Check.

For information on using the Learn Feature with the HTML Cross-Site Scripting Check, see: Using the Learn Feature with the HTML Cross-Site Scripting Check.

For information on using the Log Feature with the HTML Cross-Site Scripting Check, see: Using the Log Feature with the HTML Cross-Site Scripting Check.

For information on statistics for the HTML Cross-Site Scripting violations, see: Statistics for the HTML Cross-Site Scripting Violations.
For information on HTML Cross-Site Scripting highlights, see: Highlights.

For information about XML Cross-Site Scripting, visit: XML Cross-Site Scripting Check.

For information on using the command line to configure the Buffer Overflow Security Check, see: Using the Command Line to Configure the Buffer Overflow Security Check.

For information on using the GUI to configure the Buffer Overflow Security Check, see: Configure Buffer Overflow Security Check by using the NetScaler ADC GUI.

For information on using the Log Feature with the Buffer Overflow Security Check, see: Using the Log Feature with the Buffer Overflow Security Check.

For information on Statistics for the Buffer Overflow violations, see: Statistics for the Buffer Overflow Violations.

For information on the Buffer Overflow Security Check Highlights, see: Highlights.

For information on Adding or Removing a Signature Object, see: Adding or Removing a Signature Object.

For information on creating a signatures object from a template, see: To Create a Signatures Object from a Template.

For information on creating a signatures object by importing a file, see: To Create a Signatures Object by Importing a File.

For information on creating a signatures object by importing a file using the command line, see: To Create a Signatures Object by Importing a File using the Command Line.

For information on removing a signatures object by using the GUI, see: To Remove a Signatures Object by using the GUI.

For information on removing a signatures object by using the command line, see: To Remove a Signatures Object by using the Command Line.

For information on configuring or modifying a signatures object, see: Configuring or Modifying a Signatures Object.

For more information on updating a signature object, see: Updating a Signature Object.

For information on using the command line to update Web Application Firewall Signatures from the source, see: To Update the Web Application Firewall Signatures from the Source by using the Command Line.

For information on updating a signatures object from a NetScaler format file, see: Updating a Signatures Object from a NetScaler Format File.

For information on updating a signatures object from a supported vulnerability scanning tool, see: Updating a Signatures Object from a Supported Vulnerability Scanning Tool.

For information on Snort Rule Integration, see: Snort Rule Integration.

For information on configuring Snort Rules, see: Configure Snort Rules.

For information about configuring Bot Management using the command line, see: Configure Bot Management.

For information about configuring bot management settings for device fingerprint technique, see: Configure Bot Management Settings for Device Fingerprint Technique.

For information on configuring bot allow lists by using the NetScaler ADC GUI, see: Configure Bot White List by using NetScaler ADC GUI.

For information on configuring bot block lists by using the NetScaler ADC GUI, see: Configure Bot Black List by using NetScaler ADC GUI.

For more information on configuring Bot management, see: Configure Bot Management.

Prerequisites

Before attempting to create a VPX instance in AWS, users should ensure they have the following:

An AWS account to launch a NetScaler ADC VPX AMI in an Amazon Web Services (AWS) Virtual Private Cloud (VPC). Users can create an AWS account for free at Amazon Web Services: AWS.

An AWS Identity and Access Management (IAM) user account to securely control access to AWS services and resources for users.
For more information about how to create an IAM user account, see the topic: Creating IAM Users (Console).

An IAM role is mandatory for both standalone and high availability deployments. The IAM role must have the following privileges:

ec2:DescribeInstances
ec2:DescribeNetworkInterfaces
ec2:DetachNetworkInterface
ec2:AttachNetworkInterface
ec2:StartInstances
ec2:StopInstances
ec2:RebootInstances
ec2:DescribeAddresses
ec2:AssociateAddress
ec2:DisassociateAddress
ec2:AssignPrivateIpAddresses
autoscaling:*
sns:*
sqs:*
cloudwatch:*
iam:SimulatePrincipalPolicy
iam:GetRole

For more information on IAM permissions, see: AWS Managed Policies for Job Functions.

If the NetScaler CloudFormation template is used, the IAM role is automatically created. The template does not allow selecting an already created IAM role.

Note: When users log on to the VPX instance through the GUI, a prompt to configure the required privileges for the IAM role appears. Ignore the prompt if the privileges have already been configured.

Note: The AWS CLI is required to use all the functionality provided by the AWS Management Console from the terminal program. For more information, see the AWS CLI user guide: What Is the AWS Command Line Interface? Users also need the AWS CLI to change the network interface type to SR-IOV.

For more information about NetScaler ADC and AWS, including support for the NetScaler Networking VPX within AWS, see the NetScaler ADC and Amazon Web Services Validated Reference Design guide: NetScaler ADC and Amazon Web Services Validated Reference Design.

Limitations and Usage Guidelines

The following limitations and usage guidelines apply when deploying a NetScaler ADC VPX instance on AWS:

Users should read the AWS terminology listed above before starting a new deployment.

The clustering feature is supported only when provisioned with NetScaler ADM Auto Scale Groups.

For the high availability setup to work effectively, associate a dedicated NAT device with the management interface or associate an Elastic IP (EIP) with the NSIP. For more information on NAT, see the AWS documentation: NAT Instances.

Data traffic and management traffic must be segregated with ENIs belonging to different subnets.

Only the NSIP address must be present on the management ENI.

If a NAT instance is used for security instead of assigning an EIP to the NSIP, appropriate VPC level routing changes are required. For instructions on making VPC level routing changes, see the AWS documentation: Scenario 2: VPC with Public and Private Subnets.

A VPX instance can be moved from one EC2 instance type to another (for example, from m3.large to m3.xlarge). For more information, visit: Limitations and Usage Guidelines.

For storage media for VPX on AWS, NetScaler recommends EBS, because it is durable and the data is available even after it is detached from the instance.

Dynamic addition of ENIs to VPX is not supported. Restart the VPX instance to apply the update. NetScaler recommends that users stop the standalone or HA instance, attach the new ENI, and then restart the instance.

The primary ENI cannot be changed or attached to a different subnet once it is deployed. Secondary ENIs can be detached and changed as needed while the VPX is stopped.

Users can assign multiple IP addresses to an ENI. The maximum number of IP addresses per ENI is determined by the EC2 instance type; see the section "IP Addresses Per Network Interface Per Instance Type" in Elastic Network Interfaces: Elastic Network Interfaces.
Users must allocate the IP addresses in AWS before they assign them to ENIs. For more information, see Elastic Network Interfaces: Elastic Network Interfaces.

NetScaler recommends that users avoid using the enable and disable interface commands on NetScaler ADC VPX interfaces.

The NetScaler ADC set ha node <NODE_ID> -haStatus STAYPRIMARY and set ha node <NODE_ID> -haStatus STAYSECONDARY commands are disabled by default.

IPv6 is not supported for VPX.

Due to AWS limitations, these features are not supported:

Gratuitous ARP (GARP)

L2 mode (bridging). Transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP.

Tagged VLAN

Dynamic Routing

Virtual MAC

For RNAT, routing, and transparent virtual servers to work, ensure Source/Destination Check is disabled for all ENIs in the data path. For more information, see "Changing the Source/Destination Checking" in Elastic Network Interfaces: Elastic Network Interfaces.

In a NetScaler ADC VPX deployment on AWS, in some AWS regions, the AWS infrastructure might not be able to resolve AWS API calls. This happens if the API calls are issued through a non-management interface on the NetScaler ADC VPX instance. As a workaround, restrict the API calls to the management interface only. To do that, create an NSVLAN on the VPX instance and bind the management interface to the NSVLAN by using the appropriate command. For example:

set ns config -nsvlan <vlan id> -ifnum 1/1 -tagged NO

save config

Restart the VPX instance at the prompt. For more information about configuring nsvlan, see Configuring NSVLAN: Configuring NSVLAN.

In the AWS console, the vCPU usage shown for a VPX instance under the Monitoring tab might be high (up to 100 percent), even when the actual usage is much lower. To see the actual vCPU usage, navigate to View all CloudWatch metrics. For more information, see: Monitor your Instances using Amazon CloudWatch. Alternatively, if low latency and performance are not a concern, users can enable the CPU Yield feature, which allows the packet engines to idle when there is no traffic. Visit the Citrix Support Knowledge Center for more details about the CPU Yield feature and how to enable it.

Technical Requirements

Before users launch the Quick Start Guide to begin a deployment, the user account must be configured as specified in the following table. Otherwise, the deployment might fail.

Resources

If necessary, sign in to the user Amazon account and request service limit increases for the following resources here: AWS/Sign in. You might need to do this if you already have an existing deployment that uses these resources and you think you might exceed the default limits with this deployment. For default limits, see the AWS Service Quotas in the AWS documentation: AWS Service Quotas. The AWS Trusted Advisor, found here: AWS/Sign in, offers a service limits check that displays usage and limits for some aspects of some services.

This deployment uses the following resources:

VPCs: 1

Elastic IP addresses: 0/1 (for Bastion host)

IAM security groups: 3

IAM roles: 1

Subnets: 6 (3 per Availability Zone)

Internet Gateway: 1

Route Tables: 5

WAF VPX instances: 2

Bastion host: 0/1

NAT gateway: 2

Regions

NetScaler WAF on AWS isn't currently supported in all AWS Regions. For a current list of supported Regions, see AWS Service Endpoints in the AWS documentation: AWS Service Endpoints. For more information on AWS regions and why cloud infrastructure matters, see: Global Infrastructure.
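To confirm which Regions are reachable from the user account before choosing a deployment Region, users can also query them from the AWS CLI. This is an optional, illustrative command that assumes the AWS CLI is installed and configured with credentials:

aws ec2 describe-regions --output table

The output lists each Region name and endpoint; cross-check it against the supported Regions list referenced above.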
Key Pair

Make sure that at least one Amazon EC2 key pair exists in the user AWS account in the Region where users are planning to deploy using the Quick Start Guide. Make note of the key pair name. Users are prompted for this information during deployment. To create a key pair, follow the instructions for Amazon EC2 Key Pairs and Linux Instances in the AWS documentation: Amazon EC2 Key Pairs and Linux Instances.

If users are deploying the Quick Start Guide for testing or proof-of-concept purposes, we recommend that they create a new key pair instead of specifying a key pair that's already being used by a production instance.
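Alternatively, a key pair can be created from the AWS CLI. The following is an illustrative sketch only; it assumes the AWS CLI is configured for the target Region, and the key name and file name are placeholders. Store the resulting private key file securely:

aws ec2 create-key-pair --key-name netscaler-waf-keypair --query "KeyMaterial" --output text > netscaler-waf-keypair.pem

chmod 400 netscaler-waf-keypair.pem

Make note of the key pair name used here, as it is the name to supply when prompted during deployment.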