NetScaler ADC CPX in Kubernetes with Diamanti and Nirmata Validated Reference Design Part 1

    Validation Status: Validated

    September 12, 2022

    Author:  Luis Ugarte, Beth Pollack

    Continued in Part 2

    Features and functions to be tested

    Test cases: CPX as Ingress controller and device for North-South and Hairpin East-West:

    Setup for all test cases except for VPX as North-South:

    • Two CPXs in a cluster (CPX-1, CPX-2)
    • ADM as a licensing server
    • Prometheus exporter container in a cluster
    • Prometheus server and Grafana (either as pods in Kubernetes or external to Kubernetes server)
    • Several front-end apps
    • Several back-end apps

    I. VPX as North-South

    1. VPX on an SDX front-end Diamanti platform


      • Test SSL offload and re-encryption with insertion of the X-Forwarded-For header for every SSL connection.
      • Insert the X-Forwarded-For header on SSL sessions.

    II. CPX as North-South device

    1. CPX-1. Set up HTTPS ingress with support for two or three HTTPS apps with a specified ingress class:


      • Demonstrate creation of multiple content switching policies: one per front-end app.
      • Demonstrate multiple wildcard certificates per CPX: one wildcard certificate per app.
      • Demonstrate CPX offloading and re-encrypting traffic to the front-end apps.
      • Demonstrate different load balancing algorithms.
      • Demonstrate persistence to one pod.
    2. CPX-1. Set up separate TCP ingress with specified ingress class:


      • Insert a TCP app such as MongoDB.
      • Show TCP VIP creation.
      • Show TCP client traffic hitting MongoDB pod.
      • Show default TCP app health checking.
    3. CPX-1. Set up separate TCP-SSL ingress with specified ingress class:


      • Demonstrate SSL offload and re-encryption for TCP-SSL VIP.
      • Repeat test case 2.
    4. CPX per app. Use of separate ingress class:


      • Repeat test cases 1–3 using CPX-2 supporting one app only.
    5. CPX per team. Use of ingress class:


      • Assign different ingress classes for 2 teams.
      • Demonstrate test case 1 as evidence that CPX can configure ingress rules for individual teams.
    6. Autoscale the front-end pods:


      • Increase traffic to the front-end pods and ensure that the pods autoscale.
      • Show that CPX-1 adds new pods to service group.
      • Demonstrate for HTTPS ingress VIP.
    7. 4–7 vCPU Support:


      • Configure CPX-1 with 4 or 7 vCPUs.
      • Show performance tests of HTTPS TPS and encrypted bandwidth throughput.

    III. CPX as Hairpin East-West device

    1. CPX-1. Create HTTPS ingress for North-South traffic as described in section II.1:


      • Expose the back-end app to the front-end app.
      • Show traffic between both apps.
      • Expose the back-end app to another back-end app.
      • Show traffic between the apps.
    2. CPX-1. Follow the directions from step 1. Also, show the end-to-end encryption:


      • Back-end app to back-end app encrypted with CPX-1 doing offload and re-encryption.
    3. Autoscale back-end pods:


      • Demonstrate CPX-1 adding autoscaled back-end pods to the service group.

    IV. CPX integration with Prometheus and Grafana

    1. Insert Prometheus container into the Kubernetes cluster:


      • Configure the container with recommended counters for export for each app.
      • Demonstrate exporter container sending counter data to Prometheus server.
      • Show Grafana dashboard illustrating data from Prometheus server coming from CPXs.
      • The goal is to show that developers can use popular cloud-native tools for DevOps.
    2. Demonstrate integration with Kubernetes rolling deployments:


      • Insert new version of app in Nirmata.
      • Show Kubernetes deploying new app version into the cluster.
      • Demonstrate CPX responding to rolling deploy commands from Kubernetes by shifting 100% of traffic from the old version of the app to the new version.


    Citrix solution for NetScaler ADC CPX deployment

    1. Custom protocols: By default, the Citrix ingress controller automates configuration with the default protocols (HTTP/SSL). The Citrix ingress controller also supports configuring custom protocols (TCP/SSL-TCP/UDP) using annotations.




      ingress.citrix.com/insecure-service-type: "tcp" [Annotation to select the LB protocol]


      ingress.citrix.com/insecure-port: "53" [Annotation to support a custom port]
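Taken together, these annotations can appear in a single ingress definition. The following is a minimal sketch for a TCP app; the mongodb-svc service name, host, and port are illustrative, not from the reference setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mongodb-ingress
  annotations:
    # Configure the virtual server for plain TCP instead of the default HTTP/SSL
    ingress.citrix.com/insecure-service-type: "tcp"
    # Expose the TCP virtual server on a custom port
    ingress.citrix.com/insecure-port: "27017"
spec:
  rules:
  - host: mongodb.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mongodb-svc
            port:
              number: 27017
```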


    2. Fine-tuning CS/LB/Servicegroup parameters: By default, the Citrix ingress controller configures the ADC with default parameters. The parameters can be fine-tuned with the help of NetScaler ADC entity-parameter (lb/servicegroup) annotations.




      LB-Method: ingress.citrix.com/lbvserver: '{"app-1":{"lbmethod":"ROUNDROBIN"}}'


      Persistence: ingress.citrix.com/lbvserver: '{"app-1":{"persistencetype":"sourceip"}}'
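Both parameters can also be set in a single annotation; a short sketch, where app-1 is an illustrative back-end service name:

```yaml
metadata:
  annotations:
    # Round robin load balancing and source-IP persistence on the lbvserver created for app-1
    ingress.citrix.com/lbvserver: '{"app-1":{"lbmethod":"ROUNDROBIN","persistencetype":"sourceip"}}'
```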




    3. Per-app SSL encryption: The Citrix ingress controller can selectively enable SSL encryption for apps with the help of smart annotations.




      ingress.citrix.com/secure_backend: '{"web-backend": "True"}' [Annotation to selectively enable encryption per application]
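In context, the annotation sits alongside the rest of the ingress metadata; web-backend is an illustrative service name, and any service not listed in the annotation keeps unencrypted back-end traffic:

```yaml
metadata:
  annotations:
    # Re-encrypt traffic only to the web-backend service
    ingress.citrix.com/secure_backend: '{"web-backend": "True"}'
```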


    4. Default cert for ingress: The Citrix ingress controller can take a default certificate as an argument. If the ingress definition doesn't specify a secret, the default certificate is used. The secret needs to be created once in the namespace, and then all the ingresses in that namespace can use it.
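The secret itself is an ordinary Kubernetes TLS secret created once per namespace; a sketch, where the secret name, namespace, and base64 placeholders are assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: default-cert          # illustrative name
  namespace: team-hotdrink
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder
```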


    5. Citrix multiple ingress class support: By default, the Citrix ingress controller listens for all the ingress objects in the Kubernetes cluster. You can control the configuration of the ADC (Tier-1 MPX/VPX and Tier-2 CPX) with the help of ingress class annotations. This helps each team manage the configuration of their ADC independently. Ingress classes can also help when deploying solutions that configure the ADC for a particular namespace or a group of namespaces. This support is more generic than that provided by other vendors.




      kubernetes.io/ingress.class: "citrix" [Notify the Citrix ingress controller to configure only the ingresses belonging to a particular class]
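For the per-team setup, each team tags its ingress objects with its own class; the class name below is illustrative:

```yaml
metadata:
  annotations:
    # Only the controller instance watching this class picks up the object
    kubernetes.io/ingress.class: "citrix-hotdrink"
```

The matching CPX's ingress controller is typically started with a corresponding `--ingress-classes` argument so that it ignores the other teams' objects.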


    6. Visibility: The Citrix Kubernetes solution integrates with CNCF visibility tools like Prometheus and Grafana for metric collection, supporting better debugging and analytics. The Citrix Prometheus exporter makes metrics available to Prometheus for visualization in Grafana as time series charts.
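On the Prometheus side, the exporter is scraped like any other target; a sketch of a prometheus.yml scrape job, where the service DNS name and port are assumptions about your deployment:

```yaml
scrape_configs:
  - job_name: 'citrix-adc-cpx'
    static_configs:
      # DNS name and port of the exporter service are illustrative
      - targets: ['exporter.cpx.svc.cluster.local:8888']
```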


    For more information about using the microservices architecture, see the README.md file in GitHub. You can find the .yaml files in the Config folder.

    POC story line

    There are three teams running their apps on the Kubernetes cluster. The configuration of each team is independently managed on different CPXs with the help of Citrix ingress classes.

    The apps for each team are running in separate namespaces (team-hotdrink, team-colddrink, and team-redis), and all the CPXs are running in the cpx namespace.

    team-hotdrink: SSL/HTTP ingress, persistence, lbmethod, encryption/decryption per application, default-cert.

    team-colddrink: SSL-TCP Ingress

    team-redis: TCP Ingress

    POC setup


    Application flow

    HTTP/SSL/SSL-TCP use-case:


    TCP use-case:


    Getting the docker images

    The provided YAML commands fetch the images from the Quay repository.

    The images can be pulled and stored in the local repository too. You can use them by editing the Image parameter in YAML.
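For example, the Image parameter can be rewritten with sed; the local registry name is a placeholder, and the manifest line shown is written to a scratch file purely for illustration:

```shell
# Example Image line as shipped in the YAML (illustrative image tag)
printf 'image: quay.io/citrix/citrix-k8s-cpx-ingress:13.0\n' > /tmp/cpx-image-demo.yaml

# Rewrite it to pull from a local registry instead (registry.local:5000 is a placeholder)
sed -i 's#quay.io#registry.local:5000#' /tmp/cpx-image-demo.yaml

cat /tmp/cpx-image-demo.yaml   # image: registry.local:5000/citrix/citrix-k8s-cpx-ingress:13.0
```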


    Step-by-step application and CPX deployment using Nirmata

    1. Upload the cluster roles and cluster role bindings in YAML, and apply them in the cluster using Nirmata (rbac.yaml).


      1. Go to the Clusters tab.
      2. Select the cluster.
      3. In settings, apply YAML from the Apply YAMLs option.
    2. Create the environment for running CPX and the apps.


      1. Go to the Environment tab.
      2. Click Add Environment tab.
        • Select the cluster and create environment in the shared namespace.
      3. Create the following environments for running Prometheus, CPX, and apps for different teams.
        • Create environment: cpx
        • Create environment: team-hotdrink
        • Create environment: team-colddrink
        • Create environment: team-redis
    3. Upload the .yaml application using Nirmata.


      1. Go to the Catalog tab.
      2. Click Add Application.
      3. Click Add to add the applications.


        Add application: team-hotdrink (team_hotdrink.yaml). Application name: team-hotdrink.


        Add application: team-colddrink (team_coldrink.yaml). Application name: team-colddrink.


        Add application: team-redis (team_redis.yaml). Application name: team-redis.


        Add application: cpx-svcacct (cpx_svcacct.yaml). Application name: cpx-svcacct.





        CPX with the in-built Citrix ingress controller requires a service account in the namespace where it is running. For the current version of Nirmata, create this using cpx_svcacct.yaml in the cpx environment.



        Add application: cpx (cpx_wo_sa.yaml). Application name: cpx.


    4. Run the CPX using Nirmata.


      1. Go to the Environment tab and select the correct environment.
      2. Click Run Application to run the application.
      3. In the cpx environment, run the cpx-svcacct application. Select cpx-svcacct with the run name cpx-svcacct from the Catalog Application.
      4. In the cpx environment, run the cpx application. Select cpx from the Catalog Application.
    5. There are a couple of small workarounds needed for the CPX deployment, because the setup uses an earlier version of Nirmata.


      1. When creating the CPX deployments, do not set the serviceAccountName. The serviceAccountName can be added later; adding it automatically redeploys the pods.
      2. Import the TLS secret for the ingress directly in the environment. This ensures that the type field is preserved.
      3. After running the application, go to the cpx application.
      4. Under the Deployments > StatefulSets & DaemonSets tab, click the cpx-ingress-colddrinks deployment.
      5. On the next page, edit the Pod template. Enter cpx in the Service Account field.
      6. Go back to the CPX application.
      7. Repeat the same procedure for the cpx-ingress-hotdrinks and cpx-ingress-redis deployments.

    Applying the service account redeploys the pods. Wait for the pods to come up, and confirm that the service account has been applied.


    This can be verified by using the following command in the Diamanti cluster.

    [diamanti@diamanti-250 ~]$ kubectl get deployment -n cpx -o yaml | grep -i account
        serviceAccount: cpx
        serviceAccountName: cpx
        serviceAccount: cpx


    Note: If the serviceAccount is not applied, delete the CPX pods. The deployment recreates them, and the new pods come up with the serviceAccount.




    1. Run the applications using Nirmata.


      team-hotdrink application:


      1. Go to the Environment tab and select the correct environment: team-hotdrink.
      2. In the team-hotdrink environment, run the team-hotdrink application with the team-hotdrink run name. Select team-hotdrink from the Catalog Application.
      3. Go to the team-hotdrink application. In the upper-right corner of the screen, click Settings and select Import to Application. Upload hotdrink-secret.yaml.
    2. team-colddrink application:


      1. Go to the Environment tab and select the correct environment: team-colddrink.
      2. In the team-colddrink environment, run the team-colddrink application with the team-colddrink run name. Select team-colddrink from the Catalog Application.
      3. Go to the team-colddrink application. In the upper-right corner of the screen, click Settings and select Import to Application. Upload colddrink-secret.yaml.
    3. team-redis application:


      1. Go to the Environment tab and select the correct environment: team-redis.
      2. In the team-redis environment, run an application with the team-redis run name. Select team-redis from the Catalog Application.


    Continued in Part 2
