
Mayur Patil
Internal Members · 21 posts

Everything posted by Mayur Patil

  1. Hello Garry, can you please share the complete CIC log along with the CIC and Ingress manifest files so we can check the configuration? You can share the requested details at netscaler-appmodernization@cloud.com or raise a support case.
  2. NetScaler GSLB for your multi-Kubernetes microservice deployments. NetScaler Application Delivery Controller (ADC) Global Server Load Balancing (GSLB) is a DNS-based solution that covers a range of technologies for distributing resources across multi-site data center locations. Customers are migrating application workloads from monoliths to microservices, where applications are deployed across multiple Kubernetes clusters for high availability. NetScaler can distribute traffic across multiple Kubernetes clusters, providing resiliency to microservice applications. This lab demonstrates how the NetScaler GSLB controller, deployed inside a Kubernetes cluster, automates the configuration of the NetScaler GSLB appliance. The lab shows how to:
  • Deploy an application in Kubernetes clusters across two data centers/sites
  • Deploy the GSLB controller in the Kubernetes cluster
  • Configure NetScaler GSLB using the GSLB controller
  • Distribute the ingress traffic to applications deployed across the sites using NetScaler
Click "Start hands-on Lab" at the top of the post to try it out! Let us know your feedback or any issues in the comments section.
  3. In order to export IPFIX (or NetFlow) records to Splunk, you need to configure Splunk as an AppFlow collector. The detailed configuration is available at https://docs.netscaler.com/en-us/citrix-adc/current-release/ns-ag-appflow-intro-wrapper-con/ns-ag-appflow-config-tsk The Splunk-side configuration is explained on the Splunk website: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/UseStreamtoingestNetflowandIPFIXdata
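As a minimal sketch of the NetScaler-side AppFlow configuration from the documentation linked above: the collector IP 192.0.2.10 and entity names (splunk_col, splunk_act, splunk_pol) are placeholders, and 4739 is the standard IPFIX port that Splunk Stream listens on by default; verify each against your deployment.

```
add appflow collector splunk_col -IPAddress 192.0.2.10 -port 4739
add appflow action splunk_act -collectors splunk_col
add appflow policy splunk_pol true splunk_act
bind appflow global splunk_pol 10 END -type REQ_DEFAULT
```

With this in place, NetScaler exports IPFIX records for all matching request traffic to the collector, which Splunk Stream then ingests.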
  4. Hello @Danny Gullick, thank you for your message. NetScaler 13.0 supports Prometheus integration in PUSH mode: NetScaler pushes the metrics to a Prometheus Push Gateway or to the NetScaler Observability Exporter, from which Prometheus PULLs the metrics. NetScaler 13.1 Prometheus integration is direct: Prometheus PULLs/scrapes the metrics directly from NetScaler. Hence I would recommend you upgrade your NetScaler to 13.1 for better integration.
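For illustration, a Prometheus scrape job for the 13.1 direct-PULL mode might look like the sketch below. The target address, credentials, and metrics path are assumptions, not values from this thread; check the NetScaler 13.1 Prometheus documentation for the exact endpoint and the analytics profile it requires.

```yaml
# prometheus.yml fragment (illustrative only)
scrape_configs:
  - job_name: "netscaler"
    metrics_path: /metrics          # assumed endpoint; verify in the 13.1 docs
    scheme: https
    basic_auth:
      username: "prometheus_ro"     # placeholder read-only NetScaler user
      password: "secret"            # placeholder
    static_configs:
      - targets: ["192.0.2.5:443"]  # placeholder NSIP
```

In PUSH mode on 13.0, the equivalent job would instead target the Push Gateway or the Observability Exporter.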
  5. NetScaler provides/supports different topologies for microservice applications. See https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment for our recommended topologies for cloud-native/microservice deployments.
  6. Any NetScaler can be an Ingress proxy for microservices. You can use the how-to guide at https://github.com/citrix/cloud-native-getting-started/blob/master/on-prem/Unified-Ingress/README.md to set up SDX/MPX as an Ingress proxy for OpenShift Kubernetes apps.
  7. Yes, NetScaler cloud-native deployments are validated on the Rancher and OpenShift Kubernetes platforms.
  8. @andrea arfaioli thank you for your message. CIC does not support URL transformation today; however, if you let us know your use case, we will help you create a rewrite CRD that can be applied directly to NetScaler to get the same output.
  9. Will you help me understand your use case: are you exporting NetScaler data to Elasticsearch and facing an issue? We have documentation to deploy Elasticsearch, along with a how-to guide for exporting NetScaler telemetry data to Elasticsearch and visualising it in a Kibana dashboard: https://docs.citrix.com/en-us/citrix-observability-exporter/deploy-coe-with-es.html#deploy-elasticsearch-and-kibana-using-yaml-files
  10. Canary deployments for cloud-native apps with Citrix Ingress Controller

Submitted February 15, 2021 | Author: Bharathi M

You work hard to mitigate the risks involved in delivering your applications. But despite all the automation, monitoring, alerts, notifications, and code review, sometimes things go wrong. You can't always prevent issues from arising, but you can mitigate the risks with the strategy you choose for deploying updates to an application in a production environment. In this blog post, we'll look at canary deployments for cloud-native applications and how Citrix Ingress Controller (CIC) can help ensure you deliver a great end-user experience, even while you're making significant updates to your applications in a production environment.

What Are Canary Deployments?

A canary deployment involves deploying new versions of an app in phased, incremental steps. The purpose? You're deploying these changes to a small set of users first so you can identify and correct any issues and determine whether you're ready to roll out the deployment to your entire user base. Canary deployments have many benefits, including:

  • Reduced risk associated with software releases
  • Simpler steps to execute
  • Easy to automate
  • No downtime
  • A shorter deployment cycle, by supporting earlier, more frequent product deployments
  • No disruption to the running application or the production environment

How Does a Canary Deployment Work?

A canary deployment has three stages. First, you deploy the new version of your canary services to a small percentage of your users. Then you evaluate the canary deployment by collecting feedback from users and monitoring errors, performance, and metrics using tools like Prometheus and Grafana. Finally, as you get comfortable with the evaluation results from your canary services, you gradually migrate more and more users to the canary version.

Stage 1: Deploy the new version of the application to 10 percent of application users.
Stage 2: Evaluate the canary deployment using monitoring tools.

Stage 3: Roll out the new canary version to all users.

Citrix Ingress Controller for Canary Deployments in Kubernetes

When you run the canary and production versions of an application in parallel, the traffic-controlling mechanism is responsible for:

  • Avoiding interruption to the service. For example, when a new version appears to misbehave, you can quickly divert traffic to the production version.
  • Allowing percentage-based control (some percentage of the traffic is directed to the new version) and client-dependent control (traffic based on HTTP headers such as User-Agent and Referer is directed to the new version).
  • Ensuring that all HTTP requests from a client are directed to the same version, based on the traffic-control configuration.

Let's look at how to achieve canary-based traffic routing at the ingress level using Citrix Ingress Controller (CIC). With CIC, the user must define an additional Ingress object with canary annotations to indicate that the application requests need to be served based on the canary type. The canary types you can configure using CIC are, in priority order: canary by header value, canary by header, and canary by weight. Please note that if an Ingress with the canary-by-header-value annotation is defined without canary by header, the canary deployment will not be considered.

Canary by Weight

Weight-based canary deployment is a widely used canary deployment approach. Here, you can set the weight from 0 to 100, which determines the percentage of traffic directed to the canary version (and the production version) of an application. You can begin with the weight set to 0, which means no traffic is forwarded to the canary version. After you start your canary deployment, you can change the weight so you are directing a percentage of traffic to the canary version.
Then, after you're ready to deploy your canary version to production, simply change the weight to 100 to ensure all the traffic is directed to the new version of your application. To do this using CIC, create a new Ingress with the canary annotation "ingress.citrix.com/canary-weight:" and include the percentage of traffic to be directed to the canary version.

Let's consider an example guestbook application and deploy it using the following deployment, service, and ingress YAMLs:

```
kubectl apply -f guestbook-deployment.yaml
deployment.extensions "guestbook" created
kubectl apply -f guestbook-service.yaml
service "guestbook" created
kubectl apply -f guestbook-ingress.yaml
ingress.extensions "guestbook" created
```

The production version of our guestbook service is now running. Next, we'll deploy the canary version of the same service using the following YAMLs:

```
kubectl apply -f canary-deployment.yaml
deployment.extensions "guestbook-canary" created
kubectl apply -f canary-service.yaml
service "guestbook-canary" created
kubectl apply -f canary-ingress.yaml
ingress.extensions "canary-by-weight" created
```

canary-ingress.yaml:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: canary-by-weight
  annotations:
    kubernetes.io/ingress.class: "citrix"
    ingress.citrix.com/canary-weight: "10"
spec:
  rules:
  - host: webapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: guestbook-canary
          servicePort: 80
```

Here, ingress.citrix.com/canary-weight: "10" tells the CIC to configure NetScaler ADC so that 10 percent of the total requests destined for webapp.com go to the guestbook-canary service (the canary version of our guestbook application).

Canary by Header

We can also use HTTP request headers to achieve our canary deployment. These headers are controlled by the client and notify the ingress to route the request to the service specified in the canary Ingress.
The request will be routed to the service specified in the canary Ingress if the request header matches the header name mentioned in the Ingress annotation "ingress.citrix.com/canary-by-header:". For a canary-by-header deployment using CIC, simply replace the canary annotation "ingress.citrix.com/canary-weight:" with "ingress.citrix.com/canary-by-header:" in the canary-ingress.yaml from our example above.

Canary by Header Value

A canary-by-header-value approach helps us achieve our canary deployment using the value of the HTTP request header. Along with the request header from the canary Ingress annotation ("ingress.citrix.com/canary-by-header:"), when the request header value matches the value specified in the Ingress annotation "ingress.citrix.com/canary-by-header-value:", the request is routed to the service specified in the canary Ingress. For a canary-by-header-value-based deployment using CIC, just replace the canary annotation "ingress.citrix.com/canary-weight:" with "ingress.citrix.com/canary-by-header:" and "ingress.citrix.com/canary-by-header-value:" in the canary-ingress.yaml shown in our example above.

Conclusion

Canary deployments provide the flexibility you need to support deployment of your cloud-native apps. When you combine a canary deployment strategy with a fast CI/CD workflow, you end up with a productive, feature-rich release cycle that helps you deliver a great user experience. Take a look at these examples and try your own canary deployment using CIC. And check out our blog post on using NetScaler ADC to implement canary deployments for your applications.
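Putting the two header-based approaches together, a canary Ingress for header-value routing might look like this sketch. The header name X-Canary and the value always are illustrative placeholders, not values from the blog post; check the CIC annotation documentation for the exact matching semantics.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: canary-by-header-value
  annotations:
    kubernetes.io/ingress.class: "citrix"
    ingress.citrix.com/canary-by-header: "X-Canary"      # header to inspect (placeholder name)
    ingress.citrix.com/canary-by-header-value: "always"  # value that routes the request to the canary
spec:
  rules:
  - host: webapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: guestbook-canary
          servicePort: 80
```

A client would then opt into the canary with something like `curl -H "Host: webapp.com" -H "X-Canary: always" http://<ingress IP>/`.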
  11. Hey, you can reach out to the #AppModernization <appmodernization@citrix.com> email alias to connect with App Modernisation SMEs.
  12. # Load balance Ingress traffic with Citrix ADC CPX in Minikube

In this example, the Citrix ADC CPX (a containerized form factor) is used to route the Ingress traffic to a `Guestbook` microservice application deployed in a Minikube cluster. An Ingress resource is deployed in the Minikube cluster to define the rules for sending external traffic to the application.

**Prerequisite**: Ensure that you have installed and set up a Minikube cluster (this example is tested on Minikube v1.23 with Kubernetes 1.22.1 deployed).

Perform the following:

1. Deploy Citrix ADC CPX as an Ingress proxy in the Minikube cluster and verify the installation using the following commands.

```
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/cpx.yaml
kubectl get pods -l app=cpx-ingress
```

2. Deploy the `Guestbook` application in Minikube and verify the installation.

```
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/guestbook-app.yaml
kubectl get pods -l 'app in (guestbook, redis)'
```

3. Deploy an Ingress rule that sends traffic to http://www.guestbook.com.

```
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/guestbook-ingress.yaml
kubectl get ingress
kubectl get svc cpx-service
```

4. Send traffic to the `Guestbook` microservice application.

```
curl -s -H "Host: www.guestbook.com" http://<Minikube IP>:<NodePort> | grep Guestbook
```

5. (Optional) Clean up the deployments using the following commands.

```
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/cpx.yaml
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/guestbook-app.yaml
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/guestbook-ingress.yaml
```

For more information on the Citrix ingress controller, see the Citrix ingress controller documentation. For more tutorials, see beginners-guides.
  13. What is Application Modernisation?

Application Modernisation is the process of updating existing legacy applications using newer computing approaches, including newer languages, frameworks, and infrastructure platforms. You can see the trend in the market, where discussions are happening around migrating monolithic on-premises applications to cloud architectures or microservices. Application Modernisation allows enterprises to deliver new product features and functionality with velocity.

Why modernise legacy applications?

A good application modernisation strategy enables an organisation to reduce the resources required to run an application, increase the reliability of deployments, and improve uptime and resiliency. Legacy applications are costly to upgrade and have scaling issues; modern applications, on the other hand, are loosely coupled, easy to upgrade, and scalable.

Deep dive into Application Modernisation patterns...

  • Lift and Shift - The simplest, fastest-time-to-market approach, where an application is moved as-is from its legacy form factor (monolithic) to a new form factor (cloud native).
  • Refactoring - A complex but robust approach, where legacy applications are broken down into multiple chunks and rewritten in the new environment. A microservice architecture is the best choice for organisations thinking of refactoring their applications.
  • Replatforming - Simpler than refactoring but more complex than lift and shift. Organisations take advantage of modern cloud architecture for legacy applications and move towards adopting refactoring in the longer run.
Key technologies for application modernisation

  • Private, hybrid, and multi-cloud - Also called cloud computing, where an organisation decides to move from on-premises to the cloud (public, private, or hybrid) depending on business need.
  • Containers and Kubernetes - A trend becoming more popular in the market as organisations adopt microservice-based architectures.
  • Automation and orchestration - To address large-scale deployments, organisations are automating processes and application deployments, and adopting orchestration platforms that give them the ability to manage and control resources from one place.

Application Modernisation and NetScaler

The NetScaler App Modernisation stack enables every enterprise customer to adopt a modern architecture seamlessly. NetScaler proxy capabilities allow organisations to address traffic-management use cases with end-to-end visibility into user traffic. Refer to the NetScaler App Modernisation stack and get to know its capabilities today!
  14. An Ingress controller is a controller that monitors the Kubernetes API server for updates to the Ingress resource and configures the Ingress load balancer accordingly. Citrix has developed an open-source Ingress controller called Citrix Ingress Controller (CIC), deployed in a Kubernetes cluster to automate NetScaler configuration. CIC can configure all NetScaler form factors (MPX/SDX/VPX/BLX/CPX). Citrix Ingress Controller supports the following deployment options:

  • A sidecar container configuring a Tier 2 Ingress proxy (NetScaler CPX proxy).
  • A standalone deployment configuring a Tier 1 external Ingress proxy (MPX/SDX/BLX/VPX).

Note: This tutorial covers the basics of CIC deployment modes. For end-to-end examples, see the getting started guides.

Prerequisite: A Kubernetes cluster (the following example is tested on an on-premises v1.22.1 Kubernetes cluster).

Deploy the CIC as a sidecar with the NetScaler CPX proxy.

```
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/cpx.yaml
kubectl get pods -l app=cpx-ingress
```

There are two containers running in the same pod, indicated by 2/2 in the READY column: one container for the NetScaler CPX proxy and another for the CIC. You can see the details of both containers as follows.

```
kubectl describe pod $(kubectl get pods -l app=cpx-ingress | awk '{print $1}' | grep cpx-ingress)
```

Deploy CIC for configuring a Tier 1 NetScaler VPX.

```
wget https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/tier1-cic.yaml
```

Update the NS_IP environment variable in the tier1-cic.yaml file with the NetScaler management IP, then deploy the edited file in the Kubernetes cluster.

```
kubectl create -f tier1-cic.yaml
kubectl get pods -l app=cic-k8s-ingress-controller
kubectl describe pod <pod name>
```

Verify the CIC logs.

```
kubectl logs -f <pod name>
```

Clean up the Kubernetes cluster.

```
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/cpx.yaml
kubectl delete -f tier1-cic.yaml
```

For more information on all supported options for the Citrix Ingress Controller, see the Modern App deployment YAML manifest details. For more information on the Citrix ingress controller, see the Citrix ingress controller documentation. For more tutorials, see beginners-guides.
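For orientation, the NS_IP edit in tier1-cic.yaml touches a fragment roughly like the sketch below. The IP is a placeholder, and the surrounding env list in the real manifest may carry additional entries such as login credentials; treat this as illustrative rather than a copy of the file.

```yaml
# fragment of the CIC container spec (illustrative)
        env:
        - name: "NS_IP"
          value: "192.0.2.20"   # placeholder: NetScaler VPX management IP (NSIP)
```

CIC uses this address to push configuration to the Tier 1 VPX over its management interface.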
  15. To route client traffic to a microservice-based application, you have to write Ingress rules that must propagate to the Ingress proxy for better traffic distribution. The Ingress proxy acts as an Application Delivery Controller to load balance the microservice-based application. Citrix has a container-based Ingress proxy called NetScaler CPX. It provides an application load balancing, acceleration, security, and offload feature set in a simple, easy-to-install container. NetScaler CPX is built from the same code base as NetScaler and is packaged as a Docker container. In this article, NetScaler CPX (a containerized form factor) routes the Ingress traffic to the Apache microservice application.

NetScaler CPX supports the following deployment modes:

  • Deploy a standalone NetScaler CPX per Kubernetes cluster
  • Deploy NetScaler CPX per node
  • Deploy NetScaler CPX per namespace
  • Deploy NetScaler CPX with high availability (horizontal scaling)

Deploy a standalone NetScaler CPX (NetScaler CPX per Kubernetes cluster)

Deploy a standalone NetScaler CPX as an Ingress proxy.

```
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/standalone-cpx-mode.yaml
kubectl get pods -l app=cpx-ingress
kubectl get svc cpx-service
```

Send traffic to the Apache microservice.

```
curl -s -H "Host: www.ingress.com" http://<Master IP>:<NodePort>
```

Clean up the setup.

```
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/standalone-cpx-mode.yaml
```

Deploy NetScaler CPX per node of a Kubernetes cluster

Get the list of nodes in the cluster and deploy NetScaler CPX per node.
```
kubectl get nodes
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/cpx-per-node-mode.yaml
kubectl get pods -l app=cpx-ingress
kubectl get svc cpx-service
```

The number of cpx-ingress pods is equal to the number of nodes in the Kubernetes cluster that deploy pods. Send traffic to the Apache microservice.

```
curl -s -H "Host: www.ingress.com" http://<Master IP>:<NodePort>
```

Clean up the setup.

```
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/cpx-per-node-mode.yaml
```

Deploy NetScaler CPX per namespace

The following example shows how to deploy NetScaler CPX in multiple namespaces.

Create three namespaces in the Kubernetes cluster.

```
kubectl create namespace team-A
kubectl create namespace team-B
kubectl create namespace team-C
```

Deploy the NetScaler CPX in each namespace.

```
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/standalone-cpx-mode.yaml -n team-A
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/standalone-cpx-mode.yaml -n team-B
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/standalone-cpx-mode.yaml -n team-C
```

Deploy the colddrink microservice apps in all namespaces.
```
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-app.yaml -n team-A
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-app.yaml -n team-B
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-app.yaml -n team-C
```

Verify the deployment using the following commands:

```
kubectl get pods -l app=frontend-colddrinks -n team-A
kubectl get pods -l app=frontend-colddrinks -n team-B
kubectl get pods -l app=frontend-colddrinks -n team-C
```

Deploy Ingress rules that send traffic to http://www.colddrink.com.

```
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-ingress.yaml -n team-A
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-ingress.yaml -n team-B
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-ingress.yaml -n team-C
```

Verify the Ingress using the following commands:

```
kubectl get ingress -n team-A
kubectl get svc cpx-service -n team-A
kubectl get ingress -n team-B
kubectl get svc cpx-service -n team-B
kubectl get ingress -n team-C
kubectl get svc cpx-service -n team-C
```

Send traffic to each NetScaler CPX deployed in the different namespaces.

```
kubectl get pods -l app=cpx-ingress -n team-A
kubectl get pods -l app=cpx-ingress -n team-B
kubectl get pods -l app=cpx-ingress -n team-C
kubectl get svc -n team-A
kubectl get svc -n team-B
kubectl get svc -n team-C
```

Verify the NodePort for each CPX and create cURL requests.

```
curl -s -H "Host: www.colddrink.com" http://<Master IP>:<NodePort>
```

Clean up the setup.
```
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/standalone-cpx-mode.yaml -n team-A
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/standalone-cpx-mode.yaml -n team-B
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/standalone-cpx-mode.yaml -n team-C
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-app.yaml -n team-A
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-app.yaml -n team-B
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-app.yaml -n team-C
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-ingress.yaml -n team-A
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-ingress.yaml -n team-B
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/colddrink-ingress.yaml -n team-C
kubectl delete namespace team-A team-B team-C
```

High availability NetScaler CPX deployment

This example shows how to deploy NetScaler CPX in high availability mode.

```
kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/standalone-cpx-mode.yaml
kubectl get pods -l app=cpx-ingress
```

Scale up the NetScaler CPX to two instances.

```
kubectl scale deployment cpx-ingress --replicas=2
kubectl get pods -l app=cpx-ingress
```

Now both NetScaler CPXs are capable of taking distributed Ingress traffic.
Scale down the NetScaler CPX pods to a single instance.

```
kubectl scale deployment cpx-ingress --replicas=1
kubectl get pods -l app=cpx-ingress
```

Test the Kubernetes self-healing mechanism.

```
kubectl get pods -l app=cpx-ingress
kubectl delete pod <cpx-ingress pod name>
kubectl get pods -l app=cpx-ingress
```

You can see that a new NetScaler CPX pod comes up immediately after a running pod goes down.

Try out horizontal pod autoscaling for NetScaler CPX. NetScaler CPX supports Horizontal Pod Autoscaling (HPA) to automatically scale the number of pods in your workload based on different metrics, like actual resource usage. Try out CPX HPA from the [CPX HPA documentation](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/how-to/cpx-hpa/).

Clean up the setup.

```
kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/standalone-cpx-mode.yaml
```

For more information, see the NetScaler CPX documentation. For more information on the Citrix ingress controller, see the Citrix ingress controller documentation. For more tutorials, see beginners-guides.
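As a sketch of what an HPA for the cpx-ingress deployment could look like, here is a minimal autoscaling/v2 manifest. The 70% CPU target and the 1-5 replica range are illustrative choices, not values from the CPX HPA guide, which describes its own metrics setup.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cpx-ingress-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cpx-ingress       # the CPX deployment created above
  minReplicas: 1
  maxReplicas: 5            # illustrative upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # illustrative CPU target
```

Applying this replaces the manual `kubectl scale` steps with automatic scaling between 1 and 5 CPX pods.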
  16. Hello, there are some modifications required in the Ingress file to support the use case that you are targeting (CLIENT --> https --> VPX VS --> http --> app ingress (k8s)). Since you want an SSL connection at the front end, the Ingress file should define the tls section under spec (please refer to the sample Ingress file below, copied from https://github.com/citrix/example-cpx-vpx-for-kubernetes-2-tier-microservices/blob/master/on-prem/config/ingress_vpx.yaml).

  • secure-port: 443 - This annotation comes into the picture only after the tls section is present in the Ingress; it is used to set a custom port for the SSL connection other than 443.
  • secure-service-type - This annotation is used along with the tls section in the Ingress to define the type of the SSL vserver protocol.
  • insecure-port - This annotation is used with insecure-termination: "allow" to send traffic on the non-secure port along with the secure port; you will not need it, because you want an SSL connection on the front end.

Try to change the Ingress accordingly, and please refer to the link above, where end-to-end demo YAML files are located for your reference.
Sample Ingress file:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-vpx
  annotations:
    kubernetes.io/ingress.class: "vpx"
    ingress.citrix.com/insecure-termination: "redirect"
    ingress.citrix.com/frontend-ip: "x.x.x.x"
    ingress.citrix.com/secure_backend: '{"lb-service-hotdrinks": "True","lb-service-colddrinks": "True"}'  # This annotation helps to make the backend service secure
spec:
  tls:                            # This section is required for a secure front-end connection
  - secretName: hotdrink-secret   # This secret is used as the server certificate in the vserver config
  rules:
  - host: hotdrink.beverages.com
    http:
      paths:
      - path:
        backend:
          serviceName: lb-service-hotdrinks
          servicePort: 443
  - host: guestbook.beverages.com
    http:
      paths:
      - path:
        backend:
          serviceName: lb-service-guestbook
          servicePort: 80
  - host: colddrink.beverages.com
    http:
      paths:
      - path:
        backend:
          serviceName: lb-service-colddrinks
          servicePort: 443
```
  17. Hi, NetScaler has the capability to parse a JSON string using the XPATH-JSON (xp) expression, and NetScaler can insert/rewrite the JSON string as per the customer's requirement. Please find a sample below for better understanding.

If the original data is:

{ "PARENT" : {"name" : {"ganesh": "ramesh"},"C" : "abchijabc"}, "B" : "def" }

then the config will give the response:

{ "PARENT" : {"name" : {"ganesh": "ramesh"},"C" : "abchijabc","NEW" : "VALUE"}, "B" : "def" }

The config is:

```
add rewrite action ac6 insert_after "http.res.body(http.res.content_length).xpath_json_with_markup(xp%/PARENT/child::C%)" q/",\"NEW\" : \"VALUE\""/
```

Write back to us if you have any concerns. Regards, Mayur P.
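To make the effect of that insert_after rewrite concrete, here is a small Python sketch that performs the same transformation on the sample body: it inserts the key "NEW" immediately after the child "C" inside "PARENT". The function name insert_after is mine, chosen to mirror the rewrite action; it is not NetScaler API.

```python
import json

def insert_after(obj, parent_key, anchor_key, new_key, new_value):
    """Rebuild obj[parent_key] so that new_key appears right after anchor_key,
    mirroring the NetScaler insert_after rewrite action on the JSON body."""
    rebuilt = {}
    for k, v in obj[parent_key].items():
        rebuilt[k] = v
        if k == anchor_key:           # place the new pair just after the anchor
            rebuilt[new_key] = new_value
    obj[parent_key] = rebuilt
    return obj

body = json.loads('{"PARENT": {"name": {"ganesh": "ramesh"}, "C": "abchijabc"}, "B": "def"}')
result = insert_after(body, "PARENT", "C", "NEW", "VALUE")
print(json.dumps(result))
# {"PARENT": {"name": {"ganesh": "ramesh"}, "C": "abchijabc", "NEW": "VALUE"}, "B": "def"}
```

On the appliance itself, the rewrite action does this in-line on the HTTP response body, with the xp%/PARENT/child::C% expression selecting the anchor node.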
  18. Yes, NetScaler supports WebSocket; the HTTP profile has an option to enable it. Even native wss:// also works for an SSL-type vserver. Please find more information here: https://support.citrix.com/article/CTX235401
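A minimal sketch of the CLI side, assuming an SSL load balancing vserver is already configured; the profile and vserver names are placeholders:

```
add ns httpProfile ws_profile -webSocket ENABLED
set lb vserver my_ssl_vs -httpProfileName ws_profile
```

With the WebSocket-enabled HTTP profile bound, the vserver will pass the HTTP Upgrade handshake through, which is what makes wss:// work on an SSL-type vserver.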