
Mayur Patil

Internal Members
  • Posts

    21
  • Joined

  • Last visited


Mayur Patil's Achievements

Apprentice

Apprentice (3/14)

  • Conversation Starter Rare
  • First Post Rare
  • Collaborator Rare
  • Week One Done
  • One Month Later

Recent Badges

Reputation: 2

  1. Hello Garry, can you please share the complete CIC log along with the CIC and Ingress manifest files so we can check the configuration? You can share the requested details at netscaler-appmodernization@cloud.com or raise a support case.
  2. NetScaler GSLB for your multi-Kubernetes microservice deployments

     NetScaler Application Delivery Controller (ADC) Global Server Load Balancing (GSLB) is a DNS-based solution that distributes resources across multi-site data center locations. Customers are migrating application workloads from monoliths to microservices, where applications are deployed across multiple Kubernetes clusters for high availability. NetScaler can distribute traffic across multiple Kubernetes clusters, providing resiliency to microservice applications. This lab demonstrates how a NetScaler GSLB controller deployed inside a Kubernetes cluster automates the configuration of the NetScaler GSLB appliance.

     The lab will demonstrate how to:
     • Deploy an application in Kubernetes clusters across two data centers/sites
     • Deploy the GSLB controller in each Kubernetes cluster
     • Configure NetScaler GSLB using the GSLB controller
     • Distribute ingress traffic to applications deployed across the sites using NetScaler

     Click Start hands-on Lab at the top of the post to try it out! Let us know your feedback or any issues in the comments section.
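     As a rough sketch of what the lab automates, the GSLB controller can be installed per cluster via Helm and pointed at the site-local GSLB device. The chart name, parameter names, and site values below are illustrative assumptions; the lab manifests are authoritative.

     ```shell
     # Illustrative only: install the GSLB controller in one cluster,
     # pointing it at the NetScaler GSLB device for that site.
     helm repo add citrix https://citrix.github.io/citrix-helm-charts/
     helm install gslb-site1 citrix/citrix-gslb-controller \
       --set license.accept=yes \
       --set localRegion=east \
       --set localCluster=cluster1 \
       --set sitedata[0].siteName=site1 \
       --set sitedata[0].siteIp=<gslb-device-ip>
     ```

     A second release with the other site's values would be installed in the second cluster, so each controller configures its local GSLB device.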
  3. To export IPFIX (or NetFlow) records to Splunk, you need to configure Splunk as an AppFlow collector. The detailed configuration is available at https://docs.netscaler.com/en-us/citrix-adc/current-release/ns-ag-appflow-intro-wrapper-con/ns-ag-appflow-config-tsk The Splunk configuration is explained on the Splunk website: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/UseStreamtoingestNetflowandIPFIXdata
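     For reference, the NetScaler side follows the usual AppFlow collector/action/policy pattern; the collector IP, port, and entity names below are placeholders, and the linked docs cover the full parameter set.

     ```shell
     # NetScaler CLI sketch: export IPFIX records to a Splunk Stream collector
     add appflow collector splunk_collector -IPAddress 192.0.2.10 -port 4739 -Transport ipfix
     add appflow action splunk_action -collectors splunk_collector
     add appflow policy splunk_policy true splunk_action
     bind lb vserver my_vserver -policyName splunk_policy -priority 10
     ```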
  4. Hello @Danny Gullick, thank you for your message. NetScaler 13.0 supports Prometheus integration in PUSH mode, meaning NetScaler pushes the metrics to the Prometheus Push Gateway or the NetScaler observability exporter, from which Prometheus pulls the metrics. NetScaler 13.1 Prometheus integration is direct, meaning Prometheus pulls/scrapes the metrics straight from NetScaler. Hence I would recommend you upgrade your NetScaler to 13.1 for better integration.
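     With 13.1, the Prometheus side reduces to a standard scrape job against the NetScaler metrics endpoint. The target address, credentials, and metrics path below are placeholders/assumptions; consult the NetScaler 13.1 Prometheus integration docs for the exact endpoint and required NetScaler-side configuration.

     ```yaml
     # prometheus.yml fragment (illustrative)
     scrape_configs:
       - job_name: "netscaler"
         metrics_path: /metrics        # assumed path; verify against the docs
         scheme: https
         tls_config:
           insecure_skip_verify: true  # lab only; use proper certs in production
         basic_auth:
           username: metrics_user
           password: <password>
         static_configs:
           - targets: ["<netscaler-ip>"]
     ```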
  5. NetScaler provides/supports different topologies for microservice applications. https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment describes our recommended topologies for cloud-native/microservice deployments.
  6. Any NetScaler can act as the Ingress proxy for microservices. You can use this how-to guide - https://github.com/citrix/cloud-native-getting-started/blob/master/on-prem/Unified-Ingress/README.md - to use SDX/MPX as the Ingress proxy for OpenShift Kubernetes apps.
  7. Yes, NetScaler cloud-native deployments are validated on the Rancher and OpenShift Kubernetes platforms.
  8. @andrea arfaioli​ thank you for your message. CIC does not support URL transformation today; however, if you let us know your use case, we will help you create a rewrite CRD that can be applied directly to NetScaler to get the same output.
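     As a sketch of what such a rewrite CRD instance can look like - the field layout follows the Citrix rewrite-responder CRD examples, while the service name, expressions, and policy name are illustrative assumptions for a URL-prefix rewrite:

     ```yaml
     # Illustrative rewrite CRD instance: prefix incoming request URLs with /v2
     apiVersion: citrix.com/v1
     kind: rewritepolicy
     metadata:
       name: url-prefix-rewrite
     spec:
       rewrite-policies:
         - servicenames:
             - frontend-service        # assumed backend service name
           rewrite-policy:
             operation: replace
             target: http.req.url
             modify-expression: '"/v2" + http.req.url'
             comment: 'Prefix request URLs with /v2'
             direction: REQUEST
             rewrite-criteria: 'http.req.is_valid'
     ```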
  9. Will you help me understand your use case - are you exporting NetScaler data to Elasticsearch and facing an issue? We have documentation to deploy Elasticsearch, along with a how-to guide for exporting NetScaler telemetry data to Elasticsearch and visualising it in a Kibana dashboard - https://docs.citrix.com/en-us/citrix-observability-exporter/deploy-coe-with-es.html#deploy-elasticsearch-and-kibana-using-yaml-files
  10. Canary deployments for cloud-native apps with Citrix Ingress Controller

      Submitted February 15, 2021. Author: Bharathi M

      You work hard to mitigate the risks involved in delivering your applications. But despite all the automation, monitoring, alerts, notifications, and code review, sometimes things go wrong. You can't always prevent issues from arising, but you can mitigate the risks with the strategy you choose for deploying updates to an application in a production environment. In this blog post, we'll look at canary deployments for cloud-native applications and how Citrix Ingress Controller (CIC) can help ensure you deliver a great end-user experience, even while you're making significant updates to your applications in a production environment.

      What Are Canary Deployments?

      A canary deployment involves deploying new versions of an app in phased, incremental steps. The purpose? You're deploying these changes to a small set of users first so you can identify and correct any issues and determine whether you're ready to roll out the deployment to your entire user base. Canary deployments have many benefits, including:

      • Reduced risk associated with software releases
      • Simpler steps to execute
      • Easy to automate
      • No downtime
      • A shorter deployment cycle, supporting earlier, more frequent product deployments
      • No disruption to the running application or the production environment

      How Does a Canary Deployment Work?

      A canary deployment has three stages. First, you deploy the new version of your canary services to a small percentage of your users. Then, you evaluate the canary deployment by collecting feedback from users and monitoring errors, performance, and metrics using tools like Prometheus and Grafana. Finally, as you get comfortable with the evaluation results from your canary services, you gradually migrate more and more users to the canary version.

      Stage 1: Deploy the new version of the application to 10 percent of application users.
      Stage 2: Evaluate the canary deployment using monitoring tools.
      Stage 3: Roll out the new canary version to all users.

      Citrix Ingress Controller for Canary Deployments in Kubernetes

      When you run the canary and production versions of an application in parallel, the traffic-controlling mechanism is responsible for:

      • Avoiding interruption to the service. For example, when a new version appears to misbehave, you can quickly divert traffic to the production version.
      • Allowing percentage-based control (some percentage of the traffic is directed to the new version) and client-dependent control (traffic based on HTTP headers such as User-Agent and Referrer is directed to the new version).
      • Ensuring that all HTTP requests from a client are directed to the same version, based on the traffic-control configuration.

      Let's look at how to achieve canary-based traffic routing at the ingress level using Citrix Ingress Controller (CIC). With CIC, the user must define an additional ingress object with canary annotations to indicate that the application requests need to be served based on canary type. The canary types you can configure using CIC are, in priority order: canary by header value, canary by header, and canary by weight. Please note, if an ingress with the canary-by-header-value annotation is defined without canary by header, the canary deployment will not be considered.

      Canary by Weight

      Weight-based canary deployment is a widely used approach. Here, you set a weight from 0 to 100 that determines the percentage of traffic directed to the canary version (versus the production version) of an application. You can begin with the weight set to 0, which means no traffic is forwarded to the canary version. After you start your canary deployment, you can change the weight so you are directing a percentage of traffic to the canary version.
      Then, after you're ready to deploy your canary version to production, simply change the weight to 100 to ensure all the traffic is directed to the new version of your application. To do this using CIC, create a new ingress with the canary annotation "ingress.citrix.com/canary-weight:" and include the percentage of traffic to be directed to the canary version.

      Let's consider an example guestbook application and deploy it using the following deployment, service, and ingress YAMLs:

      ```shell
      kubectl apply -f guestbook-deployment.yaml
      deployment.extensions "guestbook" created
      kubectl apply -f guestbook-service.yaml
      service "guestbook" created
      kubectl apply -f guestbook-ingress.yaml
      ingress.extensions "guestbook" created
      ```

      The production version of our guestbook service is now running. Next, we'll deploy the canary version of the same service using the following YAMLs:

      ```shell
      kubectl apply -f canary-deployment.yaml
      deployment.extensions "guestbook-canary" created
      kubectl apply -f canary-service.yaml
      service "guestbook-canary" created
      kubectl apply -f canary-ingress.yaml
      ingress.extensions "canary-by-weight" created
      ```

      canary-ingress.yaml:

      ```yaml
      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: canary-by-weight
        annotations:
          kubernetes.io/ingress.class: "citrix"
          ingress.citrix.com/canary-weight: "10"
      spec:
        rules:
        - host: webapp.com
          http:
            paths:
            - path: /
              backend:
                serviceName: guestbook-canary
                servicePort: 80
      ```

      Here, ingress.citrix.com/canary-weight: "10" tells the CIC to configure NetScaler ADC so that 10 percent of the total requests destined to webapp.com go to the guestbook-canary service (the canary version of our guestbook application).

      Canary by Header

      We can also use HTTP request headers to achieve our canary deployment. These headers are controlled by the client and notify the ingress to route the request to the service specified in the canary ingress.
      The request will be routed to the service specified in the canary ingress if the request carries the header named in the ingress annotation "ingress.citrix.com/canary-by-header:". For a canary-by-header deployment using CIC, simply replace the canary annotation "ingress.citrix.com/canary-weight:" with "ingress.citrix.com/canary-by-header:" in the canary-ingress.yaml in our example above.

      Canary by Header Value

      A canary-by-header-value approach helps us achieve our canary deployment using the value of the HTTP request header. Along with the request header from the canary ingress annotation ("ingress.citrix.com/canary-by-header:"), when the request header value matches the value specified in the ingress annotation "ingress.citrix.com/canary-by-header-value:", the request is routed to the service specified in the canary ingress. For a canary-by-header-value deployment using CIC, just replace the canary annotation "ingress.citrix.com/canary-weight:" with "ingress.citrix.com/canary-by-header:" and "ingress.citrix.com/canary-by-header-value:" in the canary-ingress.yaml shown in our example above.

      Conclusion

      Canary deployments provide the flexibility you need to support deployment of your cloud-native apps. When you combine a canary deployment strategy with a fast CI/CD workflow, you end up with a productive, feature-rich release cycle that helps you deliver a great user experience. Take a look at these examples and try your own canary deployment using CIC. And check out our blog post on using NetScaler ADC to implement canary deployments for your applications.
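      Putting the two header-based annotations together, a canary ingress for header-value routing could look like the following sketch; the header name "deploy" and value "canary" are illustrative choices, not fixed names:

      ```yaml
      # canary-by-header-value ingress (illustrative)
      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: canary-by-header-value
        annotations:
          kubernetes.io/ingress.class: "citrix"
          ingress.citrix.com/canary-by-header: "deploy"
          ingress.citrix.com/canary-by-header-value: "canary"
      spec:
        rules:
        - host: webapp.com
          http:
            paths:
            - path: /
              backend:
                serviceName: guestbook-canary
                servicePort: 80
      ```

      With this in place, only requests that carry the header "deploy: canary" would be routed to the guestbook-canary service; all other traffic continues to the production version.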
  11. Hey, you can reach out to the #AppModernization <appmodernization@citrix.com> email alias to connect with App Modernisation SMEs.
  12. # Load balance Ingress traffic with Citrix ADC CPX in Minikube

      In this example, the Citrix ADC CPX (a containerized form factor) is used to route the Ingress traffic to a `Guestbook` microservice application deployed in a Minikube cluster. An Ingress resource is deployed in the Minikube cluster to define the rules for sending external traffic to the application.

      **Prerequisite**: Ensure that you have installed and set up a Minikube cluster (this example is tested on Minikube v1.23 with Kubernetes 1.22.1 deployed).

      Perform the following:

      1. Deploy Citrix ADC CPX as an Ingress proxy in the Minikube cluster and verify the installation using the following commands.

         kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/cpx.yaml
         kubectl get pods -l app=cpx-ingress

      2. Deploy the `Guestbook` application in Minikube and verify the installation.

         kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/guestbook-app.yaml
         kubectl get pods -l 'app in (guestbook, redis)'

      3. Deploy an Ingress rule that sends traffic to http://www.guestbook.com.

         kubectl create -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/guestbook-ingress.yaml
         kubectl get ingress
         kubectl get svc cpx-service

      4. Send traffic to the `Guestbook` microservice application.

         curl -s -H "Host: www.guestbook.com" http://<MinikubeIP>:<NodePort> | grep Guestbook

      5. (Optional) Clean up the deployments using the following commands.

         kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/cpx.yaml
         kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/guestbook-app.yaml
         kubectl delete -f https://raw.githubusercontent.com/citrix/cloud-native-getting-started/master/beginners-guide/manifest/guestbook-ingress.yaml

      For more information on the Citrix ingress controller, see the Citrix ingress controller documentation. For more tutorials, see beginners-guides.
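      In step 4, the Minikube IP and NodePort can be looked up rather than read by eye. A small sketch, assuming the CPX service is named `cpx-service` (as in step 3) and exposes port 80:

      ```shell
      # Resolve the NodePort exposed by cpx-service, then send a test request.
      NODE_PORT=$(kubectl get svc cpx-service \
        -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}')
      curl -s -H "Host: www.guestbook.com" \
        "http://$(minikube ip):${NODE_PORT}" | grep Guestbook
      ```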
  13. What is Application Modernisation?

      Application Modernisation is the process of updating existing legacy applications using newer computing approaches, including newer languages, frameworks, and infrastructure platforms. You can see the trend in the market, where discussions are happening around migrating monolithic on-premises applications to cloud architectures or microservices. Application Modernisation allows enterprises to innovate on new product features and functionality with velocity.

      Why modernise legacy applications?

      A good application modernisation strategy enables an organisation to reduce the resources required to run an application, increase the reliability of deployments, and improve uptime and resiliency. Legacy applications are costly to upgrade and have scaling issues; modern applications, on the other hand, are loosely coupled, easy to upgrade, and scalable.

      Deep dive into Application Modernisation patterns...

      • Lift and Shift - the simplest, fastest time-to-market approach, where an application is moved as-is from its legacy form factor (monolithic) to a new form factor (cloud native)
      • Refactoring - a complex but robust approach, where legacy applications are broken down into multiple chunks and re-written in the new environment. A microservice architecture is the best choice for organisations thinking of refactoring their applications.
      • Replatforming - simpler than refactoring but more complex than lift and shift. Organisations take advantage of modern cloud architecture for legacy applications and move towards adopting refactoring in the longer run.
      Key technologies for application modernisation

      • Private, hybrid, and multi-cloud - also called cloud computing, where organisations decide to move from on-premises to the cloud (public, private, or hybrid) depending on business need.
      • Containers and Kubernetes - a trend becoming more popular in the market, as organisations adopt microservice-based architectures.
      • Automation & Orchestration - to address large-scale deployments, organisations are automating processes and application deployments and adopting orchestration platforms that provide the ability to manage and control resources from one place.

      Application Modernisation and NetScaler

      The NetScaler App modernisation stack enables every enterprise customer to adopt modern architecture seamlessly. NetScaler proxy capabilities allow organisations to address traffic-management use cases with end-to-end visibility into user traffic. Refer to the NetScaler App modernisation stack and learn its capabilities today!