
Konstantinos Kaltsas

Internal Members
  • Posts: 21
  • Joined

  • Last visited


  1. Hello Chad, you can find a description of all error codes in the API specification here: https://developer-docs.netscaler.com/en-us/nextgen-api/apis/#/Authentication/Login
  2. Agile, flexible DevOps practices are crucial to successful application modernization initiatives, which often involve migrating applications from on-premises to public cloud using containers. At the same time, many organizations must continue hosting applications on-premises due to regulatory and compliance requirements, cost concerns, or other business needs. NetScaler, an application delivery and security platform, and Google Anthos, a cloud-centric container platform, work together to provide consistent and reliable application delivery and security for any application on any infrastructure.

NetScaler for hybrid cloud application delivery

NetScaler provides powerful and flexible topologies that complement organizational boundaries. Dual-tier deployments employ high-capacity hardware or virtualized NetScaler ADCs in the first tier to segment control between network operators and Kubernetes operators, while the second tier employs containerized NetScaler ADCs within the Kubernetes cluster. With its flexible topologies, NetScaler makes it easier to migrate applications between on-premises and cloud environments. Because all NetScaler form factors share the same code base, NetScaler works the same regardless of the environment in which it is deployed, so you gain operational consistency along with greater agility regardless of the type of application (monolith or microservices) and infrastructure. Key NetScaler features include auto-scaling, global server load balancing (GSLB), multi-cluster ingress, and a web application firewall.

Google Anthos for hybrid cloud infrastructure management

Google Anthos unifies the management of compute infrastructure and applications across on-premises environments, the edge, and multiple public clouds with a Google Cloud-backed control plane for consistent operation at scale.
Key features like an enterprise-grade container orchestration and management service, policy and configuration management, and a service mesh help you accelerate the adoption of Day 2 operations and make your DevOps pipeline more efficient.

Key capabilities of NetScaler with Google Anthos

Key capabilities you gain from using NetScaler with Google Anthos include:
  • Auto-scale ADCs from within a Google Kubernetes Engine (GKE) cluster based on user demand
  • Automate ADC Day 2 operations using Anthos Config Management
  • Implement advanced security controls and enforce security policies with NetScaler Web App Firewall and Anthos Policy Controller
  • Automate multi-cluster/multi-regional deployments and load balancing configurations from within GKE using NetScaler GSLB and Anthos Config Management
  • Export metrics and transaction data from NetScaler, using the containerized NetScaler Observability Exporter, to popular endpoints such as Zipkin, Kafka, Elasticsearch, Prometheus, and Splunk Enterprise
  • Automate canary deployments using NetScaler Ingress Controller and Anthos Config Management

We provide fully functional labs for you to test. The labs’ source code is publicly available on GitHub. Note that our GitHub account refers to Citrix ADC, which is the former name of NetScaler.

Request a free consultation with the NetScaler product team

If you’re looking to get started with application modernization, request a free consultation with the NetScaler product team: appmodernization@citrix.com. To participate in the discussion about application modernization, join the NetScaler cloud-native Slack channel.
  3. This is the third post in our series on Citrix ADC with Google Anthos. In our first post, we talked about the importance of modern app delivery and security for hybrid multi-cloud, and in our second post, we focused on achieving consistent and reliable app delivery for Kubernetes apps and shared a lab on GitHub for readers to test. In this post, we’ll focus on security and demonstrate how:
  • Citrix ADC can strengthen your security posture across hybrid and multi-cloud.
  • Citrix Web App Firewall (WAF) works seamlessly with Google Anthos Policy Controller to protect Kubernetes apps and APIs.
  • Citrix Web App Firewall with Google Anthos Policy Controller enforces app protection using configuration as code.
  • GitOps, along with Google Anthos Config Management, enhances continuous configuration for automating security configuration.

Protecting Web Apps and APIs

When it comes to application delivery, security is a top priority. Web apps and APIs are often an organization’s most valuable but most vulnerable assets, and several requirements must be met before they can reach production and go live. From governance and compliance requirements to organization-specific requirements, the task is not an easy one. Citrix Web App Firewall has proven, robust security controls to protect apps against known and unknown application attacks. It defends apps and APIs against OWASP Top 10 threats and zero-day attacks and provides security insights for faster remediation. To learn how Citrix Web App Firewall is designed to provide security, check out our product documentation. Our introduction to Citrix Web App Firewall, overview of security checks, and FAQs and deployment guide are great resources to help you get started. Citrix Web App Firewall is designed to be easily enabled and configured as code, following the infrastructure-as-code and configuration-as-code paradigms.
By providing WAF, bot management, and CORS CRDs for Kubernetes, security configuration is now possible from within a GKE cluster, and you can easily automate the configuration of both Tier-1 and Tier-2 Citrix Web App Firewalls. Common protections such as buffer overflow, cross-site request forgery (CSRF), cross-site scripting (XSS), SQL injection, and URL allow lists and block lists, as well as more advanced ones, can be enabled as policies using simple YAML files. Combining these capabilities with policy agents (as we’ll see in our lab) introduces an enterprise-grade practice of configuring and automating security.

The key advantage of using Citrix WAF is that it uses a single code base across all Citrix ADC form factors (MPX and SDX, as well as VPX and CPX), so you can consistently apply and enforce security policies across any application environment. That gives you ease of deployment and simplicity in configuration, which saves time and reduces configuration errors. Citrix Web App Firewall follows well-established principles that provide DevOps, CloudOps, and SecOps teams with the tools they need to do their jobs effectively. By supporting both positive and negative security models, Citrix Web App Firewall provides the widest protection possible. In addition, common event format (CEF) logging enables customers to easily collect and aggregate WAF data for analysis by an enterprise management system.

Configuring and integrating a WAF has never been easier. Because security configurations can be part of the source code and stored in Git, different configurations can be created and maintained per environment. “Shifting security left” into the early stages of testing becomes easier, and Dev(Sec)Ops practices can be applied. Configurations are now closer to the actual need and closer to the apps that need protection, and they can eliminate false positives.
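As a rough sketch of what such a YAML-defined policy might look like, the manifest below enables a few common protections through the Citrix WAF CRD. Treat it as illustrative only: the field names follow the general shape of the published CRD but should be verified against the current CRD reference, and the service name is hypothetical.

```yaml
# Hedged sketch of a Citrix WAF CRD object; verify field names against
# the current Citrix ingress controller CRD documentation.
apiVersion: citrix.com/v1
kind: waf
metadata:
  name: waf-basic-protections
spec:
  servicenames:
    - frontend            # hypothetical Kubernetes service to protect
  security_checks:
    common:
      buffer_overflow: "on"
      allow_url: "on"
      block_url: "on"
  html:
    cross_site_scripting: "on"
    sql_injection: "on"
```

Synced through ACM from Git, a manifest like this becomes versioned, reviewable security configuration rather than a manual appliance change.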
And with a single point of truth, full visibility is achieved for both operations and audit teams, making it even easier to perform required audits.

Deploying a Modern Application Architecture

Here, we’ll focus on deploying a Tier-1 Citrix ADC (VPX) in front of a Google Anthos GKE cluster within GCP. We will leverage Google Anthos Configuration Management for consistent deployment of Citrix components into the Anthos GKE cluster. Additionally, we’ll leverage Google Anthos Policy Controller to ensure that Citrix Web App Firewall configurations exist to protect ingress objects within a cluster. Anthos Configuration Management (ACM) is a GitOps-centric tool that synchronizes configuration into an Anthos Kubernetes cluster from a Git repository. Policy Controller is a component of ACM that can audit or enforce configurations across the cluster. This lab automation has been written with GitHub as the Git repository tool of choice. The following diagram illustrates the lab infrastructure that will be deployed.

Citrix ADC VPX

A single Citrix ADC VPX instance is deployed with two network interfaces:
  • nic0 provides access for management (NSIP) and access to back-end servers (SNIP).
  • nic1 provides access for deployed applications (VIPs).

Each interface is assigned an internal private IP address and an external public IP address. The instance is deployed as a pre-emptible node to reduce lab costs. The instance’s password is configured automatically with Terraform, and the instance is then configured automatically by the Citrix Ingress Controller and Citrix Node Controller deployed in the GKE cluster.

VPCs and Firewall Rules

Two VPCs are used in this deployment:
  • The default VPC and subnets are used for instance and GKE cluster deployment.
  • The vip-vpc is used only to host VIP addresses and routes the traffic back to the services in the default VPC.

Default firewall rules apply to the default VPC.
Ports 80/443 are permitted into the vip-vpc.

GKE Cluster with Anthos Configuration Management

A single GKE cluster is deployed as a zonal cluster:
  • Autoscaling is enabled with a minimum of one node and a configurable maximum.
  • The Google Anthos Config Management (ACM) operator is deployed into the GKE cluster and configured to sync the cluster configuration from a GitHub repository.
  • Citrix Ingress Controller and Citrix Node Controller components are automatically installed via ACM into the ctx-ingress namespace.
  • The Citrix Web App Firewall Custom Resource Definition (CRD) is installed via ACM to enable developers to create WAF configurations.
  • Worker nodes are deployed as pre-emptible nodes to reduce lab costs.
  • Policy Controller is installed to demonstrate constraints that enforce the presence of a WAF object in a namespace before an Ingress resource is accepted.

GitHub Repository

A dedicated GitHub repository is created and loaded with a basic cluster configuration:
  • A basic hierarchical format is used for ease of navigation through namespaces and manifests.
  • Citrix Ingress Controller and Citrix Node Controller deployment manifests are built from templates and added to this repository, along with their other required roles, rolebindings, services, etc.
  • This repository is created and destroyed by Terraform.

Online Boutique Demo Application

The Online Boutique demo application provides a microservices-based application for our lab. It has been modified slightly for this environment:
  • An ingress resource has been added to receive all traffic through the Citrix VPX.
  • Application components are controlled through Anthos Config Management and the source Git repo.

To learn more about how to deploy this lab and see autoscaling in action, please visit Citrix ADC with Google Anthos – WAF with Policy Controller Lab in our Citrix Cloud Native Networking (CNN) hands-on guides.
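To give a feel for how a Policy Controller constraint of this kind can be expressed, here is a hedged sketch of an OPA Gatekeeper ConstraintTemplate that rejects an Ingress when no Citrix `waf` object exists in the same namespace. It relies on Gatekeeper’s referential data (`data.inventory`), which must be enabled via a sync configuration; the template name, Rego details, and the assumption that the lab’s constraint looks like this are all illustrative.

```yaml
# Hedged sketch of a Gatekeeper ConstraintTemplate; not the lab's exact
# policy. Requires referential data sync for citrix.com/v1 waf objects.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequirewaf
spec:
  crd:
    spec:
      names:
        kind: K8sRequireWAF
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequirewaf

        violation[{"msg": msg}] {
          input.review.kind.kind == "Ingress"
          ns := input.review.object.metadata.namespace
          # deny when the synced inventory has no waf object in this namespace
          not data.inventory.namespace[ns]["citrix.com/v1"]["waf"]
          msg := sprintf("no WAF object found in namespace %v", [ns])
        }
```

A corresponding K8sRequireWAF constraint object would then scope this rule to the namespaces under audit or enforcement.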
Additional Information

Read more about how Citrix can help you on your application modernization journey in our Microservices App Delivery Best Practices library. Interested in learning more about Citrix application and API security? Check out our Citrix Web App Firewall data sheet. Find out how a Citrix ADC solution helps you manage, monitor, and secure your entire infrastructure in our e-book on the Top 6 WAF Essentials to Achieve Application Security Efficacy. In our app delivery and security developer docs, you’ll find guidance on configuring Citrix components to meet your specific requirements. Our e-book on six must-haves for app delivery in hybrid- and multi-cloud environments details why you need an application delivery controller along with a management and orchestration platform. You can learn more about the role of application delivery in the cloud-native journey in our white paper on seven key considerations for microservices-based application delivery. Finally, The ADC Guide to Managing Hybrid (IT and DevOps) Application Delivery covers how Citrix ADC bridges the gap between traditional and DevOps app delivery.

What’s Next?

Watch for the next blog post in our series, where we will discuss how you can use Citrix ADC, with its extensive set of policies, as an API gateway for Kubernetes apps. Looking to get started or take the next step in your app modernization? Our team is now offering free consultations! Send an email to appmodernization@citrix.com to schedule your session or request a call, and a specialist will promptly reply with options to connect. Want to join our Citrix cloud-native Slack channel? Sign up now to receive an invitation.
  4. In the previous post in our series on Citrix ADC and Google Anthos, we covered the importance of security and how NetScaler ADC can strengthen your security posture across hybrid and multi-cloud using Citrix Web App Firewall (WAF). NetScaler WAF works seamlessly with Google Anthos Policy Controller to enforce protection for Kubernetes apps and APIs using configuration as code, and the GitOps paradigm, along with Google Anthos Config Management, enhances continuous configuration for automating security configuration. In this blog post we will look at why APIs are important for organizations, and we will discuss how you can leverage NetScaler ADC, with its extensive set of policies, to provide API gateway functionality at the edge, as an enterprise API gateway, or within a Kubernetes cluster as an ingress API gateway. We will cover a dual-tier topology for API gateways and showcase how we can leverage Google Anthos Config Management for consistent API management and protection.

Why Are APIs Important?

Finance, manufacturing, transportation, healthcare, and telecommunications are just some of the sectors trying to stay competitive in today’s economy by providing access to services through web APIs. With the increasing adoption of cloud, use of smart devices, IoT, and more, demand for APIs continues to grow, with hundreds of services provided through public or private APIs. Developers have long used APIs to share data and integrate with other systems. Although there are several types of APIs, web APIs are the most common, with REST, SOAP, and gRPC the most frequently adopted. As the use of public cloud and consumption of cloud services grows, APIs have become one of the most important aspects of software architecture. With the increasing shift from monolithic apps to microservices-based architectures, APIs are no longer just for exposing services to external consumers; they have become the standard method for communication between microservices.
There are dozens of architectural patterns for application modernization, and APIs play an important role. Shifting to microservices provides incredible benefits, including dynamic scalability, fault isolation, reduced downtime, and technology-stack flexibility to eliminate vendor or technology lock-in and platform dependency. But all these benefits hide challenges that can require deep expertise in microservices-based architecture concepts. Exposing APIs, handling communication between microservices, and applying authentication, authorization, and rate limiting all require specific measures to be applied properly. Security, high availability, and more require different handling, with API management now becoming a cornerstone of successful application modernization.

Citrix API Gateway

One of the most common architectures for API management and protection is the API gateway. An API gateway acts as a single entry point for all clients that make API calls to a particular set of back-end services, such as containerized web apps in a Kubernetes cluster. The API gateway is responsible for proxying and routing requests to the appropriate services. Learn more about why you need an API gateway, how it works, and what its benefits are. Citrix provides an enterprise-grade API gateway for north/south API traffic into the Kubernetes cluster. The API gateway integrates with Kubernetes through the Citrix Ingress Controller and the Citrix ADC platforms (ADC MPX, VPX, CPX) deployed as the ingress gateway for on-premises or cloud deployments. Citrix ADC has an extensive set of policies you can use to enforce authentication and authorization policies, rate-limit access to services, perform advanced content routing, apply HTTP transaction transformations using rewrite and responder policies, enforce WAF policies, and more, all to support secure and reliable access to APIs and microservices.
Citrix’s API gateway is designed to be easily enabled and configured, as it follows the infrastructure-as-code and configuration-as-code paradigms. By providing auth, rate limit, content routing, rewrite and responder, WAF, and other CRDs, policies can be applied centrally from within a GKE cluster. By creating simple YAML files you can easily provide common API gateway functionality such as:
  • Authentication mechanisms like basic, digest, bearer, API keys, OpenID Connect, SAML, and more.
  • Integration with authentication providers like OAuth, LDAP, and more.
  • Fine-grained authorization using OAuth 2.0.
  • Rate limiting for APIs that are in high demand.
  • OWASP-suggested hardening measures using rewrite and responder policies.

The key advantage of using the Citrix API gateway is that it integrates with Kubernetes through Citrix Ingress Controller and Citrix ADC. This gives you flexibility as you create your architecture. You can configure an API gateway both on Tier 1 using MPX or VPX and on Tier 2 using CPX. The Tier 1 ingress acts as an enterprise API gateway sitting in front of Kubernetes clusters and traditional three-tier applications, where functionality like WAF and authentication can be offloaded and managed centrally for all environments. Tier 2 API gateways can be introduced per Kubernetes cluster, namespace, or particular set of apps, according to business requirements and the software development lifecycle (SDLC) practices being followed. Additionally, when strict isolation is required, you can introduce API gateway functionality for namespace-to-namespace communication. In this case Tier 2 API gateways can be configured by the team responsible for the appropriate namespace or set of apps. This high level of flexibility gives all the relevant personas the tools required to do their jobs while focusing on what they do best.
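As an illustration of the YAML-driven approach, a rate-limiting policy via the Citrix rate limit CRD might look roughly like the sketch below. The API version, field names, thresholds, and service name are assumptions for illustration and should be checked against the published CRD reference.

```yaml
# Hedged sketch of a Citrix rate limit CRD object; field names are
# illustrative and should be verified against the CRD documentation.
apiVersion: citrix.com/v1beta1
kind: ratelimit
metadata:
  name: throttle-pet-api
spec:
  servicenames:
    - pet                       # hypothetical service backing a busy API
  req_threshold: 100            # requests allowed per time slice
  timeslice: 60000              # time slice in milliseconds
  throttle_action: "RESPOND"    # reject requests above the threshold
```

Because such objects are plain Kubernetes resources, they can live in the same Git repository as the application manifests and be rolled out per environment through ACM.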
Citrix follows well-established principles that provide DevOps, CloudOps, SecOps, and software engineering teams with the tools they need to do their jobs effectively. By supporting the Swagger API specification format, APIs can be defined by the software engineers who design them, configuration can be automated by DevOps and CloudOps teams, and security teams can specify the appropriate measures for the APIs to be protected. Combining these capabilities with GitOps tools like Google Anthos Config Management, or adopting Citrix’s own API gateway with GitOps implementation, introduces an enterprise-grade practice of API management and protection. Because policy configurations can be part of the source code and stored in Git, different configurations can be created and maintained per environment, enhancing SDLC and DevOps practices even further, while the API gateway can be deployed closer to the apps that need to be managed and protected.

Modern Application Architectures

In this section, we will focus on deploying:
  • A Tier 1 Citrix ADC (VPX) in front of a Google Anthos GKE cluster within GCP. We will leverage Google Anthos Configuration Management for consistent deployment of Citrix components into the Anthos GKE cluster. The VPX will act as a Tier 1 enterprise API gateway where WAF policies are enabled.
  • A Tier 2 Citrix ADC (CPX), deployed using ACM within a Kubernetes namespace, that will act as a Tier 2 ingress API gateway for the microservices deployed in that namespace, using ACM for consistent Tier 2 API gateway policy configuration. Authentication, authorization, rate limiting, rewrite, and responder policies will be applied for a specific set of APIs.
  • Keycloak, one of the most popular open-source identity and access management (IAM) solutions, in a dedicated Kubernetes namespace, used as our identity provider and authorization (OAuth 2.0) server for our Tier 2 CPX API gateway.
Anthos Configuration Management (ACM) is a GitOps-centric tool that synchronizes configuration into an Anthos Kubernetes cluster from a Git repository. This lab automation has been written with GitHub as the Git repository tool of choice. The following diagram illustrates the lab infrastructure that will be deployed.

Citrix ADC VPX

A single Citrix ADC VPX instance is deployed with two network interfaces:
  • nic0 provides access for management (NSIP) and access to back-end servers (SNIP).
  • nic1 provides access for deployed applications (VIPs).

Each interface is assigned an internal private IP address and an external public IP address. The instance is deployed as a pre-emptible node to reduce lab costs. The instance’s password is configured automatically with Terraform, and the instance is then configured automatically by the Citrix Ingress Controller and Citrix Node Controller deployed in the GKE cluster.

VPCs and Firewall Rules

This deployment uses two VPCs:
  • The default VPC and subnets are used for instance and GKE cluster deployment.
  • The vip-vpc is used only to host VIP addresses and routes the traffic back to the services in the default VPC.

Default firewall rules apply to the default VPC. Ports 80/443 are permitted into the vip-vpc.

GKE Cluster with Anthos Configuration Management

A single GKE cluster is deployed as a zonal cluster:
  • Autoscaling is enabled with a minimum of one node and a configurable maximum.
  • The Google Anthos Config Management (ACM) operator is deployed into the GKE cluster and configured to sync the cluster configuration from a GitHub repository.
  • Citrix Ingress Controller and Citrix Node Controller components are automatically installed via ACM into the ctx-ingress namespace.
  • Citrix auth, rate limit, rewrite and responder, and WAF CRDs are installed via ACM to enable developers to create policy configurations.
  • Keycloak with a PostgreSQL database is installed via ACM into the keycloak namespace.
Worker nodes are deployed as pre-emptible nodes to reduce lab costs.

GitHub Repository

A dedicated GitHub repository is created and loaded with a basic cluster configuration:
  • A basic hierarchical format is used for ease of navigation through namespaces and manifests.
  • Citrix Ingress Controller and Citrix Node Controller deployment manifests are built from templates and added to this repository, along with their other required roles, rolebindings, services, etc.
  • This repository is created and destroyed by Terraform.

Echoserver Demo Application

An echoserver is a server that replicates the request sent by the client and sends it back. It will be used in our lab to showcase:
  • How a request is blocked on the Tier 1 VPX based on WAF policies.
  • How a request is blocked on the Tier 2 CPX based on authentication/authorization policies.
  • How a request is blocked on the Tier 2 CPX based on rate limiting policies when a threshold is reached.
  • How a request is manipulated (by adding some extra headers) on the Tier 2 CPX based on rewrite policies.
  • How a response is manipulated on the Tier 2 CPX based on responder policies.

For our lab we will deploy a single echoserver instance to see the requests reaching our application and the relevant response. To keep it simple, three Kubernetes services (Pet, User, Play) will be created that use different ports to access the same microservice (echoserver). That provides an easy way of creating different content routes for each of the Kubernetes services and showcases how we can apply policies to each API endpoint. Application components and API gateway configurations are controlled through Anthos Config Management and the source Git repo. The following diagram illustrates a high-level architecture, presenting the role of each component in our lab.
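The three-services-one-deployment arrangement described above can be sketched with plain Kubernetes Service manifests. The service names match the lab’s Pet/User/Play naming, but the port numbers and the echoserver selector label are assumptions for illustration.

```yaml
# Three Services exposing the same echoserver pods on different ports,
# giving each "API" its own routable endpoint for per-endpoint policies.
# Ports and the selector label are illustrative, not the lab's exact values.
apiVersion: v1
kind: Service
metadata:
  name: pet
spec:
  selector:
    app: echoserver
  ports:
    - port: 8081
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: user
spec:
  selector:
    app: echoserver
  ports:
    - port: 8082
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: play
spec:
  selector:
    app: echoserver
  ports:
    - port: 8083
      targetPort: 8080
```

Each Service then gets its own content route on the gateway, so policies can be bound per API endpoint even though one pod serves all the traffic.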
To learn more about how to deploy this lab and see API gateway policies in action, please visit Citrix ADC with Google Anthos: Dual-tier API Gateway with ACM Lab in our Citrix Cloud Native Networking (CNN) hands-on guides.

Additional Information

Visit our Microservices App Delivery Best Practices library to learn how Citrix can help you on your app modernization journey. What is an API gateway? Learn about them here. For more information on the dual-tier topology for the API gateway, check our API Gateway for Kubernetes documentation. Want to see how Citrix automates your API gateway configuration? Check out Deploy API Gateway with GitOps. Read our solution brief to find out how to protect your APIs with NetScaler ADC. Learn how to configure Citrix components for your specific requirements in our Developer Docs. For more details on why you need an application delivery controller (ADC) and a management and orchestration platform, read about six must-haves for application delivery in hybrid- and multi-cloud environments. For more on the role of application delivery in the cloud-native journey, read about seven key considerations for microservices-based application delivery. Finally, learn how Citrix ADC bridges the gap between traditional and DevOps app delivery in The ADC Guide to Managing Hybrid (IT and DevOps) Application Delivery.

What’s Next?

Stay tuned for the next blog post in our series, where we will discuss how Citrix ADC’s security-as-code capabilities enable automation of east/west security for modern apps. Looking to get started or take the next step in your app modernization? Our team is offering free consultations! Send an email to appmodernization@citrix.com to schedule your session or request a call, and a specialist will reply with options to connect. Want to join our Citrix cloud-native Slack channel? Sign up now to receive an invitation.
  5. NetScaler is an advanced application delivery, load balancing, and security solution for your web apps. Terraform provides an infrastructure-as-code, declarative approach to managing your NetScaler infrastructure. In this hands-on lab, you will learn how to use Terraform to configure a load balancing service in NetScaler and expose your public web apps over the internet. The lab provisions the NetScaler, a pair of web servers, and an automation controller, and then guides you through using Terraform. Click Start hands-on Lab at the top of the post to try it out! Let us know your feedback or any issues in the comments section.
  6. Learn how to leverage WAF policies for protecting your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create WAF policies and profiles. How to enable WAF policies at the load balancing or content switching virtual server level. How to block or log malicious requests based on different criteria. Click Start hands-on Lab at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  7. Learn how to leverage basic rewrite and responder policies for manipulating requests and responses. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create rewrite and responder policies, and what the difference is between the two. How to bind a policy on a content switching virtual server. How to manipulate an incoming request based on different criteria. How to redirect a request based on different criteria. Click Start hands-on Lab at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  8. Learn how to deploy and configure a content switching virtual server for routing traffic to your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to deploy a content switching virtual server to route traffic to your apps. How to route traffic based on URL path. How to route traffic based on HTTP header values. Click Start hands-on Lab at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  9. NetScaler is an advanced application delivery, load balancing, and security solution for your web apps. Ansible modules simplify NetScaler management, bringing agility to your IT operations. In this hands-on lab, you will learn how to use Ansible to configure a load balancing service in NetScaler and expose your public web apps over the internet. The lab provisions the NetScaler, a pair of web servers, and an automation controller, and then guides you through the Ansible workflow. Click Start hands-on Lab at the top of the post to try it out! Let us know your feedback or any issues in the comments section.
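A minimal Ansible play for the kind of load balancing configuration this lab walks through might look like the sketch below. It assumes the citrix.adc collection’s citrix_adc_lb_vserver module; the host address, credentials, and resource names are placeholders, and the module parameters should be checked against the collection documentation.

```yaml
# Hedged sketch: create a load balancing virtual server on a NetScaler
# via the citrix.adc Ansible collection. All values are placeholders.
- name: Configure NetScaler load balancing
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create LB virtual server
      citrix.adc.citrix_adc_lb_vserver:
        nsip: 192.0.2.10                    # placeholder management IP
        nitro_user: nsroot
        nitro_pass: "{{ netscaler_password }}"
        name: lb-web
        servicetype: HTTP
        ipv46: 192.0.2.20                   # placeholder VIP
        port: 80
        lbmethod: ROUNDROBIN
```

Binding back-end services to the virtual server would follow as additional tasks in the same declarative style.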
  10. Learn how to deploy and configure a content switching virtual server for routing traffic to your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to deploy a content switching virtual server to route traffic to your apps. How to route traffic based on URL path. How to route traffic based on HTTP header values. Click Start hands-on Lab at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  11. Learn how to leverage basic rewrite and responder policies for manipulating requests and responses. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create rewrite and responder policies, and what the difference is between the two. How to bind a policy on a content switching virtual server. How to manipulate an incoming request based on different criteria. How to redirect a request based on different criteria. Click Start hands-on Lab at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  12. Learn how to leverage WAF policies for protecting your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create WAF policies and profiles. How to enable WAF policies at the load balancing or content switching virtual server level. How to block or log malicious requests based on different criteria. Click Start hands-on Lab at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  13. Learn how to leverage WAF policies for protecting your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create WAF policies and profiles. How to enable WAF policies at the load balancing or content switching virtual server level. How to block or log malicious requests based on different criteria. Click Start hands-on Lab at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  14. Learn how to leverage basic rewrite and responder policies for manipulating requests and responses. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create rewrite and responder policies, and what the difference is between the two. How to bind a policy on a content switching virtual server. How to manipulate an incoming request based on different criteria. How to redirect a request based on different criteria. Click Start hands-on Lab at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  15. Learn how to deploy and configure a content switching virtual server for routing traffic to your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to deploy a content switching virtual server to route traffic to your apps. How to route traffic based on URL path. How to route traffic based on HTTP header values. Click Start hands-on Lab at the top of the post to try it out! Please share your feedback or any issues in the comments section.