
Konstantinos Kaltsas

Internal Members · 21 posts
Everything posted by Konstantinos Kaltsas

  1. Hello Chad, You can find a description for all error codes under the API Specification here: https://developer-docs.netscaler.com/en-us/nextgen-api/apis/#/Authentication/Login
  2. Agile, flexible DevOps practices are crucial to successful application modernization initiatives, which often involve migrating applications from on-premises to public cloud using containers. At the same time, many organizations must continue hosting applications on-premises due to regulatory and compliance requirements, cost concerns, or other business needs. NetScaler, an application delivery and security platform, and Google Anthos, a cloud-centric container platform, work together to provide consistent and reliable application delivery and security for any application on any infrastructure.

NetScaler for hybrid cloud application delivery
NetScaler provides powerful and flexible topologies that complement organizational boundaries. Dual-tier deployments employ high-capacity hardware or virtualized NetScaler ADCs in the first tier to segment control between network operators and Kubernetes operators, while the second tier employs containerized NetScaler ADCs within the Kubernetes cluster. With its flexible topologies, NetScaler makes it easier to migrate applications between on-premises and cloud. Because all NetScaler form factors share the same code base, NetScaler works the same regardless of the environment in which it is deployed, so you gain operational consistency along with greater agility regardless of the type of application (monolith or microservices) and infrastructure. Key NetScaler features include auto-scaling, global server load balancing (GSLB), multi-cluster ingress, and a web application firewall.

Google Anthos for hybrid cloud infrastructure management
Google Anthos unifies the management of compute infrastructure and applications across on-premises environments, the edge, and multiple public clouds with a Google Cloud-backed control plane for consistent operation at scale. Key features such as an enterprise-grade container orchestration and management service, policy and configuration management, and a service mesh help you accelerate the adoption of Day 2 operations and make your DevOps pipeline more efficient.

Key capabilities of NetScaler with Google Anthos
Key capabilities you gain from using NetScaler with Google Anthos include:
Auto-scale ADCs from within a Google Kubernetes Engine (GKE) cluster based on user demand
Automate ADC Day 2 operations using Anthos Config Management
Implement advanced security controls and enforce security policies with NetScaler Web App Firewall and Anthos Policy Controller
Automate multi-cluster/multi-regional deployments and load balancing configurations from within GKE using NetScaler GSLB and Anthos Config Management
Export metrics and transaction data from NetScaler to popular endpoints such as Zipkin, Kafka, Elasticsearch, Prometheus, and Splunk Enterprise using the containerized NetScaler Observability Exporter
Automate canary deployments using NetScaler Ingress Controller and Anthos Config Management
We provide fully functional labs for you to test. The labs’ source code is publicly available on GitHub. Note that our GitHub account refers to Citrix ADC, which is the former name of NetScaler.

Request a free consultation with the NetScaler product team
If you’re looking to get started with application modernization, request a free consultation with the NetScaler product team: appmodernization@citrix.com. To participate in the discussion about application modernization, join the NetScaler cloud-native Slack channel.
  3. This is the third post in our series on Citrix ADC with Google Anthos. In our first post, we talked about the importance of modern app delivery and security for hybrid multi-cloud, and in our second post, we focused on achieving consistent and reliable app delivery for Kubernetes apps and shared a lab on GitHub for readers to test. In this post, we’ll focus on security and demonstrate how:
Citrix ADC can strengthen your security posture across hybrid and multi-cloud.
Citrix Web App Firewall (WAF) works seamlessly with Google Anthos Policy Controller to provide protection for Kubernetes apps and APIs.
Citrix Web App Firewall with Google Anthos Policy Controller enforces app protection using configuration as code.
GitOps enhances continuous configuration along with Google Anthos Config Management for automating security configuration.

Protecting Web Apps and APIs
When it comes to application delivery, security is a top priority. Web apps and APIs are often an organization’s most valuable but most vulnerable assets, and several requirements must be met before they can reach production and go live. From governance and compliance requirements to organization-specific requirements, the task is not an easy one. Citrix Web App Firewall has proven and robust security controls to protect apps against known and unknown application attacks. It defends apps and APIs against OWASP Top 10 threats and zero-day attacks and provides security insights for faster remediation. To learn how Citrix Web App Firewall is designed to provide security, check out our product documentation. Our introduction to Citrix Web App Firewall, overview of security checks, and FAQs and deployment guide are great resources to help you get started.

Citrix Web App Firewall is designed to be easily enabled and configured as code, following the infrastructure-as-code and configuration-as-code paradigms. By providing WAF, bot management, and CORS CRDs for Kubernetes, security configuration is now possible from within a GKE cluster, and you can easily automate the configuration of both Tier-1 and Tier-2 Citrix Web App Firewalls. Common protections such as buffer overflow, cross-site request forgery (CSRF), cross-site scripting (XSS), SQL injection, and URL allow lists and block lists, as well as more advanced ones, can be easily enabled as policies using simple YAML files. Combining these capabilities with policy agents (as we’ll see in our lab) introduces an enterprise-grade practice of configuring and automating security.

The key advantage of using Citrix WAF is that it uses a single code base across all Citrix ADC form factors (MPX and SDX, as well as VPX and CPX), so you can consistently apply and enforce security policies across any application environment. That gives you ease of deployment and simplicity of configuration, which saves time and reduces configuration errors. Citrix Web App Firewall follows well-established principles that provide DevOps, CloudOps, and SecOps teams with the tools they need to do their jobs effectively. By supporting both positive and negative security models, Citrix Web App Firewall provides the widest protection possible. In addition, common event format (CEF) logging enables customers to easily collect and aggregate WAF data for analysis by an enterprise management system. Configuring and integrating a WAF has never been easier. Because security configurations can be part of the source code and stored in Git, different configurations can be created and maintained per environment.
“Shifting security left” into the early stages of testing becomes easier, and Dev(Sec)Ops practices can be applied. Configurations are now closer to meeting the actual need and closer to the apps that need protection, and can eliminate false positives. And with a single point of truth, full visibility is achieved for both operations and audit teams, making it even easier to perform required audits.

Deploying a Modern Application Architecture
Here, we’ll focus on deploying a Tier-1 Citrix ADC (VPX) in front of a Google Anthos GKE cluster within GCP. We will leverage Google Anthos Configuration Management for consistent deployment of Citrix components into the Anthos GKE cluster. Additionally, we’ll leverage Google Anthos Policy Controller to ensure that Citrix Web App Firewall configurations exist to protect ingress objects within a cluster. ACM (Anthos Configuration Management) is a GitOps-centric tool that synchronizes configuration into an Anthos Kubernetes cluster from a Git repository. Policy Controller is a component of ACM that can audit or enforce configurations across the cluster. This lab automation has been written with GitHub as the Git repository tool of choice. The following diagram illustrates the lab infrastructure that will be deployed.

Citrix ADC VPX
A single Citrix ADC VPX instance is deployed with two network interfaces:
nic0 provides access for management (NSIP) and access to back-end servers (SNIP).
nic1 provides access for deployed applications (VIPs).
Each interface is assigned an internal private IP address and an external public IP address. The instance is deployed as a pre-emptible node to reduce lab costs. The instance password is configured automatically by Terraform. The instance is then automatically configured by the Citrix Ingress Controller and Citrix Node Controller deployed in the GKE cluster.

VPCs and Firewall Rules
Two VPCs are used in this deployment:
The default VPC and subnets are used for instance and GKE cluster deployment.
The vip-vpc is used only to host VIP addresses and routes the traffic back to the services in the default VPC.
Default firewall rules apply to the default VPC. Ports 80/443 are permitted into the vip-vpc.

GKE Cluster with Anthos Configuration Management
A single GKE cluster is deployed as a zonal cluster:
Autoscaling is enabled with a minimum of one node and a configurable maximum.
The Google Anthos Config Management (ACM) operator is deployed into the GKE cluster and configured to sync the cluster configuration from a GitHub repository.
Citrix Ingress Controller and Citrix Node Controller components are automatically installed via ACM into the ctx-ingress namespace.
The Citrix Web App Firewall Custom Resource Definition (CRD) is installed via ACM to enable developers to create WAF configurations.
Worker nodes are deployed as pre-emptible nodes to reduce lab costs.
Policy Controller is installed to demonstrate constraints that enforce the presence of a WAF object in a namespace prior to accepting an Ingress resource.

GitHub Repository
A dedicated GitHub repository is created and loaded with a basic cluster configuration:
A basic hierarchical format is used for ease of navigation through namespaces and manifests.
Citrix Ingress Controller and Citrix Node Controller deployment manifests are built from templates and added to this repository, along with their other required roles, rolebindings, services, etc.
This repository is created and destroyed by Terraform.
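For illustration only, the sketch below shows how a two-NIC, pre-emptible VPX instance like the one described above might be provisioned with Terraform in GCP. This is a hedged sketch, not the lab's actual automation: the instance name, machine type, zone, image, and network names are placeholders.

# Hypothetical sketch: image, networks, machine type, and names are placeholders,
# not the values used by the real lab automation.
resource "google_compute_instance" "citrix_adc_vpx" {
  name         = "citrix-adc-vpx"     # assumed instance name
  machine_type = "n1-standard-4"      # assumed machine type
  zone         = "us-central1-a"      # assumed zone

  boot_disk {
    initialize_params {
      image = "citrix-adc-vpx-image"  # placeholder for the VPX image
    }
  }

  # nic0: management (NSIP) and back-end (SNIP) access on the default VPC
  network_interface {
    network = "default"
    access_config {}                  # external public IP
  }

  # nic1: VIPs for deployed applications on the vip-vpc
  network_interface {
    network = "vip-vpc"
    access_config {}                  # external public IP
  }

  # Deployed as a pre-emptible node to reduce lab costs
  scheduling {
    preemptible       = true
    automatic_restart = false         # required when preemptible = true
  }
}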
Online Boutique Demo Application
The Online Boutique demo application provides a microservices-based application for our lab. It has been modified slightly for this environment:
An Ingress resource has been added to receive all traffic through the Citrix VPX.
Application components are controlled through Anthos Config Management and the source Git repo.
To learn more about how to deploy this lab and see autoscaling in action, please visit Citrix ADC with Google Anthos – WAF with Policy Controller Lab in our Citrix Cloud Native Networking (CNN) hands-on guides.

Additional Information
Read more about how Citrix can help you on your application modernization journey in our Microservices App Delivery Best Practices library. Interested in learning more about Citrix application and API security? Check out our Citrix Web App Firewall data sheet. Find out how a Citrix ADC solution helps manage, monitor, and secure your entire infrastructure and ensure application security efficacy in our e-book on the Top 6 WAF Essentials to Achieve Application Security Efficacy. In our app delivery and security developer docs, you’ll find guidance on configuring Citrix components to meet your specific requirements. Our e-book on six must-haves for app delivery in hybrid- and multi-cloud environments has details on why you need an application delivery controller along with a management and orchestration platform. You can learn more about the role of application delivery in the cloud-native journey in our white paper on seven key considerations for microservices-based application delivery. Finally, the ADC Guide to Managing Hybrid (IT and DevOps) Application Delivery covers how Citrix ADC bridges the gap between traditional and DevOps app delivery.

What’s Next?
Watch for the next blog post in our series, where we will discuss how you can use Citrix ADC, with its extensive set of policies, as an API gateway for Kubernetes apps. Looking to get started or take the next step in your app modernization? Our team is now offering free consultations! Send an email to appmodernization@citrix.com to schedule your session or request a call, and a specialist will promptly reply with options to connect. Want to join our Citrix cloud-native Slack channel? Sign up now to receive an invitation.
  4. In the previous post in our series on Citrix ADC and Google Anthos, we covered the importance of security and how NetScaler ADC can strengthen your security posture across hybrid and multi-cloud using Citrix Web App Firewall (WAF). NetScaler WAF works seamlessly with Google Anthos Policy Controller to enforce protection for Kubernetes apps and APIs using configuration as code, and the GitOps paradigm enhances continuous configuration along with Google Anthos Config Management for automating security configuration. In this blog post, we will look at why APIs are important for organizations, and we will discuss how you can leverage NetScaler ADC, with its extensive set of policies, to provide API gateway functionality at the edge, as an enterprise API gateway, or within a Kubernetes cluster as an ingress API gateway. We will cover a dual-tier topology for API gateways and showcase how we can leverage Google Anthos Config Management for consistent API management and protection.

Why Are APIs Important?
Finance, manufacturing, transportation, healthcare, and telecommunications are just some of the sectors trying to stay competitive in today’s economy by providing access to services through web APIs. With the increasing adoption of cloud, use of smart devices, IoT, and more, demand for APIs continues to grow, with hundreds of services provided through public or private APIs. Developers have long used APIs to share data and integrate with other systems. Although there are several types of APIs, web APIs are the most common, with REST, SOAP, and gRPC the most frequently adopted. As the use of public cloud and consumption of cloud services grows, APIs have become one of the most important aspects of software architecture. With an increasing shift from monolithic apps to microservices-based architectures, APIs are no longer just for exposing services to external consumers; they have become the standard method for communication between microservices. There are dozens of architectural patterns for application modernization, and APIs play an important role in them. Shifting to microservices provides incredible benefits, including dynamic scalability, fault isolation, reduced downtime, and technology-stack flexibility to eliminate vendor or technology lock-in and platform dependency. But all these benefits come with challenges that can require deep expertise in microservices-based architecture concepts. Exposing APIs, handling communication between microservices, and applying authentication, authorization, and rate limiting all require specific measures applied in the proper way. Security, high availability, and more require different handling, and API management has become a cornerstone of successful application modernization.

Citrix API Gateway
One of the most common architectures for API management and protection is the API gateway. An API gateway acts as a single entry point for all clients that make API calls to a particular set of back-end services, such as containerized web apps in a Kubernetes cluster. The API gateway is responsible for proxying/routing requests to the appropriate services. Learn more about why you need an API gateway, how it works, and what its benefits are. Citrix provides an enterprise-grade API gateway for north/south API traffic into the Kubernetes cluster. The API gateway integrates with Kubernetes through the Citrix Ingress Controller and the Citrix ADC platforms (ADC MPX, VPX, CPX) deployed as the ingress gateway for on-premises or cloud deployments.
Citrix ADC has an extensive set of policies you can use to enforce authentication and authorization policies, rate limit access to services, perform advanced content routing, apply HTTP transaction transformations using rewrite and responder policies, enforce WAF policies, and more — all to support secure and reliable access to APIs and microservices. Citrix’s API gateway is designed to be easily enabled and configured, as it follows the infrastructure-as-code and configuration-as-code paradigms. By providing auth, rate limit, content routing, rewrite and responder, WAF, and other CRDs, policies can be applied centrally, in an easy manner, from within a GKE cluster. By creating simple YAML files you can easily provide common API gateway functionality such as:
Authentication mechanisms such as basic, digest, bearer, API keys, OpenID Connect, SAML, and more.
Integration with authentication providers like OAuth, LDAP, and more.
Fine-grained authorization using OAuth 2.0.
Rate limiting for APIs that are in high demand.
OWASP-suggested hardening measures using rewrite and responder policies.

The key advantage of using the Citrix API gateway is that it integrates with Kubernetes through the Citrix Ingress Controller and Citrix ADC. This gives you flexibility as you create your architecture. You can configure an API gateway both on Tier 1 using MPX or VPX and on Tier 2 using CPX. The Tier 1 ingress will act as an enterprise API gateway sitting in front of Kubernetes clusters and traditional 3-tier applications. Functionality like WAF and authentication can be offloaded and managed centrally for all environments. Tier 2 API gateways can be introduced per Kubernetes cluster, namespace, or particular set of apps according to business requirements and the SDLC (Software Development Lifecycle) practices being followed. Additionally, when extreme isolation is required, you can introduce API gateway functionality for namespace-to-namespace communication. In this case, Tier 2 API gateways can be configured by the team responsible for the appropriate namespace or particular set of apps. This high level of flexibility gives all the relevant personas the tools they need to do their jobs while focusing on what they do best.

Citrix follows well-established principles that provide DevOps, CloudOps, SecOps, and software engineering teams with the tools they need to do their jobs effectively. By supporting the Swagger API specification format, APIs can be defined by the software engineers who design them, configuration can be automated by DevOps and CloudOps teams, and security teams can specify the appropriate measures for the APIs to be protected. Combining these capabilities with GitOps tools like Google Anthos Config Management, or adopting Citrix’s own API Gateway with GitOps implementation, introduces an enterprise-grade practice of API management and protection. Because policy configurations can be part of the source code and stored in Git, different configurations can be created and maintained per environment, enhancing SDLC and DevOps practices even more, while the API gateway can be deployed closer to the apps that need to be managed and protected.

Modern Application Architectures
In this section, we will focus on deploying:
A Tier 1 Citrix ADC (VPX) in front of a Google Anthos GKE cluster within GCP. We will leverage Google Anthos Configuration Management for consistent deployment of Citrix components into the Anthos GKE cluster. The VPX will act as a Tier 1 enterprise API gateway where WAF policies will be enabled.
A Tier 2 Citrix ADC (CPX) using ACM within a Kubernetes namespace, which will act as a Tier 2 ingress API gateway for the microservices deployed in that namespace, making use of ACM for consistent Tier 2 API gateway policy configurations. Authentication, authorization, rate limiting, rewrite, and responder policies will be applied for a specific set of APIs.
Keycloak, one of the most popular open source identity and access management (IAM) solutions, in a dedicated Kubernetes namespace, used as our identity provider and authorization (OAuth 2.0) server for our Tier-2 CPX API gateway.
ACM (Anthos Configuration Management) is a GitOps-centric tool that synchronizes configuration into an Anthos Kubernetes cluster from a Git repository. This lab automation has been written with GitHub as the Git repository tool of choice. The following diagram illustrates the lab infrastructure that will be deployed.

Citrix ADC VPX
A single Citrix ADC VPX instance is deployed with two network interfaces:
nic0 provides access for management (NSIP) and access to back-end servers (SNIP).
nic1 provides access for deployed applications (VIPs).
Each interface is assigned an internal private IP address and an external public IP address. The instance is deployed as a pre-emptible node to reduce lab costs. The instance password is configured automatically by Terraform. The instance is then automatically configured by the Citrix Ingress Controller and Citrix Node Controller deployed in the GKE cluster.

VPCs and Firewall Rules
This deployment uses two VPCs:
The default VPC and subnets are used for instance and GKE cluster deployment.
The vip-vpc is used only to host VIP addresses and routes the traffic back to the services in the default VPC.
Default firewall rules apply to the default VPC. Ports 80/443 are permitted into the vip-vpc.

GKE Cluster with Anthos Configuration Management
A single GKE cluster is deployed as a zonal cluster:
Autoscaling is enabled with a minimum of one node and a configurable maximum.
The Google Anthos Config Management (ACM) operator is deployed into the GKE cluster and configured to sync the cluster configuration from a GitHub repository.
Citrix Ingress Controller and Citrix Node Controller components are automatically installed via ACM into the ctx-ingress namespace.
Citrix auth, rate limit, rewrite and responder, and WAF CRDs are installed via ACM to enable developers to create policy configurations.
Keycloak with a PostgreSQL database is installed via ACM into the keycloak namespace.
Worker nodes are deployed as pre-emptible nodes to reduce lab costs.

GitHub Repository
A dedicated GitHub repository is created and loaded with a basic cluster configuration:
A basic hierarchical format is used for ease of navigation through namespaces and manifests.
Citrix Ingress Controller and Citrix Node Controller deployment manifests are built from templates and added to this repository, along with their other required roles, rolebindings, services, etc.
This repository is created and destroyed by Terraform.

Echoserver Demo Application
An echoserver is a server that replicates the request sent by the client and sends it back. It will be used in our lab to showcase:
How a request is blocked on the Tier 1 VPX based on WAF policies.
How a request is blocked on the Tier 2 CPX based on authentication/authorization policies.
How a request is blocked on the Tier 2 CPX based on rate limiting policies when a threshold is reached.
How a request is manipulated (by adding extra headers) on the Tier 2 CPX based on rewrite policies.
How a response is manipulated on the Tier 2 CPX based on responder policies.
For our lab we will deploy a single echoserver instance to see the requests reaching our application and the relevant responses. To keep it simple, three Kubernetes services will be created (Pet, User, Play) that use different ports to access the same microservice (echoserver). That gives us an easy way to create different content routes for each of the Kubernetes services and to showcase how we can apply policies to each API endpoint. Application components and API gateway configurations are controlled through Anthos Config Management and the source Git repo. The following diagram illustrates a high-level architecture, presenting the role of each component in our lab. To learn more about how to deploy this lab and see API gateway policies in action, please visit Citrix ADC with Google Anthos: Dual-tier API Gateway with ACM Lab in our Citrix Cloud Native Networking (CNN) hands-on guides.

Additional Information
Visit our Microservices App Delivery Best Practices library to learn how Citrix can help you on your app modernization journey. What is an API gateway? Learn about them here. For more information on the dual-tier topology for the API gateway, check our API Gateway for Kubernetes documentation. Want to see how Citrix automates your API gateway configuration? Check out Deploy API Gateway with GitOps. Read our solution brief to find out how to protect your APIs with NetScaler ADC. Learn how to configure Citrix components for your specific requirements in our Developer Docs. For more details on why you need an application delivery controller (ADC) and a management and orchestration platform, read about six must-haves for application delivery in hybrid- and multi-cloud environments. For more on the role of application delivery in the cloud-native journey, read about seven key considerations for microservices-based application delivery. Finally, learn how Citrix ADC bridges the gap between traditional and DevOps app delivery in The ADC Guide to Managing Hybrid (IT and DevOps) Application Delivery.

What’s Next?
Stay tuned for the next blog post in our series, where we will discuss how Citrix ADC’s security-as-code capabilities enable automation of east/west security for modern apps. Looking to get started or take the next step in your app modernization? Our team is offering free consultations! Send an email to appmodernization@citrix.com to schedule your session or request a call, and a specialist will reply with options to connect. Want to join our Citrix cloud-native Slack channel? Sign up now to receive an invitation.
  5. NetScaler is an advanced application delivery, load balancing, and security solution for your web apps. Terraform provides an infrastructure-as-code, declarative approach to managing your NetScaler infrastructure. In this hands-on lab, we will learn how to use Terraform to configure a load balancing service in NetScaler and expose your web apps over the internet. The lab will provision the NetScaler, a pair of web servers, and an automation controller, and then guide you through using Terraform. Click the Start hands-on Lab button at the top of the post to try it out! Let us know your feedback or any issues in the comments section.
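To give a feel for what the lab configures, here is a minimal, hedged sketch of a basic load balancing setup with the citrix/citrixadc Terraform provider. The names, IP addresses, and ports are invented for the example and are not the lab's actual values.

# Hedged sketch only: names, VIP, and back-end addresses are placeholders.
resource "citrixadc_lbvserver" "web_lb" {
  name        = "lb_web"        # assumed virtual server name
  servicetype = "HTTP"
  ipv46       = "203.0.113.10"  # placeholder VIP
  port        = 80
  lbmethod    = "ROUNDROBIN"
}

resource "citrixadc_service" "web_backend_1" {
  name        = "svc_web_1"     # assumed service name
  ip          = "10.0.0.11"     # placeholder back-end server
  servicetype = "HTTP"
  port        = 80
}

resource "citrixadc_service" "web_backend_2" {
  name        = "svc_web_2"
  ip          = "10.0.0.12"
  servicetype = "HTTP"
  port        = 80
}

# Bind both back-end services to the load balancing virtual server
resource "citrixadc_lbvserver_service_binding" "bind_1" {
  name        = citrixadc_lbvserver.web_lb.name
  servicename = citrixadc_service.web_backend_1.name
}

resource "citrixadc_lbvserver_service_binding" "bind_2" {
  name        = citrixadc_lbvserver.web_lb.name
  servicename = citrixadc_service.web_backend_2.name
}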
  6. Learn how to leverage WAF policies for protecting your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create WAF policies and profiles. How to enable WAF policies at the load balancing or content switching virtual server level. How to block or log malicious requests based on different criteria. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
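For orientation, the hedged sketch below shows the kind of WAF profile and policy the track creates with the citrix/citrixadc provider. The argument names follow the NITRO appfwprofile/appfwpolicy attributes, but the names, rule expression, and chosen protections are placeholders rather than the lab's actual templates; binding the policy to a load balancing or content switching virtual server uses a separate binding resource from the same provider.

# Hedged sketch only: names, rule, and selected protections are illustrative.
resource "citrixadc_appfwprofile" "basic_waf_profile" {
  name                     = "waf_profile_basic"        # assumed profile name
  type                     = ["HTML"]
  sqlinjectionaction       = ["block", "log", "stats"]  # block SQL injection
  crosssitescriptingaction = ["block", "log", "stats"]  # block XSS
  bufferoverflowaction     = ["block", "log", "stats"]  # block buffer overflow
}

resource "citrixadc_appfwpolicy" "basic_waf_policy" {
  name        = "waf_policy_basic"                      # assumed policy name
  rule        = "true"                                  # apply to all traffic
  profilename = citrixadc_appfwprofile.basic_waf_profile.name
}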
  7. Learn how to leverage basic rewrite and responder policies for manipulating requests and responses. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create rewrite and responder policies, and what the difference between the two is. How to bind a policy to a content switching virtual server. How to manipulate an incoming request based on different criteria. How to redirect a request based on different criteria. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
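As a rough illustration of the two policy types, here is a hedged sketch with the citrix/citrixadc provider: a rewrite policy that inserts a header and a responder policy that redirects HTTP to HTTPS. Names, expressions, and header values are invented for the example, not taken from the lab's templates.

# Hedged sketch only: names, expressions, and header values are placeholders.

# Rewrite: insert a custom header into matching requests
resource "citrixadc_rewriteaction" "insert_header" {
  name              = "rw_act_insert_header"            # assumed name
  type              = "insert_http_header"
  target            = "X-Example-Header"                # header to insert
  stringbuilderexpr = "\"example-value\""               # header value expression
}

resource "citrixadc_rewritepolicy" "insert_header" {
  name   = "rw_pol_insert_header"
  rule   = "HTTP.REQ.URL.PATH.STARTSWITH(\"/app\")"     # match criterion
  action = citrixadc_rewriteaction.insert_header.name
}

# Responder: redirect plain HTTP requests to HTTPS
resource "citrixadc_responderaction" "redirect_https" {
  name   = "rsp_act_https_redirect"
  type   = "redirect"
  target = "\"https://\" + HTTP.REQ.HOSTNAME.HTTP_URL_SAFE + HTTP.REQ.URL.PATH_AND_QUERY.HTTP_URL_SAFE"
}

resource "citrixadc_responderpolicy" "redirect_https" {
  name   = "rsp_pol_https_redirect"
  rule   = "!CLIENT.SSL.IS_SSL"                         # only non-TLS requests
  action = citrixadc_responderaction.redirect_https.name
}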
  8. Learn how to deploy and configure a content switching virtual server for routing traffic to your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to deploy a content switching virtual server to route traffic to your apps. How to route traffic based on URL path. How to route traffic based on HTTP header values. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
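To show the shape of such a template, here is a hedged sketch with the citrix/citrixadc provider of a content switching virtual server with one URL-path rule and one HTTP-header rule. The names, VIP, target load balancing virtual server, and expressions are placeholders, not the lab's actual files.

# Hedged sketch only: names, addresses, target vserver, and rules are placeholders.
resource "citrixadc_csvserver" "frontend" {
  name        = "cs_frontend"          # assumed content switching vserver name
  servicetype = "HTTP"
  ipv46       = "203.0.113.20"         # placeholder VIP
  port        = 80
}

# Route requests for /api/* to a dedicated load balancing vserver
resource "citrixadc_csaction" "to_api" {
  name            = "cs_act_api"
  targetlbvserver = "lb_api"           # assumed existing LB vserver
}

resource "citrixadc_cspolicy" "api_by_path" {
  policyname = "cs_pol_api_path"
  rule       = "HTTP.REQ.URL.PATH.STARTSWITH(\"/api\")"      # URL-path routing
  action     = citrixadc_csaction.to_api.name
}

# Example of routing on an HTTP header value instead of the path
resource "citrixadc_cspolicy" "mobile_by_header" {
  policyname = "cs_pol_mobile_header"
  rule       = "HTTP.REQ.HEADER(\"User-Agent\").CONTAINS(\"Mobile\")"
  action     = citrixadc_csaction.to_api.name                # reused for brevity
}

resource "citrixadc_csvserver_cspolicy_binding" "bind_api" {
  name       = citrixadc_csvserver.frontend.name
  policyname = citrixadc_cspolicy.api_by_path.policyname
  priority   = 100
}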
  9. NetScaler is an advanced application delivery, load balancing, and security solution for your web apps. Ansible modules simplify NetScaler management, bringing agility to your IT operations. In this hands-on lab, we will learn how to use Ansible to configure a load balancing service in NetScaler and expose your web apps over the internet. The lab will provision the NetScaler, a pair of web servers, and an automation controller, and then guide you through the Ansible workflow. Click the Start hands-on Lab button at the top of the post to try it out! Let us know your feedback or any issues in the comments section.
  10. Learn how to deploy and configure a content switching virtual server for routing traffic to your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to deploy a content switching virtual server to route traffic to your apps. How to route traffic based on URL path. How to route traffic based on HTTP header values. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  11. Learn how to leverage basic rewrite and responder policies for manipulating requests and responses. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create rewrite and responder policies, and what the difference between the two is. How to bind a policy to a content switching virtual server. How to manipulate an incoming request based on different criteria. How to redirect a request based on different criteria. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  12. Learn how to leverage WAF policies for protecting your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create WAF policies and profiles. How to enable WAF policies at the load balancing or content switching virtual server level. How to block or log malicious requests based on different criteria. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  13. Learn how to leverage WAF policies for protecting your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create WAF policies and profiles. How to enable WAF policies at the load balancing or content switching virtual server level. How to block or log malicious requests based on different criteria. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  14. Learn how to leverage basic rewrite and responder policies for manipulating requests and responses. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create rewrite and responder policies, and what the difference between the two is. How to bind a policy to a content switching virtual server. How to manipulate an incoming request based on different criteria. How to redirect a request based on different criteria. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  15. Learn how to deploy and configure a content switching virtual server for routing traffic to your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to deploy a content switching virtual server to route traffic to your apps. How to route traffic based on URL path. How to route traffic based on HTTP header values. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  16. Learn how to deploy and configure a content switching virtual server for routing traffic to your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to deploy a content switching virtual server to route traffic to your apps. How to route traffic based on URL path. How to route traffic based on HTTP header values. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  17. Learn how to leverage basic rewrite and responder policies for manipulating requests and responses. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create rewrite and responder policies, and what the difference between the two is. How to bind a policy to a content switching virtual server. How to manipulate an incoming request based on different criteria. How to redirect a request based on different criteria. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  18. Learn how to leverage WAF policies for protecting your applications. In this track we will leverage infrastructure-as-code templates to demonstrate: How to create WAF policies and profiles. How to enable WAF policies at the load balancing or content switching virtual server level. How to block or log malicious requests based on different criteria. Click the Start hands-on Lab button at the top of the post to try it out! Please share your feedback or any issues in the comments section.
  19. How to speed up NetScaler Baseconfig with Terraform modules
Submitted May 26, 2023
Author: Arne Marien, Lead Consultant, Provectus Technologies GmbH

At Provectus Technologies, we offer a NetScaler as a Service model that is fundamentally based on the GitOps concept. To make the service efficient and scalable and to reduce errors, the NetScaler configurations are fully managed through Terraform. To be able to maintain the Terraform code for such a growing environment, we developed some Terraform modules for different use cases. Using Terraform modules for NetScaler configuration can provide several benefits:

Reusability: Terraform modules can be reused across different projects, environments, and teams. By creating reusable modules for NetScaler configurations, you can save time and effort by avoiding the need to recreate configurations from scratch every time.

Consistency: Terraform modules can ensure consistency in NetScaler configurations across different environments and stages. By using the same modules for different environments, you can ensure that your NetScaler configurations are consistent, reducing the risk of errors and making it easier to stage your changes.

Collaboration: Terraform modules can help with collaboration among teams working on NetScaler configurations. By creating modules that can be easily shared and reused, teams can work together more efficiently, reducing the need for duplicative work and improving overall productivity.

Maintainability: Terraform modules can make it easier to maintain NetScaler configurations over time. By breaking configurations down into modules, it becomes easier to update and modify configurations as needed. This can help reduce the time and effort required to maintain NetScaler configurations.

A module can be structured in the following Terraform files: main.tf, variables.tf, and output.tf. The main.tf contains the description of all resources that are necessary for a baseconfig. The variables.tf describes the variables that are required to create the resources. These must be passed when invoking the module. The output.tf contains the output of any resources after the execution of the module.

A first example: Baseconfig Sub-Module
The main.tf contains the Terraform provider name and version required for the module to work, and the two resources we want to add using this module.

main.tf:

terraform {
  required_version = ">= 1.3.0"
  required_providers {
    citrixadc = {
      source  = "citrix/citrixadc"
      version = ">= 1.30.0"
    }
  }
}

resource "citrixadc_nsip" "nsip" {
  for_each   = { for ip in var.nsip : ip.ipaddress => ip }
  ipaddress  = each.value.ipaddress
  netmask    = each.value.netmask
  type       = each.value.type
  mgmtaccess = each.value.mgmtaccess
}

resource "citrixadc_route" "route" {
  for_each = { for route in var.route : route.network => route }
  network  = each.value.network
  netmask  = each.value.netmask
  gateway  = each.value.gateway
  depends_on = [
    citrixadc_nsip.nsip
  ]
}

The variables.tf specifies the variables required by the resources, which must be provided when the module is called. All optional attributes are declared with the optional(<type>, <default-value>) function and do not have to be provided to the module. The specification of default or optional values makes the module easier to use and more robust.
variables.tf:

variable "nsip" {
  description = "List of NSIPs"
  type = list(object({
    ipaddress  = string
    netmask    = string
    type       = optional(string, null)
    mgmtaccess = optional(string, null)
  }))
}

variable "route" {
  description = "Adds routes"
  type = list(object({
    network = string
    netmask = string
    gateway = string
  }))
}

Root-Module
To invoke the module, a root module is required. This root module contains the information for the environments and refers to all sub-modules that are required for the configuration. The main.tf of the root module specifies the provider configuration, including the Terraform backend for managing the Terraform state, as well as the sub-modules. The source must refer to the sub-module. This can be in the same file system but can also refer to a Git repository.

Note: The NetScaler credentials required for the provider should only be set in the root module itself for local tests and should never be committed to a repository. The credentials should always be set as environment variables.

main.tf:

provider "citrixadc" {
  insecure_skip_verify = true
  endpoint             = "https://<ns-ip>"
  username             = "<username>"
  password             = "<password>"
}

terraform {
  backend "local" {}
  required_providers {
    citrixadc = {
      source = "citrix/citrixadc"
    }
  }
}

module "baseconfig" {
  source = "<path-to-my-module>"
  nsip   = var.nsip
  route  = var.route
}

The variables.tf of the root module contains the information for an environment. The structure of the variables must match the specifications from the sub-module and contain at least the required variables. Optional values do not have to be passed.

variables.tf:

variable "nsip" {
  description = "Adds NSIPs"
  type        = list(any)
  default = [
    {
      ipaddress = "10.1.0.10"
      netmask   = "255.255.255.0"
      type      = "SNIP"
    },
    {
      ipaddress  = "10.2.0.10"
      netmask    = "255.255.255.0"
      mgmtaccess = "ENABLED"
      gui        = "SECUREONLY"
      type       = "SNIP"
    }
  ]
}

variable "route" {
  description = "Adds routes"
  type        = list(any)
  default = [{
    network = "172.1.0.0"
    netmask = "255.255.255.0"
    gateway = "10.1.0.1"
  }]
}

Folder Structure
With a growing environment and multiple modules, the directory structure is very important. A good structure may look like the following:

.
└── terraform
    ├── environments
    │   ├── dev
    │   │   └── baseconfig
    │   │       ├── main.tf
    │   │       └── variables.tf
    │   └── prod
    └── modules
        └── netscaler
            ├── baseconfig
            │   ├── main.tf
            │   ├── outputs.tf
            │   └── variables.tf
            ├── csvserver
            └── lbvserver

Different environments such as DMZ and LAN, as well as stages such as DEV and PROD, can be mapped in environments. Modules contains the sub-modules such as baseconfig, which are invoked by the root modules in environments.

Module execution
Now we have everything for our first Terraform module in a clear structure. To run Terraform, we only have to execute the known commands in the directory of the root module. Make sure that the source path points to the baseconfig module. Otherwise, an error will occur during initialization.

module "baseconfig" {
  source = "./../../../modules/netscaler/baseconfig/"
  nsip   = var.nsip
  route  = var.route
}

When the path is correct, the baseconfig can be executed via the commands terraform init, terraform plan, and terraform apply.

Module Versioning
There are some good reasons why Terraform code should be located in a Git repository. One of them is the ability to have multiple versions of the code available. Terraform modules can grow over time, as more resources may be required or other changes are made.
To avoid adjustments to modules immediately affecting all environments, it is possible to refer to a certain version of a sub-module in the root module. This means that I can use version 1.1 with some changes in the DEV stage, but still use version 1.0 in PROD. To do this, we first need to add our module to a Git repository.

git init
git add .
git commit -m "Initial commit of my module"
git remote add origin "<Git repo remote URL>"
git push origin main

Afterwards we can set a tag for this specific version of the module.

git tag -a v1.0 -m "Module Version 1.0"
git push --follow-tags

Now the current state of the baseconfig module is marked as version 1.0. To use exactly this version, the source in the root module must be adjusted. This differs depending on the service used. For a generic Git repository, it works like this:

module "baseconfig" {
  source = "git::https://example.com/network.git//modules/baseconfig?ref=tags/1.0"
  nsip   = var.nsip
  route  = var.route
}

What's next? There are more modules
This was the first example of how Terraform modules can significantly simplify NetScaler configuration. Especially in growing environments, this allows you to always build the same consistent configurations. This does not only apply to base configurations. Services such as load balancing, content switching, or Gateway with Advanced Authentication can also be built as modules. Thinking even further, the underlying infrastructure, such as a NetScaler in Azure or AWS, can also be created in an automated and reusable way via modules.

How to get started with NetScaler Automation Toolkit?
1. Visit the hands-on labs for Terraform with NetScaler to familiarize yourself with the NetScaler Automation Toolkit: Basic Load Balancing Configuration using Terraform, Basic Content Switching Configuration using Terraform, Basic Rewrite / Responder Policies Configuration using Terraform, Basic Application Protection Configuration (WAF) using Terraform.
2. Set up your private lab by following NetScaler's Get Started guides for Terraform: Get Started with NetScaler Automation Toolkit using Terraform.
3. Follow the Beginners Guide to get comfortable with common commands: Beginners guide to automating ADC with Terraform on GitHub.
4. Be part of the community. Pick a common NetScaler use case / configuration, create a Terraform template, and publish it on NetScaler Community.

Additional resources for NetScaler Automation Toolkit
Visit the following links to get more details on using Terraform with NetScaler: ADC Terraform Provider on Terraform Registry, ADC Terraform Provider on GitHub, ADM Terraform Provider on GitHub, SDX Terraform Provider on GitHub, BLX Terraform Provider on GitHub, Terraform Templates to Provision VPX on GitHub.

About the Author
Arne Marien is a consultant focusing on application delivery, and especially on NetScaler, working at Provectus Technologies GmbH. Provectus is a founder-managed IT service provider based in Munich. Since 2001, the business enabler has empowered corporate and medium-sized companies to meet the requirements of digitalization. The core approach involves cloud technologies, digital workplace, and process optimization. More than 110 experts operate under constant compliance, stability, availability, and security.
  20. Simplified Content Switching with Terraform
Submitted Dec 12, 2023
Author: Bhalchandra Chaudhari

Continuing our Parent-Child Terraform deployment series: in the last blog, we saw the SSL offloading feature with simplified, reusable Terraform modules, which help to keep your common configuration intact within the NetScaler infrastructure itself. Today, we will see a similar use case for NetScaler's layer 7 load balancing feature, content switching, simplified with Terraform modules. Content switching enables the NetScaler ADC appliance to direct requests sent to the same web host to different servers with different content. Find more details about the “Terraform Parent-Child” deployment in the SSL offloading article mentioned above.

Below is the content switching load balancing setup created using the Terraform parent-child deployment.

Child modules prepared to be called in the root/parent module:
SSL certificate key-pair
SSL cipher group for frontend traffic
SSL cipher group for backend traffic
SSL CS virtual server configuration and parameter set
HTTP CS virtual server configuration for secure redirection
Required target load balancing setup
Content switching actions and policies
Responder for HTTP to HTTPS redirection

Parent/root modules:
Application-specific module, e.g., corpservices.example.com

The directory structure for this deployment is as below:

├───citrixadc                 - Citrix ADC terraform provider files
├───commonmodules             - child/shared module main directory
│   ├───cipher_svc            - child (shared) module, backend cipher group
│   ├───cipher_vs             - child (shared) module, frontend cipher group
│   ├───lbservice             - child (configuration) module, load balancing services
│   ├───responder_httpTohttps - child (shared) module for responder
│   └───sslcertkeypair        - child (shared) module, SSL certificate keypair
├───contentswitching          - feature parent directory/module
│   └───corporate.example.com - application root/parent directory/module
│       ├───TargetLB1         - child (configuration) module, load balancing services
│       ├───TargetLB2         - child (configuration) module, load balancing services
│       └───TargetLB3         - child (configuration) module, load balancing services
└───sslcerts                  - SSL files (cert and key) directory

Each module contains its Terraform files to prepare the configuration plan and helps the root module execute the configuration depending on variable attribute/parameter values. The child or shared modules have their own configuration plan. The respective shared module values are provided as references in the application root module so that the required shared configuration does not get destroyed or changed if there is any change in application attributes/parameters. The necessary states are maintained separately.
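One common way to reference a shared module's output from the application root module while keeping states separate is a terraform_remote_state data source. The sketch below is a hedged illustration of that idea only: the state path, module source, and output name are made up and are not the article's actual files.

# Hedged illustration: paths, module source, and output names are placeholders.
# Read the separately maintained state of the shared sslcertkeypair module...
data "terraform_remote_state" "sslcertkeypair" {
  backend = "local"
  config = {
    path = "../../commonmodules/sslcertkeypair/terraform.tfstate"  # assumed relative path
  }
}

# ...then hand its output to the application (parent) configuration, so a change
# in the application never recreates or destroys the shared certificate key-pair.
module "corpservices" {
  source      = "./TargetLB1"                                                    # hypothetical module path
  certkeyname = data.terraform_remote_state.sslcertkeypair.outputs.certkeyname   # assumed output name
}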
Below is the Terraform structure, depicting each main directory module and sub-directory (child module) with its respective Terraform files:

│   .gitignore
│   secret.tfvars
│
├───citrixadc
├───commonmodules
│   ├───cipher_svc
│   │       .terraform.lock.hcl
│   │       output.tf
│   │       provider.tf
│   │       README.md
│   │       resources.tf
│   │       terraform.tfstate
│   │       terraform.tfstate.backup
│   │       variables.tf
│   │
│   ├───cipher_vs
│   │       .terraform.lock.hcl
│   │       output.tf
│   │       provider.tf
│   │       README.md
│   │       resources.tf
│   │       terraform.tfstate
│   │       terraform.tfstate.backup
│   │       variables copy.tf
│   │       variables.tf
│   │
│   ├───lbservice
│   │       output.tf
│   │       README.md
│   │       resources.tf
│   │       variables.tf
│   │
│   ├───responder_httpTohttps
│   │       .terraform.lock.hcl
│   │       output.tf
│   │       provider.tf
│   │       resources.tf
│   │       terraform.tfstate
│   │       terraform.tfstate.backup
│   │       terraform.tfvars
│   │       variables.tf
│   │
│   └───sslcertkeypair
│           .terraform.lock.hcl
│           output.tf
│           provider.tf
│           README.md
│           resources.tf
│           terraform.tfstate
│           terraform.tfvars
│           variables.tf
│
├───contentswitching
│   │   README.md
│   │
│   └───corporate.example.com
│       │   .terraform.lock.hcl
│       │   provider.tf
│       │   README.md
│       │   resources.tf
│       │   terraform.tfstate
│       │   terraform.tfstate.backup
│       │   terraform.tfvars
│       │   variables.tf
│       │
│       ├───TargetLB1
│       │       .terraform.lock.hcl
│       │       provider.tf
│       │       resources.tf
│       │       terraform.tfstate
│       │       terraform.tfstate.backup
│       │       terraform.tfvars
│       │       variables.tf
│       │
│       ├───TargetLB2
│       │       .terraform.lock.hcl
│       │       provider.tf
│       │       resources.tf
│       │       terraform.tfstate
│       │       terraform.tfstate.backup
│       │       terraform.tfvars
│       │       variables.tf
│       │
│       └───TargetLB3
│               .terraform.lock.hcl
│               provider.tf
│               resources.tf
│               terraform.tfstate
│               terraform.tfstate.backup
│               terraform.tfvars
│               variables.tf
│
└───sslcerts
        bkclab.cer.cert
        bkclab.cer.key
        terracert.cert
        terracert.key

You can find the required Terraform files for all modules here, where the resource block of each child module is called using a data resource to configure the load balancing entities.

Execution
As states are maintained separately, each child module except “lbservice” runs individually, so that the output value of each can be referenced in the parent application module. Similarly, the target LBs are executed separately to keep the web application content requests separate. On running terraform init, terraform plan, and terraform apply, the module creates all the required entities and bindings between them, as shown below.

PS C:\Users\bkchaudhari\article\contentswitching\corporate.example.com> terraform init
Initializing modules...

Initializing the backend...

Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding latest version of citrix/citrixadc...
- Installing citrix/citrixadc v1.32.0...
- Installed citrix/citrixadc v1.32.0

PS C:\Users\bhalchandrac\ssloffload\example.com> terraform plan -var-file="../../secret.tfvars"

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

(I intentionally skipped the plan output to reduce the article size. Terraform will create 23 resources to configure content switching.)

PS C:\Users\bhalchandrac\article\example.com> terraform apply -var-file="../../secret.tfvars"

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
...

Terraform will perform the following actions:

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes

citrixadc_csaction.tf_csaction3: Creating...
module.loadbalancingservices.citrixadc_server.tf_server["testservermain01"]: Creating...
citrixadc_csaction.tf_csaction2: Creating...
citrixadc_csaction.tf_csaction1: Creating...
module.loadbalancingservices.citrixadc_server.tf_server["testservermain02"]: Creating...
citrixadc_csvserver.tf_csvserverssl: Creating...
citrixadc_csvserver.tf_csvserverhttp: Creating...
citrixadc_lbvserver.tf_lbvserverhttp: Creating...
citrixadc_csaction.tf_csaction3: Creation complete after 0s [id=csact.webapp3]
citrixadc_cspolicy.tf_cspolicy3: Creating...
citrixadc_csaction.tf_csaction2: Creation complete after 0s [id=csact.webapp2]
citrixadc_cspolicy.tf_cspolicy2: Creating...
citrixadc_csaction.tf_csaction1: Creation complete after 0s [id=csact.webapp1]
citrixadc_cspolicy.tf_cspolicy1: Creating...
module.loadbalancingservices.citrixadc_server.tf_server["testservermain01"]: Creation complete after 0s [id=testservermain01]
module.loadbalancingservices.citrixadc_server.tf_server["testservermain02"]: Creation complete after 0s [id=testservermain02]
module.loadbalancingservices.citrixadc_service.tf_service["testservermain02"]: Creating...
module.loadbalancingservices.citrixadc_service.tf_service["testservermain01"]: Creating...
citrixadc_cspolicy.tf_cspolicy3: Creation complete after 1s [id=cspol.webapp3]
citrixadc_cspolicy.tf_cspolicy2: Creation complete after 1s [id=cspol.webapp2]
citrixadc_cspolicy.tf_cspolicy1: Creation complete after 1s [id=cspol.webapp1]
module.loadbalancingservices.citrixadc_service.tf_service["testservermain02"]: Creation complete after 1s [id=testservermain02-443]
citrixadc_lbvserver.tf_lbvserverhttp: Creation complete after 1s [id=corpwebapp.corporate.com:443]
citrixadc_csvserver.tf_csvserverhttp: Creation complete after 1s [id=corpservices.example.com:80]
citrixadc_csvserver_responderpolicy_binding.tf_bind: Creating...
module.loadbalancingservices.citrixadc_service.tf_service["testservermain01"]: Creation complete after 1s [id=testservermain01-443]
citrixadc_lbvserver_service_binding.tf_bindinghttp["testservermain02"]: Creating...
citrixadc_lbvserver_service_binding.tf_bindinghttp["testservermain01"]: Creating...
citrixadc_csvserver_responderpolicy_binding.tf_bind: Creation complete after 0s [id=corpservices.example.com:80,httptohttps_pol]
citrixadc_csvserver.tf_csvserverssl: Creation complete after 1s [id=corpservices.example.com:443]
citrixadc_lbvserver_service_binding.tf_bindinghttp["testservermain01"]: Creation complete after 0s [id=corpwebapp.corporate.com:443,testservermain01-443]
citrixadc_sslvserver.tf_cssslvserver: Creating...
citrixadc_csvserver_cspolicy_binding.tf_cspolbind1: Creating...
citrixadc_sslvserver_sslciphersuite_binding.tf_cssslvserver_sslciphersuite_binding: Creating...
citrixadc_csvserver_cspolicy_binding.tf_cspolbind3: Creating...
citrixadc_lbvserver_service_binding.tf_bindinghttp["testservermain02"]: Creation complete after 0s [id=corpwebapp.corporate.com:443,testservermain02-443]
citrixadc_csvserver_cspolicy_binding.tf_cspolbind2: Creating...
citrixadc_sslvserver_sslcertkey_binding.cert_bindingcs: Creating...
citrixadc_csvserver.tf_defaulttargetbinding: Creating...
citrixadc_csvserver_cspolicy_binding.tf_cspolbind1: Creation complete after 1s [id=corpservices.example.com:443,cspol.webapp1]
citrixadc_csvserver_cspolicy_binding.tf_cspolbind3: Creation complete after 1s [id=corpservices.example.com:443,cspol.webapp3]
citrixadc_sslvserver_sslciphersuite_binding.tf_cssslvserver_sslciphersuite_binding: Creation complete after 1s
[id=corpservices.example.com:443,CorporateExternal]
citrixadc_csvserver_cspolicy_binding.tf_cspolbind2: Creation complete after 1s [id=corpservices.example.com:443,cspol.webapp2]
citrixadc_sslvserver_sslcertkey_binding.cert_bindingcs: Creation complete after 1s [id=corpservices.example.com:443,terracert]
citrixadc_sslvserver.tf_cssslvserver: Creation complete after 1s [id=corpservices.example.com:443]
citrixadc_csvserver.tf_defaulttargetbinding: Creation complete after 1s [id=corpservices.example.com:443]

Apply complete! Resources: 23 added, 0 changed, 0 destroyed.

Outputs:

csaction1 = "csact.webapp1"
csaction2 = "csact.webapp2"
csaction3 = "csact.webapp3"
cspolicy1 = "cspol.webapp1"
cspolicy1binding = "corpservices.example.com:443,cspol.webapp1"
cspolicy2 = "cspol.webapp2"
cspolicy2binding = "corpservices.example.com:443,cspol.webapp2"
cspolicy3 = "cspol.webapp3"
cspolicy3binding = "corpservices.example.com:443,cspol.webapp3"
cssslvserver_certbinding = "corpservices.example.com:443,terracert"
cssslvserver_cipherbinding = "corpservices.example.com:443,CorporateExternal"
csvserverhttp = "corpservices.example.com:80"
csvserverssl = "corpservices.example.com:443"
lbservices = [
  "testservermain01-443",
  "testservermain02-443",
]
lbvserverhttp = "corpwebapp.corporate.com:443"

The module creates all the required entities and the bindings between them.

Note: This module does not show the destroy function for content switching entities. The parent module can be destroyed entirely at once; however, the target load balancing virtual server cannot be destroyed if it has a CS action attached. To remove or change the target LB, unset the target LB from the action and then change or destroy the configuration.

Conclusion
You can use the NetScaler and Terraform solutions for new content switching deployments, for adding settings to existing content switching, or to customize a deployment to fit the application content access that you want to provide to end users. NetScaler Terraform modules enable an infrastructure-as-code approach and integrate seamlessly with your automation environment using reusable modules.

What's next? How to get started with NetScaler Automation Toolkit?
(You can find all NetScaler Automation Toolkit details in our Automation Toolkit GitHub repository.)
1. Visit the hands-on labs for Terraform with NetScaler to familiarize yourself with the NetScaler Automation Toolkit: Basic Load Balancing Configuration using Terraform, Basic Content Switching Configuration using Terraform, Basic Rewrite / Responder Policies Configuration using Terraform, Basic Application Protection Configuration (WAF) using Terraform.
2. Set up your private lab by following NetScaler's Get Started guides for Terraform: Get Started with NetScaler Automation Toolkit using Terraform.
3. Follow the Beginners Guide to get comfortable with common commands: Beginners guide to automating ADC with Terraform on GitHub.
4. Be part of the community. Pick a common NetScaler use case/configuration, create a Terraform template, and publish it on the NetScaler Community.

Additional resources for NetScaler Automation Toolkit
Visit the following links to get more details on using Terraform with NetScaler: ADC Terraform Provider on Terraform Registry, ADC Terraform Provider on GitHub, ADM Terraform Provider on GitHub, SDX Terraform Provider on GitHub, BLX Terraform Provider on GitHub, Terraform Templates to Provision VPX on GitHub.
  21. Hello Danny, You can find the equivalent Terraform resource here: https://registry.terraform.io/providers/citrix/citrixadc/latest/docs/resources/sslcertkey And the Nitro API here: https://developer-docs.citrix.com/projects/citrix-adc-nitro-api-reference/en/latest/configuration/ssl/sslcertkey/#link Please let me know if that works for you. 🙂
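For reference, a minimal hedged example of that resource might look like the sketch below. The file paths and names are placeholders, and the linking argument is taken from the NITRO "link" operation, so please confirm it against the provider documentation linked above.

# Hedged example only: file paths and names are placeholders.
resource "citrixadc_sslcertkey" "intermediate_ca" {
  certkey = "intermediate_ca_cert"        # assumed name
  cert    = "/nsconfig/ssl/intermediate.crt"
}

resource "citrixadc_sslcertkey" "server_cert" {
  certkey = "server_cert"                 # assumed name
  cert    = "/nsconfig/ssl/server.crt"
  key     = "/nsconfig/ssl/server.key"

  # Link this certificate to its intermediate/CA certificate; the argument name
  # mirrors the NITRO link operation and should be verified in the provider docs.
  linkcertkeyname = citrixadc_sslcertkey.intermediate_ca.certkey
}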