
High Availability issue when running CPX on different hosts with bridge networking


sebastien langlois


The ADC instances don't see each other when trying to configure high availability between two Citrix ADC CPX instances.

 

CPX01: node IP 192.168.1.10, NSIP 172.18.1.4 (in the docker network)

CPX02: node IP 192.168.1.20, NSIP 172.18.1.4 (in the docker network)

Network mode: bridge

 

 

```

root@4a683dcfb3e1:/# cli_script.sh 'add ha node 1 192.168.1.20 -inc enabled'
exec: add ha node 1 192.168.1.20 -inc enabled
Done
root@4a683dcfb3e1:/# cli_script.sh 'show ha node'
exec: show ha node
1)    Node ID:      0
    IP:    172.28.1.4 (4a683dcfb3e1)
    Node State: UP
    Master State: Primary
    Fail-Safe Mode: OFF
    INC State: ENABLED
    Sync State: ENABLED
    Propagation: ENABLED
    Enabled Interfaces : 0/1 0/2
    Disabled Interfaces : None
    HA MON ON Interfaces : 0/2
    HA HEARTBEAT OFF Interfaces : 0/1
    Interfaces on which heartbeats are not seen : None
    Interfaces causing Partial Failure: None
    SSL Card Status: NOT PRESENT
    Sync Status Strict Mode: DISABLED
    Hello Interval: 200 msecs
    Dead Interval: 3 secs
    Node in this Master State for: 0:0:0:27 (days:hrs:min:sec)
2)    Node ID:      1
    IP:    192.168.1.20
    Node State: UNKNOWN/DOWN
    Master State: UNKNOWN
    Fail-Safe Mode: UNKNOWN
    INC State: UNKNOWN
    Sync State: UNKNOWN
    Propagation: UNKNOWN
    Enabled Interfaces : UNKNOWN
    Disabled Interfaces : UNKNOWN
    HA MON ON Interfaces : UNKNOWN
    HA HEARTBEAT OFF Interfaces : UNKNOWN
    Interfaces on which heartbeats are not seen : UNKNOWN
    Interfaces causing Partial Failure: UNKNOWN
    SSL Card Status: UNKNOWN
Local node information:
    Critical Interfaces: 0/2
Done
root@4a683dcfb3e1:/#

```

 

The HOST environment variable is set when launching the CPX instance and the required ports are mapped. The CPX docker-compose file is provided below.
Bi-directional udp/3003 traffic is seen when capturing packets with tcpdump on the host.
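The capture on each host was done with something like the following (the interface name is just an example, adjust it to the host's uplink):

```
# Watch the CPX HA heartbeat traffic (udp/3003) on the docker host
tcpdump -ni eth0 udp port 3003
```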

The RPC password was automatically configured at CPX startup and has not been changed.

NSROOT password has not been changed.

INC mode is used for HA in this lab as our target is to have CPX nodes deployed in different subnets.
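For reference, the HA pairing on each node looks like this (the CPX01 command matches the capture above; the CPX02 command is its assumed mirror):

```
# On CPX01 (host 192.168.1.10): add the peer node using its host IP
cli_script.sh 'add ha node 1 192.168.1.20 -inc enabled'

# On CPX02 (host 192.168.1.20): the mirror command pointing back at CPX01
cli_script.sh 'add ha node 1 192.168.1.10 -inc enabled'
```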

 

The same docker-compose file is used on both hosts:

```

version: '3.4'

services:
    netscaler:
        container_name: "netscaler"
        image: cpx:13.0-64.35
        privileged: true
        ports:
            - 443:443
            - 22:22
            - 80:80
            - 161:161/udp
            - 9080:9080
            - 3003:3003/udp
            - 3010:3010
            - 8873:8873
        tty: false
        cap_add:
            - NET_ADMIN
        ulimits:
            core: -1
        volumes:
            #- ./adc.conf:/etc/cpx.conf
            - ./cpx:/cpx
        environment:
            - EULA=yes
            - PLATEFORM=CP1000
            - HOST=192.168.1.10
        networks:
          cpx_net:
            ipv4_address: 172.18.1.4

networks:
  cpx_net:
    ipam:
      driver: default
      config:
        - subnet: 172.18.0.0/16

```

 

Why are the two CPX instances not seeing each other?

 

 


Hi Sebastien,

 

As you mentioned, the same file is being used on both the 192.168.1.10 and 192.168.1.20 nodes, and that is what prevents the CPX instances from reaching each other. HOST takes the IP of the node on which CPX is deployed. Please create separate docker-compose files for the two nodes, each with that node's IP in the HOST environment variable. That should fix reachability between the nodes.
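If you want to keep a single compose file, one possible approach (just an illustration, using a hypothetical NODE_IP variable) is to reference a shell variable for HOST in the compose file, e.g. `- HOST=${NODE_IP}` under `environment:`, and pass the node's IP at launch time:

```
# Hypothetical sketch: the compose file uses HOST=${NODE_IP}, so each host
# passes its own IP when bringing the stack up.

# On the host whose IP is 192.168.1.10
NODE_IP=192.168.1.10 docker-compose up -d

# On the host whose IP is 192.168.1.20
NODE_IP=192.168.1.20 docker-compose up -d
```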

 

Also, I would like to highlight a few points here -

  • Please use port 3008 instead of 3010; with this release the unsecure RPC port is blocked and only secure RPC communication is allowed, which requires port 3008 to be exposed on both nodes for the CPX containers.
  • The default NITRO ports are 9080 for HTTP and 9443 for HTTPS. Ports 80 and 443 are free to be used for configuring the desired vservers/services.
  • With the 13.0-64.35 release, CPX generates a random password for the 'nsroot' user. So after successful creation of the CPX HA pair, cli_script.sh on the secondary will require credentials as a second argument, while cli_script.sh on the primary CPX will work seamlessly unless the password is changed manually. Please refer to https://docs.citrix.com/en-us/citrix-adc-cpx/current-release/configure-cpx.html for more information on non-default passwords.
  • Either privileged mode or CAP_NET_ADMIN is required for CPX, not both. Please refer to https://docs.citrix.com/en-us/citrix-adc-cpx/current-release/deploy-using-docker-image-file.html for more information on the capability requirements of CPX.

 

Please try these changes for making CPX HA pair across nodes.

 

Thanks & Regards,

Akshay Budhauliya


Hi Akshay,

 

It works when using port 3008 instead of 3010. Thanks for your advice ;-)

I noticed the nsroot password change on the secondary node after peering. I understand that I have to manually set the nsroot password before being able to access CPX through the NITRO API, as the nsroot password is now randomly generated. Is that correct?

 

I changed the docker-compose.yml file following your recommendations. The modified file below is used to deploy CPX on both nodes, with the HOST environment variable set to the IP address of each node.

  • Removed the unneeded tcp/22 port mapping.
  • Replaced the tcp/3010 port with tcp/3008.
  • Removed 'privileged: true' to limit CPX privileges.

 

version: '3.4'

services:
    netscaler:
        container_name: "netscaler"
        image: cpx:13.0-64.35
        ports:
            - 443:443/tcp
            - 80:80/tcp
            - 161:161/udp
            - 9080:9080/tcp
            - 3003:3003/udp
            - 3008:3008/tcp
            - 8873:8873/tcp
        tty: false
        cap_add:
            - NET_ADMIN
        ulimits:
            core: -1
        volumes:
            #- ./adc.conf:/etc/cpx.conf
            - ./cpx:/cpx
        environment:
            - EULA=yes
            - PLATEFORM=CP1000
            - HOST=192.168.1.10
        networks:
          cpx_net:
            ipv4_address: 172.18.1.4

networks:
  cpx_net:
    ipam:
      driver: default
      config:
        - subnet: 172.18.0.0/16

 

 

Is this CPX configuration optimized?

 

Why does /etc/cpx.conf load so slowly at container startup (more than a minute)?

 

What is the recommended configuration of the docker engine userland-proxy?

 

Thanks & regards,

Sebastien


Hi Sebastien,

 

Thank you for verifying.

 

56 minutes ago, sebastien langlois said:

I noticed the nsroot password change on the secondary node after peering. I understand that I have to manually set the nsroot password before being able to access CPX through the NITRO API, as the nsroot password is now randomly generated. Is that correct?

 

In a CPX HA pair, the nsroot password on the secondary CPX gets updated to match the nsroot password on the primary CPX.

Yes, CPX now generates the nsroot password randomly the first time it boots. After that you can choose to update the password of the nsroot user. If you don't wish to change it, you can use the randomly generated password for NITRO API calls or SSH. The password is saved in the /var/deviceinfo/random_id file on CPX's file system, and cli_script.sh automatically reads the password from this file. If you change the password, then cli_script.sh will take the credentials as a second argument. Here is how to provide the password to cli_script.sh -

        cli_script.sh "<command>" ":<user>:<password>"

Example -

        cli_script.sh "show ns ip" ":nsroot:Citrix123"

     

Please refer "Support for using a non-default password in Citrix ADC CPX" section on https://docs.citrix.com/en-us/citrix-adc-cpx/current-release/configure-cpx.html page for example on cli_script.sh with credentials. He

 

3 hours ago, sebastien langlois said:

Is this CPX configuration optimized?

 

You can also map port 9443:9443/tcp on CPX to use the secure NITRO API. Also, I see that you are exposing ports 80 and 443. Do you wish to configure vservers on these ports? If not, these ports do not need to be exposed for any management purpose.
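As an illustration, with 9443 mapped the same NITRO calls can be made over HTTPS from the docker host (the host IP is just an example; -k accepts CPX's self-signed certificate):

```
# Secure NITRO login from the docker host over the mapped 9443 port
curl -sk -X POST https://192.168.1.10:9443/nitro/v1/config/login \
     -H 'Content-Type: application/json' \
     -d '{"login":{"username":"nsroot","password":"<password>"}}'
```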

Also, using "tty: false" stops the CPX logs from reaching the docker logging driver. So, if you wish for all CPX logs to be available in docker logs, please set "tty: true".

 

3 hours ago, sebastien langlois said:

Why does /etc/cpx.conf load so slowly at container startup (more than a minute)?

 

This is a boot-up configuration file; it is parsed and its configuration applied on CPX only after all processes have initialised successfully. It usually takes about a minute for CPX to initialise all the processes. CPX initialisation time also depends on the docker host's CPU and memory load. It is recommended to have 1 free vCPU core and 1 GB of RAM per CPX Packet Engine on the docker host.
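As a quick illustration, the container's load on the docker host can be checked with a standard docker command:

```
# One-shot snapshot of the CPX container's CPU and memory usage
docker stats --no-stream netscaler
```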

 

3 hours ago, sebastien langlois said:

What is the recommended configuration of the docker engine userland-proxy?

Can you please elaborate on this? Which kind of configuration are you looking for?

 

Please feel free to join the Citrix ADC Cloud Native Slack channel for any queries on CPX. Please refer to https://github.com/citrix/citrix-k8s-ingress-controller for instructions on how to join the Slack channel.

 

Thanks & Regards,

Akshay Budhauliya


Hi @Akshay Budhauliya,

 

Based on https://deavid.wordpress.com/2019/06/15/how-to-allow-docker-containers-to-see-the-source-ip-address/ and http://rabexc.org/posts/docker-networking, it seems highly recommended to disable the docker engine userland-proxy.

/etc/docker/daemon.json:

{
     "userland-proxy": false
}
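Note that a change to daemon.json only takes effect after the docker daemon is restarted (assuming a systemd-based host):

```
# Restart the docker daemon so daemon.json is re-read
sudo systemctl restart docker
```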

I can't see any advice about the docker userland-proxy configuration in the CPX documentation. Is it recommended to disable the userland-proxy when deploying the Citrix ADC CPX container?


Hi Sebastien,

 

Thanks for the clarification.

 

If no workload on the docker host depends on docker-proxy, then it can be disabled. This setting only matters for bridge-mode deployments. In a Kubernetes deployment, the Kubernetes CNI takes care of the networking infrastructure and this setting has no effect.
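One quick way to check whether anything still relies on the userland proxy is to look for docker-proxy processes while your containers are running (a generic docker check, not CPX-specific):

```
# List any running userland-proxy processes; with "userland-proxy": false
# (and after a docker restart) none should appear for published ports.
pgrep -af docker-proxy
```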

 

Thanks & Regards,

Akshay Budhauliya
