feature-node-watch master node routing scrambled.


Tim Hansen

I'm currently using the Citrix ingress controller (CIC) on a six-node (3 master, 3 worker) OpenShift 4.7 cluster.

I have feature-node-watch enabled and can see routes being added to the ADC.

Unfortunately, for some strange reason, the routes for the master nodes are added to the ADC incorrectly: the wrong pod CIDRs get matched to the wrong master nodes.

The worker nodes get added correctly, though.

In the logs it looks like the routes are configured correctly initially, but a few minutes later they get scrambled.

In the logs I see:
 

2021-03-24 21:36:16,836  - INFO - [kubernetes.py:__init__:108] (MainThread) node_watch_for_routes  True
2021-03-24 21:36:16,836  - INFO - [kubernetes.py:__init__:156] (MainThread) Default certificate: openshift-operators/apps-ose4-wildcard-tls
2021-03-24 21:36:16,857  - INFO - [referencemanager.py:__init__:128] (MainThread) Initializing Reference manager singleton
2021-03-24 21:36:17,261  - INFO - [config_dispatcher.py:_pull_crd_config_from_ns:461] (Dispatcher) Pulling CRD configuration from NetScaler started
2021-03-24 21:36:17,454  - INFO - [kubernetes.py:run:256] (SecretListener) Starting thread that watches /secrets...
2021-03-24 21:36:17,455  - INFO - [kubernetes.py:run:256] (DeploymentListener) Starting thread that watches /deployments...
2021-03-24 21:36:17,528  - INFO - [nitrointerface.py:add_ns_route:4542] (MainThread) Route for 10.132.1.0:255.255.255.0 with gateway:192.168.15.21 added to ADC
2021-03-24 21:36:17,564  - INFO - [nitrointerface.py:add_ns_route:4542] (MainThread) Route for 10.132.2.0:255.255.255.0 with gateway:192.168.15.25 added to ADC
2021-03-24 21:36:17,594  - INFO - [nitrointerface.py:add_ns_route:4542] (MainThread) Route for 10.132.0.0:255.255.255.0 with gateway:192.168.15.147 added to ADC
2021-03-24 21:36:17,625  - INFO - [nitrointerface.py:add_ns_route:4545] (MainThread) Ignoring the exception while configuring Routes in ADC, Routes already exists network:10.132.6.0 gateway:192.168.15.11
2021-03-24 21:36:17,657  - INFO - [nitrointerface.py:add_ns_route:4545] (MainThread) Ignoring the exception while configuring Routes in ADC, Routes already exists network:10.132.4.0 gateway:192.168.15.28
2021-03-24 21:36:17,691  - INFO - [nitrointerface.py:add_ns_route:4545] (MainThread) Ignoring the exception while configuring Routes in ADC, Routes already exists network:10.132.5.0 gateway:192.168.15.27

..........

2021-03-24 21:38:51,096  - INFO - [nitrointerface.py:add_ns_route:4545] (MainThread) Ignoring the exception while configuring Routes in ADC, Routes already exists network:10.132.1.0 gateway:192.168.15.147
2021-03-24 21:40:26,155  - INFO - [nitrointerface.py:_configure_services_nondesired:1702] (MainThread) Unbinding 10.132.5.2:8080 from service group k8sProd-canary_openshift-ingress-canary_443_k8sProd-ingress-canary_openshift-ingress-canary_8080_svc is succesful
2021-03-24 21:42:00,681  - INFO - [nitrointerface.py:add_ns_route:4545] (MainThread) Ignoring the exception while configuring Routes in ADC, Routes already exists network:10.132.0.0 gateway:192.168.15.25
2021-03-24 21:42:30,737  - INFO - [nitrointerface.py:add_ns_route:4545] (MainThread) Ignoring the exception while configuring Routes in ADC, Routes already exists network:10.132.2.0 gateway:192.168.15.21

I'm not sure what I can do to get the master node routes configured correctly.


The CIC version I'm on is 1.10.2.  I think this is the latest version available from the OperatorHub within OpenShift?
Output from hostsubnets:

NAME                      HOST                      HOST IP          SUBNET          EGRESS CIDRS   EGRESS IPS
ose4-6qdcd-master-0       ose4-6qdcd-master-0       192.168.15.21    10.132.1.0/24                  
ose4-6qdcd-master-1       ose4-6qdcd-master-1       192.168.15.25    10.132.2.0/24                  
ose4-6qdcd-master-2       ose4-6qdcd-master-2       192.168.15.147   10.132.0.0/24                  
ose4-6qdcd-worker-5fm64   ose4-6qdcd-worker-5fm64   192.168.15.11    10.132.6.0/24                  
ose4-6qdcd-worker-k4bdc   ose4-6qdcd-worker-k4bdc   192.168.15.28    10.132.4.0/24                  
ose4-6qdcd-worker-r6m6f   ose4-6qdcd-worker-r6m6f   192.168.15.27    10.132.5.0/24  
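To make the mispairing concrete, here is a small sketch (my own helper names, data copied from the log lines and hostsubnet output above) that cross-checks the gateways CIC configured on the ADC against what `oc get hostsubnets` says they should be. The later log entries show the three master subnets rotated among the wrong master node IPs, while the worker routes still match.

```python
# Expected pod-CIDR -> gateway mapping, taken from `oc get hostsubnets`
expected = {
    "10.132.1.0": "192.168.15.21",   # master-0
    "10.132.2.0": "192.168.15.25",   # master-1
    "10.132.0.0": "192.168.15.147",  # master-2
    "10.132.6.0": "192.168.15.11",   # worker-5fm64
    "10.132.4.0": "192.168.15.28",   # worker-k4bdc
    "10.132.5.0": "192.168.15.27",   # worker-r6m6f
}

# Routes as reported in the later "Routes already exists" CIC log lines
configured = {
    "10.132.1.0": "192.168.15.147",
    "10.132.0.0": "192.168.15.25",
    "10.132.2.0": "192.168.15.21",
    "10.132.6.0": "192.168.15.11",
    "10.132.4.0": "192.168.15.28",
    "10.132.5.0": "192.168.15.27",
}

def mismatched_routes(expected, configured):
    """Return {network: (expected_gw, configured_gw)} for wrong pairings."""
    return {
        net: (expected[net], gw)
        for net, gw in configured.items()
        if net in expected and expected[net] != gw
    }

print(mismatched_routes(expected, configured))
# Only the three master subnets (10.132.0.0/1.0/2.0) come back mismatched.
```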

 


On 3/26/2021 at 10:38 PM, Tim Hansen said:

The CIC version I'm on is 1.10.2.  I think this is the latest version available from the OperatorHub within OpenShift?

The Operator has an older version of CIC; we are in the process of updating it. In the meantime, could you please use the Helm chart to deploy CIC in your environment? The newer version of CIC should work fine.

The Helm command below deploys CIC in OpenShift with nodeWatch enabled:

 

helm repo add citrix https://citrix.github.io/citrix-helm-charts/
helm install cic citrix/citrix-ingress-controller --set nsIP=<Citrix-ADC-NSIP>,license.accept=yes,adcCredentialSecret=<Citrix-ADC-Credential-Secret>,openshift=true,nsVIP=<Citrix-ADC-VIP>,nodeWatch=true

Please see our detailed Helm documentation for advanced options: https://artifacthub.io/packages/helm/citrix/citrix-ingress-controller#for-openshift


Did you delete the already existing static routes from the ADC before redeploying CIC? If not, please delete the static routes for all the master and worker nodes (10.132.X.X) from the ADC and then try redeploying CIC using Helm. CIC will not delete or modify static routes that are already configured on the ADC.
If it's still not working as expected, could you please send us the CIC logs?
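If you'd rather script the cleanup than remove the routes by hand, a rough sketch of building the Nitro REST call for deleting a static route is below. The resource path and `args` format here are my assumption based on the public Nitro v1 API (this is not an official example), so verify against your ADC's Nitro documentation before running anything destructive.

```python
from urllib.parse import quote

def route_delete_url(nsip, network, netmask, gateway):
    """Assumed URL shape for DELETE /nitro/v1/config/route/<network>
    with netmask and gateway passed as filter args; verify against
    your ADC's Nitro API docs before use."""
    args = quote(f"netmask:{netmask},gateway:{gateway}", safe=":,")
    return f"https://{nsip}/nitro/v1/config/route/{network}?args={args}"

# e.g. the stale master-0 route from the logs in this thread
# (ns.example.com is a placeholder NSIP):
print(route_delete_url("ns.example.com", "10.132.1.0",
                       "255.255.255.0", "192.168.15.147"))
```

You would issue this URL as an HTTP DELETE with your ADC credentials (e.g. via `requests.delete(...)`), once per stale 10.132.X.X route, before redeploying CIC.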

