
VPN: Split Tunnel subnet access issue

Rowen Gunn




Friday night I enabled Split Tunneling on our company's VPN, switching from our previous Reverse Split Tunneling model. Everything is working as expected except one weird gotcha that I need some assistance with.


We have a small number of servers on the 172.16.1.xxx and 172.16.109.xxx subnets. I've entered 172.16.0.0 / 255.240.0.0 as an Intranet Application and bound it to my SSLVPN gateway. I can ping, RDP, SMB, etc. to all my internal subnets using the new Split Tunneling VPN model except anything on 172.16.xxx.xxx. Despite being covered in the range above, I cannot reach any server or object in the 172.16.xxx.xxx ranges at all. I can reach servers covered by the 172.16.0.0 / 255.240.0.0 Intranet Application on other ranges such as 172.21.xxx.xxx or 172.20.xxx.xxx. Also, if I create a hostname-based Intranet Application with the FQDN or WINS name of a server in the 172.16.xxx.xxx ranges, the VPN user can then reach the server, but they must use a DNS name; the IPs in the impacted ranges are still unreachable. I attached a screenshot showing how the IP changes but the object is reachable.


I opened a ticket with support and they will look into the issue Monday, but I was hoping someone on the forums might know why this is occurring. I noticed the IP interception range is also 172.16.xxx.xxx (see the attached screenshot). I also found that after connecting to the VPN there's a mysterious route in the client's routing table that I do not have added to the Netscaler's config.


I've tried creating an Intranet Application based on an IP range for just 172.16.1.xxx, and creating one based on IP & Netmask for just 172.16.1.xxx, and neither worked. No matter what I try, I cannot get the VPN client to talk directly to anything on 172.16.xxx.xxx while in Split Tunneling mode. I do not have a route configured on the Netscaler which would impact this range either. Does anyone know why the VPN client is dropping 172.16.xxx.xxx traffic like this?






Short of a bug causing havoc (which you'll need support to find), traffic flow could be affected by routes, PBRs, or external firewall rules.

Also check gateway authorization policies, in addition to the intranet apps.

The gateway cert on the VPN vserver needs to be fully trusted: not self-signed, expired, or otherwise invalid.


If we assume at first that it's not a bug, then it's necessary to check all the other potential config issues. During the tests:

1) Run an nstrace and even a client device network trace for comparison.

2) Check syslog (/var/log/ns.log) for gateway events, such as denied authorizations or other packet events.

3) Check nslog (/var/nslog/newnslog via nsconmsg -K newnslog -d event  and -d consmsg) output to confirm there isn't some low-level network event showing up (possibly a switch issue).


Also, when you do your ping test, run the pings from the ADC with the -S <SNIP> option to confirm a SNIP can reach the destination, ruling out a purely client-side VPN issue.


Finally, is there any overlap between the network at issue and the local network of the client device?  While split tunnel should override, if there's an overlapping network or a competing vpn client, you might have a bonus issue.
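One way to sanity-check for that kind of collision is to compare the tunneled network against the client's local subnets. A minimal sketch using Python's `ipaddress` module (the local LAN and spoofed ranges below are hypothetical examples, not values from this thread):

```python
import ipaddress

intranet = ipaddress.ip_network("172.16.0.0/12")      # the split-tunneled range
local_lan = ipaddress.ip_network("192.168.1.0/24")    # hypothetical client home LAN
spoof_range = ipaddress.ip_network("172.16.0.0/16")   # hypothetical FQDN-spoofing range

print(intranet.overlaps(local_lan))    # False: no conflict with the local LAN
print(intranet.overlaps(spoof_range))  # True: the spoofed range shadows real subnets
```

If `overlaps()` returns True for any pair of ranges the client has to juggle, the more specific route generally wins on the client, which can silently blackhole the rest.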


So, generic observations:

8 hours ago, Rowen Gunn said:

We have a small number of servers on the 172.16.1.xxx and 172.16.109.xxx subnets. I've entered 172.16.0.0 / 255.240.0.0 as an Intranet Application and bound it to my SSLVPN gateway along


Can you clarify the subnet/mask in use for the Intranet app: your post shows "/ 255.240.0", which isn't a valid mask on its own. Did you mean 255.240.0.0 (a /12)? (It sounds like the latter, but confirming the typo.)
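For reference, a quick check with Python's `ipaddress` module shows what 255.240.0.0 actually covers (the addresses below are the ones mentioned in this thread):

```python
import ipaddress

# 255.240.0.0 is a /12 mask, so 172.16.0.0/255.240.0.0 spans
# 172.16.0.0 through 172.31.255.255 (the whole RFC 1918 172.16/12 block).
net = ipaddress.ip_network("172.16.0.0/255.240.0.0")
print(net)                    # 172.16.0.0/12
print(net.broadcast_address)  # 172.31.255.255

# All of the subnets mentioned in this thread fall inside it:
for ip in ("172.16.1.10", "172.16.109.5", "172.20.0.1", "172.21.0.1"):
    assert ipaddress.ip_address(ip) in net
```

That would explain why 172.20.xxx.xxx and 172.21.xxx.xxx work under the same Intranet Application entry; 172.16.xxx.xxx failing despite being covered points at something else intercepting that specific range.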


If you change your Intranet app for testing to a simple class B or class C, does it work or not? Do you have proper authorization allow rules specified for these destinations?

Can the ADC reach these destinations normally from itself without the VPN tunnel engaged (proper routes, no firewall rules or ACLs denying access)? If not, then the VPN tunnel users will also fail. But definitely keep an eye on syslog/nslog for denied authorizations or network issues during the test.


If the split tunnel network overlaps with local networks, then advanced VPN client settings may be overriding the intercept behavior. If you're toggling back and forth with overrides on/off, you might try fully uninstalling/reinstalling the VPN client to make sure you're not picking up inconsistent settings between test cases. Also, remember that any change to allowed networks or authorization policies requires the user to log off and log on again to test it; you can't change settings and test with an existing session — you must start a new one to confirm changes.


Also confirm you don't have conflicting settings at the user/group level overriding the settings in effect at the vpn vserver level.


How was your reverse split tunnel originally defined? Was there any overlap between the networks NOT to intercept then and the networks TO intercept now? If so, that setting conflict could create an unexpected result. Be sure the conflicting setting isn't still in effect, AND maybe uninstall/reinstall the VPN plugin on the affected client to ensure no cached values are in use.


But these are some things that might help if it is config related and not a product bug. (Sorry for the ramble at the end.)



Support has confirmed that, for some reason, the VPN client is intentionally using part of the 172.16.xxx.xxx range as its Intranet Application hostname interception range. This means the client is pointing that range, on its own, at itself, to be used for hostnames that you list as Intranet Applications in the gateway config. Support is looking into whether there's a way to change or modify this range.


I find it hard to believe no other company using this VPN in Split Tunnel mode has encountered this issue before. We can't be the only company using an internal subnet in the 172.16.xxx.xxx range with split tunneling. The ADC itself can reach the 172.16.xxx.xxx range: we have our DNS and DCs in that range and it can talk to them just fine. It's the VPN clients in Split Tunnel mode which cannot talk to 172.16.xxx.xxx.


I have an Intranet Application set up with the IP range 172.16.0.0 / 255.240.0.0. This range covers into 172.21.xxx.xxx, where we also have servers, and you can access those servers while on the VPN. So we know the Intranet Application range I entered is working for subnets other than 172.16.xxx.xxx.


Citrix Support has confirmed that the issue we are having is due to the Netscaler VPN client using a range within 172.16.xxx.xxx as the default Spoofed FQDN IP range for Split Tunnel hostname redirection.


To correct this issue, enter another IP range you're not using under VPN Session Profile > Client Experience > Advanced Settings > "Spoofed IP Addresses for FQDN Based Tunneling". We used an unused range per Citrix support, and now we can access 172.16.xxx.xxx on our VPN.
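Before committing to a replacement spoofed range, it's worth confirming it doesn't collide with anything you actually route internally. A small sketch (the in-use ranges and candidates below are placeholders for illustration; substitute your own):

```python
import ipaddress

# Ranges actually in use internally (placeholders for illustration)
in_use = [ipaddress.ip_network(n) for n in ("172.16.0.0/12", "10.0.0.0/8")]

def is_safe_spoof_range(candidate: str) -> bool:
    """True if the candidate range overlaps none of the in-use networks."""
    cand = ipaddress.ip_network(candidate)
    return not any(cand.overlaps(net) for net in in_use)

print(is_safe_spoof_range("172.16.0.0/16"))  # False: sits inside 172.16.0.0/12
print(is_safe_spoof_range("192.0.2.0/24"))   # True: touches neither range
```

Any range that passes this check should be safe to enter in the spoofed-IP setting without shadowing real destinations.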


  • 1 year later...

Hello Rowen,


We are the second company that ran into the same issue, but in our case it started with the Secure Access version; we were no longer able to reach the VPN clients from the 172.16.x.x network.

I opened a Citrix ticket and, after a few weeks, Citrix advised me to change the range in the Spoofed IP address setting in the profile.

But nothing was configured there in our environment, so I wondered whether this could really be the solution. Then I found your comment and realized it could be; I changed the range as you described in your post, and it started working.

So thanks for your support!

But why did this problem only start in our environment after we updated the clients to the Secure Access version?


