
Problem defining START_URL configuration


Recommended Posts

Hi,

I have a website behind Citrix ADC VPX (version NS12.1 build 53.12) and I've enabled AppFW for this site.

These are the START URL settings in the profile CSP_revisorelegale:

[Screenshot: START URL settings in profile CSP_revisorelegale]

 

While analyzing the logs, I see a lot of blocked STARTURL events:

 

CEF:0|Citrix|NetScaler|NS12.1|APPFW|APPFW_STARTURL|6|src=185.105.94.243 geolocation=Unknown spt=34799 method=POST request=https://www.esempio.eee.gov.it/opencms/opencms/Revisione-legale/ricercaRevisori/index.html?as_url_id\=AAAAAAXqsJZH1lFmZQ13W9TCKi7Wyz0dMw7XSFkTsaJsWUTGxUx_ad5JiK4nW1oMk7Bv0sHJfaci20UB5tqKAXB2SHWNFegbbi1BMc0inlnccSr71dKiN9AU9-23ZxwZdBznX553Uimn6FGRPratCiOZ9MvagEErFr_4fsK6a5FPBJc72A\=\= msg=Disallow Illegal URL. cn1=2228919804 cn2=914117536 cs1=CSP_revisorelegale cs2=PPE1 cs3=vjuef3y52eMJhzENwEsogT1f6sQ0001 cs4=ALERT cs5=2020 act=not blocked

 

In the relaxation rules settings, I've created the rule:

^https://www\.esempio\.eee\.gov\.it/opencms.*$

I even added ^https://www\.esempio[.].*$

No change, same behavior.

Citrix ADC still logs events like the one above. Why? Thanks.
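As a sanity check, the relaxation pattern does appear to match the URL from the CEF event above (a quick Python sketch; the as_url_id token is shortened here, and this assumes the pattern is evaluated against the full URL):

```python
import re

# Relaxation rule from the profile
pattern = r"^https://www\.esempio\.eee\.gov\.it/opencms.*$"

# URL from the blocked APPFW_STARTURL event (CEF escapes removed,
# as_url_id token shortened)
url = ("https://www.esempio.eee.gov.it/opencms/opencms/"
       "Revisione-legale/ricercaRevisori/index.html?as_url_id=AAAA")

# The regex itself matches, which suggests the violation comes from
# something else (closure state, caching, request ordering, ...)
print(re.match(pattern, url) is not None)  # True
```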



General comments first:

1) This would be a good time to use LEARNING to see what else is being blocked, to help define your start URL rules. Or at least use a header-viewer utility in your browser to see the requests leading up to this one.

2) You also need to decide whether or not you are using URL Closure (it changes how start URLs behave). You have it on above, so I will assume you plan to use it.

 

So any request to the website must be permitted by the start URLs. URL Closure allows you to define the "main" entry points (aka start URLs) for the site; then, as long as the links the user navigates were presented by the website from a start URL, those links will be allowed via closure. So the trick is defining those initial entry-point patterns in such a way that they allow access to the starting points of the site without being so broad that everything is permitted.

 

The most common issues are:

1) The initial main URL path/structure not being defined, so initial requests are not covered.

2) Missing dependencies preventing access to CSS/images/media, so the page is still blocked because the initial start URLs don't account for some of the objects needed to load it.

3) Dynamic URLs constructed by the page client-side, which will not be covered by closure and will still be seen as violations.
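The dependency issue (2) is easy to reproduce offline: check a candidate start-URL list against the object URLs a page actually loads. A quick Python sketch (the CSS/image paths below are hypothetical):

```python
import re

# Candidate start-URL list: the main entry point only
start_urls = [r"^https://www\.esempio\.eee\.gov\.it/$"]

# Objects the landing page pulls in (hypothetical paths)
page_objects = [
    "https://www.esempio.eee.gov.it/css/site.css",
    "https://www.esempio.eee.gov.it/img/logo.png",
]

# Each object must match some start URL (or be covered by closure)
for obj in page_objects:
    allowed = any(re.match(p, obj) for p in start_urls)
    print(obj, "allowed" if allowed else "BLOCKED")  # both print BLOCKED
```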

 

So when starting with start URLs, get the base URL host/path defined, and then use syslog or learning to figure out what else you need to "seed" the initial start URLs. Then adjust from there. Learning can help you improve your initial start-URL analysis if it's not obvious what the dependencies are.

 

Also, your POST is going to fail unless you hit a start URL first. So, if you are testing this URL only, without hitting the site first, you could be creating an instant violation.

 

More specific comments about your regex and this current message:

Because you are using sessionless tracking: between test cases, have you cleared cookies/browser cache and started over?

And is this page actually the "first URL" you hit, or a later transaction?

Finally, are you sure your browser is set NOT to cache, and that you've cleared the existing cache (cached sessionless start-URL data would affect this too)? Browser caching is the bane of all AppFW configuration testing. :)

 

To really diagnose this, change the behavior:

1) Clear your learning database (if possible) for this profile.

2) I would start with a start URL (assuming closure is on) of just the following:

^https://www\.esempio\.eee\.gov\.it/$

3) Test the main page and just a few basic navigation links.

4) Then review learned URLs and update the deployed rules for these dependencies.

5) Rinse and repeat until you feel the learning has helped seed the initial main requirements.
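Steps 2-5 above amount to growing the relaxation list until the URLs you care about all pass. A small Python harness can make that loop concrete before touching the appliance (the dependency pattern below is hypothetical, just to show the shape of the iteration):

```python
import re

def allowed(url, patterns):
    """True if any start-URL pattern matches the full URL."""
    return any(re.match(p, url) for p in patterns)

# Step 2: start narrow -- just the site root
start_urls = [r"^https://www\.esempio\.eee\.gov\.it/$"]

# Step 4: after reviewing learned URLs, add a dependency pattern
# (hypothetical: allow .html pages under /opencms/opencms/)
start_urls.append(
    r"^https://www\.esempio\.eee\.gov\.it/opencms/opencms/[^?]*\.html$")

print(allowed("https://www.esempio.eee.gov.it/", start_urls))              # True
print(allowed("https://www.esempio.eee.gov.it/admin/secret", start_urls))  # False
```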

 

Then resume testing: start on a main page, link to whatever gives you this function, and see if it still triggers a violation. If so, review what learning says and update additional URLs. Remember, you can't test this link without hitting a start URL first (and, as a POST, a valid start URL has to be part of the flow).

 

If you're still having issues, consider enabling the AppFW trace (in the AppFW profile) and capturing with nstrace.

Post your NS firmware version in case the issue is version-specific... but I would start by simplifying the testing to see if it's a configuration issue or a test-procedure/order-of-operations issue.

Thank you so much, Rhonda, for your answer.

Actually, the final goal is to set up CSRF protection.

I would like to follow the approach without sessionization:

1) I set the START_URL check. The Action setting is:

[Screenshot: START_URL check action settings]

2) In Relaxation Rules (Start URL List) I put:

^https://www\.esempio\.eee\.gov\.it.*$ (which covers all the URLs of the site) and

the URL of any other site that contains a link to my site (www.google.com, for example).

 

Is this a correct approach for CSRF protection without sessionization?

 

Thank you so much.

You can't set up CSRF until your start URLs are configured properly... so the first thing with AppFW is always to get your start URLs working, and then move on to other features.

Also, decide whether you really want CSRF vs. Referer-header validation (under Start URLs), which still requires Start URLs + Closure working.

 

My recommendation during a phase 1 config: turn off CSRF and content checking, and possibly the Form Field Consistency check, UNTIL you get Start URLs/Closure working. Then add these features in, as they could all be impacted by misconfigured start-URL behavior.

 

CSRF is a sessionization-based feature AND it requires URL Closure. You can then do old-style sessionization-based URL Closure or sessionless. But regardless, don't tackle CSRF until start URLs are working, or you will have bonus failures/complications, as it depends on them.

 

For most scenarios, though, you get the same protection with Referer-header validation, and it's 1) easier to implement (still get start URLs/URL closure right first; notice a theme ;) and 2) less processing overhead.

1 hour ago, Cristina Marletta Livi said:

2) In Relaxation Rules (Start Url List) I put:

^https://www\.esempio\.eee\.gov\.it.*$ (which covers all the urls of the site) and

the url of any other site that contains a link to my site (www.google.com for example).

 

Here's the bad thing about this approach. If you are doing standard whitelisting (Start URLs with URL Closure OFF), then sometimes broad patterns are needed. But this leaves you open to any URL pattern/attack you don't explicitly deny.

 

Because URL Closure grants access to anything in a matching pattern, using such a broad pattern is discouraged: everything is allowed, nothing will be denied via closure, and you effectively defeat the purpose of closure, which is guaranteeing that users start on a defined entry point. If everything is valid, there is no "bad path" in.

 

There are websites that may require this broad approach because of their URL patterns and how they are constructed, but unless we are sure we are in that situation, we don't want to start that broadly with URL Closure. Plus, learning learns on violations, showing you "for this config, this is what is still being blocked; do you want to keep blocking it or allow it?"

You want a "line in the sand" where we know something that works, and then let learning show what else we need. With closure enabled, learning can help us see whether, given how the URLs are constructed, a broader definition is needed.

 

This pattern, when used with closure, is as bad/broad as the second example:

[1] ^https://www\.esempio\.eee\.gov\.it.*$

[2] ^https://www\.esempio\.eee\.gov\.it

 

[1] and [2] effectively say that any URL starting with this expression is valid. With closure, this is very bad, as basically every path/directory/parameter of the site is now a valid entry point unless you explicitly deny it.

Any part of your website flow that would have required the user to go to http://<server>/path1 before /path2 would now allow the user to go straight to /path2 without seeing /path1, as everything is valid. Your site will navigate... but you effectively lose a lot of security.

Usually with closure I only want enough of the site permitted for where users "start" their navigation. If the page presents the link, it will then be allowed; but if a user direct-browses and skips the entry point, it would be blocked.
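To see why [1] and [2] are equally broad, one can test both against a deep path (Python sketch; /path2/admin.html is a hypothetical URL):

```python
import re

broad_anchored = r"^https://www\.esempio\.eee\.gov\.it.*$"  # [1]
broad_prefix = r"^https://www\.esempio\.eee\.gov\.it"       # [2]
narrow = r"^https://www\.esempio\.eee\.gov\.it/$"           # entry point only

# A deep URL users should only reach via navigation (hypothetical)
deep = "https://www.esempio.eee.gov.it/path2/admin.html"

print(re.match(broad_anchored, deep) is not None)  # True  -- direct entry allowed
print(re.match(broad_prefix, deep) is not None)    # True  -- same effect
print(re.match(narrow, deep) is not None)          # False -- must come via closure
```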

 

So, when first configuring AppFW and URL closure (using learning), we start narrow:
^https://www\.esempio\.eee\.gov\.it/$

Then we navigate from the default page and try a few links. Review learning to help find things not covered by closure, like media and CSS directories. Update the list narrowly and repeat the test until initial navigation works.

Then think about how users may bookmark or direct-browse, see what else you need, and use learning to update. For complex apps, learning can help generate regexes that are hopefully between too broad and too narrow.

 

Websites like web portal pages have "broad" access points and need a lot of URL entry points to function with closure; but in-site navigation is then conditionally restricted if closure is violated, making the protection more nuanced than a straight whitelist/blacklist. Whereas a line-of-business app may only allow users to start on "page1" to authenticate, with everything deriving from there, so it's much easier to implement.

 

Dynamic URLs and spontaneously generated query patterns can add complexity, but learning should help you see which patterns you need so you don't start with a "/.*" and allow everything. Otherwise, you may have to decide whether closure is a good idea at all, because if closure says a URL is good, some security checks can be exempted, making you less protected unless you adjust the profile settings to do otherwise.
