Posts posted by H Desk

  1. Hi,

    I tried the FastReconnect registry settings mentioned in a couple of other threads, and we have not seen servers crash or sessions hang since. The three settings are:

     

    HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\Reconnect
    FastReconnect (REG_DWORD)
    Value: 0
    DisableGPCalculation (REG_DWORD)
    Value: 1

     

    HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\Ica\GroupPolicy
    EnforceUserPolicyEvaluationSuccess (REG_DWORD)
    Value: 0

     

    There seems to be some incompatibility between the Fast Reconnect feature and RDP on Windows Server 2019.
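
    For anyone who wants to script it, this is roughly how the values can be set with PowerShell (just a sketch, run as administrator; the keys may need creating first):

    # Create the keys if they don't already exist, then set the three values
    $reconnect = 'HKLM:\SOFTWARE\Citrix\Reconnect'
    $grouppol  = 'HKLM:\SOFTWARE\Citrix\Ica\GroupPolicy'
    foreach ($key in $reconnect, $grouppol) {
        if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    }
    New-ItemProperty -Path $reconnect -Name 'FastReconnect' -PropertyType DWord -Value 0 -Force | Out-Null
    New-ItemProperty -Path $reconnect -Name 'DisableGPCalculation' -PropertyType DWord -Value 1 -Force | Out-Null
    New-ItemProperty -Path $grouppol -Name 'EnforceUserPolicyEvaluationSuccess' -PropertyType DWord -Value 0 -Force | Out-Null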

     

  2. This problem, and quite a few others, appears to have been solved by adding the registry keys below, which I found in a Reddit thread:

     

    Path: HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\Reconnect
    Name: FastReconnect
    Type: REG_DWORD
    Value: 0

    Name: DisableGPCalculation
    Type: REG_DWORD
    Value: 1

     

    Path: HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\Ica\GroupPolicy
    Name: EnforceUserPolicyEvaluationSuccess
    Type: REG_DWORD
    Value: 0
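
    A quick way to confirm the values are in place afterwards (again just a sketch):

    # Read the values back to check they were applied
    Get-ItemProperty -Path 'HKLM:\SOFTWARE\Citrix\Reconnect' -Name FastReconnect, DisableGPCalculation
    Get-ItemProperty -Path 'HKLM:\SOFTWARE\Citrix\Ica\GroupPolicy' -Name EnforceUserPolicyEvaluationSuccess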

  3. Hi,

     

    We recently upgraded from LTSR 1912 CU5 to LTSR 2203 CU2 and have been experiencing problems with VDA servers randomly deregistering from one delivery controller and registering with another. The process only takes a second and doesn't seem to cause any issues, but it generates an alert in Director, which isn't pretty to look at. Understandably, people question whether there is a bigger problem when they see repeated errors like these.

     

    The delivery controllers, StoreFront servers and PVS servers are Windows Server 2016 with LTSR 2203 CU2. The machine catalogue and delivery group have now been upgraded to 2106. The VDA servers are Windows Server 2019 with LTSR 2203 CU2.

     

    After upgrading the DDC, StoreFront and PVS servers, some of the VDA servers were still running LTSR 1912 and did not experience the problem. Now that they are all on 2203, they all get the error at some point.

     

    The problem starts when the servers are under load, usually from 9am onwards. It doesn't occur in a delivery group where the VDAs are still on LTSR 1912 CU5, though those servers are only moderately loaded.

     

    On the VDA server we see two events:

    Event 1048: The Citrix Desktop Service is re-registering with the DDC: 'NotificationManager:NotificationServiceThread: WCF failure or rejection by broker'
    Event 1010: The Citrix Desktop Service successfully obtained the following list of 2 delivery controller(s) with which to register
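
    For anyone who wants to keep an eye on these, this is roughly the query I've been using against a VDA (a sketch; the log and provider names are what I'd expect and may need adjusting, and $vda is the server name):

    # Pull recent 1048 (re-registering) and 1010 (controller list obtained) events
    Get-WinEvent -ComputerName $vda -FilterHashtable @{
        LogName      = 'Application'
        ProviderName = 'Citrix Desktop Service'
        Id           = 1048, 1010
    } -MaxEvents 50 | Select-Object TimeCreated, Id, Message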

     

    There are no unusual events on either delivery controller and there are no network connectivity issues according to PRTG.

     

    I have tried uninstalling the VDA and reinstalling it.

     

    I found another thread from 2017 with the same problem (for an older version, of course); the solution in that case was a hotfix that replaced rpm.dll.

     

    Thanks,

  4. On 11/9/2022 at 11:41 AM, H Desk said:

    Thanks - installing the latest licence from the portal fixed this issue. However, I hit a new problem after upgrading the first node: I logged on OK with MFA, but when I tried to start a Citrix desktop it wouldn't launch, and HA failed back to the un-upgraded node. Usually I fail over to the upgraded node, test that it works, and then upgrade the other node. Reverting to the snapshot got it back to normal. I will try upgrading to 13.1.33.52 this time, but I'm worried that if it does the same thing again, it will be missing the patches for the recent CVEs.

     

    It upgraded fine with 13.1.33.52 and the new licence. I see that 13.1.33.49 is no longer available for download, so it must have been a bug with that version.

  5. Thanks - installing the latest licence from the portal fixed this issue. However, I hit a new problem after upgrading the first node: I logged on OK with MFA, but when I tried to start a Citrix desktop it wouldn't launch, and HA failed back to the un-upgraded node. Usually I fail over to the upgraded node, test that it works, and then upgrade the other node. Reverting to the snapshot got it back to normal. I will try upgrading to 13.1.33.52 this time, but I'm worried that if it does the same thing again, it will be missing the patches for the recent CVEs.

  6. Since downgrading to CU3, we have had no problems for almost 2 weeks. I guess we're stuck on that release for the foreseeable future. 

     

    It last happened on a CU5 server the day before we downgraded it; I wasn't able to log on to it, but I could run PowerShell remotely and kill all the logonui.exe processes. Unfortunately, that didn't bring the server back.
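
    For anyone curious, the remote kill was along these lines (a rough sketch; $server is the broken VDA):

    # Stop every stuck logonui.exe process on the remote server
    Invoke-Command -ComputerName $server -ScriptBlock {
        Get-Process -Name logonui -ErrorAction SilentlyContinue | Stop-Process -Force
    }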

     

  7. Hi,

    I've been experiencing the same problem - we have Windows 2019 servers with the latest FSLogix and CU5. Once or twice a week a server can't be reached via RDP or the console, but remote tools like services.msc and eventvwr can still connect OK. The IP Helper and AppX services get stuck in a starting or stopping state. New users trying to connect to the broken server get stuck logging on, and existing users can't reconnect or log off their sessions. We also have some Windows 2012 R2 servers with CU5 and FSLogix which never get the problem.
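
    When it happens, a quick remote check along these lines shows them stuck (a sketch; I'm assuming the standard service names iphlpsvc and AppXSvc, and $server is the broken VDA):

    # Check the state of the IP Helper and AppX Deployment services remotely
    Get-Service -ComputerName $server -Name iphlpsvc, AppXSvc |
        Select-Object -Property MachineName, Name, Status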

     

    I'll try downgrading the Windows 2019 servers to CU3.

    Thanks.

  8. Hi,

     

    I am having the same problem, which started earlier in the year. It happened on 1912 CU2, and it is still happening now that we have upgraded to CU5. I've installed the latest version of FSLogix, which didn't help either. We also have a few Windows 2012 R2 servers with CU5 and the same version of FSLogix, but they're not affected by this problem. We have 40 Windows 2019 servers and it happens once or twice a week: new users can't start a session on the affected server. We can put the server into maintenance mode, but the existing users can't log off and their sessions are stuck. We can't RDP or console into the server, though we can ping it and remotely open services.msc and the event logs. When the problem happens, the IP Helper and AppX services get stuck in a starting or stopping state.

     

    Thanks,

  9. On 3/17/2022 at 1:25 AM, Markus Fumasoli1709152661 said:

    @Gary Lloyd: Was you able to fix the issue? We have the same issue but Citrix Support is not really a help

     

     

    Hi,

    Citrix and Microsoft support blame each other for any issue, and their tactic is to stall by requesting the same logs and information repeatedly until you either give up or solve it yourself.

     

    I didn't try the URCP fix in the end; however, we've since upgraded our hosts and storage and have been able to allocate more memory to the Citrix servers. The crashes caused by shadowing always happened during peak usage (usually the afternoon), when the load on the farm was reaching 80-90%. Now it peaks at 60-70%, and I have been cautiously testing shadowing without any crashes so far. I'm going to re-enable shadowing for the rest of the team to see how it goes - I'll let you know what happens.

     

  10. Hi,

    I am having the same problem with Windows 2019 and LTSR CU1 - the problem was happening about once a month, but we also use FSLogix, and after increasing the number of user profile folders redirected to the C drive, the number of crashes has increased to 1-2 per day. We also have a Windows 2012 desktop and the crashes do not occur at all on it. We use Sophos anti-virus, but I don't think it is causing the problem, since there are no crashes on the 2012 desktop or the 2016 desktop we used before switching to the 2019 desktop.

     

    After reading this thread, I'm going to upgrade to CU2 and also install the latest version of FSLogix (2.9.7654.46150), so hopefully these will fix the problem.

    Thanks,

    Gary.
