
Does anyone have any idea about the scenario below?

I have an environment set up with two Windows servers (with CR installed), fronted by a VIP / load-balancer URL.

When I restart the primary server, CR goes down, but when I restart the secondary server, CR stays up.

Since I configured failover between these two servers, I expected CR to stay up while either server is in maintenance. I can also see 100% CPU utilization on one server and <20% utilization on the second server.

@rbkadiyam 

For an HA setup, a minimum of three nodes is necessary. Kindly check the following KB articles:

https://apeople.automationanywhere.com/s/article/What-is-the-limitation-of-2-node-clustering-for-Automation-360-16-and-v11-3-5

https://docs.automationanywhere.com/bundle/enterprise-v2019/page/enterprise-cloud/topics/deployment-planning/on-prem-install/cloud-ha-deploy.html

Also, verify whether any ports are blocked between the CR nodes.

https://docs.automationanywhere.com/bundle/enterprise-v2019/page/enterprise-cloud/topics/deployment-planning/on-prem-install/cloud-firewall-rules.html
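The blocked-port check above can be scripted. A minimal sketch, assuming `serv1`/`serv2` as the node names (as used later in this thread) and a placeholder port list; take the real ports from the firewall-rules page above (47500+ is the Apache Ignite discovery default, and Ignite is what CR clustering uses, per the cluster.properties file discussed later):

```shell
#!/usr/bin/env bash
# Sketch: probe TCP ports between CR nodes to spot a firewall block.
# Host name and port list are placeholders -- substitute the real port
# list from the firewall-rules documentation for your A360 version.
check_port() {
  local host=$1 port=$2
  # /dev/tcp is a bash built-in pseudo-device; timeout caps slow DNS/filters.
  if timeout 2 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable: ${host}:${port}"
  else
    echo "unreachable: ${host}:${port}"
  fi
}

# Run this from each node against the other one:
for port in 443 47500 47501; do
  check_port "serv1" "$port"
done
```

Run it from CR2 targeting serv1 and vice versa; any "unreachable" line on a documented cluster port points at a firewall rule.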

 

Let us know your feedback.

Regards.

 


@Raul Jaimes : We are good with respect to the KB article items; everything works fine with the direct server URLs as well as the load-balanced URL. My only concern is that when the primary server restarts, the Control Room URL is not reachable until all AA services are up, whereas when the secondary server restarts, the Control Room stays up. It seems there is some configuration issue between these two servers that I might be missing.


@rbkadiyam

Do you shut down the A360 CR services, or shut down the servers? I assume you check the URL of every Control Room, plus the load-balancer URL, whenever you shut down a server. The scenario is like a DR, but a DR deployment requires a multi-node deployment and database replication.

https://docs.automationanywhere.com/bundle/enterprise-v2019/page/enterprise-cloud/topics/deployment-planning/on-prem-install/cloud-ha-dr-deploy.html

https://docs.automationanywhere.com/bundle/enterprise-v2019/page/enterprise-cloud/topics/deployment-planning/on-prem-install/cloud-dr-failover-procedure.html

Kindly specify which scenario you are facing:

1. Shut down CR1: does the load balancer show A360 CR2?

2. Restart CR1 and restart CR2: does the load balancer show CR1?

3. Shut down CR1 and restart CR2: does the load balancer show CR2?
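The scenarios above can be checked mechanically with a small probe loop. A sketch, with placeholder URLs (substitute your CR1, CR2, and load-balancer addresses):

```shell
#!/usr/bin/env bash
# Sketch: poll each Control Room URL and the load-balancer URL while you
# restart or shut down nodes, logging which endpoint answers and with
# what HTTP status. All three URLs below are placeholders.
probe() {
  local url=$1 code
  # -k tolerates lab self-signed certs; drop it in production.
  # curl still prints %{http_code} (000) when the connection fails.
  code=$(curl -sk -o /dev/null -w '%{http_code}' --max-time 5 "$url")
  echo "${url} -> HTTP ${code}"
}

for u in "https://cr1.example.com" "https://cr2.example.com" "https://lb.example.com"; do
  probe "$u"
done
```

Running this during each scenario shows immediately whether the load balancer keeps answering while a node is down.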

Let us know more about your architecture.

Regards.


@Raul Jaimes  here is my scenario:

  • CR1 AA services stopped
  • CR2 host URL working fine
  • Load balancer URL not working
  • CR1 AA services started
  • Load balancer URL working as expected
  • CR2 AA services stopped
  • Load balancer URL working as expected
  • CR2 AA services started
  • Load balancer URL working as expected


@rbkadiyam 

Thanks for your reply. A load-balancer issue? Some load balancers run health-check monitoring against every pool of ports. The pool should contain every member of the service, for example server1:80 and server2:80 (i.e., a layer-7 LB).
If any monitor fails, the pool should still be able to respond from an available member.

 

https://docs.automationanywhere.com/bundle/enterprise-v2019/page/enterprise-cloud/topics/deployment-planning/on-prem-install/cloud-load-balancer-settings-deploy.html

So, as a first step when a monitored node is down, verify that the monitoring-group configuration inside the LB is correct.
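As an illustration of what such a monitored pool can look like, here is an HAProxy-style fragment (HAProxy purely as an example; the exact directives depend on your load balancer, and serv1/serv2 are placeholder host names):

```
# Illustrative layer-7 pool with per-member health checks.
backend controlroom
    balance roundrobin
    # Health check: mark a member down when its HTTP check fails.
    option httpchk GET /
    # Sticky sessions, which Control Room behind an LB generally needs.
    cookie SERVERID insert indirect nocache
    server cr1 serv1:443 ssl verify none check cookie cr1
    server cr2 serv2:443 ssl verify none check cookie cr2
```

With both members in one checked pool, a failed health check removes only that member and the VIP keeps answering from the survivor.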

Also check whether you are hitting any of the following issues:

https://apeople.automationanywhere.com/s/article/SSL-offloading-for-A2019-18-and-later-version

https://apeople.automationanywhere.com/s/article/Load-balancing-question-To-identify-the-active-server

https://apeople.automationanywhere.com/s/article/A360-Unable-to-access-Control-Room-URL-over-https-through-Load-Balancer

 

Let me know your feedback.

 

Regards.

 


@Raul Jaimes  Fixed the issue!

On the 2nd server, the cluster.properties file had:

ignite.discovery.static.ips=serv1,serv2
ignite.local.static.ip=serv2

which was causing the issue. Replacing it with:

ignite.discovery.static.ips=serv2,serv1
ignite.local.static.ip=serv2

resolved the issue.
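For anyone hitting the same symptom: ignite.discovery.static.ips is the static list of Apache Ignite discovery addresses each node tries when joining the cluster. A sketch for sanity-checking that file on each node (the config path in the usage comment and the 47500 discovery port are assumptions; 47500 is Ignite's default, so verify yours):

```shell
#!/usr/bin/env bash
# Sketch: read ignite.discovery.static.ips from cluster.properties and
# check the Ignite discovery port on every listed peer.
list_discovery_ips() {
  grep '^ignite.discovery.static.ips=' "$1" | cut -d= -f2- | tr ',' '\n'
}

check_peers() {
  local conf=$1 host
  while read -r host; do
    [ -z "$host" ] && continue
    # 47500 is the Ignite discovery default -- adjust if yours differs.
    if timeout 2 bash -c "</dev/tcp/${host}/47500" 2>/dev/null; then
      echo "discovery reachable: ${host}"
    else
      echo "discovery NOT reachable: ${host}"
    fi
  done < <(list_discovery_ips "$conf")
}

# Usage (path is hypothetical -- point it at your CR config directory):
#   check_peers "/path/to/AutomationAnywhere/config/cluster.properties"
```

Running this on both nodes shows whether each one can reach every discovery address it is configured with.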



Great! 

If you have an additional reference, please post it here. :)
