
" Failed to connect to Elasticsearch server" after upgrading from v29 -> v32

  • April 17, 2024
  • 5 replies
  • 874 views


Hi,

After upgrading my on-premise CR server, the home dashboard is showing a message that it can't connect to the Elasticsearch server. Everything else is working fine; only the dashboard is affected.

 

The logs at \ProgramData\AA\Logs\ report that "999/1000" shards are open:

“ 17T14:22:49.070+00:00","audit_format_version":4,"audit_request_remote_address":"127.0.0.1","audit_request_privilege":"indices:admin/auto_create","audit_node_host_address":"127.0.0.1","audit_request_effective_user":"es_client","audit_trace_indices":["metric_logs_20240401"],"audit_node_host_name":"localhost"} due to
org.opensearch.common.ValidationException: Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [999]/[1000] maximum shards open;” 
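The shard count the log reports can be confirmed directly against the embedded OpenSearch node. A minimal Python sketch, assuming the local https://localhost:47599 REST endpoint and the es_client user (both mentioned in the answer below); the password placeholder is yours to fill in:

import requests

# active_shards is a close proxy for the "[999]/[1000]" figure in the
# error (the limit is cluster.max_shards_per_node * number of nodes).
health = requests.get(
    "https://localhost:47599/_cluster/health",
    auth=("es_client", "<elasticsearch-password>"),  # placeholder password
    verify=False,  # the embedded node uses a self-signed certificate
).json()
print(health["active_shards"], "shards open")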

 

Any suggestions are appreciated.

Best answer by rbkadiyam (see the first reply below).

 

5 replies

rbkadiyam
Premier Pathfinder | Tier 7
  • Answer
  • April 18, 2024

@aeugenio Check the Elasticsearch indexes; it seems the cluster has reached its maximum shard count.

Log in at https://localhost:47599 with the es_client user and the Elasticsearch password.

Check the index count. If there are more than 100 indexes, free up space by deleting some of the old ones, then restart the Elasticsearch service.
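For anyone who prefers scripting this over the browser, here is a minimal Python sketch of the same steps: list the indices, then delete old metric_logs_* dailies to free shards. The endpoint, user, and index naming pattern are taken from this thread; the cutoff date is a placeholder to adapt:

import requests

BASE = "https://localhost:47599"
AUTH = ("es_client", "<elasticsearch-password>")  # placeholder password

# One index name per line, sorted, so the oldest dailies are easy to spot.
resp = requests.get(f"{BASE}/_cat/indices?h=index&s=index",
                    auth=AUTH, verify=False)
indices = resp.text.split()
print(len(indices), "indices on the node")

# Delete metric_logs_* indices older than a chosen cutoff (placeholder date).
old = [i for i in indices
       if i.startswith("metric_logs_") and i < "metric_logs_20240101"]
for index in old:
    print("deleting", index)
    requests.delete(f"{BASE}/{index}", auth=AUTH, verify=False)

After deleting, restart the Elasticsearch service as described above so the dashboard reconnects cleanly.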

 


Gabe 7472
  • Cadet | Tier 2
  • May 6, 2024

@rbkadiyam I encountered a similar issue while upgrading from v29 to v32 as well. I followed Article #000007733 to check the index counts and any duplicates, but stopped short of deleting any indices. Would that be the next recommended step, based on the message below?

 

Error message in the WebCR.log file:

{"error":{"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [999]/[1000] maximum shards open;"}],"type":"validation_exception","reason":"Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [999]/[1000] maximum shards open;"},"status":400}


jon.stueveapeople
Automation Anywhere Team

Please contact support at support@automationanywhere.com if you are still experiencing an error.



Hello, @Gabe 7472.

Did you find a solution for this issue?



Hello, @aeugenio.

Did you find a solution?