Elasticsearch API notes
Revision as of 18:56, 10 November 2022 (Thu)
Health
/_cat/health
/_cluster/health
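A minimal sketch of calling these two health endpoints with curl, assuming a node reachable on 127.0.0.1:9200 (adjust the host for your cluster):

```shell
# Tabular one-line cluster health (status, node count, shard counts).
curl -s "127.0.0.1:9200/_cat/health?v"

# JSON variant; wait_for_status blocks until the cluster reaches the
# given status or the timeout expires (useful in restart scripts).
curl -s "127.0.0.1:9200/_cluster/health?wait_for_status=yellow&timeout=30s&pretty"
```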
Indices health
View index status filtered by condition
/_cat/indices?help
/_cat/indices?health=red&v&s=store.size:desc,index
/_cat/indices?health=yellow&v&s=store.size:desc,index
/_cat/indices?health=green&v&s=store.size:desc,index
Nodes
/_cat/nodes?v
View each ES node's disk usage, shard count, etc.
/_cat/allocation?v
Get master node
/_cat/master?v
Locate shard state and diagnose why a shard failed
/_cat/shards/index_name-*?v&s=state,index&h=index,shard,prirep,state,docs,store,ip,node,unassigned.reason
/_cluster/allocation/explain
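A hedged example of querying the allocation-explain endpoint with curl; the host and index name are assumptions (the index pattern mirrors the `index_name-*` placeholder used above):

```shell
# Explain the allocation of one specific shard.
curl -s -H 'content-type: application/json' \
  "127.0.0.1:9200/_cluster/allocation/explain?pretty" \
  -d '{"index": "index_name-2022.11.10", "shard": 0, "primary": true}'

# With no body, ES explains the first unassigned shard it finds.
curl -s "127.0.0.1:9200/_cluster/allocation/explain?pretty"
```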
Shards
/_cat/shards
/_cat/shards?index=index_name
Thread pool
https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-thread-pool.html
curl -s "127.1:9200/_cluster/settings?pretty&include_defaults=true" | grep processors
Get maximum number of threads info:
curl "127.1:9200/_cat/thread_pool?v&h=ip,node_name,id,name,max,size,queue_size,queue,active,rejected&pretty"
Templates
/_cat/templates?v
/_template
/_template/{template_name}
Use a template to change the replica settings of all indices
 {
   "order": 2147483647,
   "index_patterns": ["*"],
   "settings": {
     "index": { "number_of_replicas": "0" }
   }
 }
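The catch-all template above can be applied with a single PUT; the template name `replicas-zero` and the host are assumptions. Note this is the legacy `_template` API (newer clusters use `_index_template`), and it only affects indices created afterwards:

```shell
# Register a lowest-priority-override template that sets 0 replicas
# on all newly created indices.
curl -s -H 'content-type: application/json' -X PUT \
  "127.0.0.1:9200/_template/replicas-zero" \
  -d '{"order": 2147483647, "index_patterns": ["*"], "settings": {"index": {"number_of_replicas": "0"}}}'
```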
ILM (index lifecycle policy)
GET /_ilm/policy
GET /_ilm/policy/{ilm_name}
PUT /_ilm/policy/ilm-30d-delete
 {
   "policy": {
     "phases": {
       "delete": {
         "min_age": "30d",
         "actions": {
           "delete": { "delete_searchable_snapshot": true }
         }
       }
     }
   }
 }
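The same 30-day delete policy, issued via curl (the local host:port is an assumption):

```shell
# Create/overwrite an ILM policy that deletes indices 30 days after
# they enter the delete phase.
curl -s -H 'content-type: application/json' -X PUT \
  "127.0.0.1:9200/_ilm/policy/ilm-30d-delete" \
  -d '{"policy":{"phases":{"delete":{"min_age":"30d","actions":{"delete":{"delete_searchable_snapshot":true}}}}}}'
```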
Cluster settings
/_cluster/settings?include_defaults=true&pretty
Wildcard expressions or all indices are not allowed
Allow deleting indices by wildcard pattern:
PUT /_cluster/settings
 {
   "persistent": {
     "action": { "destructive_requires_name": "false" }
   }
 }
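A sketch of the full workflow with curl: relax the safety switch, delete by wildcard, then restore the default. Host and the `index_name-*` pattern are placeholders; setting a cluster setting to `null` resets it to its default:

```shell
# 1. Temporarily allow wildcard/_all destructive actions.
curl -s -H 'content-type: application/json' -X PUT "127.0.0.1:9200/_cluster/settings" \
  -d '{"persistent": {"action": {"destructive_requires_name": "false"}}}'

# 2. Delete the matching indices.
curl -s -X DELETE "127.0.0.1:9200/index_name-*"

# 3. Reset the setting to its default (null clears the override).
curl -s -H 'content-type: application/json' -X PUT "127.0.0.1:9200/_cluster/settings" \
  -d '{"persistent": {"action": {"destructive_requires_name": null}}}'
```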
Primaries recovery settings
Transient (raise recovery concurrency; null clears a previous override):
 { "transient": { "cluster": { "routing": { "allocation": {
     "node_initial_primaries_recoveries": 10,
     "node_concurrent_incoming_recoveries": null,
     "node_concurrent_outgoing_recoveries": null,
     "node_concurrent_recoveries": 20
 } } } } }
Transient (reset everything back to defaults):
 { "transient": { "cluster": { "routing": { "allocation": {
     "node_initial_primaries_recoveries": null,
     "node_concurrent_incoming_recoveries": null,
     "node_concurrent_recoveries": null
 } } } } }
Persistent:
 { "persistent": { "cluster": { "routing": { "allocation": {
     "node_initial_primaries_recoveries": 30,
     "node_concurrent_incoming_recoveries": null,
     "node_concurrent_recoveries": 10
 } } } } }
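Any of these bodies is applied with a PUT to the cluster settings API; a minimal sketch assuming a local node (the flat-key form below is equivalent to the nested one):

```shell
# Raise recovery concurrency for the duration of the recovery.
# Transient settings are lost on a full cluster restart.
curl -s -H 'content-type: application/json' -X PUT \
  "127.0.0.1:9200/_cluster/settings?pretty" \
  -d '{
    "transient": {
      "cluster.routing.allocation.node_initial_primaries_recoveries": 10,
      "cluster.routing.allocation.node_concurrent_recoveries": 20
    }
  }'
```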
Newer-version shard allocation setting: cluster.routing.allocation.node_concurrent_recoveries
# es 6.8.2
PUT /_cluster/settings
 { "persistent": { "cluster": { "routing": { "allocation": {
     "node_initial_primaries_recoveries": 8
 } } } } }
Index settings
Modify the number of replicas in bulk
{ "index": { "number_of_replicas": 1 } }
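To push this settings body to every existing index at once, a PUT against `_all/_settings` works; the host is an assumption:

```shell
# Set number_of_replicas=1 on all existing indices in one call
# (_all targets every index; a pattern like "logs-*" also works).
curl -s -H 'content-type: application/json' -X PUT \
  "127.0.0.1:9200/_all/_settings" \
  -d '{"index": {"number_of_replicas": 1}}'
```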
get indices and order by number of replicas
/_cat/indices?health=green&v&s=rep:asc,store.size:desc,index
Other
For a freshly restarted ES cluster with a large number of indices
(10,000-20,000 primary shards)
Speeding up cluster recovery
Watch the ES node resource monitoring graphs for node CPU pressure and CPU I/O wait.
Via the update cluster settings API, dynamically raise node_initial_primaries_recoveries (defaults to 4) and node_concurrent_recoveries (a shortcut that sets both cluster.routing.allocation.node_concurrent_incoming_recoveries and cluster.routing.allocation.node_concurrent_outgoing_recoveries; defaults to 2) as the hardware allows.
Use the cluster settings API with include_defaults=true to look up the current effective values.
To shorten the time from red to yellow: increase the number of index replicas and raise node_initial_primaries_recoveries.
To shorten the time from yellow to green: raise node_concurrent_recoveries.
Query the /_cluster/allocation/explain endpoint to find out what is blocking the cluster from reaching green (or yellow).
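The checks above can be sketched as two curl calls, assuming a local node: first confirm the currently effective recovery settings (defaults included), then watch recovery progress:

```shell
# Show the effective recovery-related settings; flat_settings prints
# dotted keys so grep works cleanly.
curl -s "127.0.0.1:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty" \
  | grep -E 'node_initial_primaries_recoveries|node_concurrent'

# List only the shard recoveries that are currently in flight.
curl -s "127.0.0.1:9200/_cat/recovery?active_only=true&v"
```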
Evicted by k8s during cluster recovery due to node memory pressure (node was low on resources: memory.)
Shrink the JVM heap settings and avoid overcommitting memory (keep requests equal to limits, or raise requests).
Error
Shard count reaches the cluster maximum
2022-11-10T10:26:03.643184618Z org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [3] shards, but this cluster currently has [1999]/[2000] maximum normal shards open;
Fix:
Adjust the index ILM policies, or raise the cluster's max_shards_per_node setting.
Transient setting (lost on full cluster restart):
curl -H "content-type: application/json" -X PUT "127.0.0.1:9200/_cluster/settings" -d '{"transient": {"cluster.max_shards_per_node": "5000"}}'
Persistent setting:
curl -H "content-type: application/json" -X PUT "127.0.0.1:9200/_cluster/settings" -d '{"persistent": {"cluster.max_shards_per_node": "5000"}}'