
Elasticsearch Cluster Installation and Deployment


I. Installing Elasticsearch
1. Install the JDK (version 1.8; installation steps omitted here)
2. Download the Elasticsearch 7.9 archive from the official website, upload it to the Linux host, and extract it:
tar -zxvf elasticsearch-7.9.0-linux-x86_64.tar.gz
Create a symbolic link:
ln -s elasticsearch-7.9.0 elasticsearch
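Elasticsearch refuses to run as root, so hand the install tree to a regular user first (the hadoop user is an assumption, taken from the data paths used below):
chown -R hadoop:hadoop elasticsearch-7.9.0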
3. Configure elasticsearch.yml
Configuration for node-1:

cluster.name: estest
node.name: node-1
node.master: true
node.data: true
path.data: /home/hadoop/data/espath/data
path.logs: /home/hadoop/data/espath/log
network.host: 192.168.59.132
http.port: 9201

transport.tcp.port: 9301
discovery.seed_hosts: ["192.168.59.133:9302","192.168.59.134:9303","192.168.59.132:9301"]
#discovery.zen.ping.multicast.enabled: true
#discovery.zen.ping.unicast.hosts: ["192.168.59.132:9300","192.168.59.133:9300","192.168.59.134:9300"]
discovery.zen.minimum_master_nodes: 2
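# (Note: discovery.zen.minimum_master_nodes is a legacy 6.x setting; Elasticsearch 7.x ignores it and sizes the master quorum automatically.)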
cluster.initial_master_nodes: ["node-1","node-2"]
http.cors.enabled: true
http.cors.allow-origin: "*"

Configuration for node-2:

cluster.name: estest
node.name: node-2
node.master: true
node.data: true
path.data: /home/hadoop/data/espath/data
path.logs: /home/hadoop/data/espath/log
network.host: 192.168.59.133
http.port: 9202
transport.tcp.port: 9302
discovery.seed_hosts: ["192.168.59.133:9302","192.168.59.134:9303","192.168.59.132:9301"]
#discovery.zen.ping.multicast.enabled: true
discovery.zen.minimum_master_nodes: 2
cluster.initial_master_nodes: ["node-1","node-2"]
http.cors.enabled: true
http.cors.allow-origin: "*"
#discovery.zen.ping.unicast.hosts: ["192.168.59.132:9300","192.168.59.133:9300","192.168.59.134:9300"]

Configuration for node-3:

cluster.name: estest
node.name: node-3
node.master: false
node.data: true
path.data: /home/hadoop/data/espath/data
path.logs: /home/hadoop/data/espath/log
network.host: 192.168.59.134
http.port: 9203
transport.tcp.port: 9303
discovery.seed_hosts: ["192.168.59.133:9302","192.168.59.134:9303","192.168.59.132:9301"]
#discovery.zen.ping.multicast.enabled: true
discovery.zen.minimum_master_nodes: 2
cluster.initial_master_nodes: ["node-1","node-2"]
# Enable cross-origin access; "*" allows any origin. These two settings let the elasticsearch-head UI connect.
http.cors.enabled: true
http.cors.allow-origin: "*"
#discovery.zen.ping.unicast.hosts: ["192.168.59.132:9300","192.168.59.133:9300","192.168.59.134:9300"]
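
Start each node as the non-root user, then verify that the cluster formed. A quick check against node-1's HTTP port from the configuration above (these curl calls are standard Elasticsearch APIs, not from the original post):
bin/elasticsearch -d
curl "http://192.168.59.132:9201/_cat/nodes?v"
curl "http://192.168.59.132:9201/_cluster/health?pretty"
All three nodes should appear in the node list, and the cluster health should be green.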

4. Problems encountered during cluster startup and their fixes
1) Startup fails with errors that the max open-file count and the max virtual memory are too low (screenshot not recoverable).

Fixes:
a. Cause: the user's limit on open files is too low, so Elasticsearch cannot create its local files. Solution: switch to root and edit the limits.conf configuration file:
    vi /etc/security/limits.conf
Then add the following lines (note: do not drop the leading *):
    * soft nofile 65536
    * hard nofile 131072
    * soft nproc 2048
    * hard nproc 4096
The new limits take effect only after you save, exit, and log in again.
ulimit -n shows the open-file limit (ulimit -u shows the process limit).
b. Cause: the maximum virtual memory is too low. Solution: switch to root and edit sysctl.conf:
    vi /etc/sysctl.conf
Add the line:
    vm.max_map_count=655360
Finally, run sysctl -p to apply it.
Then restart Elasticsearch; it should now start successfully.
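To confirm the kernel setting took effect (a quick check, not in the original post):
    sysctl vm.max_map_count
The output should read vm.max_map_count = 655360.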
2) failed to obtain node locks, tried [[/var/lib/elasticsearch]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes]
A stale Elasticsearch process is still holding the node lock. Find its PID, kill it, and restart:
    ps -ef | grep elastic
    kill -9 <pid>

II. Installing the head plugin
1. Install Node.js
yum install nodejs npm
Check the versions:
node -v
npm -v
2. Clone the head plugin from GitHub
git clone https://github.com/mobz/elasticsearch-head.git
3. Enter the elasticsearch-head directory and install the dependencies
cd elasticsearch-head/
sudo npm install -g grunt --registry=https://registry.npm.taobao.org
npm install grunt --save
npm install
4. Configuration
1) grunt_fileSets.js (screenshot of the edit; not recoverable)
2) _site/app.js (screenshot of the edit; not recoverable)
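
Since the screenshots are lost, here is a sketch of the edits this step usually involves (the hostname option, the Gruntfile.js location, and the 192.168.59.132:9201 address are assumptions based on the cluster above, not from the original post). In Gruntfile.js (most guides edit it rather than grunt_fileSets.js), let the head server listen on all interfaces:

connect: {
    server: {
        options: {
            hostname: '*',
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}

And in _site/app.js, point the default connection at one of the nodes instead of http://localhost:9200:

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.59.132:9201";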

5. Start
From the elasticsearch-head directory, run:
node_modules/grunt/bin/grunt server
Open in a browser: http://192.168.59.132:9100
The UI should show the three-node cluster (screenshots not recoverable).

III. Installing Kibana
1. Download the Kibana package that matches your Elasticsearch version from the official website and extract it:
tar -zxvf kibana-7.9.0-linux-x86_64.tar.gz
2. Configure kibana.yml

server.port: 5601
server.host: "192.168.59.132"
server.name: "kibana"
elasticsearch.hosts: ["http://192.168.59.132:9201"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"
logging.dest: /home/hadoop/data/kibana.log
#elasticsearch.username: "elastic"
#elasticsearch.password: "12345678"
server.maxPayloadBytes: 104857600
# kibana.index (set above) names the Elasticsearch index where Kibana stores its own data

3. Start
bin/kibana
Open in a browser: http://192.168.59.132:5601
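If the page does not load, Kibana's status endpoint is a quick check (using the server.host and server.port configured above):
curl http://192.168.59.132:5601/api/status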
4. A problem encountered at startup

log   [02:03:52.857] [error][plugins][reporting][validations] The Reporting plugin encountered issues launching Chromium in a self-test. You may have trouble generating reports.
log   [02:03:52.858] [error][plugins][reporting][validations] { TimeoutError: waiting for target failed: timeout 30000ms exceeded
    at Function.waitWithTimeout (/home/hadoop/soft/kibana-7.9.0-linux-x86_64/node_modules/puppeteer-core/lib/helper.js:215:26)
    at Browser.waitForTarget (/home/hadoop/soft/kibana-7.9.0-linux-x86_64/node_modules/puppeteer-core/lib/Browser.js:214:27)
    at Browser.<anonymous> (/home/hadoop/soft/kibana-7.9.0-linux-x86_64/node_modules/puppeteer-core/lib/helper.js:112:23)
    at Launcher.launch (/home/hadoop/soft/kibana-7.9.0-linux-x86_64/node_modules/puppeteer-core/lib/Launcher.js:185:21)
    at process._tickCallback (internal/process/next_tick.js:68:7) name: 'TimeoutError' }
log   [02:03:52.861] [error][plugins][reporting][validations] Error: Could not close browser client handle!
    at browserFactory.test.then.browser (/home/hadoop/soft/kibana-7.9.0-linux-x86_64/x-pack/plugins/reporting/server/lib/validate/validate_browser.js:26:15)
log   [02:03:52.866] [warning][plugins][reporting][validations] Reporting plugin self-check generated a warning: Error: Could not close browser client handle!

Fix: upgrading the Chrome browser resolved it; 360 Browser worked after switching it to speed (Chromium) mode. Other browsers were not tested.
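This self-test error concerns the headless Chromium that the Reporting plugin launches on the Kibana server itself, so if it persists regardless of the client browser, one commonly suggested workaround (an assumption, not from the original post) is to relax the Chromium sandbox in kibana.yml:
xpack.reporting.capture.browser.chromium.disableSandbox: true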