Installing Elasticsearch 6.3.x and the elasticsearch-head client on Linux: detailed steps and troubleshooting

Installing Elasticsearch manually

The simplest way is to install via yum or rpm; this post covers manual installation instead:

1. Visit the official site to find the download link for the latest version

https://www.elastic.co/downloads/elasticsearch

2. Download it from the command line:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.0.tar.gz

3. Extract the archive

tar -zxvf elasticsearch-6.3.0.tar.gz

4. Run Elasticsearch

(Pitfall 1) With 6.3.0, start in the background with bin/elasticsearch -d (the correct way), but you must switch to a non-root user first.

After starting, test it with curl http://localhost:9200.

Run sh /opt/elasticsearch-6.3.0/bin/elasticsearch -d, where -d means run as a background daemon.

[root@localhost bin]# Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.

       at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:94)

       at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:160)

       at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:286)

       at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)

Refer to the log for complete error details.

The reason: by default, Elasticsearch refuses to start as root. Switch to a dedicated user (su elastic) and start it from there.

Solution 1 (worked on older versions, but does not work on 6.3.0):

sh /usr/local/elasticsearch-6.3.0/bin/elasticsearch -d -Des.insecure.allow.root=true

Note: running as root in production is a security risk; don't do it.

Solution 2: create a dedicated user

[root@p7 /]# useradd elastic

chown -R elastic:elastic elasticsearch-6.3.0

[root@p7 /]# su elastic

[elastic@p7 /]$ sh /opt/elasticsearch-6.3.0/bin/elasticsearch -d


Check that it is running with curl http://localhost:9200/; output similar to the following (fields vary by version) means the node is up:

[elastic@localhost local]$ curl http://localhost:9200/
{
  "name" : "Astrid Bloom",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "6.3.0",
    "build_hash" : "ce9f0c7394dee074091dd1bc4e9469251181fc55",
    "build_timestamp" : "2016-08-29T09:14:17Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}

By default, Elasticsearch's REST API listens on port 9200 and binds only to localhost, so it can only be reached as http://localhost:9200 on the machine itself; you cannot, for example, reach a service running inside a VM from the host. To change this for easier debugging, edit the configuration file (config/elasticsearch.yml) and set:

network.host: 0.0.0.0

http.port: 9200

Alternatively, uncomment the network.host and http.port lines and set network.host to the machine's external IP, then restart. To stop Elasticsearch, run ps -ef | grep elasticsearch, find the process, and kill it.
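The stop step above can be scripted. This is a sketch that parses the PID out of a ps -ef line; the sample line and PID 4321 are made up for illustration:

```shell
# Illustrative sample line from ps -ef (the PID is field 2).
# The [o] bracket trick keeps the grep process itself out of the match.
sample='elastic   4321     1  2 10:05 ?        00:01:12 /usr/bin/java -Xms1g org.elasticsearch.bootstrap.Elasticsearch -d'
es_pid=$(printf '%s\n' "$sample" | grep '[o]rg.elasticsearch' | awk '{print $2}')
echo "$es_pid"
# On a live system you would run instead:
#   es_pid=$(ps -ef | grep '[o]rg.elasticsearch' | awk '{print $2}')
#   kill "$es_pid"   # plain SIGTERM lets the node shut down cleanly
```

Prefer SIGTERM over kill -9 so the node can close indices cleanly.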

If the service still can't be reached from outside, the firewall is the likely cause.
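For example, on CentOS 7 with firewalld (assuming firewalld is what your system uses), opening port 9200 looks like this; run as root:

```shell
# Open the REST port permanently and reload the firewall rules
firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --reload
# Older iptables-based systems would use something like:
# iptables -I INPUT -p tcp --dport 9200 -j ACCEPT
```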

 

Pitfall 2: bootstrap checks fail

[2017-04-13T00:08:51,031][ERROR][o.e.b.Bootstrap ] [ZdbjA-a] node validation exception

[4] bootstrap checks failed

[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

[2]: max number of threads [1024] for user [es] is too low, increase to at least [2048]

[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

[4]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

[2017-04-13T00:08:51,035][INFO ][o.e.n.Node ] [ZdbjA-a] stopping ...

[2017-04-13T00:08:51,097][INFO ][o.e.n.Node ] [ZdbjA-a] stopped

[2017-04-13T00:08:51,097][INFO ][o.e.n.Node ] [ZdbjA-a] closing ...

[2017-04-13T00:08:51,107][INFO ][o.e.n.Node ] [ZdbjA-a] closed

Several checks failed here; let's fix them one at a time.

  1. max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

Cause: the per-user limit on open file descriptors is too low, so Elasticsearch cannot open the files it needs.

Solution:

Switch to root and edit the limits configuration:

vi /etc/security/limits.conf

Add the following lines:

* soft nofile 65536

* hard nofile 131072

* soft nproc 2048

* hard nproc 4096

If you also see this error:

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

it means the mmap count available to the elasticsearch user is too small; at least 262144 is required.

Fix: append the following line to /etc/sysctl.conf:

vm.max_map_count=262144

Apply the change and verify it took effect. In my case the value didn't take at first (probably a formatting problem in the line I pasted in) and only a reboot fixed it; normally running sysctl -p as root reloads /etc/sysctl.conf without a reboot.
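Before falling back to a reboot, the kernel parameter can usually be applied immediately as root; shown here as a config fragment:

```shell
# Apply the value to the running kernel right away
sysctl -w vm.max_map_count=262144
# Re-read /etc/sysctl.conf so the persisted line is honored
sysctl -p
# Confirm the current value
sysctl vm.max_map_count
```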

Note: the * in limits.conf means the entry applies to every Linux user (you can also name a specific user, e.g. hadoop).

Save, exit, and log in again for the new limits to take effect.
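After logging back in as the elastic user, the new limits can be checked like this (the exact numbers depend on what you configured above):

```shell
ulimit -n                       # open-file limit; 65536 after the change above
ulimit -u                       # process/thread limit for this user
cat /proc/sys/vm/max_map_count  # kernel mmap limit; 262144 after the sysctl change
```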

 

For more issues, see the summary at https://github.com/DimonHo/DH_Note/issues/3

 

1. Using the elasticsearch-head client to connect to ES is very convenient

  • Go to GitHub and search for elasticsearch-head; open mobz/elasticsearch-head and download the zip package to your machine.
  • Download link: https://github.com/mobz/elasticsearch-head/archive/master.zip (downloaded straight from GitHub).
  • Unzip it anywhere; you will see the project's directory structure. Double-click index.html to open the UI.

  • If you already have a single-node ES service reachable from outside, enter its address in the box at the top and click Connect; on success you will see the cluster overview page. (If you don't have a single-node ES yet, see my other post on single-node ES installation.)
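One note on the steps above: the current master branch of elasticsearch-head is designed to run as a small Node.js server rather than being opened directly from index.html. Assuming node and npm are installed, starting it looks roughly like this:

```shell
cd elasticsearch-head-master   # the unzipped directory
npm install                    # fetch grunt and the other dependencies
npm run start                  # serves the UI at http://localhost:9100
```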


If the connection fails, edit the configuration file:

elasticsearch/config/elasticsearch.yml

# Add these settings so the head plugin can access ES

http.cors.enabled: true

http.cors.allow-origin: "*"

Switch to the elastic user and restart the service:

elasticsearch/bin/elasticsearch
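After the restart you can confirm CORS is active by asking for the response headers, e.g. curl -s -I -H 'Origin: http://localhost:9100' http://localhost:9200. The sketch below parses a captured sample response (the header text is illustrative, not live output):

```shell
# Sample headers like those returned when http.cors.enabled is true
headers='HTTP/1.1 200 OK
access-control-allow-origin: *
content-type: application/json; charset=UTF-8'
cors=$(printf '%s\n' "$headers" | grep -i '^access-control-allow-origin')
echo "$cors"
```

If the access-control-allow-origin header is missing, head's browser requests will be blocked by the browser's same-origin policy.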

I no longer remember every issue from the manual install; most came down to environment variables and access restrictions.

 
