elasticsearch-8.12.2-with-ik+pinyin+passwd+alpine

Preface

   Everything below is free; only the Dockerfile source is paid.

The latest stable Alpine-based Elasticsearch image, with Chinese word segmentation built in and compliant with Alibaba Cloud's container security rules. It can serve as the storage backend for ELK, SkyWalking, and similar stacks.


Results

  • Official image size

  • Optimized size

    The log4j vulnerability is fixed; the pinyin and ik analyzers are bundled; password authentication is supported; volumes use local bind mounts, in line with Alibaba Cloud's container security rules.

Usage

Copy the block below and run it directly on a server. Docker and docker-compose must be installed first.

D_PATH=/data
mkdir -p $D_PATH/es && cd $D_PATH/es 
cat <<EOF > docker-compose.yml
version: '3.9'
services:
  es:
    container_name: es
    hostname: es
    restart: unless-stopped # options: unless-stopped | always | on-failure
    image: registry.cn-hangzhou.aliyuncs.com/earic/es:8.12.2-elasticsearch-alpine
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - TZ=Asia/Shanghai
      - network.host=0
      # Comment this out when memory is limited
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - cluster.publish.timeout=180s      
      - discovery.type=single-node
      - xpack.license.self_generated.type=basic
      - xpack.security.enabled=true 
      # Whether allocation considers disk usage before assigning shards
      # (enabled by default); disable it when disk space is tight
      - cluster.routing.allocation.disk.threshold_enabled=false
      # Max shards per node (default 1000)
      - cluster.max_shards_per_node=100000
    volumes:
      - es_new_data:/usr/share/elasticsearch/data  # data directory mount
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "10"
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 2048m
        reservations:
          cpus: '0.8'
          memory: 1024M     
    healthcheck:
      # curl exit code 52 (an empty reply) means the node is not answering yet;
      # \$\$ survives the unquoted heredoc and compose interpolation as a literal $
      test: curl -s http://localhost:9200 >/dev/null; if [ \$\$? -eq 52 ]; then exit 1; else exit 0; fi
      interval: 30s
      timeout: 10s
      retries: 5
    ports:
      - "9400:9200"
      - "9500:9300"
#networks:
#  default:
#    external: true
#    name: jsxl
volumes:
   es_new_data:
      driver: local
      driver_opts:
         type: "none"
         o: "bind"
         device: "/data/es/data"       
EOF
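With the compose file written, the stack can be brought up and probed. A minimal sketch (note the bind-mount target `/data/es/data` referenced by the `es_new_data` volume must exist before the first start):

```shell
# Create the bind-mount target, then start the stack in the background
mkdir -p /data/es/data
docker-compose up -d

# Host port 9400 is mapped to Elasticsearch's 9200;
# poll until the node starts answering
until curl -s http://127.0.0.1:9400 >/dev/null; do
  echo "waiting for elasticsearch..."
  sleep 5
done
```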

Passwords

docker exec -it es bash
./bin/elasticsearch-setup-passwords auto --batch -u "http://localhost:9200"
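`elasticsearch-setup-passwords` still works but is deprecated in 8.x; the per-user `elasticsearch-reset-password` tool shipped with the stock 8.x distribution is the current equivalent. A sketch:

```shell
# Auto-generate (-a) a new password for the elastic user, non-interactively (-b)
docker exec -it es bin/elasticsearch-reset-password -u elastic -a -b --url "http://localhost:9200"
```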


Verification

  • Availability

    curl  -u elastic:7W6qJfHxHI1QjdOS8xir  http://127.0.0.1:9400/_cluster/health?pretty


  • Pinyin analysis

    curl -H 'Content-Type:application/json' -XGET 'http://127.0.0.1:9400/_analyze?pretty=true' -u elastic:7W6qJfHxHI1QjdOS8xir -d '{"analyzer":"pinyin", "text":"我的中国心"}'


  • ik_smart analysis

    curl -X  PUT http://127.0.0.1:9400/test001  -u elastic:7W6qJfHxHI1QjdOS8xir
    curl -X   POST 'http://127.0.0.1:9400/test001/_analyze?pretty=true' -H 'Content-Type: application/json' -d '{"text":"我们是软件工程师","tokenizer":"ik_smart"}' -u elastic:7W6qJfHxHI1QjdOS8xir
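In a real index the two plugins are usually combined: ik tokenizes the Chinese text and pinyin runs as a token filter for phonetic lookup. A hypothetical sketch (the index name `goods` and field `name` are made up; the `py` filter options come from the analysis-pinyin plugin):

```shell
curl -X PUT 'http://127.0.0.1:9400/goods' -u elastic:7W6qJfHxHI1QjdOS8xir \
  -H 'Content-Type: application/json' -d '
{
  "settings": {
    "analysis": {
      "filter": {
        "py": {
          "type": "pinyin",
          "keep_first_letter": true,
          "keep_full_pinyin": true,
          "keep_original": true
        }
      },
      "analyzer": {
        "ik_pinyin": { "tokenizer": "ik_max_word", "filter": ["py"] }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": { "type": "text", "analyzer": "ik_pinyin", "search_analyzer": "ik_smart" }
    }
  }
}'
```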


Commands

Common commands for reference

# Check cluster health
curl -u elastic:bsSB5EKyCMH3vwXSosc1   http://127.0.0.1:9400/_cluster/health?pretty

# Show disk allocation and shard counts per node
curl -u elastic:bsSB5EKyCMH3vwXSosc1   http://127.0.0.1:9400/_cat/allocation?v

# Raise the max shards per node
curl -u elastic:bsSB5EKyCMH3vwXSosc1  -X PUT -H "Content-Type: application/json" -d '{"transient": {"cluster": {"max_shards_per_node":30000 } } }' "http://127.0.0.1:9400/_cluster/settings"

# Set the replica count of all existing indices to 0
curl -XPUT -u elastic:bsSB5EKyCMH3vwXSosc1  'http://127.0.0.1:9400/_settings' -H 'Content-Type: application/json' -d '{"number_of_replicas": 0 }'

# Find unassigned shards
curl -XGET  -u elastic:bsSB5EKyCMH3vwXSosc1 'localhost:9400/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED

# The allocation explain API reports the reason, and a hint at the fix, for any unassigned shard
curl -XGET  -u elastic:bsSB5EKyCMH3vwXSosc1 localhost:9400/_cluster/allocation/explain

## Delete an index whose shards stay unassigned
curl -u elastic:bsSB5EKyCMH3vwXSosc1 -XDELETE localhost:9400/skywalking-index-2023-12-27_segment-20240401

# List all indices
curl -XGET  -u elastic:bsSB5EKyCMH3vwXSosc1 'localhost:9400/_cat/indices?v'

# Delete all indices (disallowed by default)
curl  -u elastic:bsSB5EKyCMH3vwXSosc1 -XDELETE localhost:9400/_all
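Related to the unassigned-shard commands above: once the root cause (disk watermarks, replica counts) is fixed, allocation can be retried explicitly; `retry_failed` is a standard flag of the `_cluster/reroute` API:

```shell
# Ask the master to retry shard allocations that previously hit the max-retry limit
curl -u elastic:bsSB5EKyCMH3vwXSosc1 -X POST \
  'http://127.0.0.1:9400/_cluster/reroute?retry_failed=true'
```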

Image build

File contents

elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0

discovery.type: "single-node"
xpack.ml.enabled: false
xpack.security.enabled: false

log4j2.properties
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
######## Server JSON ############################
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json
appender.rolling.layout.type = ECSJsonLayout
appender.rolling.layout.dataset = elasticsearch.server
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
################################################
######## Server -  old style pattern ###########
appender.rolling_old.type = RollingFile
appender.rolling_old.name = rolling_old
appender.rolling_old.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling_old.layout.type = PatternLayout
appender.rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
appender.rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_old.policies.type = Policies
appender.rolling_old.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling_old.policies.time.interval = 1
appender.rolling_old.policies.time.modulate = true
appender.rolling_old.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling_old.policies.size.size = 128MB
appender.rolling_old.strategy.type = DefaultRolloverStrategy
appender.rolling_old.strategy.fileIndex = nomax
appender.rolling_old.strategy.action.type = Delete
appender.rolling_old.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling_old.strategy.action.condition.type = IfFileName
appender.rolling_old.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling_old.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling_old.strategy.action.condition.nested_condition.exceeds = 2GB
################################################
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling
rootLogger.appenderRef.rolling_old.ref = rolling_old
######## Deprecation JSON #######################
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.json
appender.deprecation_rolling.layout.type = ECSJsonLayout
# Intentionally follows a different pattern to above
appender.deprecation_rolling.layout.dataset = deprecation.elasticsearch
appender.deprecation_rolling.filter.rate_limit.type = RateLimitingFilter
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.json.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
appender.header_warning.type = HeaderWarningAppender
appender.header_warning.name = header_warning
#################################################
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.appenderRef.header_warning.ref = header_warning
logger.deprecation.additivity = false
######## Search slowlog JSON ####################
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
  .cluster_name}_index_search_slowlog.json
appender.index_search_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_search_slowlog_rolling.layout.dataset = elasticsearch.index_search_slowlog
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
  .cluster_name}_index_search_slowlog-%i.json.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.size.size = 1GB
appender.index_search_slowlog_rolling.strategy.type = DefaultRolloverStrategy
appender.index_search_slowlog_rolling.strategy.max = 4
#################################################
#################################################
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false
######## Indexing slowlog JSON ##################
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _index_indexing_slowlog.json
appender.index_indexing_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_indexing_slowlog_rolling.layout.dataset = elasticsearch.index_indexing_slowlog
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _index_indexing_slowlog-%i.json.gz
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.size.size = 1GB
appender.index_indexing_slowlog_rolling.strategy.type = DefaultRolloverStrategy
appender.index_indexing_slowlog_rolling.strategy.max = 4
#################################################
logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false
logger.com_amazonaws.name = com.amazonaws
logger.com_amazonaws.level = warn
logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.name = com.amazonaws.jmx.SdkMBeanRegistrySupport
logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.level = error
logger.com_amazonaws_metrics_AwsSdkMetrics.name = com.amazonaws.metrics.AwsSdkMetrics
logger.com_amazonaws_metrics_AwsSdkMetrics.level = error
logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.name = com.amazonaws.auth.profile.internal.BasicProfileConfigFileLoader
logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.level = error
logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.name = com.amazonaws.services.s3.internal.UseArnRegionResolver
logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.level = error
appender.audit_rolling.type = RollingFile
appender.audit_rolling.name = audit_rolling
appender.audit_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit.json
appender.audit_rolling.layout.type = PatternLayout
appender.audit_rolling.layout.pattern = {\
                "type":"audit", \
                "timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss,SSSZ}"\
                %varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
                %varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
                %varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
                %varsNotEmpty{, "host.ip":"%enc{%map{host.ip}}{JSON}"}\
                %varsNotEmpty{, "event.type":"%enc{%map{event.type}}{JSON}"}\
                %varsNotEmpty{, "event.action":"%enc{%map{event.action}}{JSON}"}\
                %varsNotEmpty{, "authentication.type":"%enc{%map{authentication.type}}{JSON}"}\
                %varsNotEmpty{, "user.name":"%enc{%map{user.name}}{JSON}"}\
                %varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
                %varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
                %varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
                %varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
                %varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
                %varsNotEmpty{, "user.roles":%map{user.roles}}\
                %varsNotEmpty{, "apikey.id":"%enc{%map{apikey.id}}{JSON}"}\
                %varsNotEmpty{, "apikey.name":"%enc{%map{apikey.name}}{JSON}"}\
                %varsNotEmpty{, "authentication.token.name":"%enc{%map{authentication.token.name}}{JSON}"}\
                %varsNotEmpty{, "authentication.token.type":"%enc{%map{authentication.token.type}}{JSON}"}\
                %varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
                %varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
                %varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
                %varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
                %varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
                %varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
                %varsNotEmpty{, "request.body":"%enc{%map{request.body}}{JSON}"}\
                %varsNotEmpty{, "request.id":"%enc{%map{request.id}}{JSON}"}\
                %varsNotEmpty{, "action":"%enc{%map{action}}{JSON}"}\
                %varsNotEmpty{, "request.name":"%enc{%map{request.name}}{JSON}"}\
                %varsNotEmpty{, "indices":%map{indices}}\
                %varsNotEmpty{, "opaque_id":"%enc{%map{opaque_id}}{JSON}"}\
                %varsNotEmpty{, "trace.id":"%enc{%map{trace.id}}{JSON}"}\
                %varsNotEmpty{, "x_forwarded_for":"%enc{%map{x_forwarded_for}}{JSON}"}\
                %varsNotEmpty{, "transport.profile":"%enc{%map{transport.profile}}{JSON}"}\
                %varsNotEmpty{, "rule":"%enc{%map{rule}}{JSON}"}\
                %varsNotEmpty{, "put":%map{put}}\
                %varsNotEmpty{, "delete":%map{delete}}\
                %varsNotEmpty{, "change":%map{change}}\
                %varsNotEmpty{, "create":%map{create}}\
                %varsNotEmpty{, "invalidate":%map{invalidate}}\
                }%n
appender.audit_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit-%d{yyyy-MM-dd}-%i.json.gz
appender.audit_rolling.policies.type = Policies
appender.audit_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.audit_rolling.policies.time.interval = 1
appender.audit_rolling.policies.time.modulate = true
appender.audit_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.audit_rolling.policies.size.size = 1GB
appender.audit_rolling.strategy.type = DefaultRolloverStrategy
appender.audit_rolling.strategy.fileIndex = nomax
logger.xpack_security_audit_logfile.name = org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
logger.xpack_security_audit_logfile.level = info
logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref = audit_rolling
logger.xpack_security_audit_logfile.additivity = false
logger.xmlsig.name = org.apache.xml.security.signature.XMLSignature
logger.xmlsig.level = error
logger.samlxml_decrypt.name = org.opensaml.xmlsec.encryption.support.Decrypter
logger.samlxml_decrypt.level = fatal
logger.saml2_decrypt.name = org.opensaml.saml.saml2.encryption.Decrypter
logger.saml2_decrypt.level = fatal

logrotate
/var/log/elasticsearch/*.log {
    daily
    rotate 50
    size 50M
    copytruncate
    compress
    delaycompress
    missingok
    notifempty
    create 644 elasticsearch elasticsearch
}
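Assuming the rule above is installed in the image as `/etc/logrotate.d/elasticsearch` (the path is an assumption; adjust to wherever the build places it), logrotate's debug mode can dry-run it before you rely on it:

```shell
# -d enables debug mode: logrotate prints what it would do without rotating anything
docker exec es logrotate -d /etc/logrotate.d/elasticsearch
```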

build.sh
docker build  --no-cache  -t registry.cn-hangzhou.aliyuncs.com/earic/es:8.12.2-elasticsearch-alpine .

docker-healthcheck
#!/bin/bash
set -eo pipefail
host="$(hostname --ip-address || echo '127.0.0.1')"
if health="$(curl -fsSL "http://$host:9200/_cat/health?h=status")"; then
    health="$(echo "$health" | sed -r 's/^[[:space:]]+|[[:space:]]+$//g')" # trim whitespace (otherwise we'll have "green ")
    if [ "$health" = 'green' ]; then
        exit 0
    fi
    echo >&2 "unexpected health status: $health"
fi
exit 2
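The probe exits 0 only when cluster health is green; a fresh single-node cluster with default replica settings reports yellow, which this script treats as unhealthy. Assuming it is installed on the container's PATH as `docker-healthcheck` (an assumption about the image layout), it can be exercised by hand:

```shell
# Run the probe manually and show its exit status (0 = green, 2 = anything else)
docker exec es docker-healthcheck; echo "exit code: $?"
```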

elastic-entrypoint.sh
#!/bin/bash
set -e
umask 0002
declare -a es_opts
while IFS='=' read -r envvar_key envvar_value
do
    # Elasticsearch env vars need to have at least two dot separated lowercase words, e.g. `cluster.name`
    if [[ "$envvar_key" =~ ^[a-z0-9_]+\.[a-z0-9_]+ ]]; then
        if [[ ! -z $envvar_value ]]; then
          es_opt="-E${envvar_key}=${envvar_value}"
          es_opts+=("${es_opt}")
        fi
    fi
done < <(env)
export ES_JAVA_HOME=$(dirname "$(dirname "$(readlink -f "$(which javac || which java)")")")
export ES_JAVA_OPTS="-Des.cgroups.hierarchy.override=/ $ES_JAVA_OPTS"
# Determine if x-pack is enabled
if bin/elasticsearch-plugin list -s | grep -q x-pack; then
    if [[ -n "$ELASTIC_PASSWORD" ]]; then
        [[ -f config/elasticsearch.keystore ]] ||  bin/elasticsearch-keystore create
        echo "$ELASTIC_PASSWORD" | bin/elasticsearch-keystore add -x 'bootstrap.password'
    fi
fi
# Add elasticsearch as command if needed
if [ "${1:0:1}" = '-' ]; then
    set -- elasticsearch "$@"
fi
# Drop root privileges if we are running elasticsearch
# allow the container to be started with `--user`
if [ "$1" = 'elasticsearch' -a "$(id -u)" = '0' ]; then
    # Change the ownership of user-mutable directories to elasticsearch
    chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/{data,logs}
    set -- su-exec elasticsearch "$@" "${es_opts[@]}"
fi
# ik-related configuration: optionally wire in remote dictionaries
if [ -n "$REMOTE_EXT_DICT" ]; then
    sed -i 's#<!-- <entry key="remote_ext_dict">words_location</entry> -->#<entry key="remote_ext_dict">'"${REMOTE_EXT_DICT}"'</entry>#g' /usr/share/elasticsearch/config/analysis-ik/IKAnalyzer.cfg.xml
fi
if [ -n "$REMOTE_EXT_STOPWORDS" ]; then
    sed -i 's#<!-- <entry key="remote_ext_stopwords">words_location</entry> -->#<entry key="remote_ext_stopwords">'"${REMOTE_EXT_STOPWORDS}"'</entry>#g' /usr/share/elasticsearch/config/analysis-ik/IKAnalyzer.cfg.xml
fi
exec "$@"
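The block just before `exec` lets ik load remote dictionaries by rewriting IKAnalyzer.cfg.xml at startup. A hypothetical run (the dictionary URLs are placeholders, not real endpoints):

```shell
docker run -d --name es \
  -e REMOTE_EXT_DICT=http://example.com/ik/ext_dict.txt \
  -e REMOTE_EXT_STOPWORDS=http://example.com/ik/stopwords.txt \
  registry.cn-hangzhou.aliyuncs.com/earic/es:8.12.2-elasticsearch-alpine
```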

Dockerfile

Click here for the full content (including the Dockerfile): http://mp.weixin.qq.com/s?__biz=MzA4OTQ0MjA1Ng==&mid=2651269491&idx=1&sn=be91f63d168cb9cd15987c3bec96bf9a&chksm=8be95b14bc9ed202bc4c169bb23c4c9ca7c196ad3823e11bcff3a92e19d4b24bea400543b192&token=1563984431&lang=zh_CN#rd
