Graylog2 Architecture (reposted)

Original article: http://docs.graylog.org/en/latest/pages/architecture.html

Architectural considerations

There are a few rules of thumb when scaling resources for Graylog:

  • graylog-server nodes should have a focus on CPU power.
  • Elasticsearch nodes should have as much RAM as possible and the fastest disks you can get. Everything depends on I/O speed here.
  • MongoDB is only used to store configuration and dead letter messages, and can be sized fairly small.
  • graylog-web-interface nodes spend most of their time waiting for HTTP responses from the rest of the system and can also be rather small.
  • graylog-radio nodes act as workers. They don’t know each other and you can shut them down at any point in time without changing the cluster state at all.

Also keep in mind that messages are only stored in Elasticsearch. If you have data loss on Elasticsearch, the messages are gone, unless you have created backups of the indices.
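Such index backups can be taken with Elasticsearch's snapshot API. A minimal sketch: the repository name `graylog_backup` and the path `/mnt/backups` are placeholders, and the path must be whitelisted via `path.repo` in `elasticsearch.yml` before the first call will succeed.

```
# register a filesystem snapshot repository
curl -XPUT 'http://localhost:9200/_snapshot/graylog_backup' \
  -H 'Content-Type: application/json' \
  -d '{ "type": "fs", "settings": { "location": "/mnt/backups" } }'

# snapshot all Graylog indices into that repository
curl -XPUT 'http://localhost:9200/_snapshot/graylog_backup/snapshot_1' \
  -H 'Content-Type: application/json' \
  -d '{ "indices": "graylog_*" }'
```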

MongoDB is only storing meta information and will be abstracted with a general database layer in future versions. This will allow you to use other databases like MySQL instead.

Minimum setup

This is a minimum Graylog setup that can be used for smaller, non-critical, or test setups. None of the components is redundant, but it is easy and quick to set up.

[Image: simple_setup.png — minimum setup diagram]

Bigger production setup

This is a setup for bigger production environments. It has several graylog-server nodes behind a load balancer that share the processing load. The load balancer can ping the graylog-server nodes via REST/HTTP to check if they are alive and take dead nodes out of the cluster.
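Graylog exposes a load balancer status endpoint for exactly this health check. A minimal HAProxy sketch, assuming three nodes with placeholder hostnames; the check path is `/api/system/lbstatus` on Graylog 3.x (older versions without the `api` prefix use `/system/lbstatus`), and the exact path depends on your version and URI configuration:

```
backend graylog_rest
    balance roundrobin
    option httpchk GET /api/system/lbstatus
    server graylog1 graylog1.example.org:9000 check
    server graylog2 graylog2.example.org:9000 check
    server graylog3 graylog3.example.org:9000 check
```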

[Image: extended_setup.png — bigger production setup diagram]

Highly available setup with Graylog Radio

Beginning with Graylog 1.0, we no longer recommend running Graylog Radio. Every graylog-server instance now uses a high-performance message journal (from the Apache Kafka project) that spools all incoming messages to disk immediately and buffers load spikes at least as well as Graylog Radio did, but with fewer dependencies and less maintenance overhead.

If you are running a setup with Graylog Radio, we recommend shutting down the Graylog Radio architecture, including any AMQP or Kafka brokers, completely and sending messages directly to the graylog-server nodes. If you have been using Graylog Radio for load balancing, put a classic load balancer in front of your graylog-server nodes instead.

This approach has proven to work well in the large, high-throughput setups of several of our large-scale customers and has immensely reduced the complexity of their deployments.

The Kafka and AMQP inputs are still supported and can be used to build a custom setup with message brokers if you want to keep using them. One reason might be that Graylog is not the only subscriber to the messages on the bus. However, we recommend using Graylog forwarders to either write to a message bus after processing or write to other systems directly.

Reposted from: https://www.cnblogs.com/davidwang456/p/4486778.html

## What is Graylog

Graylog is an easy-to-use and reasonably full-featured log management tool. Compared with the ELK stack, its advantages are:

- Simple to deploy and maintain
- Query syntax that is easy to understand (compared with the ES query DSL...)
- Built-in basic alerting
- Search results can be exported as JSON
- Simple aggregation and statistics
- A fairly friendly UI
- Of course, it is much less extensible than ELK.

The full stack depends on:

- Graylog: provides the externally facing Graylog interfaces
- Elasticsearch: persistent storage and retrieval of the log data
- MongoDB: stores only some Graylog configuration

## Installation

> Graylog can be installed on bare metal or with Docker; Docker is used here.

Environment requirements:

- CentOS 7.4
- 2 CPUs, 2 GB RAM

Reference: https://hub.docker.com/r/graylog2/graylog/

### Prepare the environment

```
mkdir /root/graylog && cd /root/graylog

# data directories to mount
mkdir -p mongo_data graylog_journal es_data

# configuration directory
mkdir -p ./graylog/config
cd ./graylog/config
wget https://raw.githubusercontent.com/Graylog2/graylog-docker/3.0/config/graylog.conf
wget https://raw.githubusercontent.com/Graylog2/graylog-docker/3.0/config/log4j2.xml

# pull the images in advance
docker pull mongo:3
docker pull graylog/graylog:3.0
docker pull elasticsearch:5.6.9
```

### docker-compose.yml

```
version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongo:
    image: mongo:3
    volumes:
      - ./mongo_data:/data/db
      - /etc/localtime:/etc/localtime
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/docker.html
  elasticsearch:
    image: elasticsearch:5.6.9
    volumes:
      - ./es_data:/usr/share/elasticsearch/data
      - /etc/localtime:/etc/localtime
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      # Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/security-settings.html#general-security-settings
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:3.0
    volumes:
      - ./graylog_journal:/usr/share/graylog/data/journal
      - ./graylog/config:/usr/share/graylog/data/config
      - /etc/localtime:/etc/localtime
    environment:
      # CHANGE ME!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      # Change this to the address the web interface is exposed on
      - GRAYLOG_HTTP_EXTERNAL_URI=http://10.121.60.2:9000/
    links:
      - mongo
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 514:514
      # Syslog UDP
      - 514:514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
      # GELF HTTP
      - 12202:12202
```

### Start

`docker-compose -f docker-compose.yml up -d`

Then open http://10.121.60.2:9000/ in a browser and log in with admin/admin.

### Adjust the configuration

Email settings (required for alert notifications):

```
transport_email_enabled = true
transport_email_hostname = smtp.163.com
transport_email_port = 994
transport_email_use_auth = true
transport_email_use_tls = true
transport_email_use_ssl = true
transport_email_auth_username = 17191093767@163.com
transport_email_auth_password = zhim123456
transport_email_subject_prefix = [graylog]
transport_email_from_email = 17191093767@163.com
transport_email_web_interface_url = http://10.121.60.2:9000
```

## Usage

### Configure inputs

> The kinds of data a Graylog node can accept are called inputs; common ones are GELF TCP, GELF UDP, and GELF HTTP.
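A GELF message is just a small JSON document, so it can also be produced and sent programmatically. A minimal Python sketch, using the example host and port from this tutorial; small messages fit into one UDP datagram, and the GELF chunking needed for larger messages is deliberately omitted:

```python
import json
import socket


def build_gelf(short_message, host="example.org", level=1, **extra):
    """Build a GELF 1.1 payload; additional fields are prefixed with '_'."""
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "level": level,
    }
    msg.update({"_" + key: value for key, value in extra.items()})
    return json.dumps(msg).encode("utf-8")


def send_gelf_udp(payload, addr=("10.121.60.2", 12201)):
    """Send a single-datagram GELF message over UDP (no chunking)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, addr)


# Equivalent to the "echo ... | nc -u" verification command below:
payload = build_gelf("A short message info with udp",
                     some_info="foo", tag="test11")
# send_gelf_udp(payload)  # uncomment once the UDP input is listening
```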
Note: GELF TCP and GELF UDP can share the same port; GELF HTTP needs a separate port.

- Add three inputs (steps omitted): TCP and UDP on the default port 12201, HTTP on port 12202.
- Verify:

```
# udp
echo -n '{ "version": "1.1", "host": "example.org", "short_message": "A short message info with udp", "level": 1, "_some_info": "foo", "_tag": "test11" }' | nc -w10 -u 10.121.60.2 12201

# tcp (GELF TCP messages are terminated by a null byte)
echo -n -e '{ "version": "1.1", "host": "example.org", "short_message": "A short message with tcp", "level": 1, "_some_info": "foo" }'"\0" | nc -w10 10.121.60.2 12201

# http
curl -X POST -H 'Content-Type: application/json' -d '{ "version": "1.1", "host": "example.org", "short_message": "A short message with http", "level": 5, "_some_info": "foo" }' 'http://10.121.60.2:12202/gelf'
```

### Send Docker logs to Graylog

```
docker run --log-driver=gelf \
    --log-opt gelf-address=udp://10.121.60.2:12201 \
    --log-opt tag=test1 \
    -v /etc/localtime:/etc/localtime \
    -it nginx /bin/bash
```

docker-compose.yaml:

```
services:
  mongo:
    logging:
      driver: "gelf"
      options:
        gelf-address: "udp://10.121.60.2:12201"
        tag: mongo
    volumes:
      - /etc/localtime:/etc/localtime
```

### Send Java logs directly to Graylog

> Using logback. The original XML was mangled when this post was scraped; the block below is reconstructed from the surviving values and assumes the `de.siegmar.logbackgelf` appender, so treat it as a sketch rather than the author's exact configuration (the original also referenced a numeric field hint `requestId:long`):

```
<appender name="GELF" class="de.siegmar.logbackgelf.GelfUdpAppender">
    <graylogHost>10.121.60.2</graylogHost>
    <graylogPort>12201</graylogPort>
    <layout class="de.siegmar.logbackgelf.GelfLayout">
        <!-- An example of overwriting the short message pattern -->
        <shortPatternLayout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%ex{short}%.100m</pattern>
        </shortPatternLayout>
        <!-- Use HTML output of the full message.
             Yes, any layout can be used (please don't actually do this) -->
        <fullPatternLayout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%d{MM-dd HH:mm:ss.SSS} [%thread] %-5level \(%F:%L\) - %msg %n</pattern>
        </fullPatternLayout>
        <includeRawMessage>true</includeRawMessage>
        <includeMarker>true</includeMarker>
        <includeMdcData>true</includeMdcData>
        <includeCallerData>true</includeCallerData>
        <!-- Facility is not officially supported in GELF anymore,
             but you can use staticFields to do the same thing -->
        <staticField>tag:business-server</staticField>
    </layout>
</appender>
```

## Using the system

Menu overview:

- search: the log query panel ![](assets/2018-07-10-11-52-07.png)
- streams: group messages into streams by field; the default stream is "all messages" ![](assets/2018-07-10-11-52-22.png)
- alerts: pick a stream, define alert conditions and notification methods, and get notified when messages in the stream match ![](assets/2018-07-10-11-52-35.png)
- dashboards: graphical dashboards ![](assets/2018-07-10-11-52-53.png)
- sources: the hosts messages originate from ![](assets/2018-07-10-11-53-37.png)
- system: system configuration ![](assets/2018-07-10-11-53-52.png)

### Query syntax

[Official documentation](http://docs.graylog.org/en/3.0/pages/queries.html)

> Keywords are case-insensitive.

- Single keyword: `ssh`
- Multiple keywords (messages containing ssh or login): `ssh login`
- Containing an exact string: `"ssh login"`
- Containing several terms at once: `"opening index" AND db`
- Query by field value: `tag:couchdb.peer0.org1.ygsoft.com`
- Multiple tag values; a record matches if any one value matches:

```
tag:(orderer.ygsoft.com couchdb.peer0.org1.ygsoft.com)
or
tag:orderer.ygsoft.com couchdb.peer0.org1.ygsoft.com
```

- Exact phrase in a field: `tag:"ssh login"`
- Records that have a field: `_exists_:tag`
- Records missing a field: `NOT _exists_:tag`
- AND / OR:
  `"ssh login" AND source:example.org`
  `("ssh login" AND (source:example.org OR source:another.example.org)) OR _exists_:always_find_me`
- NOT:

```
"ssh login" AND NOT source:example.org
NOT example.org
```

  Note: AND, OR, and NOT must be uppercase.
- Wildcards: `?` matches a single character, `*` matches zero or more characters:

```
source:*.org
source:exam?le.org
source:exam?le.*
```

  Note: by default a term may not start with a wildcard, because leading wildcards use large amounts of memory; force-enable them with `allow_leading_wildcard_searches = true` in the configuration file.
- Use `~` for fuzzy matching when you are unsure of the spelling:

```
ssh logni~
source:exmaple.org~
```

  These match "ssh login" and "example.org".
- The following characters must be escaped with a backslash:

```
&& || : \ / + - ! ( ) { } [ ] ^ " ~ * ?
Example: resource:\/posts\/45326
```

### Saving searches

Search criteria can be saved with the "save search criteria" button.
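The escaping rule in the last bullet can be automated. A small sketch; the helper name is ours, not part of Graylog, and it conservatively escapes the individual characters of the two-character operators `&&` and `||` as well:

```python
# Characters that must be backslash-escaped in Graylog/Lucene query values
SPECIAL_CHARS = set('&|:\\/+-!(){}[]^"~*?')


def escape_query_value(value: str) -> str:
    """Backslash-escape Lucene special characters in a query value."""
    return "".join("\\" + ch if ch in SPECIAL_CHARS else ch
                   for ch in value)


print(escape_query_value("/posts/45326"))  # \/posts\/45326
```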
## Appendix: installing Graylog 2.x on bare metal

### 1. Environment preparation

1. **System requirements**:
   - Ubuntu 20.04 / CentOS 7+ recommended
   - RAM ≥ 4 GB, storage ≥ 10 GB
   - Open ports: 9000 (web), 12201/514 (UDP/TCP log ingestion)

2. **Install dependencies**:

```
# Install OpenJDK (Graylog 2.x requires Java 8)
sudo apt install openjdk-8-jdk

# Install MongoDB
sudo apt install mongodb-server

# Install Elasticsearch (6.x, compatible with Graylog 2.5)
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.23.deb
sudo dpkg -i elasticsearch-6.8.23.deb
```

### 2. Install the Graylog core

```
# Add the Graylog repository
wget https://packages.graylog2.org/repo/packages/graylog-2.5-repository_latest.deb
sudo dpkg -i graylog-2.5-repository_latest.deb
sudo apt update && sudo apt install graylog-server
```

### 3. Key configuration (edit `/etc/graylog/server/server.conf`)

1. **Passwords and secrets**:

```
password_secret = <a long random string>
root_password_sha2 = <output of: echo -n "yourpassword" | sha256sum | cut -d' ' -f1>
```

2. **Network binding**:

```
rest_listen_uri = http://0.0.0.0:9000/api/
web_listen_uri = http://0.0.0.0:9000/
```

3. **Elasticsearch connection**:

```
elasticsearch_hosts = http://localhost:9200
```

### 4. Start the services and verify

```
# Start the dependencies
sudo systemctl start mongodb elasticsearch

# Start Graylog
sudo systemctl enable graylog-server && sudo systemctl start graylog-server

# Check the status
journalctl -u graylog-server -f   # follow the live log
```

### 5. Log collection example (Spring Boot integration)

Add a GELF appender to `logback-spring.xml`:

```
<appender name="GELF" class="de.siegmar.logbackgelf.GelfUdpAppender">
    <graylogHost>127.0.0.1</graylogHost>
    <graylogPort>12201</graylogPort>
    <layout class="de.siegmar.logbackgelf.GelfLayout">
        <originHost>${HOSTNAME}</originHost>
        <includeMdcData>true</includeMdcData>
    </layout>
</appender>
```

(Configuration example adapted from the Graylog official documentation.)

### Common problems

1. **Elasticsearch cluster status abnormal**:
   - Check the memory settings in `/etc/elasticsearch/jvm.options`:

```
-Xms1g
-Xmx1g
```

2. **Web interface unreachable**:

```
sudo ufw allow 9000/tcp  # open the firewall port
```

### Official documentation

- [Graylog installation guide](https://docs.graylog.org/docs/install)
- [Production configuration recommendations](https://docs.graylog.org/docs/configuration)
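Both the docker-compose file and the bare-metal guide need the same two secrets: `root_password_sha2` is simply the SHA-256 hex digest of the admin password, and `password_secret` is a long random string. A quick sketch for generating both:

```python
import hashlib
import secrets


def root_password_sha2(password: str) -> str:
    """SHA-256 hex digest, as expected by root_password_sha2."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()


def password_secret(length: int = 96) -> str:
    """Random string for password_secret (96 characters is a common choice)."""
    return secrets.token_urlsafe(length)[:length]


# The docker-compose example above uses exactly the hash of "admin":
print(root_password_sha2("admin"))
# 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
```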