
Spring Boot integrating syslog + Logstash to collect logs into ES

1. Background

Logstash is a real-time data collection engine that ingests data of many kinds and analyzes, filters, and normalizes it. It extracts the records that match your own criteria and feeds them to a visualization front end. It supports full or incremental transfer from a wide range of data sources, normalization into a standard format, and formatted output, and it is commonly used for log processing. Its workflow consists of three stages (a minimal pipeline skeleton follows the list below):

  1. input, the data-ingestion stage: accepts many kinds of sources such as Oracle, MySQL, PostgreSQL, files, and so on;
  2. filter, the normalization stage: filters and formats the data, e.g. reformatting timestamps and strings;
  3. output, the output stage: ships the result to receivers such as Elasticsearch, MongoDB, or Kafka.
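
As a first orientation, here is a minimal illustrative pipeline showing the three stages (the file path and added field are made up for the example; the real pipeline used in this article appears in section 4):

input {                                          # 1. data input
  file { path => "/var/log/demo.log" }
}
filter {                                         # 2. normalization / filtering
  mutate { add_field => { "env" => "demo" } }
}
output {                                         # 3. output to a sink
  elasticsearch { hosts => ["http://esHost:9200"] }
}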

Architecture: the Spring Boot application emits syslog messages, the host's rsyslog service forwards them, and Logstash listens on the forwarded syslog port, filters the data, and sends it to ES for storage.

2. Integrating syslog in Spring Boot

Maven dependencies:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.7</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.1.7</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.7</version>
</dependency>

logback.xml configuration

After defining the appenders, reference them from the <root> tag, otherwise they will not take effect:

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <!-- Console output -->
    <appender name="consoleLogAppender" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <encoder><pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern></encoder>
    </appender>

    <appender name="infoFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>./logs/service.log</File>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level><onMatch>ACCEPT</onMatch><onMismatch>DENY</onMismatch>
        </filter>
        <encoder><pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern></encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/service-log-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>15</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <appender name="errorFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>./logs/service-error.log</File>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level><onMatch>ACCEPT</onMatch><onMismatch>DENY</onMismatch>
        </filter>
        <encoder><pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern></encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/service-error.log.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>15</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <appender name="msgAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>./logs/service-msg.log</File>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level><onMatch>ACCEPT</onMatch><onMismatch>DENY</onMismatch>
        </filter>
        <encoder><pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern></encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/service-msg-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>5</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <appender name="taskAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>./logs/service-task.log</File>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level><onMatch>ACCEPT</onMatch><onMismatch>DENY</onMismatch>
        </filter>
        <encoder><pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern></encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/service-task-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>5</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <appender name="mybatisplus" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>./logs/service-mybatisplus.log</File>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>DEBUG</level><onMatch>ACCEPT</onMatch><onMismatch>DENY</onMismatch>
        </filter>
        <encoder><pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern></encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/service-mybatisplus-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>5</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <!-- Define a SyslogAppender -->
    <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
        <syslogHost>localhost</syslogHost>
        <port>12525</port>
        <!-- Syslog facility: every log this service sends to the syslog server is tagged as coming from LOCAL0 -->
        <facility>LOCAL0</facility>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>WARN</level><onMatch>ACCEPT</onMatch><onMismatch>DENY</onMismatch>
        </filter>
        <suffixPattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] - [%p] - [%X{app:-${app}}] - [%thread] - [%logger{36}.%M] - %msg%n</suffixPattern>
    </appender>

    <logger name="msgLogger" level="info" additivity="false">
        <appender-ref ref="msgAppender" />
    </logger>
    <logger name="taskLogger" level="info" additivity="false">
        <appender-ref ref="taskAppender" />
    </logger>
    <!--
    <logger name="com.zbnsec.opera.project.simulator.framework.task" level="DEBUG">
        <appender-ref ref="mybatisplus" />
    </logger>
    -->

    <root level="INFO" additivity="false">
        <appender-ref ref="consoleLogAppender"/>
        <appender-ref ref="infoFileAppender"/>
        <appender-ref ref="errorFileAppender"/>
        <appender-ref ref="SYSLOG"/>
    </root>
</configuration>

The SyslogAppender element carries the syslog settings (a minimal usage sketch follows this list):
syslogHost: host name or IP address of the syslog server
port: listening port of the syslog server, 514/UDP by default
facility: identifies the source of the messages
suffixPattern: layout of the forwarded log line
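
For reference, a minimal sketch of application code that produces a log line which passes the SYSLOG appender's WARN filter (the class name and the "demo-service" value are illustrative, not taken from the original project):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class SyslogDemo {

    private static final Logger log = LoggerFactory.getLogger(SyslogDemo.class);

    public static void main(String[] args) {
        // The suffixPattern above prints %X{app:-${app}}, so putting an "app" value
        // into the MDC tags every forwarded line with the service name.
        MDC.put("app", "demo-service");
        try {
            log.info("info message - written to the console/file appenders only");
            log.warn("warn message - also forwarded to rsyslog on localhost:12525");
        } finally {
            MDC.remove("app");
        }
    }
}

Any SLF4J logger in the application behaves the same way once the SYSLOG appender is referenced from <root>; only WARN events pass its LevelFilter.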

3. rsyslog receiving the Spring Boot application's logs

1. Install the rsyslog service on the server

apt install rsyslog        # install rsyslog
systemctl start rsyslog    # start the service
systemctl status rsyslog   # check the service status
systemctl enable rsyslog   # start rsyslog automatically at boot

2. Configure rsyslog.conf

The rsyslog configuration file is located at /etc/rsyslog.conf:

global(workDirectory="/var/lib/rsyslog")
module(load="builtin:omfile" Template="RSYSLOG_TraditionalFileFormat")
include(file="/etc/rsyslog.d/*.conf" mode="optional")

*.* @@localhost:12515

*.info;mail.none;authpriv.none;cron.none                /var/log/messages
authpriv.*                                              /var/log/secure
mail.*                                                  -/var/log/maillog
cron.*                                                  /var/log/cron
*.emerg                                                 :omusrmsg:*
uucp,news.crit                                          /var/log/spooler
local7.*                                                /var/log/boot.log

The configuration above forwards the received syslog traffic (the messages the application sends to port 12525) on to localhost:12515; note that @@ denotes TCP forwarding, while a single @ would mean UDP.
If you also want the local system logs, add the following configuration; running tail -500f /var/log/messages will then show the system log being written continuously.

module(load="imuxsock"  SysSock.Use="off") 
module(load="imjournal"  StateFile="imjournal.state") 
module(load="imklog") 
module(load="immark") 
$imjournalRatelimitInterval 0

If you also want the Spring Boot logs to be stored in the messages file, the following configuration is needed.
Note: rsyslog then listens on port 12525, so if Logstash is also started listening on 12525 there will be a port conflict and Logstash will not receive the Spring Boot log data.

# Listen on a UDP port
module(load="imudp")
input(type="imudp" port="12525")

# Listen on a TCP port
module(load="imtcp")
input(type="imtcp" port="12525")

After changing the configuration, run systemctl restart rsyslog to restart the service.
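
A quick way to check that rsyslog picked up the new configuration (a sketch assuming the util-linux logger tool is installed and the imudp listener above is enabled; the test text is made up):

# confirm rsyslog is listening on the configured port
ss -lntup | grep 12525

# send a test message over UDP into the LOCAL0 facility
logger -d -n 127.0.0.1 -P 12525 -p local0.warning "rsyslog forwarding test"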

4. Integrating Logstash

1. Pull the Logstash image

The Logstash version should match the ES version, otherwise unexpected problems may occur.
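
To see which version to pull, you can first ask the ES cluster itself (esHost is a placeholder, as elsewhere in this article):

curl http://esHost:9200
# the response JSON contains "version" : { "number" : "7.4.0", ... }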

docker pull docker.elastic.co/logstash/logstash:7.4.0

2. Configure Logstash

Apart from the configuration below, everything else keeps the defaults shipped in the Logstash container; you can start a throwaway container and copy the default configuration (the config and pipeline directories) out of it, for example:
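
One possible way to copy those defaults out of a throwaway container (container name and local target paths are illustrative):

docker run -d --name logstash-tmp docker.elastic.co/logstash/logstash:7.4.0
docker cp logstash-tmp:/usr/share/logstash/config   ./logstash/config
docker cp logstash-tmp:/usr/share/logstash/pipeline ./logstash/pipeline
docker rm -f logstash-tmp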
logstash.yml:

config.reload.automatic: true
config.reload.interval: 3s
http.host: "0.0.0.0"
path.logs: /usr/share/logstash/logs/

log4j2.properties (Logstash's own logging configuration):

status = error
name = LogstashPropertiesConfig
appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true
# Define Rolling File Appender
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-plain.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 20
rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
rootLogger.appenderRef.rolling.ref = rolling

pipelines.yml: each pipeline configured in the pipeline directory gets a corresponding entry here.

- pipeline.id: system-syslog
  path.config: "/usr/share/logstash/pipeline/fscr-syslog.conf"

fscr-syslog.conf:

input {
  syslog {
    port => 12525
    type => "system-syslog"
  }
}

filter {
  if [type] == "system-syslog" {
    mutate {
      # Remove ANSI escape sequences
      gsub => ["message", "\e\[\d+(;\d+)*m", ""]
    }
    if [message] =~ /^\[/ {
      dissect {
        mapping => {
          "message" => "[%{timestamp}] - [%{loglevel}] - [%{app}] - [%{thread_info}] - [%{source_class}] - %{log_message}"
        }
      }
    }
    mutate {
      # Convert "WARN" to "WARNING"
      gsub => ["loglevel", "^WARN$", "WARNING"]
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
      add_field => [ "syslog_hostname", "%{logsource}" ]
      add_field => [ "syslog_severity", "%{loglevel}" ]
      add_field => [ "syslog_program", "%{app}" ]
      add_field => [ "syslog_message", "%{message}" ]
      add_field => [ "syslog_timestamp", "%{timestamp}" ]
      remove_field => ["severity_label", "facility_label", "facility", "priority"]
    }
    date {
      match => ["adjusted_received_at", "ISO8601"]
      timezone => "Asia/Shanghai"
      target => "@timestamp"
    }
  }
}

output {
  if [loglevel] == "WARNING" or [loglevel] == "ERROR" {
    elasticsearch {
      hosts => ["http://esHost:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
      template_name => "logstash"   # use an index template that already exists in ES
      template_overwrite => false
    }
  }
  if [loglevel] == "WARNING" or [loglevel] == "ERROR" {
    stdout {
      codec => rubydebug
    }
  }
}
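
To make the dissect mapping concrete, assume a message produced by the logback suffixPattern above (timestamp, class, and text invented for the example); the extracted fields would be:

# incoming message
[2024-05-20 10:15:30.123] - [WARN] - [demo-service] - [http-nio-8080-exec-1] - [c.e.demo.DemoController.hello] - something looks wrong

# fields after dissect (loglevel is then rewritten to WARNING by the gsub)
timestamp    => 2024-05-20 10:15:30.123
loglevel     => WARN
app          => demo-service
thread_info  => http-nio-8080-exec-1
source_class => c.e.demo.DemoController.hello
log_message  => something looks wrong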

logstash.json index template file:

{
  "name": "logstash",
  "order": 0,
  "version": 60001,
  "index_patterns": ["logstash-*"],
  "settings": {
    "index": {
      "number_of_shards": "1",
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "dynamic_templates": [
      {
        "message_field": {
          "path_match": "message",
          "mapping": {
            "norms": false,
            "type": "text"
          },
          "match_mapping_type": "string"
        }
      },
      {
        "string_fields": {
          "mapping": {
            "norms": false,
            "type": "text",
            "fields": {
              "keyword": {
                "ignore_above": 256,
                "type": "keyword"
              }
            }
          },
          "match_mapping_type": "string",
          "match": "*"
        }
      }
    ],
    "properties": {
      "@timestamp": { "type": "date" },
      "geoip": {
        "dynamic": true,
        "properties": {
          "ip": { "type": "ip" },
          "latitude": { "type": "half_float" },
          "location": { "type": "geo_point" },
          "longitude": { "type": "half_float" }
        }
      },
      "@version": { "type": "keyword" }
    }
  },
  "aliases": {}
}
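
The elasticsearch output above expects a template named logstash to already exist in ES. One way to create it from the file above is the legacy template API (a sketch; depending on the ES version, the top-level "name" field may need to be removed from the body first):

# install the index template referenced by template_name => "logstash"
curl -X PUT "http://esHost:9200/_template/logstash" \
     -H "Content-Type: application/json" \
     -d @logstash.json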

Start the container:

docker run --name logstash -itd --net=host \
  -v /opt/fscr/middleware/logstash/logstash/config:/usr/share/logstash/config \
  -v /opt/fscr/middleware/logstash/logstash/pipeline:/usr/share/logstash/pipeline \
  -p 5044:5044 -p 9600:9600 \
  logstash:8.8.0

After the container starts, if there are no error logs and you can see log output being printed, it has started correctly.
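
Two optional sanity checks once the whole chain is running (host names are placeholders):

# the Logstash monitoring API on port 9600 should report the system-syslog pipeline and its event counters
curl -s http://localhost:9600/_node/stats/pipelines?pretty

# the daily index in ES should start receiving WARN/ERROR documents
curl -s "http://esHost:9200/logstash-*/_search?size=1&pretty"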
