A Practical Reference for Log Collection

I. Logging Conventions

The purpose of logging is to locate problems quickly.

  1. See the logging rules in the Alibaba Java Development Manual; a sketch of the recommended placeholder style follows.
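
A minimal sketch of the placeholder style the manual calls for (the class and the buildDebugContext helper are hypothetical, standing in for real business code):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class OrderService {
        private static final Logger log = LoggerFactory.getLogger(OrderService.class);

        public void pay(String orderId) {
            // Placeholders instead of string concatenation: no formatting cost when the level is off
            log.info("start payment, orderId={}", orderId);
            // Guard genuinely expensive arguments with a level check
            if (log.isDebugEnabled()) {
                log.debug("payment context: {}", buildDebugContext(orderId));
            }
        }

        private String buildDebugContext(String orderId) {
            return "order=" + orderId; // hypothetical stand-in for an expensive computation
        }
    }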

II. Local Test Environment Setup

This article covers the logs -> logstash -> kafka pipeline, simulated locally with Docker.

(1) Start ZooKeeper

  1. Pull the ZooKeeper Docker image

    docker pull wurstmeister/zookeeper
  2. Start ZooKeeper

    docker run -d --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper

(2) Start Kafka and verify it receives messages

  1. Pull the Kafka Docker image

    docker pull wurstmeister/kafka
  2. Start Kafka

    docker run --name kafka -e HOST_IP=localhost -e KAFKA_ADVERTISED_PORT=9092 -e KAFKA_BROKER_ID=1 -e ZK=zk -p 9092:9092 --link zookeeper:zk -t wurstmeister/kafka
    docker ps
  3. Create a topic

    docker exec -it ${CONTAINER_ID} /bin/bash
    cd /opt/kafka_2.12-1.0.0/
    bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic biz-logs
  4. Open a new shell, exec into the Kafka container again, and start a consumer

    cd /opt/kafka_2.12-1.0.0/
    bin/kafka-console-consumer.sh --zookeeper zookeeper:2181 --topic biz-logs --from-beginning
  5. In another shell, start a producer to test with

    cd /opt/kafka_2.12-1.0.0/
    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic biz-logs
  6. Type a line into the producer; if it appears in the consumer's output, Kafka is up and running. (A programmatic check is sketched below.)
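
If you prefer to verify from code instead of the console scripts, here is a minimal producer sketch; it assumes the kafka-clients library is on the classpath and that the container's port 9092 is published to the host as in the run command above:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class BizLogsProducerTest {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumes -p 9092:9092 above
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // The message should show up in the console consumer from step 4
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("biz-logs", "hello from java"));
            }
        }
    }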

(3) Start Logstash

  1. Pull the Logstash Docker image

    docker pull logstash
  2. Create the config file udp-kafka.conf, which accepts events via GELF, UDP, and TCP and outputs them to Kafka

    input {
        gelf {
            port => 12201
            type => docker
            codec => json
        }
        udp {
            type => docker
            port => 10800
            codec => json
        }
        tcp {
            type => docker
            port => 10800
            codec => json
        }
    }
    filter {
    }
    output {
        stdout { codec => rubydebug }
        kafka {
            topic_id => "biz-logs"
            bootstrap_servers => "ka:9092"
            codec => "json"
        }
    }
  3. Start Logstash (run this from the directory containing the config file above; on Windows, replace $PWD with the absolute path of that directory)

    docker run -it --rm -v "$PWD":/config-dir -p 10800:10800/udp -p 10800:10800/tcp -p 12201:12201/udp --link kafka:ka logstash -f /config-dir/udp-kafka.conf
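
Before wiring a real application in, you can sanity-check the pipeline by hand. The sketch below (plain JDK, no extra dependencies) pushes one JSON event into the udp input on port 10800; it should appear both on Logstash's stdout and in the biz-logs topic:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class UdpLogstashSmokeTest {
        public static void main(String[] args) throws Exception {
            // Matches the udp input above: port 10800, codec json
            byte[] payload = "{\"message\":\"hello logstash\",\"level\":\"INFO\"}"
                    .getBytes(StandardCharsets.UTF_8);
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.send(new DatagramPacket(payload, payload.length,
                        InetAddress.getByName("127.0.0.1"), 10800));
            }
        }
    }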

III. Logging with logback

Prerequisites

  1. Define appName by adding the following to the pom file

    <properties>
        <appName>authority</appName>
    </properties>
  2. Add an ignore entry to .gitignore

    logs/
  3. Log event fields

  • Fields present by default

    Field        Description
    @timestamp   Time of the log event (yyyy-MM-dd'T'HH:mm:ss.SSSZZ). See customizing timezone.
    @version     Logstash format version (e.g. 1). See customizing version.
    message      Formatted log message of the event
    logger_name  Name of the logger that logged the event
    thread_name  Name of the thread that logged the event
    level        String name of the level of the event
    level_value  Integer value of the level of the event
    stack_trace  (Only if a throwable was logged) The stacktrace of the throwable. Stackframes are separated by line endings.
    tags         (Only if tags are found) The names of any markers not explicitly handled (e.g. markers from MarkerFactory.getMarker will be included as tags, but the markers from Markers will not).
  • With includeCallerData enabled

    Field               Description
    caller_class_name   Fully qualified class name of the class that logged the event
    caller_method_name  Name of the method that logged the event
    caller_file_name    Name of the file that logged the event
    caller_line_number  Line number of the file where the event was logged

(1) Configure local log output

  1. Configure the pom file to set the output path

    <!-- Production environment -->
    <!-- Log file location -->
    <logging.path>/data/logs/${appName}</logging.path>

    <!-- Local environment -->
    <!-- Log file location -->
    <logging.path>${basedir}/logs</logging.path>
  2. In logback.xml, add a console appender

    <appender name="consoleRolling" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>
  3. In logback.xml, add a local rolling file appender

    <appender name="dailyRollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>${logging.path}/app.log</File>
        <!-- Either of the two rollingPolicy configurations below works; the first is more concise -->
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>${logging.path}/app.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <maxHistory>30</maxHistory>
            <maxFileSize>50MB</maxFileSize>
        </rollingPolicy>
        <!--<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">-->
        <!--<fileNamePattern>${logging.path}/app-%d{yyyyMMdd}.%i.log</fileNamePattern>-->
        <!--<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">-->
        <!--<maxFileSize>50MB</maxFileSize>-->
        <!--</timeBasedFileNamingAndTriggeringPolicy>-->
        <!--<maxHistory>30</maxHistory>-->
        <!--</rollingPolicy>-->
        <encoder>
            <!--
            %d{HH:mm:ss.SSS} - time of the log line; files are rolled daily, so the date is unnecessary
            %X{traceId}/%X{spanId} - reserved for tracing
            ${appName} - application name defined in the pom file
            %thread - name of the thread that produced the log line, useful in web apps and async tasks
            %-5level - log level, left-aligned to 5 characters
            %logger{35} - name of the logger
            %msg - the log message
            %n - platform line separator
            -->
            <pattern>%d{HH:mm:ss.SSS} [%X{traceId}/%X{spanId}] ${appName} [%thread] %-5level %logger{35} - %msg %n</pattern>
        </encoder>
    </appender>
  4. Start the project and observe the log output
    1) Log file output

    15:36:58.036 [/] authority [main] INFO  org.hibernate.Version - HHH000412: Hibernate Core {5.0.12.Final}
    15:36:58.038 [/] authority [main] INFO org.hibernate.cfg.Environment - HHH000206: hibernate.properties not found
    15:36:58.039 [/] authority [main] INFO org.hibernate.cfg.Environment - HHH000021: Bytecode provider name : javassist
    15:36:58.090 [/] authority [main] INFO o.h.annotations.common.Version - HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
    15:36:58.251 [/] authority [main] INFO c.a.druid.pool.DruidDataSource - {dataSource-1} inited
    15:36:58.895 [/] authority [main] INFO org.hibernate.dialect.Dialect - HHH000400: Using dialect: org.hibernate.dialect.MySQL5Dialect
    15:36:59.373 [/] authority [main] INFO o.h.tool.hbm2ddl.SchemaUpdate - HHH000228: Running hbm2ddl schema update
    15:37:21.387 [/] authority [main] INFO o.s.o.j.LocalContainerEntityManagerFactoryBean - Initialized JPA EntityManagerFactory for persistence unit 'default'
    15:37:21.540 [/] authority [main] INFO org.redisson.Version - Redisson 2.3.0
    15:37:21.661 [/] authority [nioEventLoopGroup-2-7] INFO o.r.c.p.SinglePubSubConnectionPool - 1 connections initialized for /192.168.1.204:6379
    15:37:21.662 [/] authority [nioEventLoopGroup-2-2] INFO o.r.c.pool.MasterConnectionPool - 5 connections initialized for /192.168.1.204:6379

    2) Console output

    2017-11-21 15:36:58.036  INFO 1899 --- [           main] org.hibernate.Version                    : HHH000412: Hibernate Core {5.0.12.Final}
    2017-11-21 15:36:58.038 INFO 1899 --- [ main] org.hibernate.cfg.Environment : HHH000206: hibernate.properties not found
    2017-11-21 15:36:58.039 INFO 1899 --- [ main] org.hibernate.cfg.Environment : HHH000021: Bytecode provider name : javassist
    2017-11-21 15:36:58.090 INFO 1899 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
    2017-11-21 15:36:58.251 INFO 1899 --- [ main] com.alibaba.druid.pool.DruidDataSource : {dataSource-1} inited
    2017-11-21 15:36:58.895 INFO 1899 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.MySQL5Dialect
    2017-11-21 15:36:59.373 INFO 1899 --- [ main] org.hibernate.tool.hbm2ddl.SchemaUpdate : HHH000228: Running hbm2ddl schema update
    2017-11-21 15:37:21.387 INFO 1899 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
    2017-11-21 15:37:21.540 INFO 1899 --- [ main] org.redisson.Version : Redisson 2.3.0
    2017-11-21 15:37:21.661 INFO 1899 --- [ntLoopGroup-2-7] o.r.c.pool.SinglePubSubConnectionPool : 1 connections initialized for /192.168.1.204:6379
    2017-11-21 15:37:21.662 INFO 1899 --- [ntLoopGroup-2-2] o.r.c.pool.MasterConnectionPool : 5 connections initialized for /192.168.1.204:6379

(2) Send logs to Logstash over UDP

MDC entries are added to the JSON log output by default; a short sketch follows.
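
A minimal sketch of filling the MDC from application code (the traceId/spanId values are made up for illustration); with the appender configured below, each entry shows up as a top-level field of the JSON event:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class MdcDemo {
        private static final Logger log = LoggerFactory.getLogger(MdcDemo.class);

        public static void main(String[] args) {
            MDC.put("traceId", "abc123"); // hypothetical values
            MDC.put("spanId", "1");
            try {
                log.info("user login ok"); // traceId and spanId ride along as JSON fields
            } finally {
                MDC.clear(); // MDC is per-thread state; always clean it up
            }
        }
    }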

  1. Add the dependency

    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>4.11</version>
    </dependency>
  2. Add the following to logback.xml (this sends logs to Logstash over UDP; for other transports see https://github.com/logstash/logstash-logback-encoder/tree/logstash-logback-encoder-4.11)

    <appender name="stash" class="net.logstash.logback.appender.LogstashSocketAppender">
        <port>10800</port>
        <includeCallerData>true</includeCallerData>
        <customFields>{"app_name":"authority"}</customFields>
    </appender>
  3. Start the project and observe the output in Logstash

    {
    "caller_file_name" => "DruidDataSource.java",
    "level" => "INFO",
    "caller_line_number" => 1658,
    "message" => "{dataSource-1} closed",
    "type" => "docker",
    "caller_method_name" => "close",
    "@timestamp" => 2017-11-16T08:39:49.543Z,
    "app_name" => "authority",
    "caller_class_name" => "com.alibaba.druid.pool.DruidDataSource",
    "thread_name" => "Thread-69",
    "level_value" => 20000,
    "@version" => 1,
    "host" => "172.17.0.1",
    "logger_name" => "com.alibaba.druid.pool.DruidDataSource"
    }
    {
    "caller_file_name" => "SpringApplication.java",
    "level" => "ERROR",
    "caller_line_number" => 771,
    "message" => "Application startup failed",
    "type" => "docker",
    "caller_method_name" => "reportFailure",
    "@timestamp" => 2017-11-16T08:39:47.303Z,
    "app_name" => "authority",
    "caller_class_name" => "org.springframework.boot.SpringApplication",
    "thread_name" => "main",
    "level_value" => 40000,
    "@version" => 1,
    "host" => "172.17.0.1",
    "logger_name" => "org.springframework.boot.SpringApplication",
    "stack_trace" => "java.lang.IllegalStateException: Failed to execute CommandLineRunner\n\tat org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:735)\n\tat org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:716)\n\tat org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:703)\n\tat org.springframework.boot.SpringApplication.run(SpringApplication.java:304)\n\tat org.springframework.boot.SpringApplication.run(SpringApplication.java:1118)\n\tat org.springframework.boot.SpringApplication.run(SpringApplication.java:1107)\n\tat com.yudianbank.project.YdjfBootAuthorityApplication.main(YdjfBootAuthorityApplication.java:20)\nCaused by: org.redisson.client.RedisException: Unexpected exception while processing command\n\tat org.redisson.command.CommandAsyncService.convertException(CommandAsyncService.java:267)\n\tat org.redisson.command.CommandAsyncService.get(CommandAsyncService.java:112)\n\tat org.redisson.RedissonObject.get(RedissonObject.java:55)\n\tat org.redisson.RedissonMapCache.scanIterator(RedissonMapCache.java:611)\n\tat org.redisson.RedissonMapIterator.iterator(RedissonMapIterator.java:32)\n\tat org.redisson.RedissonBaseMapIterator.hasNext(RedissonBaseMapIterator.java:66)\n\tat org.redisson.RedissonMapIterator.hasNext(RedissonMapIterator.java:23)\n\tat java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:104)\n\tat com.yudianbank.project.service.Impl.RedisServiceImpl.putClientRMapCache(RedisServiceImpl.java:132)\n\tat com.yudianbank.project.startup.InitRedisData.lambda$initAllClientData$5(InitRedisData.java:229)\n\tat java.util.HashMap.forEach(HashMap.java:1289)\n\tat com.yudianbank.project.startup.InitRedisData.initAllClientData(InitRedisData.java:228)\n\tat com.yudianbank.project.startup.InitRedisData.run(InitRedisData.java:107)\n\tat org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:732)\n\t... 6 common frames omitted\nCaused by: org.redisson.RedissonShutdownException: Redisson is shutdown\n\tat org.redisson.command.CommandAsyncService.async(CommandAsyncService.java:425)\n\tat org.redisson.command.CommandAsyncService.evalAsync(CommandAsyncService.java:401)\n\tat org.redisson.command.CommandAsyncService.evalReadAsync(CommandAsyncService.java:337)\n\tat org.redisson.RedissonMapCache.scanIterator(RedissonMapCache.java:537)\n\t... 16 common frames omitted\n"
    }

(3) Send logs to Logstash over TCP

As above, MDC entries are added to the JSON log output by default.

  1. Add the dependency

    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>4.11</version>
    </dependency>
  2. Add the following to logback.xml (this sends logs to Logstash over TCP; for other transports see https://github.com/logstash/logstash-logback-encoder/tree/logstash-logback-encoder-4.11)

    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>127.0.0.1:10800</destination>
        <!-- encoder is required -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeCallerData>true</includeCallerData>
            <customFields>{"app_name":"${appName}"}</customFields>
        </encoder>
    </appender>
  3. Start the project and observe the output in Logstash

    {
    "caller_file_name" => "DruidDataSource.java",
    "level" => "INFO",
    "caller_line_number" => 1658,
    "message" => "{dataSource-1} closed",
    "type" => "docker",
    "caller_method_name" => "close",
    "@timestamp" => 2017-11-16T08:39:49.543Z,
    "app_name" => "authority",
    "caller_class_name" => "com.alibaba.druid.pool.DruidDataSource",
    "thread_name" => "Thread-69",
    "level_value" => 20000,
    "@version" => 1,
    "host" => "172.17.0.1",
    "logger_name" => "com.alibaba.druid.pool.DruidDataSource"
    }
    {
    "caller_file_name" => "SpringApplication.java",
    "level" => "ERROR",
    "caller_line_number" => 771,
    "message" => "Application startup failed",
    "type" => "docker",
    "caller_method_name" => "reportFailure",
    "@timestamp" => 2017-11-16T08:39:47.303Z,
    "app_name" => "authority",
    "caller_class_name" => "org.springframework.boot.SpringApplication",
    "thread_name" => "main",
    "level_value" => 40000,
    "@version" => 1,
    "host" => "172.17.0.1",
    "logger_name" => "org.springframework.boot.SpringApplication",
    "stack_trace" => "java.lang.IllegalStateException: Failed to execute CommandLineRunner\n\tat org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:735)\n\tat org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:716)\n\tat org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:703)\n\tat org.springframework.boot.SpringApplication.run(SpringApplication.java:304)\n\tat org.springframework.boot.SpringApplication.run(SpringApplication.java:1118)\n\tat org.springframework.boot.SpringApplication.run(SpringApplication.java:1107)\n\tat com.yudianbank.project.YdjfBootAuthorityApplication.main(YdjfBootAuthorityApplication.java:20)\nCaused by: org.redisson.client.RedisException: Unexpected exception while processing command\n\tat org.redisson.command.CommandAsyncService.convertException(CommandAsyncService.java:267)\n\tat org.redisson.command.CommandAsyncService.get(CommandAsyncService.java:112)\n\tat org.redisson.RedissonObject.get(RedissonObject.java:55)\n\tat org.redisson.RedissonMapCache.scanIterator(RedissonMapCache.java:611)\n\tat org.redisson.RedissonMapIterator.iterator(RedissonMapIterator.java:32)\n\tat org.redisson.RedissonBaseMapIterator.hasNext(RedissonBaseMapIterator.java:66)\n\tat org.redisson.RedissonMapIterator.hasNext(RedissonMapIterator.java:23)\n\tat java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:104)\n\tat com.yudianbank.project.service.Impl.RedisServiceImpl.putClientRMapCache(RedisServiceImpl.java:132)\n\tat com.yudianbank.project.startup.InitRedisData.lambda$initAllClientData$5(InitRedisData.java:229)\n\tat java.util.HashMap.forEach(HashMap.java:1289)\n\tat com.yudianbank.project.startup.InitRedisData.initAllClientData(InitRedisData.java:228)\n\tat com.yudianbank.project.startup.InitRedisData.run(InitRedisData.java:107)\n\tat org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:732)\n\t... 6 common frames omitted\nCaused by: org.redisson.RedissonShutdownException: Redisson is shutdown\n\tat org.redisson.command.CommandAsyncService.async(CommandAsyncService.java:425)\n\tat org.redisson.command.CommandAsyncService.evalAsync(CommandAsyncService.java:401)\n\tat org.redisson.command.CommandAsyncService.evalReadAsync(CommandAsyncService.java:337)\n\tat org.redisson.RedissonMapCache.scanIterator(RedissonMapCache.java:537)\n\t... 16 common frames omitted\n"
    }

(4) Access logs

Per-request details for every HTTP request.

Fields that can be emitted (a wiring sketch follows the table):

Field                  Description
@timestamp             Time of the log event (yyyy-MM-dd'T'HH:mm:ss.SSSZZ). See customizing timezone.
@version               Logstash format version (e.g. 1). See customizing version.
@message               Message in the form ${remoteHost} - ${remoteUser} [${timestamp}] "${requestUrl}" ${statusCode} ${contentLength}
@fields.method         HTTP method
@fields.protocol       HTTP protocol
@fields.status_code    HTTP status code
@fields.requested_url  Request URL
@fields.requested_uri  Request URI
@fields.remote_host    Remote host
@fields.HOSTNAME       Another field for the remote host (not sure why this is here honestly)
@fields.remote_user    Remote user
@fields.content_length Content length
@fields.elapsed_time   Elapsed time in millis
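
These fields come from logstash-logback-encoder's access-event support, which runs inside logback-access rather than plain logback. As one possible wiring, here is a sketch for Spring Boot 1.x with embedded Tomcat (it assumes logback-access is on the classpath and that a logback-access.xml using the Logstash access encoder ships with the app; adapt to your container):

    import ch.qos.logback.access.tomcat.LogbackValve;
    import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class AccessLogConfig {
        @Bean
        public TomcatEmbeddedServletContainerFactory servletContainer() {
            TomcatEmbeddedServletContainerFactory factory = new TomcatEmbeddedServletContainerFactory();
            LogbackValve valve = new LogbackValve();
            // The filename is resolved against the Tomcat base directory by default
            valve.setFilename("logback-access.xml");
            factory.addContextValves(valve);
            return factory;
        }
    }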

(5) Complete configuration reference

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <appender name="dailyRollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>${logging.path}/app.log</File>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>${logging.path}/app.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <maxHistory>30</maxHistory>
            <maxFileSize>50MB</maxFileSize>
        </rollingPolicy>
        <!--<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">-->
        <!--<fileNamePattern>${logging.path}/app-%d{yyyyMMdd}.%i.log</fileNamePattern>-->
        <!--<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">-->
        <!--<maxFileSize>50MB</maxFileSize>-->
        <!--</timeBasedFileNamingAndTriggeringPolicy>-->
        <!--<maxHistory>30</maxHistory>-->
        <!--</rollingPolicy>-->
        <encoder>
            <!--
            %d{HH:mm:ss.SSS} - time of the log line
            %X{traceId}/%X{spanId} - reserved for tracing
            ${appName} - application name defined in the pom file
            %thread - name of the thread that produced the log line, useful in web apps and async tasks
            %-5level - log level, left-aligned to 5 characters
            %logger{35} - name of the logger
            %msg - the log message
            %n - platform line separator
            -->
            <pattern>%d{HH:mm:ss.SSS} [%X{traceId}/%X{spanId}] ${appName} [%thread] %-5level %logger{35} - %msg %n</pattern>
        </encoder>
    </appender>

    <appender name="consoleRolling" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <appender name="stash" class="net.logstash.logback.appender.LogstashSocketAppender">
        <port>10800</port>
        <includeCallerData>true</includeCallerData>
        <customFields>{"app_name":"authority"}</customFields>
    </appender>

    <!--<appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">-->
    <!--<destination>127.0.0.1:10800</destination>-->
    <!--&lt;!&ndash; encoder is required &ndash;&gt;-->
    <!--<encoder class="net.logstash.logback.encoder.LogstashEncoder">-->
    <!--<includeCallerData>true</includeCallerData>-->
    <!--<customFields>{"app_name":"${appName}"}</customFields>-->
    <!--</encoder>-->
    <!--</appender>-->

    <logger name="com.yudianbank" level="DEBUG" additivity="false">
        <appender-ref ref="dailyRollingFile"/>
        <appender-ref ref="consoleRolling"/>
        <appender-ref ref="stash"/>
    </logger>

    <root level="INFO">
        <appender-ref ref="dailyRollingFile"/>
        <appender-ref ref="consoleRolling"/>
        <appender-ref ref="stash"/>
    </root>
</configuration>

IV. Logging with log4j2

Prerequisites

  1. Define appName by adding the following to the pom file

    <properties>
        <appName>authority</appName>
    </properties>
  2. Add an ignore entry to .gitignore

    logs/

(1) Configure local log output

  1. Configure the pom file to set the output path

    <!-- Production environment -->
    <!-- Log file location -->
    <logging.path>/data/logs/${appName}</logging.path>

    <!-- Local environment -->
    <!-- Log file location -->
    <logging.path>${basedir}/logs</logging.path>
  2. In log4j2.xml, add a console appender

    <!-- Console output -->
    <Console name="Console" target="SYSTEM_OUT" ignoreExceptions="false">
        <!-- Minimum level to record -->
        <ThresholdFilter level="debug" onMatch="ACCEPT" onMismatch="DENY"/>
        <!--<PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}:%4p %t (%F:%L) - %m%n"/>-->
        <PatternLayout pattern="%d [%X{traceId}/%X{spanId}] [%thread] %-5level %logger{36} - %msg%n"/>
    </Console>
  3. In log4j2.xml, add a local rolling file appender

    <RollingFile name="dailyRollingFile" immediateFlush="true"
                 fileName="${logging.path}/app.log" filePattern="${logging.path}/app.%d{yyyy-MM-dd}.%i.log">
        <PatternLayout>
            <pattern>%d{HH:mm:ss.SSS} [%X{traceId}/%X{spanId}] ${appName} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </PatternLayout>
        <Policies>
            <OnStartupTriggeringPolicy/>
            <TimeBasedTriggeringPolicy/>
            <!-- Each log file is at most 50MB -->
            <SizeBasedTriggeringPolicy size="50MB" />
        </Policies>
    </RollingFile>
  4. Start the project and observe the log output
    1) Log file output

    13:48:07.950 [/] weixin [RMI TCP Connection(4)-127.0.0.1] INFO  com.yudianbank.framework.rpc.sdk.ConsumerInvokerListener - 队列boot.queue.com.yudianbank.api.service.api.ApiToFrontSerive创建成功!
    13:48:08.064 [/] weixin [RMI TCP Connection(4)-127.0.0.1] INFO com.yudianbank.framework.rpc.sdk.ConsumerInvokerListener - 队列boot.queue.com.yudianbank.api.service.api.ApiToApproveService创建成功!
    13:48:08.120 [/] weixin [RMI TCP Connection(4)-127.0.0.1] INFO com.yudianbank.framework.rpc.sdk.ConsumerInvokerListener - 队列boot.queue.com.yudianbank.api.service.AppMgmToWeixinService创建成功!
    13:48:08.703 [/] weixin [RMI TCP Connection(4)-127.0.0.1] INFO com.yudianbank.common.config.RedissonConfigration - redisson.yaml >> content:{"singleServerConfig":{"idleConnectionTimeout":10000,"pingTimeout":1000,"connectTimeout":10000,"timeout":3000,"retryAttempts":3,"retryInterval":1500,"reconnectionTimeout":3000,"failedAttempts":3,"subscriptionsPerConnection":5,"address":["//192.168.1.204:6379"],"subscriptionConnectionMinimumIdleSize":1,"subscriptionConnectionPoolSize":50,"connectionMinimumIdleSize":10,"connectionPoolSize":64,"database":0,"dnsMonitoring":false,"dnsMonitoringInterval":5000},"threads":0,"codec":{"class":"org.redisson.codec.JsonJacksonCodec"},"useLinuxNativeEpoll":false}
    13:48:13.204 [/] weixin [http-nio-8080-exec-1] INFO com.yudianbank.common.web.filter.BackURLFilter - http://localhost:8080/ [GET]
    13:48:13.218 [/] weixin [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - Request IP: 127.0.0.1
    13:48:13.221 [/] weixin [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - Request Header:
    13:48:13.222 [/] weixin [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - user-agent = IntelliJ IDEA/172.4343.14
    13:48:13.223 [/] weixin [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - accept-encoding = gzip
    13:48:13.224 [/] weixin [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - cache-control = no-cache
    13:48:13.225 [/] weixin [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - pragma = no-cache
    13:48:13.226 [/] weixin [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - host = localhost:8080
    13:48:13.227 [/] weixin [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - accept = text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
    13:48:13.228 [/] weixin [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - connection = keep-alive

    2) Console output

    2017-11-22 13:50:55.314 [/] [RMI TCP Connection(2)-127.0.0.1] INFO  com.yudianbank.framework.rpc.sdk.ConsumerInvokerListener - 队列boot.queue.com.yudianbank.api.service.api.ApiToFrontSerive创建成功!
    2017-11-22 13:50:55.469 [/] [RMI TCP Connection(2)-127.0.0.1] INFO com.yudianbank.framework.rpc.sdk.ConsumerInvokerListener - 队列boot.queue.com.yudianbank.api.service.api.ApiToApproveService创建成功!
    2017-11-22 13:50:55.510 [/] [RMI TCP Connection(2)-127.0.0.1] INFO com.yudianbank.framework.rpc.sdk.ConsumerInvokerListener - 队列boot.queue.com.yudianbank.api.service.AppMgmToWeixinService创建成功!
    2017-11-22 13:50:56.025 [/] [RMI TCP Connection(2)-127.0.0.1] INFO com.yudianbank.common.config.RedissonConfigration - redisson.yaml >> content:{"singleServerConfig":{"idleConnectionTimeout":10000,"pingTimeout":1000,"connectTimeout":10000,"timeout":3000,"retryAttempts":3,"retryInterval":1500,"reconnectionTimeout":3000,"failedAttempts":3,"subscriptionsPerConnection":5,"address":["//192.168.1.204:6379"],"subscriptionConnectionMinimumIdleSize":1,"subscriptionConnectionPoolSize":50,"connectionMinimumIdleSize":10,"connectionPoolSize":64,"database":0,"dnsMonitoring":false,"dnsMonitoringInterval":5000},"threads":0,"codec":{"class":"org.redisson.codec.JsonJacksonCodec"},"useLinuxNativeEpoll":false}
    [2017-11-22 01:51:00,500] Artifact root-weixin:war: Artifact is deployed successfully
    [2017-11-22 01:51:00,500] Artifact root-weixin:war: Deploy took 27,783 milliseconds
    2017-11-22 13:51:00.998 [/] [http-nio-8080-exec-1] INFO com.yudianbank.common.web.filter.BackURLFilter - http://localhost:8080/ [GET]
    2017-11-22 13:51:01.010 [/] [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - Request IP: 127.0.0.1
    2017-11-22 13:51:01.011 [/] [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - Request Header:
    2017-11-22 13:51:01.011 [/] [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - user-agent = IntelliJ IDEA/172.4343.14
    2017-11-22 13:51:01.012 [/] [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - accept-encoding = gzip
    2017-11-22 13:51:01.012 [/] [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - cache-control = no-cache
    2017-11-22 13:51:01.012 [/] [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - pragma = no-cache
    2017-11-22 13:51:01.012 [/] [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - host = localhost:8080
    2017-11-22 13:51:01.012 [/] [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - accept = text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
    2017-11-22 13:51:01.012 [/] [http-nio-8080-exec-1] DEBUG com.yudianbank.common.web.filter.BackURLFilter - connection = keep-alive

(2) Send logs to Logstash via GELF

MDC entries are added to the JSON log output by default; a short sketch follows.
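
In log4j2 the MDC is called ThreadContext; with includeFullMdc="true" in the Gelf appender below, every entry becomes a field of the GELF message. A minimal sketch (values are made up for illustration):

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;
    import org.apache.logging.log4j.ThreadContext;

    public class ThreadContextDemo {
        private static final Logger log = LogManager.getLogger(ThreadContextDemo.class);

        public static void main(String[] args) {
            ThreadContext.put("traceId", "abc123"); // hypothetical values
            ThreadContext.put("spanId", "1");
            try {
                log.info("user login ok"); // entries ride along as GELF fields
            } finally {
                ThreadContext.clearAll(); // ThreadContext is per-thread state; always clean it up
            }
        }
    }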

  1. Add the dependency

    <!-- logstash: UDP -->
    <dependency>
        <groupId>biz.paluch.logging</groupId>
        <artifactId>logstash-gelf</artifactId>
        <version>1.11.1</version>
    </dependency>
  2. Add the following to log4j2.xml

    <!-- log4j2 via logstash-gelf over UDP; reconnects automatically: https://github.com/mp911de/logstash-gelf -->
    <Gelf name="gelf" host="udp:localhost" port="12201" version="1.1" extractStackTrace="true"
          filterStackTrace="true" mdcProfiling="true" includeFullMdc="true" maximumMessageSize="8192"
          originHost="%host{fqdn}" additionalFieldTypes="fieldName1=String,fieldName2=String">
        <Field name="timestamp" pattern="%d{dd MMM yyyy HH:mm:ss.SSS}"/>
        <Field name="level" pattern="%level"/>
        <Field name="caller_class_name" pattern="%C"/>
        <Field name="caller_line_number" pattern="%L"/>
        <Field name="caller_method_name" pattern="%M"/>
        <Field name="logger_name" pattern="%c"/>
        <Field name="thread_name" pattern="%t"/>
        <!--<Field name="className" pattern="%C"/>-->
        <Field name="server" pattern="%host"/>
        <Field name="server.fqdn" pattern="%host{fqdn}"/>
        <!-- This is a static field -->
        <!--<Field name="fieldName2" literal="fieldValue2" />-->
        <!-- This is a field using MDC -->
        <!--<Field name="traceId" mdc="traceId"/>-->
        <!--<Field name="spanId" mdc="spanId"/>-->
        <!-- Static field: mdc="${appName}" would look up an MDC key named after the app, so literal is used -->
        <Field name="app_name" literal="${appName}"/>
        <DynamicMdcFields regex="mdc.*"/>
        <DynamicMdcFields regex="(mdc|MDC)fields"/>
    </Gelf>
  3. Start the project and observe the output in Logstash

    {
    "server" => "localhost",
    "source_host" => "172.17.0.1",
    "level" => "INFO",
    "caller_line_number" => 147,
    "message" => "队列boot.queue.com.yudianbank.api.service.api.ApiToApproveService创建成功!",
    "type" => "docker",
    "server.fqdn" => "localhost",
    "caller_method_name" => "afterPropertiesSet",
    "@timestamp" => 2017-11-22T07:08:02.822Z,
    "caller_class_name" => "com.yudianbank.framework.rpc.sdk.ConsumerInvokerListener",
    "thread_name" => "RMI TCP Connection(2)-127.0.0.1",
    "host" => "localhost",
    "@version" => "1",
    "logger_name" => "com.yudianbank.framework.rpc.sdk.ConsumerInvokerListener",
    "facility" => "logstash-gelf",
    "timestamp" => "22 十一月 2017 15:08:02.822"
    }
    {
    "server" => "localhost",
    "source_host" => "172.17.0.1",
    "level" => "INFO",
    "caller_line_number" => 147,
    "message" => "队列boot.queue.com.yudianbank.api.service.AppMgmToWeixinService创建成功!",
    "type" => "docker",
    "server.fqdn" => "localhost",
    "caller_method_name" => "afterPropertiesSet",
    "@timestamp" => 2017-11-22T07:08:02.858Z,
    "caller_class_name" => "com.yudianbank.framework.rpc.sdk.ConsumerInvokerListener",
    "thread_name" => "RMI TCP Connection(2)-127.0.0.1",
    "host" => "localhost",
    "@version" => "1",
    "logger_name" => "com.yudianbank.framework.rpc.sdk.ConsumerInvokerListener",
    "facility" => "logstash-gelf",
    "timestamp" => "22 十一月 2017 15:08:02.858"
    }

(3) Complete configuration reference

<?xml version="1.0" encoding="UTF-8"?>
<!-- Log4j 2.x configuration; changes to this file are detected and applied every 30 seconds -->
<Configuration status="WARN" monitorInterval="30" strict="true" schema="Log4J-V2.2.xsd">
    <Appenders>
        <!-- Console output -->
        <Console name="Console" target="SYSTEM_OUT" ignoreExceptions="false">
            <!-- Minimum level to record -->
            <ThresholdFilter level="debug" onMatch="ACCEPT" onMismatch="DENY"/>
            <!--<PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}:%4p %t (%F:%L) - %m%n"/>-->
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{traceId}/%X{spanId}] [%thread] %-5level %logger{36} - %msg%n"/>
        </Console>

        <RollingFile name="dailyRollingFile" immediateFlush="true"
                     fileName="${logging.path}/app.log" filePattern="${logging.path}/app.%d{yyyy-MM-dd}.%i.log">
            <PatternLayout>
                <pattern>%d{HH:mm:ss.SSS} [%X{traceId}/%X{spanId}] ${appName} [%thread] %-5level %logger{36} - %msg%n</pattern>
            </PatternLayout>
            <Policies>
                <OnStartupTriggeringPolicy/>
                <TimeBasedTriggeringPolicy/>
                <!-- Each log file is at most 50MB -->
                <SizeBasedTriggeringPolicy size="50MB" />
            </Policies>
        </RollingFile>

        <!-- File output, split daily or above 80MB; Logstash reads the file every 2 seconds. Verified OK. -->
        <!--<RollingFile name="RollingFile" fileName="/data/logs/logstash.log"-->
        <!--filePattern="/data/logs/$${date:yyyy-MM}/logstash-%d{yyyy-MM-dd}-%i.log.gz"-->
        <!--ignoreExceptions="false">-->
        <!--<JSONLayout complete="true" includeStacktrace="true" locationInfo="true"/>-->
        <!--<Policies>-->
        <!--<OnStartupTriggeringPolicy/>-->
        <!--<TimeBasedTriggeringPolicy/>-->
        <!--<SizeBasedTriggeringPolicy size="80 MB"/>-->
        <!--</Policies>-->
        <!--</RollingFile>-->

        <!-- Output to Logstash over TCP. Verified OK, with one drawback: after a broken connection it does not reconnect automatically.
        Logstash ships a log4j input plugin, but it only supports log4j 1.x, not log4j2, so the Logstash side uses the tcp input plugin instead;
        its parameters are documented at https://www.elastic.co/guide/en/logstash/current/plugins-inputs-tcp.html
        -->
        <!--<Socket name="logstash" host="127.0.0.1" port="5460" protocol="TCP">-->
        <!--<LogStashJSONLayout>-->
        <!--<KeyValuePair key="application_name" value="${sys:application.name}"/>-->
        <!--<KeyValuePair key="application_version" value="${sys:application.version}"/>-->
        <!--<KeyValuePair key="environment_type" value="${sys:deploy_env}"/>-->
        <!--<KeyValuePair key="cluster_location" value="${sys:cluster_location}"/>-->
        <!--<KeyValuePair key="cluster_name" value="${sys:cluster_name}"/>-->
        <!--<KeyValuePair key="hostname" value="${sys:hostname}"/>-->
        <!--<KeyValuePair key="host_ip" value="${sys:host_ip}"/>-->
        <!--<KeyValuePair key="application_user" value="${sys:user.name}"/>-->
        <!--<KeyValuePair key="environment_user" value="${env:USER}"/>-->
        <!--</LogStashJSONLayout>-->
        <!--</Socket>-->

        <!-- Direct output to Kafka. Verified OK. -->
        <!--<Kafka name="Kafka" topic="test_stash">-->
        <!--<JSONLayout complete="true" includeStacktrace="true" locationInfo="true" compact="true"/>-->
        <!--<Property name="bootstrap.servers">192.168.1.204:9092</Property>-->
        <!--</Kafka>-->

        <!-- log4j2 via logstash-gelf over UDP; reconnects automatically: https://github.com/mp911de/logstash-gelf -->
        <Gelf name="gelf" host="udp:localhost" port="12201" version="1.1" extractStackTrace="true"
              filterStackTrace="true" mdcProfiling="true" includeFullMdc="true" maximumMessageSize="8192"
              originHost="%host{fqdn}" additionalFieldTypes="fieldName1=String,fieldName2=String">
            <Field name="timestamp" pattern="%d{dd MMM yyyy HH:mm:ss.SSS}"/>
            <Field name="level" pattern="%level"/>
            <Field name="caller_class_name" pattern="%C"/>
            <Field name="caller_line_number" pattern="%L"/>
            <Field name="caller_method_name" pattern="%M"/>
            <Field name="logger_name" pattern="%c"/>
            <Field name="thread_name" pattern="%t"/>
            <!--<Field name="className" pattern="%C"/>-->
            <Field name="server" pattern="%host"/>
            <Field name="server.fqdn" pattern="%host{fqdn}"/>
            <!-- This is a static field -->
            <!--<Field name="fieldName2" literal="fieldValue2" />-->
            <!-- This is a field using MDC -->
            <!--<Field name="traceId" mdc="traceId"/>-->
            <!--<Field name="spanId" mdc="spanId"/>-->
            <!-- Static field: mdc="${appName}" would look up an MDC key named after the app, so literal is used -->
            <Field name="app_name" literal="${appName}"/>
            <DynamicMdcFields regex="mdc.*"/>
            <DynamicMdcFields regex="(mdc|MDC)fields"/>
        </Gelf>
    </Appenders>

    <Loggers>
        <!-- Global level; ALL > TRACE > DEBUG > INFO > WARN > ERROR > FATAL > OFF -->
        <Root level="FATAL">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="dailyRollingFile"/>
        </Root>
        <Logger name="com.yudianbank" level="ALL" additivity="false">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="gelf"/>
            <AppenderRef ref="dailyRollingFile"/>
        </Logger>
        <!-- Dedicated log level for SQL statements, handy for debugging -->
        <Logger name="com.p6spy" level="INFO" additivity="false">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="dailyRollingFile"/>
        </Logger>
    </Loggers>
</Configuration>