Logback Basics

1, build.sbt

name := "Avro"

version := "0.1"

scalaVersion := "2.12.8"

// https://mvnrepository.com/artifact/org.apache.avro/avro
libraryDependencies += "org.apache.avro" % "avro" % "1.8.2"

// https://mvnrepository.com/artifact/ch.qos.logback/logback-classic
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.3"
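
Logback is discovered through SLF4J, so only one SLF4J binding should be on the classpath. If another dependency ever pulled in a competing binding, it could be excluded in build.sbt; a sketch under the assumption of a hypothetical conflicting dependency (the "com.example" coordinates below are placeholders, not real artifacts):

```scala
// Hypothetical: keep logback-classic as the only SLF4J binding by excluding
// a competing binding from a (placeholder) dependency.
libraryDependencies += ("com.example" % "some-lib" % "1.0")
  .exclude("org.slf4j", "slf4j-log4j12")
```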

2, project structure

3, logback.xml

<?xml version="1.0" encoding="UTF-8"?>

<!--
    scan: when set to "true", "logback.xml" is reloaded whenever the framework detects that it has changed. The default value is "true".
    scanPeriod: the interval between two consecutive scans of "logback.xml". Effective only when "scan" is set to "true". The default value is "60 seconds".
    debug: when set to "true", logback prints its own internal status messages, so that you can debug the logback configuration itself. The default value is "false".
-->
<configuration scan="true" scanPeriod="60 seconds" debug="false">

    <!-- The root directory of the logs. -->
    <!--<property name="LOG_HOME" value="${catalina.home}/logs"/>-->
    <property name="LOG_HOME" value="D:/spark/logs"/>

    <!-- The name of the log file, before being rolled -->
    <property name="APP_NAME" value="spark"/>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">

        <!--
            Pattern for formatting each log event:
            %d: the timestamp.
            %thread: the thread name.
            %-5level: the level name, left-justified and padded with spaces to a width of 5 characters. For example, "INFO" is printed as "INFO ".
            %logger{50}: the logger name (package plus class, such as "basic.Adventure1$"); when the name is longer than 50 characters, logback abbreviates the package segments to fit.
            %line: the line number of the logging call (e.g., the line where log.debug("start...") occurs).
            %msg: the log message.
            %n: a newline.
        -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [ %thread ] - [ %-5level ] [ %logger{50} : %line ] - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>

    </appender>

    <!-- Rolling file appender. Logging first goes to the file named by <file>. When a rollover condition is met (e.g., the day ends or the file reaches a certain size, say 500MB), that file is rolled: renamed, here also compressed, and logging starts over in a fresh file under the original <file> name. -->
    <appender name="APP_LOG_APPENDER" class="ch.qos.logback.core.rolling.RollingFileAppender">

        <!-- The name of the active log file. The file is rolled when a rollover condition is triggered, such as the end of a day or the file reaching a certain size, say 500MB -->
        <file>${LOG_HOME}/${APP_NAME}.log</file>

        <!-- The rolling policy decides when to roll (here, based on both time and file size) and how the rolled files are named -->
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">

            <!--
                Pattern for the names of the rolled files:
                %d: the date of the rollover period.
                %i: an index within the period (e.g., the first file rolled during a day is indexed 0).
            -->
            <fileNamePattern>${LOG_HOME}/${APP_NAME}-%d{yyyy-MM-dd}-%i.log.gz</fileNamePattern>
            <!--<fileNamePattern>${LOG_HOME}/${APP_NAME}-%d{yyyy-MM-dd_HH-mm-ss}-%i.log.gz</fileNamePattern>-->

            <!--
                Optional. The maximum number of rollover periods to keep (days here, because %d rolls daily); logback deletes rolled files older than this.
                The following keeps rolled files for roughly one year.
            -->
            <maxHistory>365</maxHistory>

            <!-- When the active log file reaches the size below, it is rolled: compressed and renamed according to <fileNamePattern> (here, e.g., "spark-2018-12-23-0.log.gz" under ${LOG_HOME}). -->
            <maxFileSize>500MB</maxFileSize>

            <!-- Total size of the rolled log files in the directory. The framework will delete old rolled log files once the size cap is reached. -->
            <totalSizeCap>20GB</totalSizeCap>

        </rollingPolicy>

        <!--
            Pattern for formatting each log event:
            %d: the timestamp.
            %thread: the thread name.
            %-5level: the level name, left-justified and padded with spaces to a width of 5 characters. For example, "INFO" is printed as "INFO ".
            %logger{50}: the logger name (package plus class, such as "basic.Adventure1$"); when the name is longer than 50 characters, logback abbreviates the package segments to fit.
            %line: the line number of the logging call (e.g., the line where log.debug("start...") occurs).
            %msg: the log message.
            %n: a newline.
        -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [ %thread ] - [ %-5level ] [ %logger{50} : %line ] - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>

    </appender>

    <!-- Rolling file appender. Logging first goes to the file named by <file>. When a rollover condition is met (e.g., the day ends or the file reaches a certain size, say 500MB), that file is rolled: renamed, here also compressed, and logging starts over in a fresh file under the original <file> name. -->
    <appender name="ERROR_LOG_APPENDER" class="ch.qos.logback.core.rolling.RollingFileAppender">

        <!-- The name of the active log file. The file is rolled when a rollover condition is triggered, such as the end of a day or the file reaching a certain size, say 500MB -->
        <file>${LOG_HOME}/${APP_NAME}-error.log</file>

        <!-- The rolling policy decides when to roll (here, based on both time and file size) and how the rolled files are named -->
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">

            <!--
                Pattern for the names of the rolled files:
                %d: the date of the rollover period.
                %i: an index within the period (e.g., the first file rolled during a day is indexed 0).
            -->
            <fileNamePattern>${LOG_HOME}/${APP_NAME}-error-%d{yyyy-MM-dd}-%i.log.gz</fileNamePattern>

            <!--
                Optional. The maximum number of rollover periods to keep (days here, because %d rolls daily); logback deletes rolled files older than this.
                The following keeps rolled files for roughly one year.
            -->
            <maxHistory>365</maxHistory>

            <!-- When the active log file reaches the size below, it is rolled: compressed and renamed according to <fileNamePattern> (here, e.g., "spark-error-2018-12-23-0.log.gz" under ${LOG_HOME}). -->
            <maxFileSize>500MB</maxFileSize>

            <!-- Total size of the rolled log files in the directory. The framework will delete old rolled log files once the size cap is reached. -->
            <totalSizeCap>20GB</totalSizeCap>

        </rollingPolicy>

        <!-- Only log messages with level ERROR -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>

        <!--
            Pattern for formatting each log event:
            %d: the timestamp.
            %thread: the thread name.
            %-5level: the level name, left-justified and padded with spaces to a width of 5 characters. For example, "INFO" is printed as "INFO ".
            %logger{50}: the logger name (package plus class, such as "basic.Adventure1$"); when the name is longer than 50 characters, logback abbreviates the package segments to fit.
            %line: the line number of the logging call (e.g., the line where log.debug("start...") occurs).
            %msg: the log message.
            %n: a newline.
        -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [ %thread ] - [ %-5level ] [ %logger{50} : %line ] - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>

    </appender>

    <!--
        Logger levels, from lowest to highest: TRACE, DEBUG, INFO, WARN, ERROR.
        If you set the root level to DEBUG, messages at DEBUG, INFO, WARN and ERROR are logged. If you set it to INFO, only messages at INFO, WARN and ERROR are logged.
    -->
    <root level="DEBUG">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="APP_LOG_APPENDER"/>
        <appender-ref ref="ERROR_LOG_APPENDER"/>
    </root>

    <!--
        The logger '<logger name="basic" level="DEBUG" additivity="true"/>' handles all logging from the package "basic". It accepts events at level DEBUG or above and discards the rest.
        Because "additivity" is set to "true", every event it accepts is also passed up the logger hierarchy (here, to the "root" logger).
        When you run basic.Adventure1's main method, this logger is consulted first, because object "Adventure1" is in package "basic". It prints nothing itself (it has no "appender-ref" children of its own) and passes each event up to the "root" logger, which delegates it to all of its "appender-ref" children: "STDOUT" (the standard output), "APP_LOG_APPENDER" and "ERROR_LOG_APPENDER" (which write to log files and roll them when the rollover conditions are met).
    -->
    <logger name="basic" level="DEBUG" additivity="true"/>

    <!-- The following logger has no "level" attribute, so it inherits DEBUG from the logger '<logger name="basic" level="DEBUG" additivity="true"/>'. It logs to the standard output itself and, because "additivity" is "true", also passes each event up to its parent. This is why every console message in section 6 appears twice: once from this logger's "STDOUT" and once from the root logger's "STDOUT". -->
    <logger name="basic.Adventure1" additivity="true">
        <appender-ref ref="STDOUT"/>
    </logger>

</configuration>
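
The threshold rule described in the comments above (a logger handles events at its own level or higher) can be sketched in plain Scala. This is a toy model of the rule, not logback's real API:

```scala
// A minimal model of logback's level-threshold rule (not the real logback API):
// an event is delivered only when its level is at or above the logger's level.
object LevelModel {
  private val order = Vector("TRACE", "DEBUG", "INFO", "WARN", "ERROR")
  def enabled(loggerLevel: String, eventLevel: String): Boolean =
    order.indexOf(eventLevel) >= order.indexOf(loggerLevel)
}

// With the level set to INFO, DEBUG events are dropped:
// LevelModel.enabled("INFO", "DEBUG") == false
// LevelModel.enabled("INFO", "WARN")  == true
```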

4, StringPair.avsc

{
    "type": "record",
    "name": "StringPair",
    "doc": "A pair of strings.",
    "fields": [
        {"name": "left", "type": "string"},
        {"name": "right", "type": "string"}
    ]
}

5, source code:

package basic

import java.io.ByteArrayOutputStream

import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericDatumReader, GenericDatumWriter, GenericRecord}
import org.apache.avro.io.{DecoderFactory, EncoderFactory}
import org.slf4j.LoggerFactory

object Adventure1 {

  private[this] val logger = LoggerFactory.getLogger(this.getClass)

  def test1(): Unit = {

    logger.info("开始......")
    logger.debug("debug start......")

    //If this schema is saved in a file on the classpath called StringPair.avsc (.avsc is the conventional extension for an Avro schema), we can load it using the following two lines of code
    val parser = new Schema.Parser()
    val schema = parser.parse(getClass.getResourceAsStream("/StringPair.avsc"))

    //We can create an instance of an Avro record using the Generic API as follows
    val datum = new GenericData.Record(schema)
    datum.put("left", "L")
    datum.put("right", "R")

    //Next, we serialize the record to an output stream:
    val out = new ByteArrayOutputStream()
    val writer = new GenericDatumWriter[GenericRecord](schema)

    //Set up a BinaryEncoder over the ByteArrayOutputStream; passing null for the reuse parameter creates a brand-new encoder instead of reusing an existing one
    val encoder = EncoderFactory.get().binaryEncoder(out, null)
    //Writing the GenericRecord using the binaryEncoder to the ByteArrayOutputStream
    writer.write(datum, encoder)
    //Flush the encoder so that everything pending is written to the ByteArrayOutputStream and nothing is held back
    encoder.flush()
    //Closing the ByteArrayOutputStream
    out.close()

    //A reader to read Avro record of type GenericRecord with schema
    val reader = new GenericDatumReader[GenericRecord](schema)
    //Create a new binaryDecoder and do not reuse an existing one, to read and decode from ByteArrayOutputStream
    val decoder = DecoderFactory.get().binaryDecoder(out.toByteArray, null)
    val result = reader.read(null, decoder)

    println(result.get("left").toString)
    println(result.get("right").toString)

    logger.info("结束......")
    logger.debug("debug end......")

  }

  def main(args: Array[String]): Unit = {

    test1

  }

}
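
The bytes written to `out` can be reasoned about by hand. Assuming the Avro specification's binary encoding (a record is just its fields encoded in order with no markers, and a string is a zig-zag varint byte length followed by its UTF-8 bytes, where the varint of a small non-negative length n is the single byte 2*n), the serialized StringPair("L", "R") is only four bytes. A sketch, independent of the Avro library:

```scala
// Sketch: hand-encode what GenericDatumWriter produces for StringPair("L","R"),
// assuming Avro's binary encoding rules (zig-zag varint length + UTF-8 bytes;
// record fields are concatenated with no field markers).
object AvroBytesSketch {
  def encodeShortString(s: String): Array[Byte] = {
    val utf8 = s.getBytes("UTF-8")
    require(utf8.length < 64, "single-byte varint only in this sketch")
    (2 * utf8.length).toByte +: utf8 // zig-zag varint of small non-negative n is 2*n
  }
  def stringPairBytes(left: String, right: String): Array[Byte] =
    encodeShortString(left) ++ encodeShortString(right)
}

// AvroBytesSketch.stringPairBytes("L", "R") is Array(0x02, 'L', 0x02, 'R')
```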

6, console output:

"C:\Program Files\Java\jdk1.8.0_144\bin\java.exe" "-javaagent:D:\IntelliJ IDEA\IntelliJ IDEA 2018.3.2\lib\idea_rt.jar=52158:D:\IntelliJ IDEA\IntelliJ IDEA 2018.3.2\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_144\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_144\jre\lib\resources.jar;C:\Program 
Files\Java\jdk1.8.0_144\jre\lib\rt.jar;E:\Portfolio\Avro\target\scala-2.12\classes;C:\Users\Administrator\.ivy2\cache\com.thoughtworks.paranamer\paranamer\bundles\paranamer-2.7.jar;C:\Users\Administrator\.ivy2\cache\org.slf4j\slf4j-api\jars\slf4j-api-1.7.25.jar;C:\Users\Administrator\.ivy2\cache\org.xerial.snappy\snappy-java\bundles\snappy-java-1.1.1.3.jar;C:\Users\Administrator\.ivy2\cache\org.tukaani\xz\jars\xz-1.5.jar;C:\Users\Administrator\.ivy2\cache\org.scala-lang\scala-library\jars\scala-library-2.12.8.jar;C:\Users\Administrator\.ivy2\cache\org.codehaus.jackson\jackson-mapper-asl\jars\jackson-mapper-asl-1.9.13.jar;C:\Users\Administrator\.ivy2\cache\org.codehaus.jackson\jackson-core-asl\jars\jackson-core-asl-1.9.13.jar;C:\Users\Administrator\.ivy2\cache\org.apache.commons\commons-compress\jars\commons-compress-1.8.1.jar;C:\Users\Administrator\.ivy2\cache\org.apache.avro\avro\bundles\avro-1.8.2.jar;C:\Users\Administrator\.ivy2\cache\ch.qos.logback\logback-classic\jars\logback-classic-1.2.3.jar;C:\Users\Administrator\.ivy2\cache\ch.qos.logback\logback-core\jars\logback-core-1.2.3.jar" basic.Adventure1
2018-12-23 20:10:38.189 [ main ] - [ INFO  ] [ basic.Adventure1$ : 16 ] - 开始......
2018-12-23 20:10:38.189 [ main ] - [ INFO  ] [ basic.Adventure1$ : 16 ] - 开始......
2018-12-23 20:10:38.189 [ main ] - [ DEBUG ] [ basic.Adventure1$ : 17 ] - debug start......
2018-12-23 20:10:38.189 [ main ] - [ DEBUG ] [ basic.Adventure1$ : 17 ] - debug start......
L
R
2018-12-23 20:10:38.689 [ main ] - [ INFO  ] [ basic.Adventure1$ : 50 ] - 结束......
2018-12-23 20:10:38.689 [ main ] - [ INFO  ] [ basic.Adventure1$ : 50 ] - 结束......
2018-12-23 20:10:38.689 [ main ] - [ DEBUG ] [ basic.Adventure1$ : 51 ] - debug end......
2018-12-23 20:10:38.689 [ main ] - [ DEBUG ] [ basic.Adventure1$ : 51 ] - debug end......

Process finished with exit code 0

7, file (E:/tmp/logs/avro.log) output:

2018-12-23 18:19:58.425 [ main ] - [ INFO  ] [ basic.Adventure1$ : 16 ] - 开始......
2018-12-23 18:19:58.425 [ main ] - [ DEBUG ] [ basic.Adventure1$ : 17 ] - debug start......
2018-12-23 18:19:58.918 [ main ] - [ INFO  ] [ basic.Adventure1$ : 50 ] - 结束......
2018-12-23 18:19:58.918 [ main ] - [ DEBUG ] [ basic.Adventure1$ : 51 ] - debug end......

8, files in the specified directory:
