Getting started with Spring Boot, MyBatis, Swagger-UI, Prometheus, and logback, plus an understanding of IoC and AOP

This article introduces the project structure of a Spring Boot + MyBatis application and commonly used annotations such as @Controller, @RestController, and @Autowired, and shows how to generate API documentation with Swagger-UI. It also covers configuring and using Prometheus metrics, logback logging configuration and writing logs to Kafka, Spring's IoC and AOP concepts, and how Tomcat works.

1. Spring Boot + MyBatis

1. Project structure

I usually split a project into the following modules:

① config: classes holding various configuration;

② controller: API entry points;

③ service: business logic;

④ entity: entity classes corresponding to database tables or other objects;

⑤ mapper: the SQL statements; I am not in the habit of writing mapper.xml files and find annotations more convenient;

⑥ util: utility classes of all kinds;

2. Common annotations

1. @Controller
Marks the class as a controller. For a method's return value to be visible in a page request, you must also add @ResponseBody; otherwise nothing is returned.
2. @RestController
With this annotation, none of the controller's methods need @ResponseBody anymore.
3. @Autowired
Automatic injection. Normally creating an object takes Object xx = new Xxx(); once those classes are managed as Spring beans, this annotation replaces the explicit new.
4. @RequestMapping("/api")
This is the parent of @GetMapping and @PostMapping. An endpoint may take GET or POST requests, so I like to put this on the class as the base request path and let each method narrow down the HTTP method.
5. @GetMapping and @PostMapping
The @GetMapping annotation:
    
	@GetMapping("/getone")
    @ResponseBody
    @ApiOperation("获取一个用户")
    private String getuser(@RequestParam("name") String name){
        return us.getone(name);
    }

With a @GetMapping like the one above, the request looks like ip:port/xxx/getone?name=aa: values are appended to the request path as ?field=value. Data passed this way is clear and visible, but not secure. Inside the method, @RequestParam("name") receives the value. That is a GET request.
The @PostMapping annotation:

	@PostMapping(value = "/insertjdbcone")
    @ResponseBody
    @ApiOperation("存入一个用户")
    private String insertjdbcone(@RequestParam Integer id,@RequestParam String name,@RequestParam String sex){
        int row = us.insertjdbcone(id,name,sex);
        return "ok" + " " + row;
    }

For this POST request you can no longer pass values simply by typing ?name=aa in a browser; you issue a POST request like this:
    curl --location --request POST 'localhost:1107/userinfo/insertjdbcone?id=22&name=333&sex=男'
Here @RequestParam receives the values. With @RequestBody instead, the values can only be carried in the request body, which makes the request more secure:
    
 @PostMapping(value = "/insertjdbcone2")
    @ResponseBody
    @ApiOperation("获取存入一个jdbc用户")
    private String insertjdbcone2(@RequestBody UserEntity userEntity){
        int row = us.insertjdbcone(userEntity.getId(),userEntity.getName(),userEntity.getSex());
        return "ok" + " " + row;
    }

This request is then issued as:
    curl --location --request POST 'localhost:1107/userinfo/insertjdbcone2' \
--header 'Content-Type: application/json' \
--data-raw '{"id":1234,"name":"1234","sex":"女"}'
6. @Component
As I understand it, this annotation carries no specific business meaning or function by itself; it is just a marker telling the Spring container that the class is a component. To express a more specific role on top of @Component, use the more specialized annotations such as @Service, @Repository, or @Controller.
7. @PostConstruct
1. An annotation in the Spring framework for marking a method. After a class has been instantiated as a bean by the Spring container, the method marked with @PostConstruct is called during the bean's initialization phase, after the constructor has run and dependency injection has completed. It can be used for logic that must run right after bean initialization, such as initializing database connections, loading configuration files, or starting scheduled tasks.

2. A method annotated with @PostConstruct must satisfy the following conditions:
it must not take any parameters;
its return type must be void;
it must not throw any checked exceptions.
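The lifecycle order can be illustrated with a plain-Java simulation (Spring performs these steps itself at startup; the class and method names here are made up purely for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java simulation of the order Spring follows for a bean:
// constructor -> dependency injection -> @PostConstruct init method.
// Spring drives these calls itself; here we call them by hand to show the order.
public class LifecycleDemo {
    static List<String> steps = new ArrayList<>();

    static class MyBean {
        String dependency; // would be @Autowired in a real bean

        MyBean() { steps.add("constructor"); }

        void inject(String dep) { this.dependency = dep; steps.add("inject"); }

        // would carry @PostConstruct: no args, void return, no checked exceptions
        void init() { steps.add("postConstruct: ready with " + dependency); }
    }

    public static List<String> run() {
        steps.clear();
        MyBean bean = new MyBean(); // 1. instantiate
        bean.inject("dataSource");  // 2. populate dependencies
        bean.init();                // 3. @PostConstruct-style callback
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```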
8. @Bean
1. The @Bean annotation marks a method as producing a bean object, which is then handed to Spring for management. Spring calls the producing method only once and afterwards keeps the bean in its IoC container.
2. @Component, @Repository, @Controller, and @Service can only go on classes you wrote yourself, whereas @Bean can put instances of third-party library classes into the IoC container and have Spring manage them.
3. Another benefit of @Bean is that a bean can be built dynamically, yielding different bean objects in different environments.

Example: this Docket is a bean built from an imported third-party package and registered in the Spring IoC container:
	@Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.example.demo5.controller")) // package path to scan
                .paths(PathSelectors.any())
                .build()
                .apiInfo(apiInfo());
    }
9. @Configuration
Marks the class as a configuration class. Such a class tells the Spring framework how to create and configure objects (beans) and how to organize and manage them; informally, it feels like it exists to manage the classes and methods carrying the bean annotations described above.
10. @TableName("userinfo")
Provided by the MyBatis-Plus framework; it declares which table in your JDBC database this entity class corresponds to, with table columns matching the class fields.
11. @Data
Automatically generates getters, setters, toString, hashCode, and related methods (Lombok).
12. @TableField
Used together with @TableName; it is a concrete definition for a single class field. Some fields may be absent from the table or have no value; use this annotation to configure them precisely.
13. @Select, @Delete, etc.
Used in mapper interfaces to execute SQL.
14. @Value("${JAVA_HOME:1.0.1}")
Placed on a class field to initialize it. It can also pick up the value from an environment variable or a configuration file; if none is set, the default after the colon applies, as in the example above.
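The `${NAME:default}` syntax can be sketched in plain Java (this only mimics the environment-variable-with-default part; Spring's real resolver also consults application.yml and other property sources):

```java
// Minimal sketch of the `${NAME:default}` semantics behind
// @Value("${JAVA_HOME:1.0.1}"): look the name up in the environment,
// fall back to the default after the colon when it is unset.
public class PlaceholderDemo {
    public static String resolve(String placeholder) {
        // strip the ${ ... } wrapper
        String body = placeholder.substring(2, placeholder.length() - 1);
        int colon = body.indexOf(':');
        String name = colon >= 0 ? body.substring(0, colon) : body;
        String def  = colon >= 0 ? body.substring(colon + 1) : null;
        String value = System.getenv(name);
        return value != null ? value : def;
    }

    public static void main(String[] args) {
        // a (presumably) unset variable falls back to the default
        System.out.println(resolve("${SOME_UNSET_VAR_XYZ_123:1.0.1}"));
    }
}
```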

3. Usage examples

1. The controller class
//This class updates some Prometheus metrics and calls the service-layer logic; a GET request
@RequestMapping("/current")
@RestController
public class HealthyController {
    @Autowired
    private HealthyService hs;
    @Autowired
    private PrometheusMetricsConfig pmc;

    @GetMapping("/status")
    private String currenthealth(){
        pmc.getHealthyCount().increment();
        pmc.getUserCount().increment(2);
        pmc.getAmountSum().record(3);
        pmc.getThreadValue().set(1111.11);
        return hs.checkhealth();
    }
    
}
//A POST request
	@Autowired
    UserinfoService us;
        
    @PostMapping(value = "/insertjdbcone2")
    @ResponseBody
    @ApiOperation("获取存入一个jdbc用户")
    private String insertjdbcone2(@RequestBody UserEntity userEntity){
        int row = us.insertjdbcone(userEntity.getId(),userEntity.getName(),userEntity.getSex());
        return "ok" + " " + row;
    }


//Tokens, auth info, and the like usually travel in the headers; inside a method, @RequestHeader("token") retrieves them. Client information received by the endpoint is available from HttpServletRequest request, and anything returned to the client goes into HttpServletResponse response
@RestController
@RequestMapping("token")
public class TokenController {
    private Logger logger =  LoggerFactory.getLogger(TokenController.class);

    @Autowired
    TokenService ts;

    @PostMapping("check")
    private ResponseEntity<String> CheckToken(@RequestHeader("Authorization") String token, @RequestParam String id, @RequestParam String name, HttpServletRequest request, HttpServletResponse response){
        for (Cookie cookie : request.getCookies()) {
            System.out.println(cookie.getValue());
            System.out.println(cookie.getDomain());
            System.out.println(cookie.getComment());
        }

    	Cookie cookie = new Cookie("passwd", "Ad2l8~b34");
        cookie.setMaxAge(3600); // 设置 Cookie 的过期时间,单位为秒
        cookie.setPath("/"); // 设置 Cookie 的路径,根路径下的所有请求都会带上该 Cookie
        response.addCookie(cookie);
        System.out.println(request.getHeader("X-Real-IP"));
        System.out.println(request.getRequestURI().toString());
        if (isValidToken(token)) {
            this.logger.info("token is right!");
            return ResponseEntity.ok("Success" + id + " " + name );
        } else {
            this.logger.error("you use error token!");
            return ResponseEntity.status(HttpStatus.UNAUTHORIZED).body("Invalid token");
        }
    }

    private boolean isValidToken(String token) {
        return "okok".equals(token);
    }
}
2. The service class
//The controller above calls this service class, which runs the concrete logic; here it goes to the mapper class to execute the corresponding SQL
@Service
public class UserinfoService {
    @Autowired
    UserMapper um;

    public int insertjdbcone(Integer id,String name,String sex){
        return um.insertA(id,name,sex);
    }
}
3. The mapper class
//The mapper extends BaseMapper and implements its methods. I prefer writing the SQL here; you can use annotated methods or a mapper.xml file
@Component
public interface UserMapper extends BaseMapper<UserEntity> {

    @Override
    default UserEntity selectOne(Wrapper<UserEntity> queryWrapper) {
        return null;
    }

    @Override
    int insert(UserEntity entity);

    //Mind how values are passed: receive them with @Param, reference them as #{xxx}, and when a condition is awkward to compose, tricks like where 1=1 help
    @Select("select * from mytest where id=#{id}")
    UserEntity getA(@Param("id") Integer id);

    @Insert("insert into mytest values (#{id},#{name},#{sex})")
    Integer insertA(@Param("id")Integer id,@Param("name")String name,@Param("sex") String sex);
}
4. Multi-environment application.yml
The default file is application.yml; the others take suffixed names such as application-pre.yml
server:
  port: 1106
  forward-headers-strategy: native
spring:
  main:
    allow-bean-definition-overriding: true
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://xxxx/testnz?useUnicode=true&characterEncoding=utf8
    username: root
    password: xxxx
  profiles:
    active: $profiles.active$
  application:
    name: All
Add the following to the pom file. The default is then the local environment; when packaging you can pick the environment in IDEA, and when running you can pass spring.profiles.active=prod to choose which environment's configuration file is used
<build>
        <plugins>
            <!--要加plugin-->
            <plugin>
                <artifactId>maven-resources-plugin</artifactId>
                <configuration>
                    <delimiters>
                        <delimiter>$</delimiter>
                    </delimiters>
                    <useDefaultDelimiters>false</useDefaultDelimiters>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <excludes>
                        <exclude>
                            <groupId>org.projectlombok</groupId>
                            <artifactId>lombok</artifactId>
                        </exclude>
                    </excludes>
                </configuration>
            </plugin>
        </plugins>

        <!--设置resource资源-->
        <resources>
            <resource>
                <directory>src/main/resources</directory>
                <filtering>true</filtering>
                <includes>
                    <include>application-${profiles.active}.yml</include>
                    <include>application.yml</include>
                    <include>*</include>
                </includes>
            </resource>
        </resources>
    </build>

    <!--配置环境映射-->
    <profiles>
        <profile>
            <!-- 生产环境 -->
            <id>prod</id>
            <properties>
                <profiles.active>prod</profiles.active>
            </properties>
        </profile>
        <profile>
            <!-- 本地开发环境 -->
            <id>local</id>
            <properties>
                <profiles.active>local</profiles.active>
            </properties>
            <activation>
                <activeByDefault>true</activeByDefault>
            </activation>
        </profile>
        <profile>
            <!-- 测试环境 -->
            <id>ppe</id>
            <properties>
                <profiles.active>ppe</profiles.active>
            </properties>
        </profile>
    </profiles>
</project>
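Under the setup above, selecting a profile can be sketched from the command line like this (the jar name is hypothetical):

```shell
# build with the prod profile selected (Maven -P flag)
mvn clean package -P prod

# or override the active profile at runtime, whatever was baked in at build time
java -jar target/demo.jar --spring.profiles.active=prod
```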
5. logback logging configuration

1. Dependencies:

		<dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.2.3</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba.fastjson2</groupId>
            <artifactId>fastjson2</artifactId>
            <version>2.0.32</version>
        </dependency>

2. Create logback.xml under resources; in some setups it must be named logback-spring.xml to take effect

① Logging to a single file

<configuration>
    <include resource="org.springframework.boot.logging.logback.defaults.xml"/>
    <!-- 控制台输出 -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- 文件输出 -->
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>app.log</file>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- 日志级别设置 -->
    <root level="INFO">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE" />
    </root>
</configuration>

② Rolling the log file by date

<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>./myapp.log</file>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./myapp-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
    </appender>

    <logger name="com.example.myapp" level="debug" additivity="false">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE" />
    </logger>

    <root level="debug">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE" />
    </root>
</configuration>

3. Usage

private Logger logger =  LoggerFactory.getLogger(TokenController.class);
this.logger.error("you use error token!");
6. Writing logs to Kafka automatically

1. logback.xml configuration

<configuration>
    <include resource="org.springframework.boot.logging.logback.defaults.xml"/>
    <!-- 控制台输出 -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- 文件输出 -->
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>file.log</file>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- 日志级别设置 -->
    <root level="INFO">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE" />
    </root>


    <springProfile name="default">
        <root level="debug">
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="FILE" />
        </root>
    </springProfile>


    
	 <!-- the class here is a custom appender you have to write yourself -->
    <springProfile name="local">
        <appender name="iclickKafka"
                  class="com.example.demo5.util.KafkaAppender">
            <topic>AA_retry</topic>
            <kafkaProducerProperties>
                bootstrap.servers=10.11.40.102:9092,10.11.40.101:9092,10.11.40.103:9092
                value.serializer=org.apache.kafka.common.serialization.StringSerializer
                key.serializer=org.apache.kafka.common.serialization.StringSerializer
            </kafkaProducerProperties>
            <service>nnnnzzzzz</service>
            <dockerHost>1111.1111.1111.1111</dockerHost>
            <logToSystemOut>false</logToSystemOut>
        </appender>
        <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
            <neverBlock>true</neverBlock>
            <includeCallerData>true</includeCallerData>
            <discardingThreshold>0</discardingThreshold>
            <queueSize>2048</queueSize>
            <appender-ref ref="iclickKafka" />
        </appender>
        <root level="INFO">
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="FILE" />
            <appender-ref ref="ASYNC"/>
        </root>
    </springProfile>

    <springProfile name="prod">
        <appender name="iclickKafka"
                  class="com.parllay.service.common.kafkaAppender.KafkaAppender">
            <topic>fht_logToEs</topic>
            <kafkaProducerProperties>
                bootstrap.servers=10.11.20.51:9092,10.11.20.52:9092,10.11.20.53:9092,10.11.20.54:9092,10.11.20.55:9092
                value.serializer=org.apache.kafka.common.serialization.StringSerializer
                key.serializer=org.apache.kafka.common.serialization.StringSerializer
            </kafkaProducerProperties>
            <service>Data-Analysis</service>
            <dockerHost>${POD_NAME}</dockerHost>
            <logToSystemOut>false</logToSystemOut>
        </appender>
        <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
            <neverBlock>true</neverBlock>
            <includeCallerData>true</includeCallerData>
            <discardingThreshold>0</discardingThreshold>
            <queueSize>2048</queueSize>
            <appender-ref ref="iclickKafka" />
        </appender>
        <root level="INFO">
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="FILE" />
            <appender-ref ref="ASYNC"/>
        </root>
    </springProfile>
</configuration>

2. The log classes

//Log message formatting class
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.classic.spi.StackTraceElementProxy;
import com.alibaba.fastjson2.JSON;
import com.alibaba.fastjson2.JSONObject;

import java.io.IOException;
import java.io.StringReader;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class JsonFormatter {

    private boolean expectJsonMessage = false;
    private boolean includeMethodAndLineNumber = false;
    private Map extraPropertiesMap = null;

    //event is the logging event; service and dockerHost are passed in from the logback configuration
    public String format(ILoggingEvent event, String service, String dockerHost) {
        JSONObject jsonObject = new JSONObject();
        jsonObject.put("level",event.getLevel().levelStr);
        jsonObject.put("message",event.getMessage());
        jsonObject.put("ip",dockerHost);
        jsonObject.put("service",service);
        if (expectJsonMessage) {
            Object object = JSON.parse(event.getFormattedMessage());
            jsonObject.put("message", object);
        } else {
            jsonObject.put("message", event.getFormattedMessage());
        }
        if (includeMethodAndLineNumber) {
            StackTraceElement[] callerDataArray = event.getCallerData();
            if (callerDataArray != null && callerDataArray.length > 0) {
                StackTraceElement stackTraceElement = callerDataArray[0];
                jsonObject.put("method", stackTraceElement.getMethodName());
                jsonObject.put("lineNumber", stackTraceElement.getLineNumber() + "");
            }
        }
        if ("ERROR".equals(event.getLevel().levelStr)) {
            jsonObject.put("error", handleErrorInfo(event));
        }

        if (this.extraPropertiesMap != null) {
            jsonObject.putAll(extraPropertiesMap);
        }
        return jsonObject.toJSONString();
    }

    private String handleErrorInfo(ILoggingEvent event) {
        StringBuilder error = new StringBuilder();
        if (event.getThrowableProxy() != null && event.getThrowableProxy().getStackTraceElementProxyArray() != null) {
            if (event.getThrowableProxy().getClassName() != null)
                error.append(event.getThrowableProxy().getClassName()).append("\n");
            StackTraceElementProxy[] array = event.getThrowableProxy().getStackTraceElementProxyArray();
            for (StackTraceElementProxy stackTraceElementProxy : array) {
                error.append(stackTraceElementProxy.getSTEAsString()).append("\n");
            }
        }
        return error.toString();
    }

    public boolean getExpectJsonMessage() {
        return expectJsonMessage;
    }

    public void setExpectJsonMessage(boolean expectJsonMessage) {
        this.expectJsonMessage = expectJsonMessage;
    }

    public boolean getIncludeMethodAndLineNumber() {
        return includeMethodAndLineNumber;
    }

    public void setIncludeMethodAndLineNumber(boolean includeMethodAndLineNumber) {
        this.includeMethodAndLineNumber = includeMethodAndLineNumber;
    }

    public void setExtraProperties(String thatExtraProperties) {
        final Properties properties = new Properties();
        try {
            properties.load(new StringReader(thatExtraProperties));
            Enumeration<?> enumeration = properties.propertyNames();
            extraPropertiesMap = new HashMap();
            while (enumeration.hasMoreElements()) {
                String name = (String) enumeration.nextElement();
                String value = properties.getProperty(name);
                extraPropertiesMap.put(name, value);
            }
        } catch (IOException e) {
            System.out.println("There was a problem reading the extra properties configuration: " + e.getMessage());
            e.printStackTrace();
        }
    }
}
//Custom Kafka appender class
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.StringReader;
import java.util.Properties;

public class KafkaAppender extends AppenderBase<ILoggingEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaAppender.class);
    private JsonFormatter formatter = new JsonFormatter();
    private boolean logToSystemOut = false;
    private String kafkaProducerProperties;
    private String topic;
    private String service;
    private String dockerHost;
    private KafkaProducer producer;

    @Override
    public void start() {
        super.start();
        LOGGER.info("Starting KafkaAppender...");
        final Properties properties = new Properties();
        try {
            properties.load(new StringReader(kafkaProducerProperties));
            producer = new KafkaProducer<>(properties);
        } catch (Exception exception) {
            System.out.println("KafkaAppender: Exception initializing Producer. " + exception + " : " + exception.getMessage());
            exception.printStackTrace();
            throw new RuntimeException("KafkaAppender: Exception initializing Producer.", exception);
        }
        System.out.println("KafkaAppender: Producer initialized: " + producer);
        if (topic == null) {
            LOGGER.error("KafkaAppender requires a topic. Add this to the appender configuration.");
            System.out.println("KafkaAppender requires a topic. Add this to the appender configuration.");
        } else {
            LOGGER.info("KafkaAppender will publish messages to the '{}' topic.", topic);
            System.out.println("KafkaAppender will publish messages to the '" + topic + "' topic.");
        }
        LOGGER.info("kafkaProducerProperties = {}", kafkaProducerProperties);
        LOGGER.info("Kafka Producer Properties = {}", properties);
        if (logToSystemOut) {
            System.out.println("KafkaAppender: kafkaProducerProperties = '" + kafkaProducerProperties + "'.");
            System.out.println("KafkaAppender: properties = '" + properties + "'.");
        }
    }

    @Override
    public void stop() {
        super.stop();
        LOGGER.info("Stopping KafkaAppender...");
        producer.close();
    }

    @Override
    protected void append(ILoggingEvent event) {
        String string = this.formatter.format(event, this.service, this.dockerHost);
        if (logToSystemOut) {
            System.out.println("KafkaAppender: Appending string: '" + string + "'.");
        }
        try {
            ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topic, string);
            producer.send(producerRecord);
        } catch (Exception e) {
            System.out.println("KafkaAppender: Exception sending message: '" + e + " : " + e.getMessage() + "'.");
            e.printStackTrace();
        }
    }

    public JsonFormatter getFormatter() {
        return formatter;
    }

    public void setFormatter(JsonFormatter formatter) {
        this.formatter = formatter;
    }

    public String getTopic() {
        return topic;
    }

    public void setTopic(String topic) {
        this.topic = topic;
    }

    public String getLogToSystemOut() {
        return logToSystemOut + "";
    }

    public void setLogToSystemOut(String logToSystemOutString) {
        if ("true".equalsIgnoreCase(logToSystemOutString)) {
            this.logToSystemOut = true;
        }
    }

    public String getService() {
        return service;
    }

    public void setService(String service) {
        this.service = service;
    }

    public String getDockerHost() {
        return dockerHost;
    }

    public void setDockerHost(String dockerHost) {
        this.dockerHost = dockerHost;
    }

    public String getKafkaProducerProperties() {
        return kafkaProducerProperties;
    }

    public void setKafkaProducerProperties(String kafkaProducerProperties) {
        this.kafkaProducerProperties = kafkaProducerProperties;
    }
}

2. Using Swagger-UI

1. Add the dependency

 		<!-- Swagger UI -->
            <dependency>
                <groupId>io.springfox</groupId>
                <artifactId>springfox-boot-starter</artifactId>
                <version>3.0.0</version>
            </dependency>

2. Configuration classes

There are two configuration classes:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

/**
 * @time:2023/5/25
 * @author:Zheng
 * @desc: Swagger configuration
 */
@Configuration
@EnableSwagger2
public class SwaggerConfig {
    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
               .apis(RequestHandlerSelectors.basePackage("com.example.demo5.controller")) // 设置扫描的包路径
                .paths(PathSelectors.any())
                .build()
                .apiInfo(apiInfo());
    }

    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                .title("Swagger API Documentation")
                .description("API documentation for Spring Boot project")
                .version("1.0.0")
                .build();
    }
}
//nothing else needs to be added
@Configuration
public class SpringMmcConfig extends DelegatingWebMvcConfiguration {}

3. If a controller endpoint needs a description, add the annotation

@ApiOperation("这个接口是用来做XXX的")

3. Custom Prometheus metrics

1. Add the dependencies

		 <!-- Prometheus dependency -->
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-prometheus</artifactId>
        </dependency>
         <!-- this dependency provides the default-value holder needed by the gauge metric below -->
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>31.1-jre</version>
            <scope>compile</scope>
        </dependency>


The following configuration must go into application.yml or application.properties, at the same level as spring, to expose the metrics. The port is up to you; here it is 1110, so the metrics end up at localhost:1110/actuator/prometheus:
# expose the metrics endpoint; the port is custom, here 1110
management:
  server:
    port: 1110
  endpoint:
    prometheus:
      enabled: true
  endpoints:
    web:
      exposure:
        include: prometheus,health

2. The code

1. The configuration class
import com.google.common.util.concurrent.AtomicDouble;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.DistributionSummary;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.distribution.Histogram;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * @time:2023/6/28
 * @author:Zheng
 * @desc: list of metrics exposed to Prometheus
 */
@Component
public class PrometheusMetricsConfig {

    //The metric fields; every controller can update any of them, but it's best for different business flows to use differently named metrics
    //they are all exposed together
    //the common metric types are counter, summary, and gauge
    private Counter orderCount;
    private DistributionSummary amountSum;
    private Counter userCount;
    private Counter healthyCount;
    private Gauge threadCount;

    //if the metric fields show a warning, it can be ignored
    @Autowired
    private MeterRegistry registry;
    private AtomicDouble threadValue = new AtomicDouble(9999.99);

    //registration/initialization; two registration styles are shown. Note the gauge has a default value, here set to the 9999.99 above
    @PostConstruct
    private void init() {
        orderCount = registry.counter("order_request_count", "order", "test-svc");
        amountSum = registry.summary("order_amount_sum", "orderAmount", "test-svc");
        userCount = registry.counter("user_count_is","user","iclick");
        //healthyCount = registry.counter("healthy_count","healthy","iclick");
        threadCount = Gauge.builder("thread_count",threadValue::get)
                .description("Current memory usage")
                .tags("application", "iclick")
                .register(registry);

        healthyCount = Counter.builder("healthy_count").description("healthy select count")
                .tags("select user", "iclick")
                .register(registry);
    }

    //the getters below let the controller or service layer update the values
    public Counter getOrderCount() {
        return orderCount;
    }

    public DistributionSummary getAmountSum() {
        return amountSum;
    }

    public Counter getUserCount(){
        return userCount;
    }

    public Counter getHealthyCount(){
        return healthyCount;
    }

    public AtomicDouble getThreadValue(){
        return threadValue;
    }

    public Gauge getThreadCount(){
        return threadCount;
    }
}
2. How to use it
@RequestMapping("/current")
@RestController
public class HealthyController {
    @Autowired
    private HealthyService hs;
    
    //inject the configuration class
    @Autowired
    private PrometheusMetricsConfig pmc;

    @RequestMapping("/status")
    private String currenthealth(){
        //Update the metric values here, otherwise they keep their defaults; see each call for the exact style. The gauge type is different: you modify the backing AtomicDouble value and the gauge picks the change up automatically; otherwise you would have to deregister and re-register it.
        pmc.getHealthyCount().increment();
        pmc.getUserCount().increment(2);
        pmc.getAmountSum().record(3);
        pmc.getThreadValue().set(1111.11);
        return hs.checkhealth();
    }
}

4. IoC and AOP

Reference: "IoC and AOP, asked in hundreds of interviews and still unclear?" - Zhihu (zhihu.com)

My simplistic personal understanding is that each is a way of thinking:
IoC: hand your classes over to Spring to manage, and take one out whenever you need it;
AOP: a bit like extracting a common method, similar to an interface being implemented by whoever needs it;
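The AOP idea can be sketched without Spring using a JDK dynamic proxy (the interface and class names here are made up for illustration; Spring AOP uses this same proxy mechanism for interface-based beans):

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of AOP: the "aspect" (logging here) is woven around
// every call to the target object without the target knowing about it.
public class AopDemo {
    interface UserService { String getOne(String name); }

    static class UserServiceImpl implements UserService {
        public String getOne(String name) { return "user:" + name; }
    }

    static List<String> log = new ArrayList<>();

    public static UserService withLogging(UserService target) {
        return (UserService) Proxy.newProxyInstance(
                UserService.class.getClassLoader(),
                new Class<?>[]{UserService.class},
                (proxy, method, args) -> {
                    log.add("before " + method.getName());   // "advice" before the call
                    Object result = method.invoke(target, args);
                    log.add("after " + method.getName());    // "advice" after the call
                    return result;
                });
    }

    public static void main(String[] args) {
        UserService us = withLogging(new UserServiceImpl());
        System.out.println(us.getOne("aa"));
        System.out.println(log);
    }
}
```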

5. Tomcat

When we use Spring Boot we do not access our endpoints directly; in between sits a default server, Tomcat. It is embedded and can be swapped out; you barely notice it, but it is doing real work.

Tomcat has two core functions:
1. Handling socket connections, converting between network byte streams and Request/Response objects.
2. Loading and managing servlets, and actually processing Request calls.
For these two functions, Tomcat has two corresponding core components: the Connector and the Container.

Roughly, the process is: the Connector accepts the connection request and creates Request and Response objects for exchanging data with the client, then assigns a thread for the Engine (the servlet container) to handle the request, passing it the Request and Response objects. Once the Engine has processed the request, the response also goes back to the client through the Connector.

The Engine is the top-level container (Container) in Tomcat; the other child containers can be seen through the Container class: Engine, Host, Context, Wrapper. The Engine's child container is the Host, the Host's child is the Context, and the Context's child is the Wrapper, so the four containers form a parent-child chain: Engine > Host > Context > Wrapper (> meaning "contains").

Tomcat's configuration can also be defined and modified in our configuration file; the most basic example is server.port=1106.
