JavaWeb Development Notes (4)

Using Django

Open-source project: django-vue-admin

Docs: Introduction | Django-Vue-Admin

Generating tables

Reference: the difference between `python manage.py makemigrations` and `python manage.py migrate` in a Django web project (CSDN blog)

Step 1: create the model classes in models.py for the first time (screenshot omitted).

Step 2: run `python manage.py makemigrations`.
This generates a 0001_initial.py file under the app's migrations directory.

Step 3: run `python manage.py migrate`.

The resulting tables in the database (screenshot omitted):

Common Python syntax

Tuple to list

cursor.execute("select areaname from `tb_area` where `level`=1")
a = cursor.fetchall()  # result is a tuple of tuples: (('北京',), ('重庆',))

province_list = [u[-1] for u in a]  # convert to a list

# result: <class 'list'>: ['北京', '重庆']

Getting field values from a single fetched row

sql = 'select model_name,tb_name from `tb_report_template` where `id`=%s'
cursor.execute(sql, [id])
info = cursor.fetchone()  # fetch one row; call fetchone() again for the next row, until it returns None
model_name = info[0]
tb_name = info[1]

for loops

# coding=utf-8
l = ['鹅鹅鹅', '曲项向天歌', '锄禾日当午', '春种一粒粟']
for i in l:
    print(i)

鹅鹅鹅
曲项向天歌
锄禾日当午
春种一粒粟
# enumerate yields the index as well as the element on each iteration
for i, v in enumerate(l):
    print(i, v)

0 鹅鹅鹅
1 曲项向天歌
2 锄禾日当午
3 春种一粒粟

List comprehensions: a quick way to build lists that follow a pattern

print([i for i in range(1, 11)])
print([i*2 for i in range(1, 11)])
print([i*i for i in range(1, 11)])
print([str(i) for i in range(1, 11)])
print([i for i in range(1, 11) if i % 2 == 0])

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']
[2, 4, 6, 8, 10]

Some pandas data-cleaning snippets:

# in the columns used for averages, replace 0 with NaN (so zeros don't skew the mean)

total_df[avg_list] = total_df[avg_list].replace(0, np.nan)

total_df[avg_list] = total_df[avg_list].apply(pd.to_numeric, errors='coerce')

mos_df[mos_compare_col] = mos_df[mos_compare_col].astype(float)

# replace all NaN values with 0

total_df = total_df.fillna(0)

# drop rows where 业务账号 (business account) is 0

total_df = total_df.loc[total_df["业务账号"] != 0]

# apply

mos["mos3"]=mos["直播收视MOS平均值"].apply(lambda x :1 if 0<x<=3 else 0)

# groupby with named aggregation functions

grouped25 = moslow25.groupby(['reportTime', 'cityCode'])

fun25 = {'businessId': 'last', 'mosTotalNum': 'count', 'lowMosNum': 'sum'}

x25 = grouped25.agg(fun25).reset_index()

# custom row-wise function

def judge1(mos_df):
    if (mos_df['缓存卡顿次数'] > 0) or (mos_df['组播请求时延(s)'] > 1) or (mos_df['单播请求时延(s)'] > 2):
        return '1'
    else:
        return ''

mos_df['IPTV直观质差'] = mos_df.apply(judge1, axis=1)

# string concatenation across columns

mos_df['根因识别'] = mos_df['IPTV直观质差'].str.cat(mos_df['网络传输质差'], sep=' ')

Left join with merge:

total_result_df = pd.merge(total_result_df, pppoe_df,
                           on=['LOID'],
                           how='left')

Keeping two decimal places, or rounding to an integer

Two decimals:

total_result_df[['缓存卡顿次数','组播请求次数']] = total_result_df[['缓存卡顿次数','组播请求次数']].round(2)

Integer:

total_result_df[['缓存卡顿次数','组播请求次数']] = total_result_df[['缓存卡顿次数','组播请求次数']].round()

Using dictionaries

p_dic = {
    '101': '北京',
    '104': '重庆',
    '108': '福建',
    '103': '广东',
}

1 In an ordinary string (str.format):

pro_user_df[['IPTV_Account_ID','acct_nbr']].to_csv('/data03/PPPoE/{}_user_info.csv'.format(p_dic.get(pro_no)), index=False)

2 In pandas:

total_result_df['省份名称'] = total_result_df['省份'].map(p_dic)

Installing a specific version of a package with pip

D:\soft\python\Python3\python.exe -m pip install pymongo==3.11.3

Green triangles in the corners of Excel cells

Select the whole text-formatted column, then Data → Text to Columns → Finish; then right-click → Format Cells → set to General → OK.

Obtaining the SHA1

When packaging an app that needs positioning and uses the AMap (高德地图) SDK, the SHA1 must be registered first.

My account password is xxx; the path is C:\Users\Administrator\.android\debug.keystore

AMap key: xxx

The old one: xxx

MD5: xxx9:AC:0E:E6:B0:6E:97:71:93:C1:66:9F
SHA1: xxx:D9:99:C1:AD:42:4D:7F:7E:B3
SHA256: xxx:00:AF:AB:4F:C7:91:97:79:06:C9:10:EB:62:C3:A1:8C:4D:2A:83
Signature algorithm: SHA256withRSA
Version: 3

1. java.lang.Exception: keystore file does not exist: keystore

Getting a map key (AMap, Baidu, Tencent) always requires the SHA1 value. Running the keytool command in cmd (or the Android Studio terminal) may raise the exception above (screenshot omitted).

Check whether the debug.keystore file is missing from the .android folder under your user directory. If it is missing, follow the steps below.

2. Fix

① In the terminal, or in the Run dialog (Win + R, type cmd), run:

keytool -genkey -v -keystore debug.keystore -alias androiddebugkey -keyalg RSA -validity 10000

② Fill in the registration details. You will be asked to set a password; write it down, since it is needed later to read the SHA1 (the other fields can be anything).

③ Verify that debug.keystore was generated:

keytool -list -v -keystore debug.keystore

If this succeeds, the .android directory now contains a debug.keystore file.

④ Filling in the registration again (re-running the command below) raises an exception (screenshot omitted):

keytool -genkey -v -keystore debug.keystore -alias androiddebugkey -keyalg RSA -validity 10000

⑤ The key and certificate management tool

Cloud packaging

Choose Android → use your own certificate

Requesting appkey_android

Baidu Maps; mine is:

xxx

Finally it emails you a link: log in with your Baidu account.

AMap: search Baidu for 高德地图, open the map API site, click the avatar at the top right to register, then go to application management and add an application.

Mine is:

xxx

Click parameter configuration and enter appkey_android.

Pushing messages with WebSocket

1 Tomcat 7 and later support WebSocket, so pom.xml does not need a separate websocket dependency; javaee-api alone is enough:

    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>7.0</version>
        <scope>provided</scope>
    </dependency>

2 Create two WebSocket Configure classes, so that other beans can be injected into the WebSocket server endpoint.

MyConfigure.java

package com.wms.ui.stboneuser.controller;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyConfigure {

    @Bean
    public MyEndpointConfigure newConfigure() {
        return new MyEndpointConfigure();
    }
}

MyEndpointConfigure.java

package com.wms.ui.stboneuser.controller;

import javax.websocket.server.ServerEndpointConfig;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

public class MyEndpointConfigure extends ServerEndpointConfig.Configurator implements ApplicationContextAware {

    private static volatile BeanFactory context;

    @Override
    public <T> T getEndpointInstance(Class<T> clazz) throws InstantiationException {
        // let Spring create the endpoint instance so its dependencies get injected
        return context.getBean(clazz);
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        System.out.println("auto load" + this.hashCode());
        MyEndpointConfigure.context = applicationContext;
    }
}

3 Create WebSocketServer.java

package com.wms.ui.stboneuser.controller;

import java.io.IOException;
import java.util.Collection;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import javax.annotation.Resource;
import javax.websocket.OnClose;
import javax.websocket.OnError;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

// WebSocket
@Component
@ServerEndpoint(value = "/webSocket/{userId}", configurator = MyEndpointConfigure.class)
public class WebSocketServer {

    @Resource
    private JdbcTemplate JdbcTemplateTiDB;

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    // static counter of current connections; it really ought to be thread-safe
    private static int onlineCount = 0;

    // maps userId -> websocket Session, so users can be identified
    private static final Map<String, Session> users;

    // tracks how many times each user's status has been queried
    private static Map<String, Integer> userId_count = new HashMap<String, Integer>();

    static {
        users = new HashMap<>();
    }

    /**
     * Called when a connection is established.
     */
    @OnOpen
    public void onOpen(@PathParam("userId") String userId, Session session) {
        logger.info("new client connected, user id: " + userId);
        users.put(userId, session);
        addOnlineCount(); // online count + 1
        // poll the status table
        Thread t = new ActionThread(userId);
        // start the thread
        t.start();
    }

    /**
     * Called when a connection is closed.
     */
    @OnClose
    public void onClose(Session session) {
        logger.info("a client closed its connection");
        try {
            session.close();
            subOnlineCount(); // online count - 1
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Called when a message arrives from a client.
     *
     * @param message the message sent by the client
     */
    @OnMessage
    public void onMessage(String message, Session session) {
    }

    /**
     * Called when an error occurs.
     */
    @OnError
    public void onError(Session session, Throwable error) {
        logger.error("websocket error");
        error.printStackTrace();
    }

    /*
     * Send a message to a specific user.
     */
    public static void sendMessage(String userId, String message) {
        Session session = users.get(userId);
        try {
            session.getBasicRemote().sendText(message);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Broadcast a custom message to all users.
     */
    public static void sendInfo(String message) throws IOException {
        Collection<Session> sessions = users.values();
        Iterator<Session> iter = sessions.iterator();
        while (iter.hasNext()) {
            Session session = iter.next();
            session.getBasicRemote().sendText(message);
        }
    }

    public static synchronized int getOnlineCount() {
        return onlineCount;
    }

    public static synchronized void addOnlineCount() {
        WebSocketServer.onlineCount++;
    }

    public static synchronized void subOnlineCount() {
        WebSocketServer.onlineCount--;
    }

    // thread that polls the table status and pushes it to the client
    class ActionThread extends Thread {

        private String userId; // the random number identifying this client

        public ActionThread(String userId) {
            this.userId = userId;
        }

        @Override
        public synchronized void run() {
            // record the number of queries
            userId_count.put(userId, 0);
            while (true) {
                try {
                    Thread.sleep(2000);
                    // query the table status
                    Map<String, Object> map = JdbcTemplateTiDB
                            .queryForMap("select lineNumber,statuscode from action_status where randomNum = " + userId);
                    Long lineNumber = (Long) map.get("lineNumber");       // total number of lines in the file
                    Integer statuscode = (Integer) map.get("statuscode"); // 0 = not yet inserted, 1 = inserted
                    // first query of the status table
                    if (userId_count.get(userId) == 0 && statuscode == null) {
                        // status changed: push a websocket message
                        sendMessage(userId, "lineNumber:" + lineNumber + " statuscode:" + statuscode);
                        userId_count.put(userId, 1);
                    }
                    // insertion finished
                    if (statuscode != null) {
                        // status changed: push a websocket message
                        sendMessage(userId, "lineNumber:" + lineNumber + " statuscode:" + statuscode);
                        // tear down the websocket bookkeeping for this user
                        users.remove(userId);
                        userId_count.remove(userId);
                        subOnlineCount();
                        return;
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }
}

4 Front end

// WebSocket: receive messages pushed from the back end.
// Note: websocket is declared at page level so that closeWebSocket()
// and send() can see it (in the original it was local to openWebSocket).
var websocket;

function openWebSocket(randomNum) {
    if ('WebSocket' in window) {
        console.log("this browser supports websocket");
        websocket = new WebSocket("ws://localhost:8080/iptv/webSocket/" + randomNum);
    } else if ('MozWebSocket' in window) {
        alert("this browser only supports MozWebSocket");
    } else {
        alert("this browser only supports SockJS");
    }

    websocket.onopen = function(evnt) {
        console.log(evnt);
        console.log("connected to the server!");
    };
    websocket.onmessage = function(evnt) {
        alert(evnt.data);
    };
    websocket.onerror = function(evnt) {};
    websocket.onclose = function(evnt) {
        console.log("disconnected from the server!");
    };
}

// close the connection
function closeWebSocket() {
    websocket.close();
}

// send a message
function send() {
    if (websocket != null) {
        var message = document.getElementById('message').value;
        console.log(message);
        websocket.send(message);
    } else {
        alert('not connected to the server.');
    }
}

Packaging a Maven project as a jar and running it directly on Linux

1 Add the build section

Put <build></build> inside <project></project>; the key part is specifying the main class:

<build>
    <plugins>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <appendAssemblyId>false</appendAssemblyId>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <archive>
                    <manifest>
                        <!-- the class containing the main method -->
                        <mainClass>wxd.RemoteShellExecutor</mainClass>
                    </manifest>
                </archive>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>assembly</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.3</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
    </plugins>
</build>

2 Packaging

From the project root, run `mvn clean` and then `mvn package`.

The project's packaging type is jar:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.wxd</groupId>
  <artifactId>callpython</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>Archetype - callpython</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

3 Running the jar on Linux

With a Java runtime available: java -jar xxx.jar

Replacing individual files in a deployed war

While iptvnew.war is running:

Replace the files inside the exploded iptvnew directory.

Kill the iptvnew process.

Rename iptvnew.war to iptvnew.war.bak, leaving only the iptvnew directory.

bin/startup.sh

To change files again, edit them directly, then ps -ef | grep java and kill -9 xxx, followed by:

bin/startup.sh

Running a jar on a specified port

java -jar xxx.jar --server.port=9090

Starting a jar in the background

nohup java -jar xxx.jar &

To direct the log output to a specific file:

nohup java -jar xxx.jar >catalina.out 2>&1 &

If no log output is needed:

nohup java -jar xxx.jar >/dev/null 2>&1 &

With a port as well:

java -jar wpc.jar --server.port=9090 >/dev/null &

The command can also be placed in a start.sh script:

#!/bin/bash
java -jar wpc.jar --server.port=9090 >/dev/null &

Open questions

The Alarmbusinessgateway project:

public class SharedStorage {

    private static ConcurrentLinkedQueue<ChannelBean> channelList = new ConcurrentLinkedQueue<>();

    public static ConcurrentLinkedQueue<ChannelBean> getChannelList() {
        return channelList;
    }

    public static ChannelBean getChannelBean(ChannelId channelId) {
        for (ChannelBean bean : channelList) {
            if (bean.getChannelId().equals(channelId)) {
                return bean;
            }
        }
        return null;
    }
}

Timing with StopWatch

StopWatch watch = new StopWatch("timing");
watch.start("stbinfo");
String s = stbcmnumservice.findCountryByTimeAndprovinceCode(-1, "").toString();
watch.stop();
logger.info(watch.prettyPrint());
return s;

Future: asynchronous handling of concurrency and multithreading

https://www.jianshu.com/p/b8952f07ee5d

Do you really know Future in Java?


1. Overview

In this article we will look at Future, an interface that has existed since Java 1.5 and is very useful when dealing with asynchronous calls and concurrent processing.

2. Creating a Future

The Executor class exists to run tasks, and ExecutorService apparently does too, except that ExecutorService seems to be responsible for a lot more: interrupting, shutting down, and accepting runnables to execute. So how do these three types relate? ExecutorService extends Executor, so both are about running tasks; ExecutorService runs the task, and Future is the result of that run. That is roughly the relationship.



Source: 你缺少想象力, Jianshu: https://www.jianshu.com/p/32ca8241e8fa
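
A compact sketch of that relationship (my own illustration, not from the quoted article):

import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorRelationDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Executor asExecutor = pool; // an ExecutorService is an Executor
        asExecutor.execute(() -> System.out.println("fire-and-forget Runnable"));

        Future<Integer> future = pool.submit(() -> 21 * 2); // Callable -> Future
        System.out.println("result = " + future.get());     // blocks until the result is ready

        pool.shutdown(); // the lifecycle management that a plain Executor lacks
    }
}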

Simply put, the Future class represents the future result of an asynchronous computation: the result will eventually appear in the Future once processing completes.

Let's see how to write methods that create and return a Future instance.

The Future interface is an ideal fit for asynchronous processing of long-running methods: it lets us do other work while we wait for the task wrapped in the Future to finish.

Examples of operations that benefit from Future's asynchronous nature:

  • computation-heavy processes (mathematical and scientific calculations)
  • manipulating large data structures (big data)
  • remote method calls (downloading files, scraping HTML, web services)

2.1 Implementing Future with FutureTask

For our example we will build a very simple class that computes the square of an Integer. This definitely does not fall into the "long-running" category, but we will add a Thread.sleep() call to make it take one second to complete:

public class SquareCalculator {

    private ExecutorService executor = Executors.newSingleThreadExecutor();

    public Future<Integer> calculate(Integer input) {
        return executor.submit(() -> {
            Thread.sleep(1000);
            return input * input;
        });
    }

    // needed by the client code in section 4, which calls squareCalculator.shutdown()
    public void shutdown() {
        executor.shutdown();
    }
}

The bit of code that actually performs the calculation sits inside the call() method, supplied as a lambda expression. As you can see, apart from the sleep() call mentioned earlier, there is nothing special about it.

It gets more interesting when we turn our attention to the use of Callable and ExecutorService.

Callable is an interface representing a task that returns a result, with a single call() method. Here we created an instance of it with a lambda expression.

Creating a Callable instance gets us nowhere on its own; we still have to hand the instance to an executor, which takes care of starting the task on a new thread and gives us back the valuable Future object. That is where ExecutorService comes in.

There are several ways to get hold of an ExecutorService instance, most of them provided by the static factory methods of the utility class Executors. In this example we used the basic newSingleThreadExecutor(), which gives us an ExecutorService capable of processing a single thread at a time.

Once we have an ExecutorService object, we just call submit(), passing our Callable as the argument. submit() takes care of starting the task and returns a FutureTask object, an implementation of the Future interface.

3. Using a Future

So far we have learned how to create an instance of Future.

In this section we will learn how to work with that instance by exploring all the methods of Future's API.

3.1 Using isDone() and get() to obtain the result

Now we need to call calculate() and use the returned Future to obtain the resulting Integer. Two methods from the Future API will help us with this task.

Future.isDone() tells us whether the executor has finished processing the task: true if the task is complete, false otherwise.

The method that returns the actual result of the computation is Future.get(). Note that this method blocks execution until the task completes; in our example that will not be a problem, because we first check whether the task is finished by calling isDone().

Using these two methods we can run other code while waiting for the main task to finish:

Future<Integer> future = new SquareCalculator().calculate(10);

while (!future.isDone()) {
    System.out.println("Calculating...");
    Thread.sleep(300);
}

Integer result = future.get();

In this example we write a short message to the output, letting the user know the program is performing the calculation.

The get() method blocks execution until the task completes. But we need not worry, since our example only calls get() after making sure the task is finished; so in this scenario, future.get() always returns immediately.

It is worth mentioning that get() has an overloaded version that takes a timeout and a TimeUnit as arguments:

Integer result = future.get(500, TimeUnit.MILLISECONDS);

The difference between get(long, TimeUnit) and get() is that the former throws a TimeoutException if the task does not return before the specified timeout.

3.2 Cancelling a Future with cancel()

Suppose we have triggered a task but, for some reason, no longer care about the result. We can use Future.cancel(boolean) to tell the executor to stop the operation and interrupt its underlying thread:

Future<Integer> future = new SquareCalculator().calculate(4);

boolean canceled = future.cancel(true);

With the code above, our Future instance will never complete its operation. In fact, if we call get() on that instance after calling cancel(), the outcome is a CancellationException. Future.isCancelled() tells us whether a Future has already been cancelled, which is very useful for avoiding a CancellationException.

A call to cancel() can fail, in which case its return value is false. Note that cancel() takes a boolean argument, which controls whether the thread executing the task should be interrupted. A sketch of guarding get() with isCancelled() is shown below.
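
As a fragment of my own (continuing the article's SquareCalculator example, not part of the original), the guard looks like this:

Future<Integer> future = new SquareCalculator().calculate(4);
future.cancel(true);

if (future.isCancelled()) {
    System.out.println("task was cancelled, nothing to read");
} else {
    Integer result = future.get(); // safe: only reached when not cancelled
}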

4. More multithreading with thread pools

Our current ExecutorService is single-threaded, because it was obtained with Executors.newSingleThreadExecutor. To highlight that single-threadedness, let's trigger two calculations at the same time:

SquareCalculator squareCalculator = new SquareCalculator();

Future<Integer> future1 = squareCalculator.calculate(10);
Future<Integer> future2 = squareCalculator.calculate(100);

while (!(future1.isDone() && future2.isDone())) {
    System.out.println(
      String.format(
        "future1 is %s and future2 is %s",
        future1.isDone() ? "done" : "not done",
        future2.isDone() ? "done" : "not done"
      )
    );
    Thread.sleep(300);
}

Integer result1 = future1.get();
Integer result2 = future2.get();

System.out.println(result1 + " and " + result2);

squareCalculator.shutdown();

Now let's analyse the output of this code:

calculating square for: 10
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
calculating square for: 100
future1 is done and future2 is not done
future1 is done and future2 is not done
future1 is done and future2 is not done
100 and 10000

Clearly the process is not parallel. Notice that the second task starts only after the first has finished, so the whole thing takes about 2 seconds to complete.

To make our program truly multithreaded we should use a different flavour of ExecutorService. Let's see how our example behaves if we use the thread pool provided by the factory method Executors.newFixedThreadPool():

public class SquareCalculator {

    private ExecutorService executor = Executors.newFixedThreadPool(2);

    //...
}

With this simple change to the SquareCalculator class, we now have an executor able to use 2 concurrent threads.

If we run exactly the same client code again, we get the following output:

calculating square for: 10
calculating square for: 100
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
future1 is not done and future2 is not done
100 and 10000

That looks much better. Notice how the 2 tasks start and finish running at the same time, and the whole process takes about 1 second to complete.

There are other factory methods for creating thread pools, such as Executors.newCachedThreadPool(), which reuses previously used Threads when they become available, and Executors.newScheduledThreadPool(), which schedules commands to run after a given delay.

5. Overview of ForkJoinTask

ForkJoinTask is an abstract class implementing Future that is capable of running a large number of tasks hosted on a small number of actual threads in a ForkJoinPool.

In this section we quickly cover the main characteristics of ForkJoinPool.

The main characteristic of a ForkJoinTask is that it usually spawns new subtasks as part of the work required to complete its main task. It generates new tasks by calling fork(), and it gathers all the results with join(), hence the class's name.

Two abstract classes implement ForkJoinTask: RecursiveTask, which returns a value on completion, and RecursiveAction, which returns nothing. As the names imply, these classes are to be used for recursive tasks such as file-system navigation or complex mathematical computation.

Let's expand the previous example to create a class that, given an Integer, computes the sum of the squares of all the numbers from it down to 1. For instance, if we pass the number 4 to the calculator, we should get the result of 4² + 3² + 2² + 1², which is 30.

First we need to create a concrete implementation of RecursiveTask and implement its compute() method; this is where we write our business logic:

public class FactorialSquareCalculator extends RecursiveTask<Integer> {

    private Integer n;

    public FactorialSquareCalculator(Integer n) {
        this.n = n;
    }

    @Override
    protected Integer compute() {
        if (n <= 1) {
            return n;
        }

        FactorialSquareCalculator calculator = new FactorialSquareCalculator(n - 1);
        calculator.fork();

        return n * n + calculator.join();
    }
}

Notice how we achieve recursion by creating a new instance of FactorialSquareCalculator inside compute(). By calling fork(), a non-blocking method, we ask the ForkJoinPool to start executing the subtask.

The join() method returns the result of that subtask's computation, to which we add the square of the number we are currently visiting.

Now we just need to create a ForkJoinPool to take care of execution and thread management:

ForkJoinPool forkJoinPool = new ForkJoinPool();

FactorialSquareCalculator calculator = new FactorialSquareCalculator(10);

forkJoinPool.execute(calculator);

// execute() does not return the result; calling join() on the task retrieves it
// once the computation finishes (for n = 10 it prints 385 = 10² + 9² + ... + 1²)
Integer result = calculator.join();
System.out.println(result);

6. Conclusion

In this article we took a comprehensive look at the Future interface, covering all of its methods. We also learned how to harness the power of thread pools to trigger multiple parallel operations, and briefly covered the ForkJoinTask class and its main methods, fork() and join().

Original article:

https://mp.weixin.qq.com/s/n8QgasKfToqTB_JXVG4mTQ

Kafka

Reference: 20 common Kafka interview questions and answers (Happy编程, CSDN blog)

Kafka is a high-throughput, distributed, publish/subscribe messaging system, originally developed at LinkedIn and written in Scala; it is now an Apache open-source project.

broker: a Kafka server, responsible for storing and forwarding messages

topic: a message category; Kafka classifies messages by topic

partition: a shard of a topic; a topic can contain multiple partitions, and its messages are stored across them

offset: the position of a message in the log; think of it as the message's offset within a partition, and also its unique sequence number

Producer: a message producer

Consumer: a message consumer

Consumer Group: a consumer group; every Consumer must belong to a group

Zookeeper: stores meta data about the cluster's brokers, topics and partitions; it is also responsible for broker failure detection, partition leader election, load balancing, and so on

Each Message in a partition carries three attributes: offset, MessageSize and data. The offset is the Message's offset within the partition; it is not the Message's physical position in the partition's data file but a logical value that uniquely identifies one Message in the partition, so it can be regarded as the Message's id. MessageSize is the size of the message body data, and data is the Message's actual content.

Why use Kafka?

Buffering and peak shaving: upstream data can arrive in bursts that downstream services cannot absorb, or downstream may lack enough machines for redundancy; Kafka in the middle acts as a buffer, holding messages so downstream services can process them at their own pace.

Decoupling and extensibility: at the start of a project the exact requirements are unknown. A message queue can serve as an interface layer that decouples the key business flows; as long as both sides honour the contract and program against the data, capability can be extended later.

Redundancy: a one-to-many pattern lets one producer publish a message that is consumed by multiple services subscribed to the topic, serving several completely unrelated businesses.

Robustness: the message queue can absorb a backlog of requests, so even if the consumer side dies for a short time, the main business keeps running normally.

Asynchronous communication: often a user does not want or need to process a message immediately. The message queue provides an asynchronous mechanism: put a message in the queue without processing it right away, add as many as you like, and process them when needed.

Source: CSDN blogger 徐周, https://blog.csdn.net/qq_28900249/java/article/details/90346599 (CC 4.0 BY-SA)

Using Kafka

Message queueing depends on Kafka. Kafka ships with a bundled ZooKeeper, so a single node does not need its own ZooKeeper setup; for multiple nodes, build a Kafka cluster and then a ZooKeeper cluster.

  1. Component diagram (text placeholder for the original figure): producers send messages to the broker, where each topic is split into partitions; consumers, organised into consumer groups, read from the partitions.

  2. Component characteristics

Producers: producer + broker; producers generate the data

Consumers: broker + consumer; consumers consume the data

Partition: messages sent to a topic are distributed across multiple partitions

The broker persists messages to Kafka's log files, kept for 7 days by default; a broker replica can be kept manually, so even after the 7-day deletion there is still a backup

Message: a sent message; it has a unique offset index, and one index corresponds to one piece of data

One partition maps to one consumer: each piece of data is consumed by a single consumer

One consumer can map to several partitions: one consumer can consume many messages

Consumers consume in parallel (a plain-API producer sketch follows)
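
A minimal producer sketch of this flow using the plain kafka-clients API (my own illustration; the broker address localhost:9092 and the topic name "test" are assumptions matching the console examples later in these notes):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PlainProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // messages with the same key always land on the same partition
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "key-1", "hello kafka"));
        }
    }
}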

  3. Configuration files

server.properties: the broker configuration file. It can set:

    port
    host.name
    log.dirs (the log directory)
    broker.id (must be unique)
    zookeeper.connect (often set programmatically)

producer.properties: the producer configuration file, usually set in code. It can set:

    synchronous or asynchronous message sending
    the partition id to use
    the broker host/port to use

consumer.properties: the consumer configuration file. It can set:

    zookeeper.connect
    group.id (the consumer group)

log4j.properties: Kafka logging configuration

  4. Installing Kafka

Unpack: tar -zxvf kafka_2.10-0.8.1.1.tgz

Start the services:

First start ZooKeeper:

bin/zookeeper-server-start.sh config/zookeeper.properties

Then start Kafka:

bin/kafka-server-start.sh config/server.properties >/dev/null 2>&1 &

Create a topic

Create a topic named "test" with one partition and one replica (this overrides the server.properties settings):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

List topics

bin/kafka-topics.sh --list --zookeeper localhost:2181

Describe a topic

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

Delete a topic

bin/kafka-run-class.sh kafka.admin.TopicCommand --delete --topic test --zookeeper 192.168.1.161:2181

This does not really delete the topic, only marks it; set delete.topic.enable=true for a real delete.

Create a console producer

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

Type some messages:

suns
xiaohuahuahau
xiaohei

Create a console consumer

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

Kafka cluster

Build a ZooKeeper cluster and a Kafka cluster separately; the two setups are largely independent. Note:

    1. Install the ZooKeeper cluster
    2. Adjust each Kafka broker's configuration, i.e. server.properties:

broker.id: unique; an integer

host.name: unique; this node's address

zookeeper.connect=192.168.40.134:2181,192.168.40.132:2181,192.168.40.133:2181  (the IPs and ports of the ZooKeeper nodes)

Use jps to check whether the Kafka process has started.

To write client code, start the Kafka server first, then write the producer and consumer; the consumer can be started before the producer.

Writing the code:

Add the Kafka dependency:

<dependency>
   <groupId>org.springframework.kafka</groupId>
   <artifactId>spring-kafka</artifactId>
   <version>1.1.1.RELEASE</version>
</dependency>

Spring Boot integration with Kafka

Note that a recent Spring Boot version is required:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.3.RELEASE</version>
    <relativePath/>
</parent>

pom.xml

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.4.RELEASE</version>
</dependency>

Producer configuration, application-dev.yml

bootstrap-servers lists the addresses of the Kafka nodes:

spring:
  kafka:
    bootstrap-servers: 10.0.9.1:9092,10.0.9.2:9092,10.0.9.3:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      batch-size: 16384
      buffer-memory: 33554432

Consumer configuration, application-dev.yml

bootstrap-servers lists the addresses of the Kafka nodes:

spring:
  kafka:
    consumer:
      auto-offset-reset: earliest
      group-id: CSS-WEB-GROUP
      bootstrap-servers: 10.0.9.1:9092,10.0.9.2:9092,10.0.9.3:9092
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

Kafka constants, KafkaConstant.java, used by both the producer and consumer configuration:

package com.develop.weak.passwd.scanner.constant;

/**
 * @description: KafkaConstant <br>
 * date: 2020/7/7 12:49 PM <br>
 * @author: chl <br>
 * version: 1.0 <br>
 */
public interface KafkaConstant {
    interface KafkaConfig {
        String WEAK_PASSWD_SCANNER_TOPIC = "TOPIC_WEAK_PASSWD_SCANNER";
        String WEAK_PASSWD_SCANNER_GROUP_ID = "WEAK_PASSWD_SCANNER_GROUP_ID";
    }

}

Kafka configuration enum, KafkaConfig.java, used by both producer and consumer:

package com.develop.weak.passwd.scanner.constant;

/**
 * @description: KafkaConfig <br>
 * date: 2020/7/6 12:31 PM <br>
 * @author: chl <br>
 * version: 1.0 <br>
 */
public enum KafkaConfig {
    WEAK_PASSWD_SCANNER_TOPIC("TOPIC", "TOPIC_WEAK_PASSWD_SCANNER"),
    WEAK_PASSWD_SCANNER_GROUP_ID("GROUP_ID", "WEAK_PASSWD_SCANNER_GROUP_ID");
    private String name;
    private String value;
    private Integer code;

    KafkaConfig(String name, String value) {
        this.name = name;
        this.value = value;
    }

    KafkaConfig(String name, Integer code) {
        this.name = name;
        this.code = code;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getValue() {
        return value;
    }

    public void setValue(String value) {
        this.value = value;
    }

    public Integer getCode() {
        return code;
    }

    public void setCode(Integer code) {
        this.code = code;
    }
}

The producer sends messages to Kafka:

@EventListener
public void eventListener(TaskEvent taskEvent) {
    log.info("listener-taskEvent=========>{}", taskEvent);
    List<TaskGoalDO> taskgoals = (List<TaskGoalDO>)taskEvent.getSource();
    ScanResult scanResult = new ScanResult();
    scanResult.setSuccess(true);

    final CountDownLatch count = new CountDownLatch(taskgoals.size());
    for (TaskGoalDO taskgoal : taskgoals) {

        threadPoolTaskExecutor.execute(() -> {
            // create the goal record
            GoalDO goal = new GoalDO();
            goal.setSystemName(taskgoal.getSystemName());
            goal.setArea(taskgoal.getArea());
            goal.setGoal(taskgoal.getGoal());
            goal.setGmtCreate(new Date());
            // holds the execution results
            List<ResultDO> resultList = new ArrayList<>();
            TaskOnlineDO taskOnline = new TaskOnlineDO();
            if(goalService.save(goal)>0){
                // create the online task
                taskOnline.setAdapterId(taskgoal.getAdapterId());
                taskOnline.setStrategyId(taskgoal.getStrategyId());
                taskOnline.setDetectionObj(goal.getId().toString());
                taskOnline.setReportGeneration(1); // report is downloaded manually
                taskOnline.setThreadCound(taskgoal.getThreadCound());
                taskOnline.setTimeout(taskgoal.getTimeout());
                if(taskOnlineService.save(taskOnline)>0){
                    // run the task
                    taskOnlineService.execute(taskOnline.getId());
                    // query the execution result
                    Map map = new HashMap();
                    map.put("taskId",taskOnline.getId());
                    resultList = resultService.list(map);
                }
            }

            count.countDown();       

        });
    }
    try {
        count.await();
        // wrap the payload object scanresultDO
        ScanResultDO scanresultDO = new ScanResultDO();
        scanresultDO.setWeakpwdScanId(taskgoals.get(0).getWeakpwdScanId());
        scanresultDO.setTaskId(taskgoals.get(0).getTaskId());
        scanresultDO.setTaskName(taskgoals.get(0).getTaskName());
        scanResult.setCode(1); // weak-password scan finished
        scanResult.setMessage("scan task finished");
        scanResult.setPayload(scanresultDO);
        ListenableFuture<SendResult<String, String>> send
                = kafkaTemplate.send(KafkaConstant.KafkaConfig.WEAK_PASSWD_SCANNER_TOPIC, JSON.toJSONString(scanResult));
        send.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> stringStringSendResult) {
                log.info("send message success....");
            }

            @Override
            public void onFailure(Throwable throwable) {
                log.error("send message error....", throwable);
            }
        });
    } catch (InterruptedException e) {
        log.error("weak-password scan interrupted", e);
    }

}

The consumer listens on Kafka and consumes messages:

package com.develop.weak.passwd.scanner.service.impl;

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.TypeReference;
import com.develop.common.utils.DateUtils;
import com.develop.weak.passwd.scanner.constant.KafkaConstant;
import com.develop.weak.passwd.scanner.entity.ScanResult;
import com.develop.weak.passwd.scanner.entity.ScanResultDO;
import com.develop.weak.passwd.scanner.entity.TaskGoalDO;
import com.develop.weak.passwd.scanner.service.IWeakpasswdScanService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.BeanUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * @description: HostScanMessageListener <br>
 * date: 2020/7/7 11:47 AM <br>
 * @author: chl <br>
 * version: 1.0 <br>
 */
@Component
@Slf4j
public class WeakScanMessageListener {
    @Autowired
    private IWeakpasswdScanService weakpasswdScanService;
    
    @KafkaListener(topics = {KafkaConstant.KafkaConfig.WEAK_PASSWD_SCANNER_TOPIC},
            groupId = KafkaConstant.KafkaConfig.WEAK_PASSWD_SCANNER_GROUP_ID)
    public void onMessage(String message) {
        log.info("HostScanMessageListener.onMessage========>message={}",message);
        ScanResult<ScanResultDO> result = JSON.parseObject(message,ScanResult.class);
        ScanResult<ScanResultDO> resultType = JSON.parseObject(message,new TypeReference<ScanResult<ScanResultDO>>(){});
        ScanResult<ScanResultDO> result3 = new ScanResult<>();
        BeanUtils.copyProperties(resultType,result3);
        log.info("result======>{}",result);
        log.info("resultType======>{}",resultType);
        log.info("result3======>{}",result3);
        ScanResultDO scanResultDO = result3.getPayload();
        Long taskId = scanResultDO.getTaskId();
        TaskGoalDO taskGoalDO = weakpasswdScanService.selectWeakpasswdScanById(taskId);
        Integer scanstatus = taskGoalDO.getScanstatus();
        if(scanstatus == 0){
            // persist the scan result
            scanResultDO.setGmtCreate(DateUtils.getNowDate());

            // mark the scan status as finished
            taskGoalDO.setGmtModified(DateUtils.getNowDate());
            taskGoalDO.setScanstatus(1);
            int r = weakpasswdScanService.handleResult(taskGoalDO,scanResultDO);
        }


    }

}

Monitoring with Kafka Eagle

Dubbo

@Service exposes a service; @Reference consumes one.

user-service-provider: the service provider (this project contains the implementation classes)

user-service-consumer: the service consumer (this project calls the services)

taobao-interface: used to expose the services; it holds the interfaces for both user-service-provider and user-service-consumer!

Source: https://blog.csdn.net/qq_38263083/java/article/details/83417933

Provider: the weakpasswordcheck project

Exposed interface module: weak-passwd-scanner-interface

Consumer: weak-passwd-scanner

1 First, the provider weakpasswordcheck imports the weak-passwd-scanner-interface dependency plus the Dubbo dependencies.

2 The consumer weak-passwd-scanner imports the weak-passwd-scanner-interface dependency plus the Dubbo dependencies.

Note that a recent Spring Boot version is required:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.3.RELEASE</version>
    <relativePath/>
</parent>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
    <mybatis-plus.version>3.3.0</mybatis-plus.version>
    <velocity.version>1.7</velocity.version>
    <activiti.version>5.22.0</activiti.version>
    <mybatis.version>3.4.4</mybatis.version>
    <druid.version>1.0.28</druid.version>
    <fastjson.version>1.2.31</fastjson.version>
    <dubbo.version>2.7.7</dubbo.version>
    <nacos.version>1.1.4</nacos.version>
    <spring-kafka.version>2.2.4.RELEASE</spring-kafka.version>
</properties>

<dependency>
    <groupId>org.apache.dubbo</groupId>
    <artifactId>dubbo</artifactId>
    <version>${dubbo.version}</version>
</dependency>
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
</dependency>
<dependency>
    <groupId>com.develop</groupId>
    <artifactId>ccs-weak-passwd-scanner-interface</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

<dependency>
    <groupId>org.apache.dubbo</groupId>
    <artifactId>dubbo-spring-boot-starter</artifactId>
    <version>${dubbo.version}</version>
</dependency>

<dependency>
    <groupId>org.apache.dubbo</groupId>
    <artifactId>dubbo-registry-nacos</artifactId>
    <version>${dubbo.version}</version>
    <exclusions>
        <exclusion>
            <artifactId>fastjson</artifactId>
            <groupId>com.alibaba</groupId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.alibaba.nacos</groupId>
    <artifactId>nacos-client</artifactId>
    <version>${nacos.version}</version>
</dependency>

3 Add the Dubbo settings to the provider's application-dev.yml; host is the IP the program runs on, address is the Dubbo registry:

dubbo:
  registry:
    address: nacos://10.0.9.4:8848
  protocol:
    port: 28880
    name: dubbo
    host: 10.0.3.159
    #host: 10.0.6.116

4 Add the Dubbo settings to the consumer's application-dev.yml; host is the IP the program runs on, address is the Dubbo registry:

dubbo:
  registry:
    check: true
    address: nacos://10.0.9.4:8848
  consumer:
    check: false
  protocol:
    port: 28880
    name: dubbo
    #host: 10.0.3.159
    host: 10.0.6.25
  reference:
    check: false

5 Add the @DubboComponentScan annotation to the provider's startup class:

package com.develop;

import org.apache.dubbo.config.spring.context.annotation.DubboComponentScan;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.transaction.annotation.EnableTransactionManagement;
@DubboComponentScan
@EnableTransactionManagement
@ServletComponentScan
@MapperScan("com.develop.*.dao")
@SpringBootApplication(exclude = { org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration.class })
@EnableCaching
/**
 * description:  <br>
 * version: 1.0 <br>
 * date: 2020-01-03 14:22 <br>
 * @author: chl <br>
 * @param null
 * @return
 */
public class DevelopApplication {
   public static void main(String[] args) {
      SpringApplication.run(DevelopApplication.class, args);
   }
}

6 Start the provider first, then check in the registry whether it registered with Dubbo:

http://10.0.9.103:8848/nacos/#/

7 The provider implements the service interface from the exposed interface module, with @DubboService on the implementation class:

package com.develop.weakpasswordcheck.service.impl;

import com.develop.hydra.domain.AdapterDO;
import com.develop.hydra.service.AdapterService;
import com.develop.weak.passwd.scanner.entity.Adapter;
import com.develop.weak.passwd.scanner.service.WeakAdapterService;
import lombok.extern.slf4j.Slf4j;
import org.apache.dubbo.config.annotation.DubboService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.ArrayList;
import java.util.List;

@DubboService
@Service
@Slf4j
public class WPCAdapterServiceImpl implements WeakAdapterService {
    @Autowired
    AdapterService adapterService;

    @Override
    public List<Adapter> getAll() {
        List<Adapter> list = new ArrayList<Adapter>();
        List<AdapterDO> all = adapterService.getAll();
        for(AdapterDO adapter:all){
            Adapter map = new Adapter(adapter.getId(),adapter.getComment());
            list.add(map);
        }
        return list;
    }

}

8 The consumer imports the service: pull in the exposed interface module and annotate the field with @DubboReference; after that it can be used directly:

import org.apache.dubbo.config.annotation.DubboReference;

@DubboReference
private WeakPasswdScanService weakPasswdScannerService;

9 Add to the consumer's startup class:

@DubboComponentScan and @NacosConfigurationProperties(dataId = "ccs-web", autoRefreshed = true)

ccs-web is develop.name in application-dev.yml, i.e. the project's name.

package com.develop;

import com.alibaba.nacos.api.config.annotation.NacosConfigurationProperties;
import org.apache.dubbo.config.spring.context.annotation.DubboComponentScan;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

/**
 * Application entry point
 *
 * @author chl
 */
@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class})
@DubboComponentScan
@NacosConfigurationProperties(dataId = "ccs-web", autoRefreshed = true)
@MapperScan({"com.develop.**.mapper", "com.develop.**.dao"})
public class CcsWebApplication {
    public static void main(String[] args) {
        System.setProperty("spring.devtools.restart.enabled", "false");
        SpringApplication.run(CcsWebApplication.class, args);
        System.out.println("                                                                       \n" +
                "                                                                       \n" +
                "    ,---,                                   ,--,                       \n" +
                "  .'  .' `\\                               ,--.'|            ,-.----.   \n" +
                ",---.'     \\                              |  | :     ,---.  \\    /  \\  \n" +
                "|   |  .`\\  |               .---.         :  : '    '   ,'\\ |   :    | \n" +
                ":   : |  '  |   ,---.     /.  ./|  ,---.  |  ' |   /   /   ||   | .\\ : \n" +
                "|   ' '  ;  :  /     \\  .-' . ' | /     \\ '  | |  .   ; ,. :.   : |: | \n" +
                "'   | ;  .  | /    /  |/___/ \\: |/    /  ||  | :  '   | |: :|   |  \\ : \n" +
                "|   | :  |  '.    ' / |.   \\  ' .    ' / |'  : |__'   | .; :|   : .  | \n" +
                "'   : | /  ; '   ;   /| \\   \\   '   ;   /||  | '.'|   :    |:     |`-' \n" +
                "|   | '` ,/  '   |  / |  \\   \\  '   |  / |;  :    ;\\   \\  / :   : :    \n" +
                ";   :  .'    |   :    |   \\   \\ |   :    ||  ,   /  `----'  |   | :    \n" +
                "|   ,.'       \\   \\  /     '---\" \\   \\  /  ---`-'           `---'.|    \n" +
                "'---'          `----'             `----'                      `---`    \n" +
                "                                                                       ");
    }
}

springcloud

The five core components

  • Spring Cloud Eureka: service registration and discovery
  • Spring Cloud Zuul: service gateway
  • Spring Cloud Ribbon: client-side load balancing
  • Spring Cloud Feign: declarative web-service client
  • Spring Cloud Hystrix: circuit breaker

Spring Cloud is used much like Dubbo.

Reference: simple Spring Cloud Nacos configuration — how to configure the client when the Nacos server sets a context-path (CSDN blog)

With multiple interdependent projects, declare each module in the outermost pom.xml, and have each module reference the outermost parent.

The api module declares the ssoUser interface: @FeignClient(value = "amp-sso-server", path = "/ssoService", fallbackFactory = SsoUserServiceIFallBack.class)

The ssoUser microservice implements the api: implements SsoUserServiceI

The pom can also pull in oauth for single sign-on.

The ssoUser microservice's startup class gets @EnableFeignClients, which scans the @Feignxxx interfaces.

Other services call the ssoUser microservice by injecting the api interface with @Autowired or @Resource.

The api module of those other services must add the spring-cloud-starter-openfeign dependency to call the service interface. A sketch of the Feign interface follows.
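
A sketch of the Feign interface in the api module, using the names from these notes (the /user/{id} endpoint, its return type, and the method name are hypothetical, added only for illustration):

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

@FeignClient(value = "amp-sso-server", path = "/ssoService",
        fallbackFactory = SsoUserServiceIFallBack.class)
public interface SsoUserServiceI {

    // hypothetical endpoint; the sso microservice provides the implementation
    @GetMapping("/user/{id}")
    String findUsernameById(@PathVariable("id") Long id);
}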

Using Spring's ThreadPoolTaskExecutor thread pool in Spring Boot

1 Thread-pool settings: application-dev.yml

develop:
  uploadPath: c:/var/uploaded_files/
  username: admin
  password: Osssafe!2020
  thread-pool-config:
    open: true
    corePoolSize: 8
    maxPoolSize: 64
    queueCapacity: 10000
    keepAliveSeconds: 30
    threadNamePrefix: task-asyn

The thread-pool-config block here can be replaced with a ThreadConfigProperties class:

package com.develop.host.scanner.config.properties;

import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

/**
 * @description: ThreadConfigProperties <br>
 * date: 2020/7/7 11:04 AM <br>
 * @author: chl <br>
 * version: 1.0 <br>
 */
@Component
@ConfigurationProperties(prefix = "develop.thread-config")
@Data
public class ThreadConfigProperties {
    /**
     * core pool size
     */
    private Integer corePoolSize = 50;

    /**
     * maximum number of threads that can be created
     */
    private Integer maxPoolSize = 200;

    /**
     * maximum queue length
     */
    private Integer queueCapacity = 1000;

    /**
     * idle keep-alive time allowed for pool threads
     */
    private Integer keepAliveSeconds = 300;
    /**
     * thread name prefix
     */
    private String threadNamePrefix = "taskThread";


}

2 Thread-pool configuration holder: DevelopConfig.java

package com.develop.common.config;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

import java.util.HashMap;
import java.util.Map;

/**
 * description:  <br>
 * version: 1.0 <br>
 * date: 2020-01-03 15:13 <br>
 * @author: chl <br>
 * @return
 */
@Component
@ConfigurationProperties(prefix="develop")
public class DevelopConfig {
   /**
    * upload path
    */
   private String uploadPath;

   private String username;

   private String password;

   public String getUploadPath() {
      return uploadPath;
   }

   private Map<String,Object> threadPoolConfig = new HashMap<>();


   public void setUploadPath(String uploadPath) {
      this.uploadPath = uploadPath;
   }

   public String getUsername() {
      return username;
   }

   public void setUsername(String username) {
      this.username = username;
   }

   public String getPassword() {
      return password;
   }

   public void setPassword(String password) {
      this.password = password;
   }

   public Map<String, Object> getThreadPoolConfig() {
      return threadPoolConfig;
   }

   public void setThreadPoolConfig(Map<String, Object> threadPoolConfig) {
      this.threadPoolConfig = threadPoolConfig;
   }
}

If the settings come from the ThreadConfigProperties class, the pool-parameter field changes as follows, and getThreadPoolConfig/setThreadPoolConfig are no longer needed:

/**
 * thread-pool parameter settings
 */
@Resource
private ThreadConfigProperties threadConfigProperties;

3 Feed the thread-pool settings into the bean that registers the pool:

package com.develop.common.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

import javax.annotation.Resource;
import java.util.concurrent.ThreadPoolExecutor;

/**
 * Thread pool configuration
 *
 * @author chl
 **/
@Configuration
public class ThreadPollConfig {

    @Resource
    private DevelopConfig developConfig;

    @Bean(name = "threadPoolTaskExecutor")
    public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();

        executor.setMaxPoolSize(Integer.parseInt(developConfig.getThreadPoolConfig().get("maxPoolSize").toString()));
        executor.setCorePoolSize(Integer.parseInt(developConfig.getThreadPoolConfig().get("corePoolSize").toString()));
        executor.setQueueCapacity(Integer.parseInt(developConfig.getThreadPoolConfig().get("queueCapacity").toString()));
        executor.setKeepAliveSeconds(Integer.parseInt(developConfig.getThreadPoolConfig().get("keepAliveSeconds").toString()));
        executor.setThreadNamePrefix(developConfig.getThreadPoolConfig().get("threadNamePrefix").toString());
        // rejection policy when no thread is available: run the task in the caller's thread
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        return executor;
    }
}

If step 1 used the ThreadConfigProperties class, ThreadPollConfig is modified as follows:

package com.develop.host.scanner.config;

import com.develop.common.utils.Threads;
import org.apache.commons.lang3.concurrent.BasicThreadFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

import javax.annotation.Resource;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.ThreadPoolExecutor;

/**
 * Thread pool configuration
 *
 * @author chl
 **/
@Configuration
public class ThreadPoolConfig {
    @Resource
    private DevelopConfig developConfig;

    @Bean(name = "threadPoolTaskExecutor")
    public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setMaxPoolSize(developConfig.getThreadConfigProperties().getMaxPoolSize());
        executor.setCorePoolSize(developConfig.getThreadConfigProperties().getCorePoolSize());
        executor.setQueueCapacity(developConfig.getThreadConfigProperties().getQueueCapacity());
        executor.setKeepAliveSeconds(developConfig.getThreadConfigProperties().getKeepAliveSeconds());
        executor.setThreadNamePrefix(developConfig.getThreadConfigProperties().getThreadNamePrefix());
        // rejection policy when no thread is available: run the task in the caller's thread
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        return executor;
    }

    /**
     * runs periodic or scheduled tasks
     */
    @Bean(name = "scheduledExecutorService")
    protected ScheduledExecutorService scheduledExecutorService() {
        return new ScheduledThreadPoolExecutor(developConfig.getThreadConfigProperties().getCorePoolSize(),
                new BasicThreadFactory.Builder().namingPattern("schedule-pool-%d").daemon(true).build()) {
            @Override
            protected void afterExecute(Runnable r, Throwable t) {
                super.afterExecute(r, t);
                Threads.printException(r, t);
            }
        };
    }
}

4 Using the ThreadPoolTaskExecutor pool

import com.alibaba.fastjson.JSON;
import com.develop.common.utils.DateUtil;
import com.develop.hydra.adapter.HydraAdapter;
import com.develop.hydra.adapter.util.SplitIpsUtil;
import com.develop.hydra.domain.AdapterDO;
import com.develop.hydra.domain.CreakConfigDO;
import com.develop.hydra.service.AdapterService;
import com.develop.medusa.adapter.MedusaAdapter;
import com.develop.pwddict.domain.PwddictDO;
import com.develop.pwddict.service.PwddictService;
import com.develop.resultcenter.service.ResultService;
import com.develop.strategy.domain.StrategyDO;
import com.develop.strategy.service.StrategyService;
import com.develop.weak.passwd.scanner.constant.KafkaConstant;
import com.develop.weak.passwd.scanner.entity.ScanResult;
import com.develop.weak.passwd.scanner.entity.ScanResultDO;
import com.develop.weak.passwd.scanner.service.WeakPasswdScanService;
import com.develop.weakpasswordcheck.service.WPCScanService;
import lombok.extern.slf4j.Slf4j;
import org.apache.dubbo.config.annotation.DubboService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Service;
import org.springframework.util.StringUtils;
import com.develop.weak.passwd.scanner.entity.TaskGoalDO;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;


import javax.annotation.Resource;
import java.io.*;
import java.text.SimpleDateFormat;
import java.util.*;
import java.util.concurrent.TimeUnit;

@Resource // (name = "clientInboundChannelExecutor") — the explicit name can be removed once the pool bean is configured
private ThreadPoolTaskExecutor threadPoolTaskExecutor;

@Override
public ScanResult<String> scanHosts(List<TaskGoalDO> taskgoals) {
    // wrap the scan result
    ScanResult scanResult = new ScanResult();
    scanResult.setSuccess(true);
    scanResult.setScanId(taskgoals.get(0).getTaskId());
    scanResult.setCode(HttpStatus.OK.value());
    threadPoolTaskExecutor.execute(() -> {
        // TODO host-scan logic; this could be factored out into its own Service
        try {
            for (TaskGoalDO taskgoal:taskgoals) {
                ScanResultDO scanstart = scanstart(taskgoal);
                scanResult.setMessage(scanstart.getDetail());
                scanResult.setPayload(scanstart);
            }
            TimeUnit.SECONDS.sleep(30);
        } catch (InterruptedException e) {
            scanResult.setCode(HttpStatus.INTERNAL_SERVER_ERROR.value());
            scanResult.setSuccess(false);
            e.printStackTrace();
        }
        ListenableFuture<SendResult<String, String>> send
                = kafkaTemplate.send(KafkaConstant.KafkaConfig.WEAK_PASSWD_SCANNER_TOPIC, JSON.toJSONString(scanResult));
        send.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> stringStringSendResult) {
                log.info("send message success....");
            }
            @Override
            public void onFailure(Throwable throwable) {
                log.error("send message error....", throwable);
            }
        });

    });

    return scanResult;
}

Using the CountDownLatch synchronizer

CountDownLatch is a synchronizing counter: it keeps the current thread waiting/blocked until other threads have finished some piece of work. Concretely, the counter starts at a given count and counts down via countDown(); each countDown() call decrements the count by one, and once it reaches 0, every thread that called await() is woken up and stops blocking. A minimal demo follows; the production-style listener from these notes comes after it.
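
A minimal, self-contained demo of the mechanism (my own sketch, separate from the production code below):

import java.util.concurrent.CountDownLatch;

public class CountDownLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " done");
                latch.countDown();      // count - 1
            }).start();
        }
        latch.await();                  // blocks until the count reaches 0
        System.out.println("all workers finished, main continues");
    }
}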

@EventListener
public void eventListener(TaskEvent taskEvent) {
    log.info("listener-taskEvent=========>{}", taskEvent);
    List<TaskGoalDO> taskgoals = (List<TaskGoalDO>)taskEvent.getSource();
    ScanResult scanResult = new ScanResult();
    scanResult.setSuccess(true);

    final CountDownLatch count = new CountDownLatch(taskgoals.size());
    for (TaskGoalDO taskgoal : taskgoals) {

        threadPoolTaskExecutor.execute(() -> {
            // create the goal record
            GoalDO goal = new GoalDO();
            goal.setSystemName(taskgoal.getSystemName());
            goal.setArea(taskgoal.getArea());
            goal.setGoal(taskgoal.getGoal());
            goal.setGmtCreate(new Date());
            // holds the execution results
            List<ResultDO> resultList = new ArrayList<>();
            TaskOnlineDO taskOnline = new TaskOnlineDO();
            if(goalService.save(goal)>0){
                // create the online task
                taskOnline.setAdapterId(taskgoal.getAdapterId());
                taskOnline.setStrategyId(taskgoal.getStrategyId());
                taskOnline.setDetectionObj(goal.getId().toString());
                taskOnline.setReportGeneration(1); // report is downloaded manually
                taskOnline.setThreadCound(taskgoal.getThreadCound());
                taskOnline.setTimeout(taskgoal.getTimeout());
                if(taskOnlineService.save(taskOnline)>0){
                    // run the task
                    taskOnlineService.execute(taskOnline.getId());
                    // query the execution result
                    Map map = new HashMap();
                    map.put("taskId",taskOnline.getId());
                    resultList = resultService.list(map);
                }
            }

            count.countDown();

            // once there is a result, push the scan result to kafka (left commented out)
            /*if(!CollectionUtils.isEmpty(resultList)){
                ResultDO resultDO = resultList.get(0);
                // wrap the payload object scanresultDO
                ScanResultDO scanresultDO = handleResult(taskOnline.getId(),resultDO);
                scanresultDO.setWeakpwdScanId(taskgoal.getWeakpwdScanId());
                scanresultDO.setTaskId(taskgoal.getTaskId());
                scanresultDO.setTaskName(taskgoal.getTaskName());
                scanResult.setCode(0); // the weak-password task is still scanning
                scanResult.setMessage(scanresultDO.getDetail());
                scanResult.setPayload(scanresultDO);
                ListenableFuture<SendResult<String, String>> send
                        = kafkaTemplate.send(KafkaConstant.KafkaConfig.WEAK_PASSWD_SCANNER_TOPIC, JSON.toJSONString(scanResult));
                send.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

                    @Override
                    public void onSuccess(SendResult<String, String> stringStringSendResult) {
                        log.info("send message success....");
                    }

                    @Override
                    public void onFailure(Throwable throwable) {
                        log.error("send message error....", throwable);
                    }
                });
            }*/


        });
    }
    try {
        count.await();
        // wrap the payload object scanresultDO
        ScanResultDO scanresultDO = new ScanResultDO();
        scanresultDO.setWeakpwdScanId(taskgoals.get(0).getWeakpwdScanId());
        scanresultDO.setTaskId(taskgoals.get(0).getTaskId());
        scanresultDO.setTaskName(taskgoals.get(0).getTaskName());
        scanResult.setCode(1); // weak-password scan finished
        scanResult.setMessage("scan task finished");
        scanResult.setPayload(scanresultDO);
        ListenableFuture<SendResult<String, String>> send
                = kafkaTemplate.send(KafkaConstant.KafkaConfig.WEAK_PASSWD_SCANNER_TOPIC, JSON.toJSONString(scanResult));
        send.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> stringStringSendResult) {
                log.info("send message success....");
            }

            @Override
            public void onFailure(Throwable throwable) {
                log.error("send message error....", throwable);
            }
        });
    } catch (InterruptedException e) {
        log.error("weak-password scan interrupted", e);
    }

}

CyclicBarrier

CyclicBarrier is also like a countdown counter: set a number, and when it reaches 0 the configured action runs.

Unlike CountDownLatch, CyclicBarrier's await() method does the decrement internally: calling await() is the "minus one" operation. Suppose the count is set to 5 and each thread calls await() once; if only 4 threads have called it, the program does not finish — it waits for the fifth await() to bring the count to 0, and only then does everything continue running to completion.

CountDownLatch vs. CyclicBarrier

CountDownLatch implements one or N threads waiting until other threads have completed some operation before continuing: it describes one or more threads waiting on other threads. CyclicBarrier implements multiple threads waiting on each other until all of them meet the condition, and only then do they continue with their subsequent work: it describes threads within a group waiting on one another.

CountDownLatch is one-shot, whereas CyclicBarrier can be reset and reused. A sketch follows.
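
A minimal sketch (my own, not from the original notes): three workers call await(), and the barrier action runs once all of them have arrived:

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class CyclicBarrierDemo {
    public static void main(String[] args) {
        CyclicBarrier barrier = new CyclicBarrier(3,
                () -> System.out.println("all three workers arrived, merging results"));
        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    System.out.println("worker " + id + " ready");
                    barrier.await();   // decrements internally; blocks until the count reaches 0
                    System.out.println("worker " + id + " continues");
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}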

Semaphore

Often used for rate limiting.

Set a number of permits; think of a parking lot with that many spaces. Only that many cars can park at once, and the rest queue up and wait. When a resource is released (a car leaves, freeing a space), a waiting thread can use it (a waiting car gets a spot).

Semaphores serve two main purposes: mutual exclusion over multiple shared resources, and controlling the number of concurrent threads.

How it works (see the sketch below):

acquire: when a thread calls acquire, it either successfully obtains a permit (semaphore count - 1), or it keeps waiting until some thread releases a permit, or it times out.

release: increments the semaphore count by 1, then wakes up a waiting thread.
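
A minimal sketch of the parking-lot analogy (my own illustration): six cars compete for three permits:

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SemaphoreDemo {
    public static void main(String[] args) {
        Semaphore slots = new Semaphore(3);   // three parking spaces
        for (int i = 0; i < 6; i++) {
            final int car = i;
            new Thread(() -> {
                try {
                    slots.acquire();          // permits - 1, or block until one is free
                    try {
                        System.out.println("car " + car + " parked");
                        TimeUnit.SECONDS.sleep(1);
                    } finally {
                        slots.release();      // permits + 1, wakes a waiting thread
                        System.out.println("car " + car + " left");
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}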

Using the JDK's ThreadPoolExecutor pool

1 Pool configuration: XxlJobAdminConfig.java

package com.xxl.job.admin.core.conf;

import com.xxl.job.admin.core.scheduler.XxlJobScheduler;
import com.xxl.job.admin.dao.*;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.stereotype.Component;

import javax.annotation.Resource;
import javax.sql.DataSource;

/**
 * xxl-job config
 *
 * @author xuxueli 2017-04-28
 */

@Component
public class XxlJobAdminConfig implements InitializingBean, DisposableBean {

    private static XxlJobAdminConfig adminConfig = null;
    public static XxlJobAdminConfig getAdminConfig() {
        return adminConfig;
    }


    // ---------------------- XxlJobScheduler ----------------------

    private XxlJobScheduler xxlJobScheduler;

    @Override
    public void afterPropertiesSet() throws Exception {
        adminConfig = this;

        xxlJobScheduler = new XxlJobScheduler();
        xxlJobScheduler.init();
    }

    @Override
    public void destroy() throws Exception {
        xxlJobScheduler.destroy();
    }


    // ---------------------- XxlJobScheduler ----------------------

    // conf
    @Value("${xxl.job.i18n}")
    private String i18n;

    @Value("${xxl.job.accessToken}")
    private String accessToken;

    @Value("${spring.mail.username}")
    private String emailUserName;

    @Value("${xxl.job.triggerpool.fast.max}")
    private int triggerPoolFastMax;

    @Value("${xxl.job.triggerpool.slow.max}")
    private int triggerPoolSlowMax;

    @Value("${xxl.job.logretentiondays}")
    private int logretentiondays;

    // dao, service

    @Resource
    private XxlJobLogDao xxlJobLogDao;
    @Resource
    private XxlJobInfoDao xxlJobInfoDao;
    @Resource
    private XxlJobRegistryDao xxlJobRegistryDao;
    @Resource
    private XxlJobGroupDao xxlJobGroupDao;
    @Resource
    private XxlJobLogReportDao xxlJobLogReportDao;
    @Resource
    private JavaMailSender mailSender;
    @Resource
    private DataSource dataSource;


    public String getI18n() {
        return i18n;
    }

    public String getAccessToken() {
        return accessToken;
    }

    public String getEmailUserName() {
        return emailUserName;
    }

    public int getTriggerPoolFastMax() {
        if (triggerPoolFastMax < 200) {
            return 200;
        }
        return triggerPoolFastMax;
    }

    public int getTriggerPoolSlowMax() {
        if (triggerPoolSlowMax < 100) {
            return 100;
        }
        return triggerPoolSlowMax;
    }

    public int getLogretentiondays() {
        if (logretentiondays < 7) {
            return -1;  // must be >= 7; otherwise log cleanup is disabled (-1)
        }
        return logretentiondays;
    }

    public XxlJobLogDao getXxlJobLogDao() {
        return xxlJobLogDao;
    }

    public XxlJobInfoDao getXxlJobInfoDao() {
        return xxlJobInfoDao;
    }

    public XxlJobRegistryDao getXxlJobRegistryDao() {
        return xxlJobRegistryDao;
    }

    public XxlJobGroupDao getXxlJobGroupDao() {
        return xxlJobGroupDao;
    }

    public XxlJobLogReportDao getXxlJobLogReportDao() {
        return xxlJobLogReportDao;
    }

    public JavaMailSender getMailSender() {
        return mailSender;
    }

    public DataSource getDataSource() {
        return dataSource;
    }

}

2 Using the thread pool: JobTriggerPoolHelper.java

package com.xxl.job.admin.core.thread;

import com.xxl.job.admin.core.conf.XxlJobAdminConfig;
import com.xxl.job.admin.core.model.XxlJobLog;
import com.xxl.job.admin.core.trigger.TriggerTypeEnum;
import com.xxl.job.admin.core.trigger.XxlJobTrigger;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * job trigger thread pool helper
 *
 * @author xuxueli 2018-07-03 21:08:07
 */
public class JobTriggerPoolHelper {
    private static Logger logger = LoggerFactory.getLogger(JobTriggerPoolHelper.class);


    // ---------------------- trigger pool ----------------------

    // fast/slow thread pool
    private ThreadPoolExecutor fastTriggerPool = null;
    private ThreadPoolExecutor slowTriggerPool = null;

    public void start() {
        fastTriggerPool = new ThreadPoolExecutor(
                10,
                XxlJobAdminConfig.getAdminConfig().getTriggerPoolFastMax(),
                60L,
                TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(1000),
                new ThreadFactory() {
                    @Override
                    public Thread newThread(Runnable r) {
                        return new Thread(r, "xxl-job, admin JobTriggerPoolHelper-fastTriggerPool-" + r.hashCode());
                    }
                });

        slowTriggerPool = new ThreadPoolExecutor(
                10,
                XxlJobAdminConfig.getAdminConfig().getTriggerPoolSlowMax(),
                60L,
                TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(2000),
                new ThreadFactory() {
                    @Override
                    public Thread newThread(Runnable r) {
                        return new Thread(r, "xxl-job, admin JobTriggerPoolHelper-slowTriggerPool-" + r.hashCode());
                    }
                });
    }


    public void stop() {
        //triggerPool.shutdown();
        fastTriggerPool.shutdownNow();
        slowTriggerPool.shutdownNow();
        logger.info(">>>>>>>>> xxl-job trigger thread pool shutdown success.");
    }


    /**
     * job timeout count
     */
    private volatile long minTim = System.currentTimeMillis() / 60000;
    private volatile ConcurrentMap<Integer, AtomicInteger> jobTimeoutCountMap = new ConcurrentHashMap<>();


    /**
     * add trigger
     */
    public void addTrigger(final int jobId, final TriggerTypeEnum triggerType, final int failRetryCount, final String executorShardingParam, final String executorParam) {

        // choose thread pool
        ThreadPoolExecutor triggerPool_ = fastTriggerPool;
        AtomicInteger jobTimeoutCount = jobTimeoutCountMap.get(jobId);
        // job-timeout 10 times in 1 min
        if (jobTimeoutCount != null && jobTimeoutCount.get() > 10) {
            triggerPool_ = slowTriggerPool;
        }

        // trigger
        triggerPool_.execute(new Runnable() {
            @Override
            public void run() {

                long start = System.currentTimeMillis();

                try {
                    // do trigger
                    XxlJobTrigger.trigger(jobId, triggerType, failRetryCount, executorShardingParam, executorParam);
                } catch (Exception e) {
                    logger.error(e.getMessage(), e);
                } finally {

                    // check timeout-count-map
                    long minTim_now = System.currentTimeMillis() / 60000;
                    if (minTim != minTim_now) {
                        minTim = minTim_now;
                        jobTimeoutCountMap.clear();
                    }

                    // incr timeout-count-map
                    long cost = System.currentTimeMillis() - start;
                    if (cost > 500) {       // job-timeout threshold: 500ms
                        AtomicInteger timeoutCount = jobTimeoutCountMap.putIfAbsent(jobId, new AtomicInteger(1));
                        if (timeoutCount != null) {
                            timeoutCount.incrementAndGet();
                        }
                    }

                }

            }
        });
    }


    // ---------------------- helper ----------------------

    private static JobTriggerPoolHelper helper = new JobTriggerPoolHelper();

    public static void toStart() {
        helper.start();
    }

    public static void toStop() {
        helper.stop();
    }

    /**
     * @param jobId
     * @param triggerType
     * @param failRetryCount        >=0: use this param
     *                              <0: use param from job info config
     * @param executorShardingParam
     * @param executorParam         null: use job param
     *                              not null: cover job param
     */
    public static void trigger(int jobId, TriggerTypeEnum triggerType, int failRetryCount, String executorShardingParam, String executorParam) {
        helper.addTrigger(jobId, triggerType, failRetryCount, executorShardingParam, executorParam);
    }

    /**
     * description: triggerManual - manual trigger <br>
     * version: 1.0 <br>
     * date: 2020/5/11 4:50 PM <br>
     *
     * @param jobId         jobId
     * @param executorParam executor parameters
     * @param xxlJobLog     dispatch log
     * @return void
     * @author: chl <br>
     */
    public static void triggerManual(int jobId, String executorParam, XxlJobLog xxlJobLog) {
        helper.addTriggerManual(jobId, executorParam, xxlJobLog);
    }

    /**
     * description: addTriggerManual - manually triggered dispatch <br>
     * version: 1.0 <br>
     * date: 2020/5/11 4:54 PM <br>
     *
     * @param jobId
     * @param executorParam
     * @param xxlJobLog
     * @return void
     * @author: chl <br>
     */
    public void addTriggerManual(final int jobId, final String executorParam, final XxlJobLog xxlJobLog) {
        // choose thread pool
        ThreadPoolExecutor triggerPool_ = fastTriggerPool;
        AtomicInteger jobTimeoutCount = jobTimeoutCountMap.get(jobId);
        // job-timeout 10 times in 1 min
        if (jobTimeoutCount != null && jobTimeoutCount.get() > 10) {
            triggerPool_ = slowTriggerPool;
        }

        // trigger
        triggerPool_.execute(new Runnable() {
            @Override
            public void run() {

                long start = System.currentTimeMillis();

                try {
                    // do trigger
                    XxlJobTrigger.trigger(jobId, TriggerTypeEnum.MANUAL, -1,
                            null, executorParam, xxlJobLog);

                } catch (Exception e) {
                    logger.error(e.getMessage(), e);
                } finally {

                    // check timeout-count-map
                    long minTim_now = System.currentTimeMillis() / 60000;
                    if (minTim != minTim_now) {
                        minTim = minTim_now;
                        jobTimeoutCountMap.clear();
                    }

                    // incr timeout-count-map
                    long cost = System.currentTimeMillis() - start;
                    if (cost > 500) {       // job-timeout threshold: 500ms
                        AtomicInteger timeoutCount = jobTimeoutCountMap.putIfAbsent(jobId, new AtomicInteger(1));
                        if (timeoutCount != null) {
                            timeoutCount.incrementAndGet();
                        }
                    }

                }

            }
        });


    }


}
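
For reference, a minimal sketch annotating what the ThreadPoolExecutor constructor arguments used above mean (the values mirror fastTriggerPool; the class name PoolParamsDemo is made up for illustration):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolParamsDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10,                              // corePoolSize: threads kept alive even when idle
                200,                             // maximumPoolSize: upper bound once the queue is full
                60L, TimeUnit.SECONDS,           // keepAliveTime for threads above the core size
                new LinkedBlockingQueue<>(1000), // bounded queue: tasks buffer here before extra threads are created
                r -> new Thread(r, "demo-pool-" + r.hashCode())); // names threads, like the xxl-job factory
        pool.execute(() -> System.out.println("task ran on " + Thread.currentThread().getName()));
        pool.shutdown(); // no RejectedExecutionHandler is given, so the default AbortPolicy throws when saturated
    }
}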

Using variable assignment in Postman

Get the verification code

Tests:

pm.test("Set verifyCode success"function () {

    var jsonData = pm.response.json()

    pm.environment.set("verifyCode", jsonData.verifyCode);

    pm.environment.set("uuid", jsonData.uuid);

    // postman.setNextRequest("登录");

});

Login

Body:

{
    "username": "admin",
    "password": "admin123",
    "code": "{{verifyCode}}",
    "uuid": "{{uuid}}"
}

Tests:

pm.test("Set token success"function () {

    var jsonData = pm.response.json()

    pm.environment.set("token", jsonData.token);

});

User info

Tests:

pm.test("get userinfo success"function () {

    var jsonData = pm.response.json()

    !!!jsonData.user == true

});

Weakpasswd-scan

For requests that need a token, set the Authorization Type to Bearer Token.

Sending a SOAP message from Postman

Example URL: http://localhost:8082/iptvServerService/services/iptvUpWService?wsdl

Message body:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://service.henanIptvServer.com">
  <soapenv:Header/>
  <soapenv:Body>
    <ser:invokeService>
      <ser:serviceName>queryOnuInfo1</ser:serviceName>
      <ser:accountId>1850291456</ser:accountId>
      <ser:type>3</ser:type>
    </ser:invokeService>
  </soapenv:Body>
</soapenv:Envelope>

Elasticsearch (ES)

Elasticsearch数据库_elasticsearch是数据库吗-CSDN博客

1 What is Elasticsearch

Like MongoDB/Redis/Memcache, Elasticsearch is a non-relational store. It is a near-real-time search platform: there is only a slight delay between indexing a document and that document becoming searchable. Its enterprise positioning: a scalable, highly available, real-time full-text search and analytics tool exposed through a RESTful API.

2 Installing Elasticsearch

1. Create an es user:

adduser es

You can change the password with passwd es (I did not).

This creates the home directory /home/es.

2. Download elasticsearch-7.3.1-linux-x86_64.tar.gz and put it under /home/es:

cd /home/es

tar -zxvf elasticsearch-7.3.1-linux-x86_64.tar.gz

3. Switch to root and give the es user ownership of /home/es/elasticsearch-7.3.1:

chown -R es /home/es/elasticsearch-7.3.1

4. Switch to the es user and start ES:

su es

cd /home/es/elasticsearch-7.3.1

./bin/elasticsearch

You can add -d to start it in the background.

5. Test whether ES started:

curl localhost:9200

If cluster information comes back as JSON, ES started successfully. You can also try curl http://10.0.3.73:9200.

6. Edit the config file jvm.options:

cd /home/es/elasticsearch-7.3.1/config

vi jvm.options   (change -Xms1g to -Xms512m and -Xmx1g to -Xmx512m)

Then edit config/elasticsearch.yml, un-commenting and setting:

network.host: 0.0.0.0

discovery.seed_hosts: ["10.0.3.73:9200"]

After the changes, restart with ./bin/elasticsearch -d and check curl 10.0.3.73:9200 again; it should start successfully.

3 Common errors and fixes

If startup fails, the common errors are:

Error 1: max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

Fix: run:

sudo sysctl -w vm.max_map_count=262144

Error 2: max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]

Fix: run:

sudo vim /etc/security/limits.conf

and add the following two lines at the bottom of limits.conf (yjclsx is the username from the quoted guide; use the user you created, e.g. es):

yjclsx hard nofile 65536
yjclsx soft nofile 65536

4 Test with Postman after starting ES

Create an index (PUT http://10.0.3.73:9200/library) with body:

{
    "settings": {
        "index": {
            "number_of_shards": 1,
            "number_of_replicas": 0
        }
    }
}

5 Install the head plugin

Elasticsearch7.3学习笔记3- head插件安装和使用_51CTO博客_elasticsearch启动head命令

Download nodejs, phantomjs, and elasticsearch-head ahead of time and place them under /home/es. Give the es user ownership of /home/es/:

#su root

#chown -R es /home/es/

1. Install nodejs

# wget https://nodejs.org/dist/v10.9.0/node-v10.9.0-linux-x64.tar.xz    // download

(The wget step can be skipped if the file is already downloaded; just put it under /home/es.)

# tar xf node-v10.9.0-linux-x64.tar.xz       // extract
# cd node-v10.9.0-linux-x64/                 // enter the extracted directory
# ./bin/node -v                              // check the node version

v10.9.0

The bin directory of the extracted package contains the node and npm commands; use ln to create symlinks (adjust the source path to wherever you extracted node):

ln -s /usr/software/nodejs/bin/npm /usr/local/bin/

ln -s /usr/software/nodejs/bin/node /usr/local/bin/

2. Install and configure phantomjs

wget https://github.com/Medium/phantomjs/releases/download/v2.1.1/phantomjs-2.1.1-linux-x86_64.tar.bz2

(The wget step can be skipped if already downloaded; put the file under /home/es.)

tar -jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2

vim /etc/profile

export PATH=$PATH:/usr/local/phantomjs-2.1.1-linux-x86_64/bin

In my case:

export PATH=$PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin:/home/es/phantomjs-2.1.1-linux-x86_64/bin

# note: keep $PATH at the front of the value

source /etc/profile

3. Install elasticsearch-head

•   git clone git://github.com/mobz/elasticsearch-head.git

•   cd elasticsearch-head

•   npm install -g cnpm --registry=https://registry.npm.taobao.org

(Installing directly can fail because phantomjs cannot be cloned, which is why phantomjs was installed first.)

•   npm run start

•   open http://localhost:9100/

4. Let elasticsearch-head discover and connect to the host by adding to elasticsearch.yml:

http.cors.enabled: true

http.cors.allow-origin: "*"

After restarting ES, open http://10.0.3.73:9100/ in a browser and change the Elasticsearch address field to your own ES URL.

6 Using the Elasticsearch API from Spring Boot

https://www.cnblogs.com/yloved/p/12888138.html

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.2.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>elasticsearch</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <!-- Elasticsearch client: start -->
        <dependency>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>elasticsearch-rest-high-level-client</artifactId>
            <version>7.6.2</version>
        </dependency>
        <!-- if the elasticsearch versions pulled in by elasticsearch-rest-high-level-client differ, import the same versions manually -->
        <dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
            <version>7.6.2</version>
        </dependency>
        <dependency>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>elasticsearch-rest-client</artifactId>
            <version>7.6.2</version>
        </dependency>
        <!-- Elasticsearch client: end -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-configuration-processor</artifactId>
            <optional>false</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-thymeleaf</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
            <version>1.18.10</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-test</artifactId>
            <version>2.3.2.RELEASE</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>5.2.8.RELEASE</version>
            <scope>compile</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

application.yml

elasticsearch:
  hostList: 10.0.3.73:9200 # separate multiple nodes with commas

EsApplication.java

package com.develop;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

/**
 * Application entry point
 *
 * @author chl
 */
@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class})
public class  EsApplication {
    public static void main(String[] args) {
        SpringApplication.run(EsApplication.class, args);
        System.out.println("                                                                       \n" +
                "                                                                       \n" +
                "    ,---,                                   ,--,                       \n" +
                "  .'  .' `\\                               ,--.'|            ,-.----.   \n" +
                ",---.'     \\                              |  | :     ,---.  \\    /  \\  \n" +
                "|   |  .`\\  |               .---.         :  : '    '   ,'\\ |   :    | \n" +
                ":   : |  '  |   ,---.     /.  ./|  ,---.  |  ' |   /   /   ||   | .\\ : \n" +
                "|   ' '  ;  :  /     \\  .-' . ' | /     \\ '  | |  .   ; ,. :.   : |: | \n" +
                "'   | ;  .  | /    /  |/___/ \\: |/    /  ||  | :  '   | |: :|   |  \\ : \n" +
                "|   | :  |  '.    ' / |.   \\  ' .    ' / |'  : |__'   | .; :|   : .  | \n" +
                "'   : | /  ; '   ;   /| \\   \\   '   ;   /||  | '.'|   :    |:     |`-' \n" +
                "|   | '` ,/  '   |  / |  \\   \\  '   |  / |;  :    ;\\   \\  / :   : :    \n" +
                ";   :  .'    |   :    |   \\   \\ |   :    ||  ,   /  `----'  |   | :    \n" +
                "|   ,.'       \\   \\  /     '---\" \\   \\  /  ---`-'           `---'.|    \n" +
                "'---'          `----'             `----'                      `---`    \n" +
                "                                                                       ");
    }
}

ElasticSearchConfig.java

package com.develop.config;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.PropertySource;
import org.springframework.stereotype.Component;

@Component
@PropertySource("classpath:application.yml") // config file location; can be customized
@ConfigurationProperties("elasticsearch") // property prefix
public class ElasticSearchConfig {
    //@Value("${elasticsearch.hostList}")
    private String hostList; // property bound from the config file

    public String getHostList() {
        return hostList;
    }

    public void setHostList(String hostList) {
        this.hostList = hostList;
    }

    @Bean(value = "RestHighLevelClient", destroyMethod = "close")
    public RestHighLevelClient restHighLevelClient() {
        // split the node list on commas
        String[] split = hostList.split(",");
        HttpHost[] httpHosts = new HttpHost[split.length];
        for (int i = 0; i < split.length; i++) {
            // split each node into ip and port on the colon
            String[] split1 = split[i].split(":");
            // "http" is hard-coded here for testing; it could be read from config instead
            httpHosts[i] = new HttpHost(split1[0], Integer.parseInt(split1[1]), "http");
        }
        }
        RestHighLevelClient restHighLevelClient = new RestHighLevelClient(RestClient.builder(httpHosts));
        return restHighLevelClient;
    }
}

ElasticTest.java

package com.develop.test;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.replication.ReplicationResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.Strings;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

@RunWith(SpringRunner.class)
@SpringBootTest
public class ElasticTest {

    @Autowired
    @Qualifier("RestHighLevelClient")
    private RestHighLevelClient client;
    /**
     * Add (index) a document
     * @throws IOException
     */
    @Test
    public void index() throws IOException {
        UserTest userTest = new UserTest();
        userTest.setName("董28");
        userTest.setSex("男");
        // the client does not accept a custom entity object as the document source, so convert it to a Map first
        Map map = entityToMap(userTest);
        System.out.println(map);
        IndexRequest indexRequest = new IndexRequest("posts1")
                .id("1").source(map);
        // asynchronous version
        client.indexAsync(indexRequest, RequestOptions.DEFAULT, new ActionListener<IndexResponse>() {

            @Override
            public void onResponse(IndexResponse indexResponse) {
                System.out.println(indexResponse);
                if (indexResponse.getResult() == DocWriteResponse.Result.CREATED) {
                    // created
                } else if (indexResponse.getResult() == DocWriteResponse.Result.UPDATED) {
                    // updated
                }
                ReplicationResponse.ShardInfo shardInfo = indexResponse.getShardInfo();
                if (shardInfo.getTotal() != shardInfo.getSuccessful()) {
                    //
                }
                if (shardInfo.getFailed() > 0) {
                    for (ReplicationResponse.ShardInfo.Failure failure : shardInfo.getFailures()) {
                        String reason = failure.reason();
                    }
                }

            }

            @Override
            public void onFailure(Exception e) {
                e.printStackTrace();
                System.out.println("exception...");
            }
        });
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        // synchronous version:
        /*IndexRequest indexRequest = new IndexRequest("es","user")
                .id("1").source(map);
        IndexResponse index = client.index(indexRequest, RequestOptions.DEFAULT);
        System.out.println(index);*/

    }

    /**
     * Get a document by index and id
     *
     */
    @Test
    public void get() throws IOException {
        GetRequest getRequest = new GetRequest("posts", "3");
        /*FetchSourceContext fetchSourceContext = new FetchSourceContext(true, new String[]{"sex"}, Strings.EMPTY_ARRAY);
        getRequest.fetchSourceContext(fetchSourceContext)*/
        GetResponse getResponse = client.get(getRequest, RequestOptions.DEFAULT);
        if (getResponse.isExists()) {
            System.out.println(getResponse);
            System.out.println(getResponse.getId());
            System.out.println(getResponse.getSource());
            System.out.println(getResponse.getSourceAsMap());
            UserTest userTest1 = entityConvert(getResponse.getSourceAsString(), UserTest.class);
            System.out.println(userTest1);
            UserTest userTest2 = entityConvert(getResponse.getSource(), UserTest.class);
            System.out.println(userTest2);
        }
    }

    @Test
    public void update() throws IOException {
        UpdateRequest updateRequest = new UpdateRequest("es", "1");
        updateRequest.doc("sex", "女");
        UpdateResponse update = client.update(updateRequest, RequestOptions.DEFAULT);
        System.out.println(update);
    }

    @Test
    public void delete() throws IOException {
        DeleteRequest deleteRequest = new DeleteRequest("posts","3");
        DeleteResponse delete = client.delete(deleteRequest, RequestOptions.DEFAULT);
    }

    /**
     * Search by field value
     * @throws IOException
     */
    @Test
    public void search() throws IOException {
        SearchRequest searchRequest = new SearchRequest("posts");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        // query by matching a single field (term query)
//        searchSourceBuilder.query(QueryBuilders.termQuery("name","董"));
        // select which source fields to return
        searchSourceBuilder.fetchSource(new String[]{"id"}, Strings.EMPTY_ARRAY);
        // sort the results
//        searchSourceBuilder.sort(new FieldSortBuilder("id").order(SortOrder.DESC));
        // aggregate the results
        searchSourceBuilder.aggregation(
                AggregationBuilders
                        .max("maxValue")//命名
                        .field("id"));//指定聚集的字段
        // match all documents
//        searchSourceBuilder.query(QueryBuilders.matchAllQuery());
        // paging; from() starts at 0
//        searchSourceBuilder.from(2).size(2);

        searchRequest.source(searchSourceBuilder);
        SearchResponse search = client.search(searchRequest, RequestOptions.DEFAULT);
        System.out.println(search);
    }

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    private static class UserTest {
        private int id;
        private String name;
        private String sex;
        private Others others = new Others("132@qq.com", "199");
    }

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    private static class Others {
        private String email;
        private String phone;
    }

    /**
     * Convert an entity object to a Map
     * @param object
     * @return
     * @throws JsonProcessingException
     */
    static Map entityToMap(Object object) throws JsonProcessingException {
        Map map = entityConvert(object, HashMap.class);
        return map;
    }

    /**
     * Convert one object to another with the same properties
     * @param object
     * @param clazz
     * @param <T>
     * @return
     * @throws JsonProcessingException
     */
    static <T> T entityConvert(Object object, Class<T> clazz) throws JsonProcessingException {
        ObjectMapper objectMapper = new ObjectMapper();
        String s;
        if (object instanceof String) {
            s = String.valueOf(object);
        } else {
            s = objectMapper.writeValueAsString(object);
        }
        T t = objectMapper.readValue(s, clazz);
        return t;
    }
}

7 Using ES in a project

The popular ELK stack handles log monitoring and alerting: Logstash collects the logs, Elasticsearch stores them, and Kibana turns the data into web pages for users. The current system:

1. Logstash collects the logs produced on the servers.

2. Elasticsearch stores the collected log data; SSH logs, for example, go into indices named like ssh-2020-08-15, all with type doc.

3. Java uses the Elasticsearch API to search the ssh-* indices (type doc) with a rangeQuery(three days ago, now) covering the last three days of logs. Because the data volume is large, the job reads once per interval (6000000L) and pulls results in batches with a scroll cursor. The messageDigest field holds the detailed log message; it is compared against the alert rules, and any record that matches a rule is written to the MySQL database. A minimal sketch of this read loop follows.
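
A sketch of the range-plus-scroll read described in step 3, against the 7.x RestHighLevelClient used earlier. The index pattern ssh-* and the messageDigest field come from the text above; the time field name @timestamp is an assumption and may differ in your Logstash mapping.

import org.elasticsearch.action.search.ClearScrollRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class SshLogScanner {

    public static void scan(RestHighLevelClient client) throws Exception {
        SearchRequest request = new SearchRequest("ssh-*"); // all ssh-yyyy-MM-dd indices
        SearchSourceBuilder source = new SearchSourceBuilder()
                .query(QueryBuilders.rangeQuery("@timestamp") // assumed time field name
                        .gte("now-3d").lte("now"))            // last three days
                .size(1000);                                  // batch size per scroll page
        request.source(source).scroll(TimeValue.timeValueMinutes(5)); // keep the scroll context alive

        SearchResponse response = client.search(request, RequestOptions.DEFAULT);
        String scrollId = response.getScrollId();
        SearchHit[] hits = response.getHits().getHits();
        while (hits != null && hits.length > 0) {
            for (SearchHit hit : hits) {
                Object digest = hit.getSourceAsMap().get("messageDigest");
                // ...compare digest against the alert rules; matches are written to MySQL...
            }
            SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId)
                    .scroll(TimeValue.timeValueMinutes(5));
            response = client.scroll(scrollRequest, RequestOptions.DEFAULT);
            scrollId = response.getScrollId();
            hits = response.getHits().getHits();
        }
        ClearScrollRequest clear = new ClearScrollRequest(); // release the server-side scroll context
        clear.addScrollId(scrollId);
        client.clearScroll(clear, RequestOptions.DEFAULT);
    }
}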

To set up a private Maven repository, use Nexus.

Configuring nginx to solve cross-origin problems

Configure on the 10.0.9.4 server:

cd /usr/local/nginx/conf

vi ccs-web.conf

server {
    listen       10090;
    server_name  ccs-admin;

    location /ccs-web {
        alias /www/ccs-web/;
        index  index.html index.htm;
        try_files $uri $uri/ index.html;
    }

    location /ccs-web/api/ {
        proxy_pass http://10.0.9.39:8080/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
}

cd /usr/local/nginx/conf

vi nginx.conf

http {
    include       mime.types;
    default_type  application/octet-stream;

    include /usr/local/nginx/conf/nsdm.conf;
    include /usr/local/nginx/conf/ccs-web.conf;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;

Reload nginx to apply the changes:

cd /usr/local/nginx/sbin

sudo ./nginx -s reload

Name the front-end build ccs-web and use the relative path ../ccs-web/api as the back-end base URL.

The site is then reachable at http://10.0.9.4:10090/ccs-web/

Java configuration:

# application context path
context-path: /

Vue front-end configuration:

VUE_APP_BASE_API='/api/'

Forward proxy:

(client + proxy) | server

The client's IP stays hidden: the client talks to the proxy, and the proxy fetches the server's data on its behalf. A forward proxy is configured on the client (browser) side and is used to reach the internet through the proxy server.

Reverse proxy:

client | (proxy + server)

The client is unaware of the proxy: no client-side configuration is needed. Requests go to the reverse proxy server, which picks a target server, fetches the data, and returns it to the client. To the outside, the reverse proxy and the target servers look like a single server; only the proxy's address is exposed, and the real servers' IPs stay hidden.

In a forward proxy, the proxy and the client sit in the same LAN and the setup is transparent to the server; in a reverse proxy, the proxy and the server sit in the same LAN and the setup is transparent to the client.

A forward proxy acts for clients, sending and receiving requests on their behalf, so the real client is invisible to the server; a reverse proxy acts for servers, so the real server is invisible to the client.

Mapping direction:

The group IPTV portal needs to jump to xxl_job while the browser keeps showing the group IPTV IP, so the IPTV host must proxy the xxl_job IP.

Group IPTV: http://10.245.5.19:10070/iptv/manage/index

Host running xxl_job: 10.190.1.227

Single-sign-on URL: /xxl-job-admin/toLogin?token=8a7585937b5c4139017fe7f6df731754

So nginx needs to be configured on the 10.245.5.19 server:

# xxl_job
server {
    listen       10104;
    server_name  localhost;

    location ~/ {
        proxy_pass http://10.190.1.227:18888;
        proxy_connect_timeout       60;
        proxy_read_timeout          60;
        proxy_send_timeout          60;
        #proxy_redirect     off;
        proxy_set_header   Host      $http_host;
        #proxy_set_header   X-Real-IP $remote_addr;
        #proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
        #proxy_set_header   X-NginX-Proxy true;
    }
}

nginx proxy_pass, explained

When proxy_pass is used for forwarding, a trailing / on the proxy_pass URL makes it an absolute root path: the matched location prefix is stripped before forwarding. Without the trailing /, it is a relative path and the matched prefix is forwarded to the upstream as well.

Suppose each of the four cases below is accessed as http://192.168.1.1/proxy/test.html.

Case 1:
location /proxy/ {
    proxy_pass http://127.0.0.1/;
}
Proxied to: http://127.0.0.1/test.html

Case 2 (same as case 1, but without the trailing /):
location /proxy/ {
    proxy_pass http://127.0.0.1;
}
Proxied to: http://127.0.0.1/proxy/test.html

Case 3:
location /proxy/ {
    proxy_pass http://127.0.0.1/aaa/;
}
Proxied to: http://127.0.0.1/aaa/test.html

Case 4 (same as case 3, but without the trailing /):
location /proxy/ {
    proxy_pass http://127.0.0.1/aaa;
}
Proxied to: http://127.0.0.1/aaatest.html

Priority order of location rules

In nginx configuration, the priority of matching rules is: exact match > ^~ prefix match > regex match > plain prefix match. When a request URL matches several rules, nginx uses the highest-priority one.

1. Exact match

location = /index.html { ... }

2. ^~ prefix match

location ^~ /static/ { ... }   matches paths starting with /static/ and, when it is the longest such prefix, skips the regex checks

3. Regex match

location ~ /images/.*\.(jpg|png|gif)$ { ... }   ~ matches case-sensitively, ~* case-insensitively

4. Plain prefix match

location /abc { ... }   matches any path starting with /abc

5. Escaping special characters

To match characters such as $, ^, {, }, escape them with a backslash, e.g. \$.

Mongo

Mongo basics

https://www.cnblogs.com/xiaohema/p/8455063.html

https://www.cnblogs.com/gugunan/p/9829924.html

Create a collection (table)

db.createCollection('user_portrait_day',{capped:false,size:6142800,max:1000000})

Insert a document

user0 = {
    'addTime':               '2022-05-19',
    'reportTime':            '2022-05-19',
    'cityCode':              '12501',
    'businessId':            'yuj65748',
    'oltName':               'yuj65748',
    'oltIp':                 'yuj65748',
    'catonTotalDuration':    '5465',
    'catonTotalTime':        '5465',
    'EPGRequestSuccessRate': '5465',
    'EPGRequestAvgTime':     '5465',
    'liveRequestSuccessRate':'5465',
    'liveRequestAvg':        '5465',
    'vodRequestSuccessRate': '5465',
    'vodRequsetAvg':         '5465',
    'RTPPacketLossAvg':      '5465',
    'MDIDFAvg':              '5465',
    'MDIMLRAvg':             '5465',
    'stbMemoryUsageRate':    '5465',
    'stbCpuUseRatePeak':     '5465',
    'liveMosAvg':            '5465',
    'vodMosAvg':             '5465',
    'mosDegradation':        '5465',
    'complaint':             '5465',
    'lightWane':             '5465',
    'speedPoor':             '5465',
    'ponFlow':               '5465',
    'alarmData':             '5465',
    'Col_sum':               '99'
}

db.user_portrait_day.insert(user0)

db.user_portrait_day.find()

Create an index

db.user_portrait_day.createIndex({'reportTime':-1,'cityCode':1,'businessId':1},{'name':'idx_user_portrait_day',background:true})

Query statements

db.stbAll.find();

db.stbAll.find({"stbId": "2153010GGQHYK1512015"});

db.stbAll.aggregate(
    {"$match": {"stbId": "0000030000056800CH011077B0AD7014"}},
    {"$match": {"reportingTime": {"$gte": "20201211000000", "$lte": "20201218000000"}}},
    {"$sort": {"reportingTime": -1}},
    {"$limit": 5}
);

db.stbStart.aggregate(
    {"$match": {"stbId": "0000030000056800CH011077B0AD7014"}},
    {"$match": {"reportingTime": {"$gte": "20201211000000", "$lte": "20201218000000"}}},
    {"$sort": {"reportingTime": -1}},
    {"$limit": 5}
);

db.stbPmview.aggregate(
    {"$match": {"stbId": "0000030000056800CH011077B0AD7014"}},
    {"$match": {"reportingTime": {"$gte": "20201118175725", "$lte": "20201218175727"}}},
    {"$sort": {"reportingTime": -1}},
    {"$limit": 5}
);

db.stbAll.findOne({ "stbId" : "00000442001B06500001B4014222CEEF"})

db.stbAll.insertMany([{"a": "a1", "b": "b1"}, {"a": "a2", "b": "b2"}]);

Update statement

db.t_iptv_showrank3.update(
    { reportTime: "2022-04-01" },
    { $set: { reportTime: "2022-05-01" } },
    { multi: true }
);

Using Mongo from Java

Fuzzy matching with a regex such as ^...*?; the "i" option makes the match case-insensitive (org.springframework.data.mongodb.core.query).

/**
 * Viewing records within the last month
 * @param startTimeStr
 * @param endTimeStr
 * @param stbId
 * @param pageIndex
 * @param pageSize
 * @return
 */
@Override
public String recentVideo(String startTimeStr, String endTimeStr, String stbId, int pageIndex, int pageSize) {
   String startTimeStr_mongo = "";
   String endTimeStr_mongo="";
   try {
      Date parse = sdf.parse(startTimeStr);
      startTimeStr_mongo = sdf1.format(parse);
      Date parse1 = sdf.parse(endTimeStr);
      endTimeStr_mongo = sdf1.format(parse1);
   } catch (ParseException e) {
      e.printStackTrace();
   }
   logger.info("startTime: " + startTimeStr_mongo);
   logger.info("endTime: " + endTimeStr_mongo);
   Query query = new Query();
   query.addCriteria(where("reportingTime").gte(Long.parseLong(startTimeStr_mongo)).lte(Long.parseLong(endTimeStr_mongo)));
   query.addCriteria(where("stbId").is(stbId));
   query.addCriteria(where("programAddress").regex("^(igmp|rtsp).*?", "i"));//i可忽略大小写
   /*query.addCriteria(new Criteria().orOperator(
         Criteria.where("programAddress").regex("^rtsp.*?")));*/
   //query.addCriteria(where("programAddress").regex("igmp"+".*$"));
   long sumcount = mongoTemplate2.count(query,"stbPmview");
   logger.info("查询mongodb总数:"+sumcount);
   query.with(new Sort(Direction.DESC, "reportingTime"));
   List<STBPmView> list = mongoTemplate2.find(query.skip((pageIndex - 1) * pageSize).limit(pageSize), STBPmView.class,"stbPmview");

   JSONObject jo = new JSONObject();
   jo.put("total", sumcount);
   jo.put("rows", JSONArray.fromObject(list));
   return jo.toString();
}

One-click jar deployment with Alibaba Cloud Toolkit

Alibaba Cloud Toolkit一键自动部署jar包-CSDN博客

Cloud Toolkit 之 Command 编写指南-阿里云开发者社区
