Project Case

User Login Behavior Risk Control System

Background

In recent years, with the rapid rise of internet finance, regulators have issued documents calling for stronger risk control over online transactions and encouraging the use of big-data analysis and user behavior modeling to build and refine transaction risk detection models. Current big-data risk control, however, still suffers from limited effectiveness and accuracy. A risk control approach based on analyzing user login behavior, which fuses multiple features and multiple models and processes data intelligently, can improve the accuracy of risk prediction and better fits the needs of risk control in the information age.

Technical Implementation Elements

The technical problem to be solved is to provide a risk control method based on user login behavior analysis, covering input-pattern risk identification, login-location risk identification, password-retry risk identification, and device-source risk identification.

  • Input-pattern risk identification

(figure omitted)

  • Login-location risk identification

(figure omitted)

  • Password-retry risk identification

(figure omitted)

  • Device-source risk identification

(figure omitted)

Risk Evaluation Architecture

  • Evaluation architecture

(figure omitted)

  • Computation architecture

(figure omitted)

Flume Distributed Log Collection

Introduction

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large volumes of log data. Flume provides a simple, flexible architecture built on streaming log flows. It is robust and fault tolerant, with tunable reliability mechanisms and many failover and recovery mechanisms. This Flume architecture is used here to feed real-time, online analysis of the log stream. Flume allows custom data senders to be plugged into the logging system to collect data, and it can perform simple processing on the data and write it to various (customizable) data receivers.

Architecture

(figure omitted)

Setting Up the Flume Runtime Environment

  • Java Runtime Environment - Java 1.8 or later
[root@CentOS ~]# tar -zxf apache-flume-1.9.0-bin.tar.gz -C /usr/

Configuration File Structure

# Declare the components
<Agent>.sources = <Source1> <Source2>
<Agent>.sinks = <Sink1> <Sink2>
<Agent>.channels = <Channel1> <Channel2>

# Configure the components
<Agent>.sources.<Source>.<someProperty> = <someValue>
<Agent>.channels.<Channel>.<someProperty> = <someValue>
<Agent>.sinks.<Sink>.<someProperty> = <someValue>

# Wire the components together
<Agent>.sources.<Source>.channels = <Channel1> <Channel2> ...
<Agent>.sinks.<Sink>.channel = <Channel1>

Quick Start

(figure omitted)

  • First target machine
[root@CentOS apache-flume-1.9.0-bin]# vi conf/demo01.properties
# Declare the components
a1.sources = s1
a1.sinks = sk1
a1.channels = c1

# Configure the components
a1.sources.s1.type = TAILDIR
a1.sources.s1.filegroups = f1
a1.sources.s1.filegroups.f1 = /root/logs/userLoginFile.*

a1.channels.c1.type = memory

a1.sinks.sk1.type = avro
a1.sinks.sk1.hostname = 192.168.111.133
a1.sinks.sk1.port = 44444

# Wire the components together
a1.sources.s1.channels = c1
a1.sinks.sk1.channel = c1

  • Second target machine (192.168.111.133)
[root@CentOS apache-flume-1.9.0-bin]# vi conf/demo01.properties
# Declare the components
a1.sources = s1
a1.sinks = sk1
a1.channels = c1

# Configure the components
a1.sources.s1.type = avro
a1.sources.s1.bind = 192.168.111.133
a1.sources.s1.port = 44444

a1.channels.c1.type = memory

a1.sinks.sk1.type = file_roll
a1.sinks.sk1.sink.directory = /root/file_roll
a1.sinks.sk1.sink.rollInterval = 0

# Wire the components together
a1.sources.s1.channels = c1
a1.sinks.sk1.channel = c1


  • Start

    • Start the second machine (the downstream agent) first
    [root@CentOS apache-flume-1.9.0-bin]# ./bin/flume-ng agent --conf conf/ --conf-file conf/demo01.properties  --name a1

    • Then start the first machine
    [root@CentOS apache-flume-1.9.0-bin]# ./bin/flume-ng agent --conf conf/ --conf-file conf/demo01.properties  --name a1

    Once both agents are running, lines appended to files matching /root/logs/userLoginFile.* on the first machine are forwarded over Avro and end up as files under /root/file_roll on the second machine.
    

Avro Source (important)

  • An Avro Sink is typically used to write its output directly into an Avro Source. This setup usually means Flume is collecting local log files; the architecture generally looks like the figure above, and in that case the application server normally has to be deployed on the same physical host as the agent (server-side log collection).
  • Alternatively, the client calls the SDK exposed by Flume and sends data directly to the Avro Source (client/mobile side).
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.9.0</version>
</dependency>
import java.util.Properties;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class AvroClientDemo {
    public static void main(String[] args) throws EventDeliveryException {
        Properties props = new Properties();
        // Use the load-balancing client so events are spread across the listed hosts
        props.put("client.type", "default_loadbalance");
        props.put("hosts", "h1 h2 h3");
        // In this demo all three aliases point at the same agent; in production they would differ
        String host1 = "192.168.111.133:44444";
        String host2 = "192.168.111.133:44444";
        String host3 = "192.168.111.133:44444";
        props.put("hosts.h1", host1);
        props.put("hosts.h2", host2);
        props.put("hosts.h3", host3);
        props.put("host-selector", "random"); // or round_robin

        RpcClient client = RpcClientFactory.getInstance(props);
        Event event = EventBuilder.withBody("1 zhangsan true 28".getBytes());
        client.append(event);

        client.close();
    }
}

Avro Source | Memory Channel | Kafka Sink

[root@CentOS apache-flume-1.9.0-bin]# vi conf/demo02.properties

# Declare the components
a1.sources = s1
a1.sinks = sk1
a1.channels = c1

# Configure the components
a1.sources.s1.type = avro
a1.sources.s1.bind = 192.168.111.132
a1.sources.s1.port = 44444

a1.channels.c1.type = memory

a1.sinks.sk1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sk1.kafka.bootstrap.servers = 192.168.111.132:9092
a1.sinks.sk1.kafka.topic = topic01
a1.sinks.sk1.flumeBatchSize = 20
a1.sinks.sk1.kafka.producer.acks = 1
a1.sinks.sk1.kafka.producer.linger.ms = 1

# Wire the components together
a1.sources.s1.channels = c1
a1.sinks.sk1.channel = c1
[root@CentOS apache-flume-1.9.0-bin]# ./bin/flume-ng agent --conf conf/ --conf-file conf/demo02.properties  --name a1

Note: the correct property is a1.sinks.sk1.flumeBatchSize; the official documentation mistakenly writes it as a1.sinks.sk1.kafka.flumeBatchSize.

Integrating Flume with log4j

<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.9.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flume.flume-ng-clients</groupId>
    <artifactId>flume-ng-log4jappender</artifactId>
    <version>1.9.0</version>
</dependency>
  • log4j.properties
log4j.appender.flume = org.apache.flume.clients.log4jappender.LoadBalancingLog4jAppender
log4j.appender.flume.Hosts = 192.168.111.132:44444 192.168.111.132:44444 192.168.111.132:44444
log4j.appender.flume.Selector = ROUND_ROBIN
log4j.appender.flume.MaxBackoff = 30000

log4j.logger.com.baizhi = DEBUG,flume
  • Test code
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
public class TestLog {
    private static Log log= LogFactory.getLog(TestLog.class);
    public static void main(String[] args) {
        log.debug("你好!_debug");
        log.info("你好!_info");
        log.warn("你好!_warn");
        log.error("你好!_error");
    }
}

Integrating Spring Boot, Flume, and logback

  • Add logback.xml to the Spring Boot project
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender" >
        <encoder>
            <pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
      <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
         <fileNamePattern>logs/userLoginFile-%d{yyyyMMdd}.log</fileNamePattern>
         <maxHistory>30</maxHistory>
      </rollingPolicy>
      <encoder>
         <pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m%n</pattern>
         <charset>UTF-8</charset>
      </encoder>
    </appender>
    
    <!-- Console output log level -->
    <root level="ERROR">
         <appender-ref ref="STDOUT" />
    </root>
    
    <!-- With additivity=false, the logs are not also written to the parent appenders -->
    <logger name="com.baizhi.tests" level="INFO" additivity="false">
        <appender-ref ref="FILE" />
        <appender-ref ref="STDOUT" />
    </logger>

</configuration>
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestSpringBootLog {
    private static final Logger LOG = LoggerFactory.getLogger(TestSpringBootLog.class);

    public static void main(String[] args) {
        LOG.info("-----------------------");
    }
}
Integrating Flume + logback

(figure omitted)

  • Add the Flume SDK for the current version to the Spring Boot project
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.9.0</version>
</dependency>
  • Add a Flume appender implementation to the project's logback.xml. Note that FlumeLogstashV1Appender is not part of flume-ng-sdk; it comes from a separate, third-party logback-to-Flume appender library that also needs to be on the classpath.
<appender name="flume" class="com.gilt.logback.flume.FlumeLogstashV1Appender">
    <flumeAgents>
        192.168.111.132:44444,
        192.168.111.132:44444,
        192.168.111.132:44444
    </flumeAgents>
    <flumeProperties>
        connect-timeout=4000;
        request-timeout=8000
    </flumeProperties>
    <batchSize>1</batchSize>
    <reportingWindow>1000</reportingWindow>
    <reporterMaxThreadPoolSize>150</reporterMaxThreadPoolSize>
    <reporterMaxQueueSize>102400</reporterMaxQueueSize>
    <additionalAvroHeaders>
        myHeader=myValue
    </additionalAvroHeaders>
    <application>smapleapp</application>
    <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m%n</pattern>
    </layout>
</appender>
Writing a Custom Appender

import java.nio.charset.Charset;
import java.util.Properties;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Layout;
import ch.qos.logback.core.UnsynchronizedAppenderBase;

public class BZFlumeLogAppender extends UnsynchronizedAppenderBase<ILoggingEvent> {
    private String flumeAgents;
    protected Layout<ILoggingEvent> layout;
    private static RpcClient rpcClient;
    @Override
    protected void append(ILoggingEvent eventObject) {
       String body= layout!= null? layout.doLayout(eventObject):eventObject.getFormattedMessage();
        if(rpcClient==null){
           rpcClient=buildRpcClient();
        }
        Event event= EventBuilder.withBody(body,Charset.forName("UTF-8"));
        try {
            rpcClient.append(event);
        } catch (EventDeliveryException e) {
            e.printStackTrace();
        }
    }

    public void setFlumeAgents(String flumeAgents) {
        this.flumeAgents = flumeAgents;
    }

    public void setLayout(Layout<ILoggingEvent> layout) {
        this.layout = layout;
    }
    private   RpcClient buildRpcClient(){
        Properties props = new Properties();

        int i = 0;
        for (String agent : flumeAgents.split(",")) {
            String[] tokens = agent.split(":");
            props.put("hosts.h" + (i++), tokens[0] + ':' + tokens[1]);
        }
        StringBuffer buffer = new StringBuffer(i * 4);
        for (int j = 0; j < i; j++) {
            buffer.append("h").append(j).append(" ");
        }
        props.put("hosts", buffer.toString());

        if(i > 1) {
            props.put("client.type", "default_loadbalance");
            props.put("host-selector", "round_robin");
        }

        props.put("backoff", "true");
        props.put("maxBackoff", "10000");

        return RpcClientFactory.getInstance(props);
    }
}
<appender name="bz" class="com.baizhi.flume.BZFlumeLogAppender">
    <flumeAgents>
        192.168.111.132:44444,192.168.111.132:44444
    </flumeAgents>
    <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m</pattern>
    </layout>
</appender>

Flume to HDFS (Static Batch Processing)

Collect the log files under a directory into HDFS, and delete each log file once it has been collected (used in batch-processing jobs).

spooldir source, jdbc channel, HDFS Sink

# Declare the components
a1.sources = s1
a1.sinks = sk1
a1.channels = c1

# Configure the components
a1.sources.s1.type = spooldir
a1.sources.s1.spoolDir = /root/spooldir
a1.sources.s1.deletePolicy = immediate
a1.sources.s1.includePattern = ^.*\.log$

a1.channels.c1.type = jdbc

a1.sinks.sk1.type = hdfs
a1.sinks.sk1.hdfs.path= hdfs:///flume/%y-%m-%d/
a1.sinks.sk1.hdfs.filePrefix = events-
a1.sinks.sk1.hdfs.useLocalTimeStamp = true
a1.sinks.sk1.hdfs.rollInterval = 0
a1.sinks.sk1.hdfs.rollSize = 0 
a1.sinks.sk1.hdfs.rollCount = 0
a1.sinks.sk1.hdfs.fileType = DataStream

# Wire the components together
a1.sources.s1.channels = c1
a1.sinks.sk1.channel = c1

Interceptors & Channel Selectors

(figure omitted)

Log-splitting example:

Requirement: only the user module's log stream is collected. Records that need risk evaluation are sent to the evaluatetopic topic, and the remaining user-module records are sent to the usertopic topic. In the configuration below, the regex_filter interceptor keeps only events whose body contains UserController, the regex_extractor interceptor copies EVALUATE or SUCCESS from the body into a header named type, and the multiplexing channel selector routes each event to channel c1 or c2 based on that header.

# Declare the components
a1.sources = s1
a1.sinks = sk1 sk2
a1.channels = c1 c2

# Configure the components
a1.sources.s1.type = avro
a1.sources.s1.bind = 192.168.111.132
a1.sources.s1.port = 44444

# Interceptors
a1.sources.s1.interceptors = i1 i2
a1.sources.s1.interceptors.i1.type = regex_filter
a1.sources.s1.interceptors.i1.regex = .*UserController.*
a1.sources.s1.interceptors.i1.excludeEvents = false

a1.sources.s1.interceptors.i2.type = regex_extractor
a1.sources.s1.interceptors.i2.regex = .*(EVALUATE|SUCCESS).*
a1.sources.s1.interceptors.i2.serializers = s1
a1.sources.s1.interceptors.i2.serializers.s1.name = type

a1.channels.c1.type = memory
a1.channels.c2.type = memory

a1.sinks.sk1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sk1.kafka.bootstrap.servers = 192.168.111.132:9092
a1.sinks.sk1.kafka.topic = evaluatetopic
a1.sinks.sk1.flumeBatchSize = 20
a1.sinks.sk1.kafka.producer.acks = 1
a1.sinks.sk1.kafka.producer.linger.ms = 1

a1.sinks.sk2.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sk2.kafka.bootstrap.servers = 192.168.111.132:9092
a1.sinks.sk2.kafka.topic = usertopic
a1.sinks.sk2.flumeBatchSize = 20
a1.sinks.sk2.kafka.producer.acks = 1
a1.sinks.sk2.kafka.producer.linger.ms = 1

# Channel selector for splitting the stream
a1.sources.s1.selector.type = multiplexing
a1.sources.s1.selector.header = type
a1.sources.s1.selector.mapping.EVALUATE = c1
a1.sources.s1.selector.mapping.SUCCESS = c2
a1.sources.s1.selector.default = c2

# Wire the components together
a1.sources.s1.channels = c1 c2
a1.sinks.sk1.channel = c1
a1.sinks.sk2.channel = c2

Sink Processor

(figure omitted)

The configuration below groups sk1 and sk2 into a single sink group with a load_balance processor, so events from channel c1 are distributed across the two Kafka sinks in round-robin order.

# Declare the components
a1.sources = s1
a1.sinks = sk1 sk2 
a1.channels = c1

# Group sk1 and sk2 into one sink group
a1.sinkgroups = g1 
a1.sinkgroups.g1.sinks = sk1 sk2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin

# Configure the source
a1.sources.s1.type = avro
a1.sources.s1.bind = 192.168.111.132
a1.sources.s1.port = 44444

# Configure the sinks
a1.sinks.sk1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sk1.kafka.bootstrap.servers = 192.168.111.132:9092
a1.sinks.sk1.kafka.topic = evaluatetopic
a1.sinks.sk1.flumeBatchSize = 20
a1.sinks.sk1.kafka.producer.acks = 1
a1.sinks.sk1.kafka.producer.linger.ms = 1

a1.sinks.sk2.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.sk2.kafka.bootstrap.servers = 192.168.111.132:9092
a1.sinks.sk2.kafka.topic = usertopic
a1.sinks.sk2.flumeBatchSize = 20
a1.sinks.sk2.kafka.producer.acks = 1
a1.sinks.sk2.kafka.producer.linger.ms = 1

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.transactionCapacity = 1

# Wire the source and sinks to the channel
a1.sources.s1.channels = c1 
a1.sinks.sk1.channel = c1
a1.sinks.sk2.channel = c1

Publishing a RESTful Service with Spring Boot + MyBatis

  • Base dependencies
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.5.RELEASE</version>
</parent>
<dependencies>
    <!-- Web starter -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- MyBatis dependencies -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.47</version>
    </dependency>

    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
        <version>2.0.1</version>
    </dependency>
    <!-- File upload -->
    <dependency>
        <groupId>commons-io</groupId>
        <artifactId>commons-io</artifactId>
        <version>2.6</version>
    </dependency>
    <dependency>
        <groupId>commons-fileupload</groupId>
        <artifactId>commons-fileupload</artifactId>
        <version>1.4</version>
    </dependency>

    <!-- JUnit tests -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>

</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

  • Configure application.properties
server.port=8088
server.servlet.context-path=/UserApplication

spring.servlet.multipart.max-file-size=100MB
spring.servlet.multipart.enabled=true
# Do not forget this setting
spring.servlet.multipart.location=E:/uploadfiles

Front End Based on the jQuery Plugin Library EasyUI (page rendering, pure Ajax)

A custom jQuery plugin is used to validate the form input data.

Login Risk Evaluation (Code Implementation)

  • Input anomaly detection

Compare the timing vector of the current input with the user's historical vectors: compute the distance between the current vector and the mean of the historical vectors, and if it clearly exceeds a threshold derived from the pairwise distances among the historical vectors, the input is treated as anomalous.

(figure omitted)
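The evaluators below all extend a common RiskEvaluate type that is not shown in this article (the password evaluator imports it from com.baizhi.evaluate). A minimal sketch of what it might look like, assuming nothing beyond the evaluate(Any, Any): (String, Boolean) contract implied by the overrides below:

trait RiskEvaluate {
  // Each evaluator returns a pair: (risk-type tag, whether this check flags a risk)
  def evaluate(v1: Any, v2: Any): (String, Boolean)
}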

import scala.collection.mutable.ListBuffer

class InputVectorEvaluate extends RiskEvaluate {
  /**
    *
    * @param currentInputVector: the input-timing vector of the current login
    * @param historyVector: the user's historical input-timing vectors
    * @return
    */
  def doEvaluate(currentInputVector: Array[Double], historyVector: ListBuffer[Array[Double]]): (String, Boolean) = {

    var n = historyVector.length
    // compute the mean of the historical vectors
    var avgVector = new Array[Double](currentInputVector.length)
    for (i <- 0 until currentInputVector.length) {
      avgVector(i) = historyVector.map(v => v(i)).reduce(_ + _) / historyVector.length
    }
    // pairwise distances between the n historical vectors
    var distances = new ListBuffer[Double]
    for (i <- historyVector; j <- historyVector; if (i != j)) { // distance between any two points
      // i != j must hold; compute the distance between i and j
      var total: Double = 0
      for (item <- 0 until avgVector.length) {
        total += Math.pow((i(item) - j(item)), 2)
      }
      // record the distance
      distances += Math.sqrt(total)
      // drop i from historyVector so each pair is only measured once
      historyVector -= i
    }
    // sort the pairwise distances
    distances = distances.sortBy(distance => distance)

    // distance between the current vector and the mean vector
    var curentDistance: Double = 0
    for (item <- 0 until avgVector.length) {
      curentDistance += Math.pow((avgVector(item) - currentInputVector(item)), 2)
    }
    curentDistance = Math.sqrt(curentDistance)
    // pick the comparison threshold from the sorted pairwise distances
    var threshold: Double = distances((n * (n - 1)) / 6)

    println("distances:" + distances.map(t => t + "").reduce(_ + "," + _))
    println("avgVector:" + avgVector.map(t => t + "").reduce(_ + "," + _))
    println("currentInputVector:" + currentInputVector.map(t => t + "").reduce(_ + "," + _))

    println("threshold: " + threshold + " curentDistance: " + curentDistance)
    ("FORM_INPUT", curentDistance > threshold)
  }

  override def evaluate(v1: Any, v2: Any): (String, Boolean) = {
    doEvaluate(v1.asInstanceOf[Array[Double]], v2.asInstanceOf[ListBuffer[Array[Double]]])
  }
}
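A hedged usage sketch for the evaluator above; the timing values are invented purely for illustration (e.g. seconds spent per form field), and the demo object name is hypothetical:

import scala.collection.mutable.ListBuffer

object InputVectorEvaluateDemo {
  def main(args: Array[String]): Unit = {
    // a few historical input-timing vectors for the same user
    val history = ListBuffer(
      Array(1.0, 2.0, 3.0),
      Array(1.1, 2.2, 2.9),
      Array(0.9, 1.8, 3.1),
      Array(1.2, 2.1, 3.0)
    )
    // a markedly different typing pattern for the current login
    val current = Array(5.0, 9.0, 0.5)

    val (riskType, isRisk) = new InputVectorEvaluate().evaluate(current, history)
    println(riskType + " risky=" + isRisk)
  }
}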
  • Password evaluation

Compute the cosine similarity between the current password and each of the user's historical passwords: each password is turned into a character-count vector over a shared character bag, the cosine of the angle between the current vector and each historical vector is computed as (A·B) / (|A|·|B|), and that angle is used to judge whether the input looks anomalous.

(figure omitted)

(figure omitted)

import java.util

import com.baizhi.evaluate.RiskEvaluate

import scala.collection.mutable.ListBuffer

class PasswordEvaluate extends RiskEvaluate{


  /**
    *
    * @param currentPawssword: the password entered for the current login
    * @param historyPasswords: the user's historical passwords
    * @return
    */
  def doEvaluate(currentPawssword: String, historyPasswords: ListBuffer[String]): (String, Boolean) = {
    val wordBag = historyPasswords.flatMap(password=> password.toCharArray).distinct.toList.sortBy(c=>c)
    println("词袋:"+wordBag.mkString(","))
    var historyPwddwordVectors=new ListBuffer[Array[Int]]
    for(pwd<-historyPasswords){
      val vector = convertString2Vectors(wordBag,pwd)
      historyPwddwordVectors += vector
    }
    val currentVector = convertString2Vectors(wordBag,currentPawssword)
    var simlarity:Double=0.0
    for(v <- historyPwddwordVectors){
      var s:Double=calculateSimlarity(currentVector,v)
      println("s:"+s)
      if(s>simlarity){ // keep the maximum similarity
        simlarity=s
      }
    }
    if(simlarity < 0.95){
      return  ("PASSWORD_INPUT",true)
    }
    return ("PASSWORD_INPUT",false)
  }
  private  def calculateSimlarity(c: Array[Int], v: Array[Int]): Double = {
       var vectorMutli=0.0
       for(i <- 0 until c.length){
         vectorMutli +=c(i)*v(i)
       }
       return vectorMutli/(Math.sqrt(c.map(item=>item*item).reduce(_+_))* Math.sqrt(v.map(item=>item*item).reduce(_+_)))
  }
  private def convertString2Vectors(wordBag:List[Char],pwd:String):Array[Int]={
      val charMap = new util.HashMap[Char,Int]()
      for(c<-pwd.toCharArray){
        if(charMap.containsKey(c)){
          charMap.put(c,charMap.get(c)+1)
        }else{
          charMap.put(c,1)
        }
      }
      val vector = new Array[Int](wordBag.length)
      for(i <- 0 until wordBag.length){
        var count=0
        if(charMap.containsKey(wordBag(i))){
          count=charMap.get(wordBag(i))
        }else{
          count=0
        }
        vector(i)=count
      }
      vector
  }

  override def evaluate(v1: Any, v2: Any): (String, Boolean) = {
    doEvaluate(v1.asInstanceOf[String],v2.asInstanceOf[ListBuffer[String]])
  }
}
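A hedged usage sketch for PasswordEvaluate; the passwords are made up and the demo object name is hypothetical:

import scala.collection.mutable.ListBuffer

object PasswordEvaluateDemo {
  def main(args: Array[String]): Unit = {
    // hypothetical history of the user's previous passwords
    val history = ListBuffer("admin123", "admin1234", "admin12345")

    // a password close to the history yields a high cosine similarity and is not flagged
    println(new PasswordEvaluate().evaluate("admin123", history))
    // a password sharing almost no characters with the history yields a low similarity and is flagged
    println(new PasswordEvaluate().evaluate("abc98765", history))
  }
}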

  • Login displacement evaluation (IP-based)

Compute the great-circle distance between the current login location and the previous one, derive the implied travel speed from the time between the two logins, and flag the login as risky when that speed is implausibly high (above 310 km/h in the code below).
class LoginLocationEvaluate extends RiskEvaluate{

  def doEvaluate(currentLogin: Location, lastLogin: Location): (String, Boolean) = {
    // great-circle distance (in km) between the two login points
    var distance=getDistanceOfMeter(currentLogin.latitude,currentLogin.longtitude,lastLogin.latitude,lastLogin.longtitude)
    // implied travel speed in km/h (timestamps are in milliseconds)
    var speed=distance/((currentLogin.timestamp-lastLogin.timestamp)*1.0/(1000*3600))
    println("speed:"+speed)
    // flag the login if the implied speed exceeds 310 km/h
    return ("FORM_INPUT",speed>310)
  }

  def getDistanceOfMeter(lat1: Double, lng1: Double, lat2: Double, lng2: Double): Double = {
      val a = rad(lat1) - rad(lat2)
      val b = rad(lng1) - rad(lng2)
      println(a+" "+b)
      var s = 2 * Math.asin(Math.sqrt(Math.pow(Math.sin(a / 2), 2) + Math.cos( rad(lat1)) * Math.cos( rad(lat2)) * Math.pow(Math.sin(b / 2), 2)))
      println(s)
      s = s * EARTH_RADIUS
      s = Math.round(s * 10000) / 10000;
      return s
  }

  private def rad(d: Double) = d * Math.PI / 180.0

  /**
    * Earth radius: 6378.137 km
    */
  private val EARTH_RADIUS = 6378.137

  override def evaluate(v1: Any, v2: Any): (String, Boolean) = {
    doEvaluate(v1.asInstanceOf[Location],v2.asInstanceOf[Location])
  }
}
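A hedged usage sketch for LoginLocationEvaluate. The Location type is not defined in this article; a hypothetical definition with the fields the evaluator reads (longtitude, latitude, timestamp) is included here, and the coordinates are rough values for Beijing and Shanghai:

// hypothetical Location shape, matching the fields used by the evaluator
case class Location(longtitude: Double, latitude: Double, timestamp: Long)

object LoginLocationEvaluateDemo {
  def main(args: Array[String]): Unit = {
    val oneHour = 60L * 60 * 1000
    // previous login roughly in Beijing, current login one hour later roughly in Shanghai (~1000 km apart)
    val last    = Location(116.40, 39.90, System.currentTimeMillis() - oneHour)
    val current = Location(121.47, 31.23, System.currentTimeMillis())

    // the implied speed is far above 310 km/h, so the check flags the login
    println(new LoginLocationEvaluate().evaluate(current, last))
  }
}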

  • Device source evaluation (see the sketch below)
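The article names device-source identification as one of the checks but does not include a dedicated implementation for it. A minimal, hypothetical sketch under the same RiskEvaluate contract, assuming the check simply asks whether the current device fingerprint (for example a device ID or User-Agent string) has ever appeared in the user's login history; the class name and the DEVICE_SOURCE tag are assumptions:

import scala.collection.mutable.ListBuffer

// Hedged sketch only: flag the login when the current device identifier
// has never been seen in the user's historical device list.
class DeviceEvaluate extends RiskEvaluate {

  def doEvaluate(currentDevice: String, historyDevices: ListBuffer[String]): (String, Boolean) = {
    val known = historyDevices.contains(currentDevice)
    ("DEVICE_SOURCE", !known)
  }

  override def evaluate(v1: Any, v2: Any): (String, Boolean) = {
    doEvaluate(v1.asInstanceOf[String], v2.asInstanceOf[ListBuffer[String]])
  }
}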
