User similarity based on MinHashLSH


Background:

1. A user base of 3,000,000+.

2. Each user has 10 features; find every pair of users whose 10 feature values have at least 3 in common (see the threshold note right after this list).

3. Resources are relatively limited: at most 10 TB of memory and 4,000 cores are available.
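A quick back-of-the-envelope check on what requirement 2 implies (my own note, not part of the original task statement): with exactly 10 features per user, "at least 3 features in common" means the intersection is at least 3 and the union is at most 10 + 10 - 3 = 17, so the Jaccard similarity |A ∩ B| / |A ∪ B| is at least 3/17 ≈ 0.176. This is where the jarkd > 0.17 filter in the code below comes from.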

Development stages:

1. Cartesian product: crossJoin

As everyone knows, this approach is a non-starter: a 3,000,000 × 3,000,000 comparison is hopelessly slow, as the sketch below illustrates.
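For reference, a minimal sketch of the brute-force route (my own illustration, assuming a DataFrame named data with columns puid and feature_in like the one built in the MinHashLSH versions below); it only makes the N × N blow-up concrete and was never viable at this scale:

import pyspark.sql.functions as fn
from pyspark.sql.functions import col

# Compare every user against every other user and keep pairs sharing >= 3 of their 10 features.
pairs = (data.alias("a")
         .crossJoin(data.alias("b"))
         .filter(col("a.puid") < col("b.puid"))
         .withColumn("common",
                     fn.size(fn.array_intersect(col("a.feature_in"), col("b.feature_in"))))
         .filter(col("common") >= 3))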

2. MinHashLSH

2.1 Initial version

2.2 Revised version (code not included in this post; it differs little from the initial version, and the efficiency gain is negligible)

2.3 Optimized version, which is what the project finally uses (judging from the code below, the key change is building sparse vectors directly with CountVectorizer instead of materializing a dense one-hot array per user)

Resource allocation:

Due to cluster limits, each executor can be allocated at most 55 GB of memory (the number of cores is not limited), so the first attempt used 100 executors with 5 cores and 50 GB each. It ran out of memory, executors were killed and the whole job failed. Increasing the number of partitions via spark.sql.shuffle.partitions and lowering the cores per executor to 2 and even 1 still did not help, so the initial and revised versions were abandoned.

The optimized version uses 50 executors, each with 2 cores and 55 GB of memory.
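For completeness, the optimized job can be submitted with roughly the settings below (a sketch only; the master, deploy mode, shuffle-partition count, script name and date argument are placeholders, not values recorded from the original job):

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 50 \
  --executor-cores 2 \
  --executor-memory 55g \
  --conf spark.sql.shuffle.partitions=2000 \
  minhash_lsh_similarity.py 20210420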

The code is as follows:

2.1 Initial version

#!/usr/bin/env python
# coding=utf-8

import sys
import time

from pyspark.sql import SparkSession
import pyspark.sql.functions as fn
from pyspark.sql.functions import col
from pyspark.sql.types import ArrayType, IntegerType, DoubleType

from pyspark.ml.feature import StringIndexer, MinHashLSH
from pyspark.ml.linalg import Vectors, VectorUDT


def one_hot(line, ncat):
    """Return a 0/1 list of length ncat with 1 at every index present in line."""
    temp = [0] * ncat
    for i in line:
        temp[int(i)] = 1
    return temp


def get_arraylen(line):
    """Return a list of 1.0s with the same length as line (values for the sparse vector)."""
    return len(line) * [1.0]


def main():
    t0 = time.time()
    """
    Runs daily; the command-line argument is the date of the input table (used as the partition p_<date>).

    """
    if len(sys.argv) >= 2:
        sub_date = sys.argv[1]
        pri_date = "p_" + sub_date
    else:
        raise ValueError("missing date argument")

    print("sud_date: ", sub_date, pri_date)
    """
    data is the input; the snippet below is only for reference, in production it is read from Hive
    
    """

    spark = SparkSession.builder.appName("xxx").getOrCreate()
    sc = spark.sparkContext
    data = spark.createDataFrame([
        (20210420, "230155", "qimei_xxx"),
        (20210420, "295085", "ip_xxx"),
        (20210420, "666373", "qimei_xxx1")],
        ["tdbank_imp_date", "puid", "feature"])

    data.show(10, False)
    """
    +---------------+-------+--------------------+
    |tdbank_imp_date|   puid|             feature|
    +---------------+-------+--------------------+
    |       20210416|4855420|      qimei_h_4_471b|
    |       20210416|4855420|       mac_q_5_4C637|
    |       20210416|4855420|        osversion_11|
    |       20210416|4855420|phonetype_Redmi K...|
    |       20210416|4855420| qimei_q_5_4855420_d|   For regularity: when a user's value for a feature is empty, the feature value is set to <feature>_<puid>_invalid-label
    |       20210416|4855420|        mac_h_4_A2F6|
    |       20210416|4855420|  devicename_picasso|
    |       20210416|4855420|      clientver_8140|
    |       20210416|4855420|postipb_read_116....|
    |       20210416|4855420|      manufact_Redmi|
    +---------------+-------+--------------------+
    """



    print(data.rdd.getNumPartitions())
    print("all data is ", data.count())
    t1 = time.time()
    print("used time 1 ", t1 - t0)

    """ 将特征值转为数字标签 """
    indexer = StringIndexer(inputCol="feature", outputCol="categoryIndex")
    data = indexer.fit(data).transform(data)
    data = data.withColumn("categoryIndex", data.categoryIndex.cast("int"))
    """ 获取所有特征值的去重个数  """
    ncat = data.select("feature").distinct().count()
    print("nact is {}".format(ncat))

    """ 转换数据结构 """
    udf_one_hot = fn.udf(lambda line, ncat: one_hot(line, ncat), ArrayType(IntegerType()))
    data = data.groupby("puid").agg(fn.array_sort(fn.collect_set(data.categoryIndex)).alias("feature_in"))
    data = data.withColumn("onehot_fea", udf_one_hot(data.feature_in, fn.lit(ncat)))
    data.show(10)
    """
    feature_in is the array of numeric labels for a user's feature values; onehot_fea is an array of length ncat whose entries are 1 at the indices listed in feature_in and 0 elsewhere
    +-------+--------------------+--------------------+
    |   puid|          feature_in|          onehot_fea|
    +-------+--------------------+--------------------+
    |4855420|[8, 16, 21, 84, 1...|[0, 0, 0, 0, 0, 0...|
    |8775248|[1, 4, 33, 34, 45...|[0, 1, 0, 0, 1, 0...|
    |2780732|[6, 9, 17, 452, 9...|[0, 0, 0, 0, 0, 0...|
    |1410210|[5, 9, 15, 336, 4...|[0, 0, 0, 0, 0, 1...|
    |1307992|[1, 3, 9, 421, 70...|[0, 1, 0, 1, 0, 0...|
    |8029916|[2, 5, 8, 14, 444...|[0, 0, 1, 0, 0, 1...|
    |5966117|[7, 8, 17, 62, 24...|[0, 0, 0, 0, 0, 0...|
    |7395239|[0, 2, 85, 204, 2...|[1, 0, 1, 0, 0, 0...|
    |6368951|[0, 1, 6, 112, 16...|[1, 1, 0, 0, 0, 0...|
    |1801428|[7, 11, 18, 252, ...|[0, 0, 0, 0, 0, 0...|
    +-------+--------------------+--------------------+
    
    """
    """ 
    【udf_arraylen = fn.udf(lambda s: get_arraylen(s), ArrayType(DoubleType()))
    data = data.withColumn("l", udf_arraylen(data.feature_in))
    data_udf = fn.udf(lambda count, slices, value: Vectors.sparse(count, slices, value), VectorUDT())
    data = data.withColumn("sparse_data", data_udf(fn.lit(ncat), data.feature_in, data.l))
    print("---- before data :----")
    data.show(10)】
    The bracketed code above can be dropped: the array produced by udf_one_hot() can be converted into a Vector and used as the algorithm's input instead, and the result is the same. Since there is essentially no efficiency difference, that code is not included here; contact me if you need it and have no idea how to approach it.
      
    """
    udf_arraylen = fn.udf(lambda s: get_arraylen(s), ArrayType(DoubleType()))
    data = data.withColumn("l", udf_arraylen(data.feature_in))
    data_udf = fn.udf(lambda count, slices, value: Vectors.sparse(count, slices, value), VectorUDT())
    data = data.withColumn("sparse_data", data_udf(fn.lit(ncat), data.feature_in, data.l))
    print("---- before data :----")
    data.show(10)
    """
    sparse_data: the sparse feature vector
    +-------+--------------------+--------------------+--------------------+--------------------+
    |   puid|          feature_in|          onehot_fea|                   l|         sparse_data|
    +-------+--------------------+--------------------+--------------------+--------------------+
    |4855420|[8, 16, 21, 84, 1...|[0, 0, 0, 0, 0, 0...|[1.0, 1.0, 1.0, 1...|(5011,[8,16,21,84...|
    |8775248|[1, 4, 33, 34, 45...|[0, 1, 0, 0, 1, 0...|[1.0, 1.0, 1.0, 1...|(5011,[1,4,33,34,...|
    |2780732|[6, 9, 17, 452, 9...|[0, 0, 0, 0, 0, 0...|[1.0, 1.0, 1.0, 1...|(5011,[6,9,17,452...|
    |1410210|[5, 9, 15, 336, 4...|[0, 0, 0, 0, 0, 1...|[1.0, 1.0, 1.0, 1...|(5011,[5,9,15,336...|
    |1307992|[1, 3, 9, 421, 70...|[0, 1, 0, 1, 0, 0...|[1.0, 1.0, 1.0, 1...|(5011,[1,3,9,421,...|
    |8029916|[2, 5, 8, 14, 444...|[0, 0, 1, 0, 0, 1...|[1.0, 1.0, 1.0, 1...|(5011,[2,5,8,14,4...|
    |5966117|[7, 8, 17, 62, 24...|[0, 0, 0, 0, 0, 0...|[1.0, 1.0, 1.0, 1...|(5011,[7,8,17,62,...|
    |7395239|[0, 2, 85, 204, 2...|[1, 0, 1, 0, 0, 0...|[1.0, 1.0, 1.0, 1...|(5011,[0,2,85,204...|
    |6368951|[0, 1, 6, 112, 16...|[1, 1, 0, 0, 0, 0...|[1.0, 1.0, 1.0, 1...|(5011,[0,1,6,112,...|
    |1801428|[7, 11, 18, 252, ...|[0, 0, 0, 0, 0, 0...|[1.0, 1.0, 1.0, 1...|(5011,[7,11,18,25...|
    +-------+--------------------+--------------------+--------------------+--------------------+
    
    data.show(1, False)
    4855420|[8, 16, 21, 84, 146, 494, 1572, 2769, 3336, 4825]|[the one-hot array, omitted]|[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]|(5011,[8,16,21,84,146,494,1572,2769,3336,4825],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])
    """
    print("----------------------")

    mh = MinHashLSH(inputCol="sparse_data", outputCol="hashes", numHashTables=70)
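    # More hash tables (numHashTables=70 above) lower the chance of missing similar
    # pairs in the approximate join, at the cost of extra computation and memory per row.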
    model = mh.fit(data)
    res = model.transform(data)
    print("Approximately joining dfA and dfB on distance smaller than 0.3:")
    """
    approxSimilarityJoin:
    threshold is the cutoff on the output distance column (distCol: JaccardDistance);
    with threshold=1, rows whose JaccardDistance is greater than 1 are filtered out.
    """
    ns = model.approxSimilarityJoin(data, data, threshold=1, distCol="JaccardDistance") \
        .select(col("datasetA.puid").alias("account_id"),
                col("datasetB.puid").alias("pair_id"),
                (1-col("JaccardDistance")).alias("jarkd"))
    """ 
    Filter out rows where account_id = pair_id, and avoid keeping both (account_id, pair_id) and (pair_id, account_id) at the same time.

    Checking the numbers shows that the returned JaccardDistance is (union minus intersection) / union:
    each user has 10 feature values, so with an intersection of 2 the union is 18 and the similarity should be 0.1111111111111111, yet the result is 0.8888888888888888, hence the 1 - JaccardDistance above.
    +----------+-------+------------------+
    |account_id|pair_id|   JaccardDistance|
    +----------+-------+------------------+
    |    349012| 826662|0.9473684210526316|
    |    523415| 347975|0.8888888888888888|
    |    670110| 186066|0.8888888888888888|
    |    144489| 136390|0.8235294117647058|
    |    187163| 130920|              0.75|
    |    415934| 038163|0.8888888888888888|
    |    196361| 798520|0.8888888888888888|
    |    145462| 511206|0.8235294117647058|
    |    050439| 352543|0.8888888888888888|
    |    382843| 160124|0.6666666666666667|
    +----------+-------+------------------+
    """
    ps = ns.filter(ns.account_id > ns.pair_id).filter(ns.jarkd > 0.17)
    ps.show(10)
    t2 = time.time()
    print("used time 2 ", t2 - t1)
    print("数据过滤完了-")



if __name__ == "__main__":
    main()

2.3 Optimized version

#!/usr/bin/env python
# coding=utf-8

import sys
import time

from pyspark.sql import SparkSession
import pyspark.sql.functions as fn
from pyspark.sql.functions import col

from pyspark.ml.feature import StringIndexer, CountVectorizer, MinHashLSH
from pyspark.ml.linalg import Vectors


def one_hot(line, ncat):
    temp = [0] * ncat
    for i in line:
        temp[int(i)] = 1
    return temp


def arr_to_vec(arr):
    return Vectors.dense(arr)


def get_arraylen(line):
    return len(line) * [1.0]


def main():
    t0 = time.time()

    if len(sys.argv) >= 2:
        sub_date = sys.argv[1]
        pri_date = "p_" + sub_date
    else:
        raise ValueError("missing date argument")
    print("sub_date: ", sub_date, pri_date)

    spark = SparkSession.builder.appName("xxx").getOrCreate()
    sc = spark.sparkContext
    data = spark.createDataFrame([
        (20210420, "230155", "qimei_xxx"),
        (20210420, "295085", "ip_xxx"),
        (20210420, "666373", "qimei_xxx1")],
        ["tdbank_imp_date", "puid", "feature"])

    data.show(10,False)
    print("all data is ", data.count())
    t1 = time.time()
    print("used time 1 ", t1 - t0)
    """ 原字符串转数字字符串,目的减少数据大小,如果数据不大,可以忽略此步骤 """
    indexer = StringIndexer(inputCol="feature", outputCol="categoryIndex")
    data = indexer.fit(data).transform(data)
    data = data.withColumn("categoryIndex", data.categoryIndex.cast("string"))
    ncat = data.select("feature").distinct().count()

    print("nact is {}".format(ncat))
    """  数据转换,将一个用特征聚合为一行 """
    data = data.groupby("puid").agg(fn.array_sort(fn.collect_set(data.categoryIndex)).alias("feature_in"))
    data.show(10, False)

    """
    +------+----------------------------------------------------------------------------+
    |puid  |feature_in                                                                  |
    +------+----------------------------------------------------------------------------+
    |391682|[1.0, 12.0, 144602.0, 175238.0, 1854.0, 1994.0, 4.0, 45241.0, 952.0, 997.0] |
    |643803|[1.0, 1276.0, 2.0, 2256.0, 307.0, 321.0, 49770.0, 5.0, 59129.0, 787.0]      |
    |556523|[1.0, 12.0, 126684.0, 133357.0, 148.0, 3.0, 303.0, 375.0, 409.0, 48367.0]   |
    |797570|[116424.0, 14.0, 18.0, 1840.0, 201767.0, 6.0, 67216.0, 824.0, 87.0, 89.0]   |
    |085702|[0.0, 1.0, 13.0, 13726.0, 1825.0, 1954.0, 212334.0, 2185.0, 27425.0, 277.0] |
    |941807|[1.0, 12.0, 13540.0, 198640.0, 19945.0, 2241.0, 295.0, 342.0, 4.0, 4213.0]  |
    |331001|[1111.0, 123214.0, 1661.0, 197694.0, 3.0, 3592.0, 48.0, 543.0, 7350.0, 9.0] |
    |428344|[1.0, 13.0, 1534.0, 172596.0, 1820.0, 203.0, 2545.0, 3604.0, 5525.0, 5872.0]|
    |174960|[0.0, 111153.0, 119519.0, 1932.0, 1940.0, 2.0, 2915.0, 5844.0, 6430.0, 9.0] |
    |670580|[0.0, 1776.0, 18890.0, 297.0, 3.0, 62.0, 66.0, 7.0, 89683.0, 95278.0]       |
    +------+----------------------------------------------------------------------------+
    """

    """ 结构转换 """
    cv = CountVectorizer(inputCol='feature_in', outputCol="features")
    model = cv.fit(data)
    result = model.transform(data)
    mh = MinHashLSH(inputCol="features", outputCol="hashes")
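    # numHashTables is left at its default (1) here, unlike the 70 tables used in the
    # initial version: less work per row, but a higher chance of missing similar pairs.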
    model_mh = mh.fit(result)
    model_mh.transform(result)
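    # The transform above is not assigned to anything; approxSimilarityJoin will add
    # the missing "hashes" column itself if the input dataset has not been transformed.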
    result.show(10, False)
    """
    +------+----------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
    |puid  |feature_in                                                                  |features                                                                                          |
    +------+----------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
    |391682|[1.0, 12.0, 144602.0, 175238.0, 1854.0, 1994.0, 4.0, 45241.0, 952.0, 997.0] |(229227,[1,4,12,953,1005,1854,1981,39388,144156,210236],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])|
    |643803|[1.0, 1276.0, 2.0, 2256.0, 307.0, 321.0, 49770.0, 5.0, 59129.0, 787.0]      |(229227,[1,2,5,307,321,785,1273,2266,34460,155683],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])     |
    |556523|[1.0, 12.0, 126684.0, 133357.0, 148.0, 3.0, 303.0, 375.0, 409.0, 48367.0]   |(229227,[1,3,12,148,303,376,410,40533,134438,140706],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])   |
    |797570|[116424.0, 14.0, 18.0, 1840.0, 201767.0, 6.0, 67216.0, 824.0, 87.0, 89.0]   |(229227,[6,14,18,87,88,829,1850,128161,152435,179155],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])  |
    |085702|[0.0, 1.0, 13.0, 13726.0, 1825.0, 1954.0, 212334.0, 2185.0, 27425.0, 277.0] |(229227,[0,1,13,278,1844,1956,2191,12367,37866,170883],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]) |
    |941807|[1.0, 12.0, 13540.0, 198640.0, 19945.0, 2241.0, 295.0, 342.0, 4.0, 4213.0]  |(229227,[1,4,12,295,342,2287,4154,13232,23730,176528],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])  |
    |331001|[1111.0, 123214.0, 1661.0, 197694.0, 3.0, 3592.0, 48.0, 543.0, 7350.0, 9.0] |(229227,[3,9,48,543,1103,1654,3549,7841,88613,142030],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])  |
    |428344|[1.0, 13.0, 1534.0, 172596.0, 1820.0, 203.0, 2545.0, 3604.0, 5525.0, 5872.0]|(229227,[1,13,202,1518,1816,2525,3589,5805,6112,199099],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])|
    |174960|[0.0, 111153.0, 119519.0, 1932.0, 1940.0, 2.0, 2915.0, 5844.0, 6430.0, 9.0] |(229227,[0,2,9,1917,1944,2881,6267,7135,69982,202606],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])  |
    |670580|[0.0, 1776.0, 18890.0, 297.0, 3.0, 62.0, 66.0, 7.0, 89683.0, 95278.0]       |(229227,[0,3,7,62,66,297,1783,16787,94528,95521],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])       |
    +------+----------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+
    
    """

    print("Approximately joining dfA and dfB on distance smaller than 0.3:")

    ns = model_mh.approxSimilarityJoin(result, result, threshold=1, distCol="JaccardDistance") \
        .select(col("datasetA.puid").alias("account_id"),
                col("datasetB.puid").alias("pair_id"),
                (1-col("JaccardDistance")).alias("jarkd"))
    ps = ns.filter(ns.account_id > ns.pair_id).filter(ns.jarkd > 0.17)
    ps.show(10)
    t2 = time.time()
    print("used time 2 ", t2 - t1)
    print("数据过滤完了-")


if __name__ == "__main__":
    main()

 
