(13-4-02) Clothing Recommendation System: Implementing the Recommendation Models (2): ResNet-Based Image Recommendation Model

12.6.3  ResNet-Based Image Recommendation Model

The product recommendation and ranking file predict_model.py discussed above also calls functional modules from the file resnet_image_based_model.py. This file implements a ResNet-based image recommendation model. The implementation of resnet_image_based_model.py proceeds as follows:

(1) Create the class resnet_based_prevence. Its main job is to implement the ResNet-based image recommendation model, covering data processing, feature engineering, and model definition and compilation. By calling its methods you can generate the data used to train the ensemble models, apply the feature transformations, and build the image recommendation model. First, define the constructor __init__(self, is_training, config_path = CONFIGURATION_PATH), which initializes the class attributes:

  1. is_training: a Boolean indicating whether the model is in training mode.
  2. config_path: the path of the configuration file, defaulting to CONFIGURATION_PATH.

The corresponding implementation code is as follows:

class resnet_based_prevence: 

    def __init__(self, is_training , config_path = CONFIGURATION_PATH):

        self.is_training = is_training 
        self.config_path = config_path       
        self.ensemble_model_thresholds = {}
        self.all_models = []
        self.merge_stack_model = None
        self.model_thresholds = 0.5

(2) Write the method generate_data_for_nth_ensemble_model(self, train_tran, ensemble_model_number, pos_neg_ratio), which generates the training data for the n-th ensemble model. Its parameters are:

  1. train_tran: the training dataset.
  2. ensemble_model_number: the index of the ensemble model.
  3. pos_neg_ratio: the ratio between positive and negative samples; in the code it is used as the number of negative samples drawn per positive sample.

The corresponding implementation code is as follows (a toy illustration of the per-user slicing logic follows the code):

    def generate_data_for_nth_ensemble_model(self, train_tran, ensemble_model_number, pos_neg_ratio):
    
        #Split -ve and +ve sample from dataset
        train_tran_pos = train_tran[train_tran.label == 1]
        #train_tran_pos.user_id.nunique()
        train_tran_neg = train_tran[train_tran.label == 0]

        #Count number of +ve sample we have for each user based on that we will get -ve sample for each user for a given ensemble_model_number
        train_tran_neg = (train_tran_neg.merge((train_tran_pos[['user_id','label']]
                                                .groupby('user_id')['label']
                                                .count()
                                                .reset_index(name = 'cnt')
                                            ), 
                                            on = 'user_id', 
                                            how = 'inner')
                        )
        train_tran_neg['total_neg_sample_per_ensemble'] = train_tran_neg['cnt'] * pos_neg_ratio

        #train_tran_neg.groupby('user_id').label.count()

        total_neg_sample_per_ensemble = len(train_tran_pos) * pos_neg_ratio

        #Generate -ve sample based on the total +ve sample we have per user
        df_train_tran = pd.DataFrame()

        group_neg_user = train_tran_neg.groupby('user_id')
        ensemble_number = ensemble_model_number

        for i, x in enumerate(group_neg_user.groups):

            grp_key = group_neg_user.get_group(x)    

            total_neg_sample_per_user = grp_key.iloc[0,grp_key.columns.get_loc("total_neg_sample_per_ensemble")]

            data_start_index = ensemble_number * total_neg_sample_per_user
            data_end_index = data_start_index + total_neg_sample_per_user

            if i == 0:        
                df_train_tran = grp_key[['user_id','item_id','label','image_path']][data_start_index: data_end_index] #grp_key.nth(list(range(0, 10)))
            
            else:
                df_train_tran = pd.concat([df_train_tran, 
                                        (grp_key[['user_id','item_id','label','image_path']][data_start_index: data_end_index])],
                                        axis = 0)
        
        df_train_tran = pd.concat([train_tran_pos[['user_id','item_id','label','image_path']], df_train_tran], axis = 0)
        #df_train_tran = df_train_tran.sort_values(by = 'user_id')
        #Shuffle that will help for data pass for training
        df_train_tran = df_train_tran.sample(frac = 1).reset_index(drop = True)
        
        del [train_tran_pos, train_tran_neg]
        gc.collect()
        
        return df_train_tran
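To make the sampling scheme concrete: for every user the negative samples are split into consecutive, non-overlapping chunks, and ensemble model n consumes the n-th chunk, with the chunk size equal to that user's positive count times pos_neg_ratio. The toy example below is a minimal sketch (not part of the project code) that only illustrates this slice arithmetic:

import pandas as pd

# One user with 2 positives and pos_neg_ratio = 3, so each ensemble model
# receives 2 * 3 = 6 of that user's negatives.
pos_neg_ratio = 3
cnt_positives = 2
total_neg_sample_per_ensemble = cnt_positives * pos_neg_ratio

negatives = pd.DataFrame({'item_id': range(100, 130)})   # 30 candidate negatives

for ensemble_model_number in range(3):
    start = ensemble_model_number * total_neg_sample_per_ensemble
    end = start + total_neg_sample_per_ensemble
    chunk = negatives.iloc[start:end]
    print(f'ensemble {ensemble_model_number}: rows {start}..{end - 1} ->',
          chunk['item_id'].tolist())

# ensemble 0: rows 0..5   -> [100, 101, 102, 103, 104, 105]
# ensemble 1: rows 6..11  -> [106, 107, 108, 109, 110, 111]
# ensemble 2: rows 12..17 -> [112, 113, 114, 115, 116, 117]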

(3) Write the method feature_eng(self, X), which performs feature engineering: it preprocesses the input data and applies the feature transformations. The parameter X is the input data. The corresponding implementation code is as follows:

    def feature_eng(self, X):

        log.write_log('Transform mapping customer/article to user/item started...', log.logging.DEBUG)
        engg = Pipeline( steps = [
                                        ('transform_article_mapping', transform_article_mapping(config_path = self.config_path)),

                                        ('transform_customer_mapping', transform_customer_mapping(hash_conversion = True, config_path = self.config_path)),
        ])

        X = engg.fit_transform(X)  
        log.write_log('Transform mapping customer/article to user/item completed...', log.logging.DEBUG)

        log.write_log(f'Map article id to image path started for {str(X.shape[0])}...', log.logging.DEBUG)
        X['image_path'] = list(map(get_image_path, X['article_id']))
        log.write_log('Map article id to image path completed...', log.logging.DEBUG)

        X = X[X.image_path != ""]

        return X
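The helper get_image_path comes from utils.images_utils and is not shown in this excerpt. Judging by the filter X[X.image_path != ""] above, it returns an empty string when no image exists for an article. A hypothetical minimal sketch, assuming an H&M-style layout in which images are grouped by the first three digits of the zero-padded article id under an IMAGE_ROOT folder, could look like this:

import os

IMAGE_ROOT = 'data/images'   # assumption: root folder of the article images

def get_image_path(article_id):
    # Hypothetical stand-in for utils.images_utils.get_image_path: build the
    # expected file path and return "" when the image file does not exist.
    article_id = str(article_id).zfill(10)
    path = os.path.join(IMAGE_ROOT, article_id[:3], article_id + '.jpg')
    return path if os.path.exists(path) else ""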

(4) Write the method model_def(self, parms, unique_user, unique_item), which defines the model architecture and compiles the model. Its parameters are:

  1. parms: a dictionary of model hyperparameters.
  2. unique_user: the number of unique users.
  3. unique_item: the number of unique items.

The corresponding implementation code is as follows:

    def model_def(self, parms, unique_user, unique_item):

        #HyperParamaters
        SEED = parms["SEED"]
        L2_reg = parms["L2_reg"]
        CHANNEL = parms["CHANNEL"]
        IMAGE_SIZES = parms["IMAGE_SIZES"]
        EMBEDDING_U = parms["EMBEDDING_U"]
        EMBEDDING_I = parms["EMBEDDING_I"]
        NUM_UNIQUE_ITEMS = unique_item
        NUM_UNIQUE_USERS = unique_user
        EMBEDDING_IMG = parms["EMBEDDING_IMG"]
        LEARNING_RATE = parms["LEARNING_RATE"]
        GLOBAL_BATCH_SIZE = parms["GLOBAL_BATCH_SIZE"]
        INTER_EMBEDDING_I =  parms["INTER_EMBEDDING_I"]
        
        TOTAL_TRAINABLE_LAYERS = 176   # total number of layers in the ResNet50 backbone
        NUMBER_NON_TRAINABLE_LAYERS = TOTAL_TRAINABLE_LAYERS - parms["FINE_TUNE_LAYERS"]

        num_replicas_in_sync = 8 #TPU
        LEARNING_RATE = LEARNING_RATE * num_replicas_in_sync
                                    
        #***************************** Optimizer, Loss Function and Metric *************************

        init_lr = LEARNING_RATE 
        #print(f"Learning rate(lr): {init_lr}")
        params = {}
        params['alpha'] = 0.8 
        params['num_replicas_in_sync'] = num_replicas_in_sync #TPU
        params['global_batch_size'] = GLOBAL_BATCH_SIZE
        params['from_logits'] = True

        fn_loss = WeightLossBinaryCrossentropy(param = params)

        fn_optimizer = Adam(learning_rate = init_lr) 

        #***************************** Define Model ************************* 
        weight_initializers = RandomUniform(minval = NUM_UNIQUE_USERS-1, maxval = 1, seed = SEED)


        #***************************** User Embedding ************************* 
        User_Input = Input(shape = (1,), name = 'User_Input')  

        
        User_Embed = Embedding(input_dim = NUM_UNIQUE_USERS, 
                                input_length = 1,
                                output_dim = EMBEDDING_U,
                                embeddings_initializer = weight_initializers,
                                name = 'User_Embed'
                                )(User_Input)
        
        User_Embed_Batch_Normalize = BatchNormalization(name = 'User_Embed_Batch_Normalize')(User_Embed) 
        user_embedding = Flatten(name = "user_embedding")(User_Embed_Batch_Normalize) 
        

        #***************************** Image Embedding ************************* 
        Image_Input = Input(shape = ((IMAGE_SIZES, IMAGE_SIZES, CHANNEL)), name = 'Image_Input')
        model_RESENT50 = ResNet50(weights = 'imagenet', include_top = False, 
                                    pooling = 'avg',
                                    input_shape = (IMAGE_SIZES, IMAGE_SIZES, CHANNEL)
                                    )
        model_RESENT50.trainable = True  
        number_of_layers = len(model_RESENT50.layers)
        print('Number of layers: ', number_of_layers)

        # Freeze all the layers before the `fine_tune_at` layer
        for layer in range(0, NUMBER_NON_TRAINABLE_LAYERS):
            model_RESENT50.layers[layer].trainable =  False

        non_trainable_layers_cnt = 0
        trainable_layers_cnt = 0

        for layer in range(0,len(model_RESENT50.layers)):

            if model_RESENT50.layers[layer].trainable == True:
                trainable_layers_cnt += 1

            elif model_RESENT50.layers[layer].trainable == False:
                non_trainable_layers_cnt += 1

        print('Number of non trainable layers in ResNet.....', non_trainable_layers_cnt) 
        print('Number of trainable layers in ResNet.....', trainable_layers_cnt)


        Image_RESNET_Output = model_RESENT50(Image_Input)
        
        Image_Embed_Dense = Dense(units = EMBEDDING_IMG,
                                    activation = 'relu',
                                    kernel_regularizer = l2(L2_reg), 
                                    kernel_initializer = weight_initializers,
                                    name = 'Image_Embed_Dense')(Image_RESNET_Output)
        Image_Embed  = BatchNormalization(name = 'image_embedding')(Image_Embed_Dense) 


        #***************************** Item embedding *************************

        Item_Input = Input(shape = (1,), name = 'Item_Input')
        
        Item_Embed = Embedding(input_dim = NUM_UNIQUE_ITEMS,
                                input_length = 1,
                                output_dim = INTER_EMBEDDING_I, 
                                embeddings_initializer = weight_initializers,
                                name = "Item_Embed"
                                )(Item_Input)
        
        Item_Embed_Batch_Normalize = BatchNormalization(name = 'Item_Embed_Batch_Normalize')(Item_Embed)
        Item_Embed_ReShape = Flatten(name = 'Item_Embed_Reshape')(Item_Embed_Batch_Normalize)
        
        Item_Image_Embedding = Concatenate(axis = 1, name = 'Item_Image_Concate')([Item_Embed_ReShape, Image_Embed])
        Item_Image_Embed_Dense = Dense(units = EMBEDDING_I, 
                                        activation = 'relu',
                                        kernel_regularizer = l2(L2_reg), 
                                        kernel_initializer = weight_initializers,
                                        name = 'Item_Image_Embed_Dense')(Item_Image_Embedding) 
        item_embedding = BatchNormalization(name = 'item_embedding')(Item_Image_Embed_Dense)


        #***************************** Model *************************
        dot_user_item = Multiply(name = 'mul_user_item')([user_embedding, item_embedding])
        logits = tf.math.reduce_sum(dot_user_item, 1, name = 'reduce_sum_logits')
        y_hat = logits

        Img_Rec = Model(inputs = [User_Input, Item_Input, Image_Input], outputs = [y_hat], name = 'Image_Recommendation')

        Img_Rec.compile(optimizer = fn_optimizer, 
                       loss = fn_loss
                    )

        return Img_Rec
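The hyperparameter dictionary parms is read from the YAML configuration in the train() method shown in step (6). The real values live in the project's configuration file and are not reproduced in this excerpt; the sketch below only lists the keys that model_def() actually consumes, with illustrative placeholder values:

# Illustrative placeholders only: the real values come from the
# 'image-based-ensemble-models' -> 'param' section of the YAML configuration.
parms = {
    "SEED": 42,                # seed for the RandomUniform embedding initializer
    "L2_reg": 1e-4,            # L2 regularization strength for the Dense layers
    "CHANNEL": 3,              # number of image channels (RGB)
    "IMAGE_SIZES": 224,        # image height/width fed to ResNet50
    "EMBEDDING_U": 64,         # user embedding size
    "EMBEDDING_I": 64,         # final item embedding size (must equal EMBEDDING_U,
                               # because user_embedding and item_embedding are
                               # multiplied element-wise)
    "EMBEDDING_IMG": 64,       # dense projection size of the ResNet image features
    "INTER_EMBEDDING_I": 32,   # intermediate item-id embedding size
    "LEARNING_RATE": 1e-3,     # base learning rate, scaled by num_replicas_in_sync
    "GLOBAL_BATCH_SIZE": 256,  # batch size used by the weighted loss and tf.data
    "FINE_TUNE_LAYERS": 30,    # number of top ResNet50 layers left trainable
}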

(5) Write the method train_merge_model_def(self), which defines the merge ensemble model used for training. It creates a sequential model with tf.keras.Sequential() and adds the layers one by one: an input layer of shape (4,), a fully connected layer with 5 units and a sigmoid activation, and an output layer with 1 unit and a sigmoid activation. The model is compiled with the Adam optimizer and the binary cross-entropy loss, and tracks several metrics such as recall, true positives, and false negatives. The method returns the merge ensemble model. The corresponding implementation code is as follows:

    def train_merge_model_def(self):

        Merge_Ensemble_Rec = tf.keras.Sequential()
    
        Merge_Ensemble_Rec.add(tf.keras.layers.Input(shape=(4,), name = 'input'))
        Merge_Ensemble_Rec.add(tf.keras.layers.Dense(5, activation = 'sigmoid', name = 'layer_1'))
        #Merge_Ensemble_Rec.add(tf.keras.layers.Dense(2, activation = 'sigmoid', name = 'layer_2')) #Adding a new layer did not help: recall and precision dropped to 0
        #Merge_Ensemble_Rec.add(tf.keras.layers.Dense(1, activation = 'relu', name = 'layer_2'))
        Merge_Ensemble_Rec.add(tf.keras.layers.Dense(1, activation = 'sigmoid', name = 'output'))
        hp_learning_rate = 0.4

        Merge_Ensemble_Rec.compile(
              optimizer = tf.keras.optimizers.Adam(learning_rate = hp_learning_rate),
              loss = tf.keras.losses.BinaryCrossentropy(),
              metrics = [tf.keras.metrics.Recall(name = "recall"), 
                         #tf.keras.metrics.Precision(name = "precision"),
                         tf.keras.metrics.TruePositives(name = "true_positives"),
                         #tf.keras.metrics.FalsePositives(name = "false_positives"),
                         tf.keras.metrics.FalseNegatives(name = "false_negatives")
                        ]
              )
        
        return Merge_Ensemble_Rec
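The input shape (4,) reflects the fact that the merge model stacks the scores of four image-based ensemble models into a single feature vector per sample. The snippet below is a hypothetical usage sketch, not part of the source file; the import path, placeholder scores, and labels are all assumptions:

import numpy as np
# from resnet_image_based_model import resnet_based_prevence   # assumed import path

rec = resnet_based_prevence(is_training = True)   # uses the default CONFIGURATION_PATH

# Hypothetical data: each of the 4 ensemble models produces one score per sample.
scores_per_model = [np.random.rand(1000) for _ in range(4)]   # placeholder scores
stacked_scores = np.stack(scores_per_model, axis = 1)         # shape (1000, 4)
labels = np.random.randint(0, 2, size = 1000)                 # placeholder 0/1 labels

merge_model = rec.train_merge_model_def()
merge_model.fit(stacked_scores, labels, epochs = 5, batch_size = 128)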

(6) Write the method train(self, X), which trains the image-based ensemble models on the dataset X. The method first preprocesses the dataset with feature_eng, then loads the model parameters and initializes the variables needed for training. It iterates over the ensemble models, generating the data for each one with generate_data_for_nth_ensemble_model. The training data is batched and the model is trained epoch by epoch: the loss of each batch is computed with Img_Rec.train_on_batch(), and the average loss per epoch is tracked. The training losses are written to files, and the ensemble model is saved at the end of training. The method also frees memory and runs garbage collection after each epoch. The corresponding implementation code is as follows:

    def train(self, X):

        #Pipeline to transform customer/article and generate image path
        X = self.feature_eng(X)

        #Load model model paramaters
        parms = read_yaml_key(self.config_path,'image-based-ensemble-models','param')

        ############################   Define model      ############################
        unique_user = X['customer_id'].nunique()
        unique_item = X['article_id'].nunique()
        Img_Rec = self.model_def(parms, unique_user, unique_item)


        GLOBAL_BATCH_SIZE = parms["GLOBAL_BATCH_SIZE"]
        #current_lr = parms['LEARNING_RATE']        
        number_ensemble = read_yaml_key(self.config_path,'image-based-ensemble-models','number_ensemble_models')
        epochs = read_yaml_key(self.config_path,'image-based-ensemble-models','epochs')
        training_model_loss = read_yaml_key(self.config_path,'image-based-ensemble-models','training_model_loss')
        end_of_training_loss  = read_yaml_key(self.config_path,'image-based-ensemble-models','end_of_training_loss')
        saved_training_model = read_yaml_key(self.config_path,'image-based-ensemble-models','saved_training_model')
        pos_neg_ratio = 10
        
        epoch_training_loss = []
        epoch_loss_metric = Mean()         
        for ensemble in range(0, number_ensemble):

            #print(f'Ensemble batch {ensemble}')

            df_train_tran = self.generate_data_for_nth_ensemble_model(X, ensemble, pos_neg_ratio)
        
            train_batch = (tf.data.Dataset
                        .from_tensor_slices((df_train_tran['user_id'],
                                                df_train_tran['item_id'],
                                                df_train_tran['image_path'],
                                                df_train_tran['label']
                                            ))
                        .map(decode_train_image, num_parallel_calls = tf.data.experimental.AUTOTUNE) 
                        .prefetch(GLOBAL_BATCH_SIZE) 
                        .batch(GLOBAL_BATCH_SIZE)                             
                        )

            for epoch in range(0, epochs):

                step_training_loss = []     
                epoch_loss_metric.reset_states()
                batch_cnt = 0        

                
                for  Users, Items, Image_Embeddings, Labels in train_batch:
            
                    loss  = Img_Rec.train_on_batch(x = [Users, Items, Image_Embeddings], y = [Labels])
                    epoch_loss_metric.update_state(loss)
                    step_training_loss.append(loss)

                    """
                    if batch_cnt % 10 == 0:
                        template = ("Epoch {}, Batch {}, Current Batch Loss: {}, Average Loss: {}, Lr: {}")
                        print(template.format(epoch + 1, 
                                            batch_cnt, 
                                            loss, 
                                            epoch_loss_metric.result().numpy(), 
                                            current_lr))
                        """

                    batch_cnt += 1
                    
                    del [Users, Items, Image_Embeddings, Labels]
                    gc.collect()       


                epoch_loss = float(epoch_loss_metric.result().numpy()) 
                epoch_training_loss.append(epoch_loss) 
                #print('Average training losses over epoch done %d: %.4f' % (epoch, epoch_loss,)) 
                
                # Save training loss
                save_file_path = training_model_loss + 'cp-epoch:{epoch:d}-step-loss.npz' 
                save_file_path = save_file_path.format(epoch = epoch, ensemble = ensemble)
                hlpwrite.save_compressed_numpy_array_data(save_file_path, step_training_loss)  

                #print('='*50)
                #print('\n')
                #print('\n')
                gc.collect()
            
            # Save training loss per epoch
            save_file_path = end_of_training_loss + 'cp-epoch-loss.npz'
            save_file_path = save_file_path.format(ensemble = ensemble)
            hlpwrite.save_compressed_numpy_array_data(save_file_path, epoch_training_loss) 
            

            # Save the ensemble model
            save_file_path = saved_training_model + '/Img_Rec_model.h5'
            save_file_path = save_file_path.format(epoch = epoch, ensemble = ensemble)

            if not os.path.exists(os.path.dirname(save_file_path)):
                os.makedirs(os.path.dirname(save_file_path))

            Img_Rec.save(save_file_path)
            print(f"Saved model after end of epoch: {epoch}")

            del [train_batch]
            gc.collect()
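The function decode_train_image used in the tf.data pipeline above is provided by utils.images_utils and is not shown in this excerpt. A plausible minimal sketch, assuming it reads the JPEG at image_path, resizes it to the ResNet input size, and passes the other fields through unchanged, is given below; the exact preprocessing in the real helper (for example tf.keras.applications.resnet50.preprocess_input) may differ:

import tensorflow as tf

IMAGE_SIZE = 224   # assumption: must match parms["IMAGE_SIZES"] used in model_def()

def decode_train_image(user_id, item_id, image_path, label):
    # Hypothetical re-implementation of the helper from utils.images_utils.
    image = tf.io.read_file(image_path)                        # raw bytes
    image = tf.image.decode_jpeg(image, channels = 3)          # H x W x 3 tensor
    image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])   # resize for ResNet50
    image = tf.cast(image, tf.float32) / 255.0                 # scale to [0, 1]
    return user_id, item_id, image, label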

(7) Write the method load_all_image_based_models(self, n_models=-1), which loads the trained image-based ensemble models. It first checks whether the models have already been loaded; if not, it builds the model file paths from the configuration settings. It then iterates over the number of models specified by n_models (if -1 is passed, all available models are loaded). Each model is loaded with load_model() from tf.keras.models and appended to the all_models list, and the method calls load_theshold_model() for each ensemble model. Finally, it loads the merge ensemble model and its threshold. The corresponding implementation code is as follows:

    def load_all_image_based_models(self, n_models = -1 ):
        if len(self.all_models) == 0 :

            log.write_log('Load model started...', log.logging.DEBUG)
            models_paths = os.path.join( 
                                        hlpread.read_yaml_key(CONFIGURATION_PATH, 'model', 'output_folder'),
                                        hlpread.read_yaml_key(CONFIGURATION_PATH, 'image-based-ensemble-models', 'models-ensemble-outputs-folder'),
                                        )
            #ensemble_models_paths = models_paths

            #ensemble = 'ensemble_{ensemble:d}'
            ensemble_models_paths = os.path.join(models_paths,
                                                 hlpread.read_yaml_key(CONFIGURATION_PATH, 
                                                                      'image-based-ensemble-models', 
                                                                      'ensemble_folder'),
                                                 hlpread.read_yaml_key(CONFIGURATION_PATH, 
                                                                      'image-based-ensemble-models', 
                                                                      'saved_model')
                                                )    

            self.all_models = []
            
            if n_models == -1:
                n_models = 0
                for entry in os.listdir(models_paths):
                    if re.search('ensemble_', entry):
                        n_models += 1

            for i in range(n_models):

                model_path = ensemble_models_paths.format(ensemble = i)  
                if os.path.exists(model_path) == True:

                    #self.all_models.append(model_from_json(read_object(model_path)))
                    self.all_models.append(load_model(model_path, custom_objects = {'WeightLossBinaryCrossentropy': WeightLossBinaryCrossentropy}))
                    self.load_theshold_model(i)
                
                else:
                    log.write_log(f'Ensemble model: {model_path} does not exists.', log.logging.DEBUG) 

            log.write_log('Load model completed...', log.logging.DEBUG)

        if self.merge_stack_model == None:

            log.write_log('Load merge ensemble started...', log.logging.DEBUG)

            models_paths = os.path.join( 
                                        hlpread.read_yaml_key(CONFIGURATION_PATH, 'model', 'output_folder'),
                                        hlpread.read_yaml_key(CONFIGURATION_PATH, 'image-based-ensemble-models', 'models-ensemble-outputs-folder'),
                                        hlpread.read_yaml_key(CONFIGURATION_PATH, 'image-based-ensemble-models', 'merge_ensemble'),
                                        hlpread.read_yaml_key(CONFIGURATION_PATH, 'image-based-ensemble-models', 'merge_ensemble_saved_model'),
                                        )

            self.merge_stack_model = load_model(models_paths)
            self.model_thresholds = self.merge_ensemble_threshold()
            log.write_log('Load merge ensemble completed...', log.logging.DEBUG)

(8) Write the method load_theshold_model(self, nmodel), which loads the decision threshold of a specific ensemble model. Using the model index nmodel, it reads the threshold from the configuration settings and stores it in the ensemble_model_thresholds dictionary. The corresponding implementation code is as follows:

    def load_theshold_model(self, nmodel):

        threshold = hlpread.read_yaml_key(CONFIGURATION_PATH, 'image-based-ensemble-models', 'ensemble-thresholds')
        self.ensemble_model_thresholds[nmodel] = threshold[nmodel]

(9) Write the method merge_ensemble_threshold(self), which reads the threshold of the merge ensemble model from the configuration settings and returns it. The corresponding implementation code is as follows:

    def merge_ensemble_threshold(self):

        return hlpread.read_yaml_key(CONFIGURATION_PATH, 'image-based-ensemble-models', 'merge-ensemble-threshold')['threshold']
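All of the read_yaml_key(...) and hlpread.read_yaml_key(...) calls in this class read from the 'image-based-ensemble-models' section of the project configuration file (plus the 'model' -> 'output_folder' key). The configuration file itself is not shown here; the Python dictionary below is only an illustrative mirror of the keys this class expects, with placeholder values:

# Illustrative mirror of the 'image-based-ensemble-models' YAML section.
# Key names are taken from the read_yaml_key(...) calls above; all values are placeholders.
image_based_ensemble_models = {
    "param": {},                                  # hyperparameters consumed by model_def()
    "number_ensemble_models": 4,                  # how many ensemble models to train/load
    "epochs": 5,                                  # epochs per ensemble model
    "training_model_loss": "<path template>",     # per-step training-loss files
    "end_of_training_loss": "<path template>",    # per-epoch training-loss files
    "saved_training_model": "<path template>",    # where trained ensemble models are saved
    "models-ensemble-outputs-folder": "<folder>",
    "ensemble_folder": "ensemble_{ensemble:d}",
    "saved_model": "<file name>",
    "ensemble-thresholds": [0.5, 0.5, 0.5, 0.5],  # per-ensemble decision thresholds
    "merge_ensemble": "<folder>",
    "merge_ensemble_saved_model": "<file name>",
    "merge-ensemble-threshold": {"threshold": 0.5},
}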

In addition, the file resnet_image_based_model.py relies on the following custom modules (a reconstructed sketch of the corresponding import block follows this list):

  1. src.models.loss.WeightLossBinaryCrossentropy: a custom weighted binary cross-entropy loss function.
  2. src.models.pipeline.transform_customer_mapping: a custom pipeline step for transforming the customer mapping.
  3. src.models.pipeline.transform_article_mapping: a custom pipeline step for transforming the article mapping.
  4. src.models.eval_metric.evaluate_metric: a custom function for computing evaluation metrics.
  5. utils.images_utils: custom utility functions for handling images.
  6. utils.read_utils: custom utility functions for reading files.
  7. utils.write_utils: custom utility functions for writing files.
  8. logs.logger: a custom logging module.
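For reference, the snippets above assume a set of imports at the top of resnet_image_based_model.py that the excerpt does not reproduce. The block below is a reconstruction based on the names used throughout the file and on the module list above; the exact import paths (in particular CONFIGURATION_PATH and the helper names inside the custom modules) are assumptions and may differ from the real source:

# Reconstructed import block (assumed, not copied from the source file)
import os
import re
import gc
import pandas as pd
import tensorflow as tf
from sklearn.pipeline import Pipeline
from tensorflow.keras import Model
from tensorflow.keras.models import load_model
from tensorflow.keras.layers import (Input, Embedding, Dense, Flatten,
                                     BatchNormalization, Concatenate, Multiply)
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import Mean
from tensorflow.keras.regularizers import l2
from tensorflow.keras.initializers import RandomUniform
from tensorflow.keras.applications import ResNet50

from src.models.loss import WeightLossBinaryCrossentropy
from src.models.pipeline import transform_customer_mapping, transform_article_mapping
from src.models.eval_metric import evaluate_metric
from utils.images_utils import get_image_path, decode_train_image
from utils.read_utils import read_yaml_key
import utils.read_utils as hlpread
import utils.write_utils as hlpwrite
from logs import logger as log

CONFIGURATION_PATH = 'config/configuration.yaml'   # assumption: project configuration file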

To be continued.
