103. Integrating tf.estimator as the classifier and tf.data for the ETL stage in rasa_nlu

I've been watching the TF Dev Summit 2018 and TF Dev Summit 2019 talks lately.

They say that when something goes from nothing to existing, it can feel a bit magical at first and is easy to grow; but as time goes on, keeping it well made is no longer easy.

Today I'll introduce some of TensorFlow's high-level APIs and put them into practice with rasa_nlu, building an intent classifier to learn how these high-level APIs are used.

The main components used are:

tf.estimator:
    (1) Google says it supports large-scale distributed machine learning.
    (2) It is easy to switch to other model types without changing much code.

tf.data:
    (1) Loads data quickly and supports loading it in parallel.
    (2) Supports the ETL process: extract, transform, load (batching, shuffle, repeat and so on).
    (3) Supports prefetch: while the GPU is busy training, part of the CPU can be used to prefetch the next batch of data onto the GPU (see the sketch after this list).

tf.saved_model:
    (1) Exports *.pb model files, which are supported by TensorFlow Serving and TensorFlow Lite.
    (2) Unlike *.ckpt files, a *.pb file stores both the graph structure and the parameters; a ckpt file only holds the node parameters and weights.

tf.example:
    (1) Builds a serialized object that can be passed to TensorFlow Serving or a predictor for inference.

tf.feature_column:
    (1) Describes the structure of the training data, so that the same spec can later be used to parse the training data and, at prediction time, the prediction inputs.
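To make the ETL idea concrete, here is a minimal sketch in the same TF 1.x style as the code below; the shapes and numbers are made up for illustration:

import numpy as np
import tensorflow as tf

# Extract: start from in-memory arrays (a TFRecord file would work the same way)
features = {'a_in': np.random.rand(1000, 768).astype(np.float32)}  # fake embeddings
labels = np.random.randint(0, 10, size=1000).astype(np.int32)      # fake intent ids
dataset = tf.data.Dataset.from_tensor_slices((features, labels))

# Transform: shuffle, batch and repeat for a few epochs
dataset = dataset.shuffle(1000).batch(64).repeat(5)

# Load: prefetch overlaps producing the next batch on the CPU with training on the GPU
dataset = dataset.prefetch(buffer_size=1)

# TF 1.x style iteration, matching the estimator code below
iterator = dataset.make_one_shot_iterator()
next_features, next_labels = iterator.get_next()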


Below are the main code changes.


(1) The training process:

First, define the function that turns the raw features and labels into a standard tf.data input pipeline:

    def input_fn(self, features, labels, batch_size, shuffle_num, mode):
        """Build a tf.data input pipeline.

        :param features: dict defining the input x structure for parsing
        :param labels: np.array of input labels
        :param batch_size: int, the batch size
        :param shuffle_num: int, buffer size for randomly shuffling the data
        :param mode: tf.estimator.ModeKeys.TRAIN or tf.estimator.ModeKeys.PREDICT
        :return: a (features, labels) tuple of tensors for the next batch
        """
        dataset = tf.data.Dataset.from_tensor_slices((features, labels))
        if mode == tf.estimator.ModeKeys.TRAIN:
            dataset = dataset.shuffle(shuffle_num).batch(batch_size).repeat(self.epochs)
        else:
            dataset = dataset.batch(batch_size)
        iterator = dataset.make_one_shot_iterator()
        data, labels = iterator.get_next()
        return data, labels

   

Then run the training:

    def train(self, training_data, cfg=None, **kwargs):
        # type: (TrainingData, Optional[RasaNLUModelConfig], **Any) -> None
        """Train the embedding intent classifier on a data set."""

        intent_dict = self._create_intent_dict(training_data)

        if len(intent_dict) < 2:
            logger.error("Can not train an intent classifier. "
                         "Need at least 2 different classes. "
                         "Skipping training of intent classifier.")
            return

        self.inv_intent_dict = {v: k for k, v in intent_dict.items()}
        self.encoded_all_intents = self._create_encoded_intents(intent_dict)

        X, Y, intents_for_X = self._prepare_data_for_training(training_data, intent_dict)

        num_classes = len(intent_dict)

        # define the number of classes to classify
        head = tf.contrib.estimator.multi_class_head(n_classes=num_classes)

        # define the feature spec used to parse the input x
        feature_names = ['a_in']
        self.feature_columns = [tf.feature_column.numeric_column(key=k, shape=[1, X.shape[1]])
                                for k in feature_names]

        x_tensor = {'a_in': X}
        intents_for_X = intents_for_X.astype(np.int32)

        # set GPU and tf graph config
        tf.logging.set_verbosity(tf.logging.INFO)
        config_proto = self.get_config_proto(self.component_config)

        # build a linear model classified with sparse_softmax_cross_entropy
        self.estimator = tf.contrib.estimator.LinearEstimator(
            head=head,
            feature_columns=self.feature_columns,
            optimizer='Ftrl',
            config=tf.estimator.RunConfig(session_config=config_proto))

        # train the model
        self.estimator.train(
            input_fn=lambda: self.input_fn(x_tensor,
                                           intents_for_X,
                                           self.batch_size,
                                           shuffle_num=1000,
                                           mode=tf.estimator.ModeKeys.TRAIN),
            max_steps=2000)

        # evaluate the model (any mode other than TRAIN just batches, without shuffle/repeat)
        results = self.estimator.evaluate(
            input_fn=lambda: self.input_fn(x_tensor,
                                           intents_for_X,
                                           self.batch_size,
                                           shuffle_num=1000,
                                           mode=tf.estimator.ModeKeys.PREDICT))

        print(results)


(2) Saving the model:

    def persist(self, model_dir):
        # type: (Text) -> Dict[Text, Any]
        """Persist this model into the passed directory.
        Return the metadata necessary to load the model again."""
        if self.estimator is None:
            return {"classifier_file": None}

        # build the feature spec used to parse tf.example inputs
        feature_spec = tf.feature_column.make_parse_example_spec(self.feature_columns)
        # build the tf.example parser
        serving_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
        # export the model as a SavedModel
        path = self.estimator.export_savedmodel(model_dir, serving_input_receiver_fn)
        # the returned path is bytes; decode it to a string
        file_dir = os.path.basename(path).decode('utf-8')

        with io.open(os.path.join(
                model_dir,
                self.name + "_inv_intent_dict.pkl"), 'wb') as f:
            pickle.dump(self.inv_intent_dict, f)
        with io.open(os.path.join(
                model_dir,
                self.name + "_encoded_all_intents.pkl"), 'wb') as f:
            pickle.dump(self.encoded_all_intents, f)

        return {"classifier_file": file_dir}


(3) Loading the model:

from tensorflow.contrib import predictor

# load the exported SavedModel once; from_saved_model returns a callable
# that is kept on the component and reused for every prediction
self.predictor = predictor.from_saved_model(
    export_dir=os.path.join(model_dir, file_name),
    config=config_proto)

Reportedly, using tensorflow.contrib.predictor to load the model makes prediction about 10x faster than the usual way of loading a model, which makes sense: the graph and session are built once and then reused for every call.
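For completeness, here is a rough sketch of a load-side helper that mirrors persist() above; the function name and signature are my own invention, only the file names follow the code above:

import os
import pickle
from tensorflow.contrib import predictor

# hypothetical helper: restore the SavedModel and the pickled label lookup
def load_classifier(model_dir, classifier_file, name):
    # graph and session are built once here and reused for every prediction
    pred_fn = predictor.from_saved_model(
        export_dir=os.path.join(model_dir, classifier_file))
    with open(os.path.join(model_dir, name + "_inv_intent_dict.pkl"), 'rb') as f:
        inv_intent_dict = pickle.load(f)
    return pred_fn, inv_intent_dict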

 

(4) Predicting with the loaded model:

X = message.get("text_features").tolist()
examples = []
feature = {}
# convert the input x into a tf.train.Feature with a float feature spec
feature['a_in'] = tf.train.Feature(float_list=tf.train.FloatList(value=X))
# build the tf.example used for prediction
example = tf.train.Example(
    features=tf.train.Features(
        feature=feature
    )
)
# serialize the tf.example to a string
examples.append(example.SerializeToString())

# make predictions
result_dict = self.predictor({'inputs': examples})
result_score_list = result_dict['scores'][0]
max_score = np.max(result_score_list)
max_index = np.argmax(result_score_list)
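From here the winning class index can be mapped back to an intent name with the inv_intent_dict built during training; a short sketch (the message.set call follows rasa_nlu's usual component pattern, and is my assumption here):

# look up the intent name for the winning class index
intent = {"name": self.inv_intent_dict[max_index],
          "confidence": float(max_score)}
message.set("intent", intent, add_to_output=True)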

 

The results of the run are shown in the logs below.

The full project is available at:

https://github.com/GaoQ1/rasa_nlu_gq


The full training log:


C:\Users\weizhen.zhao\Documents\GitHub\rasa_nlu_gq\venv\Scripts\python.exe C:/Users/weizhen.zhao/Documents/GitHub/rasa_nlu_gq/rasa_nlu_gao/train.py -c sample_configs/config_embedding_bert_intent_estimator_classifier.yml --data data/examples/luis/HighTalkSQSWLuisAppStaging-GA-20180824.json --path projects/bert_gongan_v4
C:\Users\weizhen.zhao\Documents\GitHub\rasa_nlu_gq\rasa_nlu_gao\utils\__init__.py:236: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  return yaml.load(read_file(filename, "utf-8"))
server config:
                max_batch_size	=	256                           
                 prefetch_size	=	10                            
            fixed_embed_length	=	False                         
           ventilator <-> sink	=	tcp://127.0.0.1:52870         
                   num_process	=	2                             
                 pooling_layer	=	[-2]                          
                           xla	=	False                         
                          cors	=	*                             
                   max_seq_len	=	25                            
                python_version	=	3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)]
                     http_port	=	None                          
           priority_batch_size	=	16                            
                   zmq_version	=	4.3.1                         
         show_tokens_to_client	=	False                         
                       verbose	=	False                         
                     statistic	=	{'num_data_request': 0, 'num_sys_request': 1, 'num_active_client': 0, 'max_request_per_client': 1, 'num_total_seq': 0, 'num_total_request': 1, 'avg_request_per_client': 1.0, 'num_total_client': 1, 'num_max_request_per_client': 1, 'num_min_request_per_client': 1, 'min_request_per_client': 1}
                    device_map	=	[]                            
                    num_worker	=	1                             
               tuned_model_dir	=	None                          
                worker -> sink	=	tcp://127.0.0.1:52893         
                          fp16	=	False                         
                           cpu	=	False                         
                        client	=	9d87beb2-78ba-4d70-b7e9-d2f257e598fb
                   config_name	=	bert_config.json              
                 pyzmq_version	=	18.0.1                        
                  mask_cls_sep	=	False                         
                 graph_tmp_dir	=	None                          
             server_start_time	=	2019-03-25 10:47:28.331322    
                     ckpt_name	=	bert_model.ckpt               
           gpu_memory_fraction	=	0.5                           
         num_concurrent_socket	=	8                             
           server_current_time	=	2019-03-25 10:54:09.843259    
                server_version	=	1.8.4                         
                     model_dir	=	E:\chinese_L-12_H-768_A-12\chinese_L-12_H-768_A-12
                          port	=	5555                          
                      port_out	=	5556                          
            tensorflow_version	=	['1', '12', '0']              
          ventilator -> worker	=	['tcp://127.0.0.1:52871', 'tcp://127.0.0.1:52872', 'tcp://127.0.0.1:52873', 'tcp://127.0.0.1:52874', 'tcp://127.0.0.1:52875', 'tcp://127.0.0.1:52876', 'tcp://127.0.0.1:52877', 'tcp://127.0.0.1:52878']
              pooling_strategy	=	2                             
              http_max_connect	=	10                            
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\WEIZHE~1.ZHA\AppData\Local\Temp\jieba.cache
Loading model cost 0.609 seconds.
Prefix dict has been built succesfully.
Epochs:   0%|          | 0/17 [00:00<?, ?it/s]C:\Users\weizhen.zhao\AppData\Local\Programs\Python\Python35\lib\site-packages\bert_serving\client\__init__.py:285: UserWarning: some of your sentences have more tokens than "max_seq_len=25" set on the server, as consequence you may get less-accurate or truncated embeddings.
here is what you can do:
- disable the length-check by create a new "BertClient(check_length=False)" when you do not want to display this warning
- or, start a new server with a larger "max_seq_len"
  '- or, start a new server with a larger "max_seq_len"' % self.length_limit)
Epochs: 100%|██████████| 17/17 [00:04<00:00,  3.97it/s]
2019-03-25 10:54:19 WARNING  tensorflow  - Using temporary folder as model directory: C:\Users\WEIZHE~1.ZHA\AppData\Local\Temp\tmp_9yu6pnr
2019-03-25 10:54:19.980918: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-03-25 10:54:20.231528: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7085
pciBusID: 0000:01:00.0
totalMemory: 6.00GiB freeMemory: 4.96GiB
2019-03-25 10:54:20.231786: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-03-25 10:54:23.162725: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-25 10:54:23.162866: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2019-03-25 10:54:23.162946: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2019-03-25 10:54:23.164279: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3072 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-03-25 10:54:37.740434: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-03-25 10:54:37.740607: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-25 10:54:37.740745: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2019-03-25 10:54:37.740828: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2019-03-25 10:54:37.740966: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3072 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
{'average_loss': 0.077875145, 'loss': 0.076562986, 'global_step': 1800, 'accuracy': 0.9995338}
2019-03-25 10:54:56.449880: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-03-25 10:54:56.450049: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-25 10:54:56.450182: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2019-03-25 10:54:56.450264: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2019-03-25 10:54:56.450398: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3072 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-03-25 10:54:56 WARNING  tensorflow  - From C:\Users\weizhen.zhao\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\estimator\estimator.py:1044: calling SavedModelBuilder.add_meta_graph_and_variables (from tensorflow.python.saved_model.builder_impl) with legacy_init_op is deprecated and will be removed in a future version.
Instructions for updating:
Pass your op to the equivalent parameter main_op instead.

Process finished with exit code 0

 

The prediction log:

 

C:\Users\weizhen.zhao\Documents\GitHub\rasa_nlu_gq\venv\Scripts\python.exe C:/Users/weizhen.zhao/Documents/GitHub/rasa_nlu_gq/rasa_nlu_gao/server.py -c sample_configs/config_embedding_bert_intent_estimator_classifier.yml --path projects/bert_gongan_v4
2019-03-25 10:57:31 WARNING  py.warnings  - C:\Users\weizhen.zhao\Documents\GitHub\rasa_nlu_gq\rasa_nlu_gao\utils\__init__.py:236: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  return yaml.load(read_file(filename, "utf-8"))

2019-03-25 10:57:31+0800 [-] Log opened.
2019-03-25 10:57:31+0800 [-] Site starting on 5000
2019-03-25 10:57:31+0800 [-] Starting factory <twisted.web.server.Site object at 0x00000219FB3C4EB8>
2019-03-25 10:58:08+0800 [-] server config:
2019-03-25 10:58:08+0800 [-]                    zmq_version	=	4.3.1                         
2019-03-25 10:58:08+0800 [-]            priority_batch_size	=	16                            
2019-03-25 10:58:08+0800 [-]                      http_port	=	None                          
2019-03-25 10:58:08+0800 [-]                 worker -> sink	=	tcp://127.0.0.1:52893         
2019-03-25 10:58:08+0800 [-]           ventilator -> worker	=	['tcp://127.0.0.1:52871', 'tcp://127.0.0.1:52872', 'tcp://127.0.0.1:52873', 'tcp://127.0.0.1:52874', 'tcp://127.0.0.1:52875', 'tcp://127.0.0.1:52876', 'tcp://127.0.0.1:52877', 'tcp://127.0.0.1:52878']
2019-03-25 10:58:08+0800 [-]                tuned_model_dir	=	None                          
2019-03-25 10:58:08+0800 [-]                            cpu	=	False                         
2019-03-25 10:58:08+0800 [-]               pooling_strategy	=	2                             
2019-03-25 10:58:08+0800 [-]                 server_version	=	1.8.4                         
2019-03-25 10:58:08+0800 [-]                           fp16	=	False                         
2019-03-25 10:58:08+0800 [-]               http_max_connect	=	10                            
2019-03-25 10:58:08+0800 [-]                 python_version	=	3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)]
2019-03-25 10:58:08+0800 [-]                      statistic	=	{'avg_request_per_second': 3.7288529001263235, 'num_total_seq': 2145, 'num_total_client': 2, 'max_last_two_interval': 385.6274952, 'min_request_per_client': 1, 'avg_size_per_request': 112.5, 'min_request_per_second': 0.002593176089483362, 'num_active_client': 0, 'num_max_last_two_interval': 1, 'num_max_request_per_client': 1, 'avg_last_two_interval': 22.924579470588235, 'num_min_last_two_interval': 1, 'num_total_request': 19, 'max_request_per_second': 4.194699493825813, 'max_size_per_request': 128, 'max_request_per_client': 18, 'num_min_request_per_second': 1, 'min_size_per_request': 97, 'min_last_two_interval': 0.23839609999998856, 'num_data_request': 17, 'num_max_request_per_second': 1, 'avg_request_per_client': 9.5, 'num_min_size_per_request': 1, 'num_sys_request': 2, 'num_max_size_per_request': 1, 'num_min_request_per_client': 1}
2019-03-25 10:58:08+0800 [-]                     num_worker	=	1                             
2019-03-25 10:58:08+0800 [-]                     device_map	=	[]                            
2019-03-25 10:58:08+0800 [-]              server_start_time	=	2019-03-25 10:47:28.331322    
2019-03-25 10:58:08+0800 [-]                         client	=	140dba62-641d-486b-a605-7dabadacba09
2019-03-25 10:58:08+0800 [-]                    config_name	=	bert_config.json              
2019-03-25 10:58:08+0800 [-]                           cors	=	*                             
2019-03-25 10:58:08+0800 [-]                  pooling_layer	=	[-2]                          
2019-03-25 10:58:08+0800 [-]                       port_out	=	5556                          
2019-03-25 10:58:08+0800 [-]                   mask_cls_sep	=	False                         
2019-03-25 10:58:08+0800 [-]          show_tokens_to_client	=	False                         
2019-03-25 10:58:08+0800 [-]                  prefetch_size	=	10                            
2019-03-25 10:58:08+0800 [-]             tensorflow_version	=	['1', '12', '0']              
2019-03-25 10:58:08+0800 [-]                      ckpt_name	=	bert_model.ckpt               
2019-03-25 10:58:08+0800 [-]                 max_batch_size	=	256                           
2019-03-25 10:58:08+0800 [-]            gpu_memory_fraction	=	0.5                           
2019-03-25 10:58:08+0800 [-]          num_concurrent_socket	=	8                             
2019-03-25 10:58:08+0800 [-]                           port	=	5555                          
2019-03-25 10:58:08+0800 [-]            ventilator <-> sink	=	tcp://127.0.0.1:52870         
2019-03-25 10:58:08+0800 [-]                      model_dir	=	E:\chinese_L-12_H-768_A-12\chinese_L-12_H-768_A-12
2019-03-25 10:58:08+0800 [-]                    max_seq_len	=	25                            
2019-03-25 10:58:08+0800 [-]             fixed_embed_length	=	False                         
2019-03-25 10:58:08+0800 [-]                  graph_tmp_dir	=	None                          
2019-03-25 10:58:08+0800 [-]                    num_process	=	2                             
2019-03-25 10:58:08+0800 [-]                            xla	=	False                         
2019-03-25 10:58:08+0800 [-]                  pyzmq_version	=	18.0.1                        
2019-03-25 10:58:08+0800 [-]                        verbose	=	False                         
2019-03-25 10:58:08+0800 [-]            server_current_time	=	2019-03-25 10:58:08.595281    
2019-03-25 10:58:08+0800 [-] bert model loaded
2019-03-25 10:58:08.707810: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-03-25 10:58:08.961376: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7085
pciBusID: 0000:01:00.0
totalMemory: 6.00GiB freeMemory: 4.96GiB
2019-03-25 10:58:08.961634: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-03-25 10:58:09.804785: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-25 10:58:09.804962: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2019-03-25 10:58:09.805046: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2019-03-25 10:58:09.805242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3072 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-03-25 10:58:12+0800 [-] Building prefix dict from the default dictionary ...
2019-03-25 10:58:12+0800 [-] Loading model from cache C:\Users\WEIZHE~1.ZHA\AppData\Local\Temp\jieba.cache
2019-03-25 10:58:13+0800 [-] Loading model cost 0.644 seconds.
2019-03-25 10:58:13+0800 [-] Prefix dict has been built succesfully.
2019-03-25 10:58:13+0800 [-] "127.0.0.1" - - [25/Mar/2019:02:58:12 +0000] "GET /parse?q=%E4%BB%8A%E5%A4%A9%E5%A4%A9%E6%B0%94%E6%80%8E%E4%B9%88%E6%A0%B7 HTTP/1.1" 200 1340 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36"
2019-03-25 10:58:13+0800 [-] "127.0.0.1" - - [25/Mar/2019:02:58:12 +0000] "GET /favicon.ico HTTP/1.1" 404 233 "http://localhost:5000/parse?q=%E4%BB%8A%E5%A4%A9%E5%A4%A9%E6%B0%94%E6%80%8E%E4%B9%88%E6%A0%B7" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36"
2019-03-25 10:58:25+0800 [-] "127.0.0.1" - - [25/Mar/2019:02:58:24 +0000] "GET /parse?q=%E4%BB%8A%E5%A4%A9%E4%BD%A0%E5%90%83%E4%BA%86%E5%90%97 HTTP/1.1" 200 1334 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36"
2019-03-25 10:58:39+0800 [-] "127.0.0.1" - - [25/Mar/2019:02:58:38 +0000] "GET /parse?q=%E5%A6%82%E4%BD%95%E5%8A%9E%E7%90%86%E7%A4%BE%E4%BF%9D%E5%8D%A1 HTTP/1.1" 200 1340 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36"

 

 
