Triton Inference Server: perf_client

perf_client

A critical part of optimizing the inference performance of your model is being able to measure changes in performance as you experiment with different optimization strategies. The perf_client application performs this task for the Triton Inference Server. The perf_client is included with the client examples, which are available from several sources.

The perf_client generates inference requests to your model and measures the throughput and latency of those requests. To get representative results, the perf_client measures the throughput and latency over a time window, and then repeats the measurements until it gets stable values. By default the perf_client uses average latency to determine stability, but you can use the --percentile flag to stabilize results based on that confidence level. For example, if --percentile=95 is used the results will be stabilized using the 95th-percentile request latency. For example:

$ perf_client -m resnet50_netdef --percentile=95
*** Measurement Settings ***
  Batch size: 1
  Measurement window: 5000 msec
  Stabilizing using p95 latency

Request concurrency: 1
  Client:
    Request count: 809
    Throughput: 161.8 infer/sec
    p50 latency: 6178 usec
    p90 latency: 6237 usec
    p95 latency: 6260 usec
    p99 latency: 6339 usec
    Avg HTTP time: 6153 usec (send/recv 72 usec + response wait 6081 usec)
  Server:
    Request count: 971
    Avg request latency: 4824 usec (overhead 10 usec + queue 39 usec + compute 4775 usec)

Inferences/Second vs. Client p95 Batch Latency
Concurrency: 1, 161.8 infer/sec, latency 6260 usec

Request Concurrency

By default perf_client measures your model's latency and throughput using the lowest possible load on the model. To do this perf_client sends one inference request to the server and waits for the response. When that response is received, the perf_client immediately sends another request, and then repeats this process during the measurement windows. The number of outstanding inference requests is referred to as the request concurrency, and so by default perf_client uses a request concurrency of 1.

Using the --concurrency-range <start>:<end>:<step> option you can have perf_client collect data for a range of request concurrency levels. Use the --help option to see complete documentation for this and other options. For example, to see the latency and throughput of your model for request concurrency values from 1 to 4:

$ perf_client -m resnet50_netdef --concurrency-range 1:4
*** Measurement Settings ***
  Batch size: 1
  Measurement window: 5000 msec
  Latency limit: 0 msec
  Concurrency limit: 4 concurrent requests
  Stabilizing using average latency

Request concurrency: 1
  Client:
    Request count: 804
    Throughput: 160.8 infer/sec
    Avg latency: 6207 usec (standard deviation 267 usec)
    p50 latency: 6212 usec
...
Request concurrency: 4
  Client:
    Request count: 1042
    Throughput: 208.4 infer/sec
    Avg latency: 19185 usec (standard deviation 105 usec)
    p50 latency: 19168 usec
    p90 latency: 19218 usec
    p95 latency: 19265 usec
    p99 latency: 19583 usec
    Avg HTTP time: 19156 usec (send/recv 79 usec + response wait 19077 usec)
  Server:
    Request count: 1250
    Avg request latency: 18099 usec (overhead 9 usec + queue 13314 usec + compute 4776 usec)

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, 160.8 infer/sec, latency 6207 usec
Concurrency: 2, 209.2 infer/sec, latency 9548 usec
Concurrency: 3, 207.8 infer/sec, latency 14423 usec
Concurrency: 4, 208.4 infer/sec, latency 19185 usec

Understanding The Output

For each request concurrency level perf_client reports latency and throughput as seen from the client (that is, as seen by perf_client) and also the average request latency on the server.

The server latency measures the total time from when the request is received at the server until the response is sent from the server. Because of the HTTP and GRPC libraries used to implement the server endpoints, total server latency is typically more accurate for HTTP requests as it measures time from first byte received until last byte sent. For both HTTP and GRPC the total server latency is broken-down into the following components:

queue: The average time spent in the inference schedule queue by a request waiting for an instance of the model to become available.

compute: The average time spent performing the actual inference, including any time needed to copy data to/from the GPU.

The client latency time is broken-down further for HTTP and GRPC as follows:

HTTP: send/recv indicates the time on the client spent sending the request and receiving the response. response wait indicates time waiting for the response from the server.

GRPC: (un)marshal request/response indicates the time spent marshalling the request data into the GRPC protobuf and unmarshalling the response data from the GRPC protobuf. response wait indicates time writing the GRPC request to the network, waiting for the response, and reading the GRPC response from the network.

These components sum to the reported totals. For example, in the concurrency-4 output above, the server's average request latency of 18099 usec is reported as 9 usec overhead + 13314 usec queue + 4776 usec compute, and the client's average HTTP time of 19156 usec as 79 usec send/recv + 19077 usec response wait.

Use the verbose (-v) option to perf_client to see more output, including the stabilization passes run for each request concurrency level.

Visualizing Latency vs. Throughput

The perf_client provides the -f option to generate a file containing CSV output of the results:

$ perf_client -m resnet50_netdef --concurrency-range 1:4 -f perf.csv
$ cat perf.csv
Concurrency,Inferences/Second,Client Send,Network+Server Send/Recv,Server Queue,Server Compute,Client Recv,p50 latency,p90 latency,p95 latency,p99 latency
1,160.8,68,1291,38,4801,7,6212,6289,6328,7407
3,207.8,70,1211,8346,4786,8,14379,14457,14536,15853
4,208.4,71,1014,13314,4776,8,19168,19218,19265,19583
2,209.2,67,1204,3511,4756,7,9545,9576,9588,9627

You can import the CSV file into a spreadsheet to help visualize the latency vs. inferences/second tradeoff as well as see some components of the latency (a scripted alternative is sketched after these steps). Follow these steps:

  • Open this spreadsheet
  • Make a copy from the File menu “Make a copy…”
  • Open the copy
  • Select the A1 cell on the “Raw Data” tab
  • From the File menu select “Import…”
  • Select “Upload” and upload the file
  • Select “Replace data at selected cell” and then select the “Import data” button
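
As an alternative to the spreadsheet, a minimal Python sketch along the following lines (assuming pandas and matplotlib are installed, and perf.csv was produced with the -f option as above) plots throughput against p95 latency:

import pandas as pd
import matplotlib.pyplot as plt

# Load the CSV written by perf_client's -f option (perf.csv from the command above).
df = pd.read_csv("perf.csv").sort_values("Concurrency")

# Plot throughput against client p95 latency, labeling each concurrency level.
fig, ax = plt.subplots()
ax.plot(df["p95 latency"], df["Inferences/Second"], marker="o")
for _, row in df.iterrows():
    ax.annotate("concurrency %d" % row["Concurrency"],
                (row["p95 latency"], row["Inferences/Second"]))
ax.set_xlabel("p95 latency (usec)")
ax.set_ylabel("Inferences/Second")
plt.show()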

Input Data

Use the --help option to see complete documentation for all input data options. By default perf_client sends random data to all the inputs of your model. You can select a different input data mode with the --input-data option:

  • random: (default) Send random data for each input.

  • zero: Send zeros for each input.

  • directory path: A path to a directory containing a binary file for each input, named the same as the input. Each binary file must contain the data required for that input for a batch-1 request. Each file should contain the raw binary representation of the input in row-major order (see the sketch after this list).

  • file path: A path to a JSON file containing data to be used with every inference request. See the "Real Input Data" section for further details. --input-data can be provided multiple times with different file paths to specify multiple JSON files.
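
For illustration, the binary files for the directory path mode could be produced with a short Python script such as the following sketch (the input name IMAGE, the shape, and the FP32 datatype are placeholders that must match your model's configuration; NumPy is assumed):

import os
import numpy as np

# Hypothetical batch-1 input named IMAGE with shape [3, 224, 224] and FP32 datatype;
# adjust the name, shape and dtype to match your model's configuration.
os.makedirs("input_data", exist_ok=True)
data = np.random.rand(3, 224, 224).astype(np.float32)

# The file is named the same as the input and holds the raw row-major bytes.
with open(os.path.join("input_data", "IMAGE"), "wb") as f:
    f.write(data.tobytes(order="C"))

The resulting directory is then passed to perf_client with --input-data input_data.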

For tensors with STRING datatype there are additional options --string-length and --string-data that may be used in some cases (see --help for full documentation).

For models that support batching you can use the -b option to indicate the batch size of the requests that perf_client should send. For models with variable-sized inputs you must provide the --shape argument so that perf_client knows what shape tensors to use. For example, for a model that has an input called IMAGE with shape [ 3, N, M ], where N and M are variable-size dimensions, to tell perf_client to send batch-size 4 requests of shape [ 3, 224, 224 ]:

$ perf_client -m mymodel -b 4 --shape IMAGE:3,224,224

Real Input Data

The performance of some models is highly dependent on the data used. For such cases users can provide data to be used with every inference request made by the client in a JSON file. The perf_client will use the provided data when sending inference requests in a round-robin fashion.

Each entry in the "data" array must specify all input tensors with the exact size expected by the model from a single batch. The following example describes data for a model with inputs named INPUT0 and INPUT1, shape [4, 4] and data type INT32:

{
  "data" :
   [
      {
        "INPUT0" : [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        "INPUT1" : [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
      },
      {
        "INPUT0" : [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        "INPUT1" : [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
      },
      {
        "INPUT0" : [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        "INPUT1" : [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
      },
      {
        "INPUT0" : [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        "INPUT1" : [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
      }
      .
      .
      .
    ]
}

Note that the [4, 4] tensor has been flattened in a row-major format for the inputs.
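
For illustration, an entry like the ones above could be generated from [4, 4] NumPy arrays with a short sketch (the output file name real_input.json is a placeholder):

import json
import numpy as np

# Hypothetical [4, 4] INT32 tensors for INPUT0 and INPUT1.
tensor = np.ones((4, 4), dtype=np.int32)

# Flatten in row-major (C) order, as the "data" entries above require.
entry = {
    "INPUT0": tensor.flatten(order="C").tolist(),
    "INPUT1": tensor.flatten(order="C").tolist(),
}

with open("real_input.json", "w") as f:
    json.dump({"data": [entry]}, f, indent=2)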

Apart from specifying explicit tensors, users can also provide Base64-encoded binary data for the tensors. Each data object must list its data in row-major order. The following example highlights how this can be achieved:

{
  "data" :
   [
      {
        "INPUT0" : {"b64": "YmFzZTY0IGRlY29kZXI="},
        "INPUT1" : {"b64": "YmFzZTY0IGRlY29kZXI="}
      },
      {
        "INPUT0" : {"b64": "YmFzZTY0IGRlY29kZXI="},
        "INPUT1" : {"b64": "YmFzZTY0IGRlY29kZXI="}
      },
      {
        "INPUT0" : {"b64": "YmFzZTY0IGRlY29kZXI="},
        "INPUT1" : {"b64": "YmFzZTY0IGRlY29kZXI="}
      },
      .
      .
      .
    ]
}
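
For illustration, the base64 strings holding a tensor's raw row-major bytes could be produced along these lines (a sketch assuming NumPy; the tensor contents and input names are placeholders):

import base64
import json
import numpy as np

# Base64-encode the raw row-major bytes of a hypothetical [4, 4] INT32 tensor.
tensor = np.ones((4, 4), dtype=np.int32)
b64 = base64.b64encode(tensor.tobytes(order="C")).decode("ascii")

entry = {
    "INPUT0": {"b64": b64},
    "INPUT1": {"b64": b64},
}
print(json.dumps({"data": [entry]}, indent=2))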

In the case of sequence models, multiple data streams can be specified in the JSON file. Each sequence will get a data stream of its own and the client will ensure the data from each stream is played back to the same correlation id. The example below highlights how to specify data for multiple streams for a sequence model with a single input named INPUT, shape [1] and data type STRING:

{
  "data" :
    [
      [
        {
          "INPUT" : ["1"]
        },
        {
          "INPUT" : ["2"]
        },
        {
          "INPUT" : ["3"]
        },
        {
          "INPUT" : ["4"]
        }
      ],
      [
        {
          "INPUT" : ["1"]
        },
        {
          "INPUT" : ["1"]
        },
        {
          "INPUT" : ["1"]
        }
      ],
      [
        {
          "INPUT" : ["1"]
        },
        {
          "INPUT" : ["1"]
        }
      ]
    ]
}

The above example describes three data streams with lengths 4, 3 and 2 respectively. The perf_client will hence produce sequences of length 4, 3 and 2 in this case.

Users can also provide an optional "shape" field for the tensors. This is especially useful when profiling models that take variable-sized tensors as input. The specified shape values are treated as an override, and the client still expects default input shapes to be provided as a command line option (see --shape) for variable-sized inputs. In the absence of a "shape" field, the provided defaults will be used. Below is an example JSON file for a model with a single input "INPUT", shape [-1,-1] and data type INT32:

{
  "data" :
   [
      {
        "INPUT" :
              {
                  "content": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                  "shape": [2,8]
              }
      },
      {
        "INPUT" :
              {
                  "content": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                  "shape": [8,2]
              }
      },
      {
        "INPUT" :
              {
                  "content": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
              }
      },
      {
        "INPUT" :
              {
                  "content": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                  "shape": [4,4]
              }
      }
      .
      .
      .
    ]
}

Shared Memory

By default perf_client sends input tensor data and receives output tensor data over the network. You can instead instruct perf_client to use system shared memory or CUDA shared memory to communicate tensor data. By using these options you can model the performance that you can achieve by using shared memory in your application. Use --shared-memory=system to use system (CPU) shared memory or --shared-memory=cuda to use CUDA shared memory.

Communication Protocol

By default perf_client uses HTTP to communicate with the inference server. The GRPC protocol can be specified with the -i option. If GRPC is selected, the --streaming option can also be specified for GRPC streaming.
