Why does the response time increase when sending multiple requests to an asyncio server in Python?

I wrote a Python server with asyncio sockets that should receive requests concurrently and answer them in parallel.

When I send more than one request to it, the answering time increases more than I expected.

server:

```python
import datetime
import asyncio, timeit
import json, traceback
from asyncio import get_event_loop

requestslist = []
loop = asyncio.get_event_loop()

async def handleData(reader, writer):
    message = ''
    clientip = ''
    data = bytearray()
    print("Async HandleData", datetime.datetime.utcnow())
    try:
        start = timeit.default_timer()
        # read the custom header, then the body whose length it announces
        data = await reader.readuntil(separator=b'\r\n\r\n')
        msg = data.decode(encoding='utf-8')
        len_csharp_message = int(msg[msg.find('content-length:') + 15:msg.find(';dmnid')])
        data = await reader.read(len_csharp_message)
        message = data.decode(encoding='utf-8')
        clientip = reader._transport._extra['peername'][0]
        clientport = reader._transport._extra['peername'][1]
        print('\nData Received from:', clientip, ':', clientport)
        if (clientip, message) in requestslist:
            # duplicate request: drop the connection
            reader._transport._sock.close()
        else:
            requestslist.append((clientip, message))
            # adapter_result = parallel_members(message_dict, service, dmnid)
            adapter_result = '''[{"name": {"data": "data", "type": "str"}}]'''
            body = json.dumps(adapter_result, ensure_ascii=False)
            print(body)
            contentlen = len(bytes(body, 'utf-8'))
            header = bytes('Content-Length:{}'.format(contentlen), 'utf-8')
            result = header + bytes('\r\n\r\n{', 'utf-8') + bytes(body, 'utf-8') + bytes('}', 'utf-8')
            stop = timeit.default_timer()
            print('total_time:', stop - start)
            writer.write(result)
            writer.close()
            # del writer
    except Exception:
        writer.close()
        print(traceback.format_exc())
    finally:
        try:
            requestslist.remove((clientip, message))
        except ValueError:
            pass

def main(*args):
    print("ready")
    loop = get_event_loop()
    coro = asyncio.start_server(handleData, 'localhost', 4040, loop=loop, limit=204800000)
    srv = loop.run_until_complete(coro)
    loop.run_forever()

if __name__ == '__main__':
    main()
```

When I send a single request, it takes 0.016 sec, but for more requests this time increases.

CPU info: Intel Xeon X5650

client:

```python
import multiprocessing, subprocess
import time
from joblib import Parallel, delayed

def worker(file):
    subprocess.Popen(file, shell=False)

def call_parallel(index):
    print('begin ', index)
    p = multiprocessing.Process(target=worker, args=(index,))
    p.start()
    print('end ', index)

path = r'python "/test-Client.py"'  # client address
files = [path] * 12  # 12 identical requests

Parallel(n_jobs=-1, backend="threading")(delayed(call_parallel)(i) for index, i in enumerate(files))
```

With this client, which sends 12 requests at once, the total time per request is 0.15 sec.

I expected the per-request time to stay constant regardless of how many requests are sent.

Solution

What is a request

A single request consists, roughly speaking, of the following steps:

№1: write data to the network

№2: wait for the answer

№3: read the answer from the network

Steps №1/№3 are processed by your CPU very fast. Step №2 is a journey of bytes from your PC to some server (in another city, for example) and back over the wire: it usually takes much more time.
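For example, a minimal synchronous client sketch makes the three steps visible (assuming the server above is listening on localhost:4040; the header and payload below are made up to match its parsing):

```python
import socket, timeit

start = timeit.default_timer()
sock = socket.create_connection(('localhost', 4040))

# №1: write data to the network (hypothetical payload matching the server's parser)
sock.sendall(b'content-length:2;dmnid:1\r\n\r\n{}')

# №2 + №3: wait for the answer, then read it from the network
answer = sock.recv(65536)
sock.close()

print(answer, timeit.default_timer() - start)
```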

How asynchronous requests work

Asynchronous requests are not really "parallel" in terms of processing: it is still your single CPU core that processes one thing at a time. But running multiple async requests lets you use step №2 of one request to do steps №1/№3 of other requests instead of just wasting that time. That is why multiple async requests usually finish earlier than the same number of synchronous ones.
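A toy sketch (not the asker's code) shows the effect: step №2 is simulated with asyncio.sleep, and three concurrent "requests" finish in roughly the time of one, because their waits overlap:

```python
import asyncio, timeit

async def fake_request(i):
    # №1 and №3 would be fast CPU work; №2 is simulated network waiting
    await asyncio.sleep(0.1)
    return i

async def main():
    start = timeit.default_timer()
    results = await asyncio.gather(*(fake_request(i) for i in range(3)))
    # prints ~0.1 s, not ~0.3 s: the three waits overlap
    print(results, timeit.default_timer() - start)

asyncio.run(main())
```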

Running async code without network delay

But when you run things locally, step №2 takes almost no time: your PC and the server are the same machine, and the bytes never go on a network journey. There is simply no waiting time in step №2 that could be used to start a new request; only your single CPU core works, processing one thing at a time.

You should test your requests against a server that answers with some delay to see the results you expect.
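Here is a hedged sketch of such a test server (assuming the same header format as above): it sleeps before answering, so concurrent clients get a real step №2 to overlap:

```python
import asyncio

async def handle(reader, writer):
    await reader.readuntil(b'\r\n\r\n')          # read the request header
    await asyncio.sleep(0.5)                     # simulated network/processing delay (№2)
    writer.write(b'Content-Length:2\r\n\r\n{}')  # minimal canned answer
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, 'localhost', 4040)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

Against a server like this, 12 concurrent requests should complete in roughly 0.5 s total rather than 12 × 0.5 s.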

### Answer 1:

An example of sending an HTTP request from a coroutine in Python:

```python
import asyncio
import aiohttp

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main():
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, 'http://www.example.com/')
        print(html)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
```

### Answer 2:

In Python you can use the `asyncio` and `aiohttp` libraries to send multiple HTTP requests concurrently with coroutines.

First, import the libraries:

```python
import asyncio
import aiohttp
```

Then define an asynchronous function that sends an HTTP request and handles the response:

```python
async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()
```

Next, define an asynchronous function that sends multiple HTTP requests:

```python
async def main():
    async with aiohttp.ClientSession() as session:
        urls = [
            'http://example.com',
            'http://example.org',
            'http://example.net'
        ]
        tasks = []
        for url in urls:
            task = asyncio.create_task(fetch(session, url))
            tasks.append(task)

        # send the HTTP requests concurrently
        responses = await asyncio.gather(*tasks)

        # process the responses
        for response in responses:
            print(response)
```

Finally, run the asynchronous function to get the results:

```python
if __name__ == '__main__':
    asyncio.run(main())
```

In this example, `asyncio.gather()` runs several coroutine tasks at the same time and returns their results, while `asyncio.create_task()` creates a coroutine task.

### Answer 3:

An example of sending multiple HTTP requests concurrently with coroutines:

```python
import asyncio
import aiohttp

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main():
    urls = [
        'https://www.example.com/page1',
        'https://www.example.com/page2',
        'https://www.example.com/page3'
    ]
    async with aiohttp.ClientSession() as session:
        tasks = []
        for url in urls:
            task = asyncio.ensure_future(fetch(session, url))
            tasks.append(task)
        responses = await asyncio.gather(*tasks)
        for response in responses:
            print(response)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```

The code above uses the asyncio and aiohttp libraries to handle the coroutines and send the HTTP requests. The `fetch` function defines the request logic: it sends a GET request with `session.get()` and returns the response body. In `main` we define the URLs to request and a `ClientSession` to send them, wrap each URL's request in a task with `asyncio.ensure_future()`, and finally run all tasks concurrently with `await asyncio.gather()`, waiting for them all to finish. The example simply prints each response body; you can process the responses further as needed.