How many network ports does Linux allow Python to use?

So I've been trying to multi-thread some internet connections in Python. I've been using the multiprocessing module so I can get around the "Global Interpreter Lock". But it seems the system only gives Python one open connection port, or at least it only allows one connection to happen at a time. Here is an example of what I mean.

*Please note that this is running on a Linux server.
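(Since this is Linux-specific: the pool of client-side ports is the kernel's ephemeral port range, which you can read straight out of /proc. A quick check, separate from the test script below:)

# Read the ephemeral (client-side) port range the kernel hands out.
# On most distributions this is roughly 32768-60999.
with open('/proc/sys/net/ipv4/ip_local_port_range') as f:
    low, high = f.read().split()
print 'Ephemeral port range: %s-%s (%d usable ports)' % (low, high, int(high) - int(low) + 1)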

from multiprocessing import Process, Queue
import urllib
import random

# Generate 10,000 random urls to test and put them in the queue
queue = Queue()
for each in range(10000):
    rand_num = random.randint(1000, 10000)
    url = ('http://www.' + str(rand_num) + '.com')
    queue.put(url)

# Main function for checking to see if a generated url is active
def check(q):
    while True:
        try:
            url = q.get(False)
            try:
                request = urllib.urlopen(url)
                del request
                print url + ' is an active url!'
            except:
                print url + ' is not an active url!'
        except:
            if q.empty():
                break

# Then start all the workers (50)
for thread in range(50):
    task = Process(target=check, args=(queue,))
    task.start()

So if you run this, you'll notice that it starts 50 instances of the function, but only one runs at a time. You might think the "Global Interpreter Lock" is doing this, but it isn't. Try changing the function to a math function instead of a network request and you'll see all 50 run simultaneously.
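For example, a CPU-bound stand-in like the sketch below (the crunch function is just an illustration, not part of the original test) keeps all 50 processes busy at once:

from multiprocessing import Process

def crunch():
    # Pure CPU work -- no network involved.
    total = 0
    for i in xrange(10 ** 7):
        total += i * i

for each in range(50):
    Process(target=crunch).start()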

So do I have to work with sockets? Or is there something I can do that will give Python access to more ports? Or is there something I'm not seeing? Let me know what you think! Thanks!
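One thing worth checking before reaching for raw sockets is the per-process file descriptor limit, since every open socket costs one descriptor; a quick sketch using the standard-library resource module (an aside, not part of my script):

import resource

# Each open socket consumes a file descriptor, so the soft limit
# effectively caps how many connections one process can hold open.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print 'Open file limit: soft=%d, hard=%d' % (soft, hard)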

*Edit

So I wrote this script to test things better with the requests library. It seems I hadn't really tested it this way before. (I had mostly been using urllib and urllib2.)

from multiprocessing import Process, Queue
from threading import Thread
from Queue import Queue as Q
import requests
import time

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Queue()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if a generated url is active
def check(q, t_q):  # args are the url queue and the time queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url, timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the worker processes (20)
process_list = []
for thread in range(20):
    task = Process(target=check, args=(queue, time_queue))
    task.start()
    process_list.append(task)

# Join all the processes so the main process doesn't quit early
for each in process_list:
    each.join()

main_time_end = time.time()

# Put the timer queue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line = "Multiprocessing: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

# Do the same thing again, but with standard threads this time

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Q()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if a generated url is active
def check(q, t_q):  # same function as above
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url, timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the threads (20)
thread_list = []
for thread in range(20):
    task = Thread(target=check, args=(queue, time_queue))
    task.start()
    thread_list.append(task)

# Join all the threads so the main process doesn't quit early
for each in thread_list:
    each.join()

main_time_end = time.time()

# Put the timer queue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line = "Standard Threading: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

# Do the same thing all over again, but this time hit each url one at a time

# A main timestamp
main_time = time.time()

# Generate 100 urls and test them
timer_list = []
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    t = time.time()
    try:
        request = requests.head(url, timeout=5)
        timer_list.append(time.time() - t)
    except:
        timer_list.append(time.time() - t)

main_time_end = time.time()

# Results of the time
average_response = sum(timer_list) / float(len(timer_list))
total_time = main_time_end - main_time
line = "Not using threads: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

As you can see, it really is running in parallel. In fact, most of my testing shows that the threading module is actually faster than the multiprocessing module. (I don't understand why!) Here are some of my results.

Multiprocessing: Average response time: 2.40511314869 sec. -- Total time: 25.6876308918 sec.

Standard Threading: Average response time: 2.2179402256 sec. -- Total time: 24.2941861153 sec.

Not using threads: Average response time: 2.1740363431 sec. -- Total time: 217.404567957 sec.
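One tweak that might push the threaded numbers a bit further (a sketch I have not benchmarked, using the documented requests.Session API): give each worker thread its own Session so HTTP keep-alive connections are pooled and reused:

import time
import requests

def check(q, t_q):
    session = requests.Session()  # one Session per worker reuses pooled connections
    while True:
        try:
            url = q.get(False)
        except:
            break  # queue drained
        t = time.time()
        try:
            session.head(url, timeout=5)
        except:
            pass  # failures get timed too, same as before
        t_q.put(time.time() - t)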

This was done on my home network; the response time on my server is much faster. I think my question was answered indirectly, since I was actually having the problem with a much more complicated script. All of the suggestions helped me optimize it very well. Thanks everyone!
