The HTTP Workshop and Python

From the 25th to the 27th of July this year, I was in Stockholm, attending the 2nd HTTP Workshop. This event, a (so far) annual gathering of some of the leading experts in HTTP, is one of the most valuable events I’ve ever had the fortune to attend. Given that I was attending the event at least in part as a representative of the Python community, I thought it’d be good to share some of my thoughts and opinions in the wake of this event with that community.

But first, let’s talk about the event itself. When pressed I’ve described the event as a “conference”, but that’s not really an accurate portrayal. 40 people attended, and we basically spent three days, all 40 of us, sitting in the same room discussing HTTP. We had a loose framework of sessions to kick discussions off: some people shared data they had collected, others shared insights they’d gained while implementing HTTP/2, and still others (cough cough Google) talked about the ways in which we could push the web forward even further.

These sessions were mostly just a starting off point, however. Conversations were allowed to flow naturally, with different participants asking questions or interjecting supporting material as the discussion progressed. These discussions were almost inevitably of both high value and very dense informational content: with all the attendees being experts in one or more aspects of HTTP in both theory and practice, there was a phenomenal opportunity to learn things during these discussions.

Attendees

40 people showed up to the workshop, and they can be fairly neatly divided into two groups. The first group came from the major technology companies involved in the web. For example, Mozilla sent 5 attendees, Google sent 4, and Facebook, Akamai, and Apple each sent 3. There was also a long tail of big web companies that sent one or two people, including most of the obvious organisations like Microsoft, CloudFlare, Fastly, and the BBC.

The other major collection of folks represented were individual implementers, most of whom came from open source backgrounds. In particular we had a great density of some of the earliest OSS HTTP/2 implementers, from across a wide variety of implementations. We had Brad Fitzpatrick for Go, Daniel Stenberg for Curl, Kazuho Oku for H2O, Tatsuhiro Tsujikawa for nghttp2, Stefan Eissing for Apache mod_h2, and myself for Python. On top of that we had a series of other HTTP and HTTP/2 experts from the OSS community, like Moto Ishizawa and Poul-Henning Kamp.

There was a third, smaller group, which could most accurately be described as “observers”. These were folks representing organisations and interests that were not currently heavily involved in the HTTP space, but for whom it was extremely relevant. For example, the GSMA sent a representative.

This wide variety of attendee backgrounds ensured that the space managed not to be an echo chamber. For almost any idea or proposal, there was guaranteed to be at least one person with a well-argued position in favour, and at least one with a well-argued position in opposition. That kind of environment is great for ensuring that people come out of the discussion with a much better understanding of the problem, even if it doesn’t always move the people on each side.

Topics

Over the three days, we covered a lot of topics. Unsurprisingly, with 40 people in attendance, not everything was of equal relevance to each person. Rather than summarise the entire meeting, then, I’ll focus on the topics that seemed like they were most relevant to the Python community and the future development of HTTP for Python web developers.

Server Push

One of the headline discussions in the first day, from my perspective, ended up being about HTTP/2 Server Push. Several implementers from CDNs presented their data on the effectiveness of server push. The headline from this data is that it seems that we have inadvertently given web developers the idea that pushing a resource is “free”, and therefore that all resources should automatically be pushed.

This is, of course, not true. While pushes can in principle be automatically rejected, in practice it’s not always easy for a browser to know that it doesn’t need a resource. As a result, server admins that push all their static resources may waste bandwidth transmitting resources that the client already has and can serve from cache, or could at least serve using an “If-Modified-Since” ⇒ 304 Not Modified dance that transfers relatively few bytes.
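For reference, the revalidation dance mentioned above can be sketched in a few lines of Python. The helper below is hypothetical, not any real framework’s API; it only shows how a validator lets the server answer with a bodyless 304:

```python
# A minimal sketch of the "If-Modified-Since" => 304 Not Modified dance.
# `conditional_get` is a hypothetical server-side helper: given the request
# headers and the resource's current Last-Modified stamp, it decides whether
# the full body needs to cross the wire at all.
from email.utils import parsedate_to_datetime

def conditional_get(request_headers, last_modified, body):
    """Return (status, payload) for a GET against a cacheable resource."""
    since = request_headers.get("If-Modified-Since")
    if since is not None:
        if parsedate_to_datetime(since) >= parsedate_to_datetime(last_modified):
            # The client's copy is still fresh: only status and headers are
            # sent, which is relatively few bytes compared to the body.
            return 304, b""
    return 200, body

stamp = "Wed, 27 Jul 2016 10:00:00 GMT"
body = b"body { color: black }"

# First request: no validator yet, so the full body comes back.
status, payload = conditional_get({}, stamp, body)
# Revalidation: the client echoes the stamp it saw, and saves the transfer.
status2, payload2 = conditional_get({"If-Modified-Since": stamp}, stamp, body)
```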

Worse, the win case for server push is not as good as the lose case is bad. If you push an unneeded resource, that consumes all the bandwidth of that resource and gets in the way of delivering other things. On the other hand, if you fail to push a resource the client needs, the only cost is 1 RTT of extra latency in the page load. On many connections pushing everything may therefore be a bad trade-off, and that’s especially true if your static resources are cacheable.

There is some work ongoing in the HTTP working group to provide browsers and servers tools to address this problem. For example, Kazuho Oku is proposing a specification to allow browsers to provide a digest of the resources they already have in the cache, which allows servers to only push resources they know the clients don’t have. While I was lukewarm on the proposal before this meeting (it seemed like a lot of complexity to solve a small problem), with the extra data the CDNs shared I’m now strongly in favour of it: it allows us to make much better use of limited network resources.
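Kazuho’s proposal encodes a compressed set of hashed cache entries for the server to consult. The sketch below deliberately ignores the wire encoding (the real draft uses a Golomb-coded set) and uses a plain Python set of truncated hashes, just to show the shape of the idea; all names here are illustrative:

```python
# Sketch of a cache digest: the client summarises what it already holds,
# and the server skips pushes for anything that appears in the summary.
# The real draft compresses the set for the wire; this version does not.
import hashlib

def digest_entry(url, bits=31):
    """Hash a cached URL down to a small fixed-size value."""
    h = hashlib.sha256(url.encode("utf-8")).digest()
    return int.from_bytes(h[:8], "big") % (1 << bits)

def build_digest(cached_urls):
    """Client side: summarise the cache contents for the server."""
    return {digest_entry(u) for u in cached_urls}

def should_push(url, digest):
    """Server side: skip the push if the client probably has the resource."""
    return digest_entry(url) not in digest

digest = build_digest(["https://example.com/app.js",
                       "https://example.com/style.css"])
```

Because the digest hashes are truncated, membership is probabilistic: a rare collision means a needed resource is not pushed, which costs only the 1 RTT lose case described above.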

The other remarkable problem that revealed itself during the course of the discussions was that none of the major browsers actually stores pushed resources in the cache. That’s remarkable, given that one of the most regularly discussed use cases for server push is cache priming. I got the impression from the discussion that the browser vendors weren’t really that pleased with this state of affairs, so I wouldn’t be surprised to see this change in the future.

QUIC

Jana doesn't like it when we call it TCP/2

On the second day, Jana Iyengar from Google presented some of Google’s work on QUIC. This was enormously interesting, partly because I haven’t had a chance to play much with QUIC yet, but mostly because Jana’s background is very different from most of the other attendees. Unlike the rest of us, whose expertise is mostly at the application layer, Jana has a transport layer background, which gives him very different insight into the kinds of ideas that were thrown around at the workshop.

QUIC is a topic I’ve had mixed feelings about for a while, mostly because the amount of effort required for me to implement QUIC is an order of magnitude more than the work required to implement HTTP/2. In no small part, this is because QUIC is as much a transport protocol as it is an application protocol, which means that we need to tackle things like congestion control: a much more complex topic than almost anything we face in HTTP/2. This kind of extra workload is no problem for Google, with its army of engineers, but for the open source community it represents a drastic increase in both workload and technical expertise required. After all, most people in the OSS community who are experts in implementing HTTP are not experts in implementing TCP!

However, as the discussion continued I became increasingly comfortable with the progress of QUIC. Fundamentally at this point, Google and the other tech giants are expending enormous engineering resources on eking ever smaller latency improvements out of the web. This is a laudable goal: a lower latency web is good for everyone, and I commend their progress. However, it’s not clear that Python web servers and clients gain an enormous amount from trying to keep up.

Trying to squeeze 5ms lower latency from your network doesn’t help the rendering time of your Python web page as much as optimising your Python code, or moving to a different backend language, or using PyPy. If you’re running Python in your web server and you aren’t behind a CDN, then QUIC is not your highest priority in the search for performance.

However, once your web site gets big enough to put behind a CDN, your origin server is no longer the primary way to gain performance! Happily, the big CDN providers do have the engineering resources to deploy QUIC relatively rapidly. For Python clients the latency gains are a bit more helpful, but again your client will likely gain more by just finding other work to do while it waits for data than by optimising the latency that much. This means that, realistically, those of us in the Python community can afford to wait until a good QUIC library comes along: we have other performance bridges to cross first.

On top of all that, QUIC is not going to be anywhere near as useful in data center environments, where we have high-bandwidth, low-latency, good-quality networks. It turns out TCP works great in environments like that. Given that a large number of Python servers and clients are deployed in just such environments, the improvements of QUIC are much less noticeable there.

That means that when someone starts work on and open-sources a good QUIC library (by which I mean not the stack lifted out of Chromium, but one that exposes a C ABI and doesn’t require the Go runtime, so I can bind it from Python) I’ll happily bind it and make whatever changes we need to get QUIC in our Python servers and clients. I’ll even do so early in the development process and provide feedback and bug fixes. But I’m not in any rush to build one myself: I don’t think I’ve got the skill set to build a good one, and I think the time it would take me to get that skill set is better deployed elsewhere in the HTTP space.

However, I still want to see the IETF involved in the QUIC process. One way or another, Google is going to do things like QUIC: its business model depends on them. That means that those of us in the IETF shouldn’t fight back against this kind of model just because it’s not of immediate use to the long tail of the web. Instead, we should encourage Google to keep coming back: at least that way the web remains open and free, and those of us in the long tail can make sure that Google makes engineering decisions that don’t close the door on us.

Blind Caching

I don’t have a lot to say here, other than that it seems like a great idea. Blind caching attempts to solve some of the problems we have caused with forward proxies by allowing proxies to cache content in an encrypted form, which is then decrypted in the browser.

Happiest Eyeballs

The workshop briefly discussed the suggestion from Fastly that perhaps clients should attempt to connect to all of the IP addresses returned in a DNS response and use the one that connects the fastest. This is basically an extension of the Happy Eyeballs algorithm for IPv4/IPv6 to attempt to route around congested or otherwise damaged routes.
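A rough sketch of that race, assuming we simply fire off every connection attempt at once and keep the first to complete (the function and helper names here are my own, and a real client would stagger its attempts and handle cleanup more carefully):

```python
# Race TCP connections to every address and keep whichever completes first.
import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def _connect(addr, timeout=2.0):
    return addr, socket.create_connection(addr, timeout=timeout)

def happiest_eyeballs(addresses):
    """Return (winning_address, connected_socket), closing the losers."""
    winner = None
    with ThreadPoolExecutor(max_workers=len(addresses)) as pool:
        futures = [pool.submit(_connect, a) for a in addresses]
        for fut in as_completed(futures):
            try:
                addr, sock = fut.result()
            except OSError:
                continue  # unreachable or timed out: drop this candidate
            if winner is None:
                winner = (addr, sock)
            else:
                sock.close()  # a slower duplicate connection
    return winner
```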

This seems like an interesting enough idea, and I suspect a browser implementer will give this a shot sometime over the next year to see how it works out. Watch this space.

One of the constant sources of problems in HTTP is that there is no well-defined format for how your HTTP header field value should look. Some header fields have values that are just strings (e.g. Location), some have values that are simple comma-separated lists, others have incredibly complex parameter-based syntaxes, and still others have weirdo one-off things that don’t look anything like any others (I’m looking right at you, Cookie, you monster). This means that clients and servers frequently have many one-off parsers for different header field values which all share relatively little code: bad for programmers and for web security in general, given how many of these clients and servers are written in unsafe languages.
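To make the problem concrete, here are two tiny ad-hoc parsers of the kind clients end up carrying around: one for a comma-separated list with q-values, one for Cookie’s semicolon-separated pairs. Both are deliberately simplified sketches, not RFC-compliant parsers, and they share essentially no code:

```python
# Two one-off header parsers with nothing in common, illustrating the
# lack of shared structure across HTTP header field values.

def parse_accept_encoding(value):
    """'gzip;q=0.8, br' -> [('gzip', 0.8), ('br', 1.0)]"""
    encodings = []
    for item in value.split(","):
        parts = item.strip().split(";")
        name = parts[0].strip()
        q = 1.0
        for param in parts[1:]:
            key, _, val = param.strip().partition("=")
            if key == "q":
                q = float(val)
        encodings.append((name, q))
    return encodings

def parse_cookie(value):
    """'a=1; b=2' -> {'a': '1', 'b': '2'}"""
    pairs = {}
    for item in value.split(";"):
        name, _, val = item.strip().partition("=")
        pairs[name] = val
    return pairs
```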

Julian Reschke has been leading a proposal to consider a standard structure for new header field values. The initial proposal has been JSON, and that was discussed a bit at the workshop. Generally speaking the workshop seems lukewarm towards JSON, in no small part because JSON has a fairly surprising set of rules around parsing numbers. For example, the number 0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000100e400 is treated by NodeJS’ built-in JSON parser as 100, but by Ruby 2.0.0’s as 0. That is likely to be somewhat problematic (understatement of the year!), and opens implementations up to nasty attacks caused by understanding a header field value differently to their peers.
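Python’s own json module has similar sharp edges: an overflowing number literal is silently turned into an IEEE 754 infinity rather than rejected, so two peers parsing the same bytes can end up with wildly different values.

```python
# Python's json module converts numbers with float(), so a literal that
# overflows a double becomes infinity instead of raising an error.
import json

parsed = json.loads("1e400")  # too large for a 64-bit float
```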

However, the general consensus is that having a well-defined structured format for header field values would be an extremely good idea. Expect more work to be done here in the next year, with hopefully a draft specification coming soon. Along with a better serialization format, one can only hope!

Debug Information for HTTP/2

The OSS implementers amongst the group also talked briefly about some of the problems that they’ve bumped into with implementing HTTP/2. Given that there was quite a lot of consistency across the board with the kinds of problems people bump into, this was likely to be an ongoing problem, particularly for new implementers.

The discussion progressed far enough that Brad Fitzpatrick and I decided to prototype a possible solution to the problem. This solution would allow HTTP/2 servers to essentially reveal information about what they believe the state of the connection is. This allows a number of things. Firstly, implementers of clients who find their connections failing can interrogate the server and check what the state is. Disagreeing about this state is the cause of almost all interop bugs, and so seeing what one side believes the state is allows for much more debuggability. Secondly, and just as importantly, it allows tools like h2spec to test implementations not just for whether they emit the correct frames and data, but also to essentially interrogate them for “thoughtcrime”: it can check that the server got that behaviour right on purpose or just by accident.
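To give a feel for the idea, here is a sketch of the kind of JSON body such a debug endpoint might return. The field names below are purely illustrative, not the draft’s actual schema:

```python
# Build a JSON body describing a server's view of an HTTP/2 connection:
# its settings, the peer's settings, and the state of each stream.
import json

def connection_state_body(local_settings, remote_settings, streams):
    """`streams` maps stream IDs to (state, flow_control_window) pairs.
    All field names are illustrative only."""
    state = {
        "settings": local_settings,
        "peer_settings": remote_settings,
        "streams": {
            str(sid): {"state": st, "flow_window": window}
            for sid, (st, window) in streams.items()
        },
    }
    return json.dumps(state, indent=2).encode("utf-8")

body = connection_state_body(
    {"INITIAL_WINDOW_SIZE": 65535},
    {"MAX_CONCURRENT_STREAMS": 100},
    {1: ("open", 65535), 3: ("half-closed (remote)", 32768)},
)
```

A client debugging an interop failure could fetch this body and compare the server’s view of the connection state against its own.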

This has been well-enough-received that Brad and I have written up a draft proposal for how it should work, which you can find here. We’ve also received a lot of good ideas about further enhancements. If you want to see it live on the web, you can point your HTTP/2-enabled client or browser to either Brad’s Go implementation or my Python implementation.

Over the next few weeks I’ll be pushing forward, trying to get a good feel for how this looks more generally. I’ll then add hooks into hyper-h2 to allow servers that want to expose this information to generate the body automatically: I may even allow hyper-h2 to respond to requests for this information on its own, with no need for server authors to do anything at all!

Final Thoughts

This was my first HTTP Workshop, and I think it was enormously valuable. In the space of three days I learned more and gained more insights about HTTP than I have done in the previous 12 months, and I knew more than most about HTTP before I went! I’m extremely hopeful that these events continue in the coming years, and that I am able to attend the next one.

I also want to mention that it was a genuine honour and pleasure to represent the Python community at this event. Only two attendees represented language communities (Brad for Go, me for Python), and I think it’s fantastic that the Python community was able to have a voice at this kind of meeting. It bodes really well for the future health of the Python HTTP ecosystem, and shows the continuing importance and relevance of the Python programming language and community.

I’d also like to thank my employer, Hewlett Packard Enterprise, who have done excellent work in supporting the Python HTTP ecosystem for the last year and who enabled me to do the work required to bring value to an event like this one. Long may it last!

Source: https://www.pybloggers.com/2016/07/the-http-workshop-and-python/
