Applications of Goroutines & Channels in Go

In the first part of my Go series, I discuss different applications for getting the most out of goroutines and channels, with an emphasis on server-side Go development.

Before looking through concrete examples, let's look at a comparison of concurrency and parallelism. A while back I read an interesting definition comparing the two:

Concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of computation. Parallelism is about executing many things at once; its focus is execution. Concurrency is about dealing with many things at once; its focus is structure.

This explanation does a good job of comparing and contrasting the two, which I've often seen misunderstood. Understanding the differences between them is useful in this age of software development, where multi-threaded patterns become increasingly prevalent in cloud-native architecture.

Communicating Sequential Processes


Go provides an easy to understand paradigm of concurrency known as Communicating Sequential Processes (CSP). In a short but simple definition, this is a message-passing paradigm of concurrency built around channels, which can be thought of as queues of messages.

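To make the idea concrete, here's a minimal sketch (nothing beyond the standard library) where one goroutine sends messages into a buffered channel and another receives them, treating the channel as a small queue of messages:

package main

import "fmt"

func main() {
  // a channel with a buffer of 3 behaves like a small queue of messages
  queue := make(chan string, 3)

  // one goroutine sends messages into the queue
  go func() {
    queue <- "first"
    queue <- "second"
    queue <- "third"
    close(queue) // signal that no more messages will be sent
  }()

  // the main goroutine receives messages until the channel is closed
  for msg := range queue {
    fmt.Println(msg)
  }
}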

You may also be familiar with some other popular paradigms of concurrency seen in other languages:


  • Actor Model — Erlang, Scala

  • Threads — Java, C#, C++


I won’t go into a full review of these as that can easily be an entire discussion in itself. I would like to note, though, that no one paradigm is better at solving concurrency; they each have their tradeoffs and use-cases. There are even some really neat community-written libraries where different paradigms have been implemented across different languages, like the actor model in Go!

Like most things in Go, its implementation of concurrency shines in its simplicity and efficiency. This is partly due to its first-class support for both goroutines and channels, which provide an easy to use interface for concurrently passing messages around your application runtime. In addition, the very low overhead of goroutines within Go’s scheduler allows you to potentially spawn millions of simultaneous tasks. This is dramatically less overhead than the thread implementations seen in other languages. Ardan Labs has a much more detailed article explaining how this works.

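To get a rough feel for how cheap goroutines are (a toy sketch, not a benchmark), spawning a hundred thousand of them and waiting for them all is unremarkable on an ordinary machine:

package main

import (
  "fmt"
  "sync"
)

func main() {
  var wg sync.WaitGroup
  for i := 0; i < 100000; i++ {
    wg.Add(1)
    go func() {
      defer wg.Done() // each goroutine starts with only a few kilobytes of stack
    }()
  }
  wg.Wait()
  fmt.Println("spawned and finished 100000 goroutines")
}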

For this article, I will stick with some examples of best practices, techniques and applications for using Go’s concurrency API. If you’re new to Go and you haven’t already, I recommend going through A Tour of Go first.


🎉 Examples

Now for the fun part, let’s jump into some examples and patterns of applying these concepts to real problems.


For the sake of this article, I will mention two different types of processes and their scopes: request-level and server-level processes. When writing an application, especially server applications, you’ll be working with these two different scenarios very often.

Request-Level Processing


This could be thought of as a temporary process that runs within the scope of a request.

One of the most common examples of a request-level process is a request being received by an HTTP server. You might be writing a service which serves a RESTful API over HTTP. As you receive requests, you’ll want to process each one asynchronously, possibly executing some business logic and/or accessing some data storage medium. Each request should be processed simultaneously with the others to prevent blocking new requests. Another common example might be reading messages off of message queues like Kafka or RabbitMQ simultaneously and processing them through a stream.

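To make the HTTP case concrete: Go’s net/http server already invokes each handler in its own goroutine, so a slow handler doesn’t stop new requests from being accepted. A minimal sketch (the route and timings here are only illustrative):

package main

import (
  "fmt"
  "log"
  "net/http"
  "time"
)

func main() {
  // net/http serves every incoming request in its own goroutine
  http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
    time.Sleep(500 * time.Millisecond) // pretend to run some business logic
    fmt.Fprintln(w, "done")
  })
  log.Fatal(http.ListenAndServe(":8080", nil))
}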

Let’s look at some examples where we use goroutines and channels in this scenario. Below is a very simple use-case with a goroutine where we spawn an asynchronous process and wait.


resCh := make(chan bool)


go func(ch chan<- bool) {
  time.Sleep(1 * time.Second)
  ch <- true
}(resCh)


res := <-resCh
fmt.Print(res)

As seen, we are spawning a goroutine from an anonymous function. We pass a send-only channel into the goroutine to return a value to the main goroutine after waiting for one second. The receive into the variable res will block until a message is sent on the channel.

Alright, that’s the simplest use-case; let’s expand on it and process multiple things asynchronously.

resCh := make(chan string, 2)


var waitGroup sync.WaitGroup
waitGroup.Add(2)


go func(ch chan<- string, wg *sync.WaitGroup) {
  time.Sleep(1 * time.Second)
  ch <- "Hello"
  wg.Done()
}(resCh, &waitGroup)


go func(ch chan<- string, wg *sync.WaitGroup) {
  time.Sleep(1 * time.Second)
  ch <- "World"
  wg.Done()
}(resCh, &waitGroup)


waitGroup.Wait()


for len(resCh) > 0 {
  res := <-resCh
  fmt.Println(res)
}

The Go standard library provides the sync package for handling synchronization of goroutines, including WaitGroup. As you can see in the example, we create a WaitGroup and call waitGroup.Add(2), since we know up front the number of goroutines we spawn. This also means we can set the channel’s buffer size to match the total expected responses. This is okay, but it’s a static example and doesn’t provide much flexibility. Let’s change this example and make it dynamic.

bufferSize := 5
wordCh := make(chan string, bufferSize)
words := strings.Split("the quick brown fox jumped over the lazy dog", " ")


var waitGroup sync.WaitGroup


// what is wrong here?
for _, word := range words {
  waitGroup.Add(1)
  go func(ch chan<- string, wg *sync.WaitGroup) {
    time.Sleep(1 * time.Second)
    ch <- word
    wg.Done()
  }(wordCh, &waitGroup)
}


done := make(chan bool)
go func(d chan bool) {
  for res := range wordCh {
    fmt.Printf("%s ", res)
  }
  close(d)
}(done)


waitGroup.Wait()
close(wordCh)
<-done

Now we have a more interesting use-case. Let’s step through it.


  1. Create a channel with a small buffer size.

  2. Create an array of words (let’s assume we don’t know the size and pretend it’s a stream of words.)

  3. For each word in the stream, increment the wait group and spawn a new goroutine.

  4. Spawn a separate goroutine to read messages off the channel until the channel is closed.

  5. Wait and block until all goroutines are Done.

  6. Close the wordCh channel. This is important: it signals to the separate goroutine that we have finished processing new words.

  7. Close the done channel to unblock the main goroutine.

Note: It’s usually important to have some mechanism to stop or short-circuit a goroutine! I provide a better example below with context.


What is the expected output?


the quick brown fox jumped over the lazy dog

No, not likely; that would assume the processing is synchronous. We have to remember that these messages are processed asynchronously using goroutines, so we lose any guarantee of ordering in the output. There’s still another problem, though.

The actual output will be “dog” printed N times, once for each word. What 🤯?! The reason for this is a common Go gotcha! Ahh, what a frustrating intended behavior that all Go developers run across at some point. There’s a good explanation on the official Go GitHub Wiki, but simply put, the closure in the for loop captures the loop variable itself rather than its value at each iteration. Since the main goroutine’s loop finishes before the spawned goroutines’ print calls run, word references the last element of the array and each print will output the same value in our case. The solution is simple: copy the variable. Since my first time running across this, it looks like the IDE GoLand now provides a warning for this problem, nice!

for _, word := range words {
  waitGroup.Add(1)
  go func(ch chan<- string, wg *sync.WaitGroup, someWord string) {
    time.Sleep(1 * time.Second)
    ch <- someWord
    wg.Done()
  }(wordCh, &waitGroup, word) // copy the word as a parameter
}
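
Another common way to make the same copy is to shadow the loop variable inside the loop body (the word := word idiom); a quick sketch of that alternative against the earlier example:

for _, word := range words {
  word := word // shadow the loop variable with a per-iteration copy
  waitGroup.Add(1)
  go func(ch chan<- string, wg *sync.WaitGroup) {
    time.Sleep(1 * time.Second)
    ch <- word
    wg.Done()
  }(wordCh, &waitGroup)
}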

One last example that I would like to cover, which I believe is important for any request-level process, is handling deadlines. Whenever performing any kind of I/O-bound operation, whether a network request or even a long-running in-memory calculation, you should account for it either taking too long or, in the worst case, never finishing. Similar to how we signaled our goroutine to stop in the previous example, we can do the same with a deadline. One way of handling this problem is with the standard library context. Context can be used in a few different ways: as a timer and/or deadline signaler, or as a key-value map (not recommended).

// we ignore the cancelFunc here but in a real scenario, we'd want to invoke the cancelFunc if we finish within the deadline
newCtx, _ := context.WithDeadline(context.Background(), time.Now().Add(1*time.Second))
go func(ctx context.Context) {
  for {
    select {
    case <-ctx.Done():
      fmt.Println("close goroutine")
      return
    default:
      // do something else
    }
  }
}(newCtx)


<-newCtx.Done()
fmt.Println("finished")

In this example, we are using the deadline signal. We fork the background parent context and set a deadline one second in the future. When the deadline occurs, the context closes its Done channel, which can be used to signal to our goroutines that we should handle any clean-up and return. Alternatively, you can use any channel to signal the goroutine to close if there’s no specific deadline.

Great, so now we’ve covered some common examples for a request-level use of goroutines and channels. Let’s move on.


Server-Level Processing


I like to think of server-level processes as long-running processes that live throughout your runtime. In the same HTTP server example, you can think of the HTTP server’s instance as the long-running process. There might be cases where you’ll need to write custom processes of this kind, such as a custom cache handler, router, message queue consumer/producer, scheduler or job runner, etc.

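As a quick illustration of such a long-running process, here’s a sketch of a ticker-driven job runner (the function name and interval are only illustrative, and it assumes the time and fmt packages are imported) that would be spawned once at startup and stopped with a done channel, just like the examples that follow:

func runJobs(done <-chan bool) {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			fmt.Println("stopping job runner")
			return
		case <-ticker.C:
			// run the periodic job here, e.g. flush a cache or poll a queue
		}
	}
}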

Again, let’s look at a simple use-case first with a token cache and refresher:


func main() {
	done := make(chan bool)
	t := TokenCache{
		token:      randomString(),
		expiration: time.Now(),
		lock:       &sync.Mutex{},
	}


	// spawn our process
	go t.Start(done)


	for i := 0; i < 3; i++ {
		fmt.Println(t.GetToken())
		time.Sleep(1 * time.Second)
	}


	close(done)
	fmt.Println("shutdown")
}


type TokenCache struct {
	token      string
	expiration time.Time
	lock       *sync.Mutex
}


func (t *TokenCache) Start(done chan bool) {
	for {
		select {
		case <-done:
			fmt.Println("closing goroutine")
			return
		default:
			if time.Now().After(t.expiration) {
				t.lock.Lock()
				fmt.Println("getting new token")


				// do something
				t.token = randomString()


				t.expiration = time.Now().Add(2 * time.Second)
				t.lock.Unlock()
			}
		}
	}
}


func (t *TokenCache) GetToken() string {
	t.lock.Lock()
	defer t.lock.Unlock()
	return t.token
}

Perhaps our application depends on using a token with HTTP requests; this might be a JWT used to authenticate against some remote resource. We want to store our token in-memory and automatically refresh it when it expires. To do this, one option is to create a process that will run from the start of our application until its shutdown, periodically refreshing the token. Typically, this process will be created and spawned in the initialization of our application.

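The example above calls a randomString helper that isn’t shown in the article; a minimal stand-in (assuming math/rand is imported), just so the snippet runs, could look like this:

func randomString() string {
	letters := []byte("abcdefghijklmnopqrstuvwxyz")
	b := make([]byte, 8)
	for i := range b {
		b[i] = letters[rand.Intn(len(letters))]
	}
	return string(b)
}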

In the example, we create a TokenCache struct that provides Start and GetToken methods. We then spawn the Start process in a new goroutine; it accepts a done channel for shutting down, like the previous examples, and will continue processing until that channel is closed. It periodically refreshes the token and locks the TokenCache with a mutual exclusion lock (mutex) in case another goroutine calls GetToken in the middle of a refresh.

What can go wrong in this scenario? What happens if the underlying refresh logic causes a panic? A panic in our goroutine will take down the entire server and crash the application! This is obviously not good behavior for these types of processes. Unlike the request-level examples, which usually have middleware to recover from a panic in an HTTP request (or other request flow), we have to implement our own logic. In most cases it’s ideal to emit a metric or log the problem, then recover and keep trying. If the problem continues to occur, you can set a threshold for your tolerance of these fatal panics. Another solution, for when there’s no way to recover, is to either return an error or signal back to the main application goroutine that we need to shut down gracefully and return a status code indicating the failure. This is important because there might be other in-memory messages in different processes that need to be flushed through the system before closing.

...
func (t *TokenCache) Start(done chan bool) {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("we recovered!")
			
			// some possible solutions:
			// add some delay, emit a metric or log, return an error, send a signal to the main goroutine,
			// or increment a custom tolerance value and determine whether to exit or not


			// if everything is good, let's restart the process
			go t.Start(done) // this will recursively restart the process and escape the original callstack
		}
	}()
...

Additionally, having the caller invoke GetToken and potentially wait on a lock isn’t a very message-driven pattern for concurrency. A better solution would be to give callers a channel to receive from when the token is available. This can be thought of as a Future or Promise.


func (t *TokenCache) GetToken() <-chan string {
	res := make(chan string)
	go func(ch chan string) {
		t.lock.Lock()
		defer t.lock.Unlock()
		ch <- t.token
		close(ch)
	}(res)
	return res
}

We can still use the mutex internally as the state for our TokenCache, but I think we can do better. Some nice-to-have enhancements would be waiting on a response for a potential new token rather than blocking on the lock, and supporting context as a parameter to our GetToken method. This would benefit our callers and let them better handle their own logic.

type request struct {
	response chan<- string
	ctx      context.Context
}


type TokenCache struct {
	token        string
	expiration   time.Time
	requestToken chan request
}


func (t *TokenCache) Start(done chan bool) {
	defer func() {
		if r := recover(); r != nil {
			...
		}
	}()
	for {
		select {
		case <-done:
			fmt.Println("closing goroutine")
			return
		case req := <-t.requestToken:
			t.refresh(req.ctx)
			req.response <- t.token
			close(req.response)
		default:
			ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
			t.refresh(ctx)
			cancel()
		}
	}
}


func (t *TokenCache) GetToken(ctx context.Context) <-chan string {
	res := make(chan string)
	go func(req request) {
		t.requestToken <- req
	}(request{
		response: res,
		ctx:      ctx,
	})
	return res
}

By creating a request message and processing the request directly in our refresh process, we can pass our ctx to the refresh implementation directly. Usually it’s an anti-pattern to store a context in a struct, but for the use-case of a message it’s perfectly acceptable. There’s a small chance that a caller will see a delay if they happen to make a request right as the token expires; otherwise the refresh is handled in the default case, so sequential requests fulfill their Future without being blocked by the refresh process. A couple of things I didn’t include with the new changes: when done is closed, we should close the request channel too and have a check in our GetToken method indicating the shutdown. We could also expand on this by changing the return type to a <-chan Response message that can include an error.

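As a sketch of that last idea (my own illustration, not from the original: it assumes the request struct’s response field is retyped and that refresh is changed to report an error), the Future could carry a Response value instead of a plain string:

// the response channel now carries a Response instead of a plain string
type Response struct {
	Token string
	Err   error
}

type request struct {
	response chan<- Response
	ctx      context.Context
}

func (t *TokenCache) GetToken(ctx context.Context) <-chan Response {
	res := make(chan Response)
	go func(req request) {
		t.requestToken <- req
	}(request{
		response: res,
		ctx:      ctx,
	})
	return res
}

// in Start, the request case would then respond with
// req.response <- Response{Token: t.token, Err: err} and close(req.response)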

Misc.


I didn’t explain all of the syntax in the examples, so here’s a quick overview of some of the features of Go’s goroutine and channel API (a small sketch tying them together follows the list):

  • You can change the buffer size of a channel with make(chan T, size).

  • For a channel you can use the len function to see how many messages are currently buffered, and cap to check its capacity.

  • You can change the signature of a channel for parameters and return types to be send-only chan<- T, or receive-only <-chan T.

  • You can close a channel with close(ch), and the two-value receive res, ok := <-ch reports ok == false once the channel is closed and drained.

  • You can range through a channel, which will loop until the channel closes.

  • You cannot get a return value from a goroutine, because go is not an expression, e.g. val := go something(). Instead use a channel like in the examples to pass a message to the parent goroutine (this is particularly useful for error handling.) Remember we are using CSP, a message passing concurrency paradigm.
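
Putting a few of those pieces together in one throwaway sketch (standard library only):

package main

import "fmt"

func main() {
	ch := make(chan int, 4)       // buffered channel with capacity 4
	fmt.Println(len(ch), cap(ch)) // 0 4

	go produce(ch)

	for v := range ch { // loops until the channel is closed
		fmt.Println(v)
	}

	v, ok := <-ch // ok is false once the channel is closed and drained
	fmt.Println(v, ok)
}

// produce only gets a send-only view of the channel
func produce(out chan<- int) {
	for i := 0; i < 4; i++ {
		out <- i
	}
	close(out)
}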

Summary

Hopefully this article will be useful to some of you! When I first started developing in Go and getting into more high-performance server-side Go development, I found it very difficult to find many resources on these kinds of scenarios. These are just some best practices that I’ve learned from my own experience and from compiling together what I’ve been able to learn from others.

If you’re interested in taking your goroutine and channel experience to the next level, a lot of these techniques can be written more elegantly as a stream or pipeline. The official Go blog has some more advanced examples.

Thanks for reading. Feel free to leave a comment if you have any questions or see anything wrong!


Translated from: https://medium.com/@alexsniffin/applications-of-goroutines-channels-in-go-2e24c478d71d
