Concurrency

Overview

        Design your program as a collection of independent processes

        Design these processes to eventually run in parallel

        Design your code so that the outcome is always the same

In detail

        group code/data by identifying independent tasks

        no race conditions

        no deadlocks

        more workers = faster execution

Communicating Sequential Processes

        Tony Hoare, 1978

        Each process is built for sequential execution

        Data is communicated between processes via channels. No shared state

        Scale by adding more of the same.

Go's concurrency toolset

        goroutines

        channels

        select

        sync package

Channels

        think of it as a bucket chain

        3 components: sender, buffer, receiver

        the buffer is optional

package main

import "fmt"

// rename this to main to run the unbuffered example
func main_no_buffer() {
	unbuffered := make(chan int)

	// blocks: nobody is sending yet
	// a := <-unbuffered
	// fmt.Println(a)

	// also blocks: nobody is receiving yet
	// unbuffered <- 1

	// with a receiver running in another goroutine, the send can complete
	go func() {
		a := <-unbuffered
		fmt.Println(a)
	}()
	unbuffered <- 1
}

func main() {
	buffered := make(chan int, 1)

	// still blocks: the buffer is empty and there is no sender
	// a := <-buffered
	// fmt.Println(a)

	// does not block: the value goes into the buffer
	buffered <- 1
	fmt.Println(len(buffered)) // 1 value is waiting in the buffer

	// blocks: the buffer is full and nobody receives, so the runtime
	// reports "all goroutines are asleep - deadlock!"
	buffered <- 3
}

Remember?

        no deadlocks

        more workers = faster execution

Blocking can lead to deadlocks

Blocking can prevent scaling

Closing channels

        Close sends a special "closed" message

        The receiver will at some point see "closed" and knows there is nothing more to do

        if you try to send more: panic

        if you close twice: panic


func main() {
	c := make(chan int)
	// only the sender knows when there is no more data,
	// so always close a channel from the sender side
	close(c)
	// receiving from a closed channel never blocks;
	// the two-value form shows what happened:
	// v is 0 (the zero value of int),
	// ok is false ("no more data" / "the returned value is not valid")
	v, ok := <-c
	fmt.Println(v, ok)
}

Select

        like a switch statement on channel operations

        the order of cases does not matter at all

        there is a default case too

        a non-blocking case is chosen; if several are ready, one is picked pseudo-randomly (send and/or receive)
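
A minimal sketch (not part of the original notes): both channels below already hold a value, so both cases are ready and select picks one of them pseudo-randomly.

package main

import "fmt"

func main() {
	a := make(chan int, 1)
	b := make(chan int, 1)
	a <- 1
	b <- 2

	// both receives could proceed, so select chooses one case at random
	select {
	case v := <-a:
		fmt.Println("received from a:", v)
	case v := <-b:
		fmt.Println("received from b:", v)
	}
}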

Making channels non-blocking


// TryReceive returns immediately:
// ok reports whether a value was actually received,
// more reports whether the channel may still deliver data
func TryReceive(c <-chan int) (data int, more, ok bool) {
	select {
	case data, more = <-c:
		return data, more, true
	default: // taken when the receive would block
		return 0, true, false
	}
}

// TryReceiveWithTimeout is the same idea, but waits up to duration
// instead of giving up immediately
func TryReceiveWithTimeout(c <-chan int, duration time.Duration) (data int, more, ok bool) {
	select {
	case data, more = <-c:
		return data, more, true
	case <-time.After(duration): // time.After fires once after duration
		return 0, true, false
	}
}

func Fanout(In <-chan int, OutA, OutB chan int) {
	for data := range In { //receive until closed
		// send to first non-blocking channel
		select {
		case OutA <- data:
		case OutB <- data:
		}
	}
}

func Turnout(InA, InB <-chan int, OutA, OutB chan int) {
	var (
		data int
		more bool
	)
	for {
		// receive from first non-blocking
		select {
		case data, more = <-InA:
		case data, more = <-InB:
		}
		if !more {
			// one of the inputs is closed, but we do not know which one
			return
		}
		// send to first non-blocking
		select {
		case OutA <- data:
		case OutB <- data:
		}
	}
}

func Turnout1(Quit <-chan int, InA, InB, OutA, OutB chan int) {
	var data int
	for {
		select {
		case data = <-InA:
		case data = <-InB:
		case <-Quit:
			// remember: close generates a message.
			// closing the inputs here is actually an anti-pattern,
			// but you can argue that Quit acts as a delegate for their senders
			close(InA)
			close(InB)
			// flush the remaining data
			Fanout(InA, OutA, OutB)
			Fanout(InB, OutA, OutB)
			return
		}
		// send to first non-blocking
		select {
		case OutA <- data:
		case OutB <- data:
		}
	}
}

Where channels fail

        you can create deadlocks with channels

        Channels pass around copies, which can impact performance

        Passing pointers over channels can create race conditions (see the sketch below)

        what about "naturally shared" structures like caches or registries?
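
A minimal sketch of the pointer problem (my own example, the job struct is made up): only the pointer is copied over the channel, both goroutines still share the struct it points to, so the sender's write races with the receiver's read.

package main

import "fmt"

type job struct{ n int }

func main() {
	c := make(chan *job, 1)

	j := &job{n: 1}
	c <- j // only the pointer is copied, not the struct

	done := make(chan bool)
	go func() {
		p := <-c
		fmt.Println(p.n) // read of the shared struct
		done <- true
	}()

	j.n = 2 // write by the sender after sending: a data race the race detector can flag
	<-done
}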

Mutexes are not an optimal solution

        mutexes are like toilets. the longer you occupy them, the longer the queue gets

        read/write mutexes can only reduce the problem

        using multiple mutexes will cause deadlocks sooner or later

        all-in-all not the solution we are looking for
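
A minimal sketch (my own, the Cache type is made up) of a read/write-mutex-guarded map for the "naturally shared" case: every reader and writer queues on the same lock, which is exactly the problem described above.

package main

import (
	"fmt"
	"sync"
)

// Cache is a minimal RWMutex-guarded map.
type Cache struct {
	mu   sync.RWMutex
	data map[string]int
}

func NewCache() *Cache {
	return &Cache{data: make(map[string]int)}
}

func (c *Cache) Get(key string) (int, bool) {
	c.mu.RLock() // many readers may hold the lock at once
	defer c.mu.RUnlock()
	v, ok := c.data[key]
	return v, ok
}

func (c *Cache) Set(key string, value int) {
	c.mu.Lock() // a writer excludes everyone else
	defer c.mu.Unlock()
	c.data[key] = value
}

func main() {
	c := NewCache()
	c.Set("answer", 42)
	v, ok := c.Get("answer")
	fmt.Println(v, ok) // 42 true
}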

Three shades of code

        blocking = the program may get locked up for an undefined time

        lock free = at least one part of your program is always making progress

        wait free = all parts of your program are always making progress

Atomic operations

        the sync/atomic package

        Store, Load, Add, Swap and CompareAndSwap

        mapped to thread-safe CPU instructions

        these operations only work on integer types (and pointers)

        only about 10-60x slower than their non-atomic counterparts
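
A minimal sketch (my own, not from the original notes) of the sync/atomic operations on an int64 counter:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var counter int64

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&counter, 1) // lock-free increment
		}()
	}
	wg.Wait()

	fmt.Println(atomic.LoadInt64(&counter)) // 100

	// CompareAndSwap only writes if the current value matches the expected one
	swapped := atomic.CompareAndSwapInt64(&counter, 100, 0)
	fmt.Println(swapped, atomic.LoadInt64(&counter)) // true 0
}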

Guidelines for non-blocking code

        do not switch between atomic and non-atomic functions

        target and exploit situations which enforce uniqueness

        avoid changing two things at a time

                sometimes you can exploit bit operations (see the sketch below)

                sometimes intelligent ordering can do the trick

                sometimes it is just not possible at all
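
A sketch of the bit-operation trick mentioned above (my own example, not from the original notes): two 32-bit counters are packed into one uint64 so that both can be changed in a single CompareAndSwap instead of "two things at a time".

package main

import (
	"fmt"
	"sync/atomic"
)

// high 32 bits: hits, low 32 bits: misses
var stats uint64

func recordHit()  { addStats(1, 0) }
func recordMiss() { addStats(0, 1) }

func addStats(hits, misses uint32) {
	for {
		old := atomic.LoadUint64(&stats)
		next := old + uint64(hits)<<32 + uint64(misses)
		if atomic.CompareAndSwapUint64(&stats, old, next) {
			return // both counters changed in one atomic step
		}
		// another goroutine won the race; retry with the fresh value
	}
}

func main() {
	recordHit()
	recordHit()
	recordMiss()
	s := atomic.LoadUint64(&stats)
	fmt.Println("hits:", s>>32, "misses:", uint32(s)) // hits: 2 misses: 1
}

In this purely additive case a single atomic.AddUint64 with a packed delta would also do; the CAS loop is the general pattern when the new value depends on the old one.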

Concurrency in practice

        avoid blocking, avoid race conditions

        use channels to avoid shared state

        use select to manage channels

        where channels do not work

                try to use tools from the sync package first

                in simple cases or when really needed: try lock-free code
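
A minimal sketch (my own, not from the original notes) that puts the advice together: channels carry the work, nothing is shared, and you scale by adding more of the same workers.

package main

import (
	"fmt"
	"sync"
)

func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs { // receive until the jobs channel is closed
		results <- j * j
	}
}

func main() {
	jobs := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < 4; w++ { // scale by adding more of the same
		wg.Add(1)
		go worker(jobs, results, &wg)
	}

	go func() {
		for i := 1; i <= 10; i++ {
			jobs <- i
		}
		close(jobs) // closed by the sender side
	}()

	go func() {
		wg.Wait()
		close(results) // safe: all workers have stopped sending
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("sum of squares:", sum) // 385
}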
