Rate Limiting Design

Rate limiting protects a service from being overwhelmed by sudden bursts of traffic. Some services need to stay stable under load: they may deliberately slow down their responses, or cap the rate of incoming requests when system resources are tight, in order to avoid going down entirely. These are the typical scenarios where rate limiting is applied.

Simple rate limiting implementations

There are two common ways to implement rate limiting:

  • Leaky bucket: each request corresponds to a drop of water. The bucket starts full and leaks one drop at a fixed interval; a request can only be processed once it obtains a drop, otherwise it waits or is rejected.
  • Token bucket: tokens are added to the bucket at a constant rate, and a request must take a token from the bucket before it is served. The number of tokens the bucket can hold is configurable; if a token is available the request is processed, otherwise it waits or is rejected (see the sketch after this list).
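As a rough illustration of the token bucket idea, here is a minimal Python sketch; the class name TokenBucket and its methods are made up for illustration and are not part of any library discussed below:

import time


class TokenBucket:
    """Minimal token bucket: refilled at `rate` tokens per second, holding at most `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens added per second
        self.capacity = float(capacity)  # maximum number of tokens the bucket can hold
        self.tokens = float(capacity)    # start with a full bucket
        self.last = time.monotonic()     # time of the last refill

    def allow(self, n=1):
        """Consume n tokens and return True if they were available, otherwise False."""
        now = time.monotonic()
        # Refill lazily based on the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False


bucket = TokenBucket(rate=5, capacity=5)   # roughly "5 requests per second"
print(bucket.allow())                      # True while tokens remain, False once the bucket is empty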

With these two models in mind, let's look at how a couple of open-source projects implement rate limiting.

Throttling in django-rest-framework

According to the framework's throttling documentation, throttling is designed much like permissions: it is used to control the rate at which clients may call an API, or to limit the use of a particular resource.

First, a quick overview of how it is used. The rates to enforce are configured in Django's settings.py:

REST_FRAMEWORK = {
    ...
    'DEFAULT_THROTTLE_RATES': {
        'anon': '5/m',   # limit for unauthenticated requests: 5 per minute
        'user': '5/m'    # limit per authenticated user: 5 per minute
    }
}

The anon and user keys are used because they are the scopes of the two built-in throttle classes, AnonRateThrottle and UserRateThrottle. Next, attach the throttle classes to the API views:

from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.throttling import AnonRateThrottle, UserRateThrottle


class UserTest(APIView):
    throttle_classes = (UserRateThrottle, )

    def get(self, request, *args, **kwargs):
        return Response({"detail": "user test"})


class AnonTest(APIView):
    throttle_classes = (AnonRateThrottle, )

    def get(self, request, *args, **kwargs):
        return Response({"detail": "anon test"})

During request dispatch, APIView calls its initial method, which runs the configured throttle_classes and checks whether the request should be throttled; if a throttle rejects the request, self.throttled raises an exception that is rendered as an HTTP 429 response carrying the suggested wait time (for the overall APIView execution flow in rest-framework, see my earlier post). The relevant code in APIView is:

    def check_throttles(self, request):
        """
        Check if request should be throttled.
        Raises an appropriate exception if the request is throttled.
        """
        for throttle in self.get_throttles():               # instantiate the configured throttle classes
            if not throttle.allow_request(request, self):   # ask each throttle whether the request may pass
                self.throttled(request, throttle.wait())    # if not, raise with the suggested wait time
        ...

    def initial(self, request, *args, **kwargs):
        """
        Runs anything that needs to occur prior to calling the method handler.
        """
        ...
        self.check_permissions(request)  # check permissions
        self.check_throttles(request)    # check whether the request should be throttled

Now let's look at the BaseThrottle class:

class BaseThrottle(object):
    """
    Rate throttling of requests.
    """

    def allow_request(self, request, view):   # subclasses must override this method
        """
        Return `True` if the request should be allowed, `False` otherwise.
        """
        raise NotImplementedError('.allow_request() must be overridden')

    def get_ident(self, request):
        """
        Identify the machine making the request by parsing HTTP_X_FORWARDED_FOR
        if present and number of proxies is > 0. If not use all of
        HTTP_X_FORWARDED_FOR if it is available, if not use REMOTE_ADDR.
        """
        xff = request.META.get('HTTP_X_FORWARDED_FOR')
        remote_addr = request.META.get('REMOTE_ADDR')
        num_proxies = api_settings.NUM_PROXIES    # number of application proxies configured via the NUM_PROXIES setting

        if num_proxies is not None:
            if num_proxies == 0 or xff is None:
                return remote_addr
            addrs = xff.split(',')
            client_addr = addrs[-min(num_proxies, len(addrs))]
            return client_addr.strip()

        return ''.join(xff.split()) if xff else remote_addr

    def wait(self):
        """
        Optionally, return a recommended number of seconds to wait before
        the next request.
        """
        return None   # optionally return the recommended number of seconds to wait

Both AnonRateThrottle and UserRateThrottle inherit from SimpleRateThrottle:

class SimpleRateThrottle(BaseThrottle):
    """
    A simple cache implementation, that only requires `.get_cache_key()`
    to be overridden.

    The rate (requests / seconds) is set by a `rate` attribute on the View
    class.  The attribute is a string of the form 'number_of_requests/period'.

    Period should be one of: ('s', 'sec', 'm', 'min', 'h', 'hour', 'd', 'day')

    Previous request information used for throttling is stored in the cache.
    """
    cache = default_cache                                   # backing cache (Django's default cache)
    timer = time.time                                       # clock used for request timestamps
    cache_format = 'throttle_%(scope)s_%(ident)s'           # format of the cache key
    scope = None
    THROTTLE_RATES = api_settings.DEFAULT_THROTTLE_RATES    # rates configured in DEFAULT_THROTTLE_RATES

    def __init__(self):
        if not getattr(self, 'rate', None):
            self.rate = self.get_rate()
        self.num_requests, self.duration = self.parse_rate(self.rate)  # parse the rate string into (count, window seconds)

    def get_cache_key(self, request, view):
        """
        Should return a unique cache-key which can be used for throttling.
        Must be overridden.

        May return `None` if the request should not be throttled.
        """
        raise NotImplementedError('.get_cache_key() must be overridden')

    def get_rate(self):
        """
        Determine the string representation of the allowed request rate.
        """
        if not getattr(self, 'scope', None):
            msg = ("You must set either `.scope` or `.rate` for '%s' throttle" %
                   self.__class__.__name__)
            raise ImproperlyConfigured(msg)

        try:
            return self.THROTTLE_RATES[self.scope]    # look up the rate configured for this scope
        except KeyError:
            msg = "No default throttle rate set for '%s' scope" % self.scope
            raise ImproperlyConfigured(msg)

    def parse_rate(self, rate):
        """
        Given the request rate string, return a two tuple of:
        <allowed number of requests>, <period of time in seconds>
        """
        if rate is None:
            return (None, None)
        num, period = rate.split('/')
        num_requests = int(num)
        duration = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}[period[0]]  # only seconds, minutes, hours and days are supported
        return (num_requests, duration)

    def allow_request(self, request, view):
        """
        Implement the check to see if the request should be throttled.

        On success calls `throttle_success`.
        On failure calls `throttle_failure`.
        """
        if self.rate is None:
            return True

        self.key = self.get_cache_key(request, view)    # compute the cache key
        if self.key is None:
            return True

        self.history = self.cache.get(self.key, [])     # fetch the request history from the cache
        self.now = self.timer()                          # current timestamp

        # Drop any requests from the history which have now passed the
        # throttle duration
        while self.history and self.history[-1] <= self.now - self.duration:   # drop history entries that fall outside the throttle window
            self.history.pop()
        if len(self.history) >= self.num_requests:   # throttle once the window already holds the allowed number of requests
            return self.throttle_failure()
        return self.throttle_success()               # otherwise record this request and allow it

    def throttle_success(self):
        """
        Inserts the current request's timestamp along with the key
        into the cache.
        """
        self.history.insert(0, self.now)                          # prepend the current timestamp to the history
        self.cache.set(self.key, self.history, self.duration)    # write the history back with the window length as the TTL
        return True

    def throttle_failure(self):
        """
        Called when a request to the API has failed due to throttling.
        """
        return False

    def wait(self):
        """
        Returns the recommended next request time in seconds.
        """
        if self.history:             # estimate how long until the next request would be allowed
            remaining_duration = self.duration - (self.now - self.history[-1])
        else:
            remaining_duration = self.duration

        available_requests = self.num_requests - len(self.history) + 1
        if available_requests <= 0:
            return None

        return remaining_duration / float(available_requests)

From this logic we can see that SimpleRateThrottle implements the basic rate-limit accounting: looking back from the current moment over the configured window, it drops any timestamps older than the window, treats the entries that remain as the requests already made, and compares that count against the allowed number to decide whether to throttle. Strictly speaking this is a sliding-window log of request timestamps kept in the cache rather than a classic leaky bucket, although the effect on the caller is much the same.
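Stripped of the Django cache machinery, the counting logic boils down to something like the following sketch; the function name window_allow and the module-level _history dict are purely illustrative and are not part of rest_framework:

import time

# in-memory stand-in for the Django cache: key -> list of request timestamps (newest first)
_history = {}


def window_allow(key, num_requests, duration):
    """Allow the request if fewer than num_requests were seen in the last `duration` seconds."""
    now = time.time()
    history = _history.get(key, [])
    # Drop timestamps that have fallen out of the window; the oldest entries sit at the end.
    while history and history[-1] <= now - duration:
        history.pop()
    if len(history) >= num_requests:
        _history[key] = history
        return False                # throttle
    history.insert(0, now)          # record this request at the head of the list
    _history[key] = history
    return True

The two built-in subclasses below differ only in how they build the cache key: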

class AnonRateThrottle(SimpleRateThrottle):
    """
    Limits the rate of API calls that may be made by a anonymous users.

    The IP address of the request will be used as the unique cache key.
    """
    scope = 'anon'

    def get_cache_key(self, request, view):
        if request.user.is_authenticated:                 # authenticated users are not throttled by this class
            return None  # Only throttle unauthenticated requests.

        return self.cache_format % {
            'scope': self.scope,
            'ident': self.get_ident(request)
        }                                    # use the client IP (or X-Forwarded-For) as the identity in the key


class UserRateThrottle(SimpleRateThrottle):
    """
    Limits the rate of API calls that may be made by a given user.

    The user id will be used as a unique cache key if the user is
    authenticated.  For anonymous requests, the IP address of the request will
    be used.
    """
    scope = 'user'

    def get_cache_key(self, request, view):
        if request.user.is_authenticated:
            ident = request.user.pk              # use the user's primary key as the identity
        else:
            ident = self.get_ident(request)      # fall back to the IP-based identity for anonymous requests

        return self.cache_format % {
            'scope': self.scope,
            'ident': ident
        }                                             # build the cache key

That covers the full implementation of django-rest-framework's throttling. The official documentation also explains how to define custom throttle classes and apply several of them at once, but the underlying idea is exactly the one walked through above.
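As a rough idea of what such a custom throttle might look like, here is a sketch built on SimpleRateThrottle; the scope name 'burst' and the ReportView class are invented for illustration, and 'burst' would need a matching entry in DEFAULT_THROTTLE_RATES (for example '10/m'):

from rest_framework.throttling import SimpleRateThrottle
from rest_framework.views import APIView
from rest_framework.response import Response


class BurstRateThrottle(SimpleRateThrottle):
    scope = 'burst'   # assumes DEFAULT_THROTTLE_RATES contains a 'burst' entry

    def get_cache_key(self, request, view):
        # Key on the user id when authenticated, otherwise on the client IP.
        if request.user.is_authenticated:
            ident = request.user.pk
        else:
            ident = self.get_ident(request)
        return self.cache_format % {'scope': self.scope, 'ident': ident}


class ReportView(APIView):
    throttle_classes = (BurstRateThrottle, )

    def get(self, request, *args, **kwargs):
        return Response({"detail": "report"})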

The rate limiter in Go (golang.org/x/time/rate)

Example code

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limit := rate.Every(1 * time.Second) // one token generated per second
	l := rate.NewLimiter(limit, 2)       // the bucket holds at most 2 tokens

	for i := 0; i < 10; i++ {
		err := l.Wait(context.Background()) // block until a token is available
		if err != nil {
			fmt.Println("error  ", err)
			return
		}
		fmt.Println("get  ", time.Now(), err)
		if i == 5 {
			time.Sleep(10 * time.Second) // pause so the bucket can refill to its burst size
		}
	}
}

Running the example, the limiter produces one token per second with a burst capacity of two, which the output confirms:

get   2020-10-23 17:49:34.894311 +0800 CST m=+0.000264239 <nil>
get   2020-10-23 17:49:34.894551 +0800 CST m=+0.000504642 <nil>
get   2020-10-23 17:49:35.898776 +0800 CST m=+1.004723867 <nil>
get   2020-10-23 17:49:36.899156 +0800 CST m=+2.005098325 <nil>
get   2020-10-23 17:49:37.898214 +0800 CST m=+3.004150712 <nil>
get   2020-10-23 17:49:38.896705 +0800 CST m=+4.002635543 <nil>
get   2020-10-23 17:49:48.898746 +0800 CST m=+14.004620270 <nil>
get   2020-10-23 17:49:48.898877 +0800 CST m=+14.004751431 <nil>
get   2020-10-23 17:49:49.903021 +0800 CST m=+15.008889285 <nil>
get   2020-10-23 17:49:50.898853 +0800 CST m=+16.004715408 <nil>

At startup (17:49:34) and again at 17:49:48, two tokens are obtained almost instantly because the bucket has refilled to its burst size of two; everywhere else a single token is handed out once per second. This is exactly the token-bucket behaviour the package documents. Let's walk through the source:

// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package rate provides a rate limiter.
package rate

import (
	"context"
	"fmt"
	"math"
	"sync"
	"time"
)

// Limit defines the maximum frequency of some events.
// Limit is represented as number of events per second.
// A zero Limit allows no events.
type Limit float64

// Inf is the infinite rate limit; it allows all events (even if burst is zero).
const Inf = Limit(math.MaxFloat64)

// Every converts a minimum time interval between events to a Limit.
func Every(interval time.Duration) Limit {   // convert a per-event interval into tokens generated per second
	if interval <= 0 {
		return Inf
	}
	return 1 / Limit(interval.Seconds())
}

// A Limiter controls how frequently events are allowed to happen.
// It implements a "token bucket" of size b, initially full and refilled
// at rate r tokens per second.
// Informally, in any large enough time interval, the Limiter limits the
// rate to r tokens per second, with a maximum burst size of b events.
// As a special case, if r == Inf (the infinite rate), b is ignored.
// See https://en.wikipedia.org/wiki/Token_bucket for more about token buckets.
//
// The zero value is a valid Limiter, but it will reject all events.
// Use NewLimiter to create non-zero Limiters.
//
// Limiter has three main methods, Allow, Reserve, and Wait.
// Most callers should use Wait.
//
// Each of the three methods consumes a single token.
// They differ in their behavior when no token is available.
// If no token is available, Allow returns false.
// If no token is available, Reserve returns a reservation for a future token
// and the amount of time the caller must wait before using it.
// If no token is available, Wait blocks until one can be obtained
// or its associated context.Context is canceled.
//
// The methods AllowN, ReserveN, and WaitN consume n tokens.
type Limiter struct {
	limit Limit    // the configured rate, in tokens per second
	burst int      // maximum number of tokens the bucket can hold (burst size)

	mu     sync.Mutex
	tokens float64 // tokens currently in the bucket
	// last is the last time the limiter's tokens field was updated
	last time.Time
	// lastEvent is the latest time of a rate-limited event (past or future)
	lastEvent time.Time
}

// Limit returns the maximum overall event rate.
func (lim *Limiter) Limit() Limit {
	lim.mu.Lock()
	defer lim.mu.Unlock()
	return lim.limit
}

// Burst returns the maximum burst size. Burst is the maximum number of tokens
// that can be consumed in a single call to Allow, Reserve, or Wait, so higher
// Burst values allow more events to happen at once.
// A zero Burst allows no events, unless limit == Inf.
func (lim *Limiter) Burst() int {
	return lim.burst
}

// NewLimiter returns a new Limiter that allows events up to rate r and permits
// bursts of at most b tokens.
func NewLimiter(r Limit, b int) *Limiter {   // construct a Limiter with rate r and burst b
	return &Limiter{
		limit: r,
		burst: b,
	}
}

// Allow is shorthand for AllowN(time.Now(), 1).
func (lim *Limiter) Allow() bool {
	return lim.AllowN(time.Now(), 1)     // report whether a single event may happen right now
}

// AllowN reports whether n events may happen at time now.
// Use this method if you intend to drop / skip events that exceed the rate limit.
// Otherwise use Reserve or Wait.
func (lim *Limiter) AllowN(now time.Time, n int) bool {   // report whether n events may happen right now
	return lim.reserveN(now, n, 0).ok
}

// A Reservation holds information about events that are permitted by a Limiter to happen after a delay.
// A Reservation may be canceled, which may enable the Limiter to permit additional events.
type Reservation struct {    // the result handed back by the reserve/allow/wait family
	ok        bool
	lim       *Limiter
	tokens    int
	timeToAct time.Time
	// This is the Limit at reservation time, it can change later.
	limit Limit
}

// OK returns whether the limiter can provide the requested number of tokens
// within the maximum wait time.  If OK is false, Delay returns InfDuration, and
// Cancel does nothing.
func (r *Reservation) OK() bool {  // whether the requested tokens could be granted in time
	return r.ok
}

// Delay is shorthand for DelayFrom(time.Now()).
func (r *Reservation) Delay() time.Duration {
	return r.DelayFrom(time.Now())
}

// InfDuration is the duration returned by Delay when a Reservation is not OK.
const InfDuration = time.Duration(1<<63 - 1)

// DelayFrom returns the duration for which the reservation holder must wait
// before taking the reserved action.  Zero duration means act immediately.
// InfDuration means the limiter cannot grant the tokens requested in this
// Reservation within the maximum wait time.
func (r *Reservation) DelayFrom(now time.Time) time.Duration {   // how long to wait before acting (InfDuration if not ok)
	if !r.ok {
		return InfDuration
	}
	delay := r.timeToAct.Sub(now)
	if delay < 0 {
		return 0
	}
	return delay
}

// Cancel is shorthand for CancelAt(time.Now()).
func (r *Reservation) Cancel() {   // give the reserved tokens back
	r.CancelAt(time.Now())
	return
}

// CancelAt indicates that the reservation holder will not perform the reserved action
// and reverses the effects of this Reservation on the rate limit as much as possible,
// considering that other reservations may have already been made.
func (r *Reservation) CancelAt(now time.Time) {  // return the reserved tokens to the bucket as far as possible
	if !r.ok {
		return
	}

	r.lim.mu.Lock()
	defer r.lim.mu.Unlock()

	if r.lim.limit == Inf || r.tokens == 0 || r.timeToAct.Before(now) {
		return
	}

	// calculate tokens to restore
	// The duration between lim.lastEvent and r.timeToAct tells us how many tokens were reserved
	// after r was obtained. These tokens should not be restored.
	restoreTokens := float64(r.tokens) - r.limit.tokensFromDuration(r.lim.lastEvent.Sub(r.timeToAct))
	if restoreTokens <= 0 {
		return
	}
	// advance time to now
	now, _, tokens := r.lim.advance(now)
	// calculate new number of tokens
	tokens += restoreTokens
	if burst := float64(r.lim.burst); tokens > burst {
		tokens = burst
	}
	// update state
	r.lim.last = now
	r.lim.tokens = tokens
	if r.timeToAct == r.lim.lastEvent {
		prevEvent := r.timeToAct.Add(r.limit.durationFromTokens(float64(-r.tokens)))
		if !prevEvent.Before(now) {
			r.lim.lastEvent = prevEvent
		}
	}

	return
}

// Reserve is shorthand for ReserveN(time.Now(), 1).
func (lim *Limiter) Reserve() *Reservation {
	return lim.ReserveN(time.Now(), 1)
}

// ReserveN returns a Reservation that indicates how long the caller must wait before n events happen.
// The Limiter takes this Reservation into account when allowing future events.
// ReserveN returns false if n exceeds the Limiter's burst size.
// Usage example:
//   r := lim.ReserveN(time.Now(), 1)
//   if !r.OK() {
//     // Not allowed to act! Did you remember to set lim.burst to be > 0 ?
//     return
//   }
//   time.Sleep(r.Delay())
//   Act()
// Use this method if you wish to wait and slow down in accordance with the rate limit without dropping events.
// If you need to respect a deadline or cancel the delay, use Wait instead.
// To drop or skip events exceeding rate limit, use Allow instead.
func (lim *Limiter) ReserveN(now time.Time, n int) *Reservation {   // reserve n tokens without blocking; the caller decides how to wait
	r := lim.reserveN(now, n, InfDuration)
	return &r
}

// Wait is shorthand for WaitN(ctx, 1).
func (lim *Limiter) Wait(ctx context.Context) (err error) {   // block until a single token is available
	return lim.WaitN(ctx, 1)
}

// WaitN blocks until lim permits n events to happen.
// It returns an error if n exceeds the Limiter's burst size, the Context is
// canceled, or the expected wait time exceeds the Context's Deadline.
// The burst limit is ignored if the rate limit is Inf.
func (lim *Limiter) WaitN(ctx context.Context, n int) (err error) {  // block until n tokens are available
	lim.mu.Lock()                       // snapshot the limiter's configuration under the lock
	burst := lim.burst
	limit := lim.limit
	lim.mu.Unlock()

	if n > burst && limit != Inf {
		return fmt.Errorf("rate: Wait(n=%d) exceeds limiter's burst %d", n, lim.burst)
	}
	// Check if ctx is already cancelled
	select {
	case <-ctx.Done():          // bail out early if the caller's context is already cancelled
		return ctx.Err()
	default:
	}
	// Determine wait limit
	now := time.Now()           // current time
	waitLimit := InfDuration    // maximum time we are willing to wait
	if deadline, ok := ctx.Deadline(); ok {
		waitLimit = deadline.Sub(now)   // cap the wait by the context deadline
	}
	}
	// Reserve
	r := lim.reserveN(now, n, waitLimit)    // reserve n tokens, waiting at most waitLimit
	if !r.ok {
		return fmt.Errorf("rate: Wait(n=%d) would exceed context deadline", n)
	}
	// Wait if necessary
	delay := r.DelayFrom(now)               // how long we must wait before the tokens may be used
	if delay == 0 {
		return nil
	}
	t := time.NewTimer(delay)               // sleep for the delay using a timer
	defer t.Stop()
	select {
	case <-t.C:
		// We can proceed.
		return nil
	case <-ctx.Done():
		// Context was canceled before we could proceed.  Cancel the
		// reservation, which may permit other events to proceed sooner.
		r.Cancel()
		return ctx.Err()
	}
}

// SetLimit is shorthand for SetLimitAt(time.Now(), newLimit).
func (lim *Limiter) SetLimit(newLimit Limit) {     // install a new rate
	lim.SetLimitAt(time.Now(), newLimit)
}

// SetLimitAt sets a new Limit for the limiter. The new Limit, and Burst, may be violated
// or underutilized by those which reserved (using Reserve or Wait) but did not yet act
// before SetLimitAt was called.
func (lim *Limiter) SetLimitAt(now time.Time, newLimit Limit) {
	lim.mu.Lock()
	defer lim.mu.Unlock()

	now, _, tokens := lim.advance(now)

	lim.last = now
	lim.tokens = tokens
	lim.limit = newLimit
}

// SetBurst is shorthand for SetBurstAt(time.Now(), newBurst).
func (lim *Limiter) SetBurst(newBurst int) {   // update the bucket's capacity
	lim.SetBurstAt(time.Now(), newBurst)
}

// SetBurstAt sets a new burst size for the limiter.
func (lim *Limiter) SetBurstAt(now time.Time, newBurst int) {   // set a new burst size as of the given time
	lim.mu.Lock()
	defer lim.mu.Unlock()

	now, _, tokens := lim.advance(now)

	lim.last = now
	lim.tokens = tokens
	lim.burst = newBurst
}

// reserveN is a helper method for AllowN, ReserveN, and WaitN.
// maxFutureReserve specifies the maximum reservation wait duration allowed.
// reserveN returns Reservation, not *Reservation, to avoid allocation in AllowN and WaitN.
func (lim *Limiter) reserveN(now time.Time, n int, maxFutureReserve time.Duration) Reservation { // the core bookkeeping shared by AllowN, ReserveN and WaitN
	lim.mu.Lock()

	if lim.limit == Inf {           // an infinite rate grants the request immediately
		lim.mu.Unlock()
		return Reservation{
			ok:        true,
			lim:       lim,
			tokens:    n,
			timeToAct: now,
		}
	}

	now, last, tokens := lim.advance(now)    // advance the token count up to now

	// Calculate the remaining number of tokens resulting from the request.
	tokens -= float64(n)                     // tokens left after taking n

	// Calculate the wait duration
	var waitDuration time.Duration
	if tokens < 0 {                          // if we went negative, compute how long until enough tokens accumulate
		waitDuration = lim.limit.durationFromTokens(-tokens)
	}

	// Decide result
	ok := n <= lim.burst && waitDuration <= maxFutureReserve  // grant only if n fits in the burst and the wait fits within maxFutureReserve

	// Prepare reservation
	r := Reservation{      // build the reservation to return
		ok:    ok,
		lim:   lim,
		limit: lim.limit,
	}
	if ok {
		r.tokens = n
		r.timeToAct = now.Add(waitDuration)   // record when the granted tokens may be used
	}

	// Update state
	if ok {                                  // on success, commit the new state
		lim.last = now
		lim.tokens = tokens
		lim.lastEvent = r.timeToAct
	} else {
		lim.last = last
	}

	lim.mu.Unlock()
	return r
}

// advance calculates and returns an updated state for lim resulting from the passage of time.
// lim is not changed.
func (lim *Limiter) advance(now time.Time) (newNow time.Time, newLast time.Time, newTokens float64) {
	last := lim.last
	if now.Before(last) { // guard against last being in the future (clock appearing to go backwards)
		last = now
	}

	// Avoid making delta overflow below when last is very old.
	maxElapsed := lim.limit.durationFromTokens(float64(lim.burst) - lim.tokens) // longest elapsed time worth accounting for: the time to refill the bucket completely
	elapsed := now.Sub(last) // time elapsed since the last token update
	if elapsed > maxElapsed {
		elapsed = maxElapsed // cap it to avoid overflow when last is very old
	}

	// Calculate the new number of tokens, due to time that passed.
	delta := lim.limit.tokensFromDuration(elapsed)    // tokens generated during the elapsed time
	tokens := lim.tokens + delta                      // new token count
	if burst := float64(lim.burst); tokens > burst {  // never exceed the bucket's capacity
		tokens = burst
	}

	return now, last, tokens
}

// durationFromTokens is a unit conversion function from the number of tokens to the duration
// of time it takes to accumulate them at a rate of limit tokens per second.
func (limit Limit) durationFromTokens(tokens float64) time.Duration {
	seconds := tokens / float64(limit)      // time needed to accumulate this many tokens at the given rate
	return time.Nanosecond * time.Duration(1e9*seconds)
}

// tokensFromDuration is a unit conversion function from a time duration to the number of tokens
// which could be accumulated during that duration at a rate of limit tokens per second.
func (limit Limit) tokensFromDuration(d time.Duration) float64 {
	// Split the integer and fractional parts ourself to minimize rounding errors.
	// See golang.org/issues/34861.
	sec := float64(d/time.Second) * float64(limit)   // tokens generated during duration d at this rate
	nsec := float64(d%time.Second) * float64(limit)
	return sec + nsec/1e9
}

Looking at Go's time/rate package, the limiter is a token bucket with no background refill: it records the time of the last update and, on every call, works out how many tokens have accumulated since then, then checks whether enough tokens are available for the current request and, if not, how long the caller would have to wait.
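The essence of that lazy accounting can be sketched outside Go as well. The following Python fragment loosely mirrors advance and reserveN under simplifying assumptions (no locking, no burst check, no cancellation); the class name LazyTokenBucket is invented and this is not the x/time/rate API:

import time


class LazyTokenBucket:
    """Tokens are never refilled in the background; they are recomputed on demand."""

    def __init__(self, rate, burst):
        self.rate = float(rate)    # tokens generated per second
        self.burst = float(burst)  # bucket capacity
        self.tokens = float(burst)
        self.last = time.monotonic()

    def _advance(self, now):
        # Rough equivalent of Limiter.advance: tokens accumulated since `last`, capped at burst.
        elapsed = max(0.0, now - self.last)
        return min(self.burst, self.tokens + elapsed * self.rate)

    def reserve(self, n=1):
        """Consume n tokens and return the number of seconds to wait before using them."""
        now = time.monotonic()
        tokens = self._advance(now) - n   # like reserveN, the count may go negative
        self.last = now
        self.tokens = tokens
        if tokens >= 0:
            return 0.0
        return -tokens / self.rate        # time needed to generate the missing tokens


bucket = LazyTokenBucket(rate=1, burst=2)   # roughly rate.NewLimiter(rate.Every(time.Second), 2)
for _ in range(4):
    time.sleep(bucket.reserve())            # behaves like Limiter.Wait: burst of 2, then one per second
    print("got a token at", time.monotonic())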

Summary

This post compared two open-source rate-limiting implementations, the sliding-window counter in django-rest-framework and the token bucket in Go's time/rate package, to get a feel for the mainstream approaches. Both work the same way at heart: instead of any background bookkeeping, they compare the current moment with previously recorded timestamps to decide whether a request may pass. If you ever need to roll your own limiter, they make good references, since the implementation ideas are quite consistent. My knowledge is limited, so corrections are welcome.
