A Quick Tour of ZooKeeper and a ZooKeeper-Based Distributed Lock

I. Overview

1. Typical uses: distributed locks, unified naming services, configuration management, load balancing, and similar scenarios;
2. A ZooKeeper service usually runs on multiple server nodes; as long as more than half of them are alive, ZooKeeper can keep serving clients normally;
3. Znodes live in memory and have both file and directory semantics: a znode can store data like a file, and other znodes can be mounted under it like a directory;
4. ZooKeeper's leader election mechanism: TBD
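The majority rule in point 2 is simple arithmetic: an ensemble of N servers stays available only while a strict majority is alive. A minimal sketch of that math (plain Java, not ZooKeeper API code; the class name is hypothetical):

```java
// Quorum math behind ZooKeeper availability: an ensemble of `total`
// servers can keep serving only while a strict majority is alive.
public class QuorumCheck {
    public static boolean canServe(int total, int alive) {
        return alive > total / 2;   // strict majority
    }

    // Largest number of failures an ensemble of `total` servers survives.
    public static int maxFailures(int total) {
        return (total - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println("5-node ensemble survives " + maxFailures(5) + " failures");
    }
}
```

Note that an even-sized ensemble buys nothing: 5 and 6 nodes both tolerate only 2 failures, which is why odd ensemble sizes are the usual recommendation.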

II. Key Features

1. File system (a hierarchical tree of znodes);
2. Notification mechanism: a client registers a watcher on a znode to monitor changes to its data and state; when the node changes, ZooKeeper promptly notifies the client so it can react (a publish-subscribe pattern);
3. Node types: persistent nodes, ephemeral nodes (deleted automatically when the client's connection to the service ends), and sequential nodes (numbered in creation order), which combine into persistent-sequential and ephemeral-sequential nodes.
The ZooKeeper-based distributed lock below relies on the notification mechanism and these node types.
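One detail worth remembering about the notification mechanism: a ZooKeeper watch is one-shot, so it fires at most once and must be re-registered to keep receiving events. A plain-Java simulation of that semantics (a sketch only, not the real ZooKeeper client API; the class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Simulates ZooKeeper's one-shot watch semantics: each registered watcher
// fires at most once per registration and must be re-registered to keep
// receiving events. This is a stdlib sketch, not the real client API.
public class OneShotWatchRegistry {
    private final Map<String, List<Consumer<String>>> watches = new HashMap<>();

    public void watch(String path, Consumer<String> watcher) {
        watches.computeIfAbsent(path, k -> new ArrayList<>()).add(watcher);
    }

    // A change on `path` delivers the event to current watchers, then clears
    // them; returns how many watchers fired.
    public int nodeChanged(String path) {
        List<Consumer<String>> fired = watches.remove(path);
        if (fired == null) return 0;
        for (Consumer<String> w : fired) w.accept(path);
        return fired.size();
    }
}
```

This is exactly why the lock clients in section III must re-check the child list after every notification rather than assuming the watch will keep firing.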

III. How the Distributed Lock Works

1. First create a persistent node parentLock in ZooKeeper;
2. To acquire the lock, client1 creates an ephemeral sequential node lock1 under parentLock, fetches and sorts all children of parentLock, and checks whether lock1 is the lowest-numbered node; if it is, client1 holds the lock;
3. If client2 now wants the lock, it likewise creates an ephemeral sequential node lock2 under parentLock, fetches and sorts the children, and finds lock2 is not the lowest; it therefore sets a watch on the preceding node lock1 and waits;
4. Any further clients repeat the same steps;
5. When client1 explicitly deletes lock1, or lock1 is removed because the client crashed and its connection to the service dropped, ZooKeeper pushes the state change to client2; client2 re-fetches all children of parentLock, checks whether it is now the lowest node, and if so acquires the lock.
Flow chart: TBD
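The steps above hinge on one decision each client makes after listing parentLock's children: am I the lowest sequence number, and if not, which node directly precedes me? That decision is a pure function of the child list, sketched below (the class name is hypothetical; node names follow ZooKeeper's sequential-znode convention of a 10-digit counter suffix):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Given the children of parentLock and our own node name, decide whether
// we hold the lock (null) or which predecessor node to watch. Sequential
// znodes end in a 10-digit counter, e.g. "lock-0000000003".
public class LockQueue {
    static long seq(String name) {
        return Long.parseLong(name.substring(name.length() - 10));
    }

    // Returns null if `self` is lowest (lock acquired), else the node to watch.
    public static String predecessorToWatch(List<String> children, String self) {
        List<String> sorted = new ArrayList<>(children);
        sorted.sort(Comparator.comparingLong(LockQueue::seq));
        int idx = sorted.indexOf(self);
        if (idx < 0) throw new IllegalArgumentException("self not among children: " + self);
        return idx == 0 ? null : sorted.get(idx - 1);
    }
}
```

Watching only the immediate predecessor, rather than the parent node, is what avoids the "herd effect": when a lock is released, exactly one waiting client is notified instead of all of them.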

IV. A Simple Debounce Built on the Distributed Lock (double-checked against Redis)

Background:

1. Users often trigger the same front-end action several times in an instant, hitting the back end with redundant requests; to prevent this, we combine ZooKeeper and Redis into a basic debounce;
2. The code is based on the Curator framework (a wrapper over the basic ZooKeeper operations) and is wired into the back end with an aspect and a custom annotation.

1. Custom annotation marking methods that reject repeated submissions
/**
 * @Description Marker annotation: repeated submissions not supported
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface NotSupportRepeatSubmit {
    boolean value() default true;
}
2. Aspect implementing the annotation's behavior
import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import org.springframework.web.servlet.DispatcherServlet;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.lang.reflect.Method;

/**
 * @Description Aspect that blocks repeated submissions
 */
@Slf4j
@Aspect
@Component
@Configuration
@ConditionalOnClass(DispatcherServlet.class)
public class RepeatSubmitAspect {

    @Autowired(required = false)
    private RedisService redisService;

    @Autowired(required = false)
    private ZKLock zkLock;

    @Pointcut("@annotation(com.***.core.annotation.NotSupportRepeatSubmit)")
    public void annotationPointcut() {
        // pointcut marker only; body intentionally empty
    }

    /**
     * Blocks repeated submissions for methods annotated with NotSupportRepeatSubmit.
     *
     * @param pjp the intercepted join point
     * @return the target method's result, or TOO_MANY_REQUESTS on a duplicate
     * @throws Throwable
     */
    @Around("annotationPointcut()")
    public Object controllerRepeatSubmit(ProceedingJoinPoint pjp) throws Throwable {
        ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        HttpServletRequest request = null == attributes ? null : attributes.getRequest();

        // Distributed repeat-submission control needs both Redis and ZooKeeper available,
        // and only applies to state-changing requests (POST/PUT/DELETE)
        if (null != redisService && null != zkLock && null != request &&
                ("POST".equals(request.getMethod())
                        || "PUT".equals(request.getMethod())
                        || "DELETE".equals(request.getMethod()))) {
            MethodSignature methodSignature = (MethodSignature) pjp.getSignature();
            Method method = methodSignature.getMethod();
            NotSupportRepeatSubmit annotation = method.getAnnotation(NotSupportRepeatSubmit.class);
            if (annotation.value()) {
                StringBuilder stringBuilder = new StringBuilder()
                        .append(request.getRequestURI())
                        .append("-")
                        .append(pjp.getSignature().getDeclaringTypeName())
                        .append(".")
                        .append(pjp.getSignature().getName());

                // Collect only the business arguments; skip the built-in
                // HttpServletRequest / HttpServletResponse parameters
                if (pjp.getArgs().length > 0) {
                    for (Object o : pjp.getArgs()) {
                        if (o instanceof HttpServletRequest || o instanceof HttpServletResponse) {
                            continue;
                        }
                        stringBuilder.append(JSON.toJSONString(o))
                                .append("-");
                    }
                }
                String requestHashCode = "RepeatSubmitRequest:" + stringBuilder.toString().hashCode();

                if (null == redisService.getValue(requestHashCode)) {
                    InterProcessMutex lock = zkLock.lock(requestHashCode);
                    if (null != lock) {
                        try {
                            // double-check under the lock: another node may have won the race
                            if (null != redisService.getValue(requestHashCode)) {
                                log.info("duplicate request: {}", stringBuilder);
                                return HttpStatus.TOO_MANY_REQUESTS;
                            }
                            redisService.save(requestHashCode, "hashcode", 10 * 1000);
                        } catch (Exception ex) {
                            log.error("redis error in repeat-submission check", ex);
                        } finally {
                            zkLock.releaseLock(lock, requestHashCode);
                        }
                    }
                } else {
                    log.info("duplicate request: {}", stringBuilder);
                    return HttpStatus.TOO_MANY_REQUESTS;
                }
            }
        }

        return pjp.proceed();
    }
}

3. The ZooKeeper-based distributed lock
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.stereotype.Component;

import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

/**
 * Distributed lock based on ZooKeeper
 */
@Component
@ConditionalOnBean(ZookeeperConfiguration.class)
public class ZKLock {
    private static final Logger logger = LoggerFactory.getLogger(ZKLock.class);

    @Autowired
    private CuratorFramework client;

    public static final String RootLockPath = "/****-lock/";

    /**
     * Acquire the lock with the default timeout (note: 500 seconds).
     *
     * @param path lock path, appended to RootLockPath
     * @return the mutex if acquired, null on timeout
     */
    public InterProcessMutex lock(String path) {
        return lock(path, 500, TimeUnit.SECONDS);
    }

    /**
     * Acquire the lock, waiting at most the given timeout.
     *
     * @param path lock path, appended to RootLockPath
     * @param timeout maximum time to wait for the lock
     * @param timeUnit unit of the timeout
     * @return the mutex if acquired, null on timeout
     */
    public InterProcessMutex lock(String path, long timeout, TimeUnit timeUnit) {
        path = RootLockPath + path;
        InterProcessMutex lock = new InterProcessMutex(this.client, path);
        boolean success;
        try {
            success = lock.acquire(timeout, timeUnit);
        } catch (Exception e) {
            logger.error("obtain lock error, path {}", path, e);
            throw new BaseException("obtain lock error " + e.getMessage() + ", path " + path);
        }
        if (success) {
            return lock;
        } else {
            return null;
        }
    }

    /**
     * Acquire the lock, run the callback, and release automatically afterwards
     * (default 5-second timeout).
     *
     * @param path lock path, appended to RootLockPath
     * @param callback work to run while holding the lock
     * @param <T> the callback's result type
     * @return the callback's result, or null if the lock was not acquired
     */
    public <T> T lockWithCallback(String path, Supplier<T> callback) {
        return lockWithCallback(path, 5, TimeUnit.SECONDS, callback);
    }

    /**
     * Acquire the lock, run the callback, and release automatically afterwards.
     *
     * @param path lock path, appended to RootLockPath
     * @param timeout maximum time to wait for the lock
     * @param timeUnit unit of the timeout
     * @param callback work to run while holding the lock
     * @param <T> the callback's result type
     * @return the callback's result, or null if the lock was not acquired
     */
    public <T> T lockWithCallback(String path, long timeout, TimeUnit timeUnit, Supplier<T> callback) {
        path = RootLockPath + path;
        InterProcessMutex lock = new InterProcessMutex(this.client, path);
        boolean success = false;
        try {
            try {
                success = lock.acquire(timeout, timeUnit);
            } catch (Exception e) {
                logger.error("obtain lock error, path {}", path, e);
                throw new BaseException("obtain lock error " + e.getMessage() + ", path " + path);
            }
            if (success && null != callback) {
                return callback.get();
            } else {
                return null;
            }
        } finally {
            try {
                if (success) {
                    lock.release();
                }
            } catch (Exception e) {
                logger.error("release lock error, path {}", path, e);
            }
        }
    }

    /**
     * Start the ZooKeeper client.
     */
    public void init() {
        this.client.start();
    }

    /**
     * Release the lock.
     *
     * @param lock the mutex to release; null is tolerated
     * @return true on success
     */
    public boolean releaseLock(InterProcessMutex lock) {
        try {
            if (null != lock) {
                lock.release();
            }
            return true;
        } catch (Exception e) {
            logger.error("release lock error", e);
            return false;
        }
    }

    /**
     * Release the lock and delete the lock node.
     *
     * @param lock the mutex to release; null is tolerated
     * @param path lock path, appended to RootLockPath
     * @return true on success
     */
    public boolean releaseLock(InterProcessMutex lock, String path) {
        try {
            if (null != lock) {
                lock.release();
            }
            path = RootLockPath + path;
            this.client.delete().inBackground().forPath(path);
            return true;
        } catch (Exception e) {
            logger.error("release lock error, path {}", path, e);
            return false;
        }
    }

    /**
     * Close the ZooKeeper client.
     */
    public void destroy() {
        try {
            if (null != this.client) {
                this.client.close();
            }
        } catch (Exception e) {
            logger.error("stop zookeeper client error", e);
        }
    }
}

V. Comparison with a Redis-Based Distributed Lock

A Redis distributed lock has three concerns: locking, unlocking, and lock expiry.
To lock, check whether the key exists; if it does, locking fails; if not, set it atomically with set(key, threadId, expire). The expiry guards against a thread that acquires the lock and then crashes without ever explicitly deleting it, and using a single atomic set() closes the window between setting the key and setting its expiry. If a lock holder's business logic runs longer than the expiry, the lock auto-releases and another thread acquires it; when thread A later finishes, it would delete B's lock by mistake, so the value stores a threadId and deletion first checks that the threadId matches.
To keep two threads from holding the lock at once in that scenario, thread A can instead start a daemon (watchdog) thread that renews A's expiry shortly before it lapses.
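The check-threadId-before-delete rule must itself be atomic, or the race simply moves into the unlock step (in real Redis this compare-and-delete is typically done with a Lua script via EVAL). A plain-Java simulation of the semantics, with a ConcurrentHashMap standing in for Redis (a sketch only; expiry is not modeled, and the class name is hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Simulates the Redis lock's safe-release rule: delete the key only if its
// value still equals our threadId, in a single atomic step. In real Redis
// the compare-and-delete would be a Lua script run via EVAL; expiry (PX)
// is not modeled here.
public class RedisLockSim {
    private final ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

    // Like SET key threadId NX: succeeds only if the key is absent.
    public boolean tryLock(String key, String threadId) {
        return store.putIfAbsent(key, threadId) == null;
    }

    // Atomic compare-and-delete: only the current owner may release.
    public boolean unlock(String key, String threadId) {
        return store.remove(key, threadId);
    }
}
```

The `ConcurrentMap.remove(key, value)` overload deletes the entry only when it is still mapped to the given value, which is exactly the owner check described above.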
