Self-Hosting a DHCP Service with Kea

Preparation

  • Official documentation: https://www.isc.org/kea/
  • Versions installed
    • kea (DHCP server): v1.9.10
    • stork (dashboard): v0.19.0
  • Installation environment: Debian 11 x64
  • Project repositories
  • Goals
    • 1. Use kea to provide DHCP service
    • 2. Use stork as a web dashboard (view-only; it cannot make changes)
  • 安装依赖

    Simple apt installs

    • gcc
    • g++
    • curl
    • make
    • autoconf
    • libtool
    • libmysql++-dev
    • libboost-system-dev

    Installs that need configuration

    • mysql/mariadb
    • PostgreSQL

    Built from source

    • log4cplus

Installing kea

Note: all operations in this guide are performed as the root user.

  • 1. Prepare the environment
    • Install the dependencies
      apt install gcc g++ curl make libtool autoconf libmysql++-dev libboost-system-dev
      
    • Install log4cplus
      The release tarball is missing ThreadPool.h, which must be downloaded manually:
      • Download link: GitHub
      • Attachment: end of this article
      mkdir ~/src/log4cplus
      cd ~/src/log4cplus
      wget https://github.com/log4cplus/log4cplus/archive/refs/tags/REL_2_0_7.tar.gz
      tar zxvf REL_2_0_7.tar.gz
      cd log4cplus-REL_2_0_7/
      # Place the downloaded ThreadPool.h header into the threadpool/ directory
      ./configure
      make
      make install
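      log4cplus installs under /usr/local/lib by default; refreshing the linker cache now avoids "cannot open shared object" errors when kea is built and run later (a precaution, not part of the original steps):
      ldconfig
      # optional sanity check: confirm the library is present
      ls /usr/local/lib | grep log4cplus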
      
  • 2. Install and configure the database (mariadb-server is used here)
    # Install via apt
    apt install mariadb-server
    
    # Initialize mariadb
    mysql_secure_installation
    # Enter the current root password; just press Enter if there is none
    # Enter current password for root (enter for none): 
    # Change the root password?
    # Remove anonymous users?
    # Disallow root login remotely?
    # Remove test database and access to it?
    # Reload privilege tables now?
    
    # To allow remote access, edit the database config
    # and set the bind address to 0.0.0.0
    vim /etc/mysql/mariadb.conf.d/50-server.cnf
    # Change the bind-address field
    
    # Create the database and user
    mysql -u root
    MariaDB [mysql]> use mysql;
    MariaDB [mysql]> select host,user from user;
    +-----------+-------------+
    | Host      | User        |
    +-----------+-------------+
    | localhost | mariadb.sys |
    | localhost | mysql       |
    | localhost | root        |
    +-----------+-------------+
    3 rows in set (0.002 sec)
    # Create a database: CREATE DATABASE <database>;
    MariaDB [mysql]> CREATE DATABASE w21DHCP;
    
    # Create a database user: CREATE USER '<user>'@'localhost' IDENTIFIED BY '<password>';
    MariaDB [mysql]> CREATE USER 'w21dhcp'@'localhost' IDENTIFIED BY 'w21@dhcp';
    MariaDB [mysql]> flush privileges;
    
    # Grant the user all privileges on the database: GRANT ALL ON <database>.* to '<user>'@'localhost' IDENTIFIED BY '<password>';
    MariaDB [mysql]> GRANT ALL ON w21DHCP.* to 'w21dhcp'@'localhost' IDENTIFIED BY 'w21@dhcp';
    MariaDB [mysql]> flush privileges;
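    To confirm the grant works, you can log in as the new user and run a trivial query; a quick sanity check using the credentials created above:
    mysql -u w21dhcp -p'w21@dhcp' w21DHCP -e 'SELECT 1;'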
    
  • 3. Build and install kea
    mkdir ~/src/kea
    cd ~/src/kea
    wget https://github.com/isc-projects/kea/archive/refs/tags/Kea-1.9.10.tar.gz
    tar zxvf Kea-1.9.10.tar.gz
    cd kea-Kea-1.9.10/
    autoreconf --install
    ./configure --with-mysql
    make
    make install
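    Since kea also installs into /usr/local by default, refresh the linker cache and confirm the binary runs before moving on (a post-install check, not in the original steps):
    ldconfig
    # print extended version information to verify the build
    kea-dhcp4 -V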
    
  • 4. Initialize the kea database
    Two initialization methods are provided here
    • Method 1: automatic initialization via kea-admin
    # kea-admin db-init mysql -u <user> -p <password> -n <database>
    kea-admin db-init mysql -u w21dhcp -p w21@dhcp -n w21DHCP
    
    • Method 2: import the MySQL tables manually
    # CONNECT <database>;
    # SOURCE path-to-kea/share/kea/scripts/mysql/dhcpdb_create.mysql
    mysql> CONNECT w21DHCP;
    mysql> SOURCE path-to-kea/share/kea/scripts/mysql/dhcpdb_create.mysql
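    Whichever method was used, the resulting schema can be verified with kea-admin's db-version command (assuming the same credentials as above):
    kea-admin db-version mysql -u w21dhcp -p w21@dhcp -n w21DHCP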
    

Installing the stork dashboard

Reference: Installing the Stork Server

  • 1. Install stork

    Ubuntu/Debian

    curl -1sLf 'https://dl.cloudsmith.io/public/isc/stork/cfg/setup/bash.deb.sh' | sudo bash
    sudo apt install isc-stork-server
    

    CentOS/RHEL/Fedora

    curl -1sLf 'https://dl.cloudsmith.io/public/isc/stork/cfg/setup/bash.rpm.sh' | sudo bash
    sudo dnf install isc-stork-server
    
  • 2. Install PostgreSQL and create the database

    apt-get install postgresql
    su - postgres
    createdb storkdb
    psql
    postgres=# CREATE USER stork WITH PASSWORD 'w21@stork';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE storkdb TO stork;
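    The Stork documentation also calls for the pgcrypto extension; enabling it and checking connectivity might look like this (a sketch, assuming the database and user created above):
    su - postgres -c "psql -d storkdb -c 'CREATE EXTENSION IF NOT EXISTS pgcrypto;'"
    psql -h localhost -U stork -d storkdb -c 'SELECT 1;'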
    
  • 3. Configure the database connection
    Edit the configuration file: /etc/stork/server.env

    STORK_DATABASE_HOST - address of the PostgreSQL database; defaults to localhost
    STORK_DATABASE_PORT - port of the PostgreSQL database; defaults to 5432
    STORK_DATABASE_NAME - name of the database; defaults to stork
    STORK_DATABASE_USER_NAME - user name for connecting to the database; defaults to stork
    STORK_DATABASE_PASSWORD - password of the user connecting to the database
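
    As an illustration, values matching the database created in the previous step (rather than the defaults) would be:
    STORK_DATABASE_HOST=localhost
    STORK_DATABASE_PORT=5432
    STORK_DATABASE_NAME=storkdb
    STORK_DATABASE_USER_NAME=stork
    STORK_DATABASE_PASSWORD=w21@stork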
    
  • 4. Start stork

    # Start the service
    systemctl start isc-stork-server
    
    # Enable start at boot
    systemctl enable isc-stork-server
    

    Open in a browser: http://<IP>:8080
    Username: admin
    Password: admin
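
    If the page does not load, the service status and recent logs are the first things to check (a generic troubleshooting step, not from the original guide):
    systemctl status isc-stork-server
    journalctl -u isc-stork-server -e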

Attachments

ThreadPool.h

// -*- C++ -*-
// Copyright (c) 2012-2015 Jakob Progsch
//
// This software is provided 'as-is', without any express or implied
// warranty. In no event will the authors be held liable for any damages
// arising from the use of this software.
//
// Permission is granted to anyone to use this software for any purpose,
// including commercial applications, and to alter it and redistribute it
// freely, subject to the following restrictions:
//
//    1. The origin of this software must not be misrepresented; you must not
//    claim that you wrote the original software. If you use this software
//    in a product, an acknowledgment in the product documentation would be
//    appreciated but is not required.
//
//    2. Altered source versions must be plainly marked as such, and must not be
//    misrepresented as being the original software.
//
//    3. This notice may not be removed or altered from any source
//    distribution.
//
// Modified for log4cplus, copyright (c) 2014-2015 Václav Zeman.

#ifndef THREAD_POOL_H_7ea1ee6b_4f17_4c09_b76b_3d44e102400c
#define THREAD_POOL_H_7ea1ee6b_4f17_4c09_b76b_3d44e102400c

#include <vector>
#include <queue>
#include <memory>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <future>
#include <atomic>
#include <functional>
#include <stdexcept>
#include <algorithm>
#include <cassert>


namespace progschj {

class ThreadPool {
public:
    explicit ThreadPool(std::size_t threads
        = (std::max)(2u, std::thread::hardware_concurrency()));
    template<class F, class... Args>
    auto enqueue(F&& f, Args&&... args)
        -> std::future<
#if defined(__cpp_lib_is_invocable) && __cpp_lib_is_invocable >= 201703
            typename std::invoke_result<F&&, Args&&...>::type
#else
            typename std::result_of<F&& (Args&&...)>::type
#endif
    >;
    void wait_until_empty();
    void wait_until_nothing_in_flight();
    void set_queue_size_limit(std::size_t limit);
    void set_pool_size(std::size_t limit);
    ~ThreadPool();

private:
    void start_worker(std::size_t worker_number,
        std::unique_lock<std::mutex> const &lock);

    // need to keep track of threads so we can join them
    std::vector< std::thread > workers;
    // target pool size
    std::size_t pool_size;
    // the task queue
    std::queue< std::function<void()> > tasks;
    // queue length limit
    std::size_t max_queue_size = 100000;
    // stop signal
    bool stop = false;

    // synchronization
    std::mutex queue_mutex;
    std::condition_variable condition_producers;
    std::condition_variable condition_consumers;

    std::mutex in_flight_mutex;
    std::condition_variable in_flight_condition;
    std::atomic<std::size_t> in_flight;

    struct handle_in_flight_decrement
    {
        ThreadPool & tp;

        handle_in_flight_decrement(ThreadPool & tp_)
            : tp(tp_)
        { }

        ~handle_in_flight_decrement()
        {
            std::size_t prev
                = std::atomic_fetch_sub_explicit(&tp.in_flight,
                    std::size_t(1),
                    std::memory_order_acq_rel);
            if (prev == 1)
            {
                std::unique_lock<std::mutex> guard(tp.in_flight_mutex);
                tp.in_flight_condition.notify_all();
            }
        }
    };
};

// the constructor just launches some amount of workers
inline ThreadPool::ThreadPool(std::size_t threads)
    : pool_size(threads)
    , in_flight(0)
{
    std::unique_lock<std::mutex> lock(this->queue_mutex);
    for (std::size_t i = 0; i != threads; ++i)
        start_worker(i, lock);
}

// add new work item to the pool
template<class F, class... Args>
auto ThreadPool::enqueue(F&& f, Args&&... args)
    -> std::future<
#if defined(__cpp_lib_is_invocable) && __cpp_lib_is_invocable >= 201703
      typename std::invoke_result<F&&, Args&&...>::type
#else
      typename std::result_of<F&& (Args&&...)>::type
#endif
      >
{
#if defined(__cpp_lib_is_invocable) && __cpp_lib_is_invocable >= 201703
    using return_type = typename std::invoke_result<F&&, Args&&...>::type;
#else
    using return_type = typename std::result_of<F&& (Args&&...)>::type;
#endif


    auto task = std::make_shared< std::packaged_task<return_type()> >(
            std::bind(std::forward<F>(f), std::forward<Args>(args)...)
        );

    std::future<return_type> res = task->get_future();

    std::unique_lock<std::mutex> lock(queue_mutex);
    if (tasks.size () >= max_queue_size)
        // wait for the queue to empty or be stopped
        condition_producers.wait(lock,
            [this]
            {
                return tasks.size () < max_queue_size
                    || stop;
            });

    // don't allow enqueueing after stopping the pool
    if (stop)
        throw std::runtime_error("enqueue on stopped ThreadPool");

    tasks.emplace([task](){ (*task)(); });
    std::atomic_fetch_add_explicit(&in_flight,
        std::size_t(1),
        std::memory_order_relaxed);
    condition_consumers.notify_one();

    return res;
}


// the destructor joins all threads
inline ThreadPool::~ThreadPool()
{
    std::unique_lock<std::mutex> lock(queue_mutex);
    stop = true;
    pool_size = 0;
    condition_consumers.notify_all();
    condition_producers.notify_all();
    condition_consumers.wait(lock, [this]{ return this->workers.empty(); });
    assert(in_flight == 0);
}

inline void ThreadPool::wait_until_empty()
{
    std::unique_lock<std::mutex> lock(this->queue_mutex);
    this->condition_producers.wait(lock,
        [this]{ return this->tasks.empty(); });
}

inline void ThreadPool::wait_until_nothing_in_flight()
{
    std::unique_lock<std::mutex> lock(this->in_flight_mutex);
    this->in_flight_condition.wait(lock,
        [this]{ return this->in_flight == 0; });
}

inline void ThreadPool::set_queue_size_limit(std::size_t limit)
{
    std::unique_lock<std::mutex> lock(this->queue_mutex);

    if (stop)
        return;

    std::size_t const old_limit = max_queue_size;
    max_queue_size = (std::max)(limit, std::size_t(1));
    if (old_limit < max_queue_size)
        condition_producers.notify_all();
}

inline void ThreadPool::set_pool_size(std::size_t limit)
{
    if (limit < 1)
        limit = 1;

    std::unique_lock<std::mutex> lock(this->queue_mutex);

    if (stop)
        return;

    std::size_t const old_size = pool_size;
    assert(this->workers.size() >= old_size);

    pool_size = limit;
    if (pool_size > old_size)
    {
        // create new worker threads
        // it is possible that some of these are still running because
        // they have not stopped yet after a pool size reduction, such
        // workers will just keep running
        for (std::size_t i = old_size; i != pool_size; ++i)
            start_worker(i, lock);
    }
    else if (pool_size < old_size)
        // notify all worker threads to start downsizing
        this->condition_consumers.notify_all();
}

inline void ThreadPool::start_worker(
    std::size_t worker_number, std::unique_lock<std::mutex> const &lock)
{
    assert(lock.owns_lock() && lock.mutex() == &this->queue_mutex);
    assert(worker_number <= this->workers.size());

    auto worker_func =
        [this, worker_number]
        {
            for(;;)
            {
                std::function<void()> task;
                bool notify;

                {
                    std::unique_lock<std::mutex> lock(this->queue_mutex);
                    this->condition_consumers.wait(lock,
                        [this, worker_number]{
                            return this->stop || !this->tasks.empty()
                                || pool_size < worker_number + 1; });

                    // deal with downsizing of thread pool or shutdown
                    if ((this->stop && this->tasks.empty())
                        || (!this->stop && pool_size < worker_number + 1))
                    {
                        // detach this worker, effectively marking it stopped
                        this->workers[worker_number].detach();
                        // downsize the workers vector as much as possible
                        while (this->workers.size() > pool_size
                             && !this->workers.back().joinable())
                            this->workers.pop_back();
                        // if this was the last worker, notify the destructor
                        if (this->workers.empty())
                            this->condition_consumers.notify_all();
                        return;
                    }
                    else if (!this->tasks.empty())
                    {
                        task = std::move(this->tasks.front());
                        this->tasks.pop();
                        notify = this->tasks.size() + 1 ==  max_queue_size
                            || this->tasks.empty();
                    }
                    else
                        continue;
                }

                handle_in_flight_decrement guard(*this);

                if (notify)
                {
                    std::unique_lock<std::mutex> lock(this->queue_mutex);
                    condition_producers.notify_all();
                }

                task();
            }
        };

    if (worker_number < this->workers.size()) {
        std::thread & worker = this->workers[worker_number];
        // start only if not already running
        if (!worker.joinable()) {
            worker = std::thread(worker_func);
        }
    } else
        this->workers.push_back(std::thread(worker_func));
}

} // namespace progschj

#endif // THREAD_POOL_H_7ea1ee6b_4f17_4c09_b76b_3d44e102400c

/usr/local/etc/kea/kea-dhcp4.conf

{

"Dhcp4": {
    "interfaces-config": {
        "interfaces": [
            "ens18/172.16.0.2",
            "ens19/172.16.16.2",
            "ens20/172.16.32.2",
            "ens21/192.168.101.2"
        ]
    },
    "control-socket": {
        "socket-type": "unix",
        "socket-name": "/tmp/kea4-ctrl-socket"
    },
    "lease-database": {
        "type": "memfile",
        "lfc-interval": 3600
    },
    "expired-leases-processing": {
        "reclaim-timer-wait-time": 10,
        "flush-reclaimed-timer-wait-time": 25,
        "hold-reclaimed-time": 3600,
        "max-reclaim-leases": 100,
        "max-reclaim-time": 250,
        "unwarned-reclaim-cycles": 5
    },

    "renew-timer": 900,
    "rebind-timer": 1800,
    "valid-lifetime": 3600,
    "option-data": [
        {
            "name": "domain-name-servers",
            "data": "114.114.114.114, 180.76.76.76"
        }
    ],

    "client-classes": [
        {
            "name": "voip",
            "test": "substring(option[60].hex,0,6) == 'Aastra'",
            "next-server": "192.0.2.254",
            "server-hostname": "hal9000",
            "boot-file-name": "/dev/null"
        }
    ],

    "subnet4": [
        {
            "subnet": "172.16.0.0/20",
            "pools": [ { "pool": "172.16.13.20 - 172.16.15.254" } ],
            "option-data": [
                {
                    "name": "routers",
                    "data": "172.16.0.1"
                }
            ],
            "reservations": [
                {
                    "hw-address": "76:A4:3B:55:00:7B",
                    "ip-address": "172.16.0.10"
                }
            ]
        },

        {
            "subnet": "172.16.16.0/20",
            "pools": [ { "pool": "172.16.20.30 - 172.16.31.254" } ],
            "option-data": [
                {
                    "name": "routers",
                    "data": "172.16.16.1"
                }
            ],
            "reservations": [
                {
                    "hw-address": "DE:F2:A8:0A:EC:7D",
                    "ip-address": "172.16.16.10"
                }
            ]
        },

        {
            "subnet": "172.16.32.0/20",
            "pools": [ { "pool": "172.16.32.20 - 172.16.47.254" } ],
            "option-data": [
                {
                    "name": "routers",
                    "data": "172.16.32.1"
                }
            ]
        },
        
        {
            "subnet": "192.168.101.2/24",
            "pools": [ { "pool": "192.168.101.30 - 192.168.101.254" } ],
            "option-data": [
                {
                    "name": "routers",
                    "data": "192.168.101.1"
                }
            ]
        }
    ],

    "loggers": [
    {
        "name": "kea-dhcp4",
        "output_options": [
            {
                "output": "/usr/local/var/log/kea-dhcp4.log"
            }
        ],
        "severity": "INFO",
        "debuglevel": 0
    }
  ]
}
}
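
With this file in place, kea can syntax-check the configuration and then be started; a minimal sketch using the stock kea tools (the path assumes the default /usr/local prefix from the build above):

kea-dhcp4 -t /usr/local/etc/kea/kea-dhcp4.conf
keactrl start -s dhcp4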

/etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug ens19
iface ens19 inet dhcp

auto ens18
iface ens18 inet static
	address 172.16.0.2/20
	gateway 172.16.0.1

auto ens20
iface ens20 inet static
	address 172.16.16.2/20
	gateway 172.16.16.1

auto ens21
iface ens21 inet static
	address 172.16.32.2/20
	gateway 172.16.32.1

auto ens22
iface ens22 inet static
	address 192.168.101.2/24
	gateway 192.168.101.1
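
After editing, the new addresses can be applied with Debian's standard ifupdown tooling (a usage note, not part of the original article):

systemctl restart networking
# or bring up a single interface, e.g.
ifup ens18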