LevelDB Code Reading (7): Inserting Data

        DBImpl::Put is the entry point for inserting data; internally it calls DB::Put.
        The flow of DB::Put:
        1. Create a WriteBatch.
        2. Add the key/value pair to be inserted into the batch.
        3. Call Write to perform the write.
        
        The flow of DBImpl::Write:
        1. Create a Writer.
        2. Set the writer's batch (the data we want to insert), sync flag, etc.
        3. Append the writer to the writer queue.
        4. If this writer is not at the front of the queue, wait until it
           reaches the front (or until another thread completes its work for
           it) before performing the write.
        5. Call MakeRoomForWrite to choose the write policy and make room.
        6. Call BuildBatchGroup to merge the queued writers into one batch group.
        7. Set the batch group's sequence number.
        8. Write the data to the log file first (this write-ahead step is crucial).
        9. Sync the log file if requested.
        10. Call WriteBatchInternal::InsertInto to insert the data into the memtable.
        11. Signal the writers whose batches were handled; the background
            thread can then perform compaction work if possible.
        
        The flow of MakeRoomForWrite:
        1. Enter an infinite loop.
        2. If a background error has occurred, break out with that error.
        3. If delaying is still allowed and the number of level-0 files has
           reached the slowdown threshold, sleep for 1 ms (a write is delayed
           at most once this way).
        4. If the write is not forced and the memtable's memory usage is below
           write_buffer_size, there is room: break out and write.
        5. If the immutable memtable is non-null, wait for the background
           thread to flush it to disk.
        6. If the number of level-0 files has reached the stop threshold, wait.
        7. Otherwise, point the immutable-memtable pointer at the current
           memtable and create a fresh memtable (plus a new log file) for
           incoming writes.
        8. Call MaybeScheduleCompaction so the background thread can compact
           data when possible.
        
        The flow of DBImpl::BuildBatchGroup:
        1. Take the first writer in the writer queue.
        2. Use its batch as the initial result.
        3. Skip past the first writer.
        4. Iterate over the remaining writers in the queue; for each writer:
        5. If the writer requests a sync write but the first writer does not,
           it must not be folded into this group: break immediately.
        6. If the writer's batch is non-null and appending it would not make
           the group too large, append its batch to result and record the
           writer in last_writer; then continue with step 4.
        7. result now aggregates the batches of all grouped writers.
        8. Return result.
        
        The flow of WriteBatchInternal::InsertInto:
        1. Create a MemTableInserter and set its sequence number and target memtable.
        2. Call WriteBatch::Iterate, which replays every record in the batch
           (distinguishing puts from deletes) into the memtable.

// Apply a batch of writes
Status DBImpl::Write(const WriteOptions& options, WriteBatch* my_batch)
{
    // Create a writer for this request
    Writer w(&mutex_);

    // Fill in its batch, sync flag, and completion state
    w.batch = my_batch;
    w.sync = options.sync;
    w.done = false;

    // Acquire the mutex protecting the writer queue
    MutexLock l(&mutex_);

    // Append this writer to the writer queue
    writers_.push_back(&w);

    // Wait until this writer reaches the front of the queue, or until
    // another thread has already completed its work on our behalf
    while (!w.done && &w != writers_.front()) {
        w.cv.Wait();
    }
    if (w.done) {
        return w.status;
    }

    // May temporarily unlock and wait.
    // Make room for the write
    Status status = MakeRoomForWrite(my_batch == NULL);

    // Fetch the last used sequence number
    uint64_t last_sequence = versions_->LastSequence();
    Writer* last_writer = &w;

    if (status.ok() && my_batch != NULL) {  // NULL batch is for compactions

        // Merge queued writers into one batch group
        WriteBatch* updates = BuildBatchGroup(&last_writer);

        // Assign the group's starting sequence number
        WriteBatchInternal::SetSequence(updates, last_sequence + 1);

        // The group contains multiple records, each consuming one sequence
        // number, so advance last_sequence by the record count
        last_sequence += WriteBatchInternal::Count(updates);

        // Add to log and apply to memtable.  We can release the lock
        // during this phase since &w is currently responsible for logging
        // and protects against concurrent loggers and concurrent writes
        // into mem_.
        {
            mutex_.Unlock();

            // Write to the log file first; data always hits the log before
            // the memtable, for durability and crash recovery
            status = log_->AddRecord(WriteBatchInternal::Contents(updates));
            bool sync_error = false;

            // If the sync option is set, flush the log to stable storage
            if (status.ok() && options.sync) {
                status = logfile_->Sync();
                if (!status.ok()) {
                    sync_error = true;
                }
            }

            if (status.ok()) {
                // Insert the batched records into the memtable
                status = WriteBatchInternal::InsertInto(updates, mem_);
            }
            mutex_.Lock();
            if (sync_error) {
                // The state of the log file is indeterminate: the log record we
                // just added may or may not show up when the DB is re-opened.
                // So we force the DB into a mode where all future writes fail.
                RecordBackgroundError(status);
            }
        }
        if (updates == tmp_batch_) tmp_batch_->Clear();

        // Record the last sequence number in the current version set
        versions_->SetLastSequence(last_sequence);
    }

    // Wake up the writers whose batches were handled as part of this group
    while (true) {
        // Pop the writers off the queue one by one
        Writer* ready = writers_.front();
        writers_.pop_front();
        if (ready != &w) {
            ready->status = status;
            ready->done = true;
            ready->cv.Signal();
        }
        if (ready == last_writer) break;
    }

    // Notify new head of write queue
    if (!writers_.empty()) {
        writers_.front()->cv.Signal();
    }

    return status;
}
// REQUIRES: mutex_ is held
// REQUIRES: this thread is currently at the front of the writer queue
// Make room for an incoming write
Status DBImpl::MakeRoomForWrite(bool force) {
    mutex_.AssertHeld();
    assert(!writers_.empty());
    // Whether this write may be briefly delayed
    bool allow_delay = !force;
    Status s;

    // Loop until a decision is reached
    while (true)
    {
        // A background error has occurred; give up
        if (!bg_error_.ok()) {
            // Yield previous error
            s = bg_error_;
            break;
        }
        // Delaying is allowed and level-0 has reached the slowdown threshold
        else if (allow_delay &&
                   versions_->NumLevelFiles(0) >= config::kL0_SlowdownWritesTrigger)
        {
            // We are getting close to hitting a hard limit on the number of
            // L0 files.  Rather than delaying a single write by several
            // seconds when we hit the hard limit, start delaying each
            // individual write by 1ms to reduce latency variance.  Also,
            // this delay hands over some CPU to the compaction thread in
            // case it is sharing the same core as the writer.
            mutex_.Unlock();
            // Sleep for 1 ms
            env_->SleepForMicroseconds(1000);
            allow_delay = false;  // Do not delay a single write more than once
            mutex_.Lock();
        }
        // Not forced, and the memtable still has room: write directly
        else if (!force &&
                   (mem_->ApproximateMemoryUsage() <= options_.write_buffer_size)) {
            // There is room in current memtable
            break;
        }
        // The current memtable is full, and the previous (immutable) one is
        // still being flushed; wait for it
        else if (imm_ != NULL) {
            // We have filled up the current memtable, but the previous
            // one is still being compacted, so we wait.
            Log(options_.info_log, "Current memtable full; waiting...\n");
            bg_cv_.Wait();
        }
        // Level-0 has reached the hard stop threshold; wait
        else if (versions_->NumLevelFiles(0) >= config::kL0_StopWritesTrigger) {
            // There are too many level-0 files.
            Log(options_.info_log, "Too many L0 files; waiting...\n");
            bg_cv_.Wait();
        }
        // Otherwise, switch to a new memtable (and a new log file)
        else {
            // Attempt to switch to a new memtable and trigger compaction of old
            assert(versions_->PrevLogNumber() == 0);
            uint64_t new_log_number = versions_->NewFileNumber();
            WritableFile* lfile = NULL;
            s = env_->NewWritableFile(LogFileName(dbname_, new_log_number), &lfile);
            if (!s.ok()) {
                // Avoid chewing through file number space in a tight loop.
                versions_->ReuseFileNumber(new_log_number);
                break;
            }
            delete log_;
            delete logfile_;
            logfile_ = lfile;
            logfile_number_ = new_log_number;
            log_ = new log::Writer(lfile);
            imm_ = mem_;
            has_imm_.Release_Store(imm_);
            mem_ = new MemTable(internal_comparator_);
            mem_->Ref();
            force = false;   // Do not force another compaction if have room
            MaybeScheduleCompaction();
        }
    }
    return s;
}
// REQUIRES: Writer list must be non-empty
// REQUIRES: First writer must have a non-NULL batch
// Build a batch group from the writer queue
WriteBatch* DBImpl::BuildBatchGroup(Writer** last_writer) {
    assert(!writers_.empty());

    // The writer at the front of the queue
    Writer* first = writers_.front();

    // Start from its batch
    WriteBatch* result = first->batch;
    assert(result != NULL);

    // Size of the first batch, used to cap how much the group may grow
    size_t size = WriteBatchInternal::ByteSize(first->batch);

    // Allow the group to grow up to a maximum size, but if the
    // original write is small, limit the growth so we do not slow
    // down the small write too much.
    size_t max_size = 1 << 20;
    if (size <= (128<<10)) {
        max_size = size + (128<<10);
    }

    *last_writer = first;
    std::deque<Writer*>::iterator iter = writers_.begin();

    // Skip the first writer
    ++iter;  // Advance past "first"

    // For each remaining writer
    for (; iter != writers_.end(); ++iter) {
        Writer* w = *iter;

        // A sync write must not be folded into a non-sync group
        if (w->sync && !first->sync) {
            // Do not include a sync write into a batch handled by a non-sync write.
            break;
        }

        // If this writer's batch is non-null
        if (w->batch != NULL)
        {
            size += WriteBatchInternal::ByteSize(w->batch);
            if (size > max_size) {
                // Do not make batch too big
                break;
            }

            // Append to *result
            if (result == first->batch) {
                // Switch to temporary batch instead of disturbing caller's batch
                result = tmp_batch_;
                assert(WriteBatchInternal::Count(result) == 0);
                WriteBatchInternal::Append(result, first->batch);
            }

            // Append it to the internal group batch
            WriteBatchInternal::Append(result, w->batch);
        }
        *last_writer = w;
    }
    return result;
}



