virtio Driver 0020: A Simple virtio-blk Driver

virtio is a paravirtualized I/O solution: a common framework for virtualizing I/O devices, defined as an abstraction over a set of generic I/O devices in a paravirtualized hypervisor. Compared with devices fully emulated by the host, virtio devices are much more efficient.

The virtio architecture:

[figure: virtio architecture — 41e714f63eb53169654a468ee93f862c.png]

The top row shows the different device types: block devices, network devices, the console, and so on.

The virtio layer is the control plane: it handles the notification mechanism (kick, notify) and the control flow between the driver and the host OS, while virtio-vring carries the actual data.

A vring has three parts: the descriptor array (desc), the available ring (avail), and the used ring (used).

struct vring_desc {
        /* Address (guest-physical). */
        u64 addr;
        /* Length. */
        u32 len;
        /* The flags as indicated above. */
        u16 flags;
        /* We chain unused descriptors via this, too */
        u16 next;
};

The vring descriptor structure.

struct vring_avail {
        u16 flags;
        u16 idx;
        u16 ring[];
};
struct vring_used_elem {
        /* Index of start of used descriptor chain. */
        u32 id;
        /* Total length of the descriptor chain which was used (written to) */
        u32 len;
};

struct vring_used {
        u16 flags;
        u16 idx;
        struct vring_used_elem ring[];
};

The avail and used structures.

static inline void vring_init(struct vring *vr, unsigned int num, void *p,
                              unsigned long align)
{
        vr->num = num;
        vr->desc = p;
        vr->avail = (void *)((char *)p + num*sizeof(struct vring_desc));
        vr->used = (void *)(((unsigned long)&vr->avail->ring[num] + sizeof(u16)
                + align-1) & ~(align - 1));
}

static inline unsigned vring_size(unsigned int num, unsigned long align)
{
        return ((sizeof(struct vring_desc) * num + sizeof(u16) * (3 + num)
                 + align - 1) & ~(align - 1))
                + sizeof(u16) * 3 + sizeof(struct vring_used_elem) * num;
}

vring_size computes the amount of memory a vring needs; num is the number of descriptors, which must be a power of two (128, 256, 512, ...).

The layout starts with num vring_desc descriptors, followed by the vring_avail; at the end of vring_avail there is a 16-bit used_event, hence the (3 + num) u16s (flags, idx, num ring entries, used_event). After that comes the vring_used, which likewise has a 16-bit avail_event at its end, hence the trailing sizeof(u16) * 3 (flags, idx, avail_event).

vring_used must be aligned; here the alignment is a 4 KB page.

used_event is written by the guest to tell the host how far it has consumed the used ring; the host only needs to raise an interrupt once new completions pass that point. avail_event is written by the host to tell the guest how far it has consumed the available ring; the guest only needs to kick once new requests pass that point.

So the flow is: the guest posts a new command and kicks the queue so the host processes it. When the host finishes, it raises an interrupt; after handling the completion, the guest advances used_event to mark the data as processed, so that the host will raise an interrupt again the next time data is ready.

desc stores the descriptors, each describing one buffer; the available ring is how the guest announces which descriptors are ready for the device, and the used ring is how the host reports which descriptors it has consumed.

Virtio implements I/O through virtqueues. Each virtqueue is a queue carrying the bulk data; how many queues a device uses depends on its needs. For example, the virtio network driver (virtio-net) uses two queues (one for receive, one for transmit), while the virtio block driver (virtio-blk) uses only one.

Take a disk read: obtain the virtqueue (virtio-blk has just one), then append three structured elements to it.

struct addr_size {
  unsigned long  vp_addr;    /* physical address */
  u32 vp_size;               /* size */
  u32 vp_flag;               /* flags, e.g. read or write */
};

The first element is the request header:

struct blk_outhdr {
        /* VIRTIO_BLK_T* */
        u32 type;
        /* io priority. */
        u32 ioprio;
        /* Sector (ie. 512 byte offset) */
        u64 sector;
};

It tells the device whether this is a read or a write (type), the I/O priority, and the starting sector (in 512-byte units).

The second element is the data buffer: where the sector data should land, and the buffer size.

The third element is a single byte that reports the result of the operation; 0 means success.

module/blk/virtio_blk.c

struct blk_req {
    struct blk_outhdr hdr;
    uchar status;
};
static struct blk_req rq;
static void virtio_blk_read(ulong sector)
{
    struct addr_size phys[3];
    rq.hdr.type = VIRTIO_BLK_T_IN;
    rq.hdr.ioprio = 0;
    rq.hdr.sector = sector;
    phys[0].vp_addr = V2P((ulong)&rq.hdr);
    phys[0].vp_size = sizeof(rq.hdr);
    phys[0].vp_flag = VRING_DESC_F_READ;
    phys[1].vp_addr = V2P((ulong)buf);
    phys[1].vp_size = 512;
    phys[1].vp_flag = VRING_DESC_F_WRITE;
    uchar status = 0;
    phys[2].vp_addr = V2P((ulong)&rq.status);
    phys[2].vp_size = sizeof(status);
    phys[2].vp_flag = VRING_DESC_F_WRITE;
    virtio_to_queue(to_virtio_dev_t(&pci_vblk), 0, phys, 3, &rq);

}

virtio_to_queue writes the three descriptors into the vring and then kicks to notify the host.

libs/libvirtio/virtio.c

int
virtio_to_queue(virtio_dev_t dev, int qidx, struct addr_size *bufs,
                size_t num, void *data)
{
    u16_t free_first;
    int left;
    struct virtio_queue *q = &dev->queues[qidx];
    struct vring *vring = &q->vring;

    ASSERT(0 <= qidx && qidx < dev->num_queues);

    if (!data)
        panic("%s: NULL data received queue %d", __func__, qidx);

    free_first = q->free_head;

    left = (int)q->free_num - (int)num;
    pci_d("free_first:%016lx,left:%d,dev->threads:%d,n:%d\n",
         free_first, left, dev->threads, num);
    if (left < dev->threads)
        set_indirect_descriptors(dev, q, bufs, num);
    else
        set_direct_descriptors(q, bufs, num);
    /* Next index for host is old free_head */
    vring->avail->ring[vring->avail->idx % q->num] = free_first;

    /* Provided by the caller to identify this slot */
    q->data[free_first] = data;

   /* Make sure the host sees the new descriptors */
    virtio_wmb(true);

    /* advance last idx */
    vring->avail->idx += 1;

    /* Make sure the host sees the avail->idx */
    virtio_rmb(true);

    /* kick it! */
    kick_queue(dev, qidx);
    return 0;
}

kick_queue notifies the host to start processing.

The void *data argument is the driver's per-request cookie; when the host completes the request and the driver is notified, virtio_from_queue returns this pointer.

Here it is struct blk_req rq, which for now is simple: just a status byte and the hdr request structure. The status is the third descriptor:

    phys[2].vp_addr = V2P((ulong)&rq.status);
    phys[2].vp_size = sizeof(status);
    phys[2].vp_flag = VRING_DESC_F_WRITE;

The host fills it in to report whether the operation succeeded.

int virtio_from_queue(virtio_dev_t dev, int qidx, void **data, size_t * len)
{
    struct virtio_queue *q;
    struct vring *vring;
    struct vring_used_elem *uel;
    struct vring_desc *vd;
    int count = 0;
    u16_t idx;
    u16_t used_idx;

    ASSERT(0 <= qidx && qidx < dev->num_queues);

    q = &dev->queues[qidx];
    vring = &q->vring;
    /* Make sure we see changes done by the host */
    virtio_rmb(true);

    /* The index from the host */
    used_idx = vring->used->idx % q->num;

    /* We already saw this one, nothing to do here */
    if (q->last_used == used_idx)
        return -1;
    /* Get the vring_used element */
    uel = &q->vring.used->ring[q->last_used];

    /* Update the last used element */
    q->last_used = (q->last_used + 1) % q->num;

    /* index of the used element */
    idx = uel->id % q->num;

    ASSERT(q->data[idx] != NULL);

    /* Get the descriptor */
    vd = &vring->desc[idx];

    /* Unconditionally set the tail->next to the first used one */
    ASSERT(vring->desc[q->free_tail].flags & VRING_DESC_F_NEXT);
    vring->desc[q->free_tail].next = idx;

    /* Find the last index; eventually there has to be one
     * without the next flag.
     *
     * FIXME: Protect from endless loop
     */
    while (vd->flags & VRING_DESC_F_NEXT) {

        if (vd->flags & VRING_DESC_F_INDIRECT)
            clear_indirect_table(dev, vd);

        idx = vd->next;
        vd = &vring->desc[idx];
        count++;
    }

    /* Didn't count the last one */
    count++;

    if (vd->flags & VRING_DESC_F_INDIRECT)
        clear_indirect_table(dev, vd);

    /* idx points to the tail now, update the queue */
    q->free_tail = idx;
    ASSERT(!(vd->flags & VRING_DESC_F_NEXT));

    /* We can always connect the tail with the head */
    vring->desc[q->free_tail].next = q->free_head;
    vring->desc[q->free_tail].flags = VRING_DESC_F_NEXT;

    q->free_num += count;
   ASSERT(q->free_num <= q->num);

    *data = q->data[uel->id];
    q->data[uel->id] = NULL;
    if (len != NULL)
        *len = uel->len;
    q->last_used_idx++;
    if (!(vring->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
        virtio_store_mb(true,vring->used_event,q->last_used_idx);

    return 0;
}

virtio_from_queue collects the results.

Now for a virtio-blk experiment:

module/blk/virtio_blk.c

static void irq_handler(int n)
{
    uchar isr =
    virtio_conf_readb(to_virtio_dev_t(&pci_vblk), VIRTIO_PCI_ISR);
    ack_lapic_irq();
    virtio_queue_disable_intr(virtio_get_queue(to_virtio_dev_t(&pci_vblk), 0));
    printk("****virtio_blk_irq_handler**** irq:%d,isr:%d cpu:%d\n", n, isr, smp_processor_id());
    virtio_blk_check_queue();
}
void virtio_blk_read_config(struct virtio_blk *p)
{
    virtio_dev_t pvirtio = to_virtio_dev_t(p);
    u32 offset = virtio_pci_config_offset(to_virtio_dev_t(p));

    virtio_conf_read(pvirtio, offset, &p->_config,
                     sizeof(p->_config));
    if (virtio_get_guest_feature_bit(to_virtio_dev_t(p), VIRTIO_BLK_F_SIZE_MAX)) {
        pci_d("VIRTIO_BLK_F_SIZE_MAX:%x\n", p->_config.size_max);
    }
    if (virtio_get_guest_feature_bit(to_virtio_dev_t(p), VIRTIO_BLK_F_SEG_MAX)) {
        pci_d("VIRTIO_BLK_F_SEG_MAX:%x\n", p->_config.seg_max);
    }
    if (virtio_get_guest_feature_bit(to_virtio_dev_t(p), VIRTIO_BLK_F_GEOMETRY)) {
        pci_d("VIRTIO_BLK cylinders:%x,heads:%x,sectors:%x\n",
             p->_config.geometry.cylinders,
             p->_config.geometry.heads, p->_config.geometry.sectors);
    }
    if (virtio_get_guest_feature_bit(to_virtio_dev_t(p), VIRTIO_BLK_F_BLK_SIZE)) {
        pci_d("VIRTIO_BLK_F_BLK_SIZE:%x\n", p->_config.blk_size);
    }
    if (virtio_get_guest_feature_bit(to_virtio_dev_t(p), VIRTIO_BLK_F_TOPOLOGY)) {
        pci_d("VIRTIO_BLK topology,physical_block_exp:%x,alignment:%x,min_io_size:%x,opt_io_size:%x\n",
             p->_config.physical_block_exp, p->_config.alignment_offset,
             p->_config.min_io_size, p->_config.opt_io_size);
    }
    if (virtio_get_guest_feature_bit(to_virtio_dev_t(p), VIRTIO_BLK_F_CONFIG_WCE)) {
        pci_d("VIRTIO_BLK_F_WCE:%x\n", p->_config.wce);
    }
    if (virtio_get_guest_feature_bit(to_virtio_dev_t(p), VIRTIO_BLK_F_RO)) {
        pci_d("VIRTIO_BLK readonly\n");
    }
}
static char buf[512];
struct blk_req {
    struct blk_outhdr hdr;
    pthread thread;
    uchar status;
};
static struct blk_req rq;
static void virtio_blk_read(ulong sector)
{
    struct addr_size phys[3];
    rq.hdr.type = VIRTIO_BLK_T_IN;
    rq.hdr.ioprio = 0;
    rq.hdr.sector = sector;
    rq.thread = current;
    phys[0].vp_addr = V2P((ulong)&rq.hdr);
    phys[0].vp_size = sizeof(rq.hdr);
    phys[0].vp_flag = VRING_DESC_F_READ;
    phys[1].vp_addr = V2P((ulong)buf);
    phys[1].vp_size = 512;
    phys[1].vp_flag = VRING_DESC_F_WRITE;
    uchar status = 0;
    phys[2].vp_addr = V2P((ulong)&rq.status);
    phys[2].vp_size = sizeof(status);
    phys[2].vp_flag = VRING_DESC_F_WRITE;
    virtio_to_queue(to_virtio_dev_t(&pci_vblk), 0, phys, 3, &rq);
    suspend_thread();

}
static void virtio_blk_check_queue(void)
{
    struct blk_req *p;
    size_t len;

    /* Drain the completed requests and wake up the waiting threads */
    while (virtio_from_queue(to_virtio_dev_t(&pci_vblk), 0, (void **)&p, &len)
           == 0) {
        pci_d("virtio_blk_from_queue:%lx,len:%x,status:%d,thread:%s\n", p, len,
             p->status, p->thread->name);
        wake_up_thread(p->thread);
    }

}
int init_virtio_blk()
{
    virtio_dev_t dev = to_virtio_dev_t(&pci_vblk);

    if (virtio_setup_device(dev, VIRTIO_BLK_DEVICE_ID, 1)) {
        virtio_blk_read_config(&pci_vblk);
        //virtio_driver_init(dev);


        pci_dump_config(to_pci_device_t(&pci_vblk));
        msi_register_one(to_pci_device_t(&pci_vblk),0,irq_handler);


        virtio_device_ready(to_virtio_dev_t(&pci_vblk));
        memset(buf,0x12,512);
        //set_timeout_nsec(1000000000UL, vblk_timeout,dev);
        virtio_blk_read(1);
        dump_mem(buf,512);
        virtio_blk_read(1);
        dump_mem(buf,512);

    }
    return OK;
}
DECLARE_MODULE(virtio_blk_drivers, 0, main);

After virtio_blk_read issues the read command, it suspends the current thread with suspend_thread.

When the interrupt handler finds completed data, virtio_blk_check_queue wakes the suspended thread.

buf is first filled with 0x12, then the sector is read into it and the memory is dumped.

Reading twice tests that the interrupt path keeps working: after the first read completes, used_event must be advanced, otherwise the second read will not raise an interrupt. (The data would still be in the queue, though; polling the queue, say from a timer, would still find it.)

The run output:

VIRTIO_BLK_F_SIZE_MAX:0
_is_mmio:1,is_64:0,_is_prefetchable:0
VIRTIO_BLK_F_SEG_MAX:7e
addr_size:1000
VIRTIO_BLK cylinders:2,heads:10,sectors:3f
addr_64:febd1000
VIRTIO_BLK_F_BLK_SIZE:200
msix_table_bar:1,val:1,off:44
VIRTIO_BLK topology,physical_block_exp:0,alignment:0,min_io_size:0,opt_io_size:0
[0:4.0] vid:id = 1af4:1000
VIRTIO_BLK_F_WCE:1
    bar[1]: 32bits addr=000000000000C040 size=20
VIRTIO_BLK readonly
    bar[2]: 32bits addr=00000000FEBD1000 size=1000
[0:3.0] vid:id = 1af4:1001
    IRQ = 11
    bar[1]: 32bits addr=000000000000C000 size=40
    Have MSI-X!    bar[2]: 32bits addr=00000000FEBD0000 size=1000
        msix_location: 64    IRQ = 11
        msix_ctrl: 2    Have MSI-X!        msix_msgnum: 3        msix_location: 64        msix_table_bar: 1        msix_ctrl: 1        msix_table_offset: 0        msix_msgnum: 2        msix_pba_bar: 1        msix_table_bar: 1        msix_pba_offset: 2048
        msix_table_offset: 0ap start done
        msix_pba_bar: 1        msix_pba_offset: 2048
register_irq_handler:100,handler:FFFFFFFF81009120
pci_msix_enable msix_bar:ffffffff810334a0,1
map phy:fea00000 to vaddr:ffffffe8fea00000,&pte:ffffffff80010fa8,pte:fea0009b,511,419,501
ctrl:8001, c001 c001,num:2
msix_mask_entry:0,ffffffe8febd000c,1
msix_mask_entry:1,ffffffe8febd001c,1
msix_mask_entry:0,ffffffe8febd000c,1
msix_write_entry addr:ffffffe8febd0000,msiaddr:fee00000,data:4064
get:fee00000,4064
msix unmask entry :ffffffe8febd000c,1
unmask :0,0,0
free_first:0000000000000000,left:125,dev->threads:0,n:3
vd->addr:10331a0,vd->len:10,vd->flags:1
vd->addr:10331c0,vd->len:200,vd->flags:3
vd->addr:10331b8,vd->len:1,vd->flags:3
cmos:2020-6-5 14:37:3
****virtio_blk_irq_handler**** irq:100,isr:1 cpu:0
virtio_blk_from_queue:ffffffff810331a0,len:201,status:0,thread:kmain
addr:ffffffff810331c0:
~810331c0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810331d0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810331e0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810331f0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033200 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033210 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033220 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033230 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033240 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033250 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033260 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033270 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033280 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033290 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332a0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332b0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332c0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332d0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332e0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332f0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033300 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033310 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033320 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033330 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033340 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033350 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033360 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033370 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033380 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033390 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810333a0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810333b0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
free_first:0000000000000003,left:125,dev->threads:0,n:3
vd->addr:10331a0,vd->len:10,vd->flags:1
vd->addr:10331c0,vd->len:200,vd->flags:3
vd->addr:10331b8,vd->len:1,vd->flags:3
****virtio_blk_irq_handler**** irq:100,isr:1 cpu:0
virtio_blk_from_queue:ffffffff810331a0,len:201,status:0,thread:kmain
addr:ffffffff810331c0:
~810331c0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810331d0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810331e0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810331f0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033200 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033210 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033220 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033230 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033240 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033250 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033260 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033270 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033280 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033290 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332a0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332b0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332c0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332d0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332e0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810332f0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033300 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033310 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033320 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033330 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033340 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033350 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033360 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033370 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033380 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~81033390 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810333a0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
~810333b0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
pstatus:ffffffff81022680,name:slub_mm,status:0
ht:ffffffff810139c0,0
task:ffffffff81022580
alloc 4k:2000
create_thread_oncpu:ffffffff810225d0,stack:ffffffff8003b000,name:tasklet 0
cpu:0,flag:1,state:1
switch first to tasklet ffffffff8003cff8,from:kmain cpu:0
new stack:ffffffff8003cff8,old stack:ffffffff81021ef0
pml4base 102e000  102c000  102b000 511 510 0
thread_main:ffffffff810225d0,ffffffff810139c0,tasklet
smp_thread_main:ffffffff810139c0
flag:9
KHeap: Free:ffffffff81014000,addr,size:2000

The buffer is dumped twice and both dumps are all zeros, because usr.img was zero-filled when it was generated.

QEMUOPTS = -enable-kvm -cpu host,+x2apic \
  -device virtio-blk-pci,id=blk0,drive=hd0,scsi=off \
  -drive file=./usr.img,if=none,id=hd0,cache=none,aio=native \
  -netdev type=tap,script=qemu-ifup.sh,id=net0 -device virtio-net-pci,netdev=net0 \
  -vga vmware -display vnc=192.168.10.2:10 \
  -smp 2 -m 512 $(QEMUEXTRA)

yaos.img: $(OUT)/bootblock $(OUT)/kernel.elf
        dd if=/dev/zero of=yaos.img count=10000
        dd if=$(OUT)/bootblock of=yaos.img conv=notrunc
        dd if=$(OUT)/kernel.elf of=yaos.img seek=1 conv=notrunc

usr.img:  $(OUT)/kernel.elf
        dd if=/dev/zero of=usr.img count=100

qemuimg: yaos.img
        @echo Ctrl+a h for help
        $(QEMU) -serial mon:stdio  -nographic $(QEMUOPTS) -hda yaos.img
qemu:   out/kernel.elf user.img
        @echo Ctrl+a h for help
        $(QEMU) -serial mon:stdio  -nographic $(QEMUOPTS) -kernel out/kernel.elf

QEMUOPTS was modified to create a virtio-blk disk backed by the file usr.img.

To run this example:

git clone https://github.com/saneee/x86_64_kernel.git
cd 0020
make qemu