PCI bus registration & PCIe device enumeration

Based on: "PCIe study notes — code analysis of the PCIe initialization, enumeration and resource-allocation flow" (CSDN blog)

 

subsys_initcall(acpi_init);
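acpi_init() is registered as a subsystem-level initcall, so it is invoked by do_initcalls() during boot, after core/postcore/arch initcalls and before device initcalls. Abridged from include/linux/init.h (levels run in ascending order):

#define core_initcall(fn)		__define_initcall(fn, 1)
#define postcore_initcall(fn)		__define_initcall(fn, 2)
#define arch_initcall(fn)		__define_initcall(fn, 3)
#define subsys_initcall(fn)		__define_initcall(fn, 4)	/* acpi_init runs at this level */
#define fs_initcall(fn)			__define_initcall(fn, 5)
#define device_initcall(fn)		__define_initcall(fn, 6)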

acpi_init

static int __init acpi_init(void)
{
        int result;

        if (acpi_disabled) {
                printk(KERN_INFO PREFIX "Interpreter disabled.\n");
                return -ENODEV;
        }

        acpi_kobj = kobject_create_and_add("acpi", firmware_kobj);
        if (!acpi_kobj) {
                printk(KERN_WARNING "%s: kset create error\n", __func__);
                acpi_kobj = NULL;
        }

        result = acpi_bus_init();
        if (result) {
                kobject_put(acpi_kobj);
                disable_acpi();
                return result;
        }

        pci_mmcfg_late_init();
        acpi_iort_init();
        acpi_scan_init();
        acpi_ec_init();
        acpi_debugfs_init();
        acpi_sleep_proc_init();
        acpi_wakeup_device_init();
        acpi_debugger_init();
        acpi_setup_sb_notify_handler();
        return 0;
}

 acpi_scan_init

int __init acpi_scan_init(void)
{
        int result;
        acpi_status status;
        struct acpi_table_stao *stao_ptr;

        acpi_pci_root_init();
        acpi_pci_link_init();
        acpi_processor_init();
        acpi_platform_init();
        acpi_lpss_init();
        acpi_apd_init();
        acpi_cmos_rtc_init();
        acpi_container_init();
        acpi_memory_hotplug_init();
        acpi_watchdog_init();
        acpi_pnp_init();
        acpi_int340x_thermal_init();
        acpi_amba_init();
        acpi_init_lpit();

        acpi_scan_add_handler(&generic_device_handler);

        /*
         * If there is STAO table, check whether it needs to ignore the UART
         * device in SPCR table.
         */
        status = acpi_get_table(ACPI_SIG_STAO, 0,
                                (struct acpi_table_header **)&stao_ptr);
        if (ACPI_SUCCESS(status)) {
                if (stao_ptr->header.length > sizeof(struct acpi_table_stao))
                        printk(KERN_INFO PREFIX "STAO Name List not yet supported.");

                if (stao_ptr->ignore_uart)
                        acpi_get_spcr_uart_addr();
        }

        acpi_gpe_apply_masked_gpes();
        acpi_update_all_gpes();

        /*
         * Although we call __add_memory() that is documented to require the
         * device_hotplug_lock, it is not necessary here because this is an
         * early code when userspace or any other code path cannot trigger
         * hotplug/hotunplug operations.
         */
        mutex_lock(&acpi_scan_lock);
        /*
         * Enumerate devices in the ACPI namespace.
         */
        result = acpi_bus_scan(ACPI_ROOT_OBJECT);
        if (result)
                goto out;

        result = acpi_bus_get_device(ACPI_ROOT_OBJECT, &acpi_root);
        if (result)
                goto out;

        /* Fixed feature devices do not exist on HW-reduced platform */
        if (!acpi_gbl_reduced_hardware) {
                result = acpi_bus_scan_fixed();
                if (result) {
                        acpi_detach_data(acpi_root->handle,
                                         acpi_scan_drop_device);
                        acpi_device_del(acpi_root);
                        put_device(&acpi_root->dev);
                        goto out;
                }
        }

        acpi_scan_initialized = true;

 out:
        mutex_unlock(&acpi_scan_lock);
        return result;
}

 

 acpi_pci_root_init

void __init acpi_pci_root_init(void)
{
        acpi_hest_init();
        if (acpi_pci_disabled)
                return;

        pci_acpi_crs_quirks();
        acpi_scan_add_handler_with_hotplug(&pci_root_handler, "pci_root");
}

 pci_root_handler

static struct acpi_scan_handler pci_root_handler = {
        .ids = root_device_ids,
        .attach = acpi_pci_root_add,
        .detach = acpi_pci_root_remove,
        .hotplug = {
                .enabled = true,
                .scan_dependent = acpi_pci_root_scan_dependent,
        },
};

android/kernel/msm-5.4/drivers/acpi/pci_root.c

acpi_pci_root_add


static int acpi_pci_root_add(struct acpi_device *device,
                             const struct acpi_device_id *not_used)
{
        unsigned long long segment, bus;
        acpi_status status;
        int result;
        struct acpi_pci_root *root;
        acpi_handle handle = device->handle;
        int no_aspm = 0;
        bool hotadd = system_state == SYSTEM_RUNNING;
        bool is_pcie;

        root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL);
        if (!root)
                return -ENOMEM;

        segment = 0;
        status = acpi_evaluate_integer(handle, METHOD_NAME__SEG, NULL,
                                       &segment);
        if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) {
                dev_err(&device->dev,  "can't evaluate _SEG\n");
                result = -ENODEV;
                goto end;
        }

        /* Check _CRS first, then _BBN.  If no _BBN, default to zero. */
        root->secondary.flags = IORESOURCE_BUS;
        status = try_get_root_bridge_busnr(handle, &root->secondary);
        if (ACPI_FAILURE(status)) {
                /*
                 * We need both the start and end of the downstream bus range
                 * to interpret _CBA (MMCONFIG base address), so it really is
                 * supposed to be in _CRS.  If we don't find it there, all we
                 * can do is assume [_BBN-0xFF] or [0-0xFF].
                 */
                root->secondary.end = 0xFF;
                dev_warn(&device->dev,
                         FW_BUG "no secondary bus range in _CRS\n");
                status = acpi_evaluate_integer(handle, METHOD_NAME__BBN,
                                               NULL, &bus);
                if (ACPI_SUCCESS(status))
                        root->secondary.start = bus;
                else if (status == AE_NOT_FOUND)
                        root->secondary.start = 0;
                else {
                        dev_err(&device->dev, "can't evaluate _BBN\n");
                        result = -ENODEV;
                        goto end;
                }
        }

        root->device = device;
        root->segment = segment & 0xFFFF;
        strcpy(acpi_device_name(device), ACPI_PCI_ROOT_DEVICE_NAME);
        strcpy(acpi_device_class(device), ACPI_PCI_ROOT_CLASS);
        device->driver_data = root;

        if (hotadd && dmar_device_add(handle)) {
                result = -ENXIO;
                goto end;
        }

        pr_info(PREFIX "%s [%s] (domain %04x %pR)\n",
               acpi_device_name(device), acpi_device_bid(device),
               root->segment, &root->secondary);

        root->mcfg_addr = acpi_pci_root_get_mcfg_addr(handle);

        is_pcie = strcmp(acpi_device_hid(device), "PNP0A08") == 0;
        negotiate_os_control(root, &no_aspm, is_pcie);

        /*
         * TBD: Need PCI interface for enumeration/configuration of roots.
         */

        /*
         * Scan the Root Bridge
         * --------------------
         * Must do this prior to any attempt to bind the root device, as the
         * PCI namespace does not get created until this call is made (and
         * thus the root bridge's pci_dev does not exist).
         */
        root->bus = pci_acpi_scan_root(root);
        if (!root->bus) {
                dev_err(&device->dev,
                        "Bus %04x:%02x not present in PCI namespace\n",
                        root->segment, (unsigned int)root->secondary.start);
                device->driver_data = NULL;
                result = -ENODEV;
                goto remove_dmar;
        }

        if (no_aspm)
                pcie_no_aspm();

        pci_acpi_add_bus_pm_notifier(device);
        device_set_wakeup_capable(root->bus->bridge, device->wakeup.flags.valid);

        if (hotadd) {
                pcibios_resource_survey_bus(root->bus);
                pci_assign_unassigned_root_bus_resources(root->bus);
                /*
                 * This is only called for the hotadd case. For the boot-time
                 * case, we need to wait until after PCI initialization in
                 * order to deal with IOAPICs mapped in on a PCI BAR.
                 *
                 * This is currently x86-specific, because acpi_ioapic_add()
                 * is an empty function without CONFIG_ACPI_HOTPLUG_IOAPIC.
                 * And CONFIG_ACPI_HOTPLUG_IOAPIC depends on CONFIG_X86_IO_APIC
                 * (see drivers/acpi/Kconfig).
                 */
                acpi_ioapic_add(root->device->handle);
        }

        pci_lock_rescan_remove();
        pci_bus_add_devices(root->bus);
        pci_unlock_rescan_remove();
        return 1;

remove_dmar:
        if (hotadd)
                dmar_device_remove(handle);
end:
        kfree(root);
        return result;
}

 

LINUX/android/kernel/msm-5.4/arch/arm64/kernel/pci.c

pci_acpi_scan_root

pci_acpi_scan_root is the entry point of the PCIe enumeration flow.

/* Interface called from ACPI code to setup PCI host controller */
struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
{
        struct acpi_pci_generic_root_info *ri;
        struct pci_bus *bus, *child;
        struct acpi_pci_root_ops *root_ops;
        struct pci_host_bridge *host;

        ri = kzalloc(sizeof(*ri), GFP_KERNEL);
        if (!ri)
                return NULL;

        root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL);
        if (!root_ops) {
                kfree(ri);
                return NULL;
        }

        ri->cfg = pci_acpi_setup_ecam_mapping(root);
        if (!ri->cfg) {
                kfree(ri);
                kfree(root_ops);
                return NULL;
        }

        root_ops->release_info = pci_acpi_generic_release_info;
        root_ops->prepare_resources = pci_acpi_root_prepare_resources;
        // fetch the pci_ops for the corresponding chip platform (from the ECAM ops)
        root_ops->pci_ops = &ri->cfg->ops->pci_ops;
        bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg);
        if (!bus)
                return NULL;

        /* If we must preserve the resource configuration, claim now */
        host = pci_find_host_bridge(bus);
        if (host->preserve_config)
                pci_bus_claim_resources(bus);

        /*
         * Assign whatever was left unassigned. If we didn't claim above,
         * this will reassign everything.
         */
        pci_assign_unassigned_root_bus_resources(bus);

        list_for_each_entry(child, &bus->children, node)
                pcie_bus_configure_settings(child);

        return bus;
}

 pci_acpi_setup_ecam_mapping

/*
 * Lookup the bus range for the domain in MCFG, and set up config space
 * mapping.
 */
static struct pci_config_window *
pci_acpi_setup_ecam_mapping(struct acpi_pci_root *root)
{
        struct device *dev = &root->device->dev;
        struct resource *bus_res = &root->secondary;
        u16 seg = root->segment;
        struct pci_ecam_ops *ecam_ops;
        struct resource cfgres;
        struct acpi_device *adev;
        struct pci_config_window *cfg;
        int ret;

        ret = pci_mcfg_lookup(root, &cfgres, &ecam_ops);
        if (ret) {
                dev_err(dev, "%04x:%pR ECAM region not found\n", seg, bus_res);
                return NULL;
        }

        adev = acpi_resource_consumer(&cfgres);
        if (adev)
                dev_info(dev, "ECAM area %pR reserved by %s\n", &cfgres,
                         dev_name(&adev->dev));
        else
                dev_warn(dev, FW_BUG "ECAM area %pR not reserved in ACPI namespace\n",
                         &cfgres);

        cfg = pci_ecam_create(dev, &cfgres, bus_res, ecam_ops);
        if (IS_ERR(cfg)) {
                dev_err(dev, "%04x:%pR error %ld mapping ECAM\n", seg, bus_res,
                        PTR_ERR(cfg));
                return NULL;
        }

        return cfg;
}

 pci_mcfg_lookup

int pci_mcfg_lookup(struct acpi_pci_root *root, struct resource *cfgres,
                    struct pci_ecam_ops **ecam_ops)
{
        struct pci_ecam_ops *ops = &pci_generic_ecam_ops;
        struct resource *bus_res = &root->secondary;
        u16 seg = root->segment;
        struct mcfg_entry *e;
        struct resource res;

        /* Use address from _CBA if present, otherwise lookup MCFG */
        if (root->mcfg_addr)
                goto skip_lookup;

        /*
         * We expect the range in bus_res in the coverage of MCFG bus range.
         */
        list_for_each_entry(e, &pci_mcfg_list, list) {
                if (e->segment == seg && e->bus_start <= bus_res->start &&
                    e->bus_end >= bus_res->end) {
                        root->mcfg_addr = e->addr;
                }

        }

skip_lookup:
        memset(&res, 0, sizeof(res));
        if (root->mcfg_addr) {
                res.start = root->mcfg_addr + (bus_res->start << 20);
                res.end = res.start + (resource_size(bus_res) << 20) - 1;
                res.flags = IORESOURCE_MEM;
        }

        /*
         * Allow quirks to override default ECAM ops and CFG resource
         * range.  This may even fabricate a CFG resource range in case
         * MCFG does not have it.  Invalid CFG start address means MCFG
         * firmware bug or we need another quirk in array.
         */
        pci_mcfg_apply_quirks(root, &res, &ops);
        if (!res.start)
                return -ENXIO;

        *cfgres = res;
        *ecam_ops = ops;
        return 0;
}
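For orientation, ECAM gives every bus a 1 MB slice of config space (hence the << 20 above), every device 32 KB and every function 4 KB. A hedged sketch of the address arithmetic — the helper name is illustrative, the real mapping is done by pci_ecam_map_bus():

/* Illustrative only: the standard ECAM offset layout defined by the PCIe spec */
static inline u64 ecam_cfg_addr(u64 ecam_base, u8 bus_start,
                                u8 bus, u8 dev, u8 fn, u16 reg)
{
        return ecam_base +
               ((u64)(bus - bus_start) << 20) + /* 1 MB per bus      */
               ((u64)dev << 15) +               /* 32 KB per device  */
               ((u64)fn  << 12) +               /* 4 KB per function */
               reg;
}

/*
 * Example (hypothetical numbers): with mcfg_addr = 0xE0000000 and a bus range
 * of [0x00-0xFF], the code above yields
 *   res.start = 0xE0000000, res.end = 0xEFFFFFFF   (256 buses x 1 MB = 256 MB)
 */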

 pci_mcfg_apply_quirks


static void pci_mcfg_apply_quirks(struct acpi_pci_root *root,
                                  struct resource *cfgres,
                                  struct pci_ecam_ops **ecam_ops)
{
#ifdef CONFIG_PCI_QUIRKS
        u16 segment = root->segment;
        struct resource *bus_range = &root->secondary;
        struct mcfg_fixup *f;
        int i;

        for (i = 0, f = mcfg_quirks; i < ARRAY_SIZE(mcfg_quirks); i++, f++) {
                if (pci_mcfg_quirk_matches(f, segment, bus_range)) {
                        if (f->cfgres.start)
                                *cfgres = f->cfgres;
                        if (f->ops)
                                *ecam_ops =  f->ops;
                        dev_info(&root->device->dev, "MCFG quirk: ECAM at %pR for %pR with %ps\n",
                                 cfgres, bus_range, *ecam_ops);
                        return;
                }
        }
#endif
}

 mcfg_quirks

Background: what exactly is the RC that these ops drive?

RC is short for root complex. It typically consists of a host bridge, a host bus and several root ports, and is the PCI device closest to the CPU. The RC is accessed through a dedicated set of ops, which differs from vendor to vendor; this ops is normally selected in pci_mcfg_quirk_matches() based on the mcfg_oem_id / mcfg_oem_table_id / mcfg_oem_revision passed in by the BIOS.
All the ops supported by the kernel are listed in mcfg_quirks.

Source: https://blog.csdn.net/tiantao2012/article/details/65934961

static struct mcfg_fixup mcfg_quirks[] = {
/*      { OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */

#ifdef CONFIG_ARM64

#define AL_ECAM(table_id, rev, seg, ops) \
        { "AMAZON", table_id, rev, seg, MCFG_BUS_ANY, ops }

        AL_ECAM("GRAVITON", 0, 0, &al_pcie_ops),
        AL_ECAM("GRAVITON", 0, 1, &al_pcie_ops),
        AL_ECAM("GRAVITON", 0, 2, &al_pcie_ops),
        AL_ECAM("GRAVITON", 0, 3, &al_pcie_ops),
        AL_ECAM("GRAVITON", 0, 4, &al_pcie_ops),
        AL_ECAM("GRAVITON", 0, 5, &al_pcie_ops),
        AL_ECAM("GRAVITON", 0, 6, &al_pcie_ops),
        AL_ECAM("GRAVITON", 0, 7, &al_pcie_ops),

#define QCOM_ECAM32(seg) \
        { "QCOM  ", "QDF2432 ", 1, seg, MCFG_BUS_ANY, &pci_32b_ops }

        QCOM_ECAM32(0),
        QCOM_ECAM32(1),
        QCOM_ECAM32(2),
        QCOM_ECAM32(3),
        QCOM_ECAM32(4),
        QCOM_ECAM32(5),
        QCOM_ECAM32(6),
        QCOM_ECAM32(7),

#define HISI_QUAD_DOM(table_id, seg, ops) \
        { "HISI  ", table_id, 0, (seg) + 0, MCFG_BUS_ANY, ops }, \
        { "HISI  ", table_id, 0, (seg) + 1, MCFG_BUS_ANY, ops }, \
        { "HISI  ", table_id, 0, (seg) + 2, MCFG_BUS_ANY, ops }, \
        { "HISI  ", table_id, 0, (seg) + 3, MCFG_BUS_ANY, ops }

        HISI_QUAD_DOM("HIP05   ",  0, &hisi_pcie_ops),
        HISI_QUAD_DOM("HIP06   ",  0, &hisi_pcie_ops),
        HISI_QUAD_DOM("HIP07   ",  0, &hisi_pcie_ops),
        HISI_QUAD_DOM("HIP07   ",  4, &hisi_pcie_ops),
        HISI_QUAD_DOM("HIP07   ",  8, &hisi_pcie_ops),
        HISI_QUAD_DOM("HIP07   ", 12, &hisi_pcie_ops),

#define THUNDER_PEM_RES(addr, node) \
        DEFINE_RES_MEM((addr) + ((u64) (node) << 44), 0x39 * SZ_16M)

#define THUNDER_PEM_QUIRK(rev, node) \
        { "CAVIUM", "THUNDERX", rev, 4 + (10 * (node)), MCFG_BUS_ANY,       \
          &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x88001f000000UL, node) },  \
        { "CAVIUM", "THUNDERX", rev, 5 + (10 * (node)), MCFG_BUS_ANY,       \
          &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x884057000000UL, node) },  \
        { "CAVIUM", "THUNDERX", rev, 6 + (10 * (node)), MCFG_BUS_ANY,       \
          &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x88808f000000UL, node) },  \
        { "CAVIUM", "THUNDERX", rev, 7 + (10 * (node)), MCFG_BUS_ANY,       \
          &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x89001f000000UL, node) },  \
        { "CAVIUM", "THUNDERX", rev, 8 + (10 * (node)), MCFG_BUS_ANY,       \
          &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x894057000000UL, node) },  \
        { "CAVIUM", "THUNDERX", rev, 9 + (10 * (node)), MCFG_BUS_ANY,       \
          &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x89808f000000UL, node) }

#define THUNDER_ECAM_QUIRK(rev, seg)                                    \
        { "CAVIUM", "THUNDERX", rev, seg, MCFG_BUS_ANY,                 \
        &pci_thunder_ecam_ops }

        /* SoC pass2.x */
        THUNDER_PEM_QUIRK(1, 0),
        THUNDER_PEM_QUIRK(1, 1),
        THUNDER_ECAM_QUIRK(1, 10),

        /* SoC pass1.x */
        THUNDER_PEM_QUIRK(2, 0),        /* off-chip devices */
        THUNDER_PEM_QUIRK(2, 1),        /* off-chip devices */
        THUNDER_ECAM_QUIRK(2,  0),
        THUNDER_ECAM_QUIRK(2,  1),
        THUNDER_ECAM_QUIRK(2,  2),
        THUNDER_ECAM_QUIRK(2,  3),
        THUNDER_ECAM_QUIRK(2, 10),
        THUNDER_ECAM_QUIRK(2, 11),
        THUNDER_ECAM_QUIRK(2, 12),
        THUNDER_ECAM_QUIRK(2, 13),

#define XGENE_V1_ECAM_MCFG(rev, seg) \
        {"APM   ", "XGENE   ", rev, seg, MCFG_BUS_ANY, \
                &xgene_v1_pcie_ecam_ops }

#define XGENE_V2_ECAM_MCFG(rev, seg) \
        {"APM   ", "XGENE   ", rev, seg, MCFG_BUS_ANY, \
                &xgene_v2_pcie_ecam_ops }

        /* X-Gene SoC with v1 PCIe controller */
        XGENE_V1_ECAM_MCFG(1, 0),
        XGENE_V1_ECAM_MCFG(1, 1),
        XGENE_V1_ECAM_MCFG(1, 2),
        XGENE_V1_ECAM_MCFG(1, 3),
        XGENE_V1_ECAM_MCFG(1, 4),
        XGENE_V1_ECAM_MCFG(2, 0),
        XGENE_V1_ECAM_MCFG(2, 1),
        XGENE_V1_ECAM_MCFG(2, 2),
        XGENE_V1_ECAM_MCFG(2, 3),
        XGENE_V1_ECAM_MCFG(2, 4),
        /* X-Gene SoC with v2.1 PCIe controller */
        XGENE_V2_ECAM_MCFG(3, 0),
        XGENE_V2_ECAM_MCFG(3, 1),
        /* X-Gene SoC with v2.2 PCIe controller */
        XGENE_V2_ECAM_MCFG(4, 0),
        XGENE_V2_ECAM_MCFG(4, 1),
        XGENE_V2_ECAM_MCFG(4, 2),

#define ALTRA_ECAM_QUIRK(rev, seg) \
        { "Ampere", "Altra   ", rev, seg, MCFG_BUS_ANY, &pci_32b_read_ops }

        ALTRA_ECAM_QUIRK(1, 0),
        ALTRA_ECAM_QUIRK(1, 1),
        ALTRA_ECAM_QUIRK(1, 2),
        ALTRA_ECAM_QUIRK(1, 3),
        ALTRA_ECAM_QUIRK(1, 4),
        ALTRA_ECAM_QUIRK(1, 5),
        ALTRA_ECAM_QUIRK(1, 6),
        ALTRA_ECAM_QUIRK(1, 7),
        ALTRA_ECAM_QUIRK(1, 8),
        ALTRA_ECAM_QUIRK(1, 9),
        ALTRA_ECAM_QUIRK(1, 10),
        ALTRA_ECAM_QUIRK(1, 11),
        ALTRA_ECAM_QUIRK(1, 12),
        ALTRA_ECAM_QUIRK(1, 13),
        ALTRA_ECAM_QUIRK(1, 14),
        ALTRA_ECAM_QUIRK(1, 15),
#endif /* ARM64 */
};

 

#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
/* ECAM ops for 32-bit access only (non-compliant) */
struct pci_ecam_ops pci_32b_ops = {
        .bus_shift      = 20,
        .pci_ops        = {
                .map_bus        = pci_ecam_map_bus,
                .read           = pci_generic_config_read32,
                .write          = pci_generic_config_write32,
        }
};

/* ECAM ops for 32-bit read only (non-compliant) */
struct pci_ecam_ops pci_32b_read_ops = {
        .bus_shift      = 20,
        .pci_ops        = {
                .map_bus        = pci_ecam_map_bus,
                .read           = pci_generic_config_read32,
                .write          = pci_generic_config_write,
        }
};
#endif
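pci_32b_ops exists for controllers whose config space only tolerates aligned 32-bit accesses. pci_generic_config_read32() reads the containing dword and extracts the requested bytes; a simplified sketch of that idea (not the kernel's exact code):

/* Simplified: emulate a 1/2/4-byte config read with one aligned 32-bit read */
static int cfg_read32_sketch(void __iomem *cfg, int where, int size, u32 *val)
{
        u32 dword = readl(cfg + (where & ~0x3));

        *val = dword >> (8 * (where & 0x3));
        if (size < 4)
                *val &= (1U << (size * 8)) - 1;

        return PCIBIOS_SUCCESSFUL;
}

The matching write path has to read-modify-write the whole dword, which can unintentionally clear write-1-to-clear status bits in neighbouring registers; that is why these ops are flagged as non-compliant.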

acpi_pci_root_create

1. Enumeration flow

1.1 acpi_pci_root_add

1.2 pci_acpi_scan_root (enumeration starts here)

1.3 acpi_pci_root_create

1.4 pci_scan_child_bus (the key function that carries out the enumeration)

1.5 pci_scan_slot (pci_scan_single_device does the actual work)

1.6 pci_scan_device

1.7 pci_device_add

1.8 pci_scan_bridge

Source: https://blog.csdn.net/u013253075/article/details/123301127

struct pci_bus *acpi_pci_root_create(struct acpi_pci_root *root,
                                     struct acpi_pci_root_ops *ops,
                                     struct acpi_pci_root_info *info,
                                     void *sysdata)
{
        int ret, busnum = root->secondary.start;
        struct acpi_device *device = root->device;
        int node = acpi_get_node(device->handle);
        struct pci_bus *bus;
        struct pci_host_bridge *host_bridge;
        union acpi_object *obj;

        info->root = root;
        info->bridge = device;
        info->ops = ops;
        INIT_LIST_HEAD(&info->resources);
        snprintf(info->name, sizeof(info->name), "PCI Bus %04x:%02x",
                 root->segment, busnum);

        if (ops->init_info && ops->init_info(info))
                goto out_release_info;
        if (ops->prepare_resources)
                ret = ops->prepare_resources(info);
        else
                ret = acpi_pci_probe_root_resources(info);
        if (ret < 0)
                goto out_release_info;

        pci_acpi_root_add_resources(info);
        pci_add_resource(&info->resources, &root->secondary);
        // register the pci_ops used for config space accesses
        bus = pci_create_root_bus(NULL, busnum, ops->pci_ops,
                                  sysdata, &info->resources);
        if (!bus)
                goto out_release_info;

        host_bridge = to_pci_host_bridge(bus->bridge);
        if (!(root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL))
                host_bridge->native_pcie_hotplug = 0;
        if (!(root->osc_control_set & OSC_PCI_SHPC_NATIVE_HP_CONTROL))
                host_bridge->native_shpc_hotplug = 0;
        if (!(root->osc_control_set & OSC_PCI_EXPRESS_AER_CONTROL))
                host_bridge->native_aer = 0;
        if (!(root->osc_control_set & OSC_PCI_EXPRESS_PME_CONTROL))
                host_bridge->native_pme = 0;
        if (!(root->osc_control_set & OSC_PCI_EXPRESS_LTR_CONTROL))
                host_bridge->native_ltr = 0;

        /*
         * Evaluate the "PCI Boot Configuration" _DSM Function.  If it
         * exists and returns 0, we must preserve any PCI resource
         * assignments made by firmware for this host bridge.
         */
        obj = acpi_evaluate_dsm(ACPI_HANDLE(bus->bridge), &pci_acpi_dsm_guid, 1,
                                IGNORE_PCI_BOOT_CONFIG_DSM, NULL);
        if (obj && obj->type == ACPI_TYPE_INTEGER && obj->integer.value == 0)
                host_bridge->preserve_config = 1;
        ACPI_FREE(obj);
        // start enumerating devices under the root bus
        pci_scan_child_bus(bus);
        pci_set_host_bridge_release(host_bridge, acpi_pci_root_release_info,
                                    info);
        if (node != NUMA_NO_NODE)
                dev_printk(KERN_DEBUG, &bus->dev, "on NUMA node %d\n", node);
        return bus;

out_release_info:
        __acpi_pci_root_release_info(info);
        return NULL;
}

pci_create_root_bus

struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
                struct pci_ops *ops, void *sysdata, struct list_head *resources)
{
        int error;
        struct pci_host_bridge *bridge;

        bridge = pci_alloc_host_bridge(0);
        if (!bridge)
                return NULL;

        bridge->dev.parent = parent;

        list_splice_init(resources, &bridge->windows);
        bridge->sysdata = sysdata;
        bridge->busnr = bus;
        // the root bus will use these pci_ops for config space accesses
        bridge->ops = ops;
        // register the host bridge (the root pci_bus is created inside)
        error = pci_register_host_bridge(bridge);
        if (error < 0)
                goto err_out;

        return bridge->bus;

err_out:
        put_device(&bridge->dev);
        return NULL;
}
EXPORT_SYMBOL_GPL(pci_create_root_bus);

pci_register_host_bridge


static int pci_register_host_bridge(struct pci_host_bridge *bridge)
{
        struct device *parent = bridge->dev.parent;
        struct resource_entry *window, *n;
        struct pci_bus *bus, *b;
        resource_size_t offset;
        LIST_HEAD(resources);
        struct resource *res;
        char addr[64], *fmt;
        const char *name;
        int err;

        bus = pci_alloc_bus(NULL);
        if (!bus)
                return -ENOMEM;

        bridge->bus = bus;

        /* Temporarily move resources off the list */
        list_splice_init(&bridge->windows, &resources);
        bus->sysdata = bridge->sysdata;
        bus->msi = bridge->msi;
        // the bus inherits the config-space pci_ops from the host bridge
        bus->ops = bridge->ops;
        bus->number = bus->busn_res.start = bridge->busnr;
#ifdef CONFIG_PCI_DOMAINS_GENERIC
        bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
#endif

        b = pci_find_bus(pci_domain_nr(bus), bridge->busnr);
        if (b) {
                /* Ignore it if we already got here via a different bridge */
                dev_dbg(&b->dev, "bus already known\n");
                err = -EEXIST;
                goto free;
        }

        dev_set_name(&bridge->dev, "pci%04x:%02x", pci_domain_nr(bus),
                     bridge->busnr);

        err = pcibios_root_bridge_prepare(bridge);
        if (err)
                goto free;

        err = device_add(&bridge->dev);
        if (err) {
                put_device(&bridge->dev);
                goto free;
        }
        bus->bridge = get_device(&bridge->dev);
        device_enable_async_suspend(bus->bridge);
        pci_set_bus_of_node(bus);
        pci_set_bus_msi_domain(bus);

        if (!parent)
                set_dev_node(bus->bridge, pcibus_to_node(bus));

        bus->dev.class = &pcibus_class;
        bus->dev.parent = bus->bridge;

        dev_set_name(&bus->dev, "%04x:%02x", pci_domain_nr(bus), bus->number);
        name = dev_name(&bus->dev);

        err = device_register(&bus->dev);
        if (err)
                goto unregister;

        pcibios_add_bus(bus);

        /* Create legacy_io and legacy_mem files for this bus */
        pci_create_legacy_files(bus);

        if (parent)
                dev_info(parent, "PCI host bridge to bus %s\n", name);
        else
                pr_info("PCI host bridge to bus %s\n", name);

        /* Add initial resources to the bus */
        resource_list_for_each_entry_safe(window, n, &resources) {
                list_move_tail(&window->node, &bridge->windows);
                offset = window->offset;
                res = window->res;

                if (res->flags & IORESOURCE_BUS)
                        pci_bus_insert_busn_res(bus, bus->number, res->end);
                else
                        pci_bus_add_resource(bus, res, 0);

                if (offset) {
                        if (resource_type(res) == IORESOURCE_IO)
                                fmt = " (bus address [%#06llx-%#06llx])";
                        else
                                fmt = " (bus address [%#010llx-%#010llx])";

                        snprintf(addr, sizeof(addr), fmt,
                                 (unsigned long long)(res->start - offset),
                                 (unsigned long long)(res->end - offset));
                } else
                        addr[0] = '\0';

                dev_info(&bus->dev, "root bus resource %pR%s\n", res, addr);
        }

        down_write(&pci_bus_sem);
        list_add_tail(&bus->node, &pci_root_buses);
        up_write(&pci_bus_sem);

        return 0;

unregister:
        put_device(&bridge->dev);
        device_del(&bridge->dev);

free:
        kfree(bus);
        return err;
}

pci_scan_child_bus

pci_scan_bridge_extend

pci_scan_child_bus_extend

pci_scan_single_device

Software implementation of PCIe device enumeration

        1. Scanning starts from pci_scan_root_bus_bridge. A host bridge is first registered with the system, and a root bus (bus 0) is created during that registration. pci_register_host_bridge mainly performs a series of initialization and registration steps, and also sets up the bus resources, including its address windows.

        2. From pci_scan_child_bus onwards, devices are scanned and added downward starting at bus 0; this work is done by pci_scan_child_bus_extend.

        3. The pci_scan_child_bus_extend flow has two main parts (see the simplified sketch below):

                • PCI device scanning: as the loop shows, each bus supports 32 devices and each device up to 8 functions. Every function found is registered with the system; during pci_scan_device the device's configuration space is read and its BAR information is collected.

                • PCI bridge scanning: a PCI bridge connects an upstream PCI bus to a downstream one. When a downstream bus is found, a child bus structure is created and pci_scan_child_bus_extend is called again on it, so the scan is a recursive process.

Source: https://blog.csdn.net/relax33/article/details/128182253
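A heavily simplified sketch of that loop and recursion — illustrative only, based on the description above; the real logic lives in pci_scan_child_bus_extend()/pci_scan_slot() in drivers/pci/probe.c:

/* Illustrative sketch, not the kernel's actual implementation */
static void scan_bus_sketch(struct pci_bus *bus)
{
        unsigned int devfn;
        struct pci_dev *dev;

        /* 32 devices per bus x 8 functions per device => devfn 0..255 */
        for (devfn = 0; devfn < 256; devfn += 8)
                /* probes function 0, then functions 1-7 for multi-function
                 * devices; each found function has its config space read
                 * and is registered via pci_device_add() */
                pci_scan_slot(bus, devfn);

        /* for every bridge discovered above, allocate a child bus and recurse */
        list_for_each_entry(dev, &bus->devices, bus_list)
                if (pci_is_bridge(dev))
                        pci_scan_bridge(bus, dev, bus->busn_res.end, 0);
}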

Taking RK3568 as an example, the topology after enumeration completes is shown in the original article's figure (not reproduced here).

The relationships between the various data structures are also shown in a figure in the original article (not reproduced here).

PCIe address space
PCIe defines three kinds of address space: Configuration space, Memory space, and IO space.

Note: the PCIe spec keeps the IO address space only for compatibility with legacy PCI devices; new designs should use MMIO (Memory-Mapped IO), with the device's internal memory and registers mapped into the Memory Address Space.

Every PCIe function (endpoint or bridge) has a 4 KB configuration space. An endpoint uses a Type 0 configuration header, which contains six 32-bit BAR (Base Address Register) registers; a bridge uses a Type 1 header with two 32-bit BARs. Through the BARs, a device's memory and IO regions are mapped into the system's memory address space and IO space respectively.
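As a quick illustration, the header-type field at config offset 0x0E tells enumeration software which layout it is looking at. A hedged sketch using the standard kernel config accessors (the helper itself is made up for illustration):

/* Assumes <linux/pci.h>; illustrative helper, not a kernel API */
static int config_header_bar_count(struct pci_dev *dev)
{
        u8 hdr_type;

        pci_read_config_byte(dev, PCI_HEADER_TYPE, &hdr_type);

        switch (hdr_type & 0x7f) {              /* bit 7 is the multi-function flag */
        case PCI_HEADER_TYPE_NORMAL:            /* Type 0: endpoint, BAR0..BAR5 at 0x10-0x24 */
                return 6;
        case PCI_HEADER_TYPE_BRIDGE:            /* Type 1: PCI-PCI bridge, BAR0..BAR1 at 0x10-0x14 */
                return 2;
        default:                                /* Type 2: CardBus bridge, one BAR */
                return 1;
        }
}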

QCOM QCS8250 PCIe address space

reg = <0x60100000 0x10000> describes the configuration space (64 KB).

ranges: <local_addr cpu_addr size>

        • local_addr: the number of cells is given by this node's #address-cells (3 here). Shifting the first cell right by 24 bits and ANDing with 0x3 gives the address-space type (a decode sketch follows this section), e.g.:

                (0x01000000 >> 24) & 0x3 == 0x01 means IO address space;

                (0x02000000 >> 24) & 0x3 == 0x02 (32-bit) or 0x03 (64-bit) means MEM address space.

        • cpu_addr: the number of cells is given by the parent node's #address-cells

        • size: the number of cells is given by this node's #size-cells

The ranges above therefore mean:

PCIe IO space 0x60200000-0x602FFFFF maps to CPU address space 0x60200000-0x602FFFFF, size 1 MB

PCIe MEM space 0x60300000-0x63FFFFFF maps to CPU address space 0x60300000-0x63FFFFFF, size 61 MB

Note: the bus-range and ranges information is added to the bridge->windows list of the struct pci_host_bridge for later management.
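A hedged sketch of decoding the space-type bits in the first cell of such a 3-cell PCI address (the encoding comes from the standard PCI bus device-tree binding; the helper name is illustrative):

/* Illustrative only: decode bits [25:24] of the first local_addr cell */
static const char *pci_range_space(u32 cell0)
{
        switch ((cell0 >> 24) & 0x3) {
        case 0x0: return "config";
        case 0x1: return "IO";
        case 0x2: return "MEM (32-bit)";
        case 0x3: return "MEM (64-bit)";
        }
        return "unknown";
}

/*
 * Applied to the ranges above:
 *   0x01000000 ... -> IO,  PCI 0x60200000 == CPU 0x60200000, size 0x00100000 (1 MB)
 *   0x02000000 ... -> MEM, PCI 0x60300000 == CPU 0x60300000, size 0x03d00000 (61 MB)
 */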

BAR register initialization flow

Example: a 32-bit endpoint requesting 2 MB of NP-MMIO (non-prefetchable memory-mapped IO)

        1. BAR bits [3:0] are read-only and encode:

                bit 0: 0 = memory request; 1 = IO request

                bits [2:1]: 00 = 32-bit decoding; 10 = 64-bit decoding

                bit 3: 0 = non-prefetchable; 1 = prefetchable

        2. Software writes all 1s to the BAR and then reads it back; 2 raised to the position of the lowest writable bit is the size of the space the BAR requests. Here the lowest writable bit is bit 21, so the BAR size is 2^21 = 2 MB.

        3. The system allocates 2 MB of address space for the device and writes the start address into the BAR; here the value written is 0x60400000.

The original article shows a diagram of the CPU writing this endpoint's BAR0 register (figure not reproduced here); a minimal sizing sketch follows.
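A minimal sketch of the sizing probe in step 2 (assumptions: a 32-bit memory BAR, no 64-bit or IO handling; pci_read_config_dword()/pci_write_config_dword() are the real kernel helpers, the function itself is illustrative):

/* Assumes <linux/pci.h>; illustrative, not how the kernel structures it */
static resource_size_t bar_size_sketch(struct pci_dev *dev, int bar)
{
        int reg = PCI_BASE_ADDRESS_0 + bar * 4;
        u32 orig, sz;

        pci_read_config_dword(dev, reg, &orig);

        /* write all 1s and read back: the size bits read back as 0 */
        pci_write_config_dword(dev, reg, ~0U);
        pci_read_config_dword(dev, reg, &sz);

        /* restore the original BAR value */
        pci_write_config_dword(dev, reg, orig);

        sz &= PCI_BASE_ADDRESS_MEM_MASK;        /* mask the low read-only flag bits */
        if (!sz)
                return 0;                       /* BAR not implemented */

        /* lowest writable (set) bit gives the size: 0xffe00000 -> 0x00200000 = 2 MB */
        return sz & ~(sz - 1);
}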

                        

Source: https://blog.csdn.net/relax33/article/details/128182253

