Avnet Boards Forums › Mini-ITX Hardware Design
Thread details: Not Answered · 9 replies · 347 subscribers · 1674 views
Not seeing PCIe with custom project

Former Member, over 10 years ago

Hello!

I have a 7z100 miniITX board, and I have been able to run through the entirety of the wonderful "Ubuntu Desktop Linux" instructions, and got Ubuntu 12.11 running. I was then able to get Ubuntu 14.04 running from the Linaro 14.10 developer build. I believe I have everything running... except for PCIe.

I have been trying a number of things (modifications to my device tree, different kernel configuration options, using the "linux-xlnx" Xilinx kernel instead of the Analog Devices kernel the "Ubuntu Desktop Linux" instructions use, etc.), but nothing has worked yet.

I've verified that the board itself is okay by booting the ready_to_testsd_image_nic file from the "PCIe Root Complex Reference Design" project. With it, I can see the PCIe bus, and a card is identified if one is plugged in, both at boot and via lspci.

I've changed my .bit/.hdf files to the ones used in the "PCIe Root Complex Reference Design", and I've rebuilt the FSBL and BOOT.bin. I have used menuconfig on the linux-xlnx kernel to ensure that PCIe support is built in, and I've tried several different changes to the device tree's pci bus node, comparing what the device-tree build instructions at http://www.wiki.xilinx.com/Build+Device+Tree+Blob generate against what is listed in the .patch file from the "PCIe Root Complex Reference Design" project. I had to change the generated ethernet device in the device tree to get it to function, which made me suspect the generated device tree is incomplete or incorrect.
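For reference, the node I'm converging on looks roughly like the sketch below. The addresses come from the reference design's boot log (bridge at 0x50000000, a 256 MiB window at 0x60000000); the compatible string, reg size, and exact property names are my assumptions and need to be checked against the .patch file and the driver source:

```dts
/* Sketch of the axi-pcie node. Addresses are taken from the reference
 * design's boot log; the compatible string, reg size, and property names
 * are assumptions to be verified against the reference design's patch. */
axi-pcie@50000000 {
    compatible = "xlnx,axi-pcie-host-1.00.a";
    device_type = "pci";
    #address-cells = <3>;
    #size-cells = <2>;
    #interrupt-cells = <1>;
    reg = <0x50000000 0x1000000>;
    /* 256 MiB non-prefetchable memory window at 0x60000000 */
    ranges = <0x02000000 0 0x60000000 0x60000000 0 0x10000000>;
    interrupt-parent = <&intc>;
    interrupts = <0 59 4>;
};
```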

For comparison, here is what dmesg and lspci have to say about available devices on my Ubuntu 14.04 build:

root@linaro-developer:~# dmesg | grep pci
ehci-pci: EHCI PCI platform driver
root@linaro-developer:~# lspci -vvv

and here is what the example project reports to the same commands:

zynq> dmesg | grep pci
xaxi_pcie_set_bridge_resource:pci_space: 0x02000000 pci_addr:0x0000000060000000 size: 0x0000000010000000
xaxi_pcie_set_bridge_resource:Setting resource in Memory Space
PCI host bridge /amba@0/axi-pcie@50000000 (primary) ranges:
pci_bus 0000:00: root bus resource [mem 0x60000000-0x6fffffff]
pci_bus 0000:00: root bus resource [io  0x1000-0xffff]
pci_bus 0000:00: No busn resource found for root bus, will use [bus 00-ff]
pci 0000:00:00.0: [10ee:0706] type 01 class 0x060400
pci 0000:00:00.0: reg 10: [mem 0x00000000-0x3fffffff]
pci 0000:01:00.0: [10de:1183] type 00 class 0x030000
pci 0000:01:00.0: reg 10: [mem 0x00000000-0x00ffffff]
pci 0000:01:00.0: reg 14: [mem 0x00000000-0x07ffffff 64bit pref]
pci 0000:01:00.0: reg 1c: [mem 0x00000000-0x01ffffff 64bit pref]
pci 0000:01:00.0: reg 24: [io  0x0000-0x007f]
pci 0000:01:00.0: reg 30: [mem 0x00000000-0x0007ffff pref]
pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
pci_bus 0000:00: busn_res: [bus 00-ff] end is updated to 01
pci 0000:00:00.0: BAR 0: can't assign mem (size 0x40000000)
pci 0000:00:00.0: BAR 8: assigned [mem 0x60000000-0x6bffffff]
pci 0000:00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
pci 0000:01:00.0: BAR 1: assigned [mem 0x60000000-0x67ffffff 64bit pref]
pci 0000:01:00.0: BAR 3: assigned [mem 0x68000000-0x69ffffff 64bit pref]
pci 0000:01:00.0: BAR 0: assigned [mem 0x6a000000-0x6affffff]
pci 0000:01:00.0: BAR 6: assigned [mem 0x6b000000-0x6b07ffff pref]
pci 0000:01:00.0: BAR 5: assigned [io  0x1000-0x107f]
pci 0000:00:00.0: PCI bridge to [bus 01]
pci 0000:00:00.0:   bridge window [io  0x1000-0x1fff]
pci 0000:00:00.0:   bridge window [mem 0x60000000-0x6bffffff]
ehci-pci: EHCI PCI platform driver
zynq> lspci
00:00.0 Class 0604: 10ee:0706
01:00.0 Class 0300: 10de:1183
zynq>

Does anyone have suggestions for where to start looking? At this point, I'm happy to share device tree files, .config files, or try anything anyone suggests.

I do have one direct question: One difference between the "PCIe Root Complex Reference Design" kernel and the latest from the linux-xlnx master is the version: the example project uses 3.9 while the latest is 3.18. One Xilinx-PCIe-related change in 3.18 is that the axi-pcie driver has been mainlined for the first time. Could it be that my kernel build isn't including the newly mainlined driver?
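To rule that out, I'm checking the build configuration directly. I'm assuming the mainline 3.18 symbol is CONFIG_PCIE_XILINX (the 3.9 out-of-tree driver may use a different name, so this needs verifying against the respective Kconfig files):

```shell
# Look for the AXI PCIe host driver in the kernel build configuration.
# CONFIG_PCIE_XILINX is my assumption for the 3.18 mainline symbol name;
# the older out-of-tree driver may use a different one.
CONFIG=${CONFIG:-.config}   # path to the kernel build config
grep 'PCIE_XILINX' "$CONFIG" \
  || echo "AXI PCIe host driver not enabled in $CONFIG"
```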

Any help or suggestions are greatly appreciated. In the meantime I will attempt to build an older kernel, see if that makes a difference, and report back. Thanks!

Reply from Former Member, over 10 years ago

    Thanks for the link - it looks like I'm in line with the recommendations. One thing that is different is the interrupts line. The Xilinx Wiki has the line as:

    interrupts = < 0 52 4 >;

whereas the patch file for the reference design, and the Xilinx forum post that solved a similar problem, list what I have:

    interrupts = < 0 59 4 >;

    I need to investigate this more; right now I'm not sure where the interrupt numbers are defined.
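In case it helps anyone else, my current understanding (from the Zynq-7000 TRM, UG585, which is worth double-checking) is that the device-tree SPI number plus 32 gives the hardware interrupt ID, and the PL-to-PS IRQ_F2P lines land at interrupt IDs 61-68 and 84-91. So the two candidate values differ only in which IRQ_F2P bit the PCIe core's interrupt is wired to in the hardware design:

```shell
# Zynq-7000 GIC numbering sketch (verify against the TRM, UG585):
# device-tree SPI number + 32 = hardware interrupt ID.
echo $((52 + 32))   # 84 -> first interrupt of the IRQ_F2P[15:8] bank
echo $((59 + 32))   # 91 -> last interrupt of that bank
```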

    Separately, I've been trying to resolve what I think may be the problem: there isn't enough virtual memory to allocate the pci_space. I missed it before because the error doesn't have 'pci' in the line, which I was grepping for.

    Reference design kernel boot messages:

    hw-breakpoint: found 5 (+1 reserved) breakpoint and 1 watchpoint registers.
    hw-breakpoint: maximum watchpoint size is 4 bytes.
    AXI PCIe Root Port Probe Successful
    xaxi_pcie_set_bridge_resource:pci_space: 0x02000000 pci_addr:0x0000000060000000 size: 0x0000000010000000
    xaxi_pcie_set_bridge_resource:Setting resource in Memory Space

    My project kernel boot messages:

    hw-breakpoint: found 5 (+1 reserved) breakpoint and 1 watchpoint registers.
    hw-breakpoint: maximum watchpoint size is 4 bytes.
    zynq-ocm f800c000.ps7-ocmc: ZYNQ OCM pool: 256 KiB @ 0xf0080000
    vmap allocation for size 268439552 failed: use vmalloc=<size> to increase size.
    xaxi_pcie_of_probe: Port Initalization failed
    xaxi_pcie_init: Root Port Probe failed

    So it looks like the PCIe driver tries to allocate ~256 MB out of virtual memory, and fails. I tried to do a few things:

1) I slowly increased the vmalloc area by specifying vmalloc=<size> on the kernel's command line, per the error's suggestion. I got to 496M before the kernel would panic; it looks like the maximum is slightly less than 512M.
2) I removed the ocmc device, which uses virtual memory and which the reference design doesn't include.
3) I rebuilt the kernel with less CMA reserved memory (reducing it from the default 128 MiB to the 16 MiB the reference design uses).
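As a sanity check on the failing size: 268439552 bytes is exactly the 0x10000000 (256 MiB) PCIe window plus one 4 KiB page, which I take to be vmap's guard page:

```shell
# The vmap failure size is the 256 MiB PCIe window plus one 4 KiB page,
# which I assume is the guard page vmap appends to each allocation.
echo $((0x10000000 + 4096))   # 268439552, the size in the error message
```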

    None of this helped. Here is the /proc/meminfo for my design after boot:

    root@linaro-developer:~# cat /proc/meminfo
    MemTotal:        1027084 kB
    MemFree:          982348 kB
    MemAvailable:     970204 kB
    Buffers:            5172 kB
    Cached:            18156 kB
    SwapCached:            0 kB
    Active:            15352 kB
    Inactive:          16224 kB
    Active(anon):       8280 kB
    Inactive(anon):      172 kB
    Active(file):       7072 kB
    Inactive(file):    16052 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    HighTotal:        524288 kB
    HighFree:         505224 kB
    LowTotal:         502796 kB
    LowFree:          477124 kB
    SwapTotal:             0 kB
    SwapFree:              0 kB
    Dirty:                 0 kB
    Writeback:             0 kB
    AnonPages:          8176 kB
    Mapped:             4532 kB
    Shmem:               208 kB
    Slab:               8184 kB
    SReclaimable:       3100 kB
    SUnreclaim:         5084 kB
    KernelStack:         496 kB
    PageTables:          336 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:      513540 kB
    Committed_AS:      40156 kB
    VmallocTotal:     499712 kB
    VmallocUsed:        2620 kB
    VmallocChunk:     244020 kB

    It looks like while very little vmem is used, the largest chunk is slightly less than what the PCI bus needs. That chunk size did grow as I increased vmalloc, just not enough.

    Am I on the right track here? I just looked at the reference design's /proc/meminfo:

    zynq> cat /proc/meminfo
    MemTotal:        1032348 kB
    MemFree:          969308 kB
    Buffers:             200 kB
    Cached:             4688 kB
    SwapCached:            0 kB
    Active:             1724 kB
    Inactive:           3884 kB
    Active(anon):        720 kB
    Inactive(anon):        0 kB
    Active(file):       1004 kB
    Inactive(file):     3884 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    HighTotal:        270336 kB
    HighFree:         247800 kB
    LowTotal:         762012 kB
    LowFree:          721508 kB
    SwapTotal:             0 kB
    SwapFree:              0 kB
    Dirty:                16 kB
    Writeback:             0 kB
    AnonPages:           676 kB
    Mapped:             1284 kB
    Shmem:                 0 kB
    Slab:               5232 kB
    SReclaimable:       2212 kB
    SUnreclaim:         3020 kB
    KernelStack:         344 kB
    PageTables:          116 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:      516172 kB
    Committed_AS:       2772 kB
    VmallocTotal:     245760 kB
    VmallocUsed:        2756 kB
    VmallocChunk:     145660 kB

I would expect its vmalloc area to be large enough to accommodate the PCIe bus allocation, but it isn't (VmallocTotal is only 245760 kB there too, yet the probe succeeded). So I'm no longer sure the error produced in my design is trustworthy.

    Once again, your help is much appreciated, and any further insight is very welcome!
