Avnet Boards Forums
Mini-ITX Hardware Design
Not seeing PCIe with custom project

Former Member over 10 years ago

Hello!

I have a 7z100 Mini-ITX board, and I have been able to run through the entirety of the wonderful "Ubuntu Desktop Linux" instructions and got Ubuntu 12.11 running. I was then able to get Ubuntu 14.04 running from the Linaro 14.10 developer build. I believe I have everything running... except for PCIe.

I have been trying a number of things (modifications to my device tree, different kernel configuration options, using the "linux-xlnx" Xilinx kernel instead of the Analog Devices kernel the "Ubuntu Desktop Linux" instructions use, etc), but nothing has worked yet.

I've verified that the board itself is okay by using the ready_to_test sd_image_nic files from the "PCIe Root Complex Reference Design" project. With those, I can see the PCIe bus, and I can see a card identified (at boot and via lspci) if one is plugged in.

I've changed my .bit/.hdf files to the ones used in the "PCIe Root Complex Reference Design", and I've rebuilt the FSBL and BOOT.bin. I have used menuconfig on the linux-xlnx kernel to ensure the PCIe support is built in, and I've tried several different changes to the device tree's PCI bus, comparing what building a device tree using these instructions generates (http://www.wiki.xilinx.com/Build+Device+Tree+Blob) against what is listed in the .patch file from the "PCIe Root Complex Reference Design" project. I had to make changes to the generated ethernet device in the device tree to get it to function, which made me suspicious that the generated device tree was incomplete or incorrect.
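
For reference, these are the symbols I'm verifying end up set in the generated .config (CONFIG_XILINX_AXIPCIE is the linux-xlnx AXI PCIe root complex driver; the other two are stock kernel options):

CONFIG_PCI=y
CONFIG_PCI_MSI=y
CONFIG_XILINX_AXIPCIE=y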

For comparison, here is what dmesg and lspci have to say about available devices on my Ubuntu 14.04 build:

root@linaro-developer:~# dmesg | grep pci
ehci-pci: EHCI PCI platform driver
root@linaro-developer:~# lspci -vvv

and here is what the example project reports to the same commands:

zynq> dmesg | grep pci
xaxi_pcie_set_bridge_resource:pci_space: 0x02000000 pci_addr:0x0000000060000000 size: 0x0000000010000000
xaxi_pcie_set_bridge_resource:Setting resource in Memory Space
PCI host bridge /amba@0/axi-pcie@50000000 (primary) ranges:
pci_bus 0000:00: root bus resource [mem 0x60000000-0x6fffffff]
pci_bus 0000:00: root bus resource [io  0x1000-0xffff]
pci_bus 0000:00: No busn resource found for root bus, will use [bus 00-ff]
pci 0000:00:00.0: [10ee:0706] type 01 class 0x060400
pci 0000:00:00.0: reg 10: [mem 0x00000000-0x3fffffff]
pci 0000:01:00.0: [10de:1183] type 00 class 0x030000
pci 0000:01:00.0: reg 10: [mem 0x00000000-0x00ffffff]
pci 0000:01:00.0: reg 14: [mem 0x00000000-0x07ffffff 64bit pref]
pci 0000:01:00.0: reg 1c: [mem 0x00000000-0x01ffffff 64bit pref]
pci 0000:01:00.0: reg 24: [io  0x0000-0x007f]
pci 0000:01:00.0: reg 30: [mem 0x00000000-0x0007ffff pref]
pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
pci_bus 0000:00: busn_res: [bus 00-ff] end is updated to 01
pci 0000:00:00.0: BAR 0: can't assign mem (size 0x40000000)
pci 0000:00:00.0: BAR 8: assigned [mem 0x60000000-0x6bffffff]
pci 0000:00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
pci 0000:01:00.0: BAR 1: assigned [mem 0x60000000-0x67ffffff 64bit pref]
pci 0000:01:00.0: BAR 3: assigned [mem 0x68000000-0x69ffffff 64bit pref]
pci 0000:01:00.0: BAR 0: assigned [mem 0x6a000000-0x6affffff]
pci 0000:01:00.0: BAR 6: assigned [mem 0x6b000000-0x6b07ffff pref]
pci 0000:01:00.0: BAR 5: assigned [io  0x1000-0x107f]
pci 0000:00:00.0: PCI bridge to [bus 01]
pci 0000:00:00.0:   bridge window [io  0x1000-0x1fff]
pci 0000:00:00.0:   bridge window [mem 0x60000000-0x6bffffff]
ehci-pci: EHCI PCI platform driver
zynq> lspci
00:00.0 Class 0604: 10ee:0706
01:00.0 Class 0300: 10de:1183
zynq>

Does anyone have suggestions for where I should start looking? At this point, I'm happy to share device tree files, .config files, or try anything anyone suggests.

I do have one direct question: One difference between the "PCIe Root Complex Reference Design" kernel and the latest from the linux-xlnx master is the version: the example project uses 3.9, while the latest is 3.18. One Xilinx-PCIe-related change in 3.18 is that the axi-pcie driver has been mainlined for the first time. Could there be a problem with the kernel build such that the newly mainlined driver isn't being included?

Any help or suggestions are greatly appreciated. In the meantime I will attempt to get an older kernel built, see if that makes a difference, and report back. Thanks!

  • Former Member over 10 years ago

    Hi,

      It sounds like you have made some amazing progress combining the Ubuntu and PCIe Root Complex reference designs.  In fact, I suspect you are further along than just about anyone else at this point.

    With the number of changes that take place regularly in the Linux mainline, I think it is a very good bet that the modifications made between the 3.9 and 3.18 releases of the kernel could be at the root (pardon the pun) of the problem. The first thing I would be inclined to try is to back up to the older kernel build, and you are already doing that. Please report the results back in this thread.

    There are also numerous issues updating the Ubuntu 12.04 root file system in the reference design to the latest 14.04 version, and although you were able to work around those with the Linaro development release, I'm wondering if something unknown may still be interfering with the PCIe drivers. If you have success with the older kernel version, it is probably prudent to drop back to the Ubuntu 12.04 release and verify that the root complex and HDMI operate in tandem as they should.

    Ron

  • Former Member over 10 years ago

    One thing I should point out, in case it was missed: I was assuming you have combined the PCIe Root Complex and Ubuntu hardware designs. The Ubuntu hardware platform does not have the PCIe interface built into it, so if you are using only the Ubuntu hardware platform with the new kernel, there is no way the PCIe design can work. Since you have the Ubuntu design running with the updated software, it may be informative to update the PCIe Root Complex design separately to the 3.18 release and test whether that works. If it does, then it is more likely that the hardware platform you are using needs to be combined with the PCIe Root Complex design, rather than there being an issue with the updated kernels.

    Ron

  • Former Member over 10 years ago

    Thanks for the responses, Ron!

    Just to clarify my intent: I'm attempting to run an Ubuntu 14.04 ext4 rootfs (not a ramfs, so this part is similar to the Ubuntu reference design) on a kernel no older than 3.13 that allows me to use PCIe. My application doesn't use the HDMI port or a graphical desktop, and the linaro-developer build I'm using doesn't support a graphical desktop. In fact, Linaro didn't port an Ubuntu 14.04 Desktop build to ARMv7, at least not that I'm aware of.

    To answer your question about the Ubuntu hardware design: once I confirmed I could get Ubuntu 14.04 running with the Ubuntu Desktop hardware design, I rebuilt the PCIe Root Complex reference design for my attempt to get PCIe working, and I have been using that exclusively since then. To confirm that my rebuild was successful in enabling access to the PCIe bus, I rebuilt the BOOT.bin from the example project's ready_to_test folder with my .bit file, and then booted the Mini-ITX board with that BOOT.bin and the .dtb and kernel image from the example project. It boots fine, and I can see the bus just like with the untouched ready_to_test files.

    I was able to drop back to the 3.16 kernel, available in the linux-xlnx 2014.2 release. I rebuilt u-boot using the instructions for the PCIe Root Port design, but made the changes needed to avoid a ramfs (how to do this is outlined in the Ubuntu Desktop Linux instructions).

    I also rebuilt my device tree blob. There were some changes needed to get it functional: the inclusion of the ttc resource definitions (so the SD card was usable by the kernel), and the removal of the 222223 1000000 operating point from the cpu0 definition (so cpufreq would stop trying to switch to an invalid frequency). Once those were done, everything boots properly.
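
    For anyone following along, the cpu0 change looks roughly like this (a sketch only; the node contents and the remaining operating points are whatever your generated file already has -- mine matched the stock Zynq values below):

    cpu0: cpu@0 {
        compatible = "arm,cortex-a9";
        device_type = "cpu";
        reg = <0>;
        clocks = <&clkc 3>;
        /* kHz uV pairs; the 222223 1000000 point is removed so cpufreq
           stops trying to switch to a frequency the hardware can't reach */
        operating-points = <666667 1000000
                            333334 1000000>;
    };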

    This has gotten me a little further: here is my dmesg output:

    root@linaro-developer:~# dmesg | grep pci
    [    0.502758] xaxi_pcie_of_probe: Port Initalization failed
    [    0.508125] xaxi_pcie_init: Root Port Probe failed
    [    1.094143] ehci-pci: EHCI PCI platform driver

    This is similar to what the raw PCIe reference design ready_to_test boot reports if there is no card plugged into the PCIe slot at boot:

    zynq> dmesg | grep pci
    xaxi_pcie_init_port: Link is Down
    xaxi_pcie_of_probe: Port Initalization failed
    xaxi_pcie_init: Root Port Probe failed
    ehci-pci: EHCI PCI platform driver

    It looks like the drivers are now included in the kernel, but they aren't seeing the card. Unlike the reference design, though, my boot doesn't report "Link is Down", which may be a clue.

    My problem appears identical to what was reported in this thread: http://forums.xilinx.com/t5/Embedded-Linux/Building-Petalinux-for-Mini-ITX-RC-PCIe-design/td-p/480898

    I verified my device tree entry for pcie looks identical to the 'solution' reported in that thread, but since I'm not using PetaLinux I don't know what the _defconfig they used to build the kernel looks like. I'm currently using the unmodified xilinx_zynq_defconfig, which does define CONFIG_XILINX_AXIPCIE and CONFIG_PCI_MSI.
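
    To rule out a symbol being silently dropped during the build, I'm checking the generated .config in the kernel tree rather than trusting the defconfig name (a quick sanity check):

    # run in the linux-xlnx tree after 'make ARCH=arm xilinx_zynq_defconfig'
    grep -E 'CONFIG_PCI=|CONFIG_PCI_MSI=|CONFIG_XILINX_AXIPCIE=' .config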

    The PCIe error is reported well before the rootfs is mounted or Ubuntu is booted, which leads me to believe I either have a device tree file problem or a kernel configuration problem.

    I'm not sure what I'll try next; do you have any suggestions? Thanks again!

  • Former Member over 10 years ago

    I agree that the issue is now isolated to the device tree and/or kernel configuration. Check this link for some additional kernel configuration options for PCIe on Zynq:

    http://www.wiki.xilinx.com/Linux+PCIe

    Hopefully that will provide the clues to allow you to correct any omissions in your kernel configuration. If the configuration settings you are using appear complete, then the problem must be in the device tree, so you need to verify that the address map of your design corresponds precisely to the one specified in your device tree. Pay close attention to the interrupt settings, which appear to have changed between releases.
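
    For reference, here is the overall node shape I would expect, based on the addresses in your earlier dmesg output (a sketch only: the compatible string is the one the mainline axi-pcie driver binds to, older linux-xlnx trees may expect a different one, and the #interrupt-cells/interrupt-map block is omitted for brevity -- copy those from the reference design .patch file rather than from here):

    axi_pcie_0: axi-pcie@50000000 {
        #address-cells = <3>;
        #size-cells = <2>;
        compatible = "xlnx,axi-pcie-host-1.00.a";
        device_type = "pci";
        reg = <0x50000000 0x10000000>;  /* bridge register space */
        /* 256 MB memory window at 0x60000000, mapped 1:1 */
        ranges = <0x02000000 0x00000000 0x60000000 0x60000000 0x00000000 0x10000000>;
        interrupt-parent = <&ps7_scugic_0>;  /* your GIC label may differ */
        interrupts = <0 59 4>;
    };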

    Ron

  • Former Member over 10 years ago

    Thanks for the link - it looks like I'm in line with the recommendations. One thing that is different is the interrupts line. The Xilinx Wiki has the line as:

    interrupts = < 0 52 4 >;

    whereas the patch file for the reference design, and the Xilinx forum post that solved a similar problem, list what I have:

    interrupts = < 0 59 4 >;

    I need to investigate this more; right now I'm not sure where the interrupt numbers are defined.
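
    Here is my current understanding of what those cells mean, as a commented version of the property (the IRQ_F2P mapping is from the Zynq TRM, so please correct me if I have this wrong):

    /* <type number flags>: type 0 = GIC shared peripheral interrupt (SPI),
       flags 4 = level-triggered, active high. On Zynq, the PL fabric
       interrupts IRQ_F2P[15:8] map to GIC SPI numbers 52-59, so the correct
       number depends on which IRQ_F2P pin the axi_pcie interrupt output is
       tied to in the hardware design -- which would also explain why the
       wiki and the reference design disagree. */
    interrupts = <0 59 4>;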

    Separately, I've been trying to resolve what I think may be the real problem: there isn't enough virtual memory to allocate the pci_space. I missed it before because the error line doesn't contain 'pci', which is what I was grepping for.

    Reference design kernel boot messages:

    hw-breakpoint: found 5 (+1 reserved) breakpoint and 1 watchpoint registers.
    hw-breakpoint: maximum watchpoint size is 4 bytes.
    AXI PCIe Root Port Probe Successful
    xaxi_pcie_set_bridge_resource:pci_space: 0x02000000 pci_addr:0x0000000060000000 size: 0x0000000010000000
    xaxi_pcie_set_bridge_resource:Setting resource in Memory Space

    My project kernel boot messages:

    hw-breakpoint: found 5 (+1 reserved) breakpoint and 1 watchpoint registers.
    hw-breakpoint: maximum watchpoint size is 4 bytes.
    zynq-ocm f800c000.ps7-ocmc: ZYNQ OCM pool: 256 KiB @ 0xf0080000
    vmap allocation for size 268439552 failed: use vmalloc=<size> to increase size.
    xaxi_pcie_of_probe: Port Initalization failed
    xaxi_pcie_init: Root Port Probe failed

    So it looks like the PCIe driver tries to allocate ~256 MB out of virtual memory and fails (268439552 bytes is 0x10000000 plus a 4 KiB guard page, i.e. the full 256 MB PCIe region). I tried a few things:

    1) I slowly increased the amount of vmem by specifying vmalloc=<size> on the kernel's command line, per the error's suggestion (see the sketch after this list). I got to 496M before the kernel would panic; it looks like the maximum allocation is slightly less than 512M.
    2) I removed the ocmc device, which was using virtual memory and which the reference design doesn't use.
    3) I rebuilt the kernel with less CMA reserved memory (by default it is 128 MiB; I reduced it to what the reference design uses, 16 MiB).
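
    For reference, this is how I'm passing the vmalloc option (a sketch of my U-Boot step; the console= and root= values are just my particular setup):

    # at the U-Boot prompt, before booting
    setenv bootargs console=ttyPS0,115200 root=/dev/mmcblk0p2 rw rootwait vmalloc=384M
    saveenv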

    None of this helped. Here is the /proc/meminfo for my design after boot:

    root@linaro-developer:~# cat /proc/meminfo
    MemTotal:        1027084 kB
    MemFree:          982348 kB
    MemAvailable:     970204 kB
    Buffers:            5172 kB
    Cached:            18156 kB
    SwapCached:            0 kB
    Active:            15352 kB
    Inactive:          16224 kB
    Active(anon):       8280 kB
    Inactive(anon):      172 kB
    Active(file):       7072 kB
    Inactive(file):    16052 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    HighTotal:        524288 kB
    HighFree:         505224 kB
    LowTotal:         502796 kB
    LowFree:          477124 kB
    SwapTotal:             0 kB
    SwapFree:              0 kB
    Dirty:                 0 kB
    Writeback:             0 kB
    AnonPages:          8176 kB
    Mapped:             4532 kB
    Shmem:               208 kB
    Slab:               8184 kB
    SReclaimable:       3100 kB
    SUnreclaim:         5084 kB
    KernelStack:         496 kB
    PageTables:          336 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:      513540 kB
    Committed_AS:      40156 kB
    VmallocTotal:     499712 kB
    VmallocUsed:        2620 kB
    VmallocChunk:     244020 kB

    It looks like, while very little vmem is used, the largest free chunk (VmallocChunk) is slightly smaller than what the PCI bus needs. That chunk did grow as I increased vmalloc, just not enough.

    Am I on the right track here? I just looked at the reference design's /proc/meminfo:

    zynq> cat /proc/meminfo
    MemTotal:        1032348 kB
    MemFree:          969308 kB
    Buffers:             200 kB
    Cached:             4688 kB
    SwapCached:            0 kB
    Active:             1724 kB
    Inactive:           3884 kB
    Active(anon):        720 kB
    Inactive(anon):        0 kB
    Active(file):       1004 kB
    Inactive(file):     3884 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    HighTotal:        270336 kB
    HighFree:         247800 kB
    LowTotal:         762012 kB
    LowFree:          721508 kB
    SwapTotal:             0 kB
    SwapFree:              0 kB
    Dirty:                16 kB
    Writeback:             0 kB
    AnonPages:           676 kB
    Mapped:             1284 kB
    Shmem:                 0 kB
    Slab:               5232 kB
    SReclaimable:       2212 kB
    SUnreclaim:         3020 kB
    KernelStack:         344 kB
    PageTables:          116 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:      516172 kB
    Committed_AS:       2772 kB
    VmallocTotal:     245760 kB
    VmallocUsed:        2756 kB
    VmallocChunk:     145660 kB

    I would expect to see a vmalloc area large enough to accommodate the PCIe bus allocation, but it isn't: VmallocTotal there is only 245760 kB, less than the 256 MB being allocated on my build. So I'm no longer sure the error produced in my design is trustworthy.

    Once again, your help is much appreciated, and any further insight is very welcome!

  • Former Member over 10 years ago

    I was able to get the axi-pcie bus loaded, and my PCIe card recognized.

    root@linaro-developer:~# lspci
    00:00.0 PCI bridge: Xilinx Corporation Device 0706
    01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 660 Ti] (rev a1)

    I ended up reducing the amount of address space the pcie node claims in the device tree. For my initial experiment, I reduced it by a factor of sixteen, from 0x10000000 to 0x1000000 (from 256 MB to 16 MB), by changing the ranges and reg lines:

    ranges = <0x2000000 0x0 0x60000000 0x60000000 0x0 0x1000000>;
    reg = <0x50000000 0x1000000>;
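
    For anyone else decoding those cells, here is my reading of the standard PCI ranges binding (double-check against the binding documentation):

    /* ranges = <phys.hi phys.mid phys.lo  cpu-addr  size.hi size.lo>;
       phys.hi 0x02000000 selects 32-bit non-prefetchable memory space,
       and PCI address 0x60000000 maps 1:1 onto CPU address 0x60000000
       with size 0x1000000 (16 MB). */
    ranges = <0x2000000 0x0 0x60000000 0x60000000 0x0 0x1000000>;
    /* reg is the bridge's own register window; shrinking it is what
       appears to make the ~256 MB vmap allocation at probe time go away */
    reg = <0x50000000 0x1000000>;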

    Obviously this won't work long-term, and most of the BARs fail to be assigned:

    pci 0000:00:00.0: BAR 0: can't assign mem (size 0x40000000)
    pci 0000:00:00.0: BAR 8: can't assign mem (size 0xc000000)
    pci 0000:00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
    pci 0000:01:00.0: BAR 1: can't assign mem pref (size 0x8000000)
    pci 0000:01:00.0: BAR 3: can't assign mem pref (size 0x2000000)
    pci 0000:01:00.0: BAR 0: can't assign mem (size 0x1000000)
    pci 0000:01:00.0: BAR 6: can't assign mem pref (size 0x80000)
    pci 0000:01:00.0: BAR 5: assigned [io  0x1000-0x107f]

    So I'll need to do more memory tuning to make as much memory as possible available, but at least this proves it can be done.
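
    My next experiment will probably split the difference: something like a 128 MB window with a matching vmalloc= bump on the kernel command line (untested, and the sizes here are just my next guess):

    /* hypothetical next attempt: 128 MB window and register space,
       paired with vmalloc=256M (or more) in bootargs */
    ranges = <0x2000000 0x0 0x60000000 0x60000000 0x0 0x8000000>;
    reg = <0x50000000 0x8000000>;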

    Thanks again for your suggestions, Ron -- they were invaluable.

  • Former Member over 9 years ago

    Hi Matt

    I'm basically trying to do EXACTLY what you are. If you are able to provide some pointers, please do.

    Did you confirm with NVIDIA that you can use a kernel version more recent than 3.13 to get the NVIDIA tool chain working on the Zynq SoC?

    Are you able to run CUDA on your platform now? Have a blog by any chance? :)

    Thanks!

  • Former Member over 9 years ago

    I am also seeing a similar failure using the axi-pcie core. I bumped my BAR 0 size from 16 MB to 256 MB, and that did not change the behavior:

    pci 0000:00:00.0: BAR 0: assigned [mem 0x60000000-0x6fffffff 64bit pref]
    pci 0000:00:00.0: BAR 8: no space for [mem size 0x00200000]
    pci 0000:00:00.0: BAR 8: failed to assign [mem size 0x00200000]
    pci 0000:01:00.0: BAR 9: no space for [mem size 0x00100000 64bit pref]
    pci 0000:01:00.0: BAR 9: failed to assign [mem size 0x00100000 64bit pref]
    pci 0000:01:00.0: BAR 0: no space for [mem size 0x00040000]
    pci 0000:01:00.0: BAR 0: failed to assign [mem size 0x00040000]
    pci 0000:02:01.0: BAR 9: no space for [mem size 0x00100000 64bit pref]
    pci 0000:02:01.0: BAR 9: failed to assign [mem size 0x00100000 64bit pref]
    pci 0000:03:00.0: BAR 0: no space for [mem size 0x00100000 64bit pref]
    pci 0000:03:00.0: BAR 0: failed to assign [mem size 0x00100000 64bit pref]

    Any advice?

  • Former Member over 9 years ago

    I too am seeing similar behavior. I'm trying to resolve this issue with Xilinx.

    Any updates?
