SBC Network Throughput

morgaine over 12 years ago

Our earlier lightweight CPU benchmarking provided some confidence that the various boards tested had no major performance faults and were working roughly in line with expectations, given their clock speeds and processor families.  Networking is an area of performance that either doesn't get measured much or is measured by ad hoc means that are hard to compare, and implementation anomalies are known to occur occasionally.

 

To try to put this on a more quantitative and even footing, I've picked a network measurement system with an extremely long pedigree, the TTCP family of utilities.  This has evolved from the original "ttcp" of the 1980s through "nttcp" and finally into "nuttcp".  It has become a very useful networking tool: simple to use with repeatable results, open source, cross-platform, and it works over both IPv4 and IPv6.  It's in the Debian repository, and if the O/S to be tested doesn't have it then it can be compiled from sources just by typing 'make' on the great majority of systems.  (I cross-compiled it for Angstrom.)
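For anyone wanting to reproduce this, getting nuttcp onto a board looks roughly like the following (the package name is the real Debian one; the cross-compiler prefix is just an illustrative example, and the last line assumes the Makefile honours CC):

    # Debian / Raspbian:
    sudo apt-get install nuttcp

    # or from the source tree, on most systems:
    cd nuttcp-6.1.2 && make

    # cross-compiling for Angstrom (illustrative toolchain prefix):
    make CC=arm-angstrom-linux-gnueabi-gcc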

 

Usage is extremely simple.  A pair of machines is required to test the link between them.  One is nominated the 'server' and has "nuttcp -S" executed on it, which turns it into a daemon running in the background.  The other is nominated the 'client', and all tests are run from it regardless of the desired direction.  The two most common tests to run on the client are a Transmission Test (Tx) using "nuttcp -t server" and a Reception Test (Rx) using "nuttcp -r server", both executed on the client with the hostname or IP address of the 'server' given as the argument.
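Concretely, a test session looks like this (192.168.1.10 stands in for the server's address):

    # on the 'server':
    nuttcp -S                  # runs as a background daemon

    # on the 'client':
    nuttcp -t 192.168.1.10     # Tx test: client -> server
    nuttcp -r 192.168.1.10     # Rx test: server -> client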

 

These simple tests transfer data at the maximum rate in the specified direction over TCP (by default) for an interval of approximately 10 seconds; on completion the measured throughput is reported in Mbps, for easiest comparison with the rated Mbps speed of the link.  Here is a table showing my initial tests, executed on various ARM client boards through a gigabit switch, with the server (nuttcp -S) running on a 2.33GHz Core2 Duo machine with a gigabit NIC.  The final set of results was obtained between the Core2 Duo and an old Xeon server over a fully gigabit network path, just to confirm that the Core2 Duo wasn't the bottleneck in the ARM board tests.

 

 

Max theoretical TCP throughput over 100Mbps Ethernet is 94.1482 Mbps with TCP timestamps, or 94.9285 Mbps without.

For fairness, rows are ordered by four attributes: 1) Fast or Gigabit Ethernet, 2) TCP timestamps (TS) or not, 3) ARM clock frequency, 4) Rx speed.
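For reference, both ceilings fall straight out of the standard Ethernet/IP/TCP framing overheads:

    TCP payload per frame (with timestamps)  = 1500 - 20 (IP) - 20 (TCP) - 12 (TS option) = 1448 bytes
    TCP payload per frame (without)          = 1500 - 20 (IP) - 20 (TCP)                  = 1460 bytes
    bytes on the wire per frame              = 1500 + 38 (Ethernet header/FCS + preamble + interframe gap) = 1538 bytes

    with timestamps:    100 Mbps x 1448/1538 = 94.1482 Mbps
    without timestamps: 100 Mbps x 1460/1538 = 94.9285 Mbps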

 

| Submitter | Rx Mbps | Tx Mbps | Client Board | SoC | MHz | Limits | O/S, kernel, driver |
|-----------|---------|---------|--------------|-----|-----|--------|----------------------|
| selsinork | 30.60 | 17.28 | 233-OLinuXino | i.MX23 ARM926 | 233 | No TS | ArchLinux 3.7.2-2 |
| morgaine | 93.84 | 72.82 | RPi Model B | BCM2835 | 700 | | Raspbian 3.1.9+ #272 |
| morgaine | 93.84 | 93.75 | BB (white) | AM3359 | 720 | | Angstrom v2012.01, 3.2.5+ |
| Tim.Annan | 94.14 | 91.74 | Gumstix Pepper | AM3359 | 600 | 100M mode | Yocto 9.0.0 Dylan, 3.2 |
| morgaine | 93.82 | 76.94 | RPi Model B | BCM2835 | 800 | | Raspbian 3.1.9+ #272 |
| morgaine | 93.82 | 78.71 | RPi Model B | BCM2835 | 800 | 7/2012 u/s | Raspbian 3.6.11+ #545 |
| morgaine | 94.14 | 78.87 | RPi Model B | BCM2835 | 800 | 9/2013 u/s | Raspbian 3.6.11+ #545 |
| morgaine | 93.80 | 93.75 | BBB | AM3359 | 1000 | | Angstrom v2012.12, 3.8.6 |
| selsinork | 93.92 | 94.46 | Cubieboard2 | A20 | 912 | VLAN TS | Debian 7.1, 3.3.0+ |
| morgaine | 94.16 | 94.14 | BBB | AM3359 | 1000 | | Debian 7.0, 3.8.13-bone20 |
| selsinork | 94.33 | 94.55 | Cubieboard2 | A20 | 912 | No TS | Debian 7.1, 3.3.0+ |
| selsinork | 94.91 | 94.90 | BBB | AM3359 | 1000 | No TS | Angstrom 3.8.6 |
| selsinork | 94.94 | 94.91 | i.MX53-QSB | i.MX53 | 996 | No TS | 3.4.0+ |
| selsinork | 243.30 | 454.88 | Sabre-Lite | i.MX6 | 996 | No TS | 3.0.15-ts-armv7l |
| Tim.Annan | 257.79 | 192.22 | Gumstix Pepper | AM3359 | 600 | Gbit mode | Yocto 9.0.0 Dylan, 3.2 |
| notzed | 371.92 | 324.49 | Parallella-16 | Zynq-70x0 | 800 | | Ubuntu Linaro |
| selsinork | 525.18 | 519.41 | Cubietruck | A20 | 1000 | No TS | LFS-ARM 3.4.67 + gmac |
| selsinork | 715.63 | 372.17 | Minnowboard | Atom E640 | 1000 | No TS | Angstrom 3.8.13-yocto |
| morgaine | 725.08 | 595.28 | homebuilt | E6550 | 2330 | PCI 33MHz | Gentoo 32-bit, 3.8.2, r8169 |
| selsinork | 945.86 | 946.38 | homebuilt | E8200 | 2666 | PCIe x1 | 32-bit, 3.7.0, e1000 |

 

 

In addition to the results displayed in the table, I also ran servers (nuttcp -S) on all my boards and kicked off transfers in both directions from the x86 machine, and then followed that with board-to-board transfers, just to check that the choice of clients and servers was not affecting results.  It wasn't: the results are very repeatable regardless of the choice, with throughput always limited by the slowest machine for the selected direction of transfer.  Running the tests multiple times showed variations typically under 0.5%, probably a result of occasional unrelated network and/or machine activity.

 

The above measurements were performed over IPv4.  (See below for IPv6.)
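For an IPv6 run the invocation is the same apart from the address family; a sketch, assuming the -6/-4 selector flags present in recent nuttcp builds:

    nuttcp -6 -t server     # IPv6 transmission test
    nuttcp -4 -t server     # force IPv4 explicitly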

 

Hint:  You can run nuttcp client commands even if a server is running on the same machine, so the most flexible approach is to execute "nuttcp -S" on all machines first, and then run client commands on any machine from anywhere to anywhere in any direction.
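For example, with a server started on every box (the hostnames here are just placeholders):

    # on bbb, rpi and the x86 box alike:
    nuttcp -S

    # then, from any of them:
    nuttcp -t rpi     # this machine -> rpi
    nuttcp -r bbb     # bbb -> this machine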

 

Initial observations:  The great uniformity in BeagleBone network throughput (both White and Black) stands out, and is clearly not affected by CPU clock speed.  The Raspberry Pi Model B clearly has a problem on transmit (now confirmed to be limited by CPU clock) --- I'll have to investigate this further after upgrading my very old Raspbian version.  And finally, my x86 machinery and/or network gear is clearly operating far below the rated gigabit speed of the equipment --- this will require urgent investigation and upgrades, especially of NIC bus interfaces.

 

Confirmation or refutation of my figures would be very welcome, as would extending the tests to other boards and O/S versions.

 

Morgaine.

 

 

Addendum:  Note about maximum theoretical throughput added just above the table after analysis in thread below.

    Former Member over 12 years ago

    I'll try this out on some of the Arm boards I have over the weekend.

     

On your x86 machines' low figures: gigabit wire speed is a much harder thing to accomplish.  You'd need to investigate Ethernet packet sizes, switch capability, and things like whether your Ethernet adapter is PCI or PCIe, and whether it's a PCIe x1, x4, etc.  Also, for example, I've seen various problems with onboard Realtek 8169-series gigabit chips in the past, so you would have to look at the chip, the kernel and the driver versions too.

Typical cheap 'desktop' gigabit adapters will often struggle; server adapters are better, but come at a much increased cost and usually require a type of slot that's not available in a desktop.
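As a quick way of checking those adapter details on Linux (a sketch using standard tools; 'eth0' is just an example interface name):

    lspci | grep -i ethernet     # which chip, and how it's attached
    ethtool -i eth0              # driver name and version in use
    ethtool eth0                 # negotiated link speed and duplex
    dmesg | grep -i eth0         # link-width lines like the e1000e ones below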

     

    As you can imagine this becomes even more of a problem when you have a 10G card in your machine.

     

Anyway, as a first step, here are some results from a 32-bit x86 machine (kernel 3.7.0) to a 64-bit x86 machine (kernel 3.8.0):

     

    1128.4432 MB /  10.01 sec =  945.8611 Mbps 3 %TX 11 %RX 0 retrans 0.26 msRTT

    1129.1255 MB /  10.01 sec =  946.3785 Mbps 3 %TX 15 %RX 0 retrans 0.25 msRTT

     

    client machine has the following NIC

    [4.540835] e1000e 0000:02:00.0 eth0: (PCI Express:2.5GT/s:Width x1)
    [4.540978] e1000e 0000:02:00.0 eth0: Intel(R) PRO/1000 Network Connection
    [4.541152] e1000e 0000:02:00.0 eth0: MAC: 1, PHY: 4, PBA No: D50854-003

     

    Asus P5Q with 4GB ram &

    Intel(R) Core(TM)2 Duo CPU E8200  @ 2.66GHz

     

     

    Server

    [1.200887] e1000e 0000:20:00.0 eth1: (PCI Express:2.5GT/s:Width x4)
    [1.201103] e1000e 0000:20:00.0 eth1: Intel(R) PRO/1000 Network Connection
    [1.201318] e1000e 0000:20:00.0 eth1: MAC: 0, PHY: 4, PBA No: D51930-006

     

    HP ML110 G6

    Intel(R) Core(TM) i7 CPU     870  @ 2.93GHz

     

    There's a Cisco SG300-52 switch sitting between them.

     

The OS isn't particularly meaningful; once upon a time the client was Slackware, but virtually everything on it has been replaced with self-compiled stuff.  The server is entirely self-compiled.

     

Both TX and RX results drop to ~932Mbps when using IPv6.

    morgaine over 12 years ago in reply to Former Member

    Awesome info, thanks selsinork!

     

    The big problem is trying to summarize the info that most affects the results, so I've added a column "Limits" for information on anything that may be a significant limit or constraint on the throughput measured.  For example the bus through which the NIC operates can be a major limiting factor, and in my case is a known constraint on gig-to-gig transfers since my server's D-Link DGE-528T card is plugged into a lowly PCI slot running at 33MHz --- adequate at 100Mbps but certainly not at gigabit speeds.

     

I've added your Asus machine as "client" in the table, and have made the assumption that your two nuttcp output lines correspond to client Rx and Tx throughput, in that order.  (Not that there is any huge difference, of course.)

     

    I have machines with on-motherboard gigabit Ethernet  too (typically the NIC is on a PCIe channel of one or more lanes), so I'll try to find a box that's more suitable for network throughput tests and rerun everything.

     

This is an interesting area of testing, since not only will we gain a better understanding of our ARM boards, but we'll also improve our home networks as the bottlenecks show up in the numbers.  I intend to update any entry that is affected by improvements at the server end or in network infrastructure, since each entry is intended to quantify the client's Rx/Tx throughput only (the server and network are assumed faster).  This will be an asymptotic process, getting progressively closer to valid measurements for the SBCs.

     

As the engineering mantra says, you don't really know something until you measure it.

     

     

PS.  Disappointingly, upgrading the Pi kernel and firmware with a whole year's improvements (now on 3.6.11+ #545) has only increased the Pi's internally-constrained Tx throughput by 2.3%.  No other variables were modified prior to running the new tests.  New table entry added.

    Former Member over 12 years ago in reply to morgaine

    Morgaine Dinova wrote:

     

    The big problem is trying to summarize the info that most affects the results,

    Yeah, and trying is probably a pointless task. Think of it more as a set of starting points to look at when you have a result that seems out of place.

     

I've added your Asus machine as "client" in the table, and have made the assumption that your two nuttcp output lines correspond to client Rx and Tx throughput, in that order.  (Not that there is any huge difference, of course.)

Yes, receive then transmit; it would probably be nice if the output reflected which direction.  The other bit of info provided in the output that's likely to be interesting/relevant is the transmitter and receiver CPU utilisation when running these tests.  Being able to do the full 100Mbps in a synthetic test is one thing, but if it takes 100% CPU to do it then it's not realistic to expect to sustain that while the system is doing other things, like reading the file it's transferring from disk.
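For anyone reading the raw output, the fields in a line like

    1128.4432 MB /  10.01 sec =  945.8611 Mbps 3 %TX 11 %RX 0 retrans 0.26 msRTT

break down as: data transferred / elapsed time = throughput; %TX and %RX are the transmitter and receiver CPU utilisation; retrans is the TCP retransmission count; and msRTT is the measured round-trip time in milliseconds.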

    I have machines with on-motherboard gigabit Ethernet  too (typically the NIC is on a PCIe channel of one or more lanes), so I'll try to find a box that's more suitable for network throughput tests and rerun everything.

In my experience it's unusual for onboard gigabit NICs on desktop-class boards to use anything other than x1, and usually the cheapest chip they can find; even on high-end servers it's often only x2.  Multi-port add-in cards and 10G cards can be different, of course.

    Former Member over 12 years ago in reply to morgaine

    Morgaine Dinova wrote:

    For example the bus through which the NIC operates can be a major limiting factor, and in my case is a known constraint on gig-to-gig transfers since my server's D-Link DGE-528T card is plugged into a lowly PCI slot running at 33MHz --- adequate at 100Mbps but certainly not at gigabit speeds.

I was curious about this: 33MHz PCI has a theoretical max bandwidth of 132MB/s, which ought to be enough for a gigabit network in one direction at least.

     

    So, with a 33MHz PCI adapter using an Intel 82540EM which is a desktop adapter I get the following:

     

    root@solos2:~/nuttcp-6.1.2# ./nuttcp-6.1.2 -t 1.2.3.4

    1092.4375 MB /  10.01 sec =  915.5943 Mbps 7 %TX 15 %RX 0 retrans 0.23 msRTT

    root@solos2:~/nuttcp-6.1.2# ./nuttcp-6.1.2 -r 1.2.3.4

    1088.5742 MB /  10.00 sec =  912.7412 Mbps 4 %TX 28 %RX 0 retrans 0.19 msRTT

     

Trying to run a transmit and a receive instance simultaneously gives:

    root@solos2:~/nuttcp-6.1.2# ./nuttcp-6.1.2 -t 1.2.3.4

      832.2246 MB /  10.01 sec =  697.3698 Mbps 7 %TX 12 %RX 0 retrans 0.27 msRTT

    root@solos2:~/nuttcp-6.1.2# ./nuttcp-6.1.2 -r 1.2.3.4

      278.1735 MB /  10.00 sec =  233.2731 Mbps 1 %TX 15 %RX 0 retrans 1.94 msRTT

     

so overall throughput is roughly the same, and obviously limited by PCI bandwidth.

     

The system I have this card in is an Atom 230 @ 1.6GHz, which is significantly slower than your E6550-based system, but it does at least prove that the card/driver could be a factor, as the bus itself is theoretically capable of more.
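A rough sanity check on the bus numbers (nominal figures, ignoring PCI protocol overhead):

    32-bit PCI @ 33 MHz: 4 bytes x 33.33 MHz ≈ 133 MB/s ≈ 1067 Mbps, shared by both directions
    simultaneous Tx + Rx above: 697.4 + 233.3 ≈ 931 Mbps ≈ 116 MB/s of payload

so the combined figure is plausibly as much as a shared 33MHz PCI bus can sustain once arbitration and descriptor traffic are added.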

    morgaine over 12 years ago in reply to Former Member

Very good points here.  It seems we need a 'nutpci' utility to determine the real throughput of PCI.

     

    Next time I'm opening machines I'll do some additional testing using the on-board PCIe NICs for comparison.

    morgaine over 12 years ago in reply to Former Member

    Further to your various comments about ARM SoCs with gigabit MACs widely having trouble obtaining good throughput over this fast medium, Olimex's latest blog post on "A20-SOM Update"  confirms this for A20 as well:

     

    OLIMEX Ltd wrote:

     

What is a bit disappointing is that Gigabit Ethernet can't achieve more than 330Mbit/s transfer with the current state of the drivers; there were a few comments that this may improve a bit later, but ARMs seem not quite capable with Gigabit interfaces; if you read about other popular ARMs like the i.MX6, they also can't reach the full throughput of Gigabit Ethernet.

     

So it seems to be a fairly common issue.  I wonder if there's any ARM SoC at all with gigabit throughput in the 900s of Mbps?  Maybe we should ask ARM whether there is any relevant limit in the bus fabric or CPU of modern ARMs at this speed, or whether the measured limitations are entirely due to peripheral issues in the licensees' SoCs.

     

    (I wouldn't expect an answer though.  ARM aren't known for their openness or willingness to engage the community.)

    Former Member over 12 years ago in reply to morgaine

    Morgaine Dinova wrote:

     

So it seems to be a fairly common issue.  I wonder if there's any ARM SoC at all with gigabit throughput in the 900s of Mbps?

I think it could be interesting to get a PCIe network card with known functional drivers connected to some Arm board that has PCIe.  That might give some idea of whether it's just a driver issue or something more fundamental.  The errata for the i.MX6 suggested that it was a limitation on an internal bus somewhere, but didn't give much more detail than that.

     

I may have to try and see if I have a PCIe NIC that'll work without the differential clock available.  That was my original problem with the Sabre Lite: the PCIe header doesn't include the clock, so you'd have to add an external chip to provide it.  I could find buffers for an existing clock, but not a clock generator.

    morgaine over 12 years ago in reply to Former Member

    I've described our data gathering over in the Parallella forum under Technical Q & A: "Ethernet throughput measurements".  It's possible that some of those who have already received their Parallella boards will want to measure their Ethernet throughput and contribute their results, which could be very interesting.

     

    Since the Zynq is not a low-end mass market SoC and has some very fast internal buses, I think there's a very good chance that this might be the first ARM device encountered that approaches the maximum theoretical throughput for gigabit Ethernet.

     

Anyway, we'll see.

    morgaine over 12 years ago in reply to morgaine

    We now have our first data pair for the Parallella's Zynq over gigabit Ethernet, courtesy of notzed.

     

There were some severe networking problems encountered during notzed's tests, though, and I expect that these figures will rise to over 400Mbps once those issues are resolved.  Andreas posted in that thread a link to Xilinx Application Note XAPP1082, in which Ethernet throughput is specified over the full range of frame payload sizes; it tops out at just over 400Mbps without jumbo frames --- a throughput also seen by notzed in one test.  A different test utility is being used, though.

     

    At this point I haven't yet understood the test conditions in that document properly, so I'm not sure if we're comparing like with like yet.  Anyway, this is a start, and any measurements are better than none.

    Former Member over 12 years ago in reply to morgaine

    Morgaine Dinova wrote:

    A different test utility is being used though.

netperf is probably the better tool to use; it's more widely known and therefore easier to compare results.  It's also a lot more complex.

     

nuttcp, on the other hand, is simple to build, doesn't appear to have any dependencies, is simple to use, etc.  Probably the biggest drawback is that we can't be sure the results compare properly with netperf.

    michaelkellett over 12 years ago in reply to morgaine

    @Morgaine,

     

    I can't quite match your reading of the Xilinx app note.

     

On page 13, Fig. 10, the inbound performance is 679 Mbps for MTU = 1500,

On page 12, Fig. 9, they show 683 Mbps for what I take to be a standard setup in terms of MTU.

     

I can't find your 400Mbps figure.

     

    MK

    Former Member over 12 years ago in reply to morgaine

    Morgaine Dinova wrote:

     

    We now have our first data pair for the Parallella's Zynq over gigabit Ethernet, courtesy of notzed.

Just had a quick read of that, but as a suggestion, make sure that any sort of powersave is turned off and that the CPU frequency governors at both ends are forced to performance.  I've seen variations in results where one end has a governor slowing down the clock because it sees little demand.
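A minimal way to do that on most Linux boards, assuming the standard cpufreq sysfs interface (or cpufrequtils) is available:

    # as root: pin every core to the performance governor for the duration of the test
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$g"
    done

    # or, where cpufrequtils is installed:
    cpufreq-set -g performance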

    morgaine over 12 years ago in reply to michaelkellett

    Michael Kellett wrote:

     

    I can't quite match your reading of the Xilinx app note.

     

On page 13, Fig. 10, the inbound performance is 679 Mbps for MTU = 1500,

On page 12, Fig. 9, they show 683 Mbps for what I take to be a standard setup in terms of MTU.

     

I can't find your 400Mbps figure.

     

It's from the graphs of throughput versus message size on pages 10 and 11, respectively for Processing System (PS) and Programmable Logic (PL) throughput as defined on page 1.  The AXI Ethernet bar graphs on page 13 may provide the more relevant figures (I hope so!), but I'm still figuring it out at this point.  I certainly expected far better figures than we saw for the gigabit i.MX6 and AM3359, because the Zynq is designed with high-speed internal buses and for applications which need to feed the FPGA at a high rate.

     

    Note that notzed's figures are nowhere near the 680 Mbps mark, although that could be due to a local fault.

     

     

    Addendum:  The Xilinx App Note says on page 2:

     

    Figure 1 shows the various Ethernet implementations on the ZC706 board.

Note:  The three Ethernet links cannot be active at the same time because the ZC706 board offers only one SFP cage for the 1000BASE-X PHY. The PS-GEM0 is always tied to the RGMII Marvell PHY. The PS-GEM1 and the PL Ethernet share the 1000BASE-X PHY, so only two Ethernet links can be active at a given time.

     

    Unfortunately I haven't yet been able to figure out from the Gen1 reference and schematic which of the three is actually in use on Parallella.

    michaelkellett over 12 years ago in reply to morgaine

Thanks.  BTW, the page 11 graph isn't using jumbo frames, just increasing the message size using normal-sized frames.  The Xilinx note about the dip in performance confirms this.  The max performance still isn't that good.

     

    MK
