Single-Board Computers Forum
Forum Thread Details
  • Replies: 69
  • Subscribers: 60
  • Views: 6202
  • Tags: zynq, xilinx, parallella, epiphany, cortex-a9, adapteva, arm
Parallella $99 board now open hardware on Github

morgaine
morgaine over 12 years ago

It's probably spreading everywhere like wildfire, but I just read on Olimex's blog that Adapteva's Parallella Kickstarter board now has almost all of its development materials on Github in Parallella and Adapteva repos, and is officially being launched as open hardware.

 

The 16-core board is priced at US$99 and its host ARM is a dual-core Cortex-A9 (Xilinx Zynq 7010 or 7020).  It comes with 1GB DDR3, host and client USB, native gigabit Ethernet and HDMI, so at that price this would be a fairly interesting board even without its 16-core Epiphany coprocessor.  (There's a 64-core version planned too.)  For more details see the Parallella Reference Manual.

 

This has all the makings of a pretty fun board.  I hope Element 14 has one eye open in that direction.

 

Morgaine.

 

 

PS. Note the 4 x Parallella Expansion Connectors (PEC) on the bottom of the board, illustrated on page 19 of the manual and documented on page 26.  They look very flexible for projects, providing access to both Zynq and Epiphany resources.


Top Replies

  • michaelkellett
    michaelkellett over 11 years ago in reply to johnbeetem +2
    I wonder why in these discussions so many people overlook Lattice. Easily the most fun FPGA company and they DO have FPGAs in phones. Their Ultra Low Density approach fits well with John's definition of…
  • Former Member
    Former Member over 12 years ago +1
    Morgaine Dinova wrote: PS. Note the 4 x Parallella Expansion Connectors (PEC) on the bottom of the board, illustrated on page 19 of the manual and documented on page 26. They look very flexible for projects…
  • morgaine
    morgaine over 12 years ago in reply to Former Member +1
    selsinork wrote: I've wondered about these for a while.. 16 or 64 cores of a specialised processor that probably can't run linux or other general purpose OS makes it highly niche. If they sell many of…
  • morgaine
    morgaine over 11 years ago

    Although Adapteva are still fulfilling their Kickstarter commitment, their shop is already open for preorders of the 16-core Epiphany board for November delivery.  Three options appear to be available:

     

     

    Board Model      GPIO         Xilinx Device   Price
    Parallella-16    No GPIO      Zynq-7010       $99
    Parallella-16    With GPIO    Zynq-7010       $119
    Parallella-16    With GPIO    Zynq-7020       $199

     

     

    If "No GPIO" means none, zero, zilch, that doesn't appear very enticing, I must say.  If this describes the situation accurately, the range of application of the basic board will be a lot narrower than expected.  And if the Zynq-7020-based Parallella-16 costs $199, then the price of the Parallella-64 is probably going to be very unfriendly.

  • Former Member
    Former Member over 11 years ago in reply to morgaine

    Morgaine Dinova wrote:

     

    If "No GPIO" means none, zero, zilch, that doesn't appear very enticing, I must say.  If this describes the situation accurately, the range of application of the basic board will be a lot narrower than expected.  And if the Zynq-7020-based Parallella-16 costs $199, then the price of the Parallella-64 is probably going to be very unfriendly.

    Given there's an 'optional upgrade' for the GPIO connectors it seems likely that the difference is simply down to installing the connectors.  Any volunteers to hand-solder four of those?

     

    In some ways you can see the reasoning: not having them won't prevent you from doing software things on the Epiphany processor.  If you really want GPIO and don't care so much about the Epiphany, there are probably better boards.

     

    Am I correct in thinking that the only difference between the 7010 and 7020 is more FPGA space?  If so, what's this board really meant to be, a dev board for parallel processing on the Epiphany, or an FPGA dev board?

  • morgaine
    morgaine over 11 years ago in reply to Former Member

    selsinork wrote:

     

    what's this board really meant to be, a dev board for parallel processing on the Epiphany, or an FPGA dev board?

     

    If Adapteva had asked themselves that question very clearly and seriously, I suspect that Parallella would have a very different design and a very different cost.  As it stands, the main effect of the board will be to promote the Zynq range to a far greater number of people than Xilinx would normally expect, but to a far smaller number of people than Adapteva would like as an audience for Epiphany.

     

    It's a design from heaven ... for Xilinx.

  • michaelkellett
    michaelkellett over 11 years ago in reply to morgaine

    I agree that the choice of Zynq seems a bit odd - is it that the Epiphany chip is not seriously usable without the support of a 'big' ARM processor and an FPGA to glue them together?

     

    I just had a quick look at their website and it seemed that all their application diagrams showed the Epiphany connected to an FPGA.

     

    MK

  • Former Member
    Former Member over 11 years ago in reply to morgaine

    Morgaine Dinova wrote:

     

    As it stands, the main effect of the board will be to promote the Zynq range to a far greater number of people than Xilinx would normally expect, but to a far smaller number of people than Adapteva would like as an audience for Epiphany.

    My feeling was always that Epiphany would have a very small audience. Both the RPi and BBB have shown there's a market for a low priced board, but that a large section of the buyers are thinking 'media center'.

     

    Price-wise, $99 doesn't compare well.  The circuitco page is showing 74020 BBBs shipped as of today.  So you have to wonder if the Parallella can ship enough to get to Michael's prices for 100K Zynq devices.  Maybe they can, or maybe Xilinx have given them a good deal up-front, but either way I feel you're right and the Zynq will end up overshadowing the Epiphany.

  • morgaine
    morgaine over 11 years ago in reply to Former Member

    selsinork wrote:

     

    Price-wise, $99 doesn't compare well.  The circuitco page is showing 74020 BBBs shipped as of today.  So you have to wonder if the Parallella can ship enough to get to Michael's prices for 100K Zynq devices.  Maybe they can, or maybe Xilinx have given them a good deal up-front, but either way I feel you're right and the Zynq will end up overshadowing the Epiphany.

     

    Whereas if Adapteva had mounted the Epiphany on a barebones Arduino shield or BeagleBone cape or Pi plate with minimal glue logic, the board could have ridden the huge wave of established ARM and AVR enthusiast communities, and at Pi-type prices.  This seems clear from our ballpark cost examination above.

     

    Note that if 100k volumes would make Zynq prices plummet, they would do the same to the cost of the Epiphany chip, and so the price imbalance would remain.  Volume does not change the overall picture of a fundamentally misplaced choice of host pairing.  And with the greater volumes, Adapteva would even be making a profit in this early experimental phase, instead of having to pay the bulk of proceeds from sales to Xilinx.

     

    Just imagine if the Pi or BBB contained an additional component that is many times as expensive as their main SoCs.  The "Pi price niche" would not have materialized, and hence neither would have the enthusiastic mass adoption of Pi and the boards that followed it.  Adapteva may have made a big mistake.

  • morgaine
    morgaine over 11 years ago in reply to michaelkellett

    Michael Kellett wrote:

     

    is it that the Epiphany chip is not seriously usable without the support of a 'big' ARM processor and an FPGA to glue them together?

     

    Epiphany has no dependency on its host processor except as a means of downloading code to it and then supplying the data for that code to process.  The Epiphany cores have neither instruction set nor architectural similarity to ARM and can work well with any host system that can supply the code and data sufficiently quickly.  There is no single definition for "sufficiently quickly" here --- in parallel programming, that is always going to be application-dependent.
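
    For concreteness, the host side of Adapteva's own eSDK boils down to a short load/feed/collect sequence along these lines.  This is a rough sketch from memory of the e-hal C API rather than something checked against the current headers, and the kernel name, buffer size and local-memory offset are just placeholders:

        #include <e-hal.h>              /* Adapteva eSDK host-side library */

        int main(void)
        {
            e_platform_t platform;
            e_epiphany_t dev;
            int data[16] = {0};         /* work for the Epiphany to process */

            e_init(NULL);               /* bring up the e-hal layer */
            e_reset_system();
            e_get_platform_info(&platform);

            /* Open a 1x1 workgroup (core (0,0)) and load a hypothetical kernel. */
            e_open(&dev, 0, 0, 1, 1);
            e_load("my_kernel.srec", &dev, 0, 0, E_TRUE);

            /* Push input data into the core's local memory, wait for the
             * kernel to signal completion, then read the results back. */
            e_write(&dev, 0, 0, 0x2000, data, sizeof(data));
            /* ... poll a result flag or mailbox here ... */
            e_read(&dev, 0, 0, 0x2000, data, sizeof(data));

            e_close(&dev);
            e_finalize();
            return 0;
        }

    Everything else - what the cores actually do with the data - lives in the Epiphany-side program, which is why the host really only needs adequate bandwidth, not any architectural kinship with Epiphany.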

     

    The Zynq's two Cortex-A9 cores are each not hugely faster than our BBB's Cortex-A8, and they're being clocked at only 800MHz according to the Parallella Gen-1 Reference Manual, so computationally they're each pretty much in the same speed ballpark as the BBB's single core.  In other words, the Zynq's ARM is not providing any significant speed advantage over much cheaper devices, at least per core.  Presumably both cores can feed the Epiphany simultaneously, and if so then there is a throughput advantage gained by using a dual-core ARM over a single-core device.  There is no shortage of multi-core ARMs these days though, and most of them are probably cheaper than the Zynq because they're made for the consumer market.  (*)

     

    The host CPU(s) aside, there is also the question of interfacing to the Epiphany.  I expect that the reason Adapteva have chosen to interface through an FPGA is that they haven't yet tied down the optimum way to interface to Epiphany.  After all, if they knew this already then there would be no need to use programmable logic and the extra costs associated with it.  Choosing a host SoC that combines ARM and FPGA looks like an ideal combination for their purposes, but this is true only if the cost of combining the two functions doesn't adversely affect the desired goal of bringing Epiphany into widescale use.  It seems to me that the Xilinx device is doing exactly that, because its huge price must be limiting Epiphany uptake.

     

    The FPGA design files have been made available by Adapteva so we'll be able to see what the interfacing requirements of Epiphany really are --- see the quite detailed "Parallella Platform Reference Design" white paper which also includes a (broken) link to the HDL, now available here.  Given that knowledge, I bet that a much more cost-effective host design can be found, one that can bring Epiphany to a far wider audience, more cores at a given price, and higher profits for Adapteva through focusing more strongly on their own chips rather than on costly distractions.

     

    ===

     

    (*) Addendum:  For example, the unit price of the dual-core Allwinner A20 is 12 euro from Olimex, and 9.60 for 50+.

  • michaelkellett
    michaelkellett over 11 years ago in reply to morgaine

    The Epiphany is a sort of co-processor - it doesn't have peripherals of its own so it's always likely to need glue logic to fit it into a system that does anything useful. The really nice thing about the Zynq is the tightness and quality of the coupling between the FPGA and the ARM cores - nothing else (that I know of) comes close. So if you want the Epiphany to do the hard work for an ARM, the Zynq is about the best solution on offer (in terms of performance), so it's a reasonable choice if the main goal is to show off the E at its best.

     

    The problem with a great many low-cost ARM (and other) processors (the Broadcom and Allwinner A20s included) is that they don't offer low-latency, high-bandwidth data paths in and out of the core. You might get SATA, PCIe, etc. with quite reasonable bandwidth but awful latency.

     

    So if you want the Epiphany to do a lot of talking back and forth with the rest of the system you'll need an FPGA and if you want that to communicate well with the processor the Zynq is as good as it gets.

     

    The A20 at about £8 with a £10 FPGA isn't going to be a lot cheaper than the Zynq but the performance will be a lot worse.

     

    So having thought about it a bit more I think the Zynq does make sense - it lets their board address the widest range of applications and probably will enable them to demonstrate just how fast their chip can go.

     

    MK

  • morgaine
    morgaine over 11 years ago in reply to michaelkellett

    Michael Kellett wrote:

     

    The Epiphany is a sort of co-processor - it doesn't have peripherals of its own so it's always likely to need glue logic to fit it into a system that does anything useful. The really nice thing about the Zynq is the tightness and quality of the coupling between the FPGA and the ARM cores - nothing else (that I know of) comes close. So if you want the Epiphany to do the hard work for an ARM, the Zynq is about the best solution on offer (in terms of performance), so it's a reasonable choice if the main goal is to show off the E at its best.

     

    But the aim is not to accelerate one specific ARM SoC.  Adapteva shouldn't care about one ARM or another, only about their Epiphany device, because that is where their fortunes lie.  The Zynq is really only a cost and a burden on the road to promoting Epiphany, assuming that other reasonable options are available.  That is the question I am considering.  It's an engineering mistake to say "No" in advance of knowing the interfacing and throughput requirements, just because Zynq is a leader in ARM-FPGA integration.

     

    The FPGA in the Zynq undoubtedly has a maximum throughput vastly exceeding that of the ARM cores, based on our background knowledge of typical FPGAs.  This means that the Zynq's dual Cortex-A9 cannot be anywhere near optimum for feeding data through the interface at the highest rate the FPGA can probably sustain.  Because of the Zynq's AXI bus (shown in the white paper I linked in post #24), the Zynq is probably very efficient at this, but in the end the data is still being generated by a pair of lowly Cortex-A9 cores clocked at 800MHz.  The AXI bus reduces bottlenecks but it can't speed up the ARMs.

     

    To turn this into an engineering analysis, what we need to know are the Epiphany's interfacing abilities and maximum I/O throughput.  For example, if it can be fed only by a single external data source at a time then a dual-core host will not improve matters (other than by being able to dedicate a core to that task).  At the other end of the spectrum, if all 12 of Epiphany-16's boundary cores can be fed simultaneously then clearly a dual-core host is barely going to scratch the surface of maximum data throughput.  In addition, and orthogonal to the issue of Epiphany's I/O parallelism, one also needs to know the maximum rate at which external data can be fed into Epiphany over each path --- if it's less than a Cortex-A9 core can deliver or significantly greater then there is no particular benefit in using this particular host processor from a throughput perspective.  It's all in the details, and can't be judged in advance of knowing them.

     

    On top of all the above, MIMD multiprocessors are notorious for having a throughput that is completely determined by the running application, as I know from personal experience.  First of all this is a function of the available parallelism in the problem, secondly it is strongly influenced by the details of the software implementation, and thirdly it is at the mercy of communication and synchronization and read and write throughput within the array.  The combination of these will unavoidably mean that the chosen host is optimum for only a tiny fraction of the very broad range of problems to which Epiphany can be applied.  "Zynq is best" would be an unjustified statement.

     

    And finally, many compute-bound problems require a large amount of MIMD processing but only occasional communication outside of the processing array, and for these a Pi, BBB or even an Arduino could suffice as host.  The sheer number of these boards in circulation would make Epiphany an overnight success if the approach taken had been to create simple daughterboards rather than the approach taken with Parallella.

     

    ===

     

    Addendum:  Looking at the Epiphany E16G301 datasheet, page 1 shows that each of the four eLinks connects directly to the four cores on that side of the array, so by accident it appears that my guess was right that the 12 boundary cores around the periphery of the array have direct connections brought out on the BGA (assuming that the diagram reflects physical reality of course).  The internal eMesh network allows any core to be reached from any eLink, but the cores that are directly connected have the fastest access whereas the others have to be routed from core to core internally, which is slower.  The diagram on page 6 shows the maximum throughput of each eLink to be 2 GB/s and hence the maximum aggregate external throughput of the chip is 8 GB/s.  (The Features bulletpoint list actually says 6.4GB/s, so maybe the 8GB/s includes framing overheads.)

     

    I'm continuing to read the docs.  My initial gut feeling is that two 800MHz ARM cores have no chance of feeding the four eLinks at max Epiphany data rate, and therefore the choice of Parallella host SoC doesn't have the aim of doing that.  To keep the Epiphany boundary cores from going hungry in a communications-bound application probably requires going off-board and hooking up to other Epiphany devices in a cluster.
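
    To put rough numbers on that gut feeling (my own back-of-envelope, using only the eLink figures above plus an optimistic assumption about the ARM side):

        4 eLinks x 2 GB/s                     = 8   GB/s aggregate eLink bandwidth
        2 cores x 800 MHz x 4 bytes per cycle = 6.4 GB/s ceiling for two A9s streaming
                                                32-bit words, ignoring memory stalls,
                                                cache refills and loop overhead

    Even under those ideal assumptions the dual Cortex-A9 falls short of the chip's aggregate eLink bandwidth, and sustained copy rates out of DDR3 will in practice be far lower still.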

  • johnbeetem
    johnbeetem over 11 years ago in reply to morgaine

    Morgaine Dinova wrote:

     

    The FPGA in the Zynq undoubtedly has a maximum throughput vastly exceeding that of the ARM cores, based on our background knowledge of typical FPGAs.  This means that the Zynq's dual Cortex-A9 cannot be anywhere near optimum for feeding data through the interface at the highest rate the FPGA can probably sustain.  Because of the Zynq's AXI bus (shown in the white paper I linked in post #24), the Zynq is probably very efficient at this, but in the end the data is still being generated by a pair of lowly Cortex-A9 cores clocked at 800MHz.  The AXI bus reduces bottlenecks but it can't speed up the ARMs.

    I believe you can also use the FPGA fabric to talk to the system buses directly without going through the ARM cores, i.e., the FPGA can DMA to shared DRAM and also to peripheral devices like Gigabit Ethernet.  This way you can build very high performance data processing beyond what an ARM core can handle, and basically use the ARM to run control software with modest processing requirements.

  • morgaine
    morgaine over 11 years ago in reply to johnbeetem

    John Beetem wrote:

     

    I believe you can also use the FPGA fabric to talk to the system buses directly without going through the ARM cores, i.e., the FPGA can DMA to shared DRAM and also to peripheral devices like Gigabit Ethernet.

     

    While true, the Zynq doesn't have a monopoly on DMA.  Any reasonable ARM system can be expected to feed its DMA controllers at close to memory rate, and even Cortex-M* microcontrollers commonly feature crossbar-type internal buses so that different types of data transfer can occur in parallel and DMA controllers aren't starved by bus arbitration.  In other words, far cheaper ARMs could keep the Epiphany eLinks equally busy through DMA.

     

    Regarding Ethernet, that really comes down to DMA again.  There is no room in Epiphany core local memory (just 32KB per core) for full TCP/IP stacks, so the host will have to handle the networking, extract the data out of the protocol payload, and DMA can then fish it out of memory for feeding Epiphany.  But again, the Zynq doesn't have any special advantage for this since gigabit MACs are quite common in modern ARM SoCs (less so gigabit PHY, sadly).
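
    For the Ethernet-fed case the arithmetic is simple (the only assumption being a standard gigabit link):

        1 Gbit/s / 8 = 125 MB/s maximum on the wire

    That is roughly one sixteenth of a single 2 GB/s eLink, so for network-sourced data the bottleneck is the network itself, not the host's DMA machinery - which is exactly why an exotic host buys little here.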

  • johnbeetem
    johnbeetem over 11 years ago in reply to morgaine

    Morgaine Dinova wrote:

     

    Regarding Ethernet, that really comes down to DMA again.  There is no room in Epiphany core local memory (just 32KB per core) for full TCP/IP stacks, so the host will have to handle networking, extract the data  out of the protocol payload, and DMA can then fish it out of memory for feeding Epiphany.  But again, the Zynq doesn't have any special advantage for this since gigabit MACs are quite common in modern ARM SoCs (less so gigabit PHY, sadly).

    You could probably do wire-speed TCP/IP in the FPGA fabric, using block RAM for table look-up.

     

    How's the power consumption for GBE PHYs these days?  Maybe it's better to leave them off SoC so the chips don't get too hot.

  • morgaine
    morgaine over 11 years ago in reply to johnbeetem

    John Beetem wrote:

     

    You could probably do wire-speed TCP/IP in the FPGA fabric, using block RAM for table look-up.

    TCP/IP protocol implemented entirely in FPGA block RAM?  You jest ... I hope.

     

    No doubt small and well-defined auxiliary functions could be implemented in the FPGA fabric as part of a TCP offload engine (these are quite common nowadays), but to implement the whole thing in hardware simply doesn't make engineering sense because most parts of TCP/IP are not in the high-speed pathway or are rarely executed.

     

    How's the power consumption for GBE PHYs these days?  Maybe it's better to leave them off SoC so the chips don't get too hot.

     

    That was just poor phrasing on my part.  I meant that gigabit PHYs are less common on ARM boards even when the host SoC provides a gigabit MAC.  Your point about heat is a good one.  Gen0 Parallella recipients were complaining quite a lot about heat.

  • johnbeetem
    johnbeetem over 11 years ago in reply to morgaine

    Morgaine Dinova wrote:

     

    John Beetem wrote:

     

    You could probably do wire-speed TCP/IP in the FPGA fabric, using block RAM for table look-up.

    TCP/IP protocol implemented entirely in FPGA block RAM?  You jest ... I hope.

     

    No doubt small and well-defined auxiliary functions could be implemented in the FPGA fabric as part of a TCP offload engine (these are quite common nowadays), but to implement the whole thing in hardware simply doesn't make engineering sense because most parts of TCP/IP are not in the high-speed pathway or are rarely executed.

    I'm talking about the core packet processing functions like CRC checksum and port numbers and window management.  I'm also talking about IPv4 since I don't have experience with IPv6.  However, since modern wire-speed routers are implemented in hardware, there's no reason you can't do this with a decent FPGA since managing an end-point is a lot easier than routing.  In fact, you can buy TCP/IP IP for various Xilinx FPGAs.
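
    To be clear about what I mean by core packet processing: the per-packet arithmetic is tiny.  The IPv4/TCP checksum, for instance, is just a ones'-complement fold, sketched below in C purely for illustration (my own function, not anything from a Xilinx core); in fabric that fold becomes a small adder tree, which is why it runs at wire speed so easily.

        #include <stdint.h>
        #include <stddef.h>

        /* RFC 1071 Internet checksum: 16-bit one's-complement sum of the data,
         * as used by IPv4 headers and (over a pseudo-header) by TCP and UDP. */
        static uint16_t inet_checksum(const void *data, size_t len)
        {
            const uint8_t *p = data;
            uint32_t sum = 0;

            while (len > 1) {                       /* sum 16-bit big-endian words */
                sum += ((uint32_t)p[0] << 8) | p[1];
                p += 2;
                len -= 2;
            }
            if (len)                                /* odd trailing byte, zero-padded */
                sum += (uint32_t)p[0] << 8;

            while (sum >> 16)                       /* fold carries back into 16 bits */
                sum = (sum & 0xFFFF) + (sum >> 16);

            return (uint16_t)~sum;                  /* one's complement of the sum */
        }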

  • morgaine
    morgaine over 11 years ago in reply to johnbeetem

    John Beetem wrote:

     

    However, since modern wire-speed routers are implemented in hardware, there's no reason you can't do this with a decent FPGA since managing an end-point is a lot easier than routing.

     

    I think you meant the opposite, that routing is a lot easier than managing an endpoint.  Routing in hardware needs to handle only the IP layer and can ignore all higher-level detail, which is just payload data at the IP level --- that's why good routers can route frames back-to-back even on 10gig.  The routing management protocols and ICMP only come into play at exception or change points, so that's typically left to CPUs to handle at their leisure in all but the highest end backbone routers.

     

    At the endpoints, the entire protocol stack comes into play, which is a heavy burden indeed.  TCP offload engines commonly dedicate an embedded CPU to the task rather than hardware, although as we both mentioned, simple functions like CRC are very commonly implemented in hardware, often as a dedicated instruction in the SoC.

     

    That iTOE Verilog for Virtex and Spartan doesn't look like open source to me.

  • michaelkellett
    michaelkellett over 11 years ago in reply to morgaine

    @Morgaine and John,

     

    I haven't quite got to full TCP/IP in the FPGA yet but I'm currently using a Lattice ECP3 to generate multi-fragment UDP datagrams sent out at wire speed to GBE (external Marvell PHY, and it runs pretty hot - which answers another question). I can't see that it would ever make sense to do all of the work in the FPGA - things like ARP don't need that kind of speed.

    I would love to get my teeth into some TCP acceleration in the FPGA but it is very expensive in terms of development time, and we have already hit issues with common GBE network components (like switches) which can't actually handle wire-speed data unconditionally - and the conditions are not well specified.
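
    (For scale, "wire speed" on GBE means 125 MB/s on the wire: about 81,000 full-size 1500-byte frames per second once preamble, headers, FCS and the inter-frame gap are counted, and nearer 1.5 million frames per second at the 64-byte minimum - it's the small-frame rate that usually breaks switches and soft stacks first.)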

    One of the problems you hit with sharing the network stack between processor and FPGA is that you end up writing the entire stack, parts in C and parts in VHDL or Verilog - that's why so far we've kept our end very simple with support for UDP, IP, ARP and not much else.

    The phy uses more power than the FPGA and the processor (STM32F407).

     

    MK
