HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
David Williams - MicroFPGA – The Coming Revolution in Small Electronics

HACKADAY · YouTube · 106 HN points · 1 HN comment
HN Theater has aggregated all Hacker News stories and comments that mention HACKADAY's video "David Williams - MicroFPGA – The Coming Revolution in Small Electronics".
YouTube Summary
Big FPGAs are awesome. They're doing what they've always done: enabling AI, signal processing, military applications, etc. However, there is a new possibility emerging – FPGAs for small applications – which is quite possibly even more significant. Using open source tools, cheap flexible development boards, and new libraries, designers have a whole new set of options for creating incredibly high performance, flexible, low power projects and products.

Read the article on Hackaday:
https://www.youtube.com/watch?v=ME_e06ApxJA

Slides for David's talk:
https://davidthings.github.io/spokefpga/hackaday_slides_2019_11_12/#/

Follow David on Twitter:
https://twitter.com/davidthings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
May 15, 2021 · 106 points, 56 comments · submitted by peter_d_sherman
metaphor
This talk struck me as conflicting in a sort of have-your-cake-and-eat-it-too kind of way. The wrinkles part[1] summarizes a lot of what this individual is advocating. Paraphrasing (with equivalent liberal handwaving of details as the talk does):

I want to write FPGA code like any other high-level programming language (but quickly dismiss that the hard part of developing native HDL is in fact that it's fundamentally a lot more akin to architecting physical hardware).

I want to use FPGAs because they're powerful (but only when a conceptual design aligns with a traditional microcontroller architecture).

I want to use FPGAs because they're flexible (but I need to nerf proprietary building blocks that endow specific devices with a competitive edge, or trade significant performance in other ways to do so).

I want to commodify the development peripheral market (but only if they conform to a certain commercially trademarked and weakly defined spec[2] inherently constrained to low-frequency applications where sloppy SI is presumed to be inconsequential).

I want to standardize high-performance interfaces (so arbitrarily "abandon standards you don't like" and simply write your own, but only if they're not "annoying"...and look like a custom FIFO with specific signal labels).

I'm sure this will be an unpopular opinion, but the positive feedback loop generated around this topic needs a stability check.

[1] https://youtu.be/ME_e06ApxJA?t=2080

[2] https://digilentinc.com/Pmods/Digilent-Pmod_%20Interface_Spe...

elcritch
Good points! His talk is full of enthusiasm, and I am partial to the overall notion of better integration of data and peripherals.

Still, it seems much of what he's wanting could be handled by approaches like the RP2040 chip from the RPi folks, which lets you define your own low-level pin protocols by programming state machines in assembly. Also, many newer MCUs allow you to define almost any function on any pin via internal IO muxes. Some MCUs even contain a small FPGA / CPLD.
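
For a taste of that PIO approach, here's a minimal MicroPython sketch (modeled on the official docs' blink example; pin 25 is the Pico's onboard LED, and the 2 kHz state-machine clock is arbitrary):

  import time
  import rp2
  from machine import Pin

  # PIO program: drive the bound pin high, then low, padding each
  # instruction with the maximum 31 extra delay cycles so the blink
  # is visible at a slow state-machine clock.
  @rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
  def blink():
      set(pins, 1) [31]
      nop()        [31]
      set(pins, 0) [31]
      nop()        [31]

  # Run the program on state machine 0 at 2 kHz, bound to the LED pin.
  sm = rp2.StateMachine(0, blink, freq=2000, set_base=Pin(25))
  sm.active(1)
  time.sleep(3)
  sm.active(0)

The same machinery scales up to real pin-level protocols (the PIO blocks can run at the full system clock), which covers a surprising slice of what you'd otherwise reach for an FPGA to do.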

Also, I prefer the "Mikroe Click" add-on / peripheral board from MikroElektronika. It's got a decent ecosystem and is handy for making quick "proto-PCBs" that integrate sensors but allow you to iterate smaller circuits (even with hand-made proto boards https://www.mikroe.com/proto-click). There are a few other non-Mikroe vendors as well.

Given how difficult FPGAs seem to be to program, there are lots of alternatives that achieve the same goals. Things like RISC-V or OpenPOWER and the leveling off of semiconductor node improvements will probably lead to a lot more "customizable peripherals" in the next decade, but programmed using the MCU.

haberman
What constrains PMod to low-frequency applications with sloppy SI? What inherently limits the frequency?

I have been dabbling in FPGA programming on a Xilinx Nexys A7. Through some PMod ports, I've managed to drive the shift registers on an LED matrix display at 20MHz. Is this considered high frequency?

My oscilloscope shows that the signal is somewhat messy at this frequency. But the data sheet for the shift register suggests that the thresholds are 0.7xVcc and 0.3xVcc for high and low, so it seems to be good enough. Is this shift register unusually accepting of sloppy SI?

I'm not understanding how a PMod port would constrain the usable frequency. If my clock is going at 100MHz inside the FPGA, what prevents signals of this frequency being sent through PMod wires? Is a PMod port different than the normal I/O pins offered on a board like TinyFPGA?

metaphor
Not trying to be unhelpful, but if the assertion was not immediately evident, then you're missing a huge chunk of basic undergrad EE theory that cannot be dismissed as merely academic. I'd recommend reading the spec (it's only 10 pages) and coming to your own conclusions.

To your shift register question---which is the only one with enough detail to afford a quick answer---referencing JEDEC JESD8C.01 § 2.3 Table 2 as a basis of comparison: yes wrt VILmax, and no wrt VIHmin...at face value, it doesn't comply with the industry standard, but is nevertheless permissible by Pmod spec because, well, it's a shit spec by a sample size of 1 engineer's measure.
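
For the passerby, a quick numeric check of that comparison (a sketch assuming the commonly cited JESD8C.01 wide-range 3.3 V levels of VIHmin = 2.0 V and VILmax = 0.8 V; the 0.7/0.3 x Vcc figures are the shift register datasheet thresholds quoted upthread):

  # Device input thresholds at Vcc = 3.3 V (0.7*Vcc / 0.3*Vcc per datasheet)
  # vs. assumed JEDEC JESD8C.01 wide-range levels.
  VCC = 3.3
  vih_dev, vil_dev = 0.7 * VCC, 0.3 * VCC   # 2.31 V, 0.99 V
  VIH_STD, VIL_STD = 2.0, 0.8               # assumed JEDEC levels

  # The device complies only if it accepts every standard-valid level:
  print("VIH:", "complies" if vih_dev <= VIH_STD else "does not comply")  # 2.31 V > 2.0 V -> no
  print("VIL:", "complies" if vil_dev >= VIL_STD else "does not comply")  # 0.99 V > 0.8 V -> yes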

To your TinyFPGA question, I don't care to ponder the design considerations (or lack thereof) of arbitrary hobbyist dev boards in the wild. In the nicest way possible, it's a complete waste of my time, sorry.

Suffice it to say the spec's interconnect choice is the primary limiter, compounded by permissiveness in some aspects, poor specificity in others, and the general ease of abusing objective interoperability while still complying with normative language.

I leave it to the passerby as an exercise to consider the implications of the following notions in this context:

1. All real conductors exhibit certain parasitic properties which deviate from an ideal model; whose characteristics are largely a function of physical geometries, relevant material properties, and the signals that flow in/around them; and whose significance is almost entirely dependent on the application under consideration.

2. An arbitrary signal that satisfies the 3 Dirichlet conditions can be summarily expressed as the linear sum of its fundamental and harmonic sinusoidal components.

3. As a back-of-the-envelope measure, transmission line effects cannot be so easily neglected when, in a point-to-point system, the ratio of the distance that a signal must traverse to the wavelength of the propagating signal itself exceeds roughly 1%.

haberman
Yep, I'm undoubtedly missing a huge chunk of basic undergrad EE theory, seeing as I am a CS guy, not EE. That said, I think it's possible (even fun) to explain key concepts to people outside your own field in plain language. I practice on my friends and girlfriend a lot.

To say that PMod's primary limiting factor is its interconnect choice is not very specific. The spec itself says that "speeds greater than 100MHz should be achievable using high-speed ports", so the claim that PMod is "inherently constrained to low-frequency applications" seems at best incomplete. I'm not arguing that every PMod port will support that -- obviously most are not designed for this -- but I am not seeing a fundamental limitation.

metaphor
You're probing for an easy answer because you think one exists that'll be as easy as handling the commodity 100mil interconnect you've been playing with. But the fact is a minimal satisfactory answer resides in the 2nd level of nature's bowels, where the simple conservative models of Ohm's law and the laws of Kirchhoff that you were taught in basic physics start to break down as useful standalone tools and must be supplemented with more complex models to remain consistent with real world observations. I'm not even talking about Maxwell's equations in the raw either...we're still several orders removed, sandwiched in a domain where lossless transmission line theory and lumped-element modeling still hold. Still not ringing any bells? No worries...most practicing EEs are quite deaf to them after college as well.

If you insist on settling on an answer constrained to classical 1st-order models, then you're in for a treat: everything should just work! The guzzintos perfectly match the guzzoutas, first law of thermodynamics is demonstrably preserved, motherhood and apple pie, happy endings.

Except that's not how the real world works...and it almost certainly won't meet any pragmatic definition of commercially reliable at the limits you're wanting to probe at. But why trust what I have to say? You have an entirely capable Artix-7 dev board at your beck and call. And you have an oscope that'll generate eye diagrams all day to your heart's content. And the Pmod interconnect is so trivial that you could even hack together a throwaway DUT load...or why bother with that when you could use one of your existing Pmods as a real world DUT to characterize?

I thought I left enough breadcrumbs for the incidental reader to self-service on how deep down this infinite rabbit hole will be sufficient to grasp the magnitude of the questions being asked, and I thought my language was appropriately tailored. Maybe not. Here's a few more courtesy hints:

1. The first remark was to steer towards qualitatively thinking about the role of parasitic inductance and how this parameter will dominate the interconnect at higher frequencies as skin effect kicks in. It's worth noting that you'll quickly be sucking hind tit attempting to control impedance through a board-to-board interconnect of this design, which immediately kills reasonable prospects for a huge chunk of the application space that FPGAs are chosen as a viable host for, right out of the gate!

2. The second remark was to kindly say that EEs who care about SI couldn't care less about the fundamental frequency of your digital signal when what matters in practice are the significant harmonics that comprise said signal's edges. The term significant here is roughly defined as at least 10%ish of the fundamental's magnitude. To get a feel for the landscape, derive the Fourier series expansion of (keeping things simple) an ideal square wave, and interpretation of the rest should follow naturally.

3. The third remark was to establish a ballpark basis for independently sizing up when the convenience of simple 1st-order models starts to break down and it becomes necessary as a matter of due diligence to kick it up a notch. Determine the permissible distance your signal needs to travel. Determine the wavelength of the highest significant harmonic of the signal to account for (assume a vacuum to keep it simple, completely ignoring substrate permittivity). If the ratio of distance to wavelength exceeds 1%, consult with a competent electronics engineer if you objectively care about quality.
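
To make hint 3 concrete, a back-of-the-envelope sketch with assumed numbers (a 20 MHz clock as in the shift register example upthread, 15 cm of end-to-end interconnect, propagation in vacuum; an ideal square wave has only odd harmonics with magnitude 1/k of the fundamental, so the 9th, at ~11%, is the last "significant" one by the 10% measure above):

  C = 3e8           # m/s, speed of light in vacuum (ignoring substrate permittivity)
  f0 = 20e6         # Hz, fundamental (the shift register clock upthread)
  length = 0.15     # m, assumed end-to-end interconnect length

  for k in (1, 3, 5, 7, 9):                 # odd harmonics of an ideal square wave
      wavelength = C / (k * f0)
      ratio = length / wavelength
      flag = "transmission-line territory" if ratio > 0.01 else "ok to lump"
      print(f"harmonic {k}: {k * f0 / 1e6:.0f} MHz, lambda = {wavelength:.2f} m, "
            f"d/lambda = {ratio:.3f} -> {flag}")

With these assumed numbers the fundamental sits right at the 1% line, and every higher significant harmonic is well past it.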

That's about the best approximate free answer this engineer is willing to extend. Need better precision? I'd recommend deferring to a billable scientist.

(P.S. only about as fun as trying to explain the merits of idempotency to my wife /s)

haberman
Thank you (genuinely) for writing all of this down, but nothing you have written is specific to PMod. I get that complex physical effects cause simple models of the world to break down -- that much is obvious. And of course it's up to you what explanations you are willing to write up for free. But I don't see how any of this supports your claim that PMod is insufficient for high-frequency work, when it does not reference any design decisions made by the PMod spec that adversely affect its high-frequency performance.

One of the following must be true:

1. The PMod interconnect is specified in a way that inherently limits high frequency applications, whereas it could have been specified in a different way that would allow for them.

2. Something about the design space PMod is trying to solve (off-board peripheral modules connected to a main board through a standard connector) is inherently hostile to high-frequency applications, and no specification is capable of getting around this limitation.

3. PMod is actually capable of high-frequency applications, although most PMod ports are not particularly designed for this.

Your original message seemed to be claiming (1), but none of your subsequent arguments have offered any support for (1) that I can see. You have absolutely convinced me that complex physical effects must be taken into account, but that information does not help resolve which of 1-3 is true.

To convince me of (1) I would need to see a specific design decision in the PMod spec that makes high-frequency applications impossible, as well as the alternative design choice that would have allowed for them. For example: "PMod does not allow shielded wires, but high-frequency requires shielding" would be a convincing argument if true.

pclmulqdq
FPGA evangelists like this really need a reality check. Programming in RTL is not easy, and it's not really a skill that most people need, so it doesn't make a lot of sense as an educational tool.

The "killer app" for hobbyist FPGAs might be some sort of "SoC builder" tool that lets you build a specific microcontroller around a few high-performance cores, and then program it like software.

But don't worry. Since 2000, mainstream FPGA programming has been right around the corner.

swiley
Meh. HDL isn't that hard and there are good books on it. The main reason FPGA programming is unpopular is because the tools are pretty hideous. Icestorm makes things a lot better but it supports like 5 parts and they're not very powerful ones.
exikyut
How cheap are the devkits, can I easily source <10 unit quantities for <$ouch, and what are the limitations?

- Fundamental questions I imagine a semi-interested tinkerer would be asking

swiley
Devkits compatible with Icestorm? I have an iCEstick that I've messed with; at 1x it costs $20 (at least when I bought it off Amazon).
canada_dry
> Since 2000, mainstream FPGA programming has been right around the corner.

As an avid technologist I dabble in everything from electronics to assembler. But, every time I try dipping my toe into FPGA programming I quickly retreat!

The tools are too complex and the whole process has a learning curve that far outweighs that of using a stock CPU/SoC.

FPGAhacker
> The tools are too complex and the whole process has a learning curve that far outweighs that of using a stock CPU/SoC.

Sadly this is true, although the open source tools are gaining ground pretty quickly from what I understand. https://symbiflow.github.io/getting-started.html

detaro
Accessibility of FPGAs has made massive strides since 2000. You can nowadays get people to build a really basic FPGA demo in an hour, starting with a dev board and their unprepared laptop, and I remember times when you'd struggle to even get the tooling installed in that time.

(Not going to go into "mainstream", because that's a) incredibly vague (is microcontroller programming "mainstream"?) and b) IMHO not that interesting - it's fine for specialist things to be accessible but not mainstream)

FPGAhacker
Personally I think "programming" RTL is dead simple. It's so simple, in fact, that it's not even interesting. But I've been doing it forever so my perspective is skewed.
pclmulqdq
I think you are thinking of different types of tasks. A lot of RTL is brainless, but if you are actually trying to compute with it, it can get tough.

A mediocre C++ programmer can write a (bad) hash table in C++ in a few hours. How fast can you give me one in Verilog?

FPGAhacker
I have to make a lot of assumptions, but a couple of hours doesn't seem unreasonable for the core function. Among the many assumptions are the hash function, what is actually being hashed, what the processor interface is, desired performance, etc.
IQunder130
You're not writing a hash table in verilog, you're writing a hardware configuration that can hold and manipulate a hash table. It's a different ball game.
blihp
https://github.com/enjoy-digital/litex
mypalmike
The SoC builder you describe might be the MiSTer project for console hardware emulation. But even then, the number of people who develop verilog for that project is rather tiny.

However, it's the first platform that has gotten me, a long time software developer, to really look into how it works. As in, I've been working through verilog tutorials, etc. and formulating a project to build on it. When I first bought a Xilinx FPGA dev board some 20 years ago, I found that I had no idea what to do with it, and there were not many resources to learn from compared to today. I don't think it's going to be mainstream soon, but there's definitely been some progress.

pclmulqdq
As a (former) FPGA developer, this revolution has been coming for a while, and I'm still waiting. Unfortunately, the programming model actually makes things really difficult for people who think in a software frame of mind. High-level synthesis gets part of the way, but not for low-level peripherals.

PMODs are good for university/hobby projects with FPGAs, though, and I'm glad the bar is getting lower and lower. FPGAs are usually built on very high-tech processes, so the gap between an FPGA (at 28 nm) configured like an SoC and an actual microcontroller (at 90 nm) may be smaller than we think, and that may be the saving grace of FPGAs as hobbyist devices.

e-_pusher
Curious - why did you get out of FPGA development, and what do you work on now?

I have anecdotally heard of FPGA/RTL devs leaving the field mostly because of the poor tooling etc.

pclmulqdq
I was an FPGA dev at a trading shop and moved to software at big tech. Hardware development in trading is fairly unique in that it is more like software than anything else. At other companies, FPGA dev is not well-rewarded, and is treated a lot more like hardware than software. That leads to slow development cycles and project-based hiring/management, which make it very hard to treat the project like software.

In addition, the lack of a standard library and bad tooling (buggy, slow compilers) make developing simple things very difficult.

The same attributes that made me a good FPGA developer have also made me a good low-level software developer (x86 assembly doesn't hold a candle to FPGA dev in complexity), so I do that now.

xfer
Not to mention the tools are shit and synthesis takes a long time. If you thought C++ compilers were slow, just try using FPGA tools.
pclmulqdq
The tools are awful and software people don't know how good they have it. However, the tools also do a ton of optimization for you, and so it's somewhat justified that they take a long time to run.
detaro
Give the open-source toolchains a go some time. Yes, they only cover some FPGA models and can't make use of all the features, but the dev workflow is sooo much nicer. And they are constantly improving.
blihp
I think it's closer than most people might think. I suspect this is largely because there seems to be a wide gulf between most hardware and software hackers (i.e. they don't talk much and tend to throw things over the wall to each other) and FPGAs require you to have a foot in both disciplines. You're right that the mindset required is a hurdle but once I got over that (I'm primarily a software person and it only took the better part of an afternoon), I've been finding it pretty straightforward. I'm struggling more with dusting off my digital logic design skills... I'll get there, just need to get back in practice.

The current state of the open source tool chain is looking pretty reasonable (covering the development life-cycle with yosys/nextpnr/iverilog/verilator/etc), and a few FPGA families have been reverse engineered (the ones I find the most interesting are also the furthest along: the iCE40 and ECP5), so you can buy/build open devices and use them with open tooling today. Granted, the devices to choose from are at the low end of the universe of FPGAs out there (i.e. <100k LUTs), but that's enough to do a lot of interesting things with for now, and (hopefully) higher-end devices will eventually be added. That's not terribly different from the state of play re: microcontrollers and SoCs these days in the open source world.

The one part of his talk that I think is off base is replacing microcontrollers/SoCs with FPGAs in most scenarios. At any given process node, the specialized device (i.e. a processor) should be able to crush the generalized one (i.e. FPGA) every time in terms of cost, raw performance, and perf/watt, and it would likely take more than the current ~2-3 process node delta to overcome that. I don't look at an FPGA as being an either/or proposition: use a microcontroller/SoC for what they're good at in conjunction with an FPGA for what it's good at where the application warrants it. Sure, if you need a tiny bit of software control, throw a soft core in there and save the hassle of adding a micro. But don't try to replace a hard core with a soft one just because you can.[1]

[1] Two cases where exceptions seem obvious and make sense: new/experimental ISAs like RISC-V, where you can't readily/affordably source CPUs in the open source world yet; and emulators, mainly because the ISA you need is typically at least effectively dead, and any devices still available often aren't good fits for what you're trying to accomplish.

ozmaverick72
The talk mentioned a number of small FPGA boards. Can someone recommend one of them to get started playing with FPGAs?
xfer
Here is a cheap board but with no peripherals: https://www.tindie.com/products/tinyvision_ai/upduino-v30-lo...
analog31
Haven't gotten very far yet, but the TinyFPGA BX was easy to get set up and running "LED blink", which is the hardware version of hello world. I've used microcontrollers since forever and want to expand my options.
snvzz
I have one of these and love it, but unfortunately the TinyFPGA team is MIA and getting the boards has been hard to impossible for a long time now.
analog31
Aha, that explains why the two spares that I ordered have been on back order.
snvzz
The open synth/route stack supports Lattice iCE40 and ECP5 the best.

iCE40 is a relatively simple, cheap, and easy-to-work-with architecture with a focus on low power usage. iCESugar + nanoDLA (a cheap sigrok-friendly logic analyzer) is a good, cheap set to get started on FPGAs with. These are cheap on AliExpress.

ECP5 is much larger, much more complex, and powerful enough to fit a whole computer in (e.g. the Minimig open Amiga implementation). I don't recommend going anywhere near it until you've already gotten started with iCE40, but a high-density and price-effective development board is the ULX3S.

ccosm
Barring some exponential increase in the usability of FPGA tools, I think the mainstreaming of these technologies will forever remain just around the corner.

More coarse-grained architectures like GPUs or reduced-scope architectures with a less intense cognitive load like the programmable IOs found in Raspberry Pi Picos or some Texas Instruments MCUs seem like a much more feasible solution for the vast majority of potential FPGA use-cases a tinkerer would run into.

jvanderbot
Q for the community. FPGAs seem to be used for three things: 1) custom I/O or high-performance interfaces that aren't widely standardized, 2) prototyping boards / processor cores, 3) blazing fast implementations of algorithms that are hard to run otherwise.

Is that about right?

If even close, (3) is very interesting to me for a variety of reasons. Is my understanding correct that this is a reasonable use of FPGAs and that maybe now is a reasonable time to get into it?

metaphor
I offer 3 different reasons why FPGAs might make sense to consider:

1. (general) application reconfigurability where the benefits of SRAM performance, integrated hard IP, fabric real estate, and/or cycle-accurate RTL control cannot be easily replaced by other roughly asymptotic solutions;

2. (general) offloading massively/embarrassingly parallel compute architectures where commodity GPGPUs are either physically incompatible and/or power-wise too inefficient;

3. (enterprise) overall volume/performance/budget/schedule objectives fall short of the sort needed to justify pushing a bespoke ASIC design through process pipeline.

Relatively speaking, "blazing fast implementations of algorithms" is possible iff the compute bottleneck can be effectively parallelized; I'm not certain what was meant by "hard to run otherwise".

To put this into perspective, reliably clocking a non-trivial FPGA application at 200 MHz is objectively hard in a way that hobbyist types outside of the professional domain can't seem to understand (or arrogantly dismiss). And yet for the right class of problems, such an application may have the potential to put a GPGPU clocked in the GHz to shame on both compute throughput and power consumption fronts.
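
To put rough numbers on that (every figure below is assumed for illustration, not measured):

  fpga_hz, fpga_lanes, fpga_w = 200e6, 1024, 20   # 200 MHz fabric, 1024 MACs, 20 W
  gpu_hz, gpu_lanes, gpu_w = 1.5e9, 2048, 250     # GHz-class GPGPU at peak
  gpu_util = 0.05   # assumed sustained utilization on a branchy/irregular kernel

  fpga_ops = fpga_hz * fpga_lanes                 # ~205 GMAC/s, fully pipelined
  gpu_ops = gpu_hz * gpu_lanes * gpu_util         # ~154 GMAC/s sustained

  print(f"FPGA: {fpga_ops / 1e9:6.1f} GMAC/s, {fpga_ops / fpga_w / 1e9:5.2f} GMAC/s/W")
  print(f"GPU : {gpu_ops / 1e9:6.1f} GMAC/s, {gpu_ops / gpu_w / 1e9:5.2f} GMAC/s/W")

The point isn't the specific figures; it's that a fully pipelined fabric design sustains its throughput every cycle, while the GPU's peak number evaporates on the wrong class of problem.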

jvanderbot
For "hard", I was implicitly referring to applications where existing compute throughput doesn't support the algorithm you want to run (which the example I was thinking of [1] is precisely case 2 you have)

1. https://ieeexplore.ieee.org/abstract/document/5747245 Implementation of pinpoint landing vision components in an FPGA system for NASA stuff

coryrc
Correct. I know how to use both well and would like to use an FPGA in my own projects, but it never makes sense to. Even a custom Lisp processor is slower than extra instructions on a faster processor :( and it's not like I have time for that anymore
DoingIsLearning
Caveat on 3):

The only real benefit of an FPGA for algorithms is when your algorithm benefits from parallelization.

There is nothing intrinsically faster in programmable logic.

The point is that execution is truly concurrent: as long as you have space in the FPGA fabric, you can _almost_ do everything at the same time.

I say this as someone who has done a fair share of FPGA projects: it is very difficult to make the business case for an FPGA if your problem can be solved with GPU programming on a COTS GPU.

Regardless of whatever you read, FPGAs do have a purpose but will most likely continue to be used only in niche/custom applications.

balefrost
> There is nothing intrinsically faster in programmable logic.

It probably depends what you're comparing to.

Yeah, since FPGAs essentially implement combinatorial logic, and hard CPUs are also implemented in combinatorial logic, you gain no benefit from directly "porting" the hard CPU to the FPGA.

But, if your hard CPU is a microcontroller that can only process say 8 bits at a time, but you really want to process say 256 bits at a time, it might be more efficient to use an FPGA to build a soft CPU whose architecture better matches your problem. In that case, you should be able to do more work per clock cycle with the FPGA.

That's arguably "increased parallelization", but what I'm talking about applies to sequential algorithms.

Of course, in that case, maybe you chose the wrong microcontroller. Maybe a full CPU would be a better choice, or maybe there's a DSP that can be adapted to the specific use case.
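
As a toy cycle-count model of that width-matching argument (numbers assumed):

  def add_cycles(total_bits: int, datapath_bits: int) -> int:
      # Cycles to add two total_bits-wide values on a datapath_bits-wide
      # ALU, assuming one cycle per chunk with carry handling folded in.
      return -(-total_bits // datapath_bits)   # ceiling division

  print(add_cycles(256, 8))     # 32 cycles on the 8-bit micro
  print(add_cycles(256, 256))   # 1 cycle on a width-matched soft datapath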

> it is very difficult to make the business case for an FPGA if your problem can be solved with GPU programming on a COTS GPU

Sure, that makes a lot of sense at the high end. I'm thinking of problems more in the embedded space, where FPGAs might be a bit more attractive.

mikewarot
CuriousMarc just did an intro[1] to logisim-evolution[2], which can generate verilog to then program FPGAs.

  [1] - https://www.youtube.com/watch?v=gYmDpcV0X7k
  [2] - https://github.com/reds-heig/logisim-evolution
It's the best of both worlds, simulate to design and debug, then push through to hardware.
bryzaguy
If you currently program microcontrollers, I bet you'll like this. Really checks all the boxes for why you'll be interested, how it works, what the landscape is, and advice for getting started/contributing. Doesn't feel like a pitch, unlike similar talks. Rather, it seems like an actual cool opportunity! Makes me want to start learning Verilog.
ellis0n
A great talk and good slides. Just pair good development software with low-cost FPGA hardware and you won't need a CPU in many cases.
spiritplumber
Why not use a Propeller or Propeller 2? Similar benefits, cheaper, easier to code on.
analog31
I can't speak for the OP, but for myself, my motivation for learning FPGAs is to find out what they can do that I can't already do on a microcontroller. I'm already pushing a screaming fast MCU to its limits. The engineers look at my designs and say: "You're trying to do stuff in an MCU that's easy in an FPGA."
balefrost
I believe one big advantage is that most FPGAs can have multiple clock domains. So while everything on the Propeller is tied to the system clock (and the system clock is hopefully fast enough that you have the resolution that you need to do the right thing at the right time), FPGAs can simply use clocks running at different frequencies.
amelius
FPGAs offer unlimited parallelism.
balefrost
Up to some limit that varies from device to device.
blihp
FPGAs just allow you to remove a layer or two of abstraction and corresponding latency (vs. a CPU) where you don't want/can't have it, and give you control, speed, and/or efficiency improvements in exchange. The price you pay is that you're dealing with things at a much lower level (i.e. bits and wires)... if you don't need the trade-off, don't make it. They are for the situations where there is no off-the-shelf solution that quite solves your particular problem adequately.
dtgriscom
To quote the top comment on the YouTube page: "When the speaker is talking about a slide, show the slide." Amen.
nynx
This is a great talk.
hermitsings
hmm
contingencies
There is substantial complexity being swept under the rug here.

Interesting talk but I remain skeptical. Anyone got stats on how FPGA ecosystems have fared vs. MCUs over chipageddon? From a cursory search it seems: 1. there are very few PMOD suppliers; 2. they do extremely low volume. So low, in fact, that it appears there is simply ~no demand. Therefore, good luck hiring and scaling a team that is familiar with this stuff on a budget. Especially since the half-life of the software tooling is probably 3-6 months and the majority of toolchains are still vendor-specific. Further, in terms of supply chain security, the plethora of suppliers for lower-end MCUs cannot be matched in lower-cost FPGA offerings, which rely heavily on specific vendors who cannot readily guarantee global availability.

M-LVDS is an onboard high speed signalling standard whereas USB and ethernet are specifically designed for off-board connectivity (and have features like power distribution, dynamic topologies, etc.). They are apples and oranges.

The notion of a baseboard for these things is so simple, so why isn't there an FPGA+PMOD+baseboard PCB generation and fabrication webservice offered by Digilent? Because it's not that simple. PCB design is often largely constrained by non-electronic functional aspects such as form factor, thermals, mounting, and assembly considerations. Having modules to just plug in is great for a quick test, but the second you actually want a product you have to consider the whole problem, and that typically means multiple iterations of traditional PCBs for these reasons.

Conclusion: The case is over-stated and the market has effectively rejected the current offerings.

g_p
> Anyone got stats on how FPGA ecosystems have fared vs. MCUs over chipageddon?

No data I can offer as such, but anecdotes from several suppliers who integrate some big-name FPGAs into their products: they've seen supply dry up like other chips, causing several months of delays to lead times. It's not "zero supply", but a significant reduction in supply.

The vendor-specific toolchains and supply chain security issues you point out are huge issues, and ones most people (even security experts) are a bit blind to. You can't verify the state of a programmed FPGA by inspection (at least not using normal techniques), so you can't easily verify the output of your bitstream compiler, which is the output of the complex design toolchain.

The toolchain output and bitstream are probably encrypted (or heavily obfuscated at minimum), and this will make it near impossible to audit the component. I don't see how anyone can defend against a SolarWinds-style (i.e. Stuxnet-style) attack in the long run, as a toolchain compromise could introduce arbitrary modification of the logic, would be unlikely to be spotted, and would be very difficult to ever detect once in the field.

pclmulqdq
Bitstream encryption is okay, but someone still has yet to make an FPGA that is truly secure. As far as I know, bitstream encryption on the Xilinx Ultrascale+ devices has been broken, so there isn't a lot of protection there against sophisticated actors.
g_p
I've not had much experience with modern allegedly "secure" FPGAs, but the scenario I see here is if your synthesis tools get compromised by a bad update (or another attack), they could add a little bit of "side logic" to every design. Solarwinds style.

With encrypted output and what I understand to be no ability to carry out QA/evaluation on the bitstream (with encryption meaning binary comparisons shouldn't be meaningful), you can't really gain assurance that the output is as expected.

Of course you could argue that you compare the encrypted bitstream with your "final version", but this only works until you need to use the field programmable aspect to update something, and now you have to take a leap of faith and flash the bitstream, and hope your build environment is good, as there's no real way to check.
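
A minimal sketch of that pin-and-compare check (file names hypothetical) shows how little it buys you:

  import hashlib
  from pathlib import Path

  def sha256(path: str) -> str:
      return hashlib.sha256(Path(path).read_bytes()).hexdigest()

  # Recorded once, at release time, from a build environment you trusted then.
  PINNED = sha256("release/top_v1.bit")

  def verify(candidate: str) -> bool:
      # True only if the candidate bitstream is bit-identical to the pinned
      # one; it says nothing about what the (encrypted/obfuscated) contents
      # actually do, and it breaks on the first legitimate field update.
      return sha256(candidate) == PINNED

  print(verify("build/top.bit"))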

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.