HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
2012-04-07 - Lab-Tools FPGA Binning Co-processor for Apl

LabToolsInstruments · Youtube · 25 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention LabToolsInstruments's video "2012-04-07 - Lab-Tools FPGA Binning Co-processor for Apl".
Youtube Summary
This video describes an example USB2 interfaced FPGA co-processor for use with the array processing language Apl.
This co-processor implements a single scientific algorithm: binning.
As such it is an example of what can be achieved using field-programmable gate array co-processors with Apl.
The FPGA module used is the Morphic II, available from FTDI. This module has the advantage that it does not require a USB Blaster to program it.
Software, firmware and modules are also available from Lab-Tools Ltd, who are developing a series of credit-card-sized interface and data-I/O modules to interface with them.
The FPGA co-processor is accessed and controlled from AplX, a version of the array processing language Apl, from Micro-Apl.
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Jun 14, 2014 · 25 points, 8 comments · submitted by chesterfield
pflanze
I don't understand why binning 1 million values would take more than a second on the CPU. I've tested the following in C:

  int len= 1000000;
  int numbuckets= 1000;
  double* vec= ...; // malloc(sizeof(double)*len) and fill with values between 0..1
  unsigned long* res= calloc(numbuckets, sizeof(unsigned long));
  int i;
  for (i=0; i<len; i++) {
      int ii= vec[i] * numbuckets;
      res[ii]++;
  }
This runs in 8 ms on my elderly laptop.
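Fleshed out into a self-contained program, the test might look like this (the rand()-based fill and clock()-based timing are assumptions added here, not necessarily what was used):

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  int main(void) {
      int len= 1000000;
      int numbuckets= 1000;
      double* vec= malloc(sizeof(double)*len);
      unsigned long* res= calloc(numbuckets, sizeof(unsigned long));
      int i;
      // fill with values in [0,1); rand() stands in for real data
      for (i=0; i<len; i++) {
          vec[i]= (double)rand() / ((double)RAND_MAX + 1.0);
      }
      clock_t t0= clock();
      for (i=0; i<len; i++) {
          int ii= vec[i] * numbuckets;  // safe: vec[i] < 1.0
          res[ii]++;
      }
      clock_t t1= clock();
      printf("binning took %.1f ms\n", (t1-t0) * 1000.0 / CLOCKS_PER_SEC);
      free(vec);
      free(res);
      return 0;
  }
Compiled with gcc -O1, matching the flags mentioned further down.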

It would be slower if the individual bins weren't of equal size, but the examples shown in the video seem to suggest that they are, and even with roughly a 10x slowdown for a binary search it would still be faster than the USB 2.0 transfer. All without using a GPU, multiple threads or SIMD.

thesz
Make more buckets, or make bins with different bounds (not all equal in width to 1/numbuckets). All of a sudden your algorithm will do much more work, due to cache misses and the need to perform conditional jumps.
pflanze
That does not explain it. Yes, it will be slower with more buckets; but by how much?

  numbuckets uint64  uint32
  1000         6 ms    5 ms
  10000        6 ms    5 ms
  100000       6 ms    7 ms
  1000000     14 ms    8 ms
  10000000    38 ms   32 ms
  20000000            38 ms
  40000000            40 ms
  80000000            44 ms
40 ms is still >50 times faster than the "software algorithm" in the video. And he didn't use 40 million buckets there, as can be seen from the graphs.

> make bins with different bounds

You'd have to implement this, use a huge number of buckets, and likely still use an older CPU than he did to get down to his software timings. Or use a slow language implementation (interpreted, or boxing arithmetic results). Or implement a slow algorithm like a linear search over the buckets, which would explain the roughly factor-of-2 difference between his two algorithms, the faster one stopping once the bucket is found (see the sketch below). In any of these cases, pending a better explanation, he either doesn't know how to write fast code, or deliberately left the software version far slower than possible to make the FPGA version shine.
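To make that factor-of-2 guess concrete, here is a purely hypothetical sketch of the two linear-scan variants (we don't know what the video's software actually does; bounds holds the sorted upper edges of the buckets):

  // hypothetical slow variant: tests every bucket's range and never
  // stops early, so it always scans all nbuckets bounds
  int bin_linear_full(double v, const double* bounds, int nbuckets) {
      int k, hit= nbuckets-1;
      for (k=0; k<nbuckets; k++) {
          if (v < bounds[k] && (k == 0 || v >= bounds[k-1]))
              hit= k;
      }
      return hit;
  }

  // hypothetical faster variant: returns at the first bound above v,
  // scanning about half the bounds on average, hence roughly 2x faster
  int bin_linear_early(double v, const double* bounds, int nbuckets) {
      int k;
      for (k=0; k<nbuckets-1; k++) {
          if (v < bounds[k])
              return k;
      }
      return nbuckets-1;
  }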

pflanze
I've now implemented the binary search version. I was right: the factor of 10 was about correct, and even with 20 million buckets the algorithm is still faster than the FPGA, and more than 3 times faster than his faster software algorithm, even though he's got a newer CPU (I'm running it single-threaded on a Core 2 Duo at 2.5 GHz). And again, he's really using something between about 1000 and 100000 buckets, which would make my program on my machine between 17 and 32 times faster than his:

  buckets   gcc -O1 gcc -O3
  1000       75 ms   72 ms
  10000     105 ms  102 ms
  100000    135 ms  136 ms
  1000000   243 ms  240 ms
  10000000  580 ms
  20000000  693 ms
(Results in my previous post were with gcc -O1.)

All results so far were with random numbers in a non-uniform distribution: vec[i]= (i / len) * random_between(0.,1.) (in C, i / len would need a cast to double to avoid integer division). Here are the results for the binary search with vec[i]= random_between(0.,1.):

  buckets  gcc -O1
  1000       82 ms
  10000     115 ms
  100000    148 ms
  1000000   360 ms
  10000000  735 ms
  20000000  860 ms
(He's using a Gaussian distribution, not a uniform one.)

Here's the code:

  int nvals= ..;
  double* vals= ..;
  int nbuckets= ..;
  double* buckets= ..;    // sorted bucket boundaries
  unsigned int* res= ..;  // counts, zero-initialized

  int i;
  for (i=0; i<nvals; i++) {
      double val= vals[i];
      int bi;
      {
          // binary search: bi becomes the index of the first
          // boundary that val is below
          int lo=0, hi=nbuckets;
          while (lo < hi) {
              int mid= (lo+hi) / 2;
              if (val < buckets[mid]) {
                  hi= mid;
              } else {
                  lo= mid+1;
              }
          }
          bi= lo;
      }
      res[bi]++;
  }
(Embedded within a Scheme driver program: http://christianjaeger.ch/scratch/binning.scm)
sparky
Aliasing conflict!

This is about doing histogramming on an FPGA, not a tool to group FPGA dice into speed grades :)

Would be interesting to see how an FPGA stacks up against a GPU on this problem. GPUs are very fast at parallel histogramming, but hardwiring the number-to-bin-index computation for a particular problem instance might buy you quite a bit of energy efficiency, or even possibly a bit of performance if the computation involves a divide. If the bins are of non-uniform size and the indexing computation involves a binary search on a lookup table, it seems like the much higher clock speeds on a GPU would win out.
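As a side note on the divide: for uniform bins, the per-value divide can be strength-reduced to a multiply on any architecture (a hypothetical sketch; the function names are made up):

  // naive: one divide per value
  int bin_index_div(double v, double lo, double width) {
      return (int)((v - lo) / width);
  }

  // equivalent: precompute inv_width= 1.0/width once, then multiply
  int bin_index_mul(double v, double lo, double inv_width) {
      return (int)((v - lo) * inv_width);
  }
For non-uniform bins no such trick applies, which is where the lookup-table binary search comes in.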

sitkack
GPU clocks aren't really that high. FPGAs win when it comes to combinatorial logic; if-statements kill GPUs. My money is on the FPGA, by a wide margin.
TD-Linux
Most FPGAs can't handle very high clocks either (~400 MHz max; 100 MHz is more realistic for the Xilinx Artix chips). A huge fanout with a ton of comparators isn't going to run that fast.

I imagine binning on an ISA with conditional execution wouldn't pay much of a jump penalty. Even with jumps, as long as they are predicted correctly it's fine.

The big limitation here is probably the USB 2.0 speed: 40 MB/s is not a lot compared to a CPU's bandwidth to main memory.
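To put a number on that: the 1,000,000 doubles from the benchmark above are 8 MB, and at 40 MB/s that's roughly 200 ms spent just moving the data over USB 2.0, versus the 8-44 ms CPU binning times quoted above.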

sitkack
GPUs just recently hit 800+ MHz; they were at 600 MHz for a long while. As for the USB bottleneck: I am speaking generically, assuming an equal footing (memory bandwidth, bus, etc.). The specific device (binning on an FPGA) as a co-processor for a computer system doesn't really make sense from a speed standpoint. I saw this more as a proof of concept for augmenting APL with an FPGA. Of course, APL itself should just get compiled down to the GPU.

It does make for a great benchmark, though. Eventually FPGA fabric and GPU compute will merge into an oatmeal-with-raisins consistency. I'd argue that with embedded multipliers, FPGAs are already there. Altera has an OpenCL SDK.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.