HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Shared memory parallelism in Julia with multi-threading | Cambridge Julia Meetup (May 2018)

The Julia Programming Language · Youtube · 10 HN points · 6 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention The Julia Programming Language's video "Shared memory parallelism in Julia with multi-threading | Cambridge Julia Meetup (May 2018)".
Youtube Summary
A fast, efficient scheduler for parallel programming

Parallel programming is essential for high performance on current processor and system architectures. Many libraries include carefully tuned parallel implementations of their functionality, but using these in a parallel program requires effective nested parallelism. We show that existing threading models such as fork-join (OpenMP) and work-stealing (Cilk/TBB/Go) fail to nest parallel code effectively, making these highly tuned parallel libraries near useless to parallel programs; a state of affairs we do not wish to inflict on Julia. A decade-old idea, parallel depth-first scheduling, offers hope. We show the promise of this approach and report on our implementation of PDF scheduling in Julia.

Speaker:
Kiran Pamnany is a research scientist in Intel’s Parallel Computing Lab. His research interests include programming models, high performance computing and compilers. He is the author of Julia’s existing experimental threading runtime as well as Julia’s forthcoming parallel task runtime, and is one of the people responsible for Julia joining the petaflop club with Celeste. He has a PhD in computer science from Brown University.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Nov 26, 2019 · oconnor663 on Julia v1.3
Does the Rust Rayon library have the issue the speaker was describing, where parallel inner loops mostly get executed serially once an outer loop is parallelized? (I assume it does, from its Cilk heritage?) Are there workarounds for when that is a problem?

Edit: Dumb mistake on my part. I was looking at the video linked in the top comment (https://www.youtube.com/watch?v=YdiZa0Y3F3c) and then I came back to this thread and thought it was the topic.

GolDDranks
Which speaker are you talking about? The linked post is the Julia 1.3 release notes, which don't mention Rayon at all.
oconnor663
Oops, dumb mistake on my part. I was referring to the video linked in the top comment: https://www.youtube.com/watch?v=YdiZa0Y3F3c
Nov 26, 2019 · falkaer on Julia v1.3
By far the most interesting part of this release is the new multi-threading features, I recommend reading https://julialang.org/blog/2019/07/multithreading for an overview, and watching https://www.youtube.com/watch?v=YdiZa0Y3F3c for a talk about some unique stuff that's being worked on with Julia's multi-threading (though the depth-first scheduling is not in 1.3 afaik).
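The headline feature described in that blog post is composable task parallelism: a task can itself spawn parallel tasks, and the runtime is expected to nest them efficiently rather than serialize them. As a rough cross-language sketch (Python's `concurrent.futures`, not Julia's `Threads.@spawn` API; the `outer`/`inner` functions and worker counts here are made up for illustration), nested spawning looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def inner(x):
    # Stand-in for a tuned inner kernel (e.g. a BLAS call).
    return x * x

def outer(pool, base):
    # Each outer task spawns further parallel work; a composable
    # scheduler must nest this efficiently instead of running the
    # inner tasks one at a time.
    futures = [pool.submit(inner, base + i) for i in range(3)]
    return sum(f.result() for f in futures)

with ThreadPoolExecutor(max_workers=4) as pool:
    # Two outer tasks, each spawning three inner tasks.
    outer_futures = [pool.submit(outer, pool, b) for b in (0, 10)]
    results = [f.result() for f in outer_futures]

print(results)  # [0+1+4, 100+121+144] = [5, 365]
```

Note that naive nesting like this is fragile: if all workers were occupied by outer tasks blocked on their inner futures, the pool would deadlock. Avoiding that kind of pathology while keeping inner work co-scheduled is exactly what the depth-first scheduler discussed in the talk is about.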
Consider a machine with N cores running a program which spawns N parallel tasks, each of which in turn spawns N parallel tasks. Which subtasks do the cores work on initially? If it's the first subtask of each top-level task, that's breadth-first scheduling. If it's all N subtasks of the first top-level task, that's depth-first scheduling. The latter is better for high-performance computing because it tends to be the lower level libraries that are really carefully tuned to make optimal use of the machine (think BLAS/FFT/parallel sum/prefix sum). Previously it was unknown how to implement this kind of scheduling efficiently. See this talk by Kiran Pamnany for more details:

https://www.youtube.com/watch?v=YdiZa0Y3F3c
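The breadth-first vs. depth-first distinction above can be made concrete by listing which (outer, inner) subtasks the cores pick up first (a minimal Python sketch of the ordering only, not of any real scheduler; N and the task indices are hypothetical):

```python
# With N top-level tasks that each spawn N subtasks, which subtasks
# do the N cores start on?
N = 3
subtasks = [[(outer, inner) for inner in range(N)] for outer in range(N)]

# Breadth-first: each core takes the first subtask of a *different*
# top-level task, so sibling subtasks of one task run far apart.
breadth_first = [subtasks[outer][0] for outer in range(N)]

# Depth-first: all cores cooperate on the subtasks of the *first*
# top-level task, keeping a carefully tuned inner library's parallel
# work together on the machine.
depth_first = subtasks[0][:N]

print(breadth_first)  # [(0, 0), (1, 0), (2, 0)]
print(depth_first)    # [(0, 0), (0, 1), (0, 2)]
```

In the depth-first order, the inner library (say a parallel BLAS) gets all N cores at once, which is what it was tuned for; in the breadth-first order it effectively runs on one core per top-level task.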

dwohnitmok
Ah I see what you mean! I really hate to pile on with the Haskell observation but the 1995 paper by Blelloch that is mentioned in that talk is exactly the heart of GHC's data parallelism as well (under the name nested data parallelism).

See slide 8 here: https://www.microsoft.com/en-us/research/wp-content/uploads/...

Later versions of Blelloch's language are referred to in other parallel Haskell papers.

FWIW I think this is nonetheless fantastic news for Julia! It's certainly poised to do great work in areas that I doubt will get any Haskell traction anytime soon (if ever) for good reasons.

EDIT: Interesting, I think Julia is taking a coarser grained approach here that breaks down a bit for irregularly balanced execution trees, but relies less on fusion (i.e. doesn't use fusion at all) which is nice. I'm curious to see how this particular implementation of the "depth first" idea goes!

EDIT EDIT: Actually I'm not sure how much fusion is necessary for GHC's approach... Time to investigate more closely. Also call out to anyone more familiar with Data Parallel Haskell to chime in. I'm getting out of my depth.

StefanKarpinski
I'm not surprised that Haskell supports this kind of parallelism, although I wasn't aware of it previously. After all, if you're going to the trouble of being that pure, why not be parallel?
Check out https://www.youtube.com/watch?v=YdiZa0Y3F3c which explains the new features of the task scheduler in a simple fashion.

Basically, Julia is employing a new algorithm to schedule tasks where traditional methods would get stuck or fall back to serial execution. Essentially, when you run a job with nested parallel tasks and one thread finishes early, it's difficult to decide what to schedule on the free thread, since related tasks may still be working on shared data on the busy threads; you need a clever way to allocate computational resources. Julia is implementing the results of a paper on how to do this near-optimally.

This work has been a monumental collaborative effort and I think the eventual outcome will be pretty spectacular.

I gave a talk at an MIT Julia meetup a few months ago that the author of this PR also spoke at; I found his talk to be an insightful overview of this work: https://www.youtube.com/watch?v=YdiZa0Y3F3c

Oct 23, 2018 · 10 points, 0 comments · submitted by boromi
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.