HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Novel Strategies for Sensing and Stimulating the Brain Noninvasively and Precisely

College of Engineering, Carnegie Mellon University · YouTube · 83 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention College of Engineering, Carnegie Mellon University's video "Novel Strategies for Sensing and Stimulating the Brain Noninvasively and Precisely".
YouTube Summary
DARPA has awarded a large grant to three faculty members: Pulkit Grover and Maysam Chamanzar of electrical and computer engineering, and Jana Kainerstorfer of biomedical engineering. Together, they are working to develop a noninvasive way to sense and stimulate brain activity in order to diagnose and treat neurological disease.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Sep 03, 2021 · 83 points, 45 comments · submitted by grawprog
ckemere
The title is misleading. DARPA has funded teams to work on minimally invasive brain interfaces, but AFAIK they have yet to be achieved.
AussieWog93
And as someone who studied this field at a PhD level, I would be shocked if they managed to achieve a level of control on par with even a standard Xbox controller any time within the next two decades.

The whole field is propelled by deliberate misinformation and hype used to farm grants.

pyinstallwoes
Yeah, but have you worked at the DARPA level?
graderjs
Really, is it so hard? I had the impression from seeing those videos of people operating robotic limbs through biofeedback, or typing through biofeedback, or the write-ups on AI-related work to determine words from brainwave patterns, that a lot of this stuff was achieving impressive capabilities.

But maybe the sensory bandwidth required to operate, say, a helicopter is just a completely different order of magnitude from what those channels can carry. What are the main blockers or bottlenecks in this space?

dexwiz
I would imagine some of those videos are the hype and misinformation that the GP is talking about.
AussieWog93
Yes, a lot of the videos shown (such as the pig walking cycle in the Neuralink presentation) deliberately misrepresent the challenges, limitations and lack of progress with the technology in an attempt to get more funding.

I guess you could argue that people writing the grants should perform more due diligence, but at the same time I believe we should be able to trust scientists to act ethically and honestly in the same way we trust doctors.

graderjs
At the same time I reckon we should throw money at stuff even half- (not baked, but, half-) completed...because the pay-off is high (if it works) even if the current state of the art is weak. I get your frustration at the perceived support for a research direction you view as wrong, but I think this is normal, especially in emerging tech. There are often competing approaches, and funding favorites.

But I didn't know this--what you said about the difficulty of signal collection and rejection--and I really value that alternative perspective to the one I've seen in the press about using AI, which you think is overblown. Thank you! :)

AussieWog93
>At the same time I reckon we should throw money at stuff even half- (not baked, but, half-) completed...because the pay-off is high (if it works) even if the current state of the art is weak.

I don't disagree with that line of reasoning in general - moonshot projects have led to amazing discoveries in the past and should definitely be funded.

In this particular case, though, there are both very, very strong arguments to suggest that we reached the peak of what the current paradigm can achieve back in the late 00's - as well as very, very strong (financial and cultural) pressure to ensure all new research projects in the field adhere to the current paradigm.

>I get your frustration at the perceived support for a research direction you view as wrong, but I think this is normal, especially in emerging tech.

It's so, so much deeper than that. The researchers in the area know full well what they're doing is complete bullshit, but toe the line anyway because that's where the funding comes from.

It's not just a wrong research direction but a soft and invisible form of academic fraud. This is not normal (or at least shouldn't be normal), and we should not be feeding the beast.

We could possibly have much, much stronger BCI tech today if the incumbent culture weren't actively impeding innovation.

graderjs
Wow, man. Thanks for sharing more of how you feel about it. I thought it was just equal competing approaches, different camps. But you tell a different story. I really appreciate that.

That's not just fraud, I think...but it's like wasted scientific capacity. You know? They're researchers, they could be working to build something, but instead they're taking money on an approach that can't work. That's sick. Feels bad....

So what do you think's the solution? Like if you could take 10 million dollars to do your own lab with BCI tech what direction would you go in? ;p ;) xx

AussieWog93
>Like if you could take 10 million dollars to do your own lab with BCI tech what direction would you go in?

The honest truth is I have no idea exactly how to solve the problem. I'm a data/algorithms guy, and our skills are completely inapplicable to this field at this time. My original thesis suggested we "somehow" work out how to train humans to produce signals that can be more easily distinguished by a BCI, but I had no idea where to even begin with such a system.

A mainstream BCI will require a better understanding of both how the brain works and what materials can survive inside and coexist with the human body. Or perhaps a new sensor method will come along that greatly increases the quality of the data we have and renders both of those things obsolete.

I think the best way of spending my 10 million would involve simply giving it to smart, frugal people and walking away without asking any questions. I mean that genuinely.

grp000
I thought a lot of those videos worked off of nerve impulses at the terminated end of the limb.
AussieWog93
The guy with the robotic arm that looks like Bryan Cranston (can't remember his name off the top of my head) definitely uses nerve endings at the end of the limb. Others, such as the patient in Krishna Shenoy's 2017 paper, though, use invasive BCIs.
AussieWog93
>I had the impression from seeing those videos of people operating robotic limbs through biofeedback, or typing through biofeedback, or the write-ups on AI-related work to determine words from brainwave patterns, that a lot of this stuff was achieving impressive capabilities.

Sticking to BCI specifically and not other forms of biofeedback, most of those demos you see come from people with invasive, penetrative electrode arrays such as the Utah array. In some people, these sensors can enable a high level of control and performance (not Xbox controller level, but still Atari controller level, which would be incredibly useful for people with Motor Neurone Disease). Unfortunately, the body rejects them after a few months and they're back to square one.

To be honest with you, I don't think that the problem's intractable. Perhaps a clever materials engineer could develop a coating for the electrode arrays that the body accepts long-term and allows the user to train for years rather than months. Perhaps we could develop a new form of sensor altogether.

I don't know specifically what the answer is, but it won't come from the current paradigm of "turn garbage data into non-garbage output through the power of Machine Learning somehow" that we've been stuck in for the better part of two decades. And unfortunately, projects adhering to that paradigm are where all of the funding (and people) go.

kodah
I did cognitive neurotherapy after a series of TBIs. The therapy seemed to work on me, though from what I've heard it's hit and miss. Of course, there's always the chance that I'd have recovered that way anyway, but given the state my academics were in, I doubt it.

For reference, the machine I used employed a bike game that supported movement across the screen. I eventually could make the bike go faster as well as up and down, nearly on command. It focused on alpha, beta, and theta waves measured by electrodes stuck to your scalp. Not sure if that narrows it down, but I'm curious to know your thoughts.
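
For readers wondering what "focusing on alpha, beta, and theta waves" looks like computationally, here is a minimal sketch of a band-power neurofeedback score. The sampling rate, bands, and scoring rule are illustrative assumptions; the device described doesn't publish its algorithm.

    import numpy as np
    from scipy.signal import welch

    FS = 256  # assumed EEG sampling rate in Hz

    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(segment, fs=FS):
        """Mean power per band for a 1-D EEG segment."""
        freqs, psd = welch(segment, fs=fs, nperseg=fs * 2)
        return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in BANDS.items()}

    def bike_speed(segment):
        """Toy control signal: reward beta ("focus") relative to slower rhythms."""
        p = band_powers(segment)
        return p["beta"] / (p["theta"] + p["alpha"])

    # Synthetic noise standing in for a 2-second scalp recording.
    segment = np.random.default_rng(0).standard_normal(2 * FS)
    print(bike_speed(segment))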

systemsignal
Thoughts on openwater? https://www.openwater.cc/

They claim to use light for imaging: https://youtu.be/awADEuv5vWY

There’s also the company Kernel

Both companies have pretty successful founders.

cinquemb
> "turn garbage data into non-garbage output through the power of Machine Learning somehow"

Yeah, I was pretty turned off by this in the lab I've worked at/contracted for.

There's a lot of support for this kind of stuff wrt EEG neurofeedback while ignoring the physics of what could be done with the raw data (if the PI actually understands it, or at least took an E&M class). People in charge of getting the grants don't really care about the fact that we gain no insight into what's going on when we use black-box systems… oh well.

I'd be happy to play around with a BioSemi 128-electrode cap/array + DSP + beamformers [0] and OpenEmu [1] for two decades; definitely not gonna happen in academia though.

[0] https://github.com/cinquemb/LCMVBeamformer

[1] https://github.com/cinquemb/OpenEmu
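
For context, an LCMV beamformer spatially filters multichannel sensor data toward a chosen source location. Below is a minimal numpy sketch of the textbook weight formula, w = R⁻¹a / (aᴴR⁻¹a); it is an illustration, not code from the linked repo.

    import numpy as np

    def lcmv_weights(R, a):
        """Textbook LCMV weights.
        R: (channels, channels) sensor covariance; a: (channels,) leadfield."""
        Rinv_a = np.linalg.solve(R, a)        # avoids forming R^-1 explicitly
        return Rinv_a / (a.conj() @ Rinv_a)   # unit gain toward the source

    # Toy usage: 8 channels of fake data, regularized covariance.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((8, 1000))
    R = X @ X.T / X.shape[1] + 1e-6 * np.eye(8)
    a = rng.standard_normal(8)
    source = lcmv_weights(R, a) @ X           # beamformed source time series
    print(source.shape)                       # (1000,)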

sn41
Won't this mean that you cannot think of anything else while operating the equipment?
omgitsabird
Do you need to not think of anything else to move your arms?
SavantIdiot
Not until you said that. Thanks.
AussieWog93
Ex-BCI research student here.

The nerves that control your arm are wired directly into your central nervous system and can activate your muscles in direct response to action potentials fanned out from a single neuron.

These noninvasive technologies pick up aggregate signals from hundreds of thousands of neurons all firing pseudo-randomly, filtered spatially by the dura (the thick membrane that protects the brain), skull, flesh, skin and hair.

A more "HN" analogy would be like comparing a directly wired Ethernet connection (nerves connected to the CNS) to a side channel attack on an air-gapped system (noninvasive EEG). While randomly running background processes (normally) won't affect the data transmitted to the network, they will absolutely foil most side channel attacks.

omgitsabird
The resolution is the problem, as with all indirect, noninvasive measurement techniques (imaging included) used on living human brains.
zaptrem
This is a great analogy.
westurner
What about with realtime NIRS with an (inverse?) scattering matrix? From https://www.openwater.cc/technology :

> Below are examples of the image quality we have achieved with our breakthrough scanning systems that use just red and near-infrared light and ultrasound pings.

https://en.wikipedia.org/wiki/Near-infrared_spectroscopy

Another question: is it possible to do molecular identification, similar to quantum crystallography, with photons of any wavelength, such as NIRS? Could that count things in samples?

https://twitter.com/westurner/status/1239012387367387138 :

> ... quantum crystallography: https://en.wikipedia.org/wiki/Quantum_crystallography There's probably some limit to infrared crystallography that anyone who knows anything about particles and lattices would know about?

omgitsabird
A couple of studies linked in Wikipedia for this use (one retracted):

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4049706/

Retracted: https://journals.plos.org/plosbiology/article?id=10.1371/jou...

Table with resolution differences between different techniques:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8116487/table/T...

westurner
> Table with resolution differences between different techniques:

Looks like MEG has the best temporal and spatial resolutions.

cknizek
Most NIRS systems are continuous-wave (CW) systems detecting blood oxygenation/chromophore concentration. I mention this because, in the time domain, you're looking at 5-7 seconds to see changes in cerebral oxygenation.

It's also noteworthy that you're dealing with huge amounts of noise and very low resolution, and are limited to the cortical surface. This limits the applications in the BCI domain.

I'm currently researching the feasibility of fast optical imaging. This has to be done in the frequency domain, but may yield temporal resolution in the milliseconds. The downside is finding an incoherent light source that can be modulated fast enough to detect the scattering changes.
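
For the curious, the CW oxygenation measurement described above usually comes down to the modified Beer-Lambert law at two wavelengths. A sketch follows; the extinction coefficients and differential pathlength factors are placeholder values, not calibrated constants.

    import numpy as np

    # Rows: two wavelengths (~760 nm, ~850 nm); columns: [HbO2, HbR].
    # Placeholder extinction coefficients, not calibrated values.
    E = np.array([[1.4, 3.8],
                  [2.5, 1.8]])
    L_CM = 3.0                      # typical source-detector separation in cm
    DPF = np.array([6.0, 5.0])      # placeholder differential pathlength factors

    def hemoglobin_changes(delta_od):
        """Invert dOD(wl) = E @ dC * L * DPF(wl) for dC = [dHbO2, dHbR]."""
        effective = E * (L_CM * DPF)[:, None]
        return np.linalg.solve(effective, delta_od)

    print(hemoglobin_changes(np.array([0.01, 0.02])))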

westurner
You mentioned "time-domain", and I recalled "time-polarization".

From https://twitter.com/westurner/status/1049860034899927040 :

https://web.archive.org/web/20171003175149/https://www.omnis...

"Mind Control and EM Wave Polarization Transductions" (1999)

> To engineer the mind and its operations directly, one must perform electrodynamic engineering in the *time domain*, not in the 3-space EM energy density domain.

Could be something there.

Topological Axion antiferromagnet https://phys.org/news/2021-07-layer-hall-effect-2d-topologic... :

> Researchers believe that when it is fully understood, TAI can be used to make semiconductors with potential applications in electronic devices, Ma said. The highly unusual properties of Axions will support a new electromagnetic response called the topological magneto-electric effect, paving the way for realizing ultra-sensitive, ultrafast, and dissipationless sensors, detectors and memory devices.

Optical topological antennas https://engineering.berkeley.edu/news/2021/02/light-unbound-... :

> The new work, reported in a paper published Feb. 25 in the journal Nature Physics, throws wide open the amount of information that can be multiplexed, or simultaneously transmitted, by a coherent light source. A common example of multiplexing is the transmission of multiple telephone calls over a single wire, but there had been fundamental limits to the number of coherent twisted light waves that could be directly multiplexed.

Rydberg sensor https://phys.org/news/2021-02-quantum-entire-radio-frequency... :

> Army researchers built the quantum sensor, which can sample the radio-frequency spectrum—from zero frequency up to 20 GHz—and detect AM and FM radio, Bluetooth, Wi-Fi and other communication signals.

> The Rydberg sensor uses laser beams to create highly-excited Rydberg atoms directly above a microwave circuit, to boost and hone in on the portion of the spectrum being measured. The Rydberg atoms are sensitive to the circuit's voltage, enabling the device to be used as a sensitive probe for the wide range of signals in the RF spectrum.

> "All previous demonstrations of Rydberg atomic sensors have only been able to sense small and specific regions of the RF spectrum, but our sensor now operates continuously over a wide frequency range for the first time,"

westurner
Sometimes people make posters or presentations for new tech in medicine.

The xMed Exponential Medicine conference / program is in November this year: https://twitter.com/ExponentialMed

Space medicine also presents unique constraints that more rigorously select from possible solutions: https://en.wikipedia.org/wiki/Space_medicine

There is no progress in medicine without volunteers for clinical research trials. https://en.wikipedia.org/wiki/Phases_of_clinical_research

https://clinicaltrials.gov/

AussieWog93
> What about with realtime NIRS with an (inverse?) scattering matrix?

I've purged a lot of the knowledge from that time from my brain, but from what I recall fNIRS takes a long time (on the order of multiple seconds) to take a single reading.

It also only really shows which regions of the brain are receiving more blood supply. While a huge improvement in spatial precision over EEG, it's still not anywhere near the same level of precision that a directly-wired nerve has.

That doesn't mean it's useless technology, just probably not viable for the low-latency, high-accuracy control you'd want for a military drone.

>Another question: is it possible to do molecular identification, similar to quantum crystallography, with photons of any wavelength, such as NIRS? Could that count things in samples?

I have absolutely no idea about molecular identification or quantum crystallography, so probably can't give you a good answer on that one.

mattkrause
The newer systems can sample much faster (10-100 Hz), but they're still limited by the fact that they're measuring a hemodynamic signal that lags behind neural activity.
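
A quick sketch of why faster sampling doesn't remove that lag: the optical signal is (roughly) neural activity convolved with a slow hemodynamic response. The double-gamma HRF below is the common illustrative form, not any specific system's model.

    import numpy as np
    from scipy.stats import gamma

    fs = 10                                           # 10 Hz optical sampling
    t = np.arange(0, 30, 1 / fs)
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)   # peaks ~5 s after onset
    hrf /= hrf.sum()

    neural = np.zeros(t.size)
    neural[0] = 1.0                                   # brief neural event at t = 0
    measured = np.convolve(neural, hrf)[:t.size]
    print(t[measured.argmax()])                       # ~5 s lag despite 10 Hz sampling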
westurner
Which other strong and weak forces could [photonic] sensors detect?

IIUC, they're shooting for realtime MRI resolution with NIRS, to be used to assist surgeons in realtime during surgery.

edit: https://en.wikipedia.org/wiki/Neural_oscillation#Overview says brainwaves are 1-150 Hz? IIRC compassion is achievable on a bass guitar.

mrtnmcc
I'd written an EEG signal processing and inference backend for a drone control system that did this in 2010. It picked up on mu rhythms from the motor cortex.

It flew around campus at UIUC.

https://ieeexplore.ieee.org/abstract/document/5509671/

I would be skeptical they could do this in a battlefield situation; we always found you had to be very still to avoid creating surface electric potentials, which drown out EEG.
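
For anyone unfamiliar, mu-rhythm control like this typically boils down to tracking 8-13 Hz power over the motor cortex: imagining movement suppresses it (event-related desynchronization). A minimal sketch, with a made-up threshold and synthetic data in place of a real C3 recording:

    import numpy as np
    from scipy.signal import welch

    FS = 250  # assumed EEG sampling rate in Hz

    def mu_power(c3_segment, fs=FS):
        """Mean 8-13 Hz power in a segment from an electrode over motor cortex."""
        freqs, psd = welch(c3_segment, fs=fs, nperseg=fs)
        return psd[(freqs >= 8) & (freqs <= 13)].mean()

    def imagery_detected(segment, baseline):
        """Flag motor imagery when mu power drops well below resting baseline.
        The 60% threshold is a made-up illustration."""
        return mu_power(segment) < 0.6 * baseline

    # Synthetic noise standing in for 1-second recordings.
    rng = np.random.default_rng(7)
    baseline = mu_power(rng.standard_normal(FS))
    print(imagery_detected(rng.standard_normal(FS), baseline))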

akomtu
Do you know if there's a detectable EM signal when the subject alternates between two images or colors in imagination? Say, black-red or black-white. The two colors would supposedly correspond to different energy levels in the brain, and that would generate the EM wave. The idea was inspired by an old trick: even without WiFi hardware, a processor can emit an EM wave by alternating between two clock rates.
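
The clock-rate trick itself is easy to demonstrate in simulation: toggling between two emission levels at a steady rate puts a clean spectral line at the alternation frequency, even under heavy broadband noise. Whether the brain produces a usable analogue is exactly the open question; the numbers here are arbitrary.

    import numpy as np

    fs, seconds, f_alt = 1000, 10, 7        # sample rate (Hz), duration (s), toggle rate (Hz)
    t = np.arange(fs * seconds) / fs
    state = np.floor(2 * f_alt * t) % 2     # square wave toggling at 7 Hz

    # Two "energy levels" plus broadband noise.
    rng = np.random.default_rng(3)
    emission = np.where(state == 1, 1.0, 0.3) + 0.5 * rng.standard_normal(t.size)

    spectrum = np.abs(np.fft.rfft(emission - emission.mean())) ** 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    band = (freqs > 1) & (freqs < 50)
    print(freqs[band][spectrum[band].argmax()])  # ~7.0: the alternation shows up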
mrtnmcc
In my experience, visual imagery/imagination is not discernible in EEG; it simply doesn't produce large-scale modulations like the motor cortex does.

However, with MRI or fMRI I believe it is possible; I have seen papers even reconstructing visualized numbers.

mrtnmcc
Following up after more thought: we had seen "tongue motor imagery" (imagination of moving the tongue) as clearly discernible in EEG. The motor cortex also has eye muscle mappings nearby. http://mrtnmcc.com/wp-content/uploads/2013/12/motor_imagery....

So it is conceivable that motor control of the eye, or imagination of it, could be detected. In the case of imagining dark or light, perhaps this could affect the pupil/iris dilation muscles.

akomtu
What range of the EM spectrum is analysed, and what's the sensitivity? I suspect that motor commands produce a blunt low-frequency signal, while emotions and thoughts produce something barely discernible.
twarge
Here's a USB-powered unshielded MEG sensor that you can just carry out into the woods:

https://arxiv.org/abs/2001.03534

So that's new.

lawrenceyan
Are there any ongoing clinical trials? This definitely looks like groundbreaking research. What does the safety profile look like in comparison to using implanted devices to treat epilepsy, etc.?
lawrenceyan
Did some searching myself, found this pretty useful: https://neuro.psychiatryonline.org/doi/10.1176/appi.neuropsy...
pingeroo
Is this real?
justinclift
Not yet. The whole article is about future stuff being researched, with no actual results so far.

Whoever submitted this went with a click bait title that doesn't reflect the article.

mattkrause
It's not entirely their fault--the drone angle actually came from DARPA.

DARPA organizes its research funding as contracts, with specific milestones and deliverables. For this program, they had two technical milestones where participants had to demonstrate that their brain interfaces could "read" control signals from the brain with a certain number of degrees of freedom: three DOF, then five, for one track; five, then ten, for the other. There were other requirements too, like a maximum latency for the decoder.

The system's capability was to be demonstrated by a human participant doing some kind of DoD-relevant task. The program announcement specifically called out "controlling a drone" as a good way to demonstrate that the system meets these technical requirements. Three DOF could correspond to up/down, left/right, and forward/backward; add in pitch and yaw to make five.

While you probably could propose something else, the program announcement literally told you what they wanted, so....

Source: wrote an (unfunded) proposal for the same program.

graderjs
Unfortunately there seems to have been a bit of that on the front page lately, like the article yesterday claiming Australia had decided to implement social credit, which again was just a half-year-old parliamentary committee proposal that wasn't anywhere near becoming law.
mattkrause
This is perhaps a little different, because the drone angle actually did come from DARPA.

We wrote a proposal for the same program, and it specifically called out "flying a drone [in VR]" as a way that they wanted to see your system's capabilities demonstrated.

The real aims of the program were to "read out" signals from the brain that would let you control an external device. Depending on the phase and level of invasiveness, these signals were supposed to contain three, five, or ten degrees of freedom. It was also supposed to have a fairly low decoding latency (~100 ms).

At the end of the program, a human participant was supposed to demonstrate the system by performing a DOD-relevant task with it. These requirements map nicely onto moving something through space: 3DOF corresponds to x/y/z; add in yaw and pitch to get to five, and a second object gets you ten.
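
Purely to make the DOF counting concrete, here's one way a decoder's output vector could map onto drone commands at three versus five degrees of freedom. This is an illustration of the comment above, not the program's actual interface.

    from dataclasses import dataclass

    @dataclass
    class DroneCommand:
        vx: float           # forward/backward
        vy: float           # left/right
        vz: float           # up/down
        pitch: float = 0.0  # +2 DOF for the five-DOF track
        yaw: float = 0.0

    def decode_to_command(decoded):
        """decoded: 3 or 5 floats from the BCI decoder (~100 ms latency goal)."""
        padded = list(decoded) + [0.0] * (5 - len(decoded))
        return DroneCommand(*padded)

    print(decode_to_command([0.2, -0.1, 0.5]))            # 3-DOF phase
    print(decode_to_command([0.2, -0.1, 0.5, 0.0, 0.3]))  # 5-DOF phase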

graderjs
That's awesome! Thanks for sharing :)

I think it was just the original title that seemed off -- suggesting they'd already built it ;p :) xx

black_13
And who needs this?
dang
Url changed from https://idstch.com/technology/biosciences/darpa-n3-developin..., which points to this (and does nothing else).

Submitters: "Please submit the original source. If a post reports on something found on another site, submit the latter." - https://news.ycombinator.com/newsguidelines.html

Readers: note that some of the comments are complaining about the earlier submission, or rather its title ("DARPA Developed Nonsurgical Brain Interfaces to Control Drones Using Thoughts")—unsurprisingly, since that's literally all that was there—other than ads, of course.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.