How a dancer with ALS used brainwaves to perform live

(electronicspecifier.com)

34 points | by 1659447091 5 hours ago

2 comments

  • usui 42 minutes ago
    The featured video does not explain which signals produce which outcomes; they basically just say "we use machine learning" while outputting a dance. At 07:10 it looks like the person chooses between two binary options, "sad" and "relieved". Unfortunately I doubt the person has anywhere near as much real-time input to the performance as is implied. Dentsu is also an advertising agency in Japan, so this seems like more marketing than it is technical.

    Dances by physical humans are choreographed beforehand, but a live performer can always interrupt or deviate into unchoreographed movement at any time. I have a hard time believing that this person's brainwaves are mapping to and driving the hologram's motion in a specific 3D space, beyond instructing it which mood preset to use at a given time.

    Setting aside the marketing of the ALS story, I guess I'm wondering how this is different from a Michael Jackson hologram performance where someone adjusts the sliders for mathematical functions live?

    • EEBio 26 minutes ago
      I am pretty sure you’re right; they are probably recording alpha waves, possibly combined with heart rate.

      Decoding limb joint movements from EEG scalp recordings is basically an unsolved problem (we can barely do it in the lab with implants), so I doubt an advertising company has cracked it.

  • MajorTakeaway 4 hours ago
    Now is a really good time to contribute to https://openeeg.sourceforge.net/doc/ as far as EEG is concerned. There are myriad things that can be observed with EEG, and it would honestly be a good project to see grow over time.
    • EEBio 44 minutes ago
      There is quite a lot of freely available EEG software for different paradigms (one such collection is MOABB - Mother of All BCI Benchmarks), and there’s a huge number of scientific articles.

      The biggest bottleneck for a hobbyist is that most EEG paradigms require somewhat expensive hardware, and most still don’t work well with scalp recordings outside a lab environment, even on mid-cost devices.

      There’s also the issue that classifiers usually have to be quite simple because datasets are small: sessions are time-consuming to record, and after you remove noisy epochs you have even less data left. Cross-session and cross-subject learning rarely works, since EEG depends on so many factors: the subject’s brain anatomy, the type and precise placement of the electrodes, the amount of gel (or lack thereof) and how dried out it is, the subject’s mood and focus, and a huge number of environmental factors that influence that focus.

      The only paradigm I have seen work a bit more reliably than the others is steady-state visual evoked potentials (SSVEP), because you have extra information that doesn’t need to be learned from the EEG: the subject’s occipital lobe oscillates at roughly the same frequency as the visual stimulus they are attending to, and that stimulus frequency is known in advance.
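      To make the SSVEP idea concrete, here is a minimal sketch with synthetic data (nothing here comes from the article; the sampling rate, flicker frequencies, and noise level are all made up for illustration). It simulates one occipital channel while the subject attends a 12 Hz flicker, then picks the candidate stimulus frequency with the most signal power, using the Goertzel algorithm as a cheap single-bin DFT:

```python
import math
import random

FS = 250  # assumed sampling rate in Hz (typical for consumer EEG amps)

def goertzel_power(signal, fs, freq):
    """Power of `signal` at `freq` Hz, via the Goertzel algorithm
    (a single-bin DFT, cheaper than a full FFT when you only care
    about a handful of known frequencies)."""
    n = len(signal)
    k = round(n * freq / fs)          # nearest DFT bin to `freq`
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in signal:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def classify_ssvep(signal, fs, candidate_freqs):
    """Pick whichever known stimulus frequency has the most power."""
    return max(candidate_freqs, key=lambda f: goertzel_power(signal, fs, f))

# Synthetic "occipital" trace: 2 seconds of a 12 Hz response plus noise.
random.seed(0)
t = [i / FS for i in range(2 * FS)]
trace = [math.sin(2 * math.pi * 12.0 * ti) + random.gauss(0, 0.5) for ti in t]

print(classify_ssvep(trace, FS, [8.0, 10.0, 12.0, 15.0]))  # → 12.0
```

      Real SSVEP decoders usually use fancier methods (e.g. canonical correlation analysis across several channels), but the core idea is the same: look for power at the known stimulus frequencies instead of trying to learn an arbitrary mapping from the EEG.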