Even Perfect Decoder-only BCIs will have less signal and precision than motor channels (*HANDS*)

Send your thoughts via Twitter or mail. Status: speculative

Thesis: Even with perfect neural decoding, a read-only BCI is slower and more error-prone than motor channels (HANDS 🙌🤝🤟) for tasks that demand explicit reference and commitment.

Below are a few arguments for this. All of them have potential engineering workarounds, but those mostly trade precision for mental friction, yielding an equal or worse signal/effort ratio.

No built-in gating: Muscle control has natural "push-to-submit" affordances (press, grip force, eye movement) that idle thoughts can't trigger; gating is native to muscles. A BCI would have to implement this with deliberate mental mode-switching (a neural codec) to keep accidental commands from leaking through.
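To make that workaround concrete, here is a minimal sketch of a software "push-to-submit" gate, assuming a hypothetical decoder that emits (intent, confidence) pairs at a fixed tick rate; the threshold and dwell values are illustrative, not tuned:

```python
from collections import deque

CONF_THRESHOLD = 0.9   # decoder confidence required for a tick to count
DWELL_TICKS = 15       # consecutive ticks (~1.5 s at 10 Hz) before committing

def gated_commands(tick_stream):
    """Yield a command only after sustained, high-confidence intent.

    tick_stream: iterable of (intent, confidence) pairs from a hypothetical
    decoder. Low-confidence or flickering intents never commit.
    """
    streak = deque(maxlen=DWELL_TICKS)
    for intent, confidence in tick_stream:
        streak.append(intent if confidence >= CONF_THRESHOLD else None)
        full = len(streak) == DWELL_TICKS
        if full and len(set(streak)) == 1 and streak[0] is not None:
            yield streak[0]
            streak.clear()  # require a fresh dwell for the next command
```

Note what the gate costs: every commit now takes ~1.5 s of deliberately held intent, which is exactly the precision-for-friction trade described above.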

Sample budget vs. task complexity: In a 30-minute session we can harvest ~100–1,000 informative bits about the user's latent taste vector θ. That is far below the theoretical peak of ≈1.8×10⁴ bits that Markus Meister's 10 bits per second would deliver with perfect scaffolding (10 bits/s × 60 s/min × 30 min = 18,000 bits), because most neural traffic is spent on gating, reference, redundancy, and error correction. So 30 minutes of clean, calibrated preference signals constrain only a low-dimensional surface of θ; the full high-dimensional preference manifold remains underdetermined UNLESS you inject strong priors or add extra motor channels.
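The back-of-envelope, with the overhead fraction made explicit as an assumption (the 95% figure is illustrative, not measured):

```python
PEAK_RATE_BPS = 10           # Meister-style behavioral throughput, bits/s
SESSION_S = 30 * 60          # 30-minute session

peak_bits = PEAK_RATE_BPS * SESSION_S          # 18,000 bits

# Assumed overhead: fraction of traffic eaten by gating, reference,
# redundancy, and error correction before any bit lands on θ.
OVERHEAD = 0.95
informative_bits = peak_bits * (1 - OVERHEAD)  # 900 bits, inside 100–1,000

# At ~1 bit per resolved binary preference comparison, ~900 bits pin down
# only a few hundred effective dimensions of θ.
print(peak_bits, round(informative_bits))
```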

Weak references and composition: Without external anchors (gaze, pointing, speech), decoded thought can't bind variables or scope arguments as tightly as tokenized language does. Directionally, studies find that homonym confusion and symbol mis-binding stay high even after lots of training.
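A toy illustration of why anchors tighten binding, assuming a hypothetical decoder score (`decoded_match`) for how well each visible object fits the decoded description, plus an optional gaze fixation:

```python
import math

# Hypothetical scene: two lookalike files produce a homonym-style clash in
# the decoded description; the distractor scores low.
scene = [
    {"id": "report_v1.pdf", "pos": (120, 340), "decoded_match": 0.48},
    {"id": "report_v2.pdf", "pos": (610, 200), "decoded_match": 0.47},
    {"id": "cat.png",       "pos": (400, 500), "decoded_match": 0.05},
]

def resolve(scene, gaze=None, sigma=80.0):
    """Pick the object best matching the decoded description,
    optionally sharpened by a Gaussian gaze-anchor likelihood."""
    def score(obj):
        s = obj["decoded_match"]
        if gaze is not None:
            d2 = (obj["pos"][0] - gaze[0]) ** 2 + (obj["pos"][1] - gaze[1]) ** 2
            s *= math.exp(-d2 / (2 * sigma ** 2))
        return s
    return max(scene, key=score)["id"]

print(resolve(scene))                   # unanchored: near coin flip between the reports
print(resolve(scene, gaze=(600, 210)))  # anchored: report_v2.pdf
```

Without the anchor, the two reports are a near coin flip; one fixation collapses the ambiguity. That collapse is the work gaze and pointing do for free in motor channels.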

Drift: Neural statistics drift session-to-session. Users have to retrain and run meta-loops (recalibration, decoder updates) that motor channels simply don't need.
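A minimal sketch of the kind of meta-loop this forces, assuming per-channel feature arrays from calibration and from a later session; the mean-shift test and its threshold are illustrative stand-ins for a real drift monitor:

```python
import numpy as np

def needs_recalibration(calib_feats, session_feats, z_thresh=3.0):
    """Flag a session whose per-channel feature means drifted from calibration.

    calib_feats, session_feats: arrays of shape (samples, channels).
    A crude mean-shift test; real pipelines also track covariance and decoder
    accuracy -- this only illustrates the extra loop hands don't need.
    """
    mu = calib_feats.mean(axis=0)
    sd = calib_feats.std(axis=0) + 1e-9
    se = sd / np.sqrt(len(session_feats))
    z = np.abs(session_feats.mean(axis=0) - mu) / se
    return bool((z > z_thresh).any())

rng = np.random.default_rng(0)
day1 = rng.normal(0.0, 1.0, size=(500, 16))
day7 = rng.normal(0.4, 1.0, size=(500, 16))  # simulated session-to-session drift
print(needs_recalibration(day1, day7))       # True -> retrain before use
```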

Overall, as a system, motor output gives us crisper tokens and built-in gating: throttling and pausing come naturally, and "wrong commits" are minimized.

Current input devices are a strong baseline, and zero-shot decoding of highly idiosyncratic preferences via brain reading is unrealistic even with perfect hardware. There is also no exact shared coordinate system: alignment with external artifacts and categories is approximate at best (the symbol-grounding problem).

Read-only BCIs can give users hands-free/covert control and continuous implicit signals (error-related potentials, arousal, attention). BUT it's still unclear how much these lower-level, parallel precursor brain signals are actually worth. My stance as of today is that our idiosyncratic, conscious choices are where 99% of the differentiator and value is, i.e. they produce the bulk of the "actualized" perceived output difference between person A and person B.

Provisional Metric Sketch

Initial ideas for metrics to base discussions on. A toy scoring harness for the first metric follows the table.

| Metric | Definition | What it tests |
| --- | --- | --- |
| Spurious activation rate | Commands/hour during passive monitoring (reading, talking, daydreaming). Subject told "system is on but ignore it." Typing/speech baseline: ~0. | Gating failure: idle thought leaking through as commands. |
| Referential error rate | % wrong selections in a 20-object cluttered array. Compare: (a) BCI alone, (b) BCI + gaze, (c) speech + gaze, (d) mouse. Intent: "select [object]" or latent equivalent. | Binding failure: grounding mental content to external referents without tokens + pointing. |
| Cross-session throughput decay | Correct commands/min on day 1 vs. day 7, zero recalibration. User picks their speed/accuracy tradeoff. Motor baseline: ~0% drift. | Drift tax: decoder rot rate. |
| Constrained task time | Seconds for a 10-step file task (create/rename/move/delete) at a ≤5% error ceiling. BCI vs. keyboard + mouse. Tasks force reference ("move that file"), composition, gating. | Overall system loss: if 3× slower at a matched error budget, "perfect decoding" is still mogged by hands. |
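As a starting point, a toy scoring harness for the first metric; the log format is an assumption:

```python
from dataclasses import dataclass

@dataclass
class SessionLog:
    """Hypothetical log of a passive-monitoring session ("system on, ignore it")."""
    duration_h: float
    emitted_commands: int  # commands the gated decoder actually committed

def spurious_activation_rate(log: SessionLog) -> float:
    """Commands/hour while the subject was not trying to issue any."""
    return log.emitted_commands / log.duration_h

# Typing/speech baseline is ~0 by construction; any nonzero BCI rate is gating leakage.
bci = SessionLog(duration_h=1.5, emitted_commands=7)
print(f"{spurious_activation_rate(bci):.1f} spurious commands/hour")  # 4.7
```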