The Great Unlock

Feb 23, 2026

2025 felt like the year the pieces finally lined up for brain and behavior research. Consumer EEG headsets dropped below $300. Signal processing got better. Remote studies stopped feeling like a workaround and started feeling normal. The tools to study the mind are no longer limited to well-funded labs with specialized hardware and a lot of patience.

We built NeuroFusion around a simple bet: research gets much better when people can run studies outside the lab and repeat them often enough to learn from individuals, not only from averages. That is the shift we care about. It is also why this moment matters so much to us.

The field is crowded from both directions. Large incumbents already own pieces of the workflow. Qualtrics dominates surveys. REDCap is standard in many universities. Gorilla powers remote behavioral experiments. BioSemi and Brain Products still anchor a lot of in-person neuroscience. At the other end, PhD students are shipping their own jsPsych tasks, open-source EEG tools keep improving, and startups are racing to become the default research platform. We are building in the middle of that pressure.

We call this moment The Great Unlock.

The bottleneck is still experimentation speed

Brain and behavior research has no shortage of questions. What it lacks is throughput. Good ideas move too slowly from hypothesis to data to iteration.

In the lab, the best groups can build remarkable datasets. The Healthy Brain Network, run by the Child Mind Institute, has collected high-resolution EEG from more than 3,000 participants ages 5 to 21. The dataset includes resting state, cognitive tasks, movie watching, behavioral measures, and psychopathology dimensions. It is public, large, and carefully structured. Even so, the biobank is still mostly a deep baseline snapshot for each participant rather than repeated follow-up sessions over time. You get a rich picture of one visit, not a running picture of how that child changes over months.

That gap shows up everywhere else too. Most labs still run a small study, recruit 30 or 40 participants, spend half an hour placing electrodes, collect a session, and then move on. The work is real and the science matters, but the pace is low because the setup is heavy.

In the clinic, the pattern is similar. A clinician might administer a Montreal Cognitive Assessment during a short appointment and then not see the patient again for months. The data in between is missing. Sleep changes, attention changes, stress changes, medication changes, and none of it is measured often enough to become useful.

At home, the constraints are different. A person with a Muse or similar headset can do a short resting-state recording in a living room before work, after a cold plunge, after a bad night of sleep, or during a period of high stress. They can pair it with self-reports and wearable data. Over weeks and months, that produces a record of change that no single lab visit can match.

The device surface is shifting too. Phones are useful for prompts and check-ins, but a lot of cognitive work still fits better on a tablet. That middle ground is getting better every year. Larger screens, better sensors, and more comfortable study flows make it easier to imagine serious remote assessment moving there first before longer-form recording becomes routine on even lighter devices.

Lab research still matters because control matters. Home research matters because frequency matters. We need both. Right now the tooling is much stronger for the first case than the second.

Three shifts are pushing that balance.

Consumer neurotechnology is no longer niche. Mobile EEG now includes headbands like Muse, headsets like Neurosity Crown, and newer devices aimed at longer recording windows. They are noisier than lab rigs and they have fewer channels, but they are already good enough for a growing set of questions. Muse alone is now represented across hundreds of published studies, which is a better signal of real adoption than any marketing claim about units sold.

Signal processing is much easier to use than it was a few years ago. EEG cleanup used to demand a patient neuroscientist, a lot of MATLAB, and a tolerance for brittle scripts. Now researchers have solid tools such as MNE-Python, FOOOF, and automated artifact rejection workflows. Our own pipeline can take an uploaded recording and return analysis and visualizations in seconds. Fast processing changes the rhythm of research. You can adjust the study and run it again instead of waiting weeks to see whether the last setup was even useful.
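To make that concrete, here is a minimal sketch of the cleanup-to-spectrum step those tools automate, written with scipy on synthetic data rather than our production pipeline: band-pass filter a noisy recording, compute a Welch power spectrum, and read off the dominant rhythm.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 256  # Hz, a typical consumer-EEG sampling rate
t = np.arange(0, 60, 1 / fs)  # one minute of synthetic data

# Synthetic "EEG": a 10 Hz alpha rhythm buried in broadband noise,
# plus a large slow drift standing in for a movement artifact.
eeg = (10 * np.sin(2 * np.pi * 10 * t)
       + 40 * np.sin(2 * np.pi * 0.3 * t)
       + 15 * rng.standard_normal(t.size))

# Band-pass 1-40 Hz to strip the drift and high-frequency noise.
b, a = signal.butter(4, [1, 40], btype="bandpass", fs=fs)
clean = signal.filtfilt(b, a, eeg)

# Welch power spectral density of the cleaned signal.
freqs, psd = signal.welch(clean, fs=fs, nperseg=4 * fs)

# The dominant peak between 1 and 40 Hz should land on the alpha rhythm.
band = (freqs >= 1) & (freqs <= 40)
peak_hz = freqs[band][np.argmax(psd[band])]
print(f"dominant peak: {peak_hz:.2f} Hz")
```

Tools like MNE-Python and FOOOF layer far more on top of this (artifact rejection, channel handling, separating periodic peaks from the aperiodic background), but the core loop of record, clean, decompose, inspect is this short, and that is what makes fast iteration possible.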

Model-based cleanup is pushing this further. Zyphra's ZUNA, a 380M-parameter masked diffusion autoencoder trained on roughly two million channel-hours of EEG, can denoise recordings, reconstruct missing channels, and estimate signals at locations a headset never recorded directly. In practical terms, a short four-channel Muse session collected at home can get much closer to the quality researchers used to expect only from denser systems in controlled settings.

Remote research infrastructure also matured. COVID forced the field to get comfortable with distributed studies. Prolific and similar platforms proved that participant pools exist. The missing piece was a way to combine recurring surveys, cognitive tasks, and brain recordings inside one workflow that a researcher could actually manage. That is the gap we are trying to close.

For incumbents, this shift extends existing products. For small teams, it lowers the cost of shipping something new. Both trends are real.

Why the current stack still falls apart

If the parts exist, why build another platform?

Because research teams still spend too much time stitching tools together. A typical study can involve one system for surveys, another for task delivery, another for EEG acquisition, another for storage, and a pile of ad hoc scripts to align timestamps and formats later. Remote studies make this worse. The participant gets bounced between links, apps, upload flows, and instructions that were never designed to feel like one study.

We have watched teams lose months to that integration work. Data gets dropped at the joins. Participants disappear when the workflow feels fragile. File formats drift. The study becomes harder to trust because the pipeline is harder to understand.

NeuroFusion exists to collapse that sprawl into a single product. Researchers can design a study with prompts, onboarding, consent, experiments, and recordings in one place. Participants can join from mobile, web, or tablet. Behavioral data and brain data land in the same system. Exports come out clean. Analysis can run as the study runs.

That is the product bet: the durable value sits with the platform that owns the whole research workflow.

What we think matters most

The first differentiator is scope. We are trying to put recurring behavioral prompts, jsPsych experiments, consumer EEG recordings, and wearable health data into the same study container. That matters because real research questions often cut across those boundaries. A team should be able to look at Stroop performance, resting-state EEG, sleep, and daily mood without building a custom stack for each project.

The second is privacy. Brain data is sensitive. We do not want identity to be the default price of participation. NeuroFusion uses Nostr for authentication, keeps participation anonymous by default, and avoids forcing an email-first account model when it is unnecessary.

The third is community research. We have seen the value of this in practice. At BrainHack Toronto we completed 25 brain recording sessions from 18 participants in 48 hours. At ZuConnect we ran cold plunge experiments with live EEG. At Edge Esmeralda we recorded brain activity during chess games. Those events made one thing very clear: useful research can happen outside formal institutions if the tooling is good enough.

The fourth is reproducibility. Researchers should be able to understand how data is collected and analyzed rather than trusting a black box. We document our methods, expose structured outputs, and design the platform so that study configurations and analysis pipelines can be shared and reused across teams.

The infrastructure layer we care about

Good science and good models already exist. The problem is that they often sit as isolated assets: a large dataset in cloud storage, a strong model behind a research paper, a preprocessing library that requires too much setup, or a custom analysis notebook that never leaves one lab.

We want that stack to feel usable.

A recording should come through one API whether it was collected at home on a Muse or in a community session on a Neurosity Crown. The same study container should hold participant metadata, prompts, experiment context, and device provenance. Researchers should not have to manage manual uploads and email chains just to keep a dataset coherent.
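As an illustration of what "one API" implies, here is a hypothetical sketch of a unified recording payload. The field names are illustrative, not NeuroFusion's actual schema; the point is that device provenance, anonymous identity, timing, and study context travel together with the samples.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical sketch only: these names illustrate the shape of a
# unified upload, not the real NeuroFusion API.
@dataclass
class RecordingUpload:
    quest_id: str            # study container the recording belongs to
    participant_key: str     # anonymous identifier (e.g. a Nostr pubkey)
    device: str              # provenance: "muse-2", "neurosity-crown", ...
    sampling_rate_hz: int
    channels: list           # channel labels as reported by the device
    started_at: str          # ISO 8601 timestamp, for alignment later
    samples: list = field(default_factory=list)  # channel-major raw data
    context: dict = field(default_factory=dict)  # prompts, task, notes

payload = RecordingUpload(
    quest_id="quest_123",
    participant_key="npub1exampleonly",
    device="muse-2",
    sampling_rate_hz=256,
    channels=["TP9", "AF7", "AF8", "TP10"],
    started_at="2026-02-23T08:30:00Z",
)
body = asdict(payload)  # what a single ingest endpoint would receive
```

When a home Muse session and a community Neurosity session both arrive in this shape, downstream analysis never has to guess where a recording came from or how to line it up with the rest of the study.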

Analysis should also be extensible without turning every project into a software platform effort. We ship built-in pipelines for spectral decomposition, ERP extraction, and common comparisons such as eyes-open versus eyes-closed alpha. Researchers can also attach their own Python scripts to a quest and run them on incoming data, nightly aggregates, or combined datasets across multiple experiments.
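The eyes-open versus eyes-closed comparison is a good example of how small such an attached script can be. This is a toy version on synthetic data, not one of our built-in pipelines: compute average alpha-band power per condition and take the ratio, which should rise well above 1 when closing the eyes boosts the alpha rhythm.

```python
import numpy as np
from scipy import signal

def alpha_power(x, fs, band=(8.0, 12.0)):
    """Average power spectral density in the alpha band for one channel."""
    freqs, psd = signal.welch(x, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

rng = np.random.default_rng(1)
fs = 256
t = np.arange(0, 30, 1 / fs)

# Synthetic conditions: eyes closed carries a strong 10 Hz alpha
# rhythm, eyes open has the same noise but a suppressed rhythm.
eyes_closed = 8 * np.sin(2 * np.pi * 10 * t) + 5 * rng.standard_normal(t.size)
eyes_open = 1 * np.sin(2 * np.pi * 10 * t) + 5 * rng.standard_normal(t.size)

ratio = alpha_power(eyes_closed, fs) / alpha_power(eyes_open, fs)
print(f"closed/open alpha ratio: {ratio:.1f}")  # well above 1 here
```

A researcher-supplied script in this spirit can just as easily pull in Stroop accuracy or sleep duration alongside the spectral features, because everything lives in the same study container.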

That matters because the interesting work rarely lives in one table. A team studying cognitive decline may want to compare Stroop performance, resting-state EEG, sleep, and self-reported energy inside one analysis script. Another team may want to run ZUNA on consumer recordings before building a classifier. Another may want to compute a custom feature set that only makes sense for their protocol. The platform should support all of those cases without forcing researchers to rebuild the collection layer every time.

We also care about model testing in real-world conditions. Most EEG models are evaluated on static datasets. That is useful, but it misses the messier setting where consumer devices are used at home, over time, alongside everyday context. We think one of the most valuable things NeuroFusion can become is a place where models meet that kind of data.

Over time, the data gets more useful. Every quest adds another set of recordings, behavioral responses, timestamps, and context. No single lab can easily collect that kind of longitudinal, multi-modal dataset at scale on its own.

What the progress already looks like

The roots of Fusion go back to a simple question from late 2021: if we already generate so much data across our apps, could we use it to understand how we work and feel a little better?

We started by correlating music, sleep, and work patterns. One of us noticed that neurotech work tended to happen more often when listening to slower music, after early exercise, and on days with a lower sleep heart rate. The correlations were weak, but the experiment felt worth continuing. If small shifts in behavior create measurable differences, what happens when people can study those shifts much more often?

That line of thought became Fusion. We wanted personal data to do more than support ads and dashboards. We wanted it to help people notice patterns in their own lives.

Very quickly we hit the same wall each time. Spotify knew what someone listened to. The phone knew sleep duration. Wearables knew steps and heart rate. None of those systems knew what was happening in the brain while the rest of life unfolded. EEG research already showed that theta power tracks workload and that alpha relates to relaxation and attention. The signal existed. Access to it did not.

In late 2022, our CEO left his role building machine learning platforms for M365 Growth at Microsoft to work on this full time. Before that he had built data systems at scale for both startups and major companies. The problem he wanted to work on next was more personal: better infrastructure for understanding ourselves.

Since then, the work has moved through Lagos, Accra, San Diego, Vancouver, Toronto, Istanbul, Tampa, and Telford. We ran a community event in Lagos where people got excited simply by tracking mood over time. We ran cold plunge experiments at ZuConnect with EEG headsets before participants entered five-degree water. We ran BrainHack Toronto sessions that got people recording in under five minutes. We presented at the Technology in Psychiatry Summit. We worked with the N-CODE consortium on cognitive assessments for preclinical Alzheimer's detection. The settings changed, but the same theme kept returning: once the setup friction drops, more experiments become possible.

Today the platform is live on iOS, Android, and the web. Research groups at the University of Toronto, the University of Plymouth, and the University of Port Harcourt are using it. Quests already support jsPsych 8 experiments, media uploads, Prolific recruitment, organization billing, automated Python analysis scripts, and multi-experiment workflows. The data layer already includes self-reports, cognitive task performance, resting-state EEG, event-related potentials, FOOOF-based frequency analysis, steps, sleep, and heart rate.

We have also released open datasets because we want the data commons we wished existed when we started.

What we are building toward

The long-term goal has not changed since Entry #00: increase the rate of experimentation enough that we can build useful predictive models for a single person.

Most brain research still produces group-level statements. Those results matter, but they do not yet tell an individual what changed for them this week, what usually happens next, or what intervention helped the last time a similar pattern showed up. To get there, you need repeated measurements from the same person across time. That requires tools that fit into everyday life instead of requiring a clinic visit or a lab booking.

A few areas feel especially ready.

Remote cognitive assessment for early neurological detection is high on the list. Alzheimer's affects tens of millions of people. Earlier detection matters. A brief cognitive screen every six months is not enough if the goal is to notice decline before it becomes obvious. We are already working with the N-CODE consortium on computerized assessments for preclinical Alzheimer's detection. We want that workflow to become routine, not exceptional.

Community intervention studies are another. People already experiment with meditation, cold exposure, nootropics, and breathwork. Most of those claims are still thinly measured. We have seen how much can be learned when a community event can run a study over a weekend instead of waiting months for a formal lab setup.

Longitudinal baselines for individuals are the deeper objective behind almost everything else. What does one person's brain activity look like when they are sleeping well, stressed, recovering, focused, or overloaded? How stable are those patterns? What changes first when something is drifting in the wrong direction? We are interested in building the infrastructure that makes those questions answerable.

Mobile and continuous brain recording will matter more as device form factors improve. A platform built for five-minute sessions today should be ready for longer recording windows tomorrow.

Closed-loop experiments also matter. If a task can adapt to a person's cognitive load in real time, each session becomes more informative and more useful.

Foundation models for brain activity are coming into view faster than we expected. Large open datasets such as the Healthy Brain Network and models such as ZUNA are part of that story already. What is still missing is a large body of longitudinal, multi-modal data collected from the same people over time. That is exactly the kind of data our quest system is designed to produce.

The move from "30 participants, one snapshot each" to repeated measurements from thousands of people is the change we care about. Lab research gave us the science. We are building the infrastructure that can make that science more frequent, more personal, and easier to use.

If you want to run a study, this is your moment

The Great Unlock only matters if more people actually use it. If you are a researcher, clinician, builder, community organizer, or simply someone with a question you want to test, we would love to help you turn it into a quest on NeuroFusion.

You do not need a perfect protocol to begin. A good starting point is often a narrow question with a repeatable measurement:

  • Does sleep duration change next-day Stroop performance, resting-state alpha, or self-reported focus?
  • Does a two-week breathwork, meditation, or cold exposure practice shift mood, calm, or spectral power in a measurable way?
  • Can a community run the same short EEG + prompt protocol before and after a shared event, workshop, or intervention weekend?
  • Do subjective energy, heart rate, steps, and short cognitive tasks move together during stress, recovery, or burnout?
  • Can a clinic, lab, or student group collect lightweight longitudinal baselines before a larger in-person study begins?
  • Are there signatures that distinguish a person's "good day" from a "drifting day" when prompts, wearables, and short recordings are combined?

Those are all quest-shaped studies. A quest can combine onboarding, consent, recurring prompts, jsPsych experiments, wearable data, media uploads, and EEG recordings inside one flow that participants can repeat over time.

If you want to pilot a study, run a community challenge, or test a protocol with your own group, reach out to us at contact@usefusion.app or start exploring the NeuroFusion Explorer quest dashboard. We are especially excited to work with early collaborators who want to run small, focused studies that can grow into something much bigger.

We do not take that work lightly. Three years ago we were sketching a way to correlate brain activity with daily behavior on a whiteboard. Today researchers on multiple continents are already doing that with tools we built.

The lab showed what is measurable. The clinic showed what is at stake. Home and community settings are where this becomes routine.

The unexamined life is not worth living. We want to build better tools for examining it.

It is time to build.

By NEUROFUSION Research, Inc.

Thesis · Brain Research · Infrastructure · Open Source