In the era of Trump 2.0, we're calling on all scientists and friends to advocate and unpack science in a public-facing way. This is much bigger than any one of us. But if we all do one thing, imagine the cumulative impact!
Terrific! Paywalls are disappearing! No more 12-month embargo between when NIH-funded work appears in a journal and when it becomes accessible to all (as of December 2025).
Favorite websites to grab free or low cost images?
Great to see NIH sponsor bioart. What are your other go-to websites for images that you can, say, put on websites without violating license agreements?
Fact check gem of the day: On Karl Popper's contribution to neurotransmission
In the early 1950s, neuroscientists were arguing about whether neurons communicate with one another via electricity (sparks) or chemical neurotransmitters (soups). It was known as "The War of the Soups and the Sparks." (Big reveal: It's mostly soups.)
The experiment that put the debate to rest (at least for the spinal cord) was performed in 1950 by John Eccles and colleagues. In that experiment, they demonstrated that their own hypothesis (sparks) was wrong.
What inspired them to do a "disproving" experiment as opposed to the type that would gather support for their favorite theory? In 1944, Eccles met Karl Popper, and they began corresponding. Per one historian,
"The association with Popper made Eccles reformulate his experimental questions in accord with Popper's philosophy that apparent 'authentication' is no proof at all. It is only the clear-cut 'falsification' of a theory that carried intellectual weight." https://pubmed.ncbi.nlm.nih.gov/18617413/
Continuing the compromise that I'll run but only if I get to learn wonderful things, episode 1 of the Santa Fe Institute's Complexity podcast is wonderful. The curiosity of my colleague Vijay Balasubramanian is infectious. Borges, brains, the energy efficiency of abstractions - all there.
The dominant paradigm of experiments in the social and behavioral sciences views an experiment as a test of a theory, where the theory is assumed to generalize beyond the experiment’s specific conditions. According to this view, which Allen Newell once characterized as “playing twenty questions with nature,” theory is advanced one experiment at a time, and the integration of disparate findings is assumed to happen via the scientific publishing process. In this article, we argue that the process of integration is at best inefficient, and at worst it does not, in fact, occur. We further show that the challenge of integration cannot be adequately addressed by recently proposed reforms that focus on the reliability and replicability of individual findings, nor simply by conducting more or larger experiments. Rather, the problem arises from the imprecise nature of social and behavioral theories and, consequently, a lack of commensurability across experiments conducted under different conditions. Therefore, researchers must fundamentally rethink how they design experiments and how the experiments relate to theory. We specifically describe an alternative framework, integrative experiment design, which intrinsically promotes commensurability and continuous integration of knowledge. In this paradigm, researchers explicitly map the design space of possible experiments associated with a given research question, embracing many potentially relevant theories rather than focusing on just one. The researchers then iteratively generate theories and test them with experiments explicitly sampled from the design space, allowing results to be integrated across experiments. Given recent methodological and technological developments, we conclude that this approach is feasible and would generate more-reliable, more-cumulative empirical and theoretical knowledge than the current paradigm—and with far greater efficiency.
@UlrikeHahn@jonny Fascinating! I’m working to flesh out a good analogy for this line of thought. Are you thinking of something maybe chaotic, like the weather? Where small changes to initial conditions have unpredictable long-term effects?
The exceedingly simple logistic equation behaves in this way. https://en.m.wikipedia.org/wiki/Logistic_map In its chaotic regime, start it at 0.2 and it will do one thing; start it at 0.20000001 and it will do the same thing for a while but then diverge. If this simple equation does that, why not the brain?
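You can see this sensitivity in a few lines of Python. This is a minimal sketch (the helper name and parameter choices are mine): it iterates the logistic map x → r·x·(1 − x) with r = 4, which is well inside the chaotic regime, from the two starting points above and tracks how far the orbits drift apart.

```python
def logistic_orbit(x0, r=4.0, steps=60):
    """Return the orbit [x0, x1, ..., x_steps] of the logistic map x -> r*x*(1-x)."""
    orbit = [x0]
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

# Two starting points that differ by one part in twenty million.
a = logistic_orbit(0.2)
b = logistic_orbit(0.20000001)

# Gap between the two orbits at each step: tiny at first,
# roughly doubling per iteration, until the orbits decorrelate entirely.
diffs = [abs(x - y) for x, y in zip(a, b)]
```

The gap grows by at most a factor of 4 per step (the map's maximum slope), so it stays microscopic for the first handful of iterations and then blows up to order 1 within a few dozen, which is the divergence described above.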
But the weather is chaotic and we’ve figured it out insofar as we have equations that can predict it in the near term and we understand why it’s chaotic. I think your point is along the lines of: the equivalent of the 7 equations for weather prediction will be harder to find for the brain. I’m trying to pinpoint: why might we think that, exactly? Because there are likely hundreds? Or they are of a different type?
(No doubt we all agree that a good first step that needs to be made is acknowledging the brain is a dynamical system upfront. We haven’t tried much of that - how far will it take us?)
I'm a patient with a deadly illness that has nearly killed me five times, and I'm also a physician-scientist racing to discover a cure before my time runs out.
Thanks to a drug that I discovered to treat my disease and began testing on myself, I'm currently in my longest remission ever and was able to have a beautiful daughter (2018) and son (2021) with the love of my life.
I dedicate my life to advancing cures for Castleman disease and, through Every Cure, to spreading our innovative approach to many more diseases.