Case File d-37684 (House Oversight / Other)

Neural Foundations of Learning and Neural Darwinism excerpt


Date
November 11, 2025
Source
House Oversight
Reference
House Oversight #013175
Pages
1
Persons
0
Integrity
No Hash Available

Summary

The passage discusses theoretical neuroscience concepts and references Nobel laureate Gerald Edelman's work. It contains no allegations, financial details, or connections to political or intelligence matters. It describes Hebbian learning applied to cell assemblies, explains Edelman's Neural Darwinism theory, and mentions probabilistic logic networks and CogPrime.

Tags

neuroscience, neural-darwinism, gerald-edelman, theory, house-oversight, hebbian-learning



Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
13.5 Neural Foundations of Learning

The weight of a synaptic bundle pointing from assembly A1 to assembly A2 may be defined as the number w such that (the change in the mean activation of A2 that occurs at time t+epsilon) is on average closest to w·x (where x is the amount of energy flowing through the bundle from A1 to A2 at time t). So when A1 sends an amount x of energy along the synaptic bundle pointing from A1 to A2, then A2's mean activation is on average incremented or decremented by an amount w·x.

In a similar way, one can define the weight of a bundle of synapses between a certain static or temporal activation-pattern P1 in assembly A1, and another static or temporal activation-pattern P2 in assembly A2. Namely, this may be defined as the number w such that (the amount of energy flowing through the bundle from A1 to A2 at time t)·w best approximates (the probability that P2 is present in A2 at time t+epsilon), when averaged over all times t during which P1 is present in A1. It is not hard to see that Hebbian learning on real synapses between neurons implies Hebbian learning on these virtual synapses between cell assemblies and activation-patterns.

These ideas may be developed further to build a connection between neural knowledge representation and probabilistic logical knowledge representation such as is used in CogPrime's Probabilistic Logic Networks formalism; this connection will be pursued at the end of Chapter 34, once more relevant background has been presented.

13.8.3 Neural Darwinism

A notion quite similar to Hebbian learning between assemblies has been pursued by Nobelist Gerald Edelman in his theory of neuronal group selection, or "Neural Darwinism." Edelman won a Nobel Prize for his work in immunology, which, like most modern immunology, was based on F. Macfarlane Burnet's theory of "clonal selection" [Bur62], which states that antibody types in the mammalian immune system evolve by a form of natural selection. From his point of view, it was only natural to transfer the evolutionary idea from one mammalian body system (the immune system) to another (the brain).
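The virtual-synapse weight w described in this section is chosen so that w times the energy flowing from A1 to A2 best predicts the change in A2's mean activation, averaged over time. That amounts to a one-parameter least-squares fit, which can be sketched as follows; the function name and synthetic data are illustrative, not from the text.

```python
import random

# Hypothetical sketch (names and data invented): the virtual-synapse
# weight w between assemblies A1 and A2 is defined so that w * x_t, where
# x_t is the energy flowing from A1 to A2 at time t, best approximates the
# change in A2's mean activation at time t + epsilon. Averaged over time,
# that is a one-parameter least-squares fit.

def fit_bundle_weight(energy, activation_change):
    """The w minimizing sum_t (activation_change[t] - w * energy[t])**2."""
    num = sum(x * d for x, d in zip(energy, activation_change))
    den = sum(x * x for x in energy)
    return num / den if den else 0.0

# Synthetic time series: true weight 0.7, observed with small noise.
random.seed(0)
true_w = 0.7
energy = [random.uniform(0.0, 1.0) for _ in range(500)]
delta = [true_w * x + random.gauss(0.0, 0.05) for x in energy]

w = fit_bundle_weight(energy, delta)
print(round(w, 2))  # recovers a value close to 0.7
```

A Hebbian update on the underlying real synapses would shift this fitted w over time, which is the sense in which Hebbian learning on neurons induces Hebbian learning on the virtual synapses between assemblies.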
The starting point of Neural Darwinism is the observation that neuronal dynamics may be analyzed in terms of the behavior of neuronal groups. The strongest evidence in favor of this conjecture is physiological: many of the neurons of the neocortex are organized in clusters, each containing, say, 10,000 to 50,000 neurons. Once one has committed oneself to looking at such groups, the next step is to ask how these groups are organized, which leads to Edelman's concept of "maps." A "map," in Edelman's terminology, is a connected set of groups with the property that when one of the inter-group connections in the map is active, others will often tend to be active as well.

Maps are not fixed over the life of an organism. They may be formed and destroyed in a very simple way: the connection between two neuronal groups may be "strengthened" by increasing the weights of the neurons connecting the one group with the other, and "weakened" by decreasing those weights. If we replace "map" with "cell assembly," we arrive at a concept very similar to the one described in the previous subsection.

Edelman then makes the following hypothesis: the large-scale dynamics of the brain is dominated by the natural selection of maps. Those maps which are active when good results are obtained are strengthened; those maps which are active when bad results are obtained are weakened. And maps are continually mutated by the natural chaos of neural dynamics, thus providing new fodder for the selection process. By use of computer simulations, Edelman and his colleagues have shown that formal neural networks obeying this rule can carry out fairly compli-
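The selection rule stated above can be illustrated with a toy simulation. This is a minimal sketch, not Edelman's actual model, and every parameter (population size, increments, mutation scale) is invented: maps active during good outcomes are strengthened, maps active during bad outcomes are weakened, and random mutation keeps supplying variation.

```python
import random

# Toy sketch of map selection (illustrative only; all parameters are
# invented). A population of "maps" competes: on each trial one map
# becomes active with probability proportional to its strength, is
# strengthened if its activity yields a good result and weakened
# otherwise, and random mutation perturbs strengths each trial.

random.seed(1)

strengths = [1.0] * 8   # eight competing maps, initially equal
good_map = 3            # the one map whose activity produces good results

def pick_active(strengths):
    """Roulette-wheel choice: a map is active with probability ~ strength."""
    r = random.uniform(0.0, sum(strengths))
    acc = 0.0
    for i, s in enumerate(strengths):
        acc += s
        if r <= acc:
            return i
    return len(strengths) - 1

for trial in range(300):
    active = pick_active(strengths)
    if active == good_map:                  # good result: strengthen
        strengths[active] += 0.05
    else:                                   # bad result: weaken
        strengths[active] = max(strengths[active] - 0.05, 0.01)
    # "Natural chaos": mutate one map's strength a little each trial.
    j = random.randrange(len(strengths))
    strengths[j] = max(strengths[j] + random.gauss(0.0, 0.01), 0.01)

print(strengths.index(max(strengths)))  # the rewarded map should dominate
```

Under this rule the rewarded map's strength ratchets upward while its competitors decay toward the floor, so selection concentrates activity on the map that produces good results, with mutation providing the variation the selection acts on.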
