Case File
EFTA01793418 (DOJ Data Set 10, Other)
Date
Unknown
Source
DOJ Data Set 10
Reference
EFTA01793418
Pages
4
Persons
0
EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
From:
Joi Ito
Sent:
Tuesday, October 22, 2013 11:26 AM
To:
Epstein Jeffrey
Subject:
Fwd: MDF
Attachments:
signature.asc
BTW, getting going with Joscha. He's smart. Let me know if you're interested in joining the brain threads.
Begin forwarded message:
> From: Joscha Bach
> Subject: Re: MDF
> Date: October 21, 2013 23:56:09 -0400
> To: Joi Ito
> Cc: takashi ikegami, Kevin Slavin, Greg Borenstein, Ari Gesher, Martin Nowak
> Hi Takashi, hi Ari, hi all,
> Finally I got around to looking at Takashi's talks and his 2010 ACM article. The first thing
> that came to mind was the distinction between "neat" and "scruffy" AI, which might be
> described as the clash between folks that wanted to construct AI by adding function after
> function, vs. those that want to take a massively complex system and constrain it until it
> only does what it is supposed to do.
> The idea of starting from massive data flows is very natural and theoretically acknowledged,
> even if it is often practically neglected. Cognition, by and large, is an organism's attempt
> to massively reduce complexity, by compressing, encoding, selectively ignoring, abstracting,
> predicting, controlling it. Thus, it seems natural to focus on the mechanisms that handle
> this complexity reduction, which I think is exactly what most research in computer vision,
> machine learning, classification, robot control etc. is doing. A lot of the work on problem
> solving and learning within cognitive science even works _only_ on the highest level of
> abstraction, i.e. grammatical language, regular concept structures, ontologies and so on.
> If I understand Takashi correctly, he points towards another perspective (please forgive and
> correct me if I should oversimplify too much here):
> 1. Cognitive systems do not only need to reduce complexity, but also build it (for instance,
> take simple cues or abstract input and use them to feed a rich, heterogeneous, ambiguous and
> dynamic forest of representations).
> 2. Cognitive processes that work directly on and with high-complexity data are under-explored.
> 3. The study of systems that are immersed in such complexity might open the door to
> understanding intelligence and cognition.
> There is really much more in Takashi's talk, but let me respond to these in turn:
> 1. I believe that cognition is really about handling massive data flows, by encoding them in
> ways that the cognitive agent can handle and use to fulfill its demands. This works mostly by
> identifying apparent regularities and turning them into perceptual categories, features,
> objects, concepts, ontologies and so on. Our nervous system offers several levels and layers
> of such complexity reduction, the first one of course at the transition between sensory
> inputs and the peripheral nervous system (for physiological, tactile, proprioceptive input),
> or, in the case of visual perception, the compression we see between retina and optic nerve.
> The optic nerve transmits massively compressed data from the retina to the thalamus, and from
> there to the striate cortex (the primary visual cortex, V1). V1 is the lowest level of a
> hierarchy of visual and eventually semantic processing regions: from here, the dorsal and
> ventral processing streams head off into the rest of the cortex. V1 contains filtering
> mechanisms, which basically look for blobs, edges, movements, directions and so on, based on
> local contrasts. V2 organizes these basic features into a map of the visual field, including
> contours, V3 detects large, coherently moving patterns, V4 encodes simple geometric shapes,
> V5 seems to take care of moving objects, and V6 of self-motion. The detection of high-level
> features always projects back onto the lower levels, to anticipate and predict the
> lower-level features that should be isolated based on the higher-level perceptual hypothesis.
> The story is similar for auditory processing, and eventually the integration of basic visual
> and auditory percepts into semantic content: at each level, we take extremely rich and
> heterogeneous patterns and reduce their complexity.
> The transformation from concepts to language also represents another, incredible level of
> complexity reduction.
> The highest complexity reduction, however, takes place at the interface between conscious
> thought and all the other processes. I believe that the prefrontal cortex basically holds a
> handful of pointers into the associative cortical representations, skimming off only a
> handful of objects, relations or features at a time, and bringing them into the conscious
> focus of attention.
> The perspective of the need for staying at a complex level is entirely warranted, though:
> there are many intermediate representations that allow cognitive processes only if the
> complexity stays high, and might even need to increase it. This includes many sensor-motor
> coordination processes, but also most creative, more intuitive exploration.
> This is not the same complexity as the one at the input, however! This is a level where data
> is already split into modalities, semantically organized and so on. On the other hand, it is
> much more complex than linguistic or cognitively accessible types of mental content.
> 2. Scientists tend to have a fixation on thinking with language, and it is quite natural to
> fall for abstract, a-modal representations, such as predicate logic systems or extensions of
> these when it comes to modeling cognition and problem solving. This might explain the
> fixation of cognitive architectures like ACT-R and Soar on rule-based representations, and
> the similar approaches of a lot of work in classical AI.
> On the other hand, there is a lot of work on learning and classification to handle vast
> complexity, with the goal of reducing it. A particularly beautiful example was Andrew Ng's
> work on deep learning, where his group took 30 million randomly chosen frames from YouTube
> and trained an unsupervised neural net to make sense of them. They ended up with
> spontaneously emerging detectors for many typical object categories, including cats and
> human faces. I could not avoid thinking of that paper when Takashi mentioned his fascination
> with looking at TV pixels directly... --> http://arxiv.org/pdf/1112.6209.pdf
> Thus, the typical strategies seem to encompass "abstract-to-abstract" cognition and
> "complex-to-abstract" cognition. What about "abstract-to-complex" and "complex-to-complex"?
> Most of the existing approaches to "complex-to-complex" cognition are not really cognitive,
> such as Ansgar Bredenfeld's "Dual Dynamics" architecture, or Herbert Jaeger's Echo State
> Networks. The current proponents of such complex cognition are also often radical
> embodimentalists (cognition as an extension of sensorimotor control, neglecting dreams,
> creativity, imagination, and capabilities for abstract thinking).
> 3. The idea of getting to artificial intelligence _just_ by "looking at" complex data flows
> (blind deep learning) is not new. I think that there are at least two aspects to it: deriving
> a content structure that allows the identification and exploitation of meaningful semantic
> relationships (for instance, discerning space, color, texture, causal order, social
> structure, ... for instance simply by analyzing all of YouTube, or by collecting data from a
> robotic body and camera in a physical world), and the integration of that structure with an
> architecture that is capable of thought, language, intention, goal-directed action, decision
> making, and so on. The former is tricky, the latter impossible. Complexity itself does not
> define intentional action, and the differences between individuals and species should not be
> reduced to differences in complexity perceived by the respective agents. I agree that we
> need to gain a much better understanding of "complex-to-complex" cognition, but that must
> integrate, not replace, what we already know about the organization of cognitive processes.
> I am certain that our current models are a long way off from capturing the richness of
> conscious experience of our inner processes, and even more so from the much greater
> complexity of those processes that cannot be experienced.
> Another interesting point I gathered from Takashi's talk is the idea of something we might
> call "hyper-complex" cognition. The complexity handled by our human minds (as well as that
> of Andrew Ng's deep-learning YouTube-watching networks) builds on very simple stimuli. But
> what if the atoms themselves are abstract or highly complex, for instance because they are
> already semantic internet content? The cognitive agents handling those elements may
> essentially be operating at a level above human cognition if they are capable of operating
> on that complexity without reducing it. Unlike humans, who are forced to translate and
> reduce all content into their individual frame of reference, and access it only through a
> single perspective at a time, artificial agents do not need to obey such restrictions.
> Today's Big Data moniker probably marks just the beginning of the abilities of machines to
> make sense of abstract and complex input data.
> Cheers,
> Joscha
>> Fascinating. Ikegami is taking a very interesting tack:
>>
>> http://www.youtube.com/watch?v=tOLIHhjNIBc
>> http://sacral.c.u-tokyo.ac.jp/pdf/ikegami_ACM_2010.pdf
>>
>> For me, this is similar to the discussions that you and I and Kevin have been having about
>> auto-didactism: starting from complexity rather than abstraction (which is generally
>> antithetical to academic learning). It would seem to me that most artificial intelligence
>> research has started from abstraction (and forgive my ignorance if I'm off base here) and
>> attempted to build up to complexity. My very cursory look at Joscha's MicroPsi work seems
>> to show an approach moving in the direction of what Ikegami did with the MTM, away from the
>> classical abstraction-first approach. MicroPsi places its constructs in a reduced-fidelity
>> virtual environment, has lower-level abstractions, and brain structures/dynamics
>> pre-synthesized for things like motivation, emotion (please correct me if I'm off base -
>> like I said: cursory). The brain structures in living systems have evolved as low-energy
>> means of processing brain signals (both sensory data flows and internally routed streams)
>> once they have shown fitness - ultimately, they were sand-blasted into their shape by
>> generations of massive data flows. We have an understanding of what purpose they serve but
>> not a good understanding of how they work (maybe I'm behind on the state of the art in
>> neuroscience on that point?).
>>
>> Ikegami is starting from the complexity and seeing what emerges - which seems to me to
>> mirror the rise of consciousness in natural systems. Mind is the surfer that hangs on the
>> eternal wave of the massive data flow of sensory input without wiping out. Somehow, the
>> reality of the temporally continuous observer arose from exposure to sensory data flows and
>> the evolution of the complexity of the brain. Ikegami is shortcutting the snail's pace of
>> the physical evolution of natural systems by synthesizing a neural network of sufficient
>> complexity as well as high-resolution sensors.
>>
>> Thinking about modern synthetic data flows (you know.... the internet!) as being as rich
>> as sensory data leads one to imagine some interesting possibilities in a) whimsically, the
>> spontaneous emergence of consciousness and b) practically, new techniques for dealing with
>> that massive data flow that mimic something like natural consciousness. There's nothing in
>> the practical world of big data that really looks like the MTM (that anyone is talking
>> about - who knows what lurks in the high-frequency trading clusters busily humming in the
>> carrier hotels). Everything that Google and Facebook and the like seem to be doing is much
>> simpler than anything like this.
>>
>> On Oct 19, 2013, at 9:37 AM, Joi Ito wrote:
>>
>>> http://www.dmi.unict.it/ecal2013/workshops.php#4th-w
>>>
>>> - Joi
Please use my alternative address, [email protected], to avoid email auto-responder