Case File
EFTA02508316 (efta-02508316), DOJ Data Set 11, Other
Date: Unknown
Source: DOJ Data Set 11
Reference: efta-02508316
Pages: 2
Persons: 0
Extracted Text (OCR)
Text extracted via OCR from the original document. May contain errors from the scanning process.
From: jeffrey E. <[email protected]>
Sent: Friday, March 9, 2018 10:40 AM
To: Joscha Bach
Subject: Re:
I would think of it more as space / field effects, not recursive algorithms.
> wrote:
Last week I got to know Steve Hyman, Daniel Kahneman and Bob Horvitz. Telefonica invited all of us to a two-day workshop with Pablo Rodriguez, Ken Morse and a few others, where we were meant to advise them on how to use AI for health applications. I told them that I think the goal of therapeutic invention is not to increase happiness, but integrity. Happiness is merely an indicator, not the benchmark. Current apps tend to subvert the motivation of people, but I don't think that this is necessary or the best strategy. Humans are meant to be programmable, not subverted. They perceive their programming as "higher purpose". If we can come from the top, supporting purpose, instead of from the bottom, subverting attention, we might be more successful. (Downside might be that we create cults.)
Of the bunch, Hyman managed to be the most interesting (Kahneman was very charismatic but mostly tried to see if he could identify an application for his system one/system two theory). Gary Marcus was there, too, but annoyed everyone by being too insecure to deal with his incompetence.
Did I tell you that I discovered that Deep Learning might be best understood as second order AI?
First order AI was the classical AI that was started by Marvin Minsky in the 1950s, and it worked by figuring out how we (or an abstract system) can perform a task that requires intelligence, and then implementing that algorithm directly. It yielded most of the progress we saw until recently: chess programs, databases, language parsers etc.
Second order AI does not implement the functionality directly; instead, we write the algorithms that figure out the functionality by themselves. Second order AI is automated function approximation. Learning has existed for a long time in AI of course, but Deep Learning means compositional function approximation.
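A minimal sketch of the distinction in plain Python (a toy illustration with made-up names, assuming NumPy is available): in first order AI we implement the target function directly; in second order AI we only write a generic approximator plus a fitting procedure, and the training examples determine the function.

import numpy as np

# First order: we implement the desired functionality directly.
def celsius_to_fahrenheit(c):
    return c * 9.0 / 5.0 + 32.0

# Second order: we write an algorithm that figures the function out from examples.
# Here: a generic linear approximator fitted by least squares.
c_examples = np.array([0.0, 10.0, 20.0, 37.0, 100.0])
f_examples = celsius_to_fahrenheit(c_examples)           # training data

X = np.stack([c_examples, np.ones_like(c_examples)], axis=1)
w, *_ = np.linalg.lstsq(X, f_examples, rcond=None)       # learn slope and offset

def learned_converter(c):
    return w[0] * c + w[1]

print(learned_converter(25.0), celsius_to_fahrenheit(25.0))   # ~77.0 vs 77.0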
Our current approximator paradigm is mostly the neural network, i.e. chained normalized weighted sums of real values that we adapt by changing the weights with stochastic gradient descent, using the chain rule. This works well for linear algebra and the fat end of compact polynomials, but it does not work well for conditional loops, recursion and many other constructs that we might want to learn. Ultimately, we want to learn any kind of algorithm that runs efficiently on the available hardware.
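As a minimal sketch of that paradigm (toy code, assuming NumPy; the task, sizes and step counts are arbitrary): a two layer network of squashed weighted sums, with the weights adapted by stochastic gradient descent via a hand-written chain rule, fitted to XOR.

import numpy as np

# Chained, squashed weighted sums; weights adapted by stochastic gradient
# descent using the chain rule (manual backprop). Toy task: XOR.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)    # layer 1: weighted sums
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)    # layer 2: weighted sums
lr = 0.5

for step in range(5000):
    i = rng.integers(0, 4)                         # "stochastic": one sample per step
    x, t = X[i:i+1], y[i:i+1]

    # forward pass: chained weighted sums, squashed by nonlinearities
    h = np.tanh(x @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output

    # backward pass: chain rule, layer by layer
    d_out = out - t                                # gradient of cross-entropy loss w.r.t. the output logit
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)            # tanh derivative
    dW1, db1 = x.T @ d_h, d_h.sum(0)

    # SGD update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# should print values close to 0, 1, 1, 0
print(np.round(1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2))), 2))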
Neural network learning is very slow. The different learning algorithms are quite similar in the amount of structure they can squeeze out of the same training data, but they need far more passes over the data than our nervous system.
The solution might be meta learning: we write algorithms that learn how to create learning algorithms. Evolution is meta learning. Meta learning is going to be third order AI and might trigger a wave similar to the one deep learning did.
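A deliberately tiny sketch of that idea (illustrative only, assuming NumPy; the task and parameters are made up): an outer, evolution-style loop mutates and selects a parameter of the inner learning algorithm, i.e. an algorithm that searches for a better learning algorithm.

import numpy as np

# Outer loop: evolutionary search over the *learning algorithm's* parameter
# (here just the SGD step size). Inner loop: the learning algorithm itself,
# fitting a linear model to a freshly sampled task with a small step budget.
rng = np.random.default_rng(0)

def sample_task():
    """A random linear regression task y = w.x + noise."""
    w_true = rng.normal(0, 1, 3)
    X = rng.normal(0, 1, (32, 3))
    y = X @ w_true + 0.01 * rng.normal(0, 1, 32)
    return X, y

def inner_learn(lr, steps=20):
    """The learning algorithm whose hyperparameter we are meta-learning."""
    X, y = sample_task()
    w = np.zeros(3)
    for _ in range(steps):
        i = rng.integers(0, len(X))
        grad = (X[i] @ w - y[i]) * X[i]        # gradient of squared error on one sample
        w -= lr * grad
    return np.mean((X @ w - y) ** 2)           # final loss = fitness signal

# Evolution-style meta learning: mutate the step size, keep it if the learner improves.
lr = 1e-4
best = np.mean([inner_learn(lr) for _ in range(8)])
for generation in range(50):
    candidate = lr * np.exp(rng.normal(0, 0.5))        # mutate in log space
    score = np.mean([inner_learn(candidate) for _ in range(8)])
    if score < best:
        lr, best = candidate, score

print(f"meta-learned step size: {lr:.3g}, average inner loss: {best:.4f}")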
I intend to visit NYC for a workshop at NYU on the weekend of the 16th.
We just moved into a new apartment; the previous one had only two bedrooms and this one has three, so I can have a study. It seems that we are as lucky with the new landlords as with the previous ones.
Bests, and thank you for everything!
Joscha
> On Mar 8, 2018, at 16:37, jeffrey E. <[email protected]> wrote:
> progress?
> --
>
> please note
> The information contained in this communication is confidential, may be attorney-client privileged, may constitute inside information, and is intended only for the use of the addressee. It is the property of JEE. Unauthorized use, disclosure or copying of this communication or any part thereof is strictly prohibited and may be unlawful. If you have received this communication in error, please notify us immediately by return e-mail or by e-mail to [email protected], and destroy this communication and all copies thereof, including all attachments. copyright - all rights reserved
Technical Artifacts (3): Email addresses, URLs, phone numbers, and other technical indicators extracted from this document.
Related Documents (6)
- EFTA02039071 (DOJ Data Set 10, Other, date unknown, 48 pages)
- From: Jeffrey Epstein <[email protected]> (DOJ Data Set 9, Other, date unknown, 2 pages)
- Brin Se e (DOJ Data Set 9, Other, date unknown, 7 pages)
- EFTA Document EFTA01976399 (DOJ Data Set 10, Correspondence, date unknown, 0 pages)
- From: "Jeffrey E." <[email protected]> (DOJ Data Set 9, Other, date unknown, 2 pages)
- EFTA01976399 (DOJ Data Set 10, Other, date unknown, 2 pages)