WEBVTT

00:19.579 --> 00:22.020
This week , we will be discussing the

00:22.020 --> 00:24.379
fundamental tension between storing

00:24.379 --> 00:26.729
discrete traces of individual

00:26.729 --> 00:29.180
experiences , which allows us to recall

00:29.180 --> 00:31.347
particular moments in our past without

00:31.347 --> 00:33.330
interference , and how that's in

00:33.330 --> 00:35.419
tension with extracting regularities

00:35.419 --> 00:37.659
across these experiences , which

00:37.659 --> 00:39.826
supports generalization and prediction

00:39.826 --> 00:41.937
in similar situations in the future .

00:42.520 --> 00:44.687
Doctor Anna Schapiro is joining us today

00:44.687 --> 00:46.880
to lead our discussion . She is an assistant

00:46.880 --> 00:48.602
professor in the Department of

00:48.602 --> 00:50.380
Psychology at the University of

00:50.380 --> 00:52.602
Pennsylvania , where her research draws

00:52.602 --> 00:54.799
on neuroimaging , behavioral , and

00:54.799 --> 00:57.229
computational modeling techniques to

00:57.229 --> 00:59.062
investigate how humans learn and

00:59.062 --> 01:01.173
consolidate information across time .

01:01.400 --> 01:03.344
So here now , over to you , Doctor

01:03.344 --> 01:05.456
Schapiro , to discuss a neural network

01:05.456 --> 01:07.720
model of how the hippocampus learns

01:07.720 --> 01:09.553
representations of specifics and

01:09.553 --> 01:11.609
generalities over time . Thank you .

01:11.790 --> 01:13.957
Great . Thank you . Thanks so much for

01:13.957 --> 01:15.901
having me . I'm very excited

01:15.901 --> 01:18.068
about um interacting with this group .

01:18.110 --> 01:20.110
Please feel free to stop me and ask

01:20.110 --> 01:22.709
clarification questions or deeper

01:22.709 --> 01:24.876
questions , whatever you're interested

01:24.876 --> 01:27.750
in . Um , but I will dive right in here .

01:27.830 --> 01:31.260
So , um , I'm gonna start with this

01:31.260 --> 01:33.980
really broad question , which is , how

01:33.980 --> 01:36.300
does the brain encode new information

01:36.300 --> 01:38.660
from the environment ? And one way to

01:38.660 --> 01:40.771
think about this question is to think

01:40.771 --> 01:43.379
about two ends of a representational

01:43.379 --> 01:45.540
spectrum . Um , from localist or

01:45.540 --> 01:47.651
pattern separated representations all

01:47.651 --> 01:49.262
the way to fully distributed

01:49.262 --> 01:51.779
representations . Um , so these words

01:51.779 --> 01:54.410
are used in very different ways in , um ,

01:54.620 --> 01:56.620
different literatures and sometimes

01:56.620 --> 01:58.676
even within the same literature . So

01:58.676 --> 02:00.898
I'm just gonna start by defining what I

02:00.898 --> 02:02.898
mean by these terms and why I think

02:02.620 --> 02:02.898
that this is an important way of

02:02.620 --> 02:04.989
thinking about how we represent

02:04.989 --> 02:08.029
information . So , um , I'm gonna kind

02:08.029 --> 02:10.085
of play this out in these little toy

02:10.085 --> 02:12.140
neural network models here . So in a

02:12.140 --> 02:14.251
localist representation , what I mean

02:14.251 --> 02:16.307
is that some input comes in from the

02:15.789 --> 02:16.307
environment . There's some pattern of

02:15.789 --> 02:17.733
activity across this little hidden

02:17.733 --> 02:20.067
layer that's gonna represent that input .

02:20.067 --> 02:22.178
And then when some new input comes in

02:22.178 --> 02:24.400
from the environment , we're just going

02:23.710 --> 02:24.400
to make sure that there is a

02:23.710 --> 02:26.630
non-overlapping population of neurons

02:26.630 --> 02:28.797
or units here that will represent that

02:28.797 --> 02:31.399
new information . As opposed to a

02:31.399 --> 02:33.566
distributed representation where we

02:33.566 --> 02:35.732
will allow there to be overlap in that

02:35.732 --> 02:37.566
internal representation of these

02:37.566 --> 02:39.621
different pieces of information . So

02:39.621 --> 02:41.343
Jeff Hinton said a distributed

02:41.343 --> 02:43.510
representation is where each computing

02:43.510 --> 02:45.510
element is involved in representing

02:45.510 --> 02:47.399
many different entities , whereas

02:47.320 --> 02:47.399
localist representation is where each

02:47.320 --> 02:49.264
computing element is involved in

02:49.264 --> 02:51.376
representing

02:51.376 --> 02:54.149
just one entity . So , uh , distributed

02:54.149 --> 02:56.371
representations , of course , have been

02:56.371 --> 02:58.205
crucial to the success of neural

02:58.205 --> 03:00.205
network models , um , both in , you

03:00.205 --> 03:01.927
know , artificial intelligence

03:01.927 --> 03:04.093
applications and also I think in their

03:04.093 --> 03:06.316
long history of success in explaining ,

03:06.316 --> 03:08.316
um , cognitive and behavioral

03:08.316 --> 03:09.760
phenomena . Um , so these

03:09.760 --> 03:12.399
representations are very powerful at

03:12.399 --> 03:14.732
finding structure in data when

03:14.732 --> 03:16.621
they're kind of in the right

03:16.621 --> 03:18.788
architecture and paired with the right

03:18.788 --> 03:21.010
learning rule . Um , but there are kind

03:21.010 --> 03:23.010
of costs and benefits to using this

03:23.010 --> 03:25.232
kind of representation that are kind of

03:25.232 --> 03:27.232
very , uh , famous and long studied ,

03:27.232 --> 03:29.455
but just to quickly give you um

03:29.455 --> 03:31.788
an intuition for how I think about this .

03:31.788 --> 03:33.899
So a distributed representation makes

03:33.899 --> 03:36.121
it really easy , of course , to see the

03:36.121 --> 03:38.121
commonalities across your different

03:38.121 --> 03:40.000
experiences . Um , so if you are

03:40.000 --> 03:41.889
representing , for example , your

03:41.889 --> 03:44.167
knowledge of all these different birds ,

03:44.167 --> 03:46.111
um , using this overlapping , um ,

03:46.111 --> 03:48.222
neural representation , it makes it very

03:48.222 --> 03:50.444
easy to see how those birds are related

03:50.444 --> 03:52.556
to one another . And it also makes it

03:52.556 --> 03:54.722
very easy and natural to generalize to

03:54.722 --> 03:56.833
new information . So if you see a new

03:56.770 --> 03:56.833
bird that has a lot of features in

03:56.770 --> 03:58.992
common with these previous birds , um ,

03:58.992 --> 04:01.103
you'll kind of immediately be able to

04:01.103 --> 04:02.826
see that relationship and then

04:02.826 --> 04:04.826
generalize . So even if you haven't

04:04.826 --> 04:06.603
seen this new bird fly , you'll

04:06.603 --> 04:08.659
understand that it can probably also

04:08.659 --> 04:10.881
fly . This kind of representation makes

04:10.881 --> 04:13.103
that very easy . In the localist

04:13.103 --> 04:15.270
case , um , it's very difficult to see

04:15.270 --> 04:17.159
those commonalities , um , in the

04:17.159 --> 04:19.459
extreme , maybe it's impossible . Um ,
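
The localist-versus-distributed contrast just described can be sketched in a few lines of Python. The birds, features, vectors, and the `overlap` helper below are all illustrative assumptions for this transcript, not anything from the speaker's materials.

```python
import math

# Localist: each bird gets its own dedicated unit (one-hot), no overlap.
localist = {
    "robin":   [1, 0, 0],
    "sparrow": [0, 1, 0],
    "penguin": [0, 0, 1],
}

# Distributed: birds share feature units (wings, feathers, flies, swims).
distributed = {
    "robin":   [1, 1, 1, 0],
    "sparrow": [1, 1, 1, 0],
    "penguin": [1, 1, 0, 1],
}

def overlap(a, b):
    # Cosine similarity: 1.0 = identical pattern, 0.0 = fully separated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# The localist code hides all relatedness between birds...
print(overlap(localist["robin"], localist["sparrow"]))        # 0.0

# ...while the distributed code makes commonalities explicit, so
# generalization ("it probably also flies") comes almost for free.
print(overlap(distributed["robin"], distributed["sparrow"]))  # ~1.0
print(overlap(distributed["robin"], distributed["penguin"]))  # ~0.67
```

The flip side, as the talk notes next, is that the localist code's zero overlap is exactly what protects very similar items from interfering with each other.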

04:19.750 --> 04:21.694
and that makes generalization more

04:21.694 --> 04:23.950
difficult . However , sometimes , um ,

04:24.070 --> 04:26.769
that's actually what you want . So in a

04:26.769 --> 04:28.602
situation where you're trying to

04:28.602 --> 04:30.491
remember that this bird swims but

04:30.491 --> 04:32.713
doesn't fly , whereas this very similar

04:32.713 --> 04:34.769
looking bird can fly , to the extent

04:34.769 --> 04:36.825
that you're trying to remember those

04:36.825 --> 04:38.991
differences , it's useful to use these

04:38.991 --> 04:41.213
more separated representations to avoid

04:41.213 --> 04:42.970
interference . The distributed

04:42.970 --> 04:46.880
representation has

04:46.880 --> 04:48.936
high interference , which is kind of

04:48.936 --> 04:50.658
useful when you're trying to see

04:50.658 --> 04:52.769
commonalities , um , but can be a big

04:52.769 --> 04:55.200
problem in some situations . So this um

04:55.200 --> 04:57.422
interference issue is associated with a

04:57.422 --> 04:59.600
very famous kind of behavioral

04:59.600 --> 05:01.767
difference between these two styles of

05:01.767 --> 05:03.711
representation , which is that the

05:03.711 --> 05:05.656
distributed representation is very

05:05.656 --> 05:08.320
sensitive to the order of presentation

05:08.320 --> 05:10.890
of information . Um , so just to give

05:10.890 --> 05:13.001
you a quick intuition for this if you

05:13.001 --> 05:15.168
haven't , um , thought about this , or

05:15.168 --> 05:17.279
if you just want to see this play out

05:17.279 --> 05:19.501
in these little um cartoons I have . So

05:19.501 --> 05:21.501
in the localist case , if you uh go

05:21.501 --> 05:23.890
back and forth between those two

05:23.890 --> 05:25.946
different like sets of information I

05:25.946 --> 05:28.168
showed you earlier , um , that's fine ,

05:28.168 --> 05:30.001
or you could block information ,

05:30.001 --> 05:31.890
meaning you present the first set

05:31.890 --> 05:33.946
entirely before the second set . And

05:33.329 --> 05:33.946
because there's no overlap in that

05:33.329 --> 05:35.218
internal representation , it just

05:35.218 --> 05:37.350
doesn't care in what order you

05:37.350 --> 05:39.500
present the information . In the

05:39.500 --> 05:42.029
distributed case , if you uh have this

05:42.029 --> 05:44.029
nice interleaved exposure , you can

05:44.029 --> 05:46.029
build up an internal representation

05:46.029 --> 05:48.251
that reflects that structure of um

05:48.251 --> 05:51.040
both of those inputs , but go ahead .

05:52.769 --> 05:56.540
Is there a question ? Mm . Mm . No ,

05:56.660 --> 05:59.170
I , I was just uh making sure I'm muted .

05:59.980 --> 06:02.420
Sorry . No problem . Um , in the block

06:02.420 --> 06:05.089
case , what happens is that if

06:05.089 --> 06:07.422
you learn this first set of information ,

06:07.422 --> 06:09.645
to the extent that you're gonna attempt

06:09.645 --> 06:11.811
to use the same units to represent the

06:11.811 --> 06:13.645
second set of information , what

06:13.645 --> 06:15.478
happens is that you just tend to

06:15.478 --> 06:17.320
overwrite the first uh set of

06:17.320 --> 06:20.529
information with the second set . So to

06:20.529 --> 06:22.751
the extent that there's overlap in that

06:22.751 --> 06:24.918
internal representation , you get this

06:24.918 --> 06:26.585
retroactive interference . So

06:26.585 --> 06:28.640
distributed representations are very

06:28.640 --> 06:30.751
good at finding and storing structure in

06:30.751 --> 06:32.751
data efficiently . Um , they can be

06:32.690 --> 06:32.751
very powerful , but they're highly

06:32.690 --> 06:34.489
susceptible to this kind of

06:34.489 --> 06:37.100
interference , and this is the classic

06:37.100 --> 06:40.269
catastrophic interference problem . OK ,
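
The blocked-versus-interleaved effect just walked through can be reproduced in a few lines. Everything here — the single-unit delta-rule learner, the toy patterns, the targets — is an illustrative assumption, not the speaker's model.

```python
def train(w, pairs, lr=0.5, epochs=20):
    # Incremental delta-rule learning on a single linear unit:
    # nudge the weights to reduce the error on each presented pattern.
    for _ in range(epochs):
        for x, t in pairs:
            err = t - sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

def respond(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Localist: items A and B use disjoint units, so blocked training on B
# cannot touch the weights that store A.
A_loc, B_loc = [1.0, 0.0], [0.0, 1.0]
w_loc = train([0.0, 0.0], [(A_loc, 1.0)])   # learn A first...
w_loc = train(w_loc, [(B_loc, -1.0)])       # ...then B, fully blocked
print(respond(w_loc, A_loc))                # still ~1.0: A is preserved

# Distributed: A and B share a unit, so blocked training on B overwrites
# part of A -- retroactive (catastrophic) interference.
A_dist, B_dist = [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]
w_dist = train([0.0, 0.0, 0.0], [(A_dist, 1.0)])
before = respond(w_dist, A_dist)            # 1.0 right after learning A
w_dist = train(w_dist, [(B_dist, -1.0)])
after = respond(w_dist, A_dist)             # degraded well below 1.0
print(before, after)
```

Running the interleaved schedule instead — alternating A and B within each epoch — lets the same distributed weights satisfy both items, which is exactly the order sensitivity described above.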

06:40.429 --> 06:42.596
so what does the brain use ? There are

06:42.596 --> 06:44.651
these trade-offs for these different

06:44.651 --> 06:46.873
kinds of representations . Probably the

06:46.873 --> 06:48.985
brain uses both of these things , and

06:48.985 --> 06:51.151
that was really the proposal of the um

06:51.151 --> 06:53.096
the classic complementary learning

06:53.096 --> 06:55.151
systems theory , which said , well ,

06:55.151 --> 06:57.429
maybe what the brain does is that

06:57.429 --> 06:59.596
it has a division of labor , where one

06:59.596 --> 07:01.596
region , the hippocampus , is gonna

07:01.596 --> 07:03.596
specialize in the rapid learning of

07:03.596 --> 07:06.140
individual experiences using those um

07:06.750 --> 07:08.899
pattern separated , localist-style

07:08.899 --> 07:11.010
representations that are very good at

07:11.010 --> 07:12.843
avoiding interference . And then

07:13.070 --> 07:15.790
offline , um , maybe especially during

07:15.790 --> 07:18.589
sleep , the hippocampus will interleave

07:18.910 --> 07:22.350
um replay of those experiences and

07:22.350 --> 07:24.739
allow the rest of the brain to extract

07:24.859 --> 07:27.380
the statistics across those experiences

07:27.380 --> 07:29.658
and then ultimately build up that nice ,

07:29.658 --> 07:31.269
um , overlapping distributed

07:31.269 --> 07:33.299
representation that allows you to

07:33.589 --> 07:37.049
generalize . So this um framework

07:37.049 --> 07:39.579
solves the interference problem by not

07:39.579 --> 07:41.635
writing information directly to that

07:41.635 --> 07:43.746
ultimate distributed representation ,

07:43.746 --> 07:45.700
but instead by first um storing

07:45.700 --> 07:47.922
information in the hippocampus and then

07:47.922 --> 07:50.033
carefully through offline interleaved

07:50.033 --> 07:51.644
replay , building up that um

07:51.644 --> 07:53.811
distributed representation in cortex .
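
The complementary-learning-systems move just described — buffer in the hippocampus, then interleave replay into cortex — can be caricatured with the same kind of single-unit learner. The code is a hedged toy sketch under assumed patterns and targets, not the theory's actual implementation.

```python
def learn(w, pairs, lr=0.5, epochs=50):
    # Incremental error-correcting learning on one linear unit.
    for _ in range(epochs):
        for x, t in pairs:
            err = t - sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

def recall(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Two experiences with overlapping distributed codes in "cortex".
A, B = [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]

# Writing straight to cortex, blocked: B overwrites part of A.
w = learn([0.0] * 3, [(A, 1.0)])
w = learn(w, [(B, -1.0)])
blocked_A = recall(w, A)                  # degraded by interference

# CLS-style: a hippocampal store keeps both traces and replays them
# interleaved offline, so cortex sees A and B together and can settle
# on weights that satisfy both at once.
hippocampal_store = [(A, 1.0), (B, -1.0)]
w = learn([0.0] * 3, hippocampal_store)   # interleaved replay
replay_A = recall(w, A)                   # intact

print(blocked_A, replay_A)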

07:55.380 --> 07:59.260
OK . So this framework has been very

07:59.260 --> 08:01.204
useful . I think it explains a lot

08:01.204 --> 08:04.140
about how we , um , how we learn and

08:04.140 --> 08:06.418
how these memory systems interact , um ,

08:06.459 --> 08:08.820
but one kind of really important

08:08.820 --> 08:11.540
missing piece here is that we can

08:11.540 --> 08:13.815
understand the . of our environment and

08:13.815 --> 08:16.815
generalize way before there's time for

08:16.815 --> 08:19.845
all of that offline replay and sleep um

08:19.845 --> 08:22.067
and all of that we can , we can extract

08:22.067 --> 08:24.067
structure from our environment over

08:24.067 --> 08:26.012
just , you know , a few minutes or

08:26.012 --> 08:28.234
hours . And so how do we , how do we do

08:28.234 --> 08:27.454
that ? What happens if you need to

08:27.454 --> 08:30.970
learn statistics quickly ? And um we

08:30.970 --> 08:33.330
have argued that the hippocampus is

08:33.330 --> 08:35.409
also crucial for rapid statistical

08:35.409 --> 08:37.729
learning and actually that it's also

08:37.729 --> 08:40.169
building up these distributed

08:40.169 --> 08:42.169
representations of the kind that we

08:42.169 --> 08:44.336
think are so powerful , ultimately out

08:44.336 --> 08:46.502
in cortex , um , that's building these

08:46.502 --> 08:48.669
up quickly within the hippocampus to ,

08:48.669 --> 08:50.669
to support statistical learning and

08:50.669 --> 08:52.447
generalization . And so the key

08:52.447 --> 08:54.558
proposal is that the brain is able to

08:54.558 --> 08:56.391
learn these powerful distributed

08:56.391 --> 08:58.613
representations quickly . And then also

08:58.613 --> 09:01.830
more slowly over time with sleep . So ,

09:01.909 --> 09:03.900
um , the reason that we had gotten

09:03.900 --> 09:05.733
interested in the hippocampus is

09:05.733 --> 09:07.844
possibly serving this kind of role in

09:07.844 --> 09:09.900
rapid statistical learning was these

09:09.900 --> 09:12.700
FMRI studies where we were , uh ,

09:12.710 --> 09:15.820
showing people sequences of novel

09:15.820 --> 09:18.270
images , um , that had some hidden

09:18.270 --> 09:19.992
structure in them . So in this

09:19.992 --> 09:22.103
experiment , there were just pairs of

09:22.103 --> 09:24.270
items that always occurred together in

09:24.270 --> 09:26.381
the sequence , and we found that from

09:26.381 --> 09:28.603
before to after the sequence exposure ,

09:28.603 --> 09:30.659
it's like an hour of exposure , um ,

09:30.659 --> 09:32.881
the hippocampus would come to represent

09:32.881 --> 09:35.103
the statistically associated items more

09:35.103 --> 09:37.270
similarly . Um , and this also happens

09:37.270 --> 09:39.437
for more complex forms of structures ,

09:39.437 --> 09:41.770
so not just simple pair-wise statistics ,

09:41.770 --> 09:43.492
but also for this more kind of

09:43.492 --> 09:45.548
realistic community structure . Um ,

09:45.880 --> 09:48.200
and we found that it's not , it's not

09:48.200 --> 09:50.256
just that the hippocampus is kind of

09:50.739 --> 09:52.739
sensitive to these statistics , but

09:52.739 --> 09:54.739
that it's actually necessary for um

09:54.739 --> 09:56.795
extracting these statistics in a way

09:56.795 --> 09:59.229
that supports , um , you know , uh

09:59.229 --> 10:01.396
behavioral evidence for um statistical

10:01.396 --> 10:03.562
learning . So , uh , this is a patient

10:03.562 --> 10:05.673
with bilateral hippocampal damage and

10:05.673 --> 10:08.059
as well as some other MTL cor cortical

10:08.059 --> 10:11.059
damage , um , who was completely unable

10:11.059 --> 10:13.219
to um show behavioral evidence of

10:13.219 --> 10:15.441
statistical learning , and there's been

10:15.441 --> 10:17.580
4 additional patients with actually

10:17.580 --> 10:19.747
more selective hippocampal damage that

10:19.747 --> 10:21.969
also , um , replicate this pattern . So

10:21.969 --> 10:23.802
we think that the hippocampus is

10:23.802 --> 10:26.080
crucial for rapid statistical learning .

10:27.580 --> 10:30.460
OK . So if it's true that the

10:30.460 --> 10:32.682
hippocampus is doing this kind of rapid

10:32.682 --> 10:34.904
statistical learning in addition to its

10:34.904 --> 10:38.890
kind of classic role in um Episodic

10:38.890 --> 10:41.112
memory keeping things separate to avoid

10:41.112 --> 10:43.057
interference , how do we reconcile

10:43.057 --> 10:45.334
those two functions of the hippocampus ,

10:45.334 --> 10:47.446
they're using very different kinds of

10:47.446 --> 10:49.668
representations . How could one area do

10:49.668 --> 10:51.668
both of those things ? So to try to

10:51.668 --> 10:53.723
understand this , we've been working

10:53.723 --> 10:56.710
with um this model of the hippocampus

10:57.020 --> 10:59.659
that is uh you know , reflects what we

10:59.659 --> 11:02.260
know about the actual anatomy and and

11:02.260 --> 11:03.982
connectivity and properties of

11:03.982 --> 11:07.640
hippocampus subfields . Um , I see , I

11:07.640 --> 11:09.862
see this question about consciousness ,

11:09.862 --> 11:12.196
which I'm gonna get to in a second . Um ,

11:12.196 --> 11:15.950
it's a good question . Um , so the ,

11:16.080 --> 11:19.039
um , there are two main pathways

11:19.039 --> 11:21.039
through the hippocampus that are

11:21.039 --> 11:23.039
represented in this model . There's

11:23.039 --> 11:24.928
this tri-synaptic pathway and the

11:24.928 --> 11:27.150
monosynaptic pathway . The tri-synaptic

11:27.150 --> 11:29.261
pathway , um , connectsentinal cortex

11:29.261 --> 11:32.460
to dentate to C3 to C1 . This is the

11:32.460 --> 11:35.409
classic pathway that we think is really

11:35.409 --> 11:37.539
crucial for this pattern separation

11:37.539 --> 11:39.700
kind of function , um , that supports

11:39.700 --> 11:41.811
episodic memory and the hippocampus .

11:41.859 --> 11:44.380
So this , um , connectivity in this

11:44.380 --> 11:47.450
pathway is very specialized and unusual .

11:47.630 --> 11:49.919
It's taking input in the anonal cortex ,

11:50.099 --> 11:52.489
um , that is , that could be quite

11:52.489 --> 11:54.650
overlapping and projecting it to

11:55.710 --> 11:57.766
Relatively orthogonal populations of

11:57.766 --> 11:59.919
neurons and dented gyrus , which is

12:00.150 --> 12:03.030
really uh kind of unusual , like that's

12:03.030 --> 12:05.390
not the way that the brain usually uh

12:05.390 --> 12:07.557
represents information . It seems like

12:07.557 --> 12:09.279
there's something very special

12:09.279 --> 12:09.150
happening here that's allowing it to do

12:09.150 --> 12:12.770
this pattern separation function . OK .

12:12.929 --> 12:15.151
But then there's this pathway that runs

12:15.151 --> 12:17.207
straight from entinal cortex to C1 ,

12:17.207 --> 12:19.318
and this pathway seems to have , um ,

12:19.318 --> 12:21.429
interesting different properties . So

12:21.429 --> 12:23.400
we know that this pathway has less

12:23.400 --> 12:25.011
extreme pattern separation ,

12:25.011 --> 12:26.789
representations seem to be more

12:26.789 --> 12:28.956
overlapping , um , and that's probably

12:28.956 --> 12:31.210
related to , um , less , uh , of that

12:31.210 --> 12:33.250
kind of , um , banning sparse

12:33.250 --> 12:36.590
connectivity . And we know that it has

12:36.590 --> 12:38.479
a kind of different learning rate

12:38.479 --> 12:40.590
profile . So the tri-synaptic pathway

12:40.590 --> 12:42.701
is capable of very fast learning . It

12:42.701 --> 12:44.789
can encode an experience even in one

12:44.789 --> 12:47.190
shot , um , and the monosynaptic

12:47.190 --> 12:49.301
pathway requires more experience . So

12:49.301 --> 12:51.579
you can learn directly on this pathway .

12:51.579 --> 12:53.634
You could lesion C3 to CA1 and still

12:53.634 --> 12:55.579
have direct learning that supports

12:55.579 --> 12:57.579
behavior on this pathway , um , but

12:57.579 --> 12:57.559
it's , it's slower , it's more

12:57.559 --> 13:00.890
incremental . Um , and these , um ,

13:00.900 --> 13:02.956
properties of the , the kind of more

13:02.956 --> 13:05.178
overlapping representation , the slower

13:05.178 --> 13:07.178
learning rate turn out to make this

13:07.178 --> 13:09.719
pathway very well suited to statistical

13:09.719 --> 13:11.599
learning . Um , and this is very

13:11.599 --> 13:14.510
analogous to the kind of original , um ,

13:14.559 --> 13:16.781
dichotomy proposed in the complementary

13:16.781 --> 13:19.003
learning systems framework which said ,

13:19.003 --> 13:21.115
well , the hippocampus specializes in

13:21.115 --> 13:23.003
these like fast learning of these

13:23.003 --> 13:23.000
orthogonalized representations , and

13:23.000 --> 13:25.278
the neocortex allows things to overlap ,

13:25.278 --> 13:28.039
um , to form these , um , more slowly

13:28.039 --> 13:30.206
form distributor representations . And

13:30.206 --> 13:32.095
what we proposed is that a little

13:32.095 --> 13:34.261
microcosm . Of this dynamic is playing

13:34.261 --> 13:36.261
out within the hippocampus itself ,

13:36.261 --> 13:39.030
where C1 , um , is kind of like a

13:39.030 --> 13:41.580
little version of cortex . It's not as ,

13:41.750 --> 13:43.583
um , distributed , it's not , it

13:43.583 --> 13:45.917
doesn't learn as slowly , um , but it's ,

13:45.917 --> 13:48.083
it's , uh , able to form these kind of

13:48.083 --> 13:50.028
like moderately overlapping faster

13:50.028 --> 13:52.194
learning representations that that can

13:52.194 --> 13:54.028
support statistical learning and

13:54.028 --> 13:56.139
generalization on a fast time scale .

13:56.139 --> 13:59.900
Um , OK , so , Hey Anna , can I ask

13:59.900 --> 14:02.250
a quick question ? This is Kevin . Um ,

14:02.989 --> 14:05.211
you had kind of like a strong statement

14:05.211 --> 14:07.100
of like it was something like the

14:07.100 --> 14:09.211
hippocampus is needed for statistical

14:09.211 --> 14:12.900
learning . Would it be for is it

14:12.900 --> 14:14.678
particular kinds of statistical

14:14.678 --> 14:16.900
learning and what , what are those ? Do

14:16.900 --> 14:18.900
you have an idea of that yet versus

14:18.900 --> 14:21.067
like someone without a hippocampus can

14:21.067 --> 14:23.011
learn certain kinds of statistical

14:23.011 --> 14:24.733
learning like probabilities or

14:24.733 --> 14:26.956
something , right ? Yeah , so , um , We

14:26.956 --> 14:30.890
think that there it's situations where

14:31.099 --> 14:34.349
you have to integrate across um

14:35.250 --> 14:37.530
Across experiences . So if it's , so if

14:37.530 --> 14:41.200
it's just like what are the rates of

14:41.200 --> 14:43.144
presentation of like an individual

14:43.144 --> 14:45.200
stimulus that that might not require

14:45.200 --> 14:47.367
the hippieups but situations where you

14:47.367 --> 14:50.159
need to kind of um Uh , you need to be

14:50.159 --> 14:53.460
sensitive to like what critics what

14:53.710 --> 14:55.710
next thing , right ? So you have to

14:55.710 --> 14:57.932
like like integrate across some kind of

14:57.932 --> 15:01.599
delay . Um , and situations where you ,

15:01.690 --> 15:03.679
where learning is relatively like

15:03.679 --> 15:05.901
observational and passive as opposed to

15:05.901 --> 15:08.400
like stimulus response , because the

15:08.400 --> 15:10.369
basal ganglia is quite good at

15:10.369 --> 15:12.313
statistical learning in situations

15:12.313 --> 15:14.239
where you are creating stimulus

15:14.239 --> 15:17.159
response mappings , um , whereas it

15:17.159 --> 15:19.215
seems like the hippocampus is really

15:19.215 --> 15:21.159
especially critical for situations

15:21.159 --> 15:22.992
where you're just more passively

15:22.992 --> 15:24.992
picking up on the statistics of the

15:24.992 --> 15:28.460
environment . Thank you . Um ,

15:30.340 --> 15:32.739
OK . So , I , I'm gonna get back to the

15:32.739 --> 15:35.017
consciousness point a little bit later ,

15:35.017 --> 15:37.128
but just to like foreshadow , um , we

15:37.128 --> 15:39.380
think that the episodic memory

15:39.380 --> 15:42.549
functions of the hippocampus are more

15:42.549 --> 15:44.940
likely to support kind of conscious

15:44.940 --> 15:47.039
forms of memory than . learning

15:47.039 --> 15:49.095
functions . We think the statistical

15:49.095 --> 15:50.817
learning is especially kind of

15:50.817 --> 15:52.761
automatic and can occur completely

15:52.761 --> 15:55.080
without awareness . So often in our

15:55.080 --> 15:57.024
paradigms for statistical learning

15:57.024 --> 15:58.913
where we're asking people to make

15:58.913 --> 16:00.802
judgments about what they've seen

16:00.802 --> 16:02.802
before , where they completely feel

16:02.802 --> 16:04.636
like they're guessing , um , but

16:04.636 --> 16:06.524
they're still above chance . Um ,

16:06.524 --> 16:08.636
whereas episodic memory assessments ,

16:08.636 --> 16:10.524
not always , but typically , um ,

16:10.524 --> 16:12.691
involve information that you have more

16:12.691 --> 16:14.747
conscious access to . Um , and we'll

16:14.747 --> 16:16.691
get back to that a little bit more

16:16.691 --> 16:20.250
later . Um , OK , time scale question

16:20.250 --> 16:23.179
has been answered . Um , all mammals

16:23.179 --> 16:25.729
have cortex . Um , you know , some

16:26.380 --> 16:28.750
Um , I don't know a lot about the

16:28.750 --> 16:32.219
evolution . I , I do know that the , um ,

16:32.229 --> 16:35.549
tri-synaptic pathway , um , is a

16:35.549 --> 16:38.190
more recent , um ,

16:39.169 --> 16:42.989
Uh , like evolutionary , um ,

16:45.989 --> 16:49.580
Um , advanced , I don't know .

16:50.760 --> 16:52.816
And I know that there are also there

16:52.816 --> 16:54.149
are other kinds of like

16:54.149 --> 16:56.204
hippocampus-like structures in other

16:56.204 --> 16:58.260
animals . I don't know , maybe other

16:58.260 --> 16:58.070
people here know more about this and

16:58.070 --> 17:00.237
could put their thoughts in the chat .

17:00.237 --> 17:02.292
Um , I think fish can have it . OK ,

17:02.450 --> 17:05.790
interesting , yeah , all right . Yeah ,

17:05.819 --> 17:08.041
I know I'm not an expert on that . OK ,

17:08.041 --> 17:11.349
um , so . If the model is right ,

17:12.359 --> 17:14.526
Then , um , and this kind of speaks to

17:14.526 --> 17:16.692
Kevin's question as well . Like , this

17:16.692 --> 17:18.637
monoytic pathway learning strategy

17:18.637 --> 17:20.859
should allow the hippocampus to support

17:20.859 --> 17:24.119
many forms of a cross-o structure

17:24.119 --> 17:26.341
learning . So beyond this like temporal

17:26.341 --> 17:28.508
statistical learning paradigm that I ,

17:28.508 --> 17:30.730
that I showed you a minute ago , really

17:30.730 --> 17:32.786
any situation where you're trying to

17:32.786 --> 17:34.952
pick up on structure across episodes .

17:35.040 --> 17:36.984
And another kind of paradigm we've

17:36.984 --> 17:39.207
gotten really interested in in recently

17:39.207 --> 17:41.318
is new concept or category learning .

17:41.318 --> 17:43.318
So it's another situation where you

17:43.318 --> 17:46.810
need to pick up on um structure across

17:47.339 --> 17:49.395
experiences in this case like across

17:49.395 --> 17:51.506
exemplars of a category , um , and we

17:51.506 --> 17:53.395
know that the hippocampus is also

17:53.395 --> 17:55.770
engaged in this kind of learning . So

17:55.770 --> 17:57.881
could it be that this learning , this

17:57.881 --> 17:59.937
monosynaptic learning strategy could

17:59.937 --> 18:02.103
also be contributing to to quick novel

18:02.103 --> 18:04.326
category learning . So we explored this

18:04.326 --> 18:06.492
in the model . Um , I'm gonna show you

18:06.492 --> 18:08.326
a couple of simulations where we

18:08.326 --> 18:10.437
applied just the same model to , um ,

18:10.437 --> 18:12.214
different kinds of like classic

18:12.214 --> 18:14.326
category learning paradigms . Um , so

18:14.326 --> 18:16.381
the first is this weather prediction

18:16.381 --> 18:18.326
task where , um , participants see

18:18.326 --> 18:20.159
these sets of abstract cards and

18:20.159 --> 18:22.381
they're supposed to predict sunshine or

18:22.381 --> 18:24.930
rain from these cards . Um , and we

18:24.930 --> 18:27.849
know that amnesiacs are not a chance on

18:27.849 --> 18:29.905
this task , so this is actually Uh ,

18:29.905 --> 18:32.016
task where we think that the uh basal

18:32.016 --> 18:34.430
ganglia is also um contributing and

18:34.430 --> 18:36.989
relevant , um , but amnesiacs are not

18:36.989 --> 18:38.989
um performing as well as controls ,

18:39.000 --> 18:41.530
which means that there's some causal

18:41.530 --> 18:43.709
contribution of the hippocampus here .

18:44.869 --> 18:47.036
So what happens in the model , um , so

18:47.036 --> 18:50.079
if you ask the model to categorize

18:50.079 --> 18:52.270
these cards as sunshine or rain , the

18:52.270 --> 18:54.310
green is the kind of normal intact

18:54.310 --> 18:56.421
performance of the model . The orange

18:56.421 --> 18:58.477
is a version that only has access to

18:58.477 --> 19:00.421
its monosynaptic pathway , and the

19:00.421 --> 19:02.310
purple is a version that only has

19:02.310 --> 19:04.366
access to its tri-synaptic pathway .

19:04.366 --> 19:06.477
And you can see that the monosynaptic

19:06.477 --> 19:06.150
pathway is really driving

19:06.150 --> 19:08.261
categorization performance . Um , the

19:08.261 --> 19:10.483
static pathway can do it a little bit ,

19:10.483 --> 19:12.706
but really not , not very well . But if

19:12.706 --> 19:15.189
you change the task and you say , OK ,

19:15.310 --> 19:17.532
well , instead of um categorizing these

19:17.532 --> 19:19.477
cards , can you just tell me which

19:19.477 --> 19:21.532
cards um which combinations of cards

19:21.532 --> 19:25.069
you saw during learning ? Um , then the

19:25.069 --> 19:27.030
story kind of flips , and now the

19:27.030 --> 19:29.510
monostatic pathway is not , um , as

19:29.510 --> 19:31.709
good as the tri-synaptic pathway at

19:31.709 --> 19:33.876
that kind of recognition of particular

19:33.876 --> 19:35.987
configurations of cards . So it's not

19:35.987 --> 19:39.290
that , you know , that one pathway is ,

19:39.390 --> 19:42.270
uh , is like , is the only one

19:42.270 --> 19:44.381
listening to this task . Like both of

19:44.381 --> 19:46.603
them are forming representations , um ,

19:46.603 --> 19:48.548
of this task , but they're forming

19:48.548 --> 19:48.349
different kinds of representations that

19:48.349 --> 19:50.460
are useful for different , um , kinds

19:50.460 --> 19:52.959
of , um , uh , information about the

19:52.959 --> 19:56.140
task . Here's another example of a

19:56.140 --> 19:58.140
different kind of category learning

19:58.140 --> 20:00.140
paradigm . This is a paradigm we've

20:00.140 --> 20:01.807
used a bunch in our lab where

20:01.807 --> 20:03.862
participants learn about these three

20:03.862 --> 20:06.449
categories of um novel satellite

20:06.449 --> 20:08.505
objects um where objects in the same

20:08.505 --> 20:10.630
category share most of their parts ,

20:10.699 --> 20:12.921
but they also have unique individuating

20:12.921 --> 20:15.609
parts , and we had found in one of our

20:15.609 --> 20:18.329
imaging studies that used this paradigm

20:18.699 --> 20:21.300
that um the CA1 subfield of the

20:21.300 --> 20:23.356
hippocampus represents this category

20:23.356 --> 20:25.467
structure , meaning like objects from

20:25.467 --> 20:27.411
the same category represented more

20:27.411 --> 20:27.260
similarly than objects from different

20:27.260 --> 20:29.427
categories , whereas that was not true

20:29.427 --> 20:31.427
in CA3 and dentate gyrus , the

20:31.427 --> 20:34.479
trisynaptic pathway . So in the model ,

20:34.560 --> 20:36.719
we find that the intact version

20:36.719 --> 20:38.886
generalizes in this paradigm well . So

20:38.886 --> 20:40.997
if you show it a new satellite

20:40.997 --> 20:43.219
from one of these categories , it knows

20:43.219 --> 20:45.441
what category it's from , um , the

20:45.441 --> 20:47.663
monosynaptic pathway actually does even better

20:47.663 --> 20:47.319
by itself than if the tri-synaptic

20:47.319 --> 20:48.986
pathway is present , which is

20:48.986 --> 20:51.041
interesting . Um , and the tri-synaptic

20:51.041 --> 20:53.263
pathway is not great at that task . But

20:53.263 --> 20:55.263
again , if you change the task , it

20:55.263 --> 20:57.375
flips the pattern . So if you ask the

20:57.375 --> 21:00.359
model to , um , basically , um ,

21:00.369 --> 21:02.930
disambiguate between these satellites ,

21:02.969 --> 21:05.136
so try to remember the unique features

21:05.136 --> 21:07.160
that are , that Um , are different

21:07.160 --> 21:09.160
between these exemplars . Now , the

21:09.160 --> 21:10.882
tri-synaptic pathway is really

21:10.882 --> 21:13.160
responsible entirely for that behavior .

21:13.160 --> 21:14.938
The monosynaptic pathway really

21:14.938 --> 21:17.119
struggles . Um , so this is to say ,

21:17.160 --> 21:19.160
you know , the tri-synaptic pathway

21:19.160 --> 21:21.327
actually is very useful , um , in this

21:21.327 --> 21:23.549
kind of task to the extent that you are

21:23.549 --> 21:25.549
trying to remember the details that

21:25.549 --> 21:27.716
distinguish these objects , but to the

21:27.716 --> 21:29.938
extent that you're trying to understand

21:29.938 --> 21:29.709
the category structure and generalize ,

21:29.920 --> 21:32.142
um , you really want to make use of the

21:32.142 --> 21:35.750
monosynaptic pathway . These are the internal

21:35.750 --> 21:37.750
representations in the model , um ,

21:37.760 --> 21:39.704
where you can see the same kind of

21:39.704 --> 21:41.704
pattern that we saw in the , in the

21:41.704 --> 21:43.704
fMRI data where objects in the same

21:43.704 --> 21:45.927
category are represented more similarly

21:45.927 --> 21:48.093
in CA1 , but quite orthogonally in

21:48.093 --> 21:52.040
CA3 . So , um ,

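[Editorial sketch: the similarity analysis just described, within-category versus between-category pattern similarity, can be sketched as follows. The activation patterns and category labels are invented for illustration; this is not the study's analysis code.]

```python
# Sketch of the CA1-style similarity logic: objects from the same category
# should show higher pairwise pattern correlation than objects from
# different categories. Patterns here are made up.
import math

def correlate(a, b):
    """Pearson correlation between two activation patterns."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

patterns = {  # hypothetical unit activations for four objects
    ("alpha", 1): [0.9, 0.8, 0.1, 0.1],
    ("alpha", 2): [0.8, 0.9, 0.2, 0.1],
    ("beta", 1):  [0.1, 0.2, 0.9, 0.8],
    ("beta", 2):  [0.2, 0.1, 0.8, 0.9],
}

within, between = [], []
items = list(patterns.items())
for i, ((cat_i, _), p_i) in enumerate(items):
    for (cat_j, _), p_j in items[i + 1:]:
        (within if cat_i == cat_j else between).append(correlate(p_i, p_j))

within_mean = sum(within) / len(within)
between_mean = sum(between) / len(between)
# A CA1-like code gives within_mean > between_mean; an orthogonalized
# CA3/DG-like code would show no such within-category advantage.
```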
21:52.189 --> 21:54.310
so that's the , um , behavior of the

21:54.310 --> 21:56.477
model , what like is this , you know ,

21:56.477 --> 21:58.588
does this actually manifest in a real

21:58.588 --> 22:00.849
empirical , um , human behavior . Um ,

22:00.910 --> 22:03.132
so this is an experiment where we tried

22:03.132 --> 22:05.132
to test this , um , using a kind of

22:05.132 --> 22:07.077
behavioral assay . We're now doing

22:07.077 --> 22:09.132
imaging experiments to more directly

22:09.132 --> 22:11.188
ask about different subfields , um ,

22:11.188 --> 22:13.410
but this was pandemic time , so we were

22:13.410 --> 22:15.577
being creative about how to , um , how

22:15.577 --> 22:17.688
to get at these questions behaviorally .

22:17.688 --> 22:19.799
And I actually think that this is a ,

22:19.799 --> 22:21.854
um , a , a , a cool way to do this .

22:21.854 --> 22:23.521
This is a , this is kind of a

22:23.521 --> 22:25.743
behavioral technology that other groups

22:25.743 --> 22:27.966
have started to use that , um , that we

22:27.966 --> 22:30.299
think is really exciting and that we're ,

22:30.299 --> 22:32.410
we're using here . The idea is to use

22:32.410 --> 22:34.881
memory , um , distortions in memory in

22:34.881 --> 22:37.722
a continuous space as a way of

22:37.722 --> 22:40.991
measuring kind of , um , warping of

22:40.991 --> 22:43.213
representational space . So the idea is

22:43.213 --> 22:45.102
that people are gonna learn these

22:45.102 --> 22:46.935
objects , but here , each of the

22:46.935 --> 22:49.158
features of these objects is assigned a

22:49.158 --> 22:51.380
particular color that's drawn from this

22:51.380 --> 22:53.380
two-dimensional slice of , of color

22:53.380 --> 22:55.491
space . And we're gonna look for very

22:55.491 --> 22:58.280
subtle um distortions to memory for the

22:58.280 --> 23:00.680
colors as a way , as a way of kind of

23:00.680 --> 23:03.400
tracking how um memory for these

23:03.400 --> 23:05.622
features is changing with learning of

23:05.622 --> 23:09.280
the category structure . So in this

23:09.280 --> 23:11.949
experiment , people learn the parts of

23:11.949 --> 23:14.489
these satellites , um , but they see

23:14.489 --> 23:16.656
like a satellite with a missing part ,

23:16.656 --> 23:18.822
they have to make a guess about how to

23:18.822 --> 23:20.989
fill it in , they um they get feedback

23:20.989 --> 23:23.100
on their guesses , and then sometimes

23:23.100 --> 23:22.640
they're shown the colors of these

23:22.640 --> 23:25.530
features . And then , um , occasionally

23:25.530 --> 23:27.863
we assess their memory for those colors .

23:27.863 --> 23:30.380
So we , we show an object , um , with a

23:30.380 --> 23:32.324
bunch of different options for the

23:32.324 --> 23:34.547
color of one of its features . Um , and

23:34.547 --> 23:36.547
we ask people , you know , which of

23:36.547 --> 23:38.436
these five very similar shades of

23:38.436 --> 23:40.547
purple do you think belongs , um , on

23:40.547 --> 23:42.769
that top feature of the satellite . And

23:42.769 --> 23:44.991
we have , you know , the correct option

23:44.991 --> 23:47.213
there , and then we have foils that are

23:47.213 --> 23:47.060
either closer to the center of the

23:47.060 --> 23:49.219
category or farther away from the

23:49.219 --> 23:51.780
center of the category . And what we

23:51.780 --> 23:54.339
find over the course of learning is

23:54.339 --> 23:56.500
that the probability of choosing the

23:56.500 --> 23:59.780
correct color , um , for the shared and

23:59.780 --> 24:02.099
unique features is , is the same . But

24:02.099 --> 24:04.300
when people get it wrong , they get it

24:04.300 --> 24:06.900
wrong differently . So they are , in

24:06.900 --> 24:09.067
the case of a shared feature , they're

24:09.067 --> 24:11.233
more likely to remember that feature's

24:11.233 --> 24:13.530
color as closer to the category center

24:14.020 --> 24:16.339
as opposed to the unique features . So

24:16.339 --> 24:18.450
shared features are getting distorted

24:18.500 --> 24:21.719
more by the category structure . And

24:21.719 --> 24:23.997
this also plays out in generalizations ,

24:23.997 --> 24:26.959
so if you ask people to um to indicate

24:26.959 --> 24:29.319
the color um of a feature of a novel

24:29.319 --> 24:31.541
satellite , they also tend to pull that

24:31.541 --> 24:34.079
color closer to the category center .

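[Editorial sketch: the "pull toward the category center" measure described here can be illustrated with a toy scoring function. The colors and responses below are made up; this is not the study's scoring code.]

```python
# Score a color-memory response by whether its error points toward the
# category's mean color in the 2-D color-space slice: positive values
# mean the remembered color drifted toward the category center.
import math

def pull_toward_center(true_color, response, center):
    """Project the memory error onto the direction toward the center."""
    dx, dy = center[0] - true_color[0], center[1] - true_color[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm          # unit vector toward the center
    ex, ey = response[0] - true_color[0], response[1] - true_color[1]
    return ex * ux + ey * uy

center = (0.5, 0.5)  # hypothetical category mean color
# A shared feature whose remembered color drifts inward:
shared = pull_toward_center((0.2, 0.2), (0.25, 0.25), center)
# A unique feature whose remembered color does not drift inward:
unique = pull_toward_center((0.8, 0.2), (0.82, 0.19), center)
# The finding described above corresponds to shared > unique.
```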
24:34.829 --> 24:36.885
Um , and if you run this paradigm in

24:36.885 --> 24:39.359
the model , what you find is that , um ,

24:39.520 --> 24:41.464
the , the same pattern , so to the

24:41.464 --> 24:43.687
extent that the monosynaptic pathway is

24:43.687 --> 24:45.798
driving the representations , you get

24:45.798 --> 24:47.964
the same thing where shared features ,

24:47.964 --> 24:50.131
um , get pulled closer to the category

24:50.131 --> 24:52.076
center and unique features kind of

24:52.076 --> 24:54.910
resist that movement . Um , so , uh ,

24:54.930 --> 24:57.152
it , this is , you know , some evidence

24:57.152 --> 24:59.152
that , um , what's happening in the

24:59.152 --> 25:00.986
model might be happening in , in

25:00.986 --> 25:02.930
people's , um , behavior and , and

25:02.930 --> 25:04.652
their , their kind of internal

25:04.652 --> 25:07.680
representations . OK . So , we're kind

25:07.680 --> 25:10.390
of accumulating data here that we think

25:10.589 --> 25:13.150
is um consistent with this possibility

25:13.150 --> 25:15.039
that we're building these rapidly

25:15.039 --> 25:17.599
formed distributed representations , um .

25:18.300 --> 25:21.939
And I see a couple of questions . So ,

25:22.140 --> 25:24.251
um , yeah , before I go to the next ,

25:24.430 --> 25:26.708
right , well , I'll take the questions .

25:26.708 --> 25:28.874
So let's see . I see a question in the

25:28.874 --> 25:28.310
chat . How are the shapes and colors

25:28.310 --> 25:32.020
encoded for Seahorse ? So in the , yeah ,

25:32.239 --> 25:34.183
the way that we are providing

25:34.183 --> 25:37.060
input to the model in the simulations

25:37.060 --> 25:40.010
I've shown you so far is , um , really

25:40.010 --> 25:42.729
just very simple , like one hot inputs

25:42.729 --> 25:44.562
that correspond to the different

25:44.562 --> 25:46.451
features . So these features have

25:46.451 --> 25:48.562
discrete values , um , which makes it

25:48.562 --> 25:50.340
very easy to just say like this

25:50.340 --> 25:52.507
particular feature is going to get one

25:52.507 --> 25:54.562
unit assigned to it , and this other

25:54.562 --> 25:56.840
feature will get a different unit . Um ,

25:56.840 --> 25:56.790
we're now starting to do more

25:56.790 --> 25:59.569
sophisticated , um , things . That are ,

25:59.650 --> 26:01.650
you know , like , like image , uh ,

26:01.650 --> 26:04.439
computable inputs where we have a real

26:04.439 --> 26:07.849
stimulus , um , that a

26:07.849 --> 26:10.071
convnet processes , and then we take

26:10.071 --> 26:12.182
like one of the higher-level embeddings of

26:12.182 --> 26:13.969
that convnet as input to our

26:13.969 --> 26:16.136
hippocampus model . That's a much more

26:16.136 --> 26:18.469
realistic , interesting way of doing it ,

26:18.469 --> 26:20.747
but in the simulations I'm showing you ,

26:20.747 --> 26:22.802
um , they're really just very simple

26:22.802 --> 26:24.969
one hot inputs . Uh , is there another

26:24.969 --> 26:24.750
question ?

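[Editorial sketch: the one-hot input scheme just described is simple enough to show directly. The feature vocabularies below are invented; this is an illustration, not the model's actual input code.]

```python
# One unit per possible discrete feature value; the active value's unit
# is 1 and all others are 0. An object's input vector concatenates the
# one-hot codes of its features.
def one_hot(value, vocabulary):
    return [1.0 if v == value else 0.0 for v in vocabulary]

colors = ["red", "green", "blue"]      # hypothetical feature vocabularies
shapes = ["circle", "square"]

stimulus = one_hot("green", colors) + one_hot("square", shapes)
# stimulus == [0.0, 1.0, 0.0, 0.0, 1.0]
```

As the speaker notes, newer simulations replace these one-hot vectors with richer, image-computable inputs, e.g. embeddings taken from a convnet that processes the actual stimulus image.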
26:29.270 --> 26:32.380
Scott , yes . I , I , I'll ask Scott .

26:32.869 --> 26:35.091
I think you , you covered the , I think

26:35.091 --> 26:37.313
we have the same question there , but ,

26:37.313 --> 26:39.591
uh , the , uh , it's really getting at ,

26:39.591 --> 26:41.480
really wanting to , to understand

26:41.480 --> 26:43.702
better the , uh , the discretization of

26:43.702 --> 26:43.430
your , your color space that you were

26:43.430 --> 26:45.652
talking about . It sounds like you're .

26:45.709 --> 26:47.670
So the colors , so the , so we

26:47.670 --> 26:50.060
discretize the the parts for the model ,

26:50.349 --> 26:52.405
um , the colors are actually , so in

26:52.405 --> 26:54.979
the case of the model . The reason

26:54.979 --> 26:57.829
we use the colors for people is that we

26:57.829 --> 26:59.940
don't have in a behavioral experiment

26:59.940 --> 27:01.385
direct access to internal

27:01.385 --> 27:04.150
representations , um , and so we are um

27:04.150 --> 27:06.469
using color as like this way of trying

27:06.469 --> 27:08.413
to track what's happening with the

27:08.413 --> 27:10.525
internal representations , but in the

27:10.525 --> 27:12.580
model we do have access to those ,

27:12.580 --> 27:14.580
like , continuous internal

27:14.580 --> 27:14.525
representations . So we're just

27:14.525 --> 27:16.636
directly measuring the warping of the

27:16.636 --> 27:18.803
representations in the model we're not

27:18.803 --> 27:21.081
using color at all , um , in that case .

27:21.081 --> 27:23.303
So we train the model up on the feature

27:23.303 --> 27:25.414
structure and then we directly assess

27:25.414 --> 27:27.414
representational warping um without

27:27.414 --> 27:29.414
using color as this like additional

27:29.414 --> 27:31.739
add-on thing . OK , got it , got it .

27:31.849 --> 27:34.859
Thank you so much . Yeah . Cool .

27:35.349 --> 27:39.290
Anything else ? OK ,

27:39.489 --> 27:41.489
so this is you've gotten a sense of

27:41.489 --> 27:43.770
like our story so far , but like , are

27:43.770 --> 27:45.770
we , do we solve it ? Like , are we

27:45.770 --> 27:47.826
done ? Are we right ? Or like , what

27:47.826 --> 27:49.992
are the alternatives ? How else , um ,

27:49.992 --> 27:51.881
might we be learning this kind of

27:51.881 --> 27:51.619
structure quickly ? And actually ,

27:51.729 --> 27:54.089
there , there , there are alternatives .

27:54.130 --> 27:56.186
People have thought about , um , how

27:56.186 --> 27:58.297
the hippocampus might be contributing

27:58.297 --> 28:02.219
to Um , this kind of learning , um ,

28:02.469 --> 28:04.691
that are , you know , theories that are

28:04.691 --> 28:06.858
distinct from what I just showed you .

28:06.858 --> 28:06.709
So , so let me give you a sense of , um ,

28:06.910 --> 28:09.132
what else is out there and how we might

28:09.132 --> 28:11.299
test between these ideas . And to talk

28:11.299 --> 28:13.521
about this , I'm gonna bring in another

28:13.521 --> 28:15.743
task , um , where there's been a lot of

28:15.743 --> 28:17.743
theorizing about how the hippocampus

28:17.743 --> 28:19.743
might solve this task , um , called

28:19.743 --> 28:21.799
associative inference . It's like the

28:21.799 --> 28:23.799
simplest possible form of inference

28:23.799 --> 28:25.577
where you learn . Two things go

28:25.577 --> 28:27.632
together , A and B , you learn these

28:27.632 --> 28:29.910
other two things go together , B and C ,

28:29.910 --> 28:31.966
but there's this overlap between the

28:31.966 --> 28:34.021
two pairs , and so then there's some

28:34.021 --> 28:36.188
inference test , can you get from A to

28:36.188 --> 28:38.819
C . And we know that the hippocampus is

28:38.819 --> 28:41.270
important for this kind of inference

28:41.270 --> 28:43.829
behavior both from um rodent lesion

28:43.829 --> 28:47.060
studies , and , and it also shows up in

28:47.349 --> 28:49.682
lots of human fMRI studies of this task .

28:50.939 --> 28:53.161
So there's different strategies for how

28:53.161 --> 28:55.989
you could solve this task . Um , one kind

28:55.989 --> 28:58.430
of strategy was implemented in this um

28:58.430 --> 29:01.560
model called REMERGE . And this model

29:01.560 --> 29:04.560
said , we are going to really preserve

29:04.560 --> 29:06.449
this idea that the hippocampus is

29:06.449 --> 29:08.520
specialized in pattern-separated

29:08.920 --> 29:11.087
representations . Um , we're not going

29:11.087 --> 29:13.359
to allow there to be any overlap . And

29:13.359 --> 29:15.959
the way that you solve this problem is

29:15.959 --> 29:17.848
through recurrent computations at

29:17.848 --> 29:19.903
retrieval . So the idea here is that

29:19.903 --> 29:22.126
you store your memories of seeing A , B

29:22.126 --> 29:24.126
and BC together , but in , in these

29:24.126 --> 29:26.359
like separate , you know , completely

29:26.359 --> 29:28.739
separate units . And then at test what

29:28.739 --> 29:31.300
you can do is if you're given item A by

29:31.300 --> 29:34.119
itself , you can pull up your AB memory ,

29:34.459 --> 29:36.570
and that can remind you about kind of

29:36.570 --> 29:38.626
the B item by itself , and then that

29:38.626 --> 29:41.260
can get you to BC and then back out to

29:41.260 --> 29:43.427
C . So you can solve this inference at

29:43.427 --> 29:45.819
retrieval through these recurrent

29:45.819 --> 29:47.986
connections as long as you've like set

29:47.986 --> 29:49.708
up those recurrent connections

29:49.708 --> 29:51.875
correctly . That's very different from

29:51.875 --> 29:53.930
this kind of strategy that I've been

29:53.930 --> 29:56.097
talking about where you are doing this

29:56.097 --> 29:58.208
interleaved learning with distributed

29:58.208 --> 30:00.375
representations , and then like slowly

30:00.375 --> 30:02.541
merging these representations to allow

30:02.541 --> 30:05.420
there to be this overlap um that allow

30:05.420 --> 30:07.642
that , you know , in , in the case of a

30:07.642 --> 30:09.642
associative inference would just , would

30:09.642 --> 30:11.698
just automatically show you that A and C

30:11.698 --> 30:11.459
are related because they're , they're

30:11.459 --> 30:13.403
sharing neurons by the end of this

30:13.403 --> 30:15.800
learning process . So the distributed

30:15.800 --> 30:18.770
um strategy makes the inference very

30:18.770 --> 30:21.959
fast and automatic , but it's dependent

30:21.959 --> 30:24.439
on having experienced the information

30:24.439 --> 30:26.495
in interleaved order for the reasons

30:26.495 --> 30:28.550
that I told you about earlier , um ,

30:28.550 --> 30:31.140
whereas the localist strategy also

30:31.140 --> 30:32.862
works , but it's slower and more

30:32.862 --> 30:36.540
effortful , um , and I think of it as

30:36.540 --> 30:39.300
being more um explicit and probably

30:39.300 --> 30:41.420
more conscious , right ? So you are

30:41.420 --> 30:43.680
kind of like thinking through . Um ,

30:43.750 --> 30:45.861
these connections at test , like , oh

30:45.861 --> 30:47.917
yeah , A went with B , oh yeah , I

30:47.917 --> 30:50.083
also saw B with C , right ? It's kind

30:50.083 --> 30:52.250
of , um , I think that it's often more

30:52.250 --> 30:54.306
explicit in that way . Um , but it's

30:54.306 --> 30:56.417
not dependent on presentation order .

30:56.417 --> 30:58.583
So however you experience the things ,

30:58.583 --> 31:00.806
um , initially it's fine as long as you

31:00.806 --> 31:03.083
have time to do that reasoning at test .

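[Editorial sketch: the retrieval-time strategy just described can be caricatured as spreading activation over localist pair memories. This toy search is an illustration of the idea, not REMERGE itself.]

```python
# With separately stored pair memories, getting from A to C requires
# hopping through the shared B item at retrieval time: A cues the AB
# memory, B then cues the BC memory, which yields C.
def infer_at_retrieval(cue, pairs, max_hops=3):
    """Follow stored pairings outward from the cue, one hop at a time."""
    reachable = {cue}
    for _ in range(max_hops):
        for a, b in pairs:
            if a in reachable or b in reachable:
                reachable |= {a, b}
    return reachable - {cue}

pairs = [("A", "B"), ("B", "C")]   # studied pairs with a shared B item
reachable_from_a = infer_at_retrieval("A", pairs)
```

By contrast, under the distributed account described in the talk, A and C come to share units during learning, so no retrieval-time search is needed; the inference falls out of the overlap itself.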
31:03.199 --> 31:06.310
OK . Um , so we're gonna try to use

31:06.310 --> 31:09.000
this difference and dependence on

31:09.000 --> 31:11.167
presentation order as a way of teasing

31:11.167 --> 31:13.278
apart people's different strategies ,

31:13.278 --> 31:15.389
um , that they might be using in this

31:15.389 --> 31:17.930
task . So this is a um experimental

31:17.930 --> 31:20.920
design where we do associative inference

31:20.920 --> 31:24.839
in um uh a setting where

31:24.930 --> 31:27.650
the individual pairs that people are

31:27.650 --> 31:29.483
learning are presented either in

31:29.483 --> 31:31.539
interleaved or in blocked order . So

31:31.539 --> 31:33.761
for some of these triads , you'll see A

31:33.761 --> 31:35.872
B and then BC and then A , B and BC ,

31:36.140 --> 31:38.251
and then for others you'll see all of

31:38.251 --> 31:40.473
the ABs before you see any of the BCs .

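[Editorial sketch: the two presentation schedules just described can be generated in a few lines. The triads below are placeholders; this is an illustration of the design, not the experiment's actual trial code.]

```python
# Same AB and BC pairs, scheduled two ways: interleaved alternates each
# triad's AB and BC trials, while blocked shows all ABs before any BCs.
def interleaved(ab_pairs, bc_pairs, repetitions=2):
    order = []
    for _ in range(repetitions):
        for ab, bc in zip(ab_pairs, bc_pairs):
            order += [ab, bc]            # AB, BC, AB, BC, ...
    return order

def blocked(ab_pairs, bc_pairs, repetitions=2):
    return ab_pairs * repetitions + bc_pairs * repetitions  # all ABs first

ab = [("A1", "B1"), ("A2", "B2")]
bc = [("B1", "C1"), ("B2", "C2")]

inter = interleaved(ab, bc)
block = blocked(ab, bc)
# Both schedules contain exactly the same trials, only the order differs.
```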
31:41.880 --> 31:44.047
And then at the end of this exposure ,

31:44.130 --> 31:47.819
we , um , do two kinds of tests . The

31:47.819 --> 31:49.819
first test is a speeded recognition

31:49.819 --> 31:51.930
test where we show two objects and we

31:51.930 --> 31:54.041
ask people to quickly make a judgment

31:54.041 --> 31:56.097
about whether those two objects were

31:56.097 --> 31:57.875
actually shown as a pair during

31:57.875 --> 32:00.097
learning . So in this case , the answer

32:00.097 --> 32:02.430
is no , even though there was this like ,

32:02.430 --> 32:04.430
um , linking object , you never

32:04.430 --> 32:06.652
actually directly saw these two objects

32:06.652 --> 32:08.930
together , and so , um , you would say ,

32:08.930 --> 32:11.152
no , I didn't see them . And then we do

32:11.152 --> 32:13.319
the kind of classic explicit inference

32:13.319 --> 32:15.541
test . This is what people typically do

32:15.541 --> 32:15.439
in associative inference experiments

32:15.439 --> 32:17.661
where we show them an object and we say

32:17.661 --> 32:19.661
which of these two objects below is

32:19.661 --> 32:21.772
indirectly associated with the object

32:21.772 --> 32:23.883
above and you're given plenty of time

32:23.883 --> 32:26.106
to like reason through that if you , if

32:26.106 --> 32:28.383
you want to solve the task in that way .

32:28.383 --> 32:30.619
So we don't expect a difference in the

32:30.619 --> 32:32.452
explicit inference test between our two

32:32.452 --> 32:34.452
conditions because we think you can

32:34.452 --> 32:36.563
really solve that test , uh , in , in

32:36.563 --> 32:38.508
either with either strategy . What

32:38.508 --> 32:40.397
we're really interested in is the

32:40.397 --> 32:42.286
speeded recognition test . And the

32:42.286 --> 32:44.609
reason is that if you have built up

32:44.609 --> 32:48.000
this overlapping representation of A and C ,

32:48.459 --> 32:50.939
then this particular judgment should be

32:50.939 --> 32:53.050
kind of confusing , right ? It should

32:53.050 --> 32:55.106
be hard to say that you didn't study

32:55.106 --> 32:58.229
A and C together . Whereas if you have this

32:58.229 --> 33:00.500
more separate representation of A and C ,

33:00.750 --> 33:03.640
um , Then this , this should be fine .

33:03.689 --> 33:05.633
You should be able to say , no , I

33:05.633 --> 33:07.911
didn't study those two things together .

33:07.911 --> 33:10.133
So the prediction is that people should

33:10.133 --> 33:12.078
be slower to say that they did not

33:12.078 --> 33:14.189
study an interleaved AC together with

33:14.189 --> 33:16.411
the idea that interleaving leads you to

33:16.411 --> 33:17.911
build up these overlapping

33:17.911 --> 33:20.133
representations relative to the case of

33:20.133 --> 33:23.540
the blocked ACs . OK , so here's what

33:23.540 --> 33:25.819
we found . This is across three kind of

33:25.819 --> 33:28.041
like replications of basically the same

33:28.041 --> 33:30.530
uh paradigm . um we find no difference

33:30.530 --> 33:32.660
in behavior for interleaved versus

33:32.660 --> 33:34.993
blocked in the explicit inference test .

33:35.339 --> 33:37.450
But for speeded recognition , what we

33:37.450 --> 33:39.506
find is that people are consistently

33:39.506 --> 33:41.750
slower to judge that those interleaved

33:41.750 --> 33:43.861
ACs were not studied together , which

33:43.861 --> 33:45.972
we think is consistent with this

33:45.972 --> 33:48.083
idea that you're confused

33:48.083 --> 33:49.861
because you're building up this

33:49.861 --> 33:51.972
overlapping representation . Um , and

33:51.972 --> 33:54.250
actually people would even false alarm .

33:54.250 --> 33:54.130
And so in two out of these three

33:54.130 --> 33:56.463
experiments , we found that people were ,

33:56.463 --> 33:58.519
would actually say that they studied

33:58.519 --> 34:00.686
these objects together directly , um ,

34:00.686 --> 34:03.019
when they had not studied them together .

34:03.019 --> 34:05.019
So they're , they're having a false

34:05.019 --> 34:07.019
memory about , um , about what they

34:07.019 --> 34:09.297
studied . In the interleaved condition ,

34:09.297 --> 34:11.780
not blocked condition . So that that's

34:11.780 --> 34:15.139
kind of an example of um of

34:15.139 --> 34:18.499
interleaving not necessarily being

34:18.499 --> 34:20.610
helpful to you , right , it's kind of

34:20.610 --> 34:22.888
like causing these false memories , um ,

34:22.888 --> 34:22.739
but of course we think it should be

34:22.739 --> 34:25.599
useful for um certain kinds of tasks

34:25.599 --> 34:28.658
like generalization . So we tried the

34:28.658 --> 34:30.825
same paradigm , but then at the end of

34:30.825 --> 34:32.880
the exposure , we had people learn

34:33.030 --> 34:35.830
that um some of the A objects had some

34:35.830 --> 34:37.699
like some property , some novel

34:37.699 --> 34:39.755
nonsense property , and then we , we

34:39.755 --> 34:41.532
checked how likely they were to

34:41.532 --> 34:43.421
generalize that property to the C

34:43.421 --> 34:45.643
objects . And we found that people were

34:45.643 --> 34:47.588
much more likely to do that in the

34:47.588 --> 34:49.866
interleaved case than the blocked case .

34:49.866 --> 34:51.866
And then we also thought that if we

34:51.866 --> 34:54.229
turned this task into a statistical

34:54.229 --> 34:56.388
learning task , so have , so have

34:56.388 --> 34:58.555
people see these objects presented one

34:58.555 --> 35:00.610
at a time in this kind of continuous

35:00.610 --> 35:02.666
sequence , it's more implicit , it's

35:02.666 --> 35:04.888
harder to figure out what you're , what

35:04.888 --> 35:07.055
the kind of pairs here are that you're

35:07.055 --> 35:09.221
even learning about . This is the kind

35:09.221 --> 35:11.444
of situation where interleaved learning

35:11.444 --> 35:13.555
is especially powerful . And we found

35:13.555 --> 35:15.721
in this case that actually , um , only

35:15.721 --> 35:17.915
with interleaved learning could you do

35:17.915 --> 35:20.082
any of these tasks , could you do even

35:20.082 --> 35:22.193
the explicit inference . You can give

35:22.193 --> 35:24.359
people as much time as you want , um ,

35:24.359 --> 35:26.526
at test , and they will fail , um , in

35:26.526 --> 35:28.475
this kind of situation if they had

35:28.475 --> 35:30.419
learned the information in blocked

35:30.419 --> 35:32.642
order . So this shows you that like the

35:32.642 --> 35:34.975
blocked condition is completely at chance ,

35:34.975 --> 35:36.531
um , for this paradigm . So

35:36.531 --> 35:38.586
interleaving can be very powerful in

35:38.586 --> 35:37.794
these situations where you're

35:37.794 --> 35:39.627
integrating across kind of noisy

35:39.627 --> 35:43.570
information . Um , so we compared

35:43.570 --> 35:45.181
some different models of the

35:45.181 --> 35:47.126
hippocampus . So here's the REMERGE

35:47.126 --> 35:49.403
model that um I told you about earlier .

35:49.403 --> 35:51.737
Here's our Seahorse model on the bottom ,

35:51.737 --> 35:51.489
and then this is the temporal context

35:51.489 --> 35:54.330
model , which is another kind of um

35:54.530 --> 35:56.969
like powerful model of , of how the

35:56.969 --> 35:59.929
hippocampus um supports , um .

36:01.429 --> 36:03.651
Different kinds of memory . It's , it's

36:03.651 --> 36:05.818
been applied to so inference as well ,

36:05.818 --> 36:07.707
um , and it uses a distributed

36:07.707 --> 36:09.818
internal representation , and we find

36:09.818 --> 36:11.818
that TCM always prefers interleaved

36:11.818 --> 36:13.949
presentation , whereas REMERGE never cares

36:13.949 --> 36:16.110
about presentation order , and then

36:16.110 --> 36:18.469
Seahorse actually can exhibit both

36:18.469 --> 36:21.149
behaviors depending on the task and

36:21.149 --> 36:23.469
depending on which . pathway you're

36:23.469 --> 36:25.525
relying on more , right ? Because it

36:25.525 --> 36:27.025
has kind of both styles of

36:27.025 --> 36:29.247
representation present , um , and so we

36:29.247 --> 36:31.247
can account for the kind of lack of

36:31.247 --> 36:33.136
difference between interleaving ,

36:33.136 --> 36:35.358
blocking and explicit case , as well as

36:35.358 --> 36:37.525
the preference for interleaving , um ,

36:37.525 --> 36:41.300
uh . Uh , sorry , yeah , in ,

36:41.350 --> 36:44.979
um , in the , the , the increased

36:44.979 --> 36:47.035
ability to do inference and speed of

36:47.035 --> 36:49.350
recognition . OK .

36:51.010 --> 36:54.000
OK , so interleaved learning benefits rapid

36:54.000 --> 36:55.889
inference , we think that this is

36:55.889 --> 36:57.833
consistent with this idea that the

36:57.833 --> 37:00.000
hippocampus contains these distributed

37:00.000 --> 37:02.000
representations that complement the

37:02.000 --> 37:03.889
kind of classic pattern-separated

37:03.889 --> 37:07.879
localist style representations . OK ,

37:08.070 --> 37:10.181
I'll pause here for a second , see if

37:10.181 --> 37:12.348
there's any questions before I move on

37:12.348 --> 37:14.348
to some of our sleep stuff . Yeah ,

37:14.348 --> 37:16.459
Katrina . Hi , yeah , thanks for your

37:16.459 --> 37:18.626
talk . This is fascinating . Um , I am

37:18.626 --> 37:20.848
aware of some research that shows these

37:20.848 --> 37:22.681
kind of category versus exemplar

37:22.681 --> 37:25.479
effects by hemisphere such that like

37:25.479 --> 37:27.646
right hemisphere seems to be better at

37:27.646 --> 37:29.757
exemplar encoding and left hemisphere

37:29.757 --> 37:31.868
better at category encoding , and I'm

37:31.868 --> 37:34.035
curious if you've encountered that and

37:34.035 --> 37:35.979
looking at the like right and left

37:35.979 --> 37:38.600
hippocampus at all . Um , we have not

37:38.600 --> 37:40.649
found differences , hemispheric

37:40.649 --> 37:44.629
differences on in that way . We find

37:44.629 --> 37:48.239
sometimes , um , that there are

37:49.090 --> 37:51.919
Differences in like the type of , um ,

37:52.010 --> 37:53.732
stimuli that we use . So right

37:53.732 --> 37:55.566
hippocampus is a little bit more

37:55.566 --> 37:57.788
interested in like a visual stimuli and

37:57.788 --> 37:59.899
left hippocampus is a little bit more

37:59.899 --> 38:02.066
interested in verbal stimuli , but the

38:02.066 --> 38:04.288
effects are quite small and generally ,

38:04.288 --> 38:03.610
like we don't , we don't see big

38:03.610 --> 38:06.409
hemispheric differences . Yeah . Thanks .

38:09.040 --> 38:12.149
Um , OK , cool . So I have a

38:12.149 --> 38:14.870
um a few more minutes on sleep stuff

38:14.870 --> 38:16.592
and then I and then I can take

38:16.592 --> 38:19.080
questions on , on everything . So

38:19.620 --> 38:21.787
everything I've talked about so far is

38:21.787 --> 38:23.453
like direct learning from the

38:23.453 --> 38:25.509
environment , like how do you encode

38:25.509 --> 38:27.676
new information , whether it's in this

38:27.676 --> 38:27.290
distributed form or this localist form ,

38:27.500 --> 38:29.444
but then like , ultimately , we do

38:29.444 --> 38:31.333
think that this transformation is

38:31.333 --> 38:33.760
happening where the hippocampus is um

38:33.760 --> 38:35.699
replaying information offline and

38:35.699 --> 38:38.050
helping to establish a different form

38:38.050 --> 38:40.179
of representation in , in neocortical

38:40.179 --> 38:42.340
areas . And so how does this

38:42.340 --> 38:44.451
transformation happen ? Like , how is

38:44.451 --> 38:46.350
it possible um that you can do

38:46.350 --> 38:49.159
interesting learning . Offline without

38:49.159 --> 38:51.760
any more direct exposure from the

38:51.760 --> 38:53.871
environment . How do you , how do you

38:53.871 --> 38:55.816
do this kind of like systems level

38:55.816 --> 38:58.870
transformation ? So one prediction of

38:58.870 --> 39:01.092
this kind of like way of thinking about

39:01.092 --> 39:04.580
um systems transformation is that

39:04.709 --> 39:07.870
over the course of this offline

39:07.870 --> 39:10.037
processing , especially during sleep .

39:10.037 --> 39:12.989
Um , you might expect to have a better

39:12.989 --> 39:15.156
understanding of the structure of your

39:15.156 --> 39:17.267
environment , even without any more ,

39:17.267 --> 39:19.156
you know , direct exposure to the

39:19.156 --> 39:20.878
environment . So we had run an

39:20.878 --> 39:22.933
experiment again using the satellite

39:22.933 --> 39:24.933
stimuli where we said , well , what

39:24.933 --> 39:27.045
happens to your memory for the unique

39:27.045 --> 39:27.000
versus the shared features of the

39:27.000 --> 39:29.040
stimuli across the night of sleep

39:29.040 --> 39:31.429
versus a day awake . And what we found

39:31.429 --> 39:33.959
is that people hang on to the unique

39:33.959 --> 39:37.889
features of these satellites

39:37.889 --> 39:40.479
across , um , a night of sleep ,

39:40.649 --> 39:42.816
whereas there's a lot of forgetting of

39:42.816 --> 39:44.871
that feature type across a

39:44.871 --> 39:46.816
day awake , so sleep prevents the

39:46.816 --> 39:49.030
deterioration of unique features . But

39:49.030 --> 39:50.863
in the case of shared features ,

39:50.863 --> 39:52.870
there's this very intriguing above

39:52.870 --> 39:55.037
baseline effect , um , which is pretty

39:55.037 --> 39:57.259
rare in the sleep literature , at least

39:57.259 --> 39:59.481
in the kind of declarative memory sleep

39:59.481 --> 40:01.648
literature , where sleep promotes this

40:01.648 --> 40:03.592
better understanding of the shared

40:03.592 --> 40:05.648
structure of the satellites , better

40:05.648 --> 40:07.814
than your understanding was before you

40:07.814 --> 40:09.870
slept . And we think that this might

40:09.870 --> 40:09.639
indicate what we're talking about here ,

40:09.649 --> 40:11.593
where you're like building up this

40:11.593 --> 40:13.760
representation that's really sensitive

40:13.760 --> 40:15.816
to the structure of this information

40:15.816 --> 40:17.982
and might make it even more obvious to

40:17.982 --> 40:20.360
you . Um , I think I'll , I'll skip

40:20.360 --> 40:23.320
that one . OK , so , yeah , Jason . Hey ,

40:23.439 --> 40:25.772
yeah , sorry . Uh , so question on this ,

40:25.772 --> 40:27.883
I know you're gonna go into the sleep

40:27.883 --> 40:27.439
part of this , but are there any other

40:27.439 --> 40:29.639
scenarios where you think this is

40:29.639 --> 40:32.620
occurring outside of sleep , in any way ?

40:34.030 --> 40:36.669
So we , we know that there's quite a

40:36.669 --> 40:38.840
lot of offline replay that happens

40:39.110 --> 40:41.469
while you are awake , um , in like

40:41.469 --> 40:45.100
awake kind of rest periods . Um ,

40:45.429 --> 40:48.580
and we think that that is also ,

40:48.949 --> 40:51.116
um , like behaviorally relevant . Like

40:51.116 --> 40:53.060
there's some learning

40:53.060 --> 40:55.060
function there and it impacts later

40:55.060 --> 40:57.445
behavior . Um , we think that sleep

40:57.445 --> 40:59.645
might be especially important for this

40:59.645 --> 41:03.014
idea , um , of systems transformation .

41:03.165 --> 41:05.405
So , um , we know that there are these

41:05.405 --> 41:09.165
specialized , um , kinds of , um ,

41:10.350 --> 41:12.183
coupling between hippocampal and

41:12.183 --> 41:14.072
cortical areas , that's happening

41:14.072 --> 41:16.183
during sleep and less so during awake

41:16.183 --> 41:18.017
reactivation that we think could

41:18.017 --> 41:20.183
support this like teaching function of

41:20.183 --> 41:22.294
the hippocampus helping the neocortex

41:22.294 --> 41:24.406
to establish new representations . So

41:24.406 --> 41:26.628
our idea , this is just our idea , it's

41:26.628 --> 41:28.961
really not established empirically , um ,

41:28.961 --> 41:31.183
but the way we think about this is that

41:31.183 --> 41:33.350
what's special about sleep here is its

41:33.350 --> 41:35.406
ability to transform , um , memories

41:35.406 --> 41:37.628
across systems as opposed to doing like

41:37.628 --> 41:40.189
more local learning . Within systems .

41:40.899 --> 41:44.020
Awesome . Thank you . Um , OK ,

41:44.310 --> 41:48.020
so , to try to think through how

41:48.020 --> 41:50.139
do you do useful learning offline

41:50.139 --> 41:51.972
without any more exposure to the

41:51.972 --> 41:54.139
environment . We've been again working

41:54.139 --> 41:56.250
with uh um neural network models . So

41:56.250 --> 41:58.129
this is a model that has um a

41:58.129 --> 42:00.462
hippocampus , that's , that's sort of a ,

42:00.462 --> 42:02.629
like a simpler version of our C-HORSE

42:02.629 --> 42:04.851
model , um , and we connect it up with a

42:04.851 --> 42:06.907
big neocortical module . Um , and we

42:06.907 --> 42:08.851
can , you know , we can train this

42:08.851 --> 42:10.851
model up on like the satellites the

42:10.851 --> 42:13.073
same way that we normally would train a

42:13.073 --> 42:12.780
model during awake learning , we know how

42:12.780 --> 42:15.090
to do that . Um , the trick with this ,

42:15.110 --> 42:17.580
um , with , with this new , um , model

42:17.580 --> 42:19.747
is to figure out how to let this model

42:19.747 --> 42:21.858
run completely autonomously offline ,

42:21.939 --> 42:24.106
um , and do something useful , do some

42:24.106 --> 42:26.217
um interesting kind of transformation

42:26.217 --> 42:28.439
of the representations . So how do we ,

42:28.439 --> 42:30.495
how do we get this to work ? Um , so

42:30.495 --> 42:32.750
there's a , there's a few kind of key

42:32.750 --> 42:36.199
like properties of , um , this , uh ,

42:36.260 --> 42:38.482
this like learning scheme that allow us

42:38.482 --> 42:40.593
to do something useful during sleep .

42:40.593 --> 42:42.816
So the first thing is that we just need

42:42.816 --> 42:44.927
a way for this model

42:44.927 --> 42:47.038
to transition from memory to memory .

42:47.038 --> 42:48.871
And we use a short term synaptic

42:48.871 --> 42:51.139
depression mechanism , um , which means

42:51.139 --> 42:53.500
that to the extent that two of these

42:53.500 --> 42:55.659
units are coactive for a while , the

42:55.659 --> 42:57.860
synapse between them tires out

42:57.860 --> 43:00.580
temporarily , um , so that the model's

43:00.580 --> 43:02.860
kind of forced to move on to its next

43:02.860 --> 43:05.027
attractor state . So that's how we get

43:05.027 --> 43:07.193
the model to transition from attractor

43:07.193 --> 43:09.304
to attractor by itself during sleep .
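The transition mechanism described here can be sketched in a few lines. This is an illustrative cartoon, not the lab's actual network: two competing "memory" units stand in for full attractor states, and the fatigue and recovery parameters are made-up values.

```python
import numpy as np

# Cartoon of short-term synaptic depression driving offline replay:
# two "memory" units compete winner-take-all, and the winner's
# supporting synapse tires out while it is active, so the network
# is forced to hop to the other attractor and back.
def sleep_replay(steps=100, fatigue=0.08, recovery=0.02):
    w = np.array([1.0, 1.0])       # synaptic resources for each memory
    bias = np.array([0.01, 0.0])   # tiny tie-breaker so memory 0 wins first
    visited = []
    for _ in range(steps):
        winner = int(np.argmax(w + bias))  # strongest attractor is expressed
        visited.append(winner)
        w[winner] -= fatigue               # the active synapse tires out...
        w += recovery                      # ...and everything slowly recovers
        w = w.clip(0.0, 1.0)
    return visited

visited = sleep_replay()
```

Run offline with no input, the network ends up expressing both memories in a fairly even alternation, which is the "uniformly interleaving replay" property mentioned later in the talk.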

43:09.560 --> 43:11.727
And then we need a way of knowing like

43:11.727 --> 43:14.760
which of these moments , um , during

43:14.760 --> 43:17.840
the sleep are worth learning from . And

43:18.090 --> 43:20.201
for this , we have the model tracking

43:20.201 --> 43:23.250
its own stability , and it knows that

43:23.250 --> 43:25.719
when activity is very stable , um ,

43:25.810 --> 43:27.810
that , that might be a state that's

43:27.810 --> 43:29.969
worth learning from . And so we , we

43:29.969 --> 43:32.136
mark , um , so like right now , that's

43:32.136 --> 43:34.191
a very stable state . We mark , um ,

43:34.191 --> 43:36.191
these states as good or plus states

43:36.191 --> 43:38.413
that might be worth learning from . And

43:38.413 --> 43:41.020
then we use oscillations , which are a

43:41.020 --> 43:43.187
very prominent feature of the sleeping

43:43.187 --> 43:46.020
brain , um , as a way of perturbing

43:46.020 --> 43:49.139
these stable states to reveal kind of

43:49.139 --> 43:51.306
aspects of these attractors that could

43:51.306 --> 43:53.739
be improved . So let me just walk you

43:53.739 --> 43:55.906
through how that works . So what we're

43:55.906 --> 43:58.128
doing is we're oscillating the level of

43:58.128 --> 44:00.461
inhibition throughout this network . Um ,

44:00.461 --> 44:02.683
and if you think like , OK , here we're

44:02.683 --> 44:04.739
replaying this satellite over here ,

44:04.739 --> 44:06.572
we're in this attractor , um , but

44:06.572 --> 44:08.572
there's this other nearby attractor

44:08.572 --> 44:10.739
over here for this like satellite from

44:10.739 --> 44:12.795
the same category . Um , as we raise

44:12.795 --> 44:15.017
inhibition in the model , that makes it

44:15.017 --> 44:17.060
harder for units to be active , and

44:17.060 --> 44:19.227
what happens is that the weakest parts

44:19.227 --> 44:21.979
of this memory will fall out first as

44:21.979 --> 44:24.312
you raise inhibition . So that's useful ,

44:24.312 --> 44:26.535
that's a useful kind of thing to

44:26.535 --> 44:28.701
reveal about this , um , attractor . And

44:28.701 --> 44:30.757
then when you lower inhibition below

44:30.757 --> 44:32.701
baseline , that allows activity to

44:32.701 --> 44:34.923
spread farther than it normally would .

44:34.923 --> 44:36.979
Um , and that's a way of identifying

44:36.979 --> 44:39.570
potentially competing or interfering

44:39.570 --> 44:41.889
nearby memories . So both sides of the

44:41.889 --> 44:44.969
oscillation are kind of useful um in

44:44.969 --> 44:47.080
figuring out ways that we can improve

44:47.080 --> 44:49.909
this attractor . So what we do is , um ,

44:49.919 --> 44:52.086
now we have our good stable states and

44:52.086 --> 44:53.975
we have our kind of perturbed bad

44:53.975 --> 44:56.086
states from the oscillations , and we

44:56.086 --> 44:58.252
can do um error-driven learning now . So

44:58.252 --> 45:00.308
we're going to use contrastive Hebbian

45:00.308 --> 45:02.197
learning to update the connection

45:02.197 --> 45:03.975
weights so that the patterns of

45:03.975 --> 45:06.086
coactivity in the minus state between

45:06.086 --> 45:08.141
units look more like the patterns of

45:08.141 --> 45:10.086
coactivity in the good stable plus

45:10.086 --> 45:12.086
state . So the , the trick here was

45:12.086 --> 45:13.919
trying to find some way of doing

45:13.919 --> 45:15.975
error-driven learning when you don't

45:15.975 --> 45:15.639
have actual feedback from the

45:15.639 --> 45:17.750
environment . Um , and this is , this

45:17.750 --> 45:19.972
is our way of , of like getting that to

45:19.972 --> 45:23.479
work um in the sleeping brain . So we

45:23.489 --> 45:25.711
train the model up on these , um , same

45:25.711 --> 45:28.209
satellites . We find in its sleep that

45:28.209 --> 45:30.199
it does a nice job of uniformly

45:30.530 --> 45:32.629
interleaving replay of the different

45:32.629 --> 45:34.685
satellites . So that's good . That's

45:34.685 --> 45:36.796
just kind of a prerequisite

45:36.796 --> 45:38.685
for interesting structure

45:38.685 --> 45:40.740
learning . And then we find that the

45:40.740 --> 45:43.760
neocortical representations of , um ,

45:43.770 --> 45:46.360
this model become more overlapping over

45:46.360 --> 45:48.693
the course of learning from this replay .

45:49.100 --> 45:51.739
And that supports better memory

45:51.739 --> 45:54.179
for shared features , which is um what

45:54.179 --> 45:56.401
we had seen in our behavioral data . So

45:56.401 --> 45:58.457
it's just a kind of proof of concept

45:58.457 --> 46:00.123
that you can get that kind of

46:00.123 --> 46:02.457
behavioral change um working , you know ,

46:02.457 --> 46:04.623
in a system running by itself , that ,

46:04.623 --> 46:06.790
that is doing this kind of like memory

46:06.790 --> 46:08.735
transformation , building up these

46:08.735 --> 46:10.957
overlapping representations in cortex .
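The oscillation-plus-contrastive-learning scheme just described can be sketched as follows. This is a minimal reconstruction under stated assumptions (random weights, a fixed inhibition threshold standing in for the raised-inhibition phase), not the published model, and it shows only the raised-inhibition half of the oscillation: the weakest units of a replayed pattern drop out to form the "minus" state, and a contrastive Hebbian update pulls minus-state coactivity toward the stable "plus" state.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.uniform(0.1, 0.3, size=(n, n))   # recurrent weights within one attractor
W = (W + W.T) / 2                        # keep them symmetric
np.fill_diagonal(W, 0.0)

plus = np.ones(n)    # the stable replayed pattern (all units active)
theta = 1.5          # raised-inhibition threshold (illustrative value)

def minus_state(W):
    """Raising inhibition: units with the weakest recurrent support drop out."""
    return (W @ plus > theta).astype(float)

eta = 0.1
for _ in range(20):
    minus = minus_state(W)
    # contrastive Hebbian step: make minus-state coactivity look like the
    # plus-state coactivity, which strengthens exactly the synapses of the
    # units that fell out under raised inhibition
    W += eta * (np.outer(plus, plus) - np.outer(minus, minus))
    np.fill_diagonal(W, 0.0)
```

After a few updates the formerly weakest units survive the same perturbation, so the attractor's weak spots have been repaired. The lowered-inhibition half of the oscillation, which lets activity spread into competing attractors, is omitted here.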

46:12.090 --> 46:14.979
So the model also has sleep stages . It

46:14.979 --> 46:18.340
has a period um of sleep that

46:18.340 --> 46:20.507
corresponds to non-REM sleep . This is

46:20.507 --> 46:22.562
what I was referring to earlier . We

46:22.562 --> 46:24.729
know that there are these very special

46:24.729 --> 46:26.618
coupled oscillations , um , where

46:26.618 --> 46:28.840
hippocampal ripples and um thalamocortical

46:28.840 --> 46:30.507
spindles and neocortical slow

46:30.507 --> 46:32.507
oscillations are kind of all , um ,

46:32.507 --> 46:34.507
happening in these coordinated ways

46:34.507 --> 46:36.673
that we think is helping communication

46:36.673 --> 46:38.896
between the hippocampus and cortex . So

46:38.896 --> 46:38.810
the way we implement this in the model

46:38.810 --> 46:41.290
is that we have a very um like strong

46:41.290 --> 46:43.512
coupling between hippocampus and cortex

46:43.512 --> 46:45.530
during our non-REM phase . And then

46:45.530 --> 46:48.649
during our REM phase of um the model

46:48.649 --> 46:51.340
sleep , we let the hippocampus and the

46:51.340 --> 46:54.250
cortex basically run on their own more

46:54.250 --> 46:56.417
independently . They're less coupled ,

46:56.417 --> 46:58.639
um , and we think that this could serve

46:58.639 --> 47:01.010
as a time for the neocortex to explore

47:01.010 --> 47:04.310
its existing understanding of the

47:04.310 --> 47:06.310
world , um , separate from what the

47:06.310 --> 47:08.532
hippocampus has to tell it about recent

47:08.532 --> 47:10.532
experience . And we think that this

47:10.532 --> 47:12.750
could be very useful for learning in

47:12.750 --> 47:14.972
non-stationary environments . So let me

47:14.972 --> 47:18.169
give you a quick sense of why we

47:18.169 --> 47:21.969
think that is . So , this is um a

47:21.969 --> 47:25.520
simulation that shows what happens um

47:25.689 --> 47:28.050
when you are doing different kinds of

47:28.050 --> 47:31.840
sleep , um , after

47:31.889 --> 47:35.770
a change in the statistics of

47:35.770 --> 47:37.959
your environment . So , we first have

47:37.959 --> 47:40.800
some environment 1 training , um ,

47:40.810 --> 47:42.709
like very overlearned . This is

47:42.709 --> 47:44.709
information that you can imagine is

47:44.709 --> 47:46.765
completely consolidated , completely

47:46.765 --> 47:48.931
represented in neocortex , and then we

47:48.931 --> 47:51.098
have the model start to learn some new

47:51.098 --> 47:53.153
information , environment 2 . It's ,

47:53.153 --> 47:55.376
it's overlapping with environment 1 ,

47:55.376 --> 47:55.370
but it's some new , some new

47:55.370 --> 47:59.189
distribution of um information . And

47:59.189 --> 48:01.411
starts to learn this information in the

48:01.411 --> 48:03.522
hippocampus . And then we stop it and

48:03.522 --> 48:05.689
we say , OK , we're gonna let you do a

48:05.689 --> 48:05.429
night of sleep . This night of sleep

48:05.429 --> 48:08.189
could have only non-REM sleep , or it

48:08.189 --> 48:10.522
could alternate between non-REM and REM ,

48:10.522 --> 48:12.870
as we do over the course of a night of

48:12.870 --> 48:15.110
normal sleep , or it could have just

48:15.110 --> 48:19.040
REM sleep . And what we find is that ,

48:19.290 --> 48:22.659
um , performance for environment 2

48:23.129 --> 48:25.439
improves over the course of the sleep

48:25.850 --> 48:28.449
as long as non-REM sleep is involved .

48:28.570 --> 48:30.570
So , um , so blue is non-REM , it's

48:30.570 --> 48:32.681
getting better over the course of the

48:32.681 --> 48:34.600
night of sleep , and the gray is

48:34.600 --> 48:36.711
alternating between non-REM and REM .

48:36.711 --> 48:38.822
So to the extent that the hippocampus

48:38.822 --> 48:40.656
has a chance in non-REM to like

48:40.656 --> 48:42.544
express this new information from

48:42.544 --> 48:44.822
Environment 2 , you will get better

48:44.822 --> 48:46.933
at environment 2 over the course

48:46.933 --> 48:48.933
of the night of sleep . But what we

48:48.933 --> 48:51.100
think is really important here is that

48:51.100 --> 48:53.211
for performance in environment 1 ,

48:53.211 --> 48:56.000
the only way to retain your knowledge

48:56.000 --> 48:57.879
of environment one while you're

48:57.879 --> 49:00.959
learning environment 2 is to alternate

49:00.959 --> 49:03.181
between non-REM and REM sleep . And the

49:03.181 --> 49:06.959
reason is that REM is allowing your

49:06.959 --> 49:09.429
cortex to go back and remind itself

49:09.429 --> 49:11.485
about what it knew about environment

49:11.485 --> 49:13.651
one , and make sure that's not getting

49:13.651 --> 49:15.485
overwritten by environment 2 ,

49:15.485 --> 49:17.596
basically . So , going back and forth

49:17.596 --> 49:19.651
between non-REM and REM is a way for

49:19.651 --> 49:21.429
the brain to kind of solve this

49:21.830 --> 49:24.550
non-stationary learning problem , um .
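A toy version of this argument, my own construction rather than the published simulation: the "cortex" is a single point that non-REM steps pull toward the new environment (the hippocampal teacher) and REM steps pull back toward its own consolidated knowledge.

```python
import numpy as np

env1 = np.array([1.0, 0.0])   # consolidated knowledge (environment 1)
env2 = np.array([0.0, 1.0])   # new knowledge held by the hippocampus

def night_of_sleep(stages, lr=0.2):
    cortex = env1.copy()                  # cortex starts out knowing env 1
    for stage in stages:
        # non-REM: hippocampus teaches env 2; REM: cortex rehearses env 1
        target = env2 if stage == "nrem" else env1
        cortex += lr * (target - cortex)
    return cortex

def err(state, env):
    return float(np.linalg.norm(state - env))

nrem_only   = night_of_sleep(["nrem"] * 20)
alternating = night_of_sleep(["nrem", "rem"] * 10)
```

Non-REM alone masters environment 2 but overwrites environment 1; alternating retains much of environment 1 while still improving on environment 2 relative to where the cortex started, which is the qualitative pattern in the simulation described above.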

49:25.969 --> 49:27.802
Seems like , yes , it is exactly

49:27.802 --> 49:29.913
avoiding catastrophic forgetting , yes ,

49:29.913 --> 49:31.747
it's a way of avoiding catastrophic

49:31.747 --> 49:35.560
forgetting . Um , so

49:35.560 --> 49:37.727
the all the replay I just talked about

49:37.727 --> 49:40.429
is kind of like this simple like , um ,

49:40.560 --> 49:43.590
replay of individual objects , um ,

49:43.800 --> 49:45.840
that don't unfold . There's no like

49:45.840 --> 49:47.951
temporal sequence , but often when we

49:47.951 --> 49:50.062
talk about replay in the neuroscience

49:50.062 --> 49:49.949
literature , we're thinking about

49:49.949 --> 49:52.600
sequences of states

49:52.600 --> 49:54.933
that are being replayed . I just wanted ,

49:54.933 --> 49:57.100
I'm not going to get into this paper ,

49:57.100 --> 49:56.419
but just in case you're interested ,

49:56.479 --> 49:58.646
you can ask me about it . We're also ,

49:58.646 --> 50:00.979
um , building models of , of sequential ,

50:00.979 --> 50:03.146
um , replay . We're interested in that

50:03.146 --> 50:06.120
as well . Um , OK , so , um , overall .

50:06.770 --> 50:08.937
We think the hippocampus might contain

50:08.937 --> 50:11.103
these distributed representations that

50:11.103 --> 50:13.189
are very powerful , um , that , that

50:13.189 --> 50:15.300
allow you to learn statistics quickly

50:15.300 --> 50:17.411
and generalize , um , that complement

50:17.411 --> 50:19.245
these pattern separated localist

50:19.245 --> 50:21.467
representations that we still think are

50:21.467 --> 50:23.633
crucial for the kind of basic episodic

50:23.633 --> 50:25.745
memory functions . But then that this

50:25.745 --> 50:27.856
might be part of a broader process or

50:27.856 --> 50:30.189
maybe even continuum where during sleep ,

50:30.189 --> 50:29.330
the hippocampus is helping the

50:29.330 --> 50:31.552
neocortex to further kind of learn that

50:31.552 --> 50:33.689
shared structure , um , and to

50:33.689 --> 50:35.633
integrate the new information with

50:35.633 --> 50:37.745
existing knowledge , um , using these

50:37.745 --> 50:39.245
kind of highly distributed

50:39.245 --> 50:42.159
representations . OK , so there's my

50:42.159 --> 50:44.215
lab . Thanks so much for listening .

50:44.215 --> 50:46.381
And I'm very happy to take questions .

50:46.381 --> 50:48.548
I , I , the chat has been moving and I

50:48.548 --> 50:50.603
haven't caught everything . So

50:50.603 --> 50:52.492
just like , um , yeah , I'll read

50:52.492 --> 50:54.492
Howard's question , but then if you

50:54.492 --> 50:56.826
have questions from earlier in the chat ,

50:56.826 --> 50:58.881
please just like , like , uh , bring

50:58.881 --> 51:01.048
them back up or unmute yourself . OK ,

51:01.048 --> 51:03.103
so please forgive this question if the

51:03.103 --> 51:05.215
premise is cracked . I've heard of folks

51:05.215 --> 51:05.030
using lucid dreaming to increase the

51:05.030 --> 51:07.308
primacy of important thought decisions .

51:07.308 --> 51:09.419
Is this possible ? Could lucid dreaming be

51:09.419 --> 51:11.474
employed to not only increase memory

51:11.474 --> 51:13.474
duration during sleep , but also to

51:13.474 --> 51:15.641
increase learning effectiveness . Um ,

51:15.641 --> 51:18.399
this is a super , super understudied

51:18.399 --> 51:22.179
area . There have just now , um , come

51:22.179 --> 51:24.459
out maybe two papers showing that we

51:24.459 --> 51:27.739
can , um , induce lucid dreaming in the

51:27.739 --> 51:29.906
lab and have conversations with people

51:29.906 --> 51:31.961
while they're lucid dreaming , which

51:31.961 --> 51:33.906
opens up a whole new , um , set of

51:33.906 --> 51:36.072
possibilities for how we could study ,

51:36.072 --> 51:39.080
um , memory consolidation , um , through

51:39.280 --> 51:41.909
lucid dreaming , but it's really like

51:41.909 --> 51:44.649
has not yet been done , so um it's just

51:44.649 --> 51:46.093
like a totally new area .

51:48.870 --> 51:51.092
What other questions did I miss or what

51:51.092 --> 51:52.759
other questions do you have ?

51:59.350 --> 52:02.129
Uh this is . I'm sorry . Yeah , I was ,

52:02.229 --> 52:04.979
I was just gonna say , first , really

52:04.979 --> 52:06.979
impressive work we , we've followed

52:06.979 --> 52:08.812
your material . It's very , very

52:08.812 --> 52:10.812
impressive and it's exactly what we

52:10.812 --> 52:13.035
care about . Uh , you mentioned earlier

52:13.035 --> 52:15.090
that you might return to comments on

52:15.090 --> 52:17.090
consciousness , um , specifically ,

52:17.090 --> 52:19.709
what I'm interested in is , um , your

52:19.709 --> 52:23.669
approach to modeling , um , is tied and

52:23.669 --> 52:25.780
especially experiments you do seem to

52:25.780 --> 52:27.613
me to be tied to the data you're

52:27.613 --> 52:29.391
putting in and how you're gonna

52:29.391 --> 52:31.502
represent that data . Um , one of our

52:31.502 --> 52:33.613
big interests are what happens if you

52:33.613 --> 52:35.889
have this vocabulary of memory and

52:35.889 --> 52:37.945
cognition which isn't in the sensory

52:37.945 --> 52:40.209
space but is removed from it , and then

52:40.209 --> 52:42.153
you go through all these processes

52:42.153 --> 52:44.320
you're doing . Um , your processes can

52:44.320 --> 52:46.487
create a vocabulary , but what happens

52:46.689 --> 52:50.060
is that I'm gonna use the consciousness

52:50.060 --> 52:52.227
word , the conscious experience is the

52:52.227 --> 52:54.171
vocabulary that you didn't want to

52:54.171 --> 52:56.338
encode . So that was done prior to all

52:56.338 --> 52:58.449
these things you're showing us how to

52:58.449 --> 53:00.227
do . Have you , do you have any

53:00.227 --> 53:00.060
thoughts on those areas ? Thank you .

53:00.179 --> 53:03.429
Over . Um , so , um ,

53:05.570 --> 53:09.020
I definitely think that almost all

53:09.020 --> 53:11.020
of what we're talking about here is

53:11.020 --> 53:13.979
highly abstracted away from sensory

53:13.979 --> 53:17.429
information . So , um , there's quite a

53:17.429 --> 53:19.373
lot of , I call it preprocessing ,

53:19.373 --> 53:22.350
which is kind of , if you're

53:22.350 --> 53:24.572
a perception scientist , like you might

53:24.572 --> 53:26.683
not appreciate that kind of framing ,

53:26.683 --> 53:28.628
but like there's a lot of steps of

53:28.628 --> 53:30.628
processing that happen before the

53:30.628 --> 53:32.850
hippocampus has access to information ,

53:32.850 --> 53:34.969
um , and so that means that you're ,

53:35.080 --> 53:38.800
you're , you know , your , your whole

53:38.800 --> 53:41.149
like life's experience and evolution

53:41.149 --> 53:43.371
and all of this is going into like your

53:43.371 --> 53:45.427
interpretation um of the information

53:45.427 --> 53:47.538
that even just like makes it into the

53:47.538 --> 53:49.538
hippocampus in in the first place .

53:49.538 --> 53:51.427
Like you're not , it's not direct

53:51.427 --> 53:53.482
perceptual information . Um , and so

53:53.482 --> 53:55.705
all these operations are happening in a

53:55.705 --> 53:57.816
space that is like very like abstract

53:57.816 --> 54:01.639
and semantic . Um ,

54:01.770 --> 54:04.850
but that

54:04.850 --> 54:08.689
in itself , I don't think , tells

54:08.689 --> 54:11.709
you anything about the nature of your

54:11.709 --> 54:14.149
like , conscious , um , experience of

54:14.149 --> 54:16.750
the information . So , I , like I said ,

54:16.850 --> 54:18.572
you know , you could take that

54:18.572 --> 54:20.572
information and do different things

54:20.572 --> 54:22.739
with it . You could do operations that

54:22.739 --> 54:24.628
involve like deliberate , kind of

54:24.628 --> 54:26.794
explicit , maybe more conscious , um ,

54:26.794 --> 54:28.683
operations , or you could do more

54:28.683 --> 54:31.350
automatic processing , um , with that

54:31.350 --> 54:34.870
information . And I don't

54:35.679 --> 54:37.846
know if I have anything interesting to

54:37.846 --> 54:39.735
say about how those . No , no , I

54:39.735 --> 54:41.735
wasn't even trying to push you down

54:41.735 --> 54:44.068
that path . You answered perfectly .

54:44.068 --> 54:46.290
Your answer was freaking exactly what I

54:46.290 --> 54:48.346
wanted to hear . Um , you know , our

54:48.346 --> 54:50.512
model of consciousness , it doesn't

54:50.512 --> 54:50.239
fall into that trap of deliberate

54:50.239 --> 54:52.395
versus automatic , that's ,

54:52.435 --> 54:54.602
that's not where we go at all . So I ,

54:54.754 --> 54:56.865
I didn't want , I didn't want to push

54:56.865 --> 54:58.976
you there , but what I wanted to hear

54:58.976 --> 55:01.198
was what you said . If in fact you're a

55:01.198 --> 55:03.625
zealot like I am , that that conscious

55:03.625 --> 55:06.225
experience is a vocabulary of cognition ,

55:06.514 --> 55:09.879
I can take that

55:09.879 --> 55:11.935
vocabulary , and put it through your

55:11.935 --> 55:14.046
processes to , to replicate the , the

55:14.046 --> 55:15.990
exciting results you're getting in

55:15.990 --> 55:17.879
terms of how do you form memories

55:17.879 --> 55:19.935
quickly , the distributed , the localist ,

55:19.935 --> 55:21.990
all that can be done in that space .

55:21.990 --> 55:23.657
And I just , I just find that

55:23.657 --> 55:25.712
fascinating . So I , I appreciate it

55:25.712 --> 55:27.046
over . Great , thanks .

55:41.830 --> 55:45.479
I guess I have two questions , but ,

55:45.489 --> 55:47.711
and I'll , I'll do this other one first

55:47.711 --> 55:49.711
cause it follows . So in one of the

55:49.711 --> 55:51.767
papers I can't remember , I think it

55:51.767 --> 55:54.370
was the PNAS paper , you started to map

55:54.370 --> 55:56.592
these different things together of like

55:56.592 --> 55:58.759
automatic . I don't think you used the

55:58.759 --> 56:00.879
word conscious , but um . You also at

56:00.879 --> 56:02.657
one point use like the implicit

56:02.657 --> 56:04.712
explicit kind of like formalisms and

56:04.712 --> 56:06.823
then you're like , well , maybe these

56:06.823 --> 56:08.823
things don't actually map onto that

56:08.823 --> 56:10.712
exactly . I think you have like a

56:10.712 --> 56:12.935
couple different sentences . So I guess

56:12.935 --> 56:15.157
like when you're doing this hippocampal

56:15.157 --> 56:17.435
statistical learning , like , what are ,

56:17.435 --> 56:19.712
what's the experiences of those people ?

56:19.712 --> 56:21.879
Are they able to articulate the rule ?

56:21.879 --> 56:21.320
Like , is there some sort of like

56:21.320 --> 56:23.487
memory system interaction there ? Does

56:23.487 --> 56:25.709
it look implicit ? Is it ? I don't know .

56:25.709 --> 56:28.000
Sorry . There , there

56:28.000 --> 56:31.199
actually , there is work on this , so

56:31.199 --> 56:35.010
sometimes

56:35.010 --> 56:38.120
people can report the like pairs in a

56:38.120 --> 56:39.870
statistical learning

56:39.870 --> 56:42.370
experiment , and sometimes they can't ,

56:42.409 --> 56:45.209
and there are differences , the

56:45.209 --> 56:47.376
literature is kind of mixed on whether

56:47.376 --> 56:49.320
behavior is different in those two

56:49.320 --> 56:51.431
situations . What we know for sure is

56:51.431 --> 56:53.265
that , um , you don't need to have

56:53.265 --> 56:55.320
conscious access in order to perform

56:55.320 --> 56:57.487
the task above chance . Um , but there

56:57.487 --> 56:59.598
are some studies that suggest that if

56:59.598 --> 57:01.850
you do come to have conscious access ,

57:02.080 --> 57:04.302
then that allows you to do , you know ,

57:04.302 --> 57:06.590
like do even better on the task . Um ,

57:06.979 --> 57:08.757
so , so I , it's an interesting

57:08.757 --> 57:10.757
question , like , what does it mean

57:10.757 --> 57:12.979
that you kind of come to have conscious

57:13.010 --> 57:15.177
access to the information sometimes or

57:15.177 --> 57:17.288
like what's happening in those people

57:17.288 --> 57:19.454
who do , um . Who can report something

57:19.454 --> 57:21.870
about what they , what they saw . Um ,

57:22.199 --> 57:25.770
so one possibility is that , like , you

57:25.770 --> 57:27.770
know , your trisynaptic pathway is

57:27.770 --> 57:29.739
encoding information from the

57:29.739 --> 57:31.850
monosynaptic pathway , so that would be

57:31.850 --> 57:34.017
like a memory systems interaction kind

57:34.017 --> 57:36.128
of idea . Another possibility is that

57:36.128 --> 57:39.580
there's just a

57:39.590 --> 57:41.750
strength of learning in the automatic

57:41.750 --> 57:43.861
system that when it reaches a certain

57:43.861 --> 57:45.917
level , we

57:45.917 --> 57:48.270
can somehow read out from that in a

57:48.270 --> 57:50.860
more explicit way . um I feel very

57:50.860 --> 57:53.340
agnostic about which of those two

57:53.340 --> 57:55.507
possibilities it is . I don't think we

57:55.507 --> 57:57.673
have , I don't think we know . I think

57:57.673 --> 57:59.919
either one is possible . Yeah .

58:03.719 --> 58:06.340
Um , yeah , Katrina . Yeah , I have a

58:06.340 --> 58:08.562
question just about interleaving , um ,

58:09.379 --> 58:11.601
In interleaving research , there's kind

58:11.601 --> 58:13.435
of choices to be made about what

58:13.435 --> 58:15.657
exactly you are interleaving or at what

58:15.657 --> 58:17.879
level you're interleaving information .

58:17.879 --> 58:19.879
So I was like curious in your model

58:19.879 --> 58:22.046
kind of . Does it mean interleaving if

58:22.046 --> 58:24.212
you interleave across one of your made

58:24.212 --> 58:25.990
up categories , or is it within

58:25.990 --> 58:28.101
features of that category ? Does that

58:28.101 --> 58:30.101
matter ? Do we know kind of at what

58:30.101 --> 58:32.101
level of the hippocampus , that CA3

58:32.101 --> 58:34.323
pathway or , right , the CA3 pathway is

58:34.323 --> 58:38.110
doing it ? Um , so , you

58:38.110 --> 58:40.149
need to interleave across the

58:40.149 --> 58:42.038
information that you're trying to

58:42.038 --> 58:44.310
generalize across . Um , so if you're

58:44.310 --> 58:47.020
trying to understand the structure of

58:47.030 --> 58:50.610
one category , then you will need to

58:50.610 --> 58:52.610
interleave the exemplars within the

58:52.610 --> 58:54.594
category . If you're trying to

58:54.594 --> 58:56.372
understand the structure across

58:56.372 --> 58:58.483
categories , then you'll need to both

58:58.483 --> 59:00.483
interleave the exemplars within each

59:00.483 --> 59:02.538
category and interleave the order of

59:02.538 --> 59:04.594
the categories . Um , so any form of

59:04.594 --> 59:06.427
blocking at any level could be a

59:06.427 --> 59:08.705
problem for distributed representation .

59:08.705 --> 59:11.469
Um , as long as you are attempting to

59:12.020 --> 59:14.076
understand the structure across that

59:14.076 --> 59:17.159
level . So it depends on the assessment

59:17.159 --> 59:20.350
and the task , um , but like any , uh ,

59:20.479 --> 59:23.189
the , the model will show interference

59:23.649 --> 59:26.560
at whatever level , um , is blocked .

59:26.959 --> 59:27.959
Yeah .
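The blocking-versus-interleaving point can be illustrated with any learner that uses shared, distributed weights. A minimal sketch with made-up items, not the lab's model: two overlapping "exemplars" trained by the delta rule, either blocked or interleaved.

```python
import numpy as np

# Two items that share a feature (the middle one), so learning them
# with shared weights creates interference.
item_A = (np.array([1.0, 1.0, 0.0]), 1.0)   # (features, target)
item_B = (np.array([0.0, 1.0, 1.0]), 0.0)

def train(order, lr=0.5):
    w = np.zeros(3)
    for x, y in order:
        w += lr * (y - w @ x) * x           # delta-rule update
    return w

blocked     = train([item_A] * 10 + [item_B] * 10)   # all A, then all B
interleaved = train([item_A, item_B] * 10)           # A, B, A, B, ...

def error_on_A(w):
    x, y = item_A
    return abs(y - w @ x)
```

Blocked training leaves a large residual error on item A (the B block partially overwrites the shared weight), while interleaving drives both errors toward zero, which is why a distributed model shows interference at whatever level the training is blocked.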

59:30.760 --> 59:32.879
OK , guys , any , any final question

59:32.879 --> 59:35.320
for Doctor Shapiro , jump in , um , but ,

59:35.439 --> 59:37.550
uh , you know , unmute yourself , but

59:37.550 --> 59:39.272
we're really thankful for your

59:39.272 --> 59:41.495
engagement here and folks could look up

59:41.495 --> 59:43.661
your email online or wherever , get in

59:43.661 --> 59:45.661
touch with you , I'm sure . Um , so

59:45.661 --> 59:47.883
thank you so much for coming in today .

59:47.883 --> 59:49.995
Thank you . Thank you for having me .

59:49.995 --> 59:51.939
Awesome questions , a lot of fun .

59:51.939 --> 59:52.360
Thank you . See you guys next week .

