WEBVTT

00:04.413 --> 00:09.400
- Okay, good afternoon, my
name is Christopher Brunett.

00:09.400 --> 00:11.610
I serve as the designated federal officer

00:11.610 --> 00:13.680
for the Defense Innovation Board.

00:13.680 --> 00:16.750
It is my role to open this
public listening session

00:16.750 --> 00:18.480
of the Science and Technology Subcommittee

00:18.480 --> 00:20.223
of the Defense Innovation Board.

00:21.220 --> 00:23.010
Thank you to Carnegie
Mellon University for

00:23.010 --> 00:25.523
hosting today's sessions.

00:26.610 --> 00:28.130
If you have not done so already,

00:28.130 --> 00:30.850
please silence your electronic devices.

00:30.850 --> 00:32.790
This session is part
of a Defense Innovation

00:32.790 --> 00:34.290
Board initiative called the Artificial

00:34.290 --> 00:36.350
Intelligence Principles Project.

00:36.350 --> 00:38.750
Today's session is being
recorded and live streamed

00:38.750 --> 00:41.910
to allow members of the
public to attend virtually.

00:41.910 --> 00:43.840
It will also be accessible on the

00:43.840 --> 00:46.953
Board's website innovation.defense.gov.

00:48.260 --> 00:50.310
Thank you to the Defense Media Activity

00:50.310 --> 00:53.150
for providing their expert
support to this event.

00:53.150 --> 00:57.210
Welcome to all of our in
person and virtual attendees.

00:57.210 --> 00:58.580
As we begin this public meeting,

00:58.580 --> 01:00.993
allow me to share a
few procedural remarks.

01:02.350 --> 01:05.700
This board is a discretionary
independent advisory board

01:05.700 --> 01:07.640
operated under the Federal Advisory

01:07.640 --> 01:11.023
Committee Act and the
Government Sunshine Act.

01:12.440 --> 01:14.920
Today's meeting was announced
in the Federal Register Notice

01:14.920 --> 01:18.530
posted Friday, February 15, 2019.

01:18.530 --> 01:20.480
There have been no significant
changes to the meeting's

01:20.480 --> 01:23.450
agenda as posted in the
Federal Register Notice.

01:23.450 --> 01:25.590
The public was invited to submit written

01:25.590 --> 01:28.520
comments for board members to consider.

01:28.520 --> 01:30.330
16 written comments were received

01:30.330 --> 01:32.010
in advance of today's session.

01:32.010 --> 01:34.020
These comments will be posted online

01:34.020 --> 01:36.290
with the minutes of this meeting.

01:36.290 --> 01:38.100
We welcome additional
written comments on a

01:38.100 --> 01:41.410
rolling basis which can be
submitted via our website.

01:41.410 --> 01:43.590
The primary purpose of
this session is to provide

01:43.590 --> 01:45.993
an opportunity for members
of the public to provide

01:45.993 --> 01:48.953
verbal comments to the
board subcommittee today.

01:48.953 --> 01:51.910
As a reminder, these are
comments to the board,

01:51.910 --> 01:54.210
not question and answer sessions.

01:54.210 --> 01:57.010
Board members may ask
clarifying questions.

01:57.010 --> 01:58.950
With that I now turn the
meeting over to the board's

01:58.950 --> 02:01.370
executive director Joshua
Marcuse for his opening

02:01.370 --> 02:03.670
remarks and introduction
of our board members.

02:11.520 --> 02:14.130
- Thanks Bruno and welcome
everyone and thank you

02:14.130 --> 02:16.870
to Carnegie Mellon
University for hosting us.

02:16.870 --> 02:19.725
This is our first official
public listening session

02:19.725 --> 02:22.960
for the A.I. Principles
Project and we could not be

02:22.960 --> 02:25.520
more excited to start this
off in one of the country's premier

02:25.520 --> 02:28.300
institutions for A.I.
research, development,

02:28.300 --> 02:30.630
thought leadership, and its application.

02:30.630 --> 02:32.950
More than a half century
ago, Carnegie Mellon's

02:32.950 --> 02:35.370
own Herb Simon and Allen Newell wrote

02:35.370 --> 02:37.810
the first artificial intelligence program.

02:37.810 --> 02:40.500
Since then, CMU has forged
a culture around using

02:40.500 --> 02:43.700
technology to solve real
problems which is why we believe

02:43.700 --> 02:47.280
CMU is the perfect host for
this sort of engagement.

02:47.280 --> 02:50.160
I'd like to introduce our
board members present today

02:50.160 --> 02:53.590
Dr. Missy Cummings, who is
the Professor of Engineering

02:53.590 --> 02:56.560
and Director of Human Autonomy
Lab at Duke University,

02:56.560 --> 02:58.760
Dr. Michael McQuade, the
Vice President of Research

02:58.760 --> 03:02.480
right here at CMU, Mr. Milo
Medin, the Vice President

03:02.480 --> 03:05.810
of Wireless Services at
Google, and Dr. Richard Murray,

03:05.810 --> 03:07.860
Professor of Control and Dynamical Systems

03:07.860 --> 03:09.960
and Bioengineering at Caltech.

03:09.960 --> 03:12.710
Dr. McQuade and Dr. Murray
are the co-chairs of the

03:12.710 --> 03:14.940
Defense Innovation Board's
Science and Technology

03:14.940 --> 03:17.890
Subcommittee and as such, they
are leading this initiative.

03:18.880 --> 03:22.050
I would like to give a brief
overview of this project

03:22.050 --> 03:25.050
and sort of how we all came
to be in this room together,

03:25.050 --> 03:28.100
to kind of set the context
for this conversation.

03:28.100 --> 03:31.520
Last July, the department
asked the board to undertake

03:31.520 --> 03:33.860
an effort to help establish
a set of artificial

03:33.860 --> 03:36.050
intelligence principles for defense.

03:36.050 --> 03:38.500
After a few months of planning
and some internal discussions

03:38.500 --> 03:42.030
the first weeks of 2019 saw
the board begin convening

03:42.030 --> 03:45.490
a mix of academics,
researchers, ethicists, lawyers,

03:45.490 --> 03:47.690
business executives, non-profit leaders,

03:47.690 --> 03:51.230
venture capitalists, policy
experts and a wide variety

03:51.230 --> 03:53.650
of other people in the A.I.
field to provide input to

03:53.650 --> 03:57.560
this process through a series
of round table conversations.

03:57.560 --> 04:01.030
We took care to include not
only experts who often work

04:01.030 --> 04:03.780
with the department,
but also A.I. skeptics,

04:03.780 --> 04:06.330
DOD critics, and leading
A.I. engineers who have

04:06.330 --> 04:08.330
never worked with the department before.

04:09.560 --> 04:12.200
There may be differences of
opinion among this diverse

04:12.200 --> 04:14.670
group since these matters
are controversial.

04:14.670 --> 04:17.780
We will not shy away from
disagreements as respectful

04:17.780 --> 04:20.543
and forthright dialogue
should lead to meaningful

04:20.543 --> 04:23.877
understanding on all sides
and a robust contest of ideas

04:23.877 --> 04:28.090
to generate new insights into
this question of how to set up

04:28.090 --> 04:30.940
an ethical framework for
the Department of Defense.

04:30.940 --> 04:33.860
DOD recognizes the need to
view A.I. differently than

04:33.860 --> 04:36.900
other technologies especially
on ethics and the imperative

04:36.900 --> 04:41.080
to get this right on how DOD
employs these technologies.

04:41.080 --> 04:43.410
Artificial intelligence not
only affects the men and women

04:43.410 --> 04:46.630
who serve in uniform in our
country, but societies around

04:46.630 --> 04:48.500
the world, and that's why the board

04:48.500 --> 04:51.610
is running a different process
than we typically do,

04:51.610 --> 04:54.980
to ensure that our process of
developing these principles

04:54.980 --> 04:57.920
is robust, inclusive, and transparent.

04:57.920 --> 05:00.280
We want everyone to take
part in this dialogue

05:00.280 --> 05:02.810
because these issues touch
everyone and that's why

05:02.810 --> 05:05.497
it's so important that all of
you are here today to join us

05:05.497 --> 05:07.820
and to those who are
watching online, or who come

05:07.820 --> 05:09.920
to watch this video in
the future, we appreciate

05:09.920 --> 05:13.280
your taking this role as
a citizen very seriously.

05:13.280 --> 05:15.880
Today's public listening
session is one element of this

05:15.880 --> 05:19.943
board's initiative and we hope
that this helps advance

05:19.943 --> 05:22.313
the board's dialogue.

05:24.680 --> 05:26.820
I wanna outline the flow of how the

05:26.820 --> 05:28.560
remaining few hours will unfold.

05:28.560 --> 05:30.700
In a moment, I'll ask Michael
McQuade to say a few words

05:30.700 --> 05:33.000
on behalf of the Defense Innovation Board

05:33.000 --> 05:35.430
and then we'll hear from
the Carnegie Mellon Provost

05:35.430 --> 05:38.480
Dr. Jim Garrett, someone we're
very lucky to have involved

05:38.480 --> 05:39.990
in this initiative and
we've had the pleasure

05:39.990 --> 05:42.330
of interacting with over
the last couple of days.

05:42.330 --> 05:45.180
After Jim's remarks, we will
move to the public comments

05:45.180 --> 05:47.740
for the bulk of the time when
audience members can address

05:47.740 --> 05:50.038
the board and I know this is the part

05:50.038 --> 05:51.685
that you are all here to see.

05:51.685 --> 05:53.746
So when we get to that point, I'll explain

05:53.746 --> 05:55.650
how the public comments
will proceed in more detail.

05:55.650 --> 05:58.030
If you haven't already
submitted a comment online,

05:58.030 --> 06:00.777
and again Bruno said we
have received 16 of those,

06:00.777 --> 06:04.530
when you RSVP, I'll ask you
to take one of these comment

06:04.530 --> 06:06.320
cards and give them to one of my

06:06.320 --> 06:08.250
colleagues who will be in the aisles.

06:08.250 --> 06:10.188
This is the way in which you
essentially raise your hand

06:10.188 --> 06:12.720
and so we really hope that all of you,

06:12.720 --> 06:14.070
if you haven't taken a comment card,

06:14.070 --> 06:15.560
will do so, because we would love to hear

06:15.560 --> 06:18.200
from as many of you as we have time for.

06:18.200 --> 06:19.710
Just before the public comments begin,

06:19.710 --> 06:22.855
we'll collect those cards and
here's the important part, we

06:22.855 --> 06:25.830
really are going to limit every
commenter to five minutes.

06:25.830 --> 06:28.490
It's perfectly acceptable
for you to do 20 seconds, but

06:28.490 --> 06:31.480
it is not acceptable to do
five minutes and 20 seconds.

06:31.480 --> 06:34.763
We do this to be fair, so
we can hear from everyone,

06:34.763 --> 06:37.020
so I'm going to tap on the microphone

06:37.020 --> 06:38.650
to let you know when
you have one minute left

06:38.650 --> 06:40.820
and when you have no minutes
left, I'm going to be

06:40.820 --> 06:43.070
as polite as possible
in making you sit down.

06:44.198 --> 06:47.810
We also have written comments
and so for those of you

06:47.810 --> 06:49.920
that submitted written comments
that aren't physically here,

06:49.920 --> 06:51.950
I've asked Bruno to read those comments

06:51.950 --> 06:53.930
because I think it's very
important that those public

06:53.930 --> 06:57.450
comments be heard by everyone
as well as the board members.

06:57.450 --> 06:59.920
So luckily Bruno has agreed
to read them so you won't have

06:59.920 --> 07:02.490
to listen to my voice that much longer.

07:02.490 --> 07:04.090
And now I will turn things over

07:04.090 --> 07:06.400
to Dr. McQuade for a few words.

07:06.400 --> 07:10.111
- Thank you Josh, let me
also welcome everybody here

07:10.111 --> 07:13.300
with two hats on, one as a CMU employee,

07:13.300 --> 07:16.410
and one as a member of the
Defense Innovation Board.

07:16.410 --> 07:19.240
The Defense Innovation
Board was started under

07:19.240 --> 07:22.450
Secretary Carter with
an objective to bring

07:22.450 --> 07:25.840
an outside view around topics
of innovation, culture,

07:25.840 --> 07:28.931
technology, to the department,
to find ways to help

07:28.931 --> 07:32.925
the department, if possible,
improve the way it operates

07:32.925 --> 07:36.220
and achieves its mission for the country.

07:36.220 --> 07:38.637
We have over the last two years,

07:38.637 --> 07:41.320
had multiple engagements
with the department.

07:41.320 --> 07:44.120
They have ranged from examining the way

07:44.120 --> 07:46.450
the department does software,

07:46.450 --> 07:49.670
including both acquisition
and execution of software.

07:49.670 --> 07:51.890
We have also looked
substantially at issues

07:51.890 --> 07:55.040
around workforce and workforce deployment.

07:55.040 --> 07:58.370
And we have had a number
of examinations

07:58.370 --> 08:00.690
around the technical
capacities of the department.

08:00.690 --> 08:03.510
So there's a fairly
broad remit and through

08:03.510 --> 08:06.475
all of those discussions,
the overarching implications

08:06.475 --> 08:10.787
of A.I. as a broad-based
technology have been front

08:10.787 --> 08:13.200
and center for quite some time.

08:13.200 --> 08:14.637
And for those of you who
follow the department,

08:14.637 --> 08:17.535
you will see that there are
a large number of initiatives

08:17.535 --> 08:20.630
over the last year or 18
months relative to the way A.I.

08:20.630 --> 08:24.190
is being examined and rolled
out including some activity

08:24.190 --> 08:26.863
here at CMU, substantial
activity here at CMU.

08:27.870 --> 08:30.470
In the process of that
discussion around A.I.,

08:30.470 --> 08:32.764
both on the Defense Innovation
Board with my colleagues

08:32.764 --> 08:36.116
and with the broader
community, we have constantly

08:36.116 --> 08:39.568
had in the middle of that
conversation that it is not just

08:39.568 --> 08:43.020
a technology, it is not
just a fundamental science

08:43.020 --> 08:45.630
or mathematics or however you
want to characterize A.I.,

08:45.630 --> 08:48.690
it is also a social element of the way

08:48.690 --> 08:50.780
the department
and society operate.

08:50.780 --> 08:54.240
And therefore carries
with it both obligations

08:54.240 --> 08:56.670
and responsibilities around ethics

08:56.670 --> 08:58.987
and the ethical use in
an environment where the

08:58.987 --> 09:02.320
department is trying to
prosecute its mission.

09:02.320 --> 09:03.700
Josh gave you a little
bit of background about

09:03.700 --> 09:06.020
why we're here and how that got set up.

09:06.020 --> 09:08.340
I would simply end by
saying the following,

09:08.340 --> 09:11.200
that in this day and age,
and with a technology

09:11.200 --> 09:15.580
as broadly implicative as A.I.,
it is absolutely necessary

09:15.580 --> 09:18.567
to encourage broad public
dialogue on the subject.

09:18.567 --> 09:21.400
I would also say it is,
in this day and age,

09:21.400 --> 09:23.670
even more important to
encourage respectful

09:23.670 --> 09:25.610
public dialogue in that context.

09:25.610 --> 09:29.400
So everybody who is
willing to speak today,

09:29.400 --> 09:31.740
we want to hear what you have to say.

09:31.740 --> 09:33.530
I would just ask us
all to be respectful in

09:33.530 --> 09:34.870
the way we communicate that message

09:34.870 --> 09:38.767
and please be as open and
evocative for us as possible

09:38.767 --> 09:41.190
because at the end of the day,
we are trying to represent

09:41.190 --> 09:46.190
the broad set of issues and
consequences and discussion

09:46.210 --> 09:49.160
around ethics of A.I. so thank
you very much for being here.

09:49.160 --> 09:51.364
We really do appreciate
hearing from everybody

09:51.364 --> 09:56.240
and thanks to all of my colleagues
for coming to CMU, Josh?

09:56.240 --> 09:58.330
- Excellent, many of you know Jim as

09:58.330 --> 10:01.220
a pillar of the CMU community,
a three time graduate

10:01.220 --> 10:03.680
of the university and a
long time professor of civil

10:03.680 --> 10:06.830
and environmental engineering,
having also served in a

10:06.830 --> 10:10.290
variety of leadership roles at
CMU before becoming provost.

10:10.290 --> 10:13.060
He has a strong background
in sensors, data analytics,

10:13.060 --> 10:14.590
and a unique blend of technical

10:14.590 --> 10:17.060
research and leadership experience.

10:17.060 --> 10:20.030
It's my great pleasure to
welcome him to the podium.

10:25.690 --> 10:29.150
- Thank you Josh, and good
afternoon to all of you.

10:29.150 --> 10:32.210
I'm Provost Jim Garrett, it's
my pleasure to welcome you,

10:32.210 --> 10:35.700
those of you here in person
as well as those of you online

10:35.700 --> 10:39.060
on the live stream to this
afternoon's public listening

10:39.060 --> 10:42.580
session with the Defense Innovation Board.

10:42.580 --> 10:45.000
I hope that prior to today's session,

10:45.000 --> 10:48.410
you had the opportunity to
learn about the amazing work

10:48.410 --> 10:51.810
produced by Carnegie Mellon
for artificial intelligence

10:51.810 --> 10:55.840
strategies for a host
of important industries.

10:55.840 --> 10:59.330
In fact our university has
held a place at the forefront

10:59.330 --> 11:03.000
of innovation and emerging
technologies for decades,

11:03.000 --> 11:06.280
globally and locally right
here in Pittsburgh as an

11:06.280 --> 11:10.540
essential component of our
city's own tech revolution.

11:10.540 --> 11:13.310
We're proud that this
recognition is one of CMU's

11:13.310 --> 11:17.000
signature strengths
alongside our deep tradition

11:17.000 --> 11:19.930
of infusing collaboration
and interdisciplinary

11:19.930 --> 11:23.620
academics that expand the
horizons of knowledge.

11:23.620 --> 11:27.060
Every day our students,
faculty and researchers

11:27.060 --> 11:29.770
work collaboratively across disciplines

11:29.770 --> 11:33.680
and share their curiosity,
knowledge and discoveries

11:33.680 --> 11:37.280
with each other to innovate
and to inspire each other.

11:37.280 --> 11:40.690
Even more broadly, I believe
that as a university,

11:40.690 --> 11:44.480
our role is to focus not
just internally on the

11:44.480 --> 11:47.930
impact of technologies and
scientific research produced

11:47.930 --> 11:50.890
by the Carnegie Mellon
University community,

11:50.890 --> 11:53.410
it's equally vital that we bring outside

11:53.410 --> 11:56.950
perspectives and voices to our campus.

11:56.950 --> 12:00.030
By providing platforms such
as today's public listening

12:00.030 --> 12:03.820
session, we live up to the
respect and world class

12:03.820 --> 12:06.880
reputation that we've
earned as a university.

12:06.880 --> 12:10.029
We help to lead conversations
that explore new ways

12:10.029 --> 12:13.350
of thinking and ponder
questions that make us

12:13.350 --> 12:15.713
evolve in the work that we do.

12:16.710 --> 12:19.210
Today's listening session
surrounding the ethical

12:19.210 --> 12:21.860
and responsible use of
artificial intelligence

12:21.860 --> 12:24.955
is a prime example of
how our university also

12:24.955 --> 12:29.120
values freedom of expression,
especially as it relates

12:29.120 --> 12:32.420
to the accountable ways
that we're using the dynamic

12:32.420 --> 12:36.320
technology, this dynamic technology.

12:36.320 --> 12:39.160
This sense of accountability,
as well as the societal

12:39.160 --> 12:42.580
implications and artificial
intelligence's impact

12:42.580 --> 12:46.680
on the workforce are also topics
that we teach our students

12:46.680 --> 12:48.780
in a number of exciting and relevant

12:48.780 --> 12:51.650
academic programs in this space.

12:51.650 --> 12:54.080
We're proud that you chose Carnegie Mellon

12:54.080 --> 12:57.150
to facilitate these important discussions.

12:57.150 --> 12:59.850
Our university, as was
pointed out by Josh,

12:59.850 --> 13:02.350
is the birthplace for
artificial intelligence,

13:02.350 --> 13:05.400
so I can't think of a more
fitting place to examine

13:05.400 --> 13:07.780
the guiding principles for the technology

13:07.780 --> 13:09.960
that is revolutionizing the world

13:09.960 --> 13:12.633
and what solutions
we're trying to explore.

13:14.480 --> 13:16.100
Together all of us have a tremendous

13:16.100 --> 13:18.770
responsibility to do the right thing.

13:18.770 --> 13:21.560
I hope that the DIB members
and all of our distinguished

13:21.560 --> 13:25.010
guests visiting today will
benefit greatly from different

13:25.010 --> 13:28.310
perspectives that will be
shared and are inspired through

13:28.310 --> 13:30.980
Carnegie Mellon's leadership
towards discovering common

13:30.980 --> 13:34.350
objectives as global technology leaders.

13:34.350 --> 13:36.730
Now please join me in
welcoming back to the podium,

13:36.730 --> 13:38.910
Josh Marcuse who will kick off

13:38.910 --> 13:41.110
today's public listening
session, thank you.

13:42.226 --> 13:45.226
(audience applauds)

13:48.960 --> 13:50.240
- Excellent, thank you very much Jim,

13:50.240 --> 13:52.340
and thank you again for hosting
us here we're very happy

13:52.340 --> 13:54.330
to be here and had great, great

13:54.330 --> 13:56.030
sessions earlier today with faculty.

13:56.030 --> 14:00.000
So I thought a good way to
get this conversation started

14:00.000 --> 14:04.940
was to offer a very brief primer
of I think three key ideas

14:04.940 --> 14:07.659
that undergird this
conversation about ethics.

14:07.659 --> 14:10.490
Let me just run through a
few things that I think will

14:10.490 --> 14:13.790
help inform the audience
and the public about what

14:13.790 --> 14:17.060
the department's current
approach to this issue is.

14:17.060 --> 14:19.670
In a memo to all DOD personnel last month,

14:19.670 --> 14:22.790
on leading with an ethics
mindset, the acting

14:22.790 --> 14:26.600
Secretary of Defense Shanahan
observed that a key component

14:26.600 --> 14:30.080
of leadership is
reinforcing ethical behavior

14:30.080 --> 14:31.970
across the full spectrum of our work

14:31.970 --> 14:35.330
and recognizing ethics
principles as the foundation

14:35.330 --> 14:38.870
upon which we make sound,
informed decisions.

14:38.870 --> 14:41.740
Ethics are indeed foundational
to DOD's responsible use

14:41.740 --> 14:44.320
of artificial intelligence
and we believe that

14:44.320 --> 14:46.520
the department can show
technical leadership

14:46.520 --> 14:49.640
and moral leadership in
the world on this issue.

14:49.640 --> 14:52.180
Secretary Shanahan's insight
has been that the approach

14:52.180 --> 14:56.380
that DOD has also sought in
all of its other adaptations

14:56.380 --> 14:59.150
to emerging technology can
be applied to artificial

14:59.150 --> 15:01.590
intelligence and that's
why DOD leaders asked the

15:01.590 --> 15:04.181
Defense Innovation Board to
develop an ethical framework

15:04.181 --> 15:07.500
for the adoption of A.I.
on the very same day it

15:07.500 --> 15:10.840
announced the establishment
of the Joint A.I. Center.

15:10.840 --> 15:13.160
That request was the basis
for the board launching

15:13.160 --> 15:15.570
the A.I. Principles Project about which

15:15.570 --> 15:17.870
this meeting is a crucial part.

15:17.870 --> 15:20.100
As a foundation for the
discussion I want to cover

15:20.100 --> 15:22.960
three ideas that I will
only very briefly introduce.

15:22.960 --> 15:25.840
First, I'm gonna discuss
how the department believes

15:25.840 --> 15:30.030
the law of war applies to DOD's
use of A.I. particularly

15:30.030 --> 15:32.710
in military operations, though
I would like to point out,

15:32.710 --> 15:35.650
military operations are
probably a small fraction

15:35.650 --> 15:37.180
of the situations in which the department

15:37.180 --> 15:39.730
might choose to use
artificial intelligence.

15:39.730 --> 15:42.450
Second, what the department's
current policy is

15:42.450 --> 15:45.370
on autonomy and weapons systems,
because of the important

15:45.370 --> 15:48.030
intersection between autonomy
and artificial intelligence.

15:48.030 --> 15:50.420
And third, what the
department's newly released

15:50.420 --> 15:53.770
artificial intelligence strategy
says about A.I. and ethics.

15:53.770 --> 15:55.030
For those of you who
don't know, the department

15:55.030 --> 15:57.180
really just launched only about

15:57.180 --> 16:00.216
a month ago, its first A.I. strategy.

16:00.216 --> 16:02.940
These are exciting new
developments in the field of A.I.,

16:02.940 --> 16:06.270
like many emerging
technologies that preceded it,

16:06.270 --> 16:07.560
but the department's commitment to

16:07.560 --> 16:10.320
our values and the rule
of law is enduring.

16:10.320 --> 16:12.740
To put it another way,
the addition of A.I.

16:12.740 --> 16:15.260
to any existing system process or product,

16:15.260 --> 16:16.750
doesn't diminish the department's

16:16.750 --> 16:19.710
commitment to abide by the law of war.

16:19.710 --> 16:21.430
So what really is the law of war?

16:21.430 --> 16:23.860
The law of war is a body
of international law

16:23.860 --> 16:27.000
specifically adapted to
the conduct of warfare.

16:27.000 --> 16:30.270
For the United States, this
body of law includes treaties

16:30.270 --> 16:32.240
the United States has accepted such as

16:32.240 --> 16:35.811
the 1949 Geneva Conventions
and customary international law

16:35.811 --> 16:38.860
which results from the general
and consistent practice

16:38.860 --> 16:42.360
of states done out of a
sense of legal obligation.

16:42.360 --> 16:44.960
There are five fundamental principles

16:44.960 --> 16:46.900
that form the foundation for the law of war, which

16:46.900 --> 16:50.253
is explained in DOD's
Official Law of War Manual.

16:51.439 --> 16:52.859
And I'm just going to very briefly run

16:52.859 --> 16:54.586
through what these five fundamentals are.

16:54.586 --> 16:57.100
First, military necessity
justifies the use

16:57.100 --> 16:59.520
of all measures needed to defeat the enemy

16:59.520 --> 17:01.890
as quickly and efficiently as possible,

17:01.890 --> 17:04.013
that are not prohibited by the law of war.

17:04.870 --> 17:07.700
Humanity forbids the
infliction of suffering,

17:07.700 --> 17:10.300
injury, or destruction unnecessary

17:10.300 --> 17:13.690
to accomplish a legitimate
military purpose.

17:13.690 --> 17:16.040
Proportionality means that even where

17:16.040 --> 17:18.648
one is justified in
acting, one must not act

17:18.648 --> 17:21.483
in a way that is
unreasonable or excessive.

17:22.710 --> 17:26.240
Distinction obliges parties to a conflict

17:26.240 --> 17:29.430
to distinguish principally
between the armed forces

17:29.430 --> 17:31.250
and the civilian population and

17:31.250 --> 17:34.780
between unprotected and protected objects.

17:34.780 --> 17:37.880
Honor demands a certain
amount of fairness in offense

17:38.852 --> 17:40.496
and defense and a certain mutual

17:40.496 --> 17:42.546
respect between opposing military forces.

17:43.600 --> 17:45.847
The key points to understand
about the intersection of

17:45.847 --> 17:49.440
the law of war and A.I. I
would say are the following.

17:49.440 --> 17:53.010
First, international law of
war provides a well established

17:53.010 --> 17:55.860
body of law to address
the legality of conduct

17:55.860 --> 17:58.203
in the context of armed conflict.

17:59.410 --> 18:01.545
Second, existing law of war rules apply

18:01.545 --> 18:04.320
when new technologies, such as new types

18:04.320 --> 18:07.490
of artificial intelligence
are used in armed conflict.

18:07.490 --> 18:10.340
Third, the fundamental
principles of the law of war

18:10.340 --> 18:13.160
provide a general guide
for the conduct of war

18:13.160 --> 18:15.600
where no more specific rule applies,

18:15.600 --> 18:18.690
and thus provide a framework
to consider novel legal and

18:18.690 --> 18:23.640
ethical issues posed by emerging
technologies such as A.I.

18:23.640 --> 18:26.790
DOD has a robust process
to implement the law of war

18:26.790 --> 18:29.380
including training,
regulations, and procedures,

18:29.380 --> 18:32.510
the reporting of incidents
of alleged violations,

18:32.510 --> 18:34.680
investigations and reviews of incidents

18:34.680 --> 18:37.460
and appropriate corrective actions.

18:37.460 --> 18:40.470
DOD lawyers are engaged in
efforts to articulate how

18:40.470 --> 18:43.090
existing law of war
principles apply to emerging

18:43.090 --> 18:45.430
technologies such as
artificial intelligence.

18:45.430 --> 18:48.230
And last, and conversely,
experts are engaged in

18:48.230 --> 18:51.610
understanding how or if
the potential of something

18:51.610 --> 18:54.300
like an emerging technology
such as artificial intelligence

18:54.300 --> 18:57.770
precipitates strategic or
tactical questions about which

18:57.770 --> 19:00.590
current law of war is not yet developed.

19:00.590 --> 19:02.360
And that is the reason that we're here,

19:02.360 --> 19:04.700
because we are asking
that question and hence

19:04.700 --> 19:06.540
our listening session is designed to aid

19:06.540 --> 19:09.930
the department in
undertaking that process.

19:09.930 --> 19:12.970
So with that, let me move to
the second area, which is what

19:12.970 --> 19:16.460
is the current policy on
autonomy and weapons systems?

19:16.460 --> 19:19.070
The department has issued
specific guidance on autonomy

19:19.070 --> 19:21.677
in weapons systems, which
is not synonymous with

19:21.677 --> 19:24.850
artificial intelligence, but
is clearly related to it.

19:24.850 --> 19:29.350
And that document is known
as DOD directive 3000.09

19:29.350 --> 19:32.440
which was signed on November 21, 2012

19:32.440 --> 19:36.226
and then was reissued with
minor revisions in 2017.

19:36.226 --> 19:39.010
And I would just say, this
is a very public document,

19:39.010 --> 19:40.520
you can all find it if you google it

19:40.520 --> 19:42.140
and it's very short and very clear

19:42.140 --> 19:43.720
and I encourage you to read it.

19:43.720 --> 19:46.115
The purpose of this document is that

19:46.115 --> 19:47.170
it establishes three key ideas.

19:47.170 --> 19:50.200
The first is, and I'm gonna
quote directly from the policy,

19:50.200 --> 19:52.920
it says it establishes
DOD policy and assigns

19:52.920 --> 19:54.630
responsibilities for the development

19:54.630 --> 19:57.870
and use of autonomous and
semi-autonomous functions in a

19:57.870 --> 20:01.970
weapons system including
manned and unmanned platforms.

20:01.970 --> 20:04.740
And this document also
establishes guidelines designed

20:04.740 --> 20:07.717
to minimize the probability
and consequences of failures

20:07.717 --> 20:10.690
in autonomous and
semi-autonomous weapons systems

20:10.690 --> 20:13.183
that can lead to unintended engagements.

20:14.056 --> 20:18.440
So basically there are four
things that this document says,

20:18.440 --> 20:21.020
and again I'm just gonna
give four short excerpts here

20:21.020 --> 20:22.560
that basically give you the general idea

20:22.560 --> 20:24.980
of what the policy says to the department.

20:24.980 --> 20:27.880
First, it is DOD policy that an autonomous

20:27.880 --> 20:30.726
and semi-autonomous weapons
system shall be designed

20:30.726 --> 20:34.960
to allow commanders and
operators to exercise appropriate

20:34.960 --> 20:39.197
levels of human judgment
over the use of force.

20:39.197 --> 20:42.260
Second, it says systems
will go through rigorous

20:42.260 --> 20:45.410
hardware and software
verification and validation

20:45.410 --> 20:47.630
and realistic system developmental

20:47.630 --> 20:49.763
and operational test and evaluation.

20:51.020 --> 20:55.380
Third, persons who authorize
the use of, direct the use of,

20:55.380 --> 20:59.310
or operate the autonomous or
semi-autonomous weapons systems

20:59.310 --> 21:02.370
must do so with appropriate
care and in accordance

21:02.370 --> 21:05.740
with the law of war, applicable
treaties, weapons systems

21:05.740 --> 21:09.000
safety rules and applicable
rules of engagement.

21:09.000 --> 21:11.990
And last, autonomous or
semi-autonomous weapons systems

21:11.990 --> 21:15.230
intended to be used in a
manner that falls outside

21:15.230 --> 21:17.660
of these policies must be approved by the

21:17.660 --> 21:19.720
Undersecretary of Defense for Policy,

21:19.720 --> 21:22.811
the Undersecretary of Defense
for Acquisition Technology

21:22.811 --> 21:25.226
and Logistics, and the
Chairman of the Joint Chiefs of

21:25.226 --> 21:27.850
Staff before formal development
and again before fielding.

21:27.850 --> 21:29.750
And just for a little added context,

21:29.750 --> 21:31.430
those three individuals named are

21:31.430 --> 21:34.080
three of the most senior
officials in the department.

21:35.210 --> 21:37.740
So the third area I wanna
cover, and this is almost,

21:37.740 --> 21:39.420
you know you could say breaking news here,

21:39.420 --> 21:42.830
is what the A.I. strategy
has to say about this topic.

21:42.830 --> 21:45.660
So this recently released
and unclassified summary

21:45.660 --> 21:47.840
of this strategy and also
it's public, so please

21:47.840 --> 21:50.775
feel free to look it up,
and there's a section in it,

21:50.775 --> 21:53.360
one of the five pillars
of the strategy, which is

21:53.360 --> 21:56.370
leading in military
ethics and A.I. safety.

21:56.370 --> 21:58.944
And so I just wanna read
a short quote directly

21:58.944 --> 22:01.840
from the strategy on that
subject in that pillar

22:01.840 --> 22:03.590
about leading in ethics and safety.

22:04.466 --> 22:06.770
It reads, the department
will articulate its vision

22:06.770 --> 22:09.541
and guiding principles
for using A.I. in a lawful

22:09.541 --> 22:12.505
and ethical manner to promote our values.

22:12.505 --> 22:17.210
We will consult with leaders
from across academia,

22:17.210 --> 22:19.300
private industry, and the
international community

22:19.300 --> 22:23.570
to advance A.I. ethics and
safety in a military context.

22:23.570 --> 22:25.830
We will invest in the
research and development

22:25.830 --> 22:27.960
of A.I. systems that are resilient,

22:27.960 --> 22:31.030
robust, reliable, and secure.

22:31.030 --> 22:33.290
We will continue to fund
research into techniques

22:33.290 --> 22:35.700
that will produce more explainable A.I.,

22:35.700 --> 22:38.408
and we will pioneer approaches for A.I.

22:38.408 --> 22:40.700
test evaluation,
verification and validation.

22:40.700 --> 22:43.020
We will also seek
opportunities to use A.I.

22:43.020 --> 22:46.122
to reduce unintentional
harm and collateral damage

22:46.122 --> 22:48.490
via increased situational awareness

22:48.490 --> 22:50.950
and enhanced decision support.

22:50.950 --> 22:53.240
As we improve the technology
and our use of it,

22:53.240 --> 22:55.290
we will continue to share our aims,

22:55.290 --> 22:57.903
ethical guidelines and safety
procedures to encourage

22:57.903 --> 23:01.720
responsible A.I. development
and use by other nations.

23:01.720 --> 23:04.670
Again that strategy was signed
by the Secretary of Defense.

23:05.630 --> 23:07.380
So the last thing I wanna cover is what

23:07.380 --> 23:09.920
else is in that strategy about ethics.

23:09.920 --> 23:12.988
So there is a section about
a couple pages directly

23:12.988 --> 23:16.110
focused on this issue and
I just wanna basically say

23:16.110 --> 23:18.410
the six things that the department has

23:18.410 --> 23:20.870
committed to as part of A.I. strategy.

23:20.870 --> 23:23.160
First, developing A.I.
principles for defense

23:23.160 --> 23:25.090
again which is what we're here to do.

23:25.090 --> 23:27.250
Second, investing in
research and development

23:27.250 --> 23:30.035
for resilient, robust,
reliable and secure A.I.

23:30.035 --> 23:33.190
Third, continuing to fund
research to understand

23:33.190 --> 23:35.637
A.I. driven decisions and actions.

23:35.637 --> 23:38.664
Fourth, promoting transparency
in A.I. research.

23:38.664 --> 23:41.160
Fifth, advocating for a global

23:41.160 --> 23:43.700
set of military A.I. guidelines.

23:43.700 --> 23:46.130
And sixth, using A.I. to reduce the risk

23:46.130 --> 23:49.486
of civilian casualties and
other collateral damage.

23:49.486 --> 23:52.620
And we will post that on our website,

23:52.620 --> 23:54.500
so if any of you found that helpful,

23:54.500 --> 23:57.200
which I hope you did, you
can use that as a reference.

23:58.060 --> 24:00.670
And now we get to the real action,

24:00.670 --> 24:03.454
which is where you guys get to contribute.

24:03.454 --> 24:04.287
- [Man] Joshua?

24:04.287 --> 24:05.120
- Yeah.

24:05.120 --> 24:08.970
- 1:59 and such, 3/14 1:59, so I just like

24:08.970 --> 24:12.672
to recognize pi day today
at this particular time, so.

24:12.672 --> 24:15.422
(audience claps)

24:18.772 --> 24:20.920
- It's good to know that you all can

24:20.920 --> 24:22.874
feel at home with the board members.

24:22.874 --> 24:24.135
(laughter)

24:24.135 --> 24:25.820
All members of the same tribe apparently.

24:25.820 --> 24:27.620
Excellent, so now we will
hear from the audience

24:27.620 --> 24:31.380
for the rest of our allotted
time and many of you

24:31.380 --> 24:33.470
submitted your comments
to us online or RSVP'd,

24:33.470 --> 24:35.170
we'll start with those who have done so.

24:35.170 --> 24:37.358
If we get through those comments,

24:37.358 --> 24:38.463
then we'll use the comment cards.

24:42.137 --> 24:45.360
I've already explained all
this, we're just gonna get,

24:45.360 --> 24:47.734
let's just get straight to it for now,

24:47.734 --> 24:51.023
I think that's great, so perfect, alright.

24:52.819 --> 24:57.590
So the first, first comment we received

24:57.590 --> 25:00.750
was from Ellie Niewood, is Ellie here?

25:00.750 --> 25:05.750
Perfect, thank you, and after Ellie

25:06.180 --> 25:09.073
is William Powers here, William?

25:10.100 --> 25:13.680
Okay over to you Ellie

25:15.623 --> 25:17.391
- Should I, well--

25:17.391 --> 25:18.224
- Can I give them the floor?

25:18.224 --> 25:19.057
- Sure.

25:19.057 --> 25:20.610
- Ellie, over to you.

25:20.610 --> 25:21.860
- Hi, I'm Ellie Niewood, I'm here

25:21.860 --> 25:23.190
from the MITRE Corporation.

25:23.190 --> 25:24.887
So first of all, thank you to the board

25:24.887 --> 25:27.640
and to Josh for the
opportunity to sort of briefly

25:27.640 --> 25:30.560
touch on how we see ethical considerations

25:30.560 --> 25:34.140
for military application
of artificial intelligence.

25:34.140 --> 25:37.713
At MITRE we are very committed
both to ethical approaches

25:37.713 --> 25:41.770
to modern warfare, but also
to enabling our service men

25:41.770 --> 25:44.190
and women to have at hand sort
of the best technology that

25:44.190 --> 25:47.280
they need both for the mission
and for their own protection.

25:47.280 --> 25:50.940
And I think clearly, as
Josh has articulated,

25:50.940 --> 25:53.030
artificial intelligence impacts both

25:53.030 --> 25:55.370
of those commitments to great extent.

25:55.370 --> 25:59.220
A.I. is a key emerging
technology that we think can help

25:59.220 --> 26:02.602
the joint force fight
and win in future wars,

26:02.602 --> 26:05.240
yet at the same time, I think it's clear

26:05.240 --> 26:08.880
that DOD has struggled to
field relevant capabilities

26:08.880 --> 26:11.178
in leveraging this technology.

26:11.178 --> 26:12.160
And I think that's for
a number of reasons.

26:12.160 --> 26:15.220
One is because I think
a lot of the development

26:15.220 --> 26:17.530
in this technology comes from
areas outside

26:17.530 --> 26:20.690
the traditional laboratories
and companies where DOD

26:20.690 --> 26:23.160
has gotten its capability.
Some of it I think revolves

26:23.160 --> 26:26.460
around challenges associated
with dirty data sets,

26:26.460 --> 26:28.580
with complex system
dynamics, but I think some

26:28.580 --> 26:31.330
of it revolves around
concerns about how to use

26:31.330 --> 26:34.940
this technology in an
ethical and open way.

26:34.940 --> 26:37.080
And so clearly, that needs to be

26:37.080 --> 26:39.020
accounted for to move forward.

26:39.020 --> 26:41.535
At the same time, you know
from an ethical perspective,

26:41.535 --> 26:44.480
we believe A.I. is similar
to a host of technologies

26:44.480 --> 26:47.170
that have preceded it
and have been fielded

26:47.170 --> 26:50.173
and used in ways that have
been ethical and we believe

26:50.173 --> 26:54.690
that integrating A.I. into
military systems and operations,

26:54.690 --> 26:57.060
again as Josh alluded to,
as the strategy alluded to,

26:57.060 --> 27:00.020
in many ways we think can help
reduce civilian casualties,

27:00.020 --> 27:01.300
while at the same time, providing

27:01.300 --> 27:04.320
a critical military
advantage to our forces.

27:04.320 --> 27:07.490
Just as an example, if
you take claymore mines,

27:07.490 --> 27:10.410
right, remotely triggered
anti-personnel devices,

27:10.410 --> 27:12.970
that were not banned
by the Ottawa Convention,

27:12.970 --> 27:15.350
were used heavily for example in Vietnam.

27:15.350 --> 27:18.164
What if a device like that
had a sensor that allowed,

27:18.164 --> 27:21.360
you know, you to determine
before detonation

27:21.360 --> 27:24.420
if the target was adult-sized
or was carrying a weapon.

27:24.420 --> 27:27.338
Take the, you know the
downing, tragic downing in 1988

27:27.338 --> 27:31.064
of the Iranian airliner
by the USS Vincennes.

27:31.064 --> 27:35.270
The crew of that ship was
forced you know to take action,

27:35.270 --> 27:37.150
really make a split second decision

27:37.150 --> 27:40.370
about whether or not to
engage an unknown aircraft,

27:40.370 --> 27:41.860
before they fired that missile.

27:41.860 --> 27:44.010
If they had had an A.I.-based seeker

27:44.010 --> 27:46.650
on that missile that could
have distinguished, after being

27:46.650 --> 27:49.758
fired, whether the aircraft
was a civilian airliner.

27:49.758 --> 27:52.262
You know we think that would have,

27:52.262 --> 27:54.920
you know helped save lives in that case.

27:54.920 --> 27:57.030
These examples I think highlight
two of the three points

27:57.030 --> 27:59.607
we'd like to bring to your attention,

27:59.607 --> 28:02.246
you know as you move forward
in coming up with principles.

28:02.246 --> 28:05.150
The first point is that A.I.
does not fundamentally change

28:05.150 --> 28:07.840
we believe the way that we
employ advanced weapons.

28:07.840 --> 28:10.172
Many of the weapons in our inventory today

28:10.172 --> 28:13.057
already select and aim,
or home in on a target

28:13.057 --> 28:17.090
within some given set of
constraints after they're fired.

28:17.090 --> 28:19.760
The Tomahawk cruise missile
uses seekers and guidance

28:19.760 --> 28:23.770
algorithms which correlate
the terrain to digital maps.

28:23.770 --> 28:26.534
There are air to air missiles
that lock on after launch so

28:26.534 --> 28:29.870
that the pilot fires them with
some expectation of what they

28:29.870 --> 28:32.880
will engage, but without knowing
for sure what it will do.

28:32.880 --> 28:36.140
All of these weapons make
autonomous or semi-autonomous

28:36.140 --> 28:38.880
decisions about where
they go or what to do

28:38.880 --> 28:42.960
once a human has decided to
go ahead and launch them.

28:42.960 --> 28:45.880
With A.I. technologies, we
clearly have less visibility,

28:45.880 --> 28:48.690
real time, into what
the weapon will decide,

28:48.690 --> 28:51.050
we may have more difficulty
testing the weapon because

28:51.050 --> 28:53.400
of the complexity of
what the A.I. will do,

28:53.400 --> 28:55.820
but at a fundamental level
the human, you know,

28:55.820 --> 28:57.810
is giving up control and decision making

28:57.810 --> 29:00.910
in ways that are consistent
with existing weapons.

29:00.910 --> 29:03.380
And that launch decision,
with or without A.I.

29:03.380 --> 29:06.306
inside the weapon
obviously needs to be done

29:06.306 --> 29:08.547
in an ethical way that balances the risks

29:08.547 --> 29:10.340
to others with the risks
to the war fighters.

29:10.340 --> 29:13.400
And that's been true for a long time.

29:13.400 --> 29:16.242
A second point that I'd
like to make highlights

29:16.242 --> 29:18.782
that, you know, the human is not
the ideal decision maker.

29:18.782 --> 29:19.615
I think that example with the

29:20.887 --> 29:22.572
USS Vincennes makes that clear.

29:22.572 --> 29:24.341
According to some reports,
the Aegis weapons system

29:24.341 --> 29:26.660
on that cruiser knew that there
was a civilian transponder

29:26.660 --> 29:28.810
code being squawked by the
airliner, but the humans

29:28.810 --> 29:31.066
did not have time to
take that into account

29:31.066 --> 29:33.860
as they were making, as
they were making a decision.

29:33.860 --> 29:36.920
Used properly, A.I.
technology we believe can lead

29:36.920 --> 29:39.768
to better decision making,
lead to reductions in errors,

29:39.768 --> 29:42.220
resulting in less collateral damage,

29:42.220 --> 29:45.820
resulting in fewer unnecessary
civilian casualties.

29:45.820 --> 29:47.805
Last point that I think Josh
also touched on is that A.I.

29:47.805 --> 29:51.340
technology is not primarily
focused on the pointy end

29:51.340 --> 29:54.260
of the spear about launching
weapons, far from it.

29:54.260 --> 29:56.610
The applications that DOD
is considering revolve

29:56.610 --> 29:59.247
around better maintenance,
around fusing together data

29:59.247 --> 30:02.410
from a variety of sources,
or about finding signals

30:02.410 --> 30:05.253
in high volumes of data or
making strategic decisions.

30:05.253 --> 30:08.250
In closing, we believe it's
important to remember there

30:08.250 --> 30:10.180
are three ethical
commitments we must balance

30:10.180 --> 30:12.640
in any set of principles to be developed.

30:12.640 --> 30:14.020
We have an ethical responsibility

30:14.020 --> 30:16.090
to minimize civilian casualties.

30:16.090 --> 30:18.500
We have an ethical responsibility
to our fellow citizens

30:18.500 --> 30:21.380
to find ways to use A.I.
to enhance their security.

30:21.380 --> 30:23.290
And we have an ethical
commitment to our soldiers,

30:23.290 --> 30:25.650
sailors, airmen, and
marines who put their lives

30:25.650 --> 30:27.852
at risk to best protect them and give them

30:27.852 --> 30:29.800
the capabilities that we need.

30:29.800 --> 30:31.839
We think A.I. can help with

30:31.839 --> 30:32.960
all of these, thank you very much.

30:32.960 --> 30:34.550
- [Josh] Thank you very much Ellie,

30:34.550 --> 30:37.657
any clarifying questions? Great.

30:37.657 --> 30:40.180
Okay so let me just ask, is

30:40.180 --> 30:42.363
Kate Crawford or Nick Sinai here?

30:44.120 --> 30:48.726
Kate, Nick, okay, so
next I'm gonna ask Bruno

30:48.726 --> 30:51.540
to read the statement from William Powers

30:51.540 --> 30:55.093
from MIT Media Lab and I will
ask is Michelle Kenzie here?

30:59.950 --> 31:03.060
Aaron Johnson, Aaron great,
if you wouldn't mind getting

31:03.060 --> 31:06.667
right at the mic that would
be awesome, over to you Bruno.

31:06.667 --> 31:08.407
- [Bruno] Okay this is from William Powers

31:08.407 --> 31:11.620
from MIT Media Lab,
his comment is the text

31:11.620 --> 31:14.910
of a January 7th, 2019 op-ed
that he co-authored in the

31:14.910 --> 31:19.555
Boston Globe called Beware
Corporate Machine-Washing of A.I.

31:19.555 --> 31:24.310
Back in the late 1960s and
early 70s when the fossil fuel

31:24.310 --> 31:27.090
industry and other corporate
polluters came under fire

31:27.090 --> 31:30.340
for harming the environment,
the polluters launched massive

31:30.340 --> 31:33.670
ad campaigns portraying themselves
as friends of the earth.

31:33.670 --> 31:37.000
This cynical practice was
later dubbed green-washing.

31:37.000 --> 31:39.220
Today we may be witnessing a new kind

31:39.220 --> 31:41.750
of green-washing in the technology sector.

31:41.750 --> 31:45.923
Addressing widespread
concerns about the pernicious

31:45.923 --> 31:49.149
downsides of artificial
intelligence: robots taking jobs,

31:49.149 --> 31:52.774
fatal autonomous vehicle
crashes, racial bias in criminal

31:52.774 --> 31:57.774
sentencing, the ugly polarization
of the 2018 election,

31:57.830 --> 32:00.870
tech giants are working hard to assure us

32:00.870 --> 32:03.320
of their good intentions surrounding A.I.

32:03.320 --> 32:06.200
But some of their public
relations campaigns are creating

32:06.200 --> 32:09.330
a surface illusion of
positive change without

32:09.330 --> 32:13.020
the verifiable reality,
call it machine washing.

32:13.020 --> 32:16.505
Last year, Google posted
a list of seven A.I.

32:16.505 --> 32:20.100
principles beginning with
be socially beneficial,

32:20.100 --> 32:22.962
Microsoft published The Future Computed,

32:22.962 --> 32:26.470
a book calling for a
human-centered approach to A.I.

32:26.470 --> 32:29.980
that reflects timeless
values and launched a program

32:29.980 --> 32:33.947
to support developers working
to meet humanitarian needs.

32:33.947 --> 32:37.630
Germany based SAP, one of
the world's largest software

32:37.630 --> 32:40.780
companies now has an A.I. ethics advisory

32:40.780 --> 32:43.423
panel that includes a theologian,

32:43.423 --> 32:48.423
a political scientist and a bioethicist.

32:49.000 --> 32:51.460
On seeing these initiatives the

32:51.460 --> 32:53.560
natural response is to applaud.

32:53.560 --> 32:56.350
If the most powerful tech
companies are on the case,

32:56.350 --> 33:00.170
surely these problems will
soon be solved, or will they?

33:00.170 --> 33:02.710
Facebook's response to the
intense public scrutiny

33:02.710 --> 33:05.440
it has received since the
election has been to treat

33:05.440 --> 33:08.700
it as a public relations challenge.

33:08.700 --> 33:11.430
After a sell off of stock
in the wake of the Cambridge

33:11.430 --> 33:15.000
Analytica scandal in
early 2018, Facebook spent

33:15.000 --> 33:17.748
1.7 million dollars on
an ad campaign in subway

33:17.748 --> 33:20.560
stations and trains in the Boston area.

33:20.560 --> 33:24.700
The slogan was the best part
of Facebook isn't on Facebook

33:24.700 --> 33:27.130
and the accompanying images
showed people engaging in

33:27.130 --> 33:31.730
healthy, fun, offline activities
such as hiking and dancing.

33:31.730 --> 33:34.900
The message, Facebook is
all about making our world

33:34.900 --> 33:38.970
better and a more harmonious place.

33:38.970 --> 33:41.550
Yet as the New York
Times recently reported,

33:41.550 --> 33:44.175
the company had also hired
lobbyists and opposition

33:44.175 --> 33:48.930
research firms to combat
Facebook's critics,

33:48.930 --> 33:50.710
shift public anger toward rival

33:50.710 --> 33:53.980
companies and ward off
damaging regulation.

33:53.980 --> 33:58.690
As experts on the societal
effects and ethics of A.I.,

33:58.690 --> 34:01.740
a term that broadly refers
to all technologies that

34:01.740 --> 34:04.840
use decision making
algorithms, we are keenly aware

34:04.840 --> 34:07.830
of how much work remains
to be done in understanding

34:07.830 --> 34:10.071
how this new form of intelligence works

34:10.071 --> 34:14.110
once it's released to the real world.

34:14.110 --> 34:16.630
The tech industry has a
long history of humanistic

34:16.630 --> 34:20.390
intentions and pronouncements
and in fact is responsible

34:20.390 --> 34:23.210
for all kinds of progress, yet somehow,

34:23.210 --> 34:25.360
we've gotten into the most serious A.I.

34:25.360 --> 34:28.910
crisis since the dawn
of these technologies.

34:28.910 --> 34:32.440
As with climate change and
environmental degradation,

34:32.440 --> 34:35.320
if we leave oversight of
intelligent machines solely

34:35.320 --> 34:38.170
to the companies that build
and sell technologies,

34:38.170 --> 34:41.293
we'll see many more crises
in the coming decades.

34:42.160 --> 34:44.365
- [Josh] Alright thank
you very much Bruno.

34:44.365 --> 34:46.470
So I just wanna remind everyone
that we really want you

34:46.470 --> 34:49.150
to use these comment cards
and hand them to my friend

34:49.150 --> 34:52.489
here in the corner who'll
give them to me because

34:52.489 --> 34:55.020
I would really love to hear
from all of you who came

34:55.020 --> 34:57.260
all this way today and
not just have Bruno read

34:57.260 --> 34:59.520
all of our friends who've written in.

34:59.520 --> 35:02.460
So please don't hesitate,
in the meantime, I will ask

35:02.460 --> 35:07.460
is Bobby Cunningham present,
Bobby, or Matthew Dodd?

35:09.910 --> 35:12.687
Alright, well you know
what you're next Bruno

35:12.687 --> 35:15.313
and I'm very glad to hear
from you in person Aaron.

35:17.560 --> 35:18.460
- Raise your hand.

35:19.910 --> 35:22.603
- Hi thank you all for
coming and I think it's

35:22.603 --> 35:25.400
great that the Defense Innovation Board

35:25.400 --> 35:27.030
is tackling this important issue.

35:27.030 --> 35:28.470
I especially wanna thank Dr. Murray

35:28.470 --> 35:31.293
whose textbook I use in
my class in the fall.

35:33.500 --> 35:35.100
- Can you get closer to the mic?

35:36.101 --> 35:40.700
- Yeah, bring that up
here, my main concern

35:40.700 --> 35:43.490
is with the use of lethal
autonomous weapon systems

35:43.490 --> 35:46.740
and I know the issue of
A.I. ethics is much broader

35:46.740 --> 35:50.320
than that and as you mentioned
earlier that, the many,

35:50.320 --> 35:53.950
the places that the DOD is gonna be using

35:53.950 --> 35:56.340
A.I. is much, much broader than this.

35:56.340 --> 35:57.980
The distinction I wanna
make in particular about

35:57.980 --> 36:00.210
lethal autonomous weapons
systems is thinking

36:00.210 --> 36:04.240
about moral decision
making and moral actions.

36:04.240 --> 36:07.730
- And so just because, if
a human had done the same

36:07.730 --> 36:12.140
action as an autonomous
system, we would have

36:12.140 --> 36:14.670
called it moral, that doesn't mean

36:15.807 --> 36:18.090
that the weapons system
doing the exact same actions

36:18.090 --> 36:21.290
was a moral
decision process.

36:21.290 --> 36:24.250
And that thinking about
lethality decisions is something

36:24.250 --> 36:27.010
that's much deeper than, than just

36:27.010 --> 36:29.090
sort of what the final result was.

36:29.090 --> 36:32.360
And so I wrote probably more
coherent stuff in my written

36:32.360 --> 36:35.220
comments, but that was sort of
the main point that I wanted

36:35.220 --> 36:38.750
to bring up was that, these
sorts of ethical decisions

36:38.750 --> 36:42.146
can't always be measured and
we can't just take the

36:42.146 --> 36:45.961
weapon system and put it through
some kind of ethical test

36:45.961 --> 36:48.840
and make sure it doesn't kill
any civilians or whatever

36:48.840 --> 36:50.950
sort of test we come
up with because that's

36:50.950 --> 36:53.960
not what defines a moral action.

36:53.960 --> 36:57.567
And moral actions have more to
do with the internal process

36:57.567 --> 37:02.536
and the decision to trade
off between you know possible

37:02.536 --> 37:07.536
actions and potentially
make sacrifices from that.

37:07.930 --> 37:10.503
So that's all I wanted to say, thank you.

37:11.557 --> 37:14.663
- Any clarifying questions? Great.

37:20.090 --> 37:23.159
- Comment is from Michelle
Kenzie from Boston University.

37:23.159 --> 37:25.463
The Department of Defense
in its push to expand

37:25.463 --> 37:28.470
the intelligence, autonomy
and mobility of systems

37:28.470 --> 37:31.020
supporting the dismounted
soldiers in real time

37:31.020 --> 37:33.680
tactical decision making
capabilities will have

37:33.680 --> 37:36.760
to address a number of
design challenges related

37:36.760 --> 37:39.640
to the safe deployment of
artificial intelligence

37:39.640 --> 37:42.736
learning modules, models and techniques.

37:42.736 --> 37:46.440
Machine learning models are
often trained using private

37:46.440 --> 37:49.153
data sets that are very
expensive to collect or highly

37:49.153 --> 37:52.823
sensitive, using large
amounts of computing power.

37:54.330 --> 37:57.990
The models are commonly exposed
either through online APIs

37:57.990 --> 38:00.100
or used in hardware devices deployed

38:00.100 --> 38:02.653
in the field or given to the end users.

38:03.500 --> 38:06.700
This gives incentives to
adversaries to attempt to steal

38:06.700 --> 38:10.253
these ML models as a proxy
for gathering data sets.

38:11.104 --> 38:16.010
While API-based model exfiltration
has been studied before,

38:16.010 --> 38:18.480
the theft and protection of
machine learning models on

38:18.480 --> 38:21.423
hardware devices have not
been explored as of now.
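[Editor's illustration] The model-theft risk described in this comment can be sketched with a minimal, hypothetical example: an adversary with only query access to a deployed model harvests input/label pairs and fits a local surrogate that mimics the original. The `victim_predict` stand-in below is an assumed toy rule, not any real prediction API.

```python
import random

random.seed(7)

def victim_predict(x):
    # Hypothetical stand-in for a deployed model behind an API:
    # labels a 2-D point by a hidden linear boundary the attacker
    # cannot observe directly.
    return 1 if 2.0 * x[0] - 1.0 * x[1] + 0.5 > 0 else 0

def extract_surrogate(n_queries=2000, epochs=50, lr=0.1):
    # Step 1: query the victim on random inputs to harvest labels.
    data = [(random.uniform(-1, 1), random.uniform(-1, 1))
            for _ in range(n_queries)]
    labels = [victim_predict(x) for x in data]
    # Step 2: train a local perceptron on the harvested pairs.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def surrogate_predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

w, b = extract_surrogate()
fresh = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
agreement = sum(surrogate_predict(w, b, x) == victim_predict(x)
                for x in fresh) / len(fresh)
```

With enough queries the surrogate agrees with the victim on nearly all fresh inputs, which is why limiting, pricing, or monitoring query access is among the defenses studied for API-exposed models.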

38:26.240 --> 38:30.240
- Is, is Alexis Reese Simpson here, Alexis?

38:36.564 --> 38:39.928
Okay, do you all have these from outside?

38:39.928 --> 38:40.761
(laughing)

38:40.761 --> 38:42.328
There was a stack of these outside,

38:42.328 --> 38:45.348
should we bring these inside
for you, do you need them?

38:45.348 --> 38:48.063
Please hand them to Aaron or
just wave them in the air.

38:50.904 --> 38:53.400
I did not expect such a
shy and retiring bunch,

38:53.400 --> 38:55.280
in the meantime though I will ask Bruno

38:55.280 --> 38:57.860
if you would please
read Bobby Cunningham's.

38:57.860 --> 38:59.740
- [Bruno] Bobby Cunningham from Omnedy.

38:59.740 --> 39:02.250
Omnedy is a self-assembling
knowledge curation

39:02.250 --> 39:05.450
and discovery platform
that fuses advanced natural

39:05.450 --> 39:08.450
language processing, machine learning,

39:08.450 --> 39:11.053
linguistic blockchain and graph math.

39:11.891 --> 39:14.700
Omnedy detects similarities
across diverse intelligence

39:14.700 --> 39:18.580
sources, driving rapid
discovery and insight and has

39:18.580 --> 39:22.090
co-founded the wisdom tech
society to provide a framework

39:22.090 --> 39:26.573
of ethical data curation as
data is transformed into wisdom.

39:27.436 --> 39:29.283
I'm gonna slow down a little bit.

39:30.750 --> 39:34.060
As artificial intelligence
emerges as a means to find

39:34.060 --> 39:37.360
patterns and perform analytics
on massive data sets,

39:37.360 --> 39:40.090
many organizations, companies
and governments are seeking

39:40.090 --> 39:42.640
to leverage this powerful
technology for their own

39:42.640 --> 39:46.010
applications however it is
critically important that

39:46.010 --> 39:49.080
those that seek to use
this technology also better

39:49.080 --> 39:52.140
understand both the strengths
and weaknesses associated

39:52.140 --> 39:55.990
with the data processing
strategies, algorithms,

39:55.990 --> 39:59.590
and business practices
enabling machine learning.

39:59.590 --> 40:03.411
Such understanding is complicated
by the massive hyperbole

40:03.411 --> 40:07.850
expressed by many companies
often further amplified by

40:07.850 --> 40:09.470
journalists who do not understand

40:09.470 --> 40:12.150
the topics about which they are writing.

40:12.150 --> 40:14.964
Taken together, these forces create

40:14.964 --> 40:16.914
unrealistic expectations and even fear.

40:17.960 --> 40:21.500
Properly applied machine
learning can be a useful tool

40:21.500 --> 40:24.400
for exploring and discerning
patterns in big data,

40:24.400 --> 40:28.360
where human inspection of
massive data is not scalable.

40:28.360 --> 40:31.440
When seeking to categorize
or otherwise sort data into

40:31.440 --> 40:34.182
sets that do enable human insight,

40:34.182 --> 40:38.190
machine learning processes offer

40:38.190 --> 40:41.500
a useful augmentation
for human intelligence.

40:41.500 --> 40:44.810
One can think of the algorithms
driving these sorting

40:44.810 --> 40:48.710
processes as a form of high
dimensional curve fitting,

40:48.710 --> 40:51.530
that is applying a structural analysis to

40:51.530 --> 40:54.960
find the underlying patterns
in a large data set.
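[Editor's illustration] The "curve fitting" framing can be made concrete in one dimension with an assumed toy data set: plain least-squares line fitting recovers a hidden pattern from noisy data with no understanding involved, and machine learning scales the same idea to many more dimensions and far more flexible curves.

```python
import random

random.seed(0)

# Hidden pattern the data follows: y = 3x + 2, plus measurement noise.
xs = [i / 10.0 for i in range(100)]
ys = [3.0 * x + 2.0 + random.gauss(0, 0.1) for x in xs]

# Closed-form ordinary least squares for slope and intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
# The fit recovers roughly slope 3 and intercept 2 from the data alone.
```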

40:54.960 --> 40:57.680
Used in this manner, A.I. technology

40:57.680 --> 41:00.820
is well suited for useful application.

41:00.820 --> 41:02.532
What computers do poorly

41:02.532 --> 41:06.190
is make judgements.

41:06.190 --> 41:09.160
Computers do not understand
irony or sarcasm.

41:09.160 --> 41:12.580
Computational processes do
not well enable abstraction

41:12.580 --> 41:15.970
of ideas, generalization
or creative thinking.

41:15.970 --> 41:18.740
These areas remain in the
realm of the human mind,

41:18.740 --> 41:22.460
relying on computer processes
with expectations that

41:22.460 --> 41:25.720
the computer can be
creative, generalize, or make

41:25.720 --> 41:29.330
judgements will lead to
disappointment and frustration.

41:29.330 --> 41:31.520
Understanding the limits
of machine intelligence

41:31.520 --> 41:34.483
is critical for the effective
use of these technologies.

41:35.870 --> 41:38.840
It is important to note that
the output of a computational

41:38.840 --> 41:41.653
process is limited by the
quality of its input data.

41:42.610 --> 41:45.510
This clearly applies to
consistency of data formatting,

41:45.510 --> 41:47.940
completeness of data records,

41:47.940 --> 41:50.020
and other quality control metrics.

41:50.020 --> 41:52.560
Yet it also applies to
the ethical sourcing

41:52.560 --> 41:55.400
and curation of the data sets themselves.

41:55.400 --> 41:58.130
Where data is sourced from
people, organizations, companies,

41:58.130 --> 42:00.650
or government agencies in each case

42:00.650 --> 42:04.153
the data sources should
be derived in a manner

42:04.153 --> 42:08.110
that is both morally appropriate
and legally compliant.

42:08.110 --> 42:09.815
Finally, the use of data to form

42:09.815 --> 42:12.978
insights is a three step process.

42:12.978 --> 42:16.150
Data can be defined as
numbers, facts and figures

42:16.150 --> 42:18.520
such as sensor readings or the

42:18.520 --> 42:20.293
monitoring of vital signs in a patient.

42:20.293 --> 42:23.691
Yet data alone does not afford insight.

42:23.691 --> 42:28.490
When data is contextualized it
transforms into information.

42:28.490 --> 42:31.570
When that information is
contextualized, that information

42:31.570 --> 42:34.700
may form wisdom, leading
to actionable insight.

42:34.700 --> 42:38.500
Each tier of this transformation
process is vulnerable

42:38.500 --> 42:40.850
and must be safeguarded ethically so

42:40.850 --> 42:43.100
that those actionable
insights are consistent

42:43.100 --> 42:45.433
with the moral framework
of our civilization.
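[Editor's illustration] The three-tier data-to-information-to-insight transformation described above can be sketched as a small pipeline; this uses the comment's vital-signs example, with all field names and thresholds hypothetical rather than anything Omnedy specified.

```python
def to_information(reading_bpm):
    # Tier 1 -> 2: a raw number contextualized with what it measures
    # and an illustrative normal range.
    return {"measurement": "heart_rate_bpm",
            "value": reading_bpm,
            "normal_range": (60, 100)}

def to_insight(info):
    # Tier 2 -> 3: contextualized information turned into an
    # actionable recommendation.
    lo, hi = info["normal_range"]
    if info["value"] < lo:
        return "below normal range: escalate for clinical review"
    if info["value"] > hi:
        return "above normal range: escalate for clinical review"
    return "within normal range: no action needed"
```

Each step is a point where bad sourcing or tampering corrupts everything downstream, which is the vulnerability the comment flags.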

42:46.310 --> 42:48.710
At this time what is most
needed is a framework of

42:48.710 --> 42:52.930
ethical data curation as data
is transformed into wisdom.

42:52.930 --> 42:55.010
This is why we at Omnedy have

42:55.010 --> 42:57.520
co-founded the Wisdom Tech Society.

42:57.520 --> 43:00.190
As the Defense Innovation
Board considers ways in which

43:00.190 --> 43:02.780
to advise the Pentagon
with respect to the ethical

43:02.780 --> 43:05.710
use of artificial intelligence,
we urge you to meet

43:05.710 --> 43:09.290
with Omnedy and Wisdom Tech
Society which will demonstrate

43:09.290 --> 43:11.700
technology that can transform
the ways in which our nation

43:11.700 --> 43:14.223
conducts its intelligence
gathering methods.

43:16.370 --> 43:20.880
- [Josh] Thank you Bruno, Aaron,

43:20.880 --> 43:23.938
maybe we should hand the cards out again.

43:23.938 --> 43:25.410
(laughing)

43:25.410 --> 43:29.991
Just a subtle encouragement,
have we received any yet?

43:29.991 --> 43:33.074
(audience murmuring)

43:34.310 --> 43:36.630
Great, yes if, yeah, if
you have a card please

43:36.630 --> 43:39.695
hand them in, great, oh
this is very exciting.

43:39.695 --> 43:41.583
(laughing)

43:41.583 --> 43:45.872
I'm so relieved, no they
come to this mic, so Maggie,

43:45.872 --> 43:50.795
you've rescued me, thank you,
please use the microphone,

43:50.795 --> 43:52.123
that's phenomenal, thank you very much.

44:03.680 --> 44:06.320
- Hello my name is Maggie
Oats, I am a PhD student

44:06.320 --> 44:09.383
here at Carnegie Mellon in
societal computing, and I work

44:09.383 --> 44:13.573
in CyLab which is the
cybersecurity laboratory here.

44:14.643 --> 44:18.320
Let me first say that I disagree
with the very enterprise

44:18.320 --> 44:22.160
and existence of the Defense
Innovation Board as it seems

44:22.160 --> 44:24.750
like a tool to lend
credibility to the project

44:24.750 --> 44:27.410
of advancing military
efficiency and further

44:27.410 --> 44:30.303
escalating the baseline
that we consider defense.

44:31.380 --> 44:35.030
Second I object to CMU's
heavy involvement with the DOD

44:35.030 --> 44:36.910
and I'm hard pressed to name a quote

44:36.910 --> 44:39.433
ethical and responsible
use of A.I. at all.

44:39.433 --> 44:41.560
With that said, I would
like to focus on something

44:41.560 --> 44:46.054
else today and that is what I
view as an externality of the

44:46.054 --> 44:50.715
project of A.I. in the DOD and
that is the growth of civil

44:50.715 --> 44:54.340
surveillance, both
domestically and abroad.

44:54.340 --> 44:57.890
The development of machine
learning algorithms relies on

44:57.890 --> 45:01.440
massive amounts of data of
course and while methods are

45:02.341 --> 45:04.210
being developed, first of all, to
reduce the amount of data

45:06.119 --> 45:07.613
required, and to reduce the amount
of labeled data required,

45:08.670 --> 45:11.100
these methods often correlate
with having the downside

45:11.100 --> 45:14.130
of being hard to explain and
more difficult to verify,

45:14.130 --> 45:18.750
making them unlikely to see
use in the DOD's context.

45:18.750 --> 45:20.910
Beyond that, the project of labeling data

45:20.910 --> 45:24.940
often rests on exploitative
labor practices.

45:24.940 --> 45:27.920
So I stand here to assert that
any responsible principles

45:27.920 --> 45:31.330
must address the effects
that DOD A.I. will have on

45:31.330 --> 45:32.930
surveillance, not only from

45:32.930 --> 45:34.440
the state, but also from the tech

45:34.440 --> 45:37.860
companies that will be the
first line in building that A.I.

45:38.710 --> 45:41.963
This topic is absolutely
not out of scope, thank you.

45:44.080 --> 45:49.080
- Thank you, I hope we get a
few more and in the meantime

45:50.970 --> 45:54.010
Matthew Dodd from the National
Institutes of Health.

45:54.010 --> 45:56.017
- [Bruno] Matthew Dodd from

45:56.017 --> 45:57.545
the National Institutes of Health.

45:57.545 --> 45:58.610
I would like to thank the
board for their leadership

45:59.587 --> 46:01.120
and sage counsel to the department,

46:01.120 --> 46:04.180
industry, academia and indeed the nation.

46:04.180 --> 46:05.730
It is a profoundly overwhelming

46:06.863 --> 46:08.670
undertaking of responsibility.

46:08.670 --> 46:09.550
I thank you all and I am humbled

46:09.550 --> 46:12.513
by the purity of competence
demonstrated by the board.

46:13.783 --> 46:16.080
This stands in stark
contrast to the anti-expert

46:16.080 --> 46:18.000
virus that has infected every corner

46:18.000 --> 46:22.023
of society one participation
trophy at a time.

46:24.060 --> 46:28.190
While demonstrating that,
primarily one has a duty to serve

46:28.190 --> 46:32.200
their country, and one has
unique experiences

46:32.200 --> 46:36.514
and tools with which to
innovate the notion of service.

46:36.514 --> 46:41.010
One has the moral imperative,
the obligation, to find

46:41.010 --> 46:43.580
out how to manifest it, innovate, so

46:43.580 --> 46:45.663
as to be of service, thank you again.

46:48.581 --> 46:50.630
- Okay Bruno thank you very much.

46:50.630 --> 46:54.477
Next we will hear from
Dave Zubro from SEI.

46:59.533 --> 47:01.221
(laughing)

47:01.221 --> 47:03.236
Oh yes, hold on, as long
as you promise to give it

47:03.236 --> 47:05.700
back to me when you're
done, thank you very much.

47:05.700 --> 47:09.210
And after, and after Mr.
Zubro, we'll hear from

47:09.210 --> 47:12.740
Pat Houston next, you can go
to the mic if you want sir.

47:12.740 --> 47:17.740
- Hi I'm Dave Zubro from the
SEI and Josh, your remarks

47:19.520 --> 47:22.420
at the beginning struck me
because there was a lot of

47:22.420 --> 47:25.926
emphasis placed on adhering to the law and

47:25.926 --> 47:30.070
the legal framework that's been built up

47:30.070 --> 47:33.940
around conducting military
operations in war.

47:33.940 --> 47:38.630
And I just, it struck me
that our legal framework

47:38.630 --> 47:41.470
and our laws may not adequately

47:41.470 --> 47:46.470
express our values and
morals as a society.

47:46.790 --> 47:51.350
And so simply adhering to
the law may not be enough,

47:51.350 --> 47:56.350
so I would ask that in
the work and deliberations

47:56.500 --> 47:59.713
of the board and others
thinking about how to deploy

47:59.713 --> 48:04.713
artificial intelligence
into, into our lives as well

48:06.270 --> 48:10.450
as you know national
security, that we push it back

48:11.470 --> 48:16.100
and say is that enough, is
that the right framework,

48:16.100 --> 48:21.100
is it really expressing our
set of values as a society?

48:21.920 --> 48:23.730
So that's my comment.

48:23.730 --> 48:26.163
- Thank you so much, any
clarifying questions?

48:27.880 --> 48:31.210
Thank you very much sir,
I greatly appreciate it.

48:31.210 --> 48:35.220
Bruno if you would, Julian Kline

48:35.220 --> 48:37.563
and then over to you sir right after that.

48:39.170 --> 48:41.900
- [Bruno] Julian Kline
from Kline Studio LLC.

48:41.900 --> 48:44.870
Dear DIB, here are my suggestions.

48:44.870 --> 48:48.640
One, while A.I. is very powerful
for information gathering,

48:48.640 --> 48:53.110
data analysis and reactive
cyber defenses, any A.I. dealing

48:54.000 --> 48:57.230
with humans should have a
margin of error for living

48:57.230 --> 49:01.320
creature's mistakes,
confusions and improvisations.

49:01.320 --> 49:02.920
No human nor animal should

49:02.920 --> 49:06.160
be held to mathematical expectations.

49:06.160 --> 49:10.447
Two, humans cannot be
backed up, re-downloaded,

49:10.447 --> 49:14.930
or rebooted, any physical
world A.I. should do its best

49:14.930 --> 49:17.030
to preserve human life in a humane

49:18.349 --> 49:20.680
way despite any further coded tasks.

49:20.680 --> 49:24.710
Three, any powerful A.I. should
come with equally powerful

49:24.710 --> 49:27.770
A.I., a minimum of three, which can

49:27.770 --> 49:30.480
create a checks and balances scenario.

49:30.480 --> 49:34.230
If one A.I. begins acting
strange, the other two can

49:34.230 --> 49:38.149
fix it with permission
override keys and regulations.

49:38.149 --> 49:41.410
One cannot override the other two.

49:41.410 --> 49:44.860
In the event all three
are corrupt, the owner or

49:44.860 --> 49:46.990
developer should have a kill switch

49:46.990 --> 49:51.100
and back door code to delete all three.

49:51.100 --> 49:53.560
Four, we need to

49:53.560 --> 49:56.290
clean the internet with A.I.

49:56.290 --> 49:59.520
This sort of cyber
regulation will rely on a

49:59.520 --> 50:03.169
communicative tech society.
Segregating tech experts

50:03.169 --> 50:06.661
and what they know may
seem safer, but it's

50:06.661 --> 50:10.293
detrimental to our minimum
bar of tech education.

50:11.890 --> 50:14.130
With a collaborative cyber community you will

50:14.130 --> 50:18.450
have the creative answers
to defense issues.

50:18.450 --> 50:23.060
Five, using A.I. to affect
people's way of life,

50:23.060 --> 50:26.270
sway people's opinions, or cause chaos or

50:26.270 --> 50:28.890
confusion is different
than a media campaign

50:28.890 --> 50:32.303
based on A.I.-collected
and configured data.

50:32.303 --> 50:36.217
The former is systematic,
intentional and cultural

50:36.217 --> 50:39.900
intrusion, the latter is
an educated broadcast.

50:39.900 --> 50:41.550
Thank you for your time and work.

50:43.050 --> 50:44.640
- And Bruno because it's so brief why

50:44.640 --> 50:47.570
don't you just quickly
also read Joshua Darrow.

50:47.570 --> 50:49.330
- Joshua Darrow, Department of the Navy.

50:49.330 --> 50:52.950
Innovation begins and ends
with technical capability.

50:52.950 --> 50:56.070
Over decades the government
has outsourced its

50:56.070 --> 50:57.900
most technical work, eroding the

50:57.900 --> 51:00.880
technical skill of the
government workforce.

51:00.880 --> 51:03.230
To have an innovative
government workforce dealing

51:03.230 --> 51:06.490
with the distribution of
technical work, i.e. designing,

51:06.490 --> 51:11.100
building, testing, redesigning,
and testing is essential.

51:11.100 --> 51:13.330
- Terrific, over to you sir.

51:13.330 --> 51:15.550
- Well good afternoon,
my name's Pat Houston,

51:15.550 --> 51:18.980
and I'm a soldier and I'd like
to provide four observations

51:18.980 --> 51:21.330
from my perspective, but in my personal

51:21.330 --> 51:24.478
capacity based on five tours
in Iraq and Afghanistan

51:24.478 --> 51:28.412
and approximately three
decades in uniform.

51:28.412 --> 51:30.610
First I wanna assure
those here today that from

51:30.610 --> 51:34.080
what I've seen, the Pentagon
is very sensitive to these

51:34.080 --> 51:37.569
valid concerns and is deeply
committed to addressing them.

51:37.569 --> 51:41.410
Leaders at all levels are totally
dedicated to ensuring that

51:41.410 --> 51:44.711
A.I. enhanced systems are developed and

51:44.711 --> 51:48.633
used in compliance with
the law and with ethics

51:48.633 --> 51:51.260
and also done in a responsible manner.

51:51.260 --> 51:55.250
Number two, I acknowledge
that no A.I. system will

51:55.250 --> 51:57.700
ever be perfect, but I also would offer

51:57.700 --> 52:00.630
that no military system is perfect.

52:00.630 --> 52:03.060
In fact one of the most
unpredictable systems

52:03.060 --> 52:06.007
we can employ is the individual soldier.

52:06.007 --> 52:09.700
As we all know, human
behavior can never be reliably

52:09.700 --> 52:14.269
predicted especially when
someone is tired, cold, hungry,

52:14.269 --> 52:17.258
and scared as is often
the case with soldiers.

52:17.258 --> 52:21.300
But I do think A.I. can
help, one of the measures

52:21.300 --> 52:24.210
for determining whether
to leverage A.I. is to ask

52:24.210 --> 52:27.250
whether it's as good as
or in some cases better

52:27.250 --> 52:29.750
than what humans can
perform by themselves.

52:29.750 --> 52:33.640
And so we're lookin' at
human machine teaming that

52:33.640 --> 52:37.130
leverages the best of both
and that helps preserve

52:37.130 --> 52:39.550
appropriate levels of human
judgment to address some

52:39.550 --> 52:41.170
of the concerns and issues that

52:42.275 --> 52:43.108
have already been raised here today.

52:43.975 --> 52:46.490
Third, just to address
one concern head on,

52:46.490 --> 52:48.770
this concern about killer robots,

52:48.770 --> 52:51.878
the law of war is very
clear that commanders

52:51.878 --> 52:55.810
can never unleash systems
over which they'll

52:55.810 --> 52:59.129
lose control and the commanders
always are responsible

52:59.129 --> 53:02.100
for any weapons that they employ.

53:02.100 --> 53:05.810
Fourth and finally, I would
suggest the cooperation

53:05.810 --> 53:09.158
between the government
and academia and industry

53:09.158 --> 53:12.760
is absolutely essential to
the responsible way ahead

53:12.760 --> 53:15.020
in addressing all these issues.

53:15.020 --> 53:17.880
The best and brightest
A.I. researchers out there

53:17.880 --> 53:20.170
should insist and have a right to insist

53:20.170 --> 53:22.810
on legal and ethical conduct by the

53:22.810 --> 53:25.368
governments that they're working with.

53:25.368 --> 53:29.620
And once you have that
confirmation of ethical

53:29.620 --> 53:33.380
and legal performance by those governments

53:33.380 --> 53:35.720
out there, then I don't think

53:35.720 --> 53:39.850
those A.I. researchers
should boycott the endeavor.

53:39.850 --> 53:41.840
As some have suggested
that they should do.

53:41.840 --> 53:45.477
In my view, if the best and brightest A.I.

53:45.477 --> 53:49.300
researchers out there who
are concerned about ethics

53:49.300 --> 53:51.930
boycott the process, that's
gonna just leave a void

53:51.930 --> 53:54.310
that would be filled by
other A.I. researchers

53:54.310 --> 53:57.940
who are less ethical or less
capable or both and I think

53:57.940 --> 54:00.290
that would be a recipe
for disaster, thank you.

54:01.473 --> 54:06.473
- Thank you sir, any clarifying
questions, no, great.

54:08.654 --> 54:12.870
Okay I think I've seen some
evidence of cards stirring

54:12.870 --> 54:15.600
so please just feel free to
just push, you know give them

54:15.600 --> 54:18.910
to your neighbor, you can
pass them to me, we can be as

54:18.910 --> 54:23.390
casual as this podium,
stage arrangement permits.

54:23.390 --> 54:27.450
That's great, alright so I'll
ask Bruno if you would read

54:27.450 --> 54:30.740
the statement from Thomas
Creely and then Henry Hargrove

54:30.740 --> 54:33.330
if you would please feel free
to approach the microphone.

54:33.330 --> 54:37.570
- Thomas Creely, United
States Naval War College.

54:37.570 --> 54:39.890
The U.S. Naval War College
has recently established

54:39.890 --> 54:43.100
a special graduate
certificate program in ethics

54:43.100 --> 54:45.210
and emerging military technology.

54:45.210 --> 54:48.240
As its director, I work
with a dozen competitively

54:48.240 --> 54:51.730
selected students to engage
in ethics and technology

54:51.730 --> 54:55.010
related course work as well
as conduct research on the

54:55.010 --> 54:58.460
ethical implications of
emerging technologies.

54:58.460 --> 55:01.450
Each student produces a
lengthy professional paper,

55:01.450 --> 55:05.011
analyzing some aspects of
the ethics technology nexus.

55:05.011 --> 55:08.978
Many of them dealing with
the various forms of A.I.

55:08.978 --> 55:13.978
We have developed connections
with DARPA, ONR, Boston Global

55:14.590 --> 55:17.520
Forum, and the Director for
Defense Intelligence and a

55:17.520 --> 55:20.780
number of academic institutions
exploring the ethics of A.I.

55:24.114 --> 55:27.500
- Thank you for allowing
the time for public comment,

55:27.500 --> 55:30.860
I want to reaffirm the other
gentleman's comment before that

55:30.860 --> 55:33.510
one about the soldier and the
importance of understanding

55:33.510 --> 55:36.613
the soldier's behavior, which
my comment sort of echoed.

55:37.531 --> 55:40.450
So in my personal opinion, I
just wanna echo the need that

55:40.450 --> 55:41.900
you've already identified for

55:42.862 --> 55:45.340
realistic and continuous
tests and evaluation.

55:45.340 --> 55:48.300
And the reason for this is
autonomous systems will coexist

55:48.300 --> 55:51.740
with humans, individually and
in large social networks and

55:51.740 --> 55:55.480
if we are truly pursuing
understandable A.I. that responds

55:55.480 --> 56:00.190
to stimuli, shaped by this
humanity, then our DOD model

56:00.190 --> 56:03.840
and simulation in live and
virtual constructive programs,

56:03.840 --> 56:07.270
need to keep pace and improve
portrayals of the humans

56:07.270 --> 56:11.500
and human technical behavior
in those simulations.

56:11.500 --> 56:14.130
Secondly this testing
evaluation improvement should

56:14.130 --> 56:17.958
be continuous and that's because
our society and the society

56:17.958 --> 56:22.958
of our peers, allies and
adversaries is also changing

56:23.607 --> 56:28.607
and so that behavior's dynamic
and demands that we are

56:29.438 --> 56:31.820
current and honest in our own

56:31.820 --> 56:34.543
assessments of that portrayal, thank you.

56:41.140 --> 56:44.007
- Yeah, if you wouldn't mind,
that would be great, bring

56:44.007 --> 56:47.560
that back up, and does anyone
else need a comment card?

56:47.560 --> 56:50.310
Great, thank you sir if you
wait you can just that's

56:50.310 --> 56:53.510
perfect and approach the
microphone there on that

56:53.510 --> 56:55.960
or right here would be
also excellent, thank you.

56:58.830 --> 57:03.830
- Thank you and hello, my name
is Kenny Chen and I'd like

57:04.031 --> 57:07.840
to share my appreciation
that explainability and

57:07.840 --> 57:11.020
transparency have been
emphasized in the DOD's initial

57:11.020 --> 57:14.930
A.I. strategy and because this strategy

57:14.930 --> 57:17.925
and its components remain

57:17.925 --> 57:20.860
at a nascent phase, it's understandable

57:20.860 --> 57:25.860
that those terms are not yet
clearly and rigorously defined.

57:25.910 --> 57:29.010
And so to that end I encourage
the DOD to pay especially

57:29.010 --> 57:32.680
close attention to the
question of to whom these A.I.

57:32.680 --> 57:36.150
systems are designed to be
explainable and transparent.

57:36.150 --> 57:39.390
Because there are substantial
differences between how

57:39.390 --> 57:43.160
accurate information might
be delivered to a computer

57:43.160 --> 57:46.280
scientist versus a war
fighter on the field

57:46.280 --> 57:49.060
versus a journalist or a politician.

57:49.060 --> 57:52.940
And the DOD cannot be
too careful in ensuring

57:52.940 --> 57:55.920
that misinformation and the risks

57:55.920 --> 57:58.555
of misunderstanding are minimized.

57:58.555 --> 58:03.300
Having talked to people at
the U.N. Office for

58:03.300 --> 58:06.022
Disarmament Affairs or
considerations across,

58:06.022 --> 58:10.000
given the pace of decision making within

58:10.000 --> 58:13.960
a highly automated you
know decision system

58:13.960 --> 58:16.460
when it comes to international conflict.

58:16.460 --> 58:19.807
Setting those kinds of
standards and understanding

58:19.807 --> 58:24.495
will be existentially important
and just as we are currently

58:24.495 --> 58:29.440
emphasizing interoperability
across information systems

58:29.440 --> 58:33.213
and devices and technology we
should apply the same values

58:33.213 --> 58:36.420
and scrutiny to the way that organizations

58:36.420 --> 58:38.513
communicate with one another, thank you.

58:45.320 --> 58:47.444
- For sure, that's great,
we just, we wanna make

58:47.444 --> 58:50.077
sure we have your contact
information, that's great.

58:50.077 --> 58:52.227
All right, Bruno, Mark
Grubrud's statement.

58:53.320 --> 58:57.640
- Mark Grubrud, University of
North Carolina Chapel Hill.

58:57.640 --> 59:00.100
We stand today at the
start of a revolution,

59:00.100 --> 59:02.699
the rapid advance and wide use of A.I.

59:02.699 --> 59:05.890
Because this technology
replaces human intelligence

59:05.890 --> 59:08.960
in judgment, it has the
potential to cause catastrophic

59:08.960 --> 59:11.820
errors with consequences
in proportion to the

59:11.820 --> 59:15.600
responsibilities being
delegated to machines.

59:15.600 --> 59:19.290
The specific causes of error
in A.I. may be foreseen

59:19.290 --> 59:24.290
but in general will not be and
may not even be identifiable.

59:26.400 --> 59:30.230
In complex systems, it becomes
in principle impossible to

59:30.230 --> 59:35.170
foresee all exceptional situations
that may and will arise.

59:35.170 --> 59:39.320
Computer algorithms are
particularly brittle, but also

59:39.320 --> 59:43.110
complex networks of analog
and living systems exhibit

59:43.110 --> 59:47.163
unpredictable collective
behavior and sudden crises.

59:48.240 --> 59:51.310
As A.I. advances toward
human level capabilities

59:51.310 --> 59:54.260
its potential for instability is clear.

59:54.260 --> 59:57.810
The most severe danger arises
from the unforeseeable,

59:57.810 --> 01:00:00.890
untestable interactions of networked,

01:00:00.890 --> 01:00:04.571
complex, competing and
adversarial systems.

01:00:04.571 --> 01:00:08.830
Experience with such networks
such as the instability

01:00:08.830 --> 01:00:12.050
of high speed trading which
has produced several hugely

01:00:12.050 --> 01:00:15.866
expensive stock market flash
crashes has demonstrated

01:00:15.866 --> 01:00:20.120
the likelihood that confronting
and interacting adversarial

01:00:20.120 --> 01:00:24.876
networks will erupt into crisis
or open combat spontaneously

01:00:24.876 --> 01:00:29.876
or as triggered by unforeseen
circumstances and in any case

01:00:31.090 --> 01:00:35.243
once ignited into a condition
of ongoing violence, may

01:00:35.243 --> 01:00:40.243
execute and
escalate that violence so rapidly

01:00:40.740 --> 01:00:44.095
in such a complicated and
opaque way as to resist or

01:00:44.095 --> 01:00:47.023
frustrate any human attempt to intervene.

01:00:47.870 --> 01:00:51.923
The ongoing confrontation,
competition and adversarial

01:00:51.923 --> 01:00:56.330
nature of such systems and
the adversarial relationship

01:00:56.330 --> 01:00:59.670
of their creators contradict
and will frustrate any effort

01:00:59.670 --> 01:01:03.040
to coordinate between them
so as to mitigate the risk of

01:01:03.040 --> 01:01:05.273
unauthorized or uncontrollable conflict.

01:01:06.330 --> 01:01:08.740
Why would nations undertake to construct

01:01:09.748 --> 01:01:12.340
and rely on such obviously
dangerous systems?

01:01:12.340 --> 01:01:16.180
For the same reason as in the
Cold War, as in the arms race

01:01:16.180 --> 01:01:19.760
today, and under the competitive
pressure of an advancing

01:01:19.760 --> 01:01:22.730
technology that is already
able to aggregate and correlate

01:01:22.730 --> 01:01:25.950
more data than any human
could, and is increasingly able

01:01:25.950 --> 01:01:29.380
to integrate that information
and make high level decisions,

01:01:29.380 --> 01:01:30.750
particularly when signals are

01:01:30.750 --> 01:01:34.153
unambiguous more rapidly than any human.

01:01:35.010 --> 01:01:39.740
We must avoid taking that
road, but in fact, it is the

01:01:39.740 --> 01:01:43.430
road we are already on, so
we must avoid going further.

01:01:43.430 --> 01:01:46.350
The global community must
undertake ambitious arms control

01:01:46.350 --> 01:01:49.900
initiatives including a mandate
of real time accountable

01:01:49.900 --> 01:01:52.780
human control over all weapon systems.

01:01:52.780 --> 01:01:55.800
The DOD and the U.S. cannot do this alone,

01:01:55.800 --> 01:01:59.420
but America's preference
should be for arms control.

01:01:59.420 --> 01:02:01.060
We should say so and everything

01:02:01.060 --> 01:02:03.240
we do should be consistent with that.

01:02:03.240 --> 01:02:06.660
Unilateral disarmament or
renunciation of strategically

01:02:06.660 --> 01:02:10.643
decisive technology would not be effective

01:02:10.643 --> 01:02:14.570
or possible but the opposite
extreme of trying to win the

01:02:14.570 --> 01:02:17.710
arms race should be
equally strongly rejected.

01:02:17.710 --> 01:02:20.450
Questions about A.I.
and autonomous weapons

01:02:20.450 --> 01:02:23.241
are too often framed
only in terms of ethics.

01:02:23.241 --> 01:02:28.030
Is it right to use such weapons?

01:02:28.030 --> 01:02:30.800
We need to consider the ways
in which these weapons are

01:02:30.800 --> 01:02:34.780
eroding our control and
creating a threat to ourselves.

01:02:34.780 --> 01:02:38.550
We must avoid an accelerating
arms race leading to the loss of human

01:02:38.550 --> 01:02:40.300
control and the occurrence of

01:02:40.300 --> 01:02:43.595
war by accident or misconceived design.

01:02:43.595 --> 01:02:47.345
Is it ethical to lead a global
race to oblivion instead

01:02:47.345 --> 01:02:51.810
of leading toward a strong
regime of binding verified arms

01:02:51.810 --> 01:02:55.393
control, global governance
and human security?

01:02:59.874 --> 01:03:03.583
- Thank you, great,
excellent, thanks a lot.

01:03:12.340 --> 01:03:14.517
- I'm April Gallier I'm with the SEI.

01:03:15.573 --> 01:03:18.330
And when we're talking
about ethics and responsible

01:03:18.330 --> 01:03:23.004
use of A.I. and machine
learning it, closer?

01:03:23.004 --> 01:03:28.004
Louder, okay, alright, the
ethics and responsible use where

01:03:29.541 --> 01:03:34.240
I see that it's really in the
details and the implementation

01:03:34.240 --> 01:03:37.990
that that's where the ethics
are really going to happen

01:03:37.990 --> 01:03:40.160
and I've got a list of a couple

01:03:40.160 --> 01:03:42.932
of examples just to highlight this.

01:03:42.932 --> 01:03:45.574
So anytime you're talking
about machine learning or A.I.

01:03:45.574 --> 01:03:49.530
there's going to be a base
rate of error and if you took

01:03:49.530 --> 01:03:51.280
an intro statistics class,

01:03:52.137 --> 01:03:54.197
five percent's acceptable error.

01:03:54.197 --> 01:03:59.130
Well so Tumblr recently put
an adult content filter out

01:03:59.130 --> 01:04:01.320
and let's pretend that they actually got

01:04:01.320 --> 01:04:05.430
their five percent error
in classifying images.

01:04:05.430 --> 01:04:08.610
They didn't get anywhere near
that, but if they had gotten

01:04:08.610 --> 01:04:11.760
to five percent error that's
still millions and millions

01:04:11.760 --> 01:04:14.320
of images that were
misclassified and millions

01:04:14.320 --> 01:04:16.380
and millions of unhappy users.

01:04:16.380 --> 01:04:20.900
Now five percent in a DOD
application, that's not okay.

01:04:20.900 --> 01:04:25.900
So now the second point,
training data has to match the

01:04:26.593 --> 01:04:30.119
conditions of use, so Alexa
and Siri and lots of other

01:04:30.119 --> 01:04:35.119
voice applications are really
bad with women's voices.

01:04:36.080 --> 01:04:40.526
Because their training data
had a bunch of male voices

01:04:40.526 --> 01:04:42.560
in it; they didn't collect
samples from women.

01:04:42.560 --> 01:04:45.500
And so women with slightly higher voices,

01:04:45.500 --> 01:04:48.090
the equipment doesn't
work as well for them.

01:04:48.090 --> 01:04:51.961
And so you have women soldiers
that's gonna be an issue,

01:04:51.961 --> 01:04:53.795
but let's take that a step farther.

01:04:53.795 --> 01:04:55.740
If you train a system
in the United States,

01:04:55.740 --> 01:04:58.410
let's say it's gonna detect a threat,

01:04:58.410 --> 01:05:00.670
whether or not a particular
person is a threat

01:05:01.780 --> 01:05:03.303
based on the emotions they're displaying.

01:05:04.170 --> 01:05:06.870
If you train that on
data collected here and

01:05:06.870 --> 01:05:10.970
then you deploy it in theater,
it's not going to work.

01:05:10.970 --> 01:05:14.460
Because the training data
doesn't match the use case.

01:05:14.460 --> 01:05:18.500
So again the details of how
it was put together and how it

01:05:18.500 --> 01:05:23.023
was designed, that's going to
change the responsible use.

01:05:27.390 --> 01:05:28.863
I'm gonna skip one, it's written down.

01:05:30.740 --> 01:05:32.571
The last point--

01:05:32.571 --> 01:05:34.268
- [Josh] You have time for it if you want.

01:05:34.268 --> 01:05:35.759
- I have time, okay.

01:05:35.759 --> 01:05:37.105
(laughing)

01:05:37.105 --> 01:05:40.334
Alright current efforts in
explainable A.I. and people

01:05:40.334 --> 01:05:43.180
who are the end users of these things,

01:05:43.180 --> 01:05:46.390
those current efforts are
going to be insufficient.

01:05:46.390 --> 01:05:49.540
Because and there's a lot of proof here,

01:05:49.540 --> 01:05:53.114
one study in particular
took a best case scenario

01:05:53.114 --> 01:05:56.230
all the studies that are
working on explainable A.I.,

01:05:56.230 --> 01:05:58.840
they work perfectly, they get the results.

01:05:58.840 --> 01:06:02.444
They presented such results
to a largely college educated

01:06:02.444 --> 01:06:05.078
audience and less than 60 percent

01:06:05.078 --> 01:06:09.392
of those users could
understand the output.

01:06:09.392 --> 01:06:14.392
And you've seen the study,
so we've got to think

01:06:14.590 --> 01:06:18.261
about the human computer,
human A.I. interaction.

01:06:18.261 --> 01:06:20.680
Explainability is not enough
we have to think about

01:06:20.680 --> 01:06:24.654
interpretability and
communicating that efficiently.

01:06:24.654 --> 01:06:29.654
Last point, many predictive
systems so talking more about

01:06:30.600 --> 01:06:33.715
machine learning here, they
can be harmful or beneficial

01:06:33.715 --> 01:06:37.870
depending on what you
do with the predictions.

01:06:37.870 --> 01:06:42.100
So if I said I had a system
that predicts whether a

01:06:42.100 --> 01:06:46.510
particular person is gonna be
a criminal, I think most of us

01:06:46.510 --> 01:06:49.330
would be horrified if we took
that prediction and arrested

01:06:49.330 --> 01:06:52.500
them and locked them up
a la Minority Report.

01:06:52.500 --> 01:06:55.310
That's kind of, they
didn't do anything yet.

01:06:55.310 --> 01:06:57.913
But such systems are actually already

01:06:57.913 --> 01:06:58.750
being deployed in high schools.

01:06:58.750 --> 01:07:01.490
If we predict whether or
not a particular person

01:07:01.490 --> 01:07:04.188
is at risk of dropping out,
if they're a success or at risk,

01:07:04.188 --> 01:07:09.188
what happens with that information?

01:07:09.740 --> 01:07:12.539
Is that student supported,
is that student given extra

01:07:12.539 --> 01:07:15.630
support so that they can
actually complete high school?

01:07:15.630 --> 01:07:19.870
That's a good use, if
they are treated as oh

01:07:19.870 --> 01:07:23.900
you're already a drop out,
then that's irresponsible.

01:07:23.900 --> 01:07:26.310
And so what we do with the predictions,

01:07:26.310 --> 01:07:29.680
it's not the ML it's what
we do with the information.

01:07:29.680 --> 01:07:33.810
So all of these examples are
to highlight that the ethics

01:07:33.810 --> 01:07:37.960
and the responsible use it
really comes down to the details

01:07:37.960 --> 01:07:41.063
in each particular case, thank you.

01:07:42.800 --> 01:07:45.309
- I'm sorry could I ask a question?

01:07:45.309 --> 01:07:46.142
- Yes please.

01:07:46.142 --> 01:07:47.760
- You made a comment about five percent,

01:07:47.760 --> 01:07:49.240
where did that come from?

01:07:49.240 --> 01:07:54.240
- So that's the standard
t-test, it's been kind

01:07:54.390 --> 01:07:58.252
of the standard, and the five percent comes

01:07:58.252 --> 01:08:02.890
from just a tradition of statistics.

01:08:02.890 --> 01:08:05.957
If you do something, like
if you flipped a coin

01:08:05.957 --> 01:08:09.600
when people start getting suspicious

01:08:09.600 --> 01:08:13.110
that maybe it's not fair,
that something seems weird.

01:08:13.110 --> 01:08:14.860
It's at about a five percent error.

01:08:15.924 --> 01:08:18.684
And so that's a good
threshold for a t-test,

01:08:18.684 --> 01:08:22.660
but what's happened is,
because that's the standard

01:08:22.660 --> 01:08:27.070
statistical threshold, people
have without thinking about

01:08:27.070 --> 01:08:30.700
it, transferred that five
percent over to all kinds of

01:08:30.700 --> 01:08:34.270
machine learning applications
as this is a great threshold.

01:08:34.270 --> 01:08:37.010
And it's not for a lot of applications,

01:08:37.010 --> 01:08:40.325
and so that's an example
of carelessly transferring

01:08:40.325 --> 01:08:43.960
something from one
domain to another domain.

01:08:43.960 --> 01:08:44.910
- [Josh] Thank you.

01:08:49.890 --> 01:08:52.330
Someone here is thinking about whether

01:08:52.330 --> 01:08:55.024
they'd like to make a
comment, I'd like to try

01:08:55.024 --> 01:08:57.820
to nudge you to the side
of giving it a shot.

01:08:57.820 --> 01:09:00.210
While you're thinking about
it, Bruno if you wouldn't

01:09:00.210 --> 01:09:03.470
mind reading the statement
from Michael Dougan, Dougan.

01:09:04.630 --> 01:09:07.400
- Michael Dougan, Booz Allen Hamilton,

01:09:07.400 --> 01:09:09.900
this is text taken from
a Booz Allen white paper

01:09:09.900 --> 01:09:14.607
called Analyst 2.0: Redefining
the Analysis Tradecraft.

01:09:16.860 --> 01:09:19.589
It's a little lengthy, Analyst 2.0--

01:09:19.589 --> 01:09:22.460
- Well can you--

01:09:22.460 --> 01:09:23.450
- I'm gonna read--

01:09:23.450 --> 01:09:27.440
- Sure, just a reasonable quantity of it,

01:09:27.440 --> 01:09:29.522
no more than five minutes.

01:09:29.522 --> 01:09:31.772
(laughing)

01:09:33.700 --> 01:09:36.700
- Making sure artificial
intelligence works for the mission.

01:09:36.700 --> 01:09:39.440
Artificial intelligence
and other advanced analytic

01:09:39.440 --> 01:09:42.370
approaches are rapidly becoming
integral to the intelligence

01:09:42.370 --> 01:09:45.628
mission as our nation's security
posture grows more complex

01:09:45.628 --> 01:09:48.953
and we need to keep our eyes
on more people and places,

01:09:48.953 --> 01:09:50.540
the volume of critical

01:09:50.540 --> 01:09:53.333
intelligence data is
expanding exponentially.

01:09:54.500 --> 01:09:57.982
It is becoming difficult for
analysts alone to keep pace.

01:09:57.982 --> 01:10:00.670
There is simply too much
data to be brought together

01:10:00.670 --> 01:10:04.264
and analyzed in the short time
frames required by the mission.

01:10:04.264 --> 01:10:08.021
The military and intelligence
communities recognize that

01:10:08.021 --> 01:10:11.277
advanced analytics hold
great potential and

01:10:11.277 --> 01:10:14.010
and they are beginning

01:10:15.046 --> 01:10:15.960
to adopt these emerging technologies.

01:10:15.960 --> 01:10:19.300
With A.I. for example, instead
of an analyst spending hours

01:10:19.300 --> 01:10:22.882
poring over a stream of
satellite photos looking for

01:10:22.882 --> 01:10:25.058
significant changes, the computer

01:10:25.058 --> 01:10:28.540
might complete the task in seconds.

01:10:28.540 --> 01:10:31.420
This frees up the analyst to
spend more time on higher level

01:10:31.420 --> 01:10:33.968
analysis, reviewing what
the computer has found

01:10:33.968 --> 01:10:36.430
and then preparing reports for decision

01:10:36.430 --> 01:10:39.700
makers that are both
timely and comprehensive.

01:10:39.700 --> 01:10:41.910
In essence, machines
are doing what they do

01:10:41.910 --> 01:10:44.763
best so that people can
do what they do best.

01:10:46.885 --> 01:10:48.990
But this shift, turning over
much of the repetitive work

01:10:48.990 --> 01:10:52.690
to a computer, is also presenting
defense and intelligence

01:10:52.690 --> 01:10:55.760
organizations with a
significant challenge.

01:10:55.760 --> 01:10:58.250
How can they be sure that
the outputs of the computer

01:10:58.250 --> 01:11:00.990
are both accurate and
relevant to the mission?

01:11:00.990 --> 01:11:02.990
How can organizations be confident that

01:11:02.990 --> 01:11:05.313
the analytic tools are working for them?

01:11:06.220 --> 01:11:09.000
The stakes here are of the highest order.

01:11:09.000 --> 01:11:12.330
The expertise of the analyst
is vital to national security

01:11:12.330 --> 01:11:14.870
and if it is lost or
diminished in the human-machine

01:11:14.870 --> 01:11:17.460
connection, the risk can be significant.

01:11:17.460 --> 01:11:20.600
What if the computer doesn't
have it quite right, and faulty

01:11:20.600 --> 01:11:23.810
analytic outputs are used by commanders

01:11:23.810 --> 01:11:26.743
or other decision makers down the line?

01:11:26.743 --> 01:11:30.891
Yet another challenge is
that analysts may not accept

01:11:30.891 --> 01:11:34.680
and use A.I. informed
analytics either because

01:11:34.680 --> 01:11:37.380
they don't trust the
outputs or because they fear

01:11:37.380 --> 01:11:40.240
that the computers will
put them out of a job.

01:11:40.240 --> 01:11:43.070
There are already examples of
this at some organizations.

01:11:43.070 --> 01:11:46.420
New technology systems are
introduced with great fanfare

01:11:46.420 --> 01:11:48.650
and then promptly ignored by analysts who

01:11:48.650 --> 01:11:51.910
are free to pick the tools they want.

01:11:51.910 --> 01:11:54.200
And yet without the new
technologies, decision makers

01:11:54.200 --> 01:11:57.120
won't be able to take full
advantage of the available data,

01:11:57.120 --> 01:11:58.470
something that is essential

01:11:59.343 --> 01:12:00.950
to keep pace with today's threat.

01:12:00.950 --> 01:12:03.220
Unfortunately, most
current approaches to A.I.

01:12:03.220 --> 01:12:05.140
and other advanced analytics don't

01:12:05.140 --> 01:12:08.475
resolve these dilemmas, in
fact they only make them worse.

01:12:08.475 --> 01:12:11.492
With all the hype around
A.I., data scientists

01:12:11.492 --> 01:12:15.650
and others are caught up in
what the technology can do.

01:12:15.650 --> 01:12:18.770
For example they try to build
better and better models

01:12:18.770 --> 01:12:21.720
for better recognition
or object identification,

01:12:21.720 --> 01:12:23.730
but this research is largely academic

01:12:23.730 --> 01:12:25.420
and theoretical and not tied to

01:12:25.420 --> 01:12:27.290
the specific mission at hand.

01:12:27.290 --> 01:12:30.234
Yes the tool can look for
changes in photos, but is it

01:12:30.234 --> 01:12:33.510
the kind of change that
the analyst is looking for?

01:12:33.510 --> 01:12:37.170
Too often such
contextualization is missing and

01:12:37.170 --> 01:12:39.960
when that happens, the tool simply

01:12:39.960 --> 01:12:43.570
can't be relied upon to
support decision making.

01:12:43.570 --> 01:12:45.530
Automation and speed count for nothing

01:12:45.530 --> 01:12:47.030
if the computer gets it wrong.

01:12:48.680 --> 01:12:53.680
- Thank you, any other
takers for this opportunity?

01:12:58.360 --> 01:13:03.360
Going once, twice, alright
we are going to adjourn.

01:13:05.890 --> 01:13:09.310
So we ask you to stay seated for a moment

01:13:09.310 --> 01:13:11.300
while the board retires to the green room.

01:13:11.300 --> 01:13:13.500
If you are a member of the
media and you'd like to

01:13:13.500 --> 01:13:16.760
participate in the media
availability please come forward.

01:13:16.760 --> 01:13:19.610
Let me close by saying that
we would really encourage you

01:13:19.610 --> 01:13:23.418
to invite others to submit
comments online at that website,

01:13:23.418 --> 01:13:25.523
it's quite easy to do,
takes a couple of clicks,

01:13:27.261 --> 01:13:29.500
we would really love to hear
from you and let me also close

01:13:31.108 --> 01:13:31.960
by saying that we will have
another listening session like

01:13:33.400 --> 01:13:35.590
this one in California on April
26th at Stanford University.

01:13:35.590 --> 01:13:38.590
You will be able to tune
into that live stream and get

01:13:38.590 --> 01:13:40.810
information about that
by signing up for our

01:13:40.810 --> 01:13:43.980
invitation list and we'll
send that information to you.

01:13:43.980 --> 01:13:46.410
And let me just close by
saying thank you all very much

01:13:46.410 --> 01:13:49.580
for coming to participate in
this conversation and listening

01:13:49.580 --> 01:13:52.020
to the discussion and particularly thanks

01:13:52.020 --> 01:13:54.780
to those who went to the
microphone to share their views,

01:13:54.780 --> 01:13:56.630
greatly appreciate it, thank you all.

