Dr. Vint Cerf on “Reinventing the Internet”

>>Andrew Conklin:
All right, thank you. Without further delay,
it’s my honor to introduce tonight’s speaker. Dr. Vint Cerf has
served as vice president and chief Internet evangelist
for Google since October 2005. Cerf is the former
senior vice president of technology strategy for MCI. Widely known as one of the
fathers of the Internet, Cerf is the co-designer
of the TCP/IP protocols and the architecture
of the Internet. He performed this
work during his tenure with the US Department of Defense’s Advanced
Research Projects Agency. Cerf has acquired numerous
accolades over the years. In ’94 he was listed in
People magazine’s 25 most intriguing people. In ’97 he received the US
National Medal of Technology from President Clinton; in 2004,
the ACM Turing Award; in 2005, the Presidential Medal
of Freedom from President Bush. In 2013 he was appointed
by President Obama to serve on the National Science Board. Dr. Vint Cerf has
either started, chairs, or is currently a fellow of virtually every reputable
Internet technology related association on the
planet [laughs]. If you’re looking for
a conversation starter, ask him about Leonard Nimoy,
the Jet Propulsion Laboratory, and what it’s like to be married
before the Internet existed. Ladies and gentlemen,
it’s with great privilege, I ask you to please join me
in welcoming the president of the Association for
Computing Machinery, Dr. Vint Cerf [clapping].>>Dr. Vint Cerf:
Well, thank you. Welcome to the Google
DC day care center. I hope you notice all
the primary colors and everything else. It’s really a pleasure to
be able to use this facility for meetings like this. So I’m very pleased that
the ACM chapter came to us and asked if it was possible. And I certainly appreciate
the Internet Society’s streaming of this activity. This is a special kind of event
for me, in a way, because three of the organizations
that I care a great deal about are represented here. ISOC, where I served
as the first president; ACM, where I serve as the current
president; and Google, where I serve as one
of the vice presidents. So it means a lot to
me to see all three of these organizations
collaborating together on tonight’s event. I thought I would mention
one thing before I get into my formal presentation. The 1994 People magazine
thing was a bit of a surprise. I don’t quite know how they even
figured out who the heck I was, because the Internet was not
terribly widely known to the general public. The World Wide Web
was just starting to show up. But the irony of all
this is that my wife, who lost her hearing
when she was three, got two cochlear
implants, one in ’96 and one in 2006, ten years later. Her story is so dramatic
and wonderfully compelling that they sent a photo team out
for three days tracking her. She got six pages
in People magazine in 1998, and that’s
always been a source of comic, you know,
tension in the house. I’ve never been able
to get back in again. Second thing I wanted to
mention, you’ll be pleased, I hope, to know that Internet and the World Wide Web
has been recognized by an extraordinary source as a
major engineering achievement. We think of the ACM Turing
Award as being the Nobel Prize of computer science, but there
isn’t anything like that for engineering. Other prizes don’t
even come close. There is a Queen Elizabeth
prize for engineering now. It was established last year. It’s awarded every two years,
to now up to five people, originally up to three. But now it’s up to five. It’s a 1,000,000 pound prize. That’s about a million
and a half dollars. Bob Kahn and I, and Tim Berners-
Lee, Marc Andreessen and Louis Pouzin all
were announced as winners of the first Queen Elizabeth
engineering prize. And what is ironic, my wife
and I are moving to London for six months for some work
for Google, and less than a week after we arrived in
London, we will have lunch with the Lord Mayor of London,
we will go on a Thames boat ride with the Prime Minister. Then we will go to
Buckingham Palace, where we will meet Queen
Elizabeth, and she will hand each of us a check for
200,000 pounds. After that it’s all downhill. So, isn’t that a remarkable
welcome! So tonight what I thought I
would do is just give you a sense for, you know, the
current state of the Internet, which most of you I’m sure,
are very familiar with, some of the trends that we can
already see are happening to it. Some of the implications
of those trends. And then some of the problems
that we all should have not only an interest in, but feel a responsibility
for pursuing. But let’s start just with
some simple statistics. These data are rather
out of date. And the organizations that have
been producing them have not, at least to my knowledge,
produced anything more recent. Although the last time I
looked was a couple weeks ago, so I could be wrong about that. But the 900 million devices
that are visible on the net with domain names and, generally
speaking, fixed IP addresses are the servers that
are on the system. They don’t count the laptops
and desktops and other things that are episodically connected. If you add all of those, then
you’re getting close to the 2 and a half, 3 billion devices,
although there isn’t any way for us to actually
know that for sure. There’s no central
registry for that. And in terms of the
number of users in midyear of last year, it
was 2.4 billion. My guess is it could
be easy to argue that there are 3
billion users now online. It’s a little hard
to tell, because of the 6 and a half billion mobiles that
are out there, some fraction, maybe on the order of 20% today, and probably 50 to
100% some years from now, will have smartphone
capability and be able to reach the Internet too, and
so a lot of people are getting onto the Internet with a first
experience from a mobile. And for many of them
it may stay that way. Although as time goes
on, as costs come down, new technologies arise, more and more people will have
multiple devices they can use to interact with the network. So one meme that I’d
like to leave you with tonight is I believe
that our early experience of interacting with the net one
device at a time will change over time, and that we will
be using multiple devices concurrently to make
use of the net. I’m wearing Google glass today. Google glass is connected
to the Internet. I’ve turned off the display
to keep from being distracted, but if I tap it on the
side, it’s waiting for me to say something, so I’ll say,
“okay, glass, take a picture.” It just took a picture of
this group of people here. This is a voice-activated
device. I’m sure many of
you read about it. We have 2000 of them
out in use today. We’re ramping up to 10,000
by the end of the year. And then we go into
production in 2014. They’re not being sold directly. People have to apply, and they
have to offer a description of what it is they
propose to do. What we’re interested in doing
is getting these in the hands of people who will
actually invent some interesting applications. We still ask them
to pay for these. They’re $1500 each today. I have no clue what it’s
going to look like in 2014. One hopes that as the
quantities increase, the price could potentially
come down. So these are, yet
again, another kind of device, connected to the net. And one theme in tonight’s
discussion is the increasing number of the variety
of devices that are part of the Internet environment. The other interesting
statistic, again, it’s from midyear last year,
is where the people are who are using the net. 10 years ago North America was
absolutely the largest number of users and the highest
penetration of Internet. It’s still the highest
penetration. It’s slightly under 80%, but Asia has the absolute
largest number of users. And China has the largest number
of users in any one country. They have 500 billion,
sorry 500 million people. Half a billion people are
online in China today. That’s still only on the order
of maybe 40% of the population. So you can imagine
what that will look like when we get to 80% or so. It will be more than 1 billion
people there, online, in China. So it’s interesting, you
know, we hear all the stories about Chinese attacks
on the net and so on, which are presumably true,
but at the same time, it’s fair to note that the
country is making huge
investments in building
Internet infrastructure. So an interesting question
they have this enormous fraction of the population all online,
all able to use the system, able to exchange
information with each other? How does that change
the regime’s view?” And I don’t know the answer to
that, but I think this is a, we’re watching a very very
interesting sociological experiment with technology
taking place before our eyes. And so as this decade
unfolds, I think this is going to be among the more
interesting effects to observe. Now, Europe is the next largest,
in absolute number, of users. Although you’ll notice the
penetration rate is below North America’s. But I’ve given up making
any predictions about Europe because they keep adding
countries, so the definition of Europe is changing, and
I don’t know what it looks like 10 years from now. And the others, I won’t
go through all of them, but you can see that
Africa is still only modestly penetrated. This number is misleading though, because I think we’re
seeing a very rapid growth in mobiles in Africa. I don’t know what fraction
of them are Internet capable, probably some fraction, either
20% or 30%, or something. And what we’re seeing is
individuals, or small families, who are willing to pay
to get onto the network, or at least to get a mobile,
which is a significant fraction of their monthly income. And yet they do this
because of the value to them. And so when people
make up stories about, “what will people do with
mobile phones in Africa?” And they have things like
“getting” prices in the cities so that you can figure out where
to sell your [inaudible] best. The honest fact is
they’re just like we are. You know, they use
social networking. They do all the other
things we do. Although the data rates that they have available
are more limited, and so some things they don’t
try, like streaming video, until the data rates
are adequate. But their use is not
terribly different than ours, except for whatever real
limitations there are on the local conditions. Well let’s look now at
the current Internet and see what’s happening to it. Everybody in this room
is well aware that we ran out of IP version 4 address
space in February 2011. I don’t know that it was an
embarrassment for Bob Kahn and me, but it was sort of
an interesting surprise. When we were first doing
the design in 1973, now, 40 years ago, and
the question came up, “how many addresses
should we have on this experimental Internet?” We didn’t know the
answer to that. So we said, “Well, let’s see, how many computers will
there be per country?” And at the time we
were doing this, machines we were using were
big time-sharing machines, so we thought, “Well, there
couldn’t be too many of those. They take up a lot of room
and they’re expensive.” But we said, “Let’s,” you
know, “be crazy about it.” We allocated 24 bits of
address space for the machine. 16 million. You know, what we thought of
as mainframes per country, and we thought, “boy,
that’s overkill right?” Then we said, “How many
countries are there?” We didn’t know. There wasn’t any
Google to look it up. So we guessed 128 because
that was a power of two. So then we said, “Well, how
many networks will there be per country?” And we thought, “Well, we
just built the ARPANET and it wasn’t exactly
inexpensive to do that, so maybe there’ll
be two per country.” You know, two, for competition. So we said, “128 times 2 is 256. That’s 8 bits plus 24. 32 bits.” Though we actually
debated later, in 1977, just as the Internet protocols
were being standardized, whether we should
expand the address space. One group wanted
variable-length addresses, and the programmers
hated that idea, because you’d waste computer
cycles finding the fields in the packet because
of the variable length. So we didn’t do that. And somebody said, “How
about a 128 bit address space?” And at the time, we were
running a 50 kilobit backbone, and frankly, an awful lot of the exchanges were full
duplex interactions, character by character
echoplex. And if you can imagine
the overhead, for one character
going back and forth with 256 bits of
source and destination address. That didn’t pass the
red face test, so we couldn’t do that either. Then we thought 32 bits would be enough.
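The back-of-the-envelope sizing described above is easy to check. Here is a small sketch (the variable names are mine, not from the talk) reproducing the 1973 arithmetic, plus the IPv6 figure that comes up later:

```python
# Back-of-the-envelope sizing of the IPv4 address space, as described above.
host_bits = 24                  # "mainframes" per country: 2**24 is about 16 million
countries = 128                 # a guess, rounded to a power of two
networks_per_country = 2        # "two per country" -- for competition
network_ids = countries * networks_per_country   # 256 network IDs -> 8 bits

total_bits = host_bits + 8      # 24 + 8 = 32 bits
print(network_ids)              # 256
print(2 ** host_bits)           # 16777216 machines per network
print(2 ** total_bits)          # 4294967296 total IPv4 addresses

# IPv6 later widened the address to 128 bits:
print(2 ** 128)                 # about 3.4e38 addresses -- far fewer than the
                                # roughly 1e88 electrons in the universe, hence
                                # the "off by 50 orders of magnitude" correction
```

Since 10^88 / 2^128 is roughly 10^49.5, the Caltech correction mentioned a bit later in the talk checks out.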
Well, we all got scared in the mid-1990s, and started work on what was going to be called
IPng, for next generation, and eventually that produced
the IPv6 address space, which Bob Hinden and
others developed with 128 bits of address. So, in a very funny way, everyone has been using the
experimental Internet until now, and the production version
has now been finally, officially released
on June 6, 2012. I have a favor to ask
of every one of you. If you’ll go to whichever
ISP supplies you with Internet address
space, and ask them, “What is your IPv6 plan? When can I get IPv6?” This is important because it’s
been very very slow in terms of penetration, mostly
because ISPs haven’t bothered to turn it on. A lot of their software
is already capable. The routers have it. The edge devices have IPv6, but
the ISPs aren’t turning it on, and they’re saying,
disingenuously, “Nobody’s asking for it.” Well of course they’re
not asking for it. Ordinary users don’t even
know what IPv4 or IPv6 is. And they shouldn’t have to. But you do. And so if you will help us
embarrass them or force them into admitting people are asking
for it, that would be a big help, so thank you for that. Now, I was trying to
explain what the implications of 128 bit addresses are. I remember saying, one time,
that that meant, “Every electron in the universe could
have its own webpage.” Somebody from Caltech wrote a
note saying, “Dear Dr. Cerf, you jerk, there’s 10 to the
88th electrons in the universe, and you’re off by 50
orders of magnitude.” So I don’t say that anymore. The other thing that’s
happening is the domain names, which have been written in
Latin characters now can be written in non-Latin
characters, meaning, using the Unicode character set, which accommodates
100,000 or more characters. That same Unicode set is used
in HTML and XML for the web, so it makes the web and
the domain name system much more compatible. And ICAN, the Internet
Corporation for Assigned Names and Numbers, initiated
a program of expansion of the generic top
level domains. Over 2000 applications
were received before these new domains. Many of them were in conflict because people picked
the same ones. But it’s certainly on the order
of 1500 of them are unique, and the question now is which ones will actually become
registered as top-level domains. That process is still unfolding. So this is a major
expansion of the namespace. Now we all understand the
domain name system has a number of vulnerabilities. One of which is poisoning
of the name server cache. And so in order to get rid
of, or to detect and inhibit, the effects of that
kind of poisoning, the domain name system
security architecture is being deployed now. Basically it’s just digitally
signing of the binding between the domain
name and an IP address. So when you do a look-up,
to translate the domain name into an IP address, you can
ask for a signed response. If you get the correct
response back, and the encrypted [inaudible]
matches what you expect, then at least you
have some confidence
in the integrity
of the binding. Someone hasn’t gone in,
to the wrong place, because they don’t
have the private key that would have been used to
sign that particular binding. So that’s being deployed now. The entire root zone
is signed, and a number of the top level
domains are signed, and some of the second-level
domains are signed as well. A similar idea has come up with
regard to the routing system. Now, remember that this whole
thing was developed at a time when it was mostly geeks
that were doing the work, and they trusted each other. Now we’re in an environment
where you can do that anymore, so we have to retrofit
mechanisms that will deal with an untrustable components. In the case the routing
system, basically, in order to become part of
the net, you just announce that you are responsible
for a particular chunk of the address space. I’m autonomous
system number four and my IP address space is this. And that propagates around
through the routing system. The problem is you can lie,
and when you lie about this, then you hijack somebody’s
address space. In order to defend against that
it has been proposed, but not yet implemented, that we build a
similar kind of digital binding, digitally signed binding, between the economist system
number and the IP address space, which it is responsible for. So that when you’re
doing routing updates, you can ask a check to
see whether the party who is announcing the space
actually has the authority to do that. There’s some complexity to this because not all routing
announcements come from the edge. They get propagated
through the net. So some of the announcements
are coming from the middle, and so you have to be able to
verify that the whole chain of announcement is
actually valid. And so it makes it
somewhat more complicated, which is why it hasn’t
been deployed, but it’s in the process
of being developed. Now I’m going to talk more
about sensor networks, the smart grid program
and mobile devices, so this is just a
reminder that again, in our Internet environment
these 3 things are rapidly filling up the system. Now we all, I’m sure, recognize
that devices are showing up on the net that I
certainly hadn’t anticipated. Actually this is an older slide. They now have a picture of Larry
Page wearing Google Glass, as another Internet
enabled device. But I can remember being very
surprised when somebody came in and said to me they had just
found an Internet enabled picture frame. And I couldn’t imagine,
what would that mean? All I could think
of was it sounds as useless as an electric fork. Bonk. But in fact it’s
obviously very useful, and many of us have these now. We have mobiles. We upload our pictures
to a website. The pictures are downloaded into the Internet
enabled picture frame, and we have an opportunity to watch what’s going
on in the family. So it’s kind of a crude form
of Google plus or Facebook, in a way, and it’s
just personal. And it’s at home and
it’s actually rather nice for the grandparents because
you can upload pictures of the grandchildren,
nieces and nephews and see what they’re doing. Now the way this works is, you know, the frame downloads
images from some website. So now security is an issue, because if the website gets
hacked the grandparents may see pictures of what they hope
is not the grandchildren. So now we have to worry
about, you know, security at home as well as at work. There are things that
look like telephones that are really voice-over
IP devices. The guy in the mill has
developed an Internet enabled surfboard. And I don’t know them, but I
have an image of him sitting on water, waiting for the
next wave, thinking, you know, “If I had a laptop on my surfboard I could be
surfing the internet while I waited for the next wave.” So he puts a laptop
in the surfboard and then built
Wi-Fi service back at the rescue shack
on the beach. Now he sells this as a product. So if you want to go
surf the net while you’re out on the water,
this is for you. I used to tell jokes
that every light bulb in the world will someday
have an IP address, I can’t tell jokes
about that anymore because somebody sent me an IPv6
radio enabled LED light bulb. And it works! I have an IPv6 network
running in the house, and you can turn the lights off
and on with that little gadget. Now the interesting
thing is that, not only can you
turn it off and on, but you can tell whether
it’s broken or not. You can interrogate it
and find out. So these sorts of things
are becoming reality and so if you have only imagined
that this is science fiction and speculation, it is not. [ Pause ] So this is my sensor
net at home. This is an IPv6 radio net. It’s a commercial product,
so it’s not me in the garage with
a soldering gun. The company that makes these is
Arch Rock, which was acquired by Cisco Systems a
couple of years ago. The boxes are about
the size of a mobile. They run on 2 AA batteries. They are sampling a light level,
humidity, and the temperature in each room in the
house, every 5 minutes. And that data is propagated
through this network of sensors. So the sensors are actually
store-and-forward relays. They are running 6LoWPAN
and IPv6 on top of that. And so the data is accumulated
in the server that I have in a rack in the basement. So you know, only a geek
would do this, right? But my thought was
that at the end of the year I will have
very good information about how well the
heating and ventilation and air conditioning
has actually worked. Instead of anecdotal information
I have real engineering data. So, and as an engineer,
I like that. Now one room in the
house is a wine cellar. And so that one is armed,
and if the temperature goes above 60 degrees
Fahrenheit, I get an SMS on my mobile telling me
that the wine is warming up. I remember going into
Argonne national laboratory for a three-day visit. Just as I was walking into the
building, the mobile goes off. It’s the wine cellar calling. “You’ve just broken through the
60 degree Fahrenheit barrier.” No one was home at the
time, and so every 5 minutes for the next 3 days, I kept getting this
little message saying, “Your wine is warming up.”
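The alarm behavior described here, one message per five-minute sample while the cellar stays above threshold, is easy to sketch. This is a pure simulation; the readings and the message format are invented, and a real version would hand each message to an SMS gateway:

```python
# Sketch of the wine-cellar alarm: one alert per 5-minute sample while the
# temperature stays above the 60 F threshold. Readings are invented.
THRESHOLD_F = 60.0

def alerts(samples_f):
    """Given one temperature sample per 5 minutes, return the alert messages."""
    msgs = []
    for temp in samples_f:
        if temp > THRESHOLD_F:
            msgs.append(f"Your wine is warming up: {temp:.1f} F")
    return msgs

# Three days at 5-minute intervals is 3 * 24 * 12 = 864 samples; if the
# cellar stays warm the whole time, that is 864 messages.
readings = [59.5, 60.2, 61.0, 59.9, 62.3]
for m in alerts(readings):
    print(m)   # three alerts, one for each reading above 60 F
```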
So I called the Arch Rock guys. I asked, “Do you have
remote actuators?” And they said yes. And I asked, “Do you have
strong authentication on them?” because I have a
15-year-old next door, and I don’t want him
messing with my wine cellar. And they said yes. So that’s a weekend
project to install. Then I got to thinking, you
know, I could probably tell if somebody has gotten into
the wine cellar while I’m away, because I can see if the
lights have gone off and on, but I don’t know what
they did in there. So I thought, “Okay,
solution to this is to put RFID chips
on each bottle.” Then I can do an instantaneous
inventory to see if any of the wine has left the cellar
without my permission.
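The instantaneous inventory is really just a set comparison between the bottles the cellar should contain and the RFID tags a scan actually sees. A minimal sketch, with invented tag IDs:

```python
# Sketch of the RFID wine inventory: compare the expected set of tag IDs
# against what a scan of the cellar actually returns. Tag IDs are invented.
expected = {"bottle-001", "bottle-002", "bottle-003", "bottle-004"}
scanned  = {"bottle-001", "bottle-003", "bottle-004"}

missing = expected - scanned   # left the cellar without permission
unknown = scanned - expected   # tags we never registered

print(sorted(missing))   # ['bottle-002']
print(sorted(unknown))   # []
```

Of course, as the bug report in the next paragraph points out, a tag on the bottle says nothing about whether the wine is still in it.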
And I was describing this brilliant design to one of my engineer friends and
he told me there was a bug. I said, “What do you
mean there’s a bug?” And he said, “Well you could
go into to the wine cellar and drink the wine
and leave the bottle.” So now we have to put
sensors in the cork. And as long as we’re
going to do that, we might as well sample
[inaudible] whether the wine is ready to drink. So before you open the bottle,
you interrogate the cork. And if that’s the bottle that
got to 80 degrees or something, that’s the one you give
to Mike Nelson, because he doesn’t
know the difference. Actually, this sort of thing
is going to be very common, not necessarily the wine
cellar part. Although in fact, I had someone
come out with a prototype, RFID gun, a bunch of RFID
tags, and we actually wandered around the wine cellar and
picked up every single bottle. So this is actually doable. It’s not a crazy idea. The thing is that we will
have sensor networks, and we’ll use them for
environmental control. We’ll use them for security,
and we’ll use them for a variety of things to allow the
residence or business or manufacturing
facility to be managed and controlled and monitored. So these things will
be very much a part of the Internet environment
over time. Now there’s an interesting
question. I’m going to use the sensor
network as a tool for walking through a bunch of questions. Now the questions are very
specific to the notion of sensor network, architecture
and large numbers of devices that have to be configured
and access controlled. But I want you, as we go through
this particular discussion, I want you to be
thinking generally about internet-enabled
appliances of all kinds having to be managed and controlled,
and wanting them to be isolated from parties whom might want to
use them for nefarious purposes. The thing that we don’t want
is a big headline that says that there was a distributed
denial of services attack against Bank of America by
every refrigerator in America. So we need to be able to
think our way through, “how you will configure,
install, operate, protect and also move from
one place to another.” So imagine that you take
your appliances with you. They have IP addresses in them. Maybe they will get
new IP addresses when they are installed
in a new residence. How do we do all of that without
expecting every single homeowner to become an expert in
configuring IP devices? They shouldn’t have to do that. They shouldn’t have to
know anything about that. And so there are
a whole series of design and operational questions. Should there be a
single controller? Should there be a controller
for each class of device? How should we get
each device to connect up with the appropriate
controller? What if you’re in an
apartment building, and your controller
detects devices that are in the neighboring apartment? This is sort of like
the problem you have with an electric blanket, where the 2 parties have
the controls switched. One is turning it
up to get warmer and the other is turning
it down to cooler, and of course they’re
cross connected. So there are these questions
that are actually pretty hard. And I think some of us are
going to have to figure out what’s the right
way to do it. Should they all be wireless? Well, depending on where
they are installed, you may or may not have good
radio-connectivity, even just inside the house. So some of them might have to be
wired, some might be wireless, some might use different
frequencies. None of this is easy, and every
device that is sold to be part of one of these sensing
environments is going to have to be adaptable to
different conditions depending on what is found in the place
where they’re installed. On [inaudible], there
are similar kinds of scaling questions that occur,
and once again, very important, make sure that these are
controlled only by parties that have the authority
to do that. You don’t want some
random person halfway around the world turn on
a manufacturing device and possibly harm
one of the workers. The other question is
whether the devices interact with each other, or whether they only interact
with an edge device. And here we get into
the interesting example of the Arch Rock system, where the devices are
relays for each other. In addition to providing sensory
data to some accumulator, they
also are part of a network which is self-adapting to
local radio conditions. The questions are
almost endless. There’s some more here. How do you decide what IP
address a device should have? Do you generate these
using IPv6, presumably because
v4 won’t be useful. But there is a question about
what the prefix should be, what should the addresses be, how are they [inaudible],
are they remembered? What about auto discovery? How do I discover other devices
that are part of the system? Some of you will
know a little bit about how Bluetooth
system works. If you’ve used it, you know
that when 2 devices are in close enough proximity, one
of them will present a number which you enter into the
other one in order to confirm that you want the 2
devices to be connected. Now the question is
whether every device in this environment has a
display capability or not. And if it doesn’t then
we start thinking about, “well how do I manage
these things?” And it could be that your
mobile becomes the device that you use, not
only to control them but also to configure them.
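The Bluetooth-style confirmation described above, where both sides derive a short number from a shared exchange and a human checks that the displays match, can be sketched as follows. This is a simplification: the key agreement that produces the shared material is elided, and the device IDs and nonces are invented:

```python
import hashlib

# Sketch of numeric-comparison pairing: both devices hash the shared
# exchange (here just two device IDs plus nonces, standing in for a real
# key agreement) down to a 6-digit code a human can compare on both screens.
def pairing_code(shared: bytes) -> str:
    digest = hashlib.sha256(shared).digest()
    number = int.from_bytes(digest[:4], "big") % 1_000_000
    return f"{number:06d}"

# Both sides compute the code over the same exchanged material:
exchange = b"mobile-1234|lightbulb-5678|nonceA|nonceB"
code_on_mobile = pairing_code(exchange)
code_on_bulb = pairing_code(exchange)
print(code_on_mobile == code_on_bulb)  # True: the user confirms the match
```

The open question in the talk remains: if the light bulb has no display at all, something else, perhaps your mobile, has to present and confirm the code on its behalf.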
And so the possibilities here are fairly complex, and we’re going to have to figure
out how to make that work. What about web tools? I mean, it’s very attractive
to be able to go to a website and say, “I would like to use a
webpage to manage my devices.” And if we standardize
things properly, then third parties will
be able to offer a service to manage your systems for you. Think about entertainment. I don’t know about you, but
I have lots of boxes at home that have infrared
controllers for them. And usually I fumble around
for a while trying to figure out which controller is the
right one for which box. And when I finally get that
right that’s the controller with the dead battery. So we put all of them
up on the local network, and we use the mobile,
which is also on the local net
as the controller. Or maybe it’s a tablet
that can be a controller. And that means third-parties
can intervene if they wish, if you allow them, to be
the manager of the devices and all you do is go
to the website and say, “I want these songs and movies. Please populate all
the various devices on which this entertainment
content is of interest to me.” So it’s just incredibly
rich territory. What about state information? If you have things
that are gathering data about the temperature, humidity,
and light levels in my case, or other things about the
house, you may be able to infer whether anybody’s
at home, how many people are at home, when they are active
and when they are not? Depending on what
information is available. That could pose a
security hazard, you know, especially if they can figure
out that nobody’s home. So again this is a question of whether information
should be available, who should have access to it,
and how do you control that? So it’s just a long
long list of things that we’re going to
have to cope with. Here’s another interesting
thing: the set-top boxes. The cable provider configures
these things for you, and it operates a controller for
you, and it offers web access to control the system. And the set-top boxes, at least
some of them, now can interact with each other like
sharing a DVR. So that’s a common
example, but what happens if we transform these into
devices that are IPv6 enabled? Then they have the ability to
reach out and use the Internet, in addition to everything else. Now they become [inaudible]
be streaming video from other sources and the
cable television company. And that, interestingly enough,
creates a conflict, right? Because the cable TV
guy who is supplying you with broadband access to
the Internet discovers that you’re using the
broadband Internet to get access to somebody else’s video, which
messes up their business model, which leads to a possible
desire to prevent you from getting video
from anybody but them. That gets into net neutrality
debates and everything else. So very quickly, as we pursue
these technical problems, we also uncover and unearth
various policy issues that have to be resolved. And I’ve already talked
about the Arch Rock thing. The other big question is
diversity versus uniformity. Uniformity is really
helpful because it allows you to achieve interoperability. Diversity is important
because you want to be able to try new things, and figuring
out how to cope with both of those at the same
time is a big challenge. The smart grid is a program that
was started by the Department of Commerce and the Department
of Energy about 4 years ago. And their initial focus was to create a smart grid
interoperability panel to encourage standards
for devices that are part of the smart grid. So they could be managed. So they could respond,
for example, to advice saying, “the power grid
is reaching a peak load, and would you please
not heat the water or run the air conditioner
for a while.” This is important
because the ability to control load lets you not
build capacity, excess capacity, that’s only used a small
percentage of the time. And so that’s an important
capability, this ability to say, “Please don’t use
electricity now.” But you also have the
ability to get a report back, like which devices were used
during the course of the month and how long and when. That would tell you something about why your electric
bill is what it is. It’s what helps us decide
whether we should change our lifestyle, possibly
for economic reasons. Or maybe we just care about
the environment and we want to change our footprint of use
of electricity for example. So again, there are a whole
series of issues associated with a smart grid program
that are not like the ones with the sensor networks,
except one thing, which is really different. Up until recently most
of the power generated in the United States,
was centrally generated and distributed over power lines. We're now reaching the
point where we are capable of putting generation capability
at the edges of the net. And it gets even more
dramatic when you go beyond just photovoltaic and maybe wind power, and possibly even geothermal, as generators of electricity in your home. Now we get electric cars, and they have batteries. And so now you have
the possibility, not only of pulling power out of
the systems to recharge the cars but pushing power
into the system. Now I want you to think
about control theory if you took it in school. It gets more complicated
if systems can both push and pull electricity, you
know, at the same time. Now we have the problem that
every household in the United States is, independently, either generating electricity or pulling electricity. How do you balance
a system like that? And so if you’re interested
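To make that balancing problem concrete, here is a toy sketch in Python, purely illustrative and not any real utility's control algorithm, of a grid where every household can push or pull power and a central plant covers the residual:

```python
# Toy model: each household either draws power or pushes surplus back in.
# Something has to make supply meet demand in aggregate.

def net_load(households):
    """Sum of (consumption - generation) across households, in kW.
    Positive: the grid must supply power. Negative: the edges are
    pushing surplus back into the system."""
    return sum(h["consumption_kw"] - h["generation_kw"] for h in households)

def dispatch(households, plant_capacity_kw):
    """Decide how much central generation to run, and whether load must
    be shed or edge generation curtailed to stay balanced."""
    residual = net_load(households)
    if residual >= 0:
        supplied = min(residual, plant_capacity_kw)
        return {"central_kw": supplied,
                "shed_kw": residual - supplied,
                "curtail_kw": 0.0}
    # Net surplus from the edges: the plant idles, and the excess must
    # be stored, exported, or curtailed.
    return {"central_kw": 0.0, "shed_kw": 0.0, "curtail_kw": -residual}

homes = [
    {"consumption_kw": 3.0, "generation_kw": 0.0},  # ordinary load
    {"consumption_kw": 1.0, "generation_kw": 5.0},  # rooftop solar, net producer
    {"consumption_kw": 7.0, "generation_kw": 2.0},  # EV charging, some solar
]
print(dispatch(homes, plant_capacity_kw=10.0))
```

With these sample numbers the edges net out to a 4 kW draw, so the central plant supplies 4 kW; flip the numbers and the plant idles while the surplus has to go somewhere.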
in some of the thinking about that, look up micro-grids. Do a Google search on that, because the ideas now being considered actually distribute electrical power generation farther out, so that we aren't so dependent on one
single central facility in a particular area. We might avoid a lot of
the meltdowns people have experienced. This leads me to one other thing about the Internet, and the thing I mentioned
earlier about policy. One of the big issues
we have today is that there are countries that see how important the
Internet is, but they’re nervous about it, because it has
this democratic ability to let people share information. And for some authoritarian
regimes, that's the threat. We've seen the outcomes,
for example, in Arab Spring. I’d like to emphasize
something for you. I mean some people are
beating their chests, saying, "Look at the Internet. It was part of, or
Facebook and others were, part of the Arab Spring. And isn’t it wonderful?” Except outcomes of
revolutions don’t always come out the way you would like. And right now were seen some
pretty difficult problems in Egypt for example,
and Libya for example and Syria for example. Of the places where the
Arab Spring started, Tunisia is probably the
one which has had the most, I won't use the word placid exactly, but the most stable regime change. The others have been
very very difficult. So I want to be careful not to
tout the role of the Internet in all this as if it
is uniformly positive. You have to be really
thoughtful about how you recover from major regime change. But what this has led
to is a real debate about what role governments have
in the control of the Internet. But for all of us in
the Internet world, historically we’ve seen this
as a very bottom-up thing, where multiple stakeholders
are interested in making decisions
about policy. They’re the ones
that are affected by it whether it’s the technical
community, the general public, civil society, the
governments, the private sector, all have an equal stake in what
happens in a policy domain. And yet we have many of the
intergovernmental institutions, like the International Telecommunication Union, see this as the province of
governments to make policy. And this is a place
where there is plenty of opportunity for intervention. The World Summit on the
Information Society, which convened in 2003, began with the premise that there would be an information society that we were seeing
emerge on its own. What did it mean from
the policy point of view? And the first thing that
these government officials, remember this was an
intergovernmental discussion. The first thing that
they asked was, “What’s an information society?” And people said, “Well the
Internet is an example of that.” So they said, “Who’s in
charge of the Internet?" They thought nothing could possibly be this big and work if it wasn't centrally managed. And we said, "Well, nobody,
it’s a distributed system.” And they didn’t believe us. So then they discovered ICANN
because ICANN has responsibility for IP address allocation
and domain name management, and they decided ICANN must
be in charge of the Internet. And then they said,
“Who’s in charge of ICANN?” They decided ICANN had
a government contract with the Department of
Commerce, so NTIA must be in charge of the Internet. So that meant the US is
in charge of the Internet, and that didn’t sit well
with some of the countries. So that theme, that US is
in charge of the Internet, has been a persistent
issue since 2003. The Internet Governance
Forum was created out of the World Summit on
the Information Society, as a place where we could talk about issues arising
in the Internet. It’s a very important,
multi-stakeholder function. It doesn't make decisions. This turns out to
be really important because if you make decisions,
if you draft communiqués, or make recommendations, you
have to argue with each other, and you end up doing these
little wordsmithing things. In the end what we want is
people going away understanding the issues facing the Internet and how should they be
addressed and by whom. But not by IGF, IGF is there as
a way of surfacing the problems, and helping people figure
out what they should do. Let me skip through all of the
international telecom stuff. Although, some of you who
follow the World Conference on International Telecommunications know that there was
a major earthquake. This was intended to revise
the international telecom regulations, which hadn’t
been touched since 1988. And at that point,
of course, people, government officials
especially, weren’t thinking about Internet at all. So they didn’t take into
account what the Internet was like and what it could do. How it could subsume
almost all of the earlier
telecommunications applications and systems, which it has done. So they tried to revise these
things, and the countries that were most interested
in finding ways to control Internet, wanted to use the international telecom
regulation revision as a way of gaining authority
over the net. So they tried to put language
into the ITRs that spoke of spam, for example, spoke of security, spoke of viruses and worms and so on. Well, those are applications. That's content. And never in its history has the
international telecom union ever dealt with content. I mean, it was standardizing
telephone interaction and originally, telegraph
interactions, but it was never
responsible for content. They didn’t control what
you could say on the phone; they only tried to make sure
you could make a phone call. Well I think the Internet
should be the same way, but these guys didn’t
see it that way. The result of the debate
was that, of the hundred-and-forty-some-odd countries that attended, 89 signed the resulting treaty and 55 walked away, including the United States, which was the right thing to do. So that is the biggest schism in international
negotiations regarding telecom that ever happened in
the history of the ITU. I think it’s a very important data point, and I think that we should
actually kind of take advantage of the fact that there
is disagreement in order to force people to face the fact
that the Internet is different from all of the other
telecom systems we have dealt with in the past. So, let us skip over
to privacy and safety. You hear, “security, security, security is an issue
on the Internet.” It’s true. But what I’d like to suggest to
you is that word causes people to think too much about
national security. They think about
nation-state attacks and so on, which happens, so this is
not to argue that they don’t. But the general public kind
of goes this way the point where we’re talking
national security, but what they don’t
fully appreciate is that are also a part
of the problem. But it’s safety that
they care about. They would like to feel like they're safe on the Internet, and they
don’t feel that safe right now, because of all the various
problems that get reported. So I’ve been thinking that we
should be working out a kind of cyber-fire department. The model I have in my head
is somebody standing in front of his house, it’s burning down
and he’s got a garden hose. And he’s thinking, “I need
somebody with a bigger hose and more water in
order to put this out.” The first thing that
comes to mind is not to call the police department. You call the fire department. So, we don’t have a cyber-fire
department, and for a lot of individuals, households,
and small businesses, and everything else, they
don’t have the expertise to put out a cyber-fire. They don’t know what to
do about being infected, even if they know they are
infected, they don’t know what to do about a cyber-attack
of one kind or another. I would love to have a kind
of cyber-fire department that you could call
on, as a business, or even as an individual,
but the rules may have to be different from what
we do when we actually call the fire department. Your neighbor can call
the fire department if he sees your house on fire. The fire department feels free
to come in and break the windows and the roof in order
to put the fire out because the neighborhood is
at risk until the fire is out. But now, imagine your competitor
calls the cyber-fire department and says, “You’re
under attack,” okay? So the cyber-fire department
comes out, breaks everything down for three days, trying to
figure out what the problem is. Meanwhile, you’re
out of business, and your competitor is smiling. So the rules for being able to engage the cyber-fire
department may have to be different. They may have to be
voluntary, for example. Now the other thing that’s
important in this analogy is that the fire department,
after they put out the fire, try to figure out
why the fire started. And if it was arson, then
suddenly the Police Department or the FBI or other law
enforcement gets involved. You can imagine the
similar kind of scenario. Something else happens. The fire department’s
going to come out and inspect your
building to see whether or not you need fire
code or not. And we have fire
codes for buildings. And you can imagine, looking
for advice and even some rules for designing systems that
are safer than others. Just like we have rules for
doing buildings that are safer, and have rules for,
you know multiple exits and entrances for safety sake. So I think that we are
at a period of time when it’s worth actually
considering the invention of a cyber-fire department. There are three other things
that I wanted to mention. I have a few other things to
talk about, but I also know that I’m getting close to
having bored you for almost an hour. My daughter can't believe that. Here's a problem. Digital signatures are
really attractive, right? You can send, digitally
send, signed things anywhere. The question is, legally, is
a digital signature binding? And it’s not 100%
clear right now, especially on an
international basis, whether or not a contract that’s been
signed digitally has weight in court. And so there’s a
missing assessment. Either there’s legislation
that’s required, or maybe an international
treaty’s required, in order to determine whether
or not a digital signature on a contract is binding. We don’t have agreements
yet on that, but I think it would
help commerce if we could come to some agreement about that. I'm sure all of you are aware
of the current state of debate on intellectual property. On the patent side,
software patents have turned out to be a nightmare. And I, you know, think all
of us recognize why that is. The best example I can give you
of why openness is important for creativity and
innovation, is HTML. When the first browsers
were made, even the ones that Tim Berners-Lee did, had the feature that you could actually ask it
to show you the HTML source. And you could figure out,
“How do they do that?” And then they copied each other. And it was great
because, you know, webmasters learn
from each other. They developed new ideas. Everybody got to try things out,
and, you know, the evolution of the webpage was
pretty dramatic over a very short period of time because people would share each
other’s ideas and use of HTML. And that’s just one example of the reason why openness
has been so valuable. That’s why Google makes open
source available for Android and Chrome and Chrome
OS for the same reasons. But right now there’s a
lot of tension over that. Now when it comes to copyright,
it’s a whole other story. Some of you will remember
in the Constitution, the notion of copyright
was installed in there for two reasons: To benefit a
party who had created a work for a limited period of time. Originally it was 14 years. And if you are the author of
something, you could control who could make copies of
it for that period of time. If you were still alive
after your 14 years, you actually had an option to extend the copyright
again by another 14 years. I think you had to make
a payment to do that. And then after that
period had expired, the work became part
of the public domain. The intent here was to benefit the population. And the words are in there exactly that way. Well, what has happened
over the course of a couple hundred years is that the intellectual property
community has extended the lifetime of control that the
copyright holder has to 75 years after the death of the author. And this feels a little
excessive to me because it seems to have left out
the idea that things in the public domain would
benefit the general public. I think we’ve gone
way too far over. We’ve similar problems
with patents because we have people fighting
over patents, spending billions of dollars on things that probably shouldn’t
have been patented in the first place. And I think this is
largely true of software. So this is a big problem. And some of us in the technical
world are going to have to engage with the legal
people in order to work through the implications of changing the way
these things work. And the final thing I wanted to
mention on this slide has to do with the problem that we
are increasingly creating digital content. Every day when we use
word processing programs and we use spreadsheets or
we use presentation software of one kind or another, we create fairly
complex digital files. The thing is, the bits in
those files are only meaningful if the software that
created them is available. And one of the problems that
arises is that over time that software may not be
executable on the machines that you have available. And so if you carefully store
the bits and you move them from a 5-and-a-quarter-inch floppy to the 3-and-a-half-inch one, to, you know, the CDs or DVDs, Blu-ray and so on. If you move the bits from
one medium to another, there’s no guarantee
that the software that created them will still
be running on the machine that you’re currently using
or the operating system that you’re currently using. The mind experiment I
remember was, “What happens if it’s the year 3000 and
you’ve done a Google search and you turn up a
1997 PowerPoint file?” Suppose you’re running
Windows 3000, you know, just for the sake of
argument, and the question is, “Will Windows 3000 package
interpret the 1997 PowerPoint file correctly?” And the answer’s probably, “No,” and this is not a
gratuitous dig at Microsoft. Even if you were using
open source mechanisms, it’s not 100% clear that the
open source programs would continue to be compilable
for every machine up until the year 3000. So there’s a real
challenge there, which is maintaining the
ability to interpret the bits. If it’s scientific data, it’s
even worse because you need to keep track not only of the
bits themselves for measurements that have been made, but the
calibration of the instruments and the conditions under
which the data was collected. We don’t have a regime right
now that will either assure that we can retain
that information, or even that we can get access
to software when a company goes out of business or when
they release a new version and it’s not compatible
with the previous one. How do we get access to
the previous software? There is no current regime
which recognizes the importance of retaining access
to these digital bits. Some of you probably saw
the movie, "Lincoln," and you remember reading a book called "Team of Rivals," by Doris Kearns Goodwin. One of the interesting
things about her book is that she reconstructed the
dialogue over the time by going to 100 different libraries and finding the letters that the principals in that story had sent to each other. And she was able to reconstruct
dialogue based on that. Imagine that you're in the 22nd
century, and you’re trying to reproduce what life was
like in our 21st century. Everything we’ve
done is digital, and it’s all gone,
it’s not accessible. They won’t know who we were. We will not be good ancestors. We will be invisible ancestors. So if we care about this, were
going to have to figure out how to deal with that problem. [ Pause ] I really feel compelled to
tell you this, is it okay? All right, so the thing
that’s interesting about the Internet is that
despite the fact that Bob and I will celebrate
the 40th anniversary of the design this year, the first paper was
published in May of 1974, but the original paper was
written in September 1973. And we turned the system
on January 1, 1983. So it’s been operational
for 30 years. In spite of this age, it has
evolved pretty dramatically, and there are lots of
things that are still going on that could be done. But the OpenFlow idea out of Stanford University
and Nick McKeown’s group is a particularly interesting
example. You know, when we built routers, when Cisco and others built routers,
they used the address bits on the packets to figure out
where they should route them. So you look at the
destination address, you look it up in the
table and you squirt it out whichever pipe the
forwarding table tells you to do that. So Nick McKeown
and his guys said, “Why are we limiting
ourselves to the address bits? Why can’t we use other bits
in the packets to tell us where to route things?” What if we didn’t look at
the address bits at all, and just looked at the content, and routed based on that? That has a slightly multicast-
like ring to it, doesn’t it, because you can imagine
announcing you were interested in this kind of content. Now everybody that said,
“I’m interested in the kind of content,” would get
copies of the packets because the routing
table would say, “Anybody that said they wanted that content, we
would route to.” That’s multicast. So what they have done basically
is open up the notion of routing to a much broader
range of possibilities. This is really powerful stuff. And I don’t have time to go
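To give a flavor of the idea, here is a minimal sketch, with made-up field names rather than the actual OpenFlow wire protocol, of a flow table that matches on arbitrary header fields instead of only the destination address:

```python
# A flow table is an ordered list of rules. Each rule matches on any
# subset of header fields (unspecified fields are wildcards); the first
# matching rule decides the output ports.

def matches(rule, packet):
    """True if every field the rule specifies equals the packet's value."""
    return all(packet.get(k) == v for k, v in rule["match"].items())

def forward(flow_table, packet):
    """Return the output ports chosen by the first matching rule."""
    for rule in flow_table:
        if matches(rule, packet):
            return rule["out_ports"]
    return []  # no rule matched: drop (a real switch might ask the controller)

flow_table = [
    # Content-interest rule: everyone who asked for this topic gets a
    # copy, regardless of destination address -- the multicast-like case.
    {"match": {"topic": "sports-video"}, "out_ports": [2, 5, 7]},
    # Ordinary destination-based rule, emulating a classic router.
    {"match": {"dst": "10.0.0.9"}, "out_ports": [3]},
]

print(forward(flow_table, {"dst": "10.0.0.9", "topic": "sports-video"}))  # [2, 5, 7]
print(forward(flow_table, {"dst": "10.0.0.9"}))                           # [3]
```

Note that a table containing only destination-based rules behaves exactly like an ordinary router, which is the emulation point made below.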
into all the details of it, but Google decided to build
our wide area network backbone that connects all of our
data centers together using OpenFlow. So over a period of about 18 months, we worked with the OpenFlow team and actually implemented OpenFlow routers that we built ourselves, and
we put them into operation in the backbone linking all
the data centers together. It’s given us substantially
better performance in terms of utilization of
the optical network that links our data centers. So it was a very
ambitious thing to do. I didn’t have anything
to do with it. I can’t take any credit
and don’t want to. Urs Hölzle is our Senior
VP for Infrastructure, and his team just
went and did it. And it’s really impressive. So this is an example breaking
out of an older paradigm. It will emulate everything
that an ordinary router can do if it just sticks
with the address bits. But the fact is, it’s
like any new technology where it does the
previous thing, and then because you’re
using new technology to do the previous thing
you discover some new stuff that nobody realized
you could do. And that’s very exciting. We talked about configuration
management and the Internet of things. Here’s another problem-
certificates and certification. A lot of people thought that issuing certificates
would be a great way to do strong authentication, but that assumes you can trust
the certificate authorities that issued the certificate
in the first place. The problem with the certificate
notion is that the issuer of the certificate can
have any string they want associated with
the public key. There was no constraint. And so if you, in fact, were able to coerce a
certificate authority, bribe a certificate authority, or break into a certificate
authority and generate a false certificate
with Google or Microsoft or one of the other software providers'
name on the certificate, and then somehow issue
software associated with that certificate,
you could cause people to mistakenly download
and install malware. Well the problem here is that you can’t trust every
certificate authority. Now you could try to filter
that if you look at your browser and you see how many certificate
authorities are trusted. It's silly and it's wrong. So the other alternative, at the IETF right now, is to look at the possibility of
installing certificates in the domain name system
using DNSSEC as the tool for making sure that everything
going down to the place where the certificate is being
installed has been digitally signed, so there’s
a chain of trust. On top of that, the certificates
are limited to refer to strings that are related to the zone
in which that domain name and certificate are installed. So if you’re at Google.com
the only certificates that I can have in Google.com will be second-level domains, or third-level domains,
inside of Google.com. Any other attempt to put a certificate in that part of the domain name zone has to be rejected out of hand, because the string that was accredited wasn't from the zone. So that's an example
of a way of responding to that sort of thing. Let me just mention one
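Here is a small sketch of that zone-restriction rule, an illustration of the idea rather than the actual DNSSEC/DANE mechanism:

```python
# The rule: a certificate published under a DNS zone may only name that
# zone or names beneath it. So even a coerced or compromised authority
# elsewhere cannot bind "google.com" to its own key, because the
# asserted name would not come from the publishing zone.

def in_zone(name, zone):
    """True if `name` equals `zone` or is a subdomain of it."""
    return name == zone or name.endswith("." + zone)

def accept_certificate(cert_name, publishing_zone):
    # Reject out of hand any certificate whose bound name falls outside
    # the zone it was published in, regardless of any signatures.
    return in_zone(cert_name, publishing_zone)

print(accept_certificate("mail.google.com", "google.com"))  # True
print(accept_certificate("microsoft.com", "google.com"))    # False
```

The DNSSEC chain of trust handles the "was this really published by that zone" half of the problem; this check handles the "may that zone say this at all" half.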
other thing and then I want to give you a quick summary
on the interplanetary system. Those of you who
studied the history of computing may remember Project MAC at MIT back in the 1960s. One of the interesting things they did was to modify a machine, I think it was a GE 635, turning it into a 645 that had eight rings of protection and
memory management. The eight rings of protection
were triggered whenever you executed a sensitive
instruction. That instruction trapped
down to the kernel software and it said, "Who are you? What are you doing here? And what instruction are you trying to execute? And what makes you think you have the authority to do that?" And the kernel would
decide whether or not you were actually allowed
to execute the instruction, and place you in the
appropriate ring of control. So as you got down to
ring zero, you had more and more control
over the machine. This is an example of hardware
and software working together to reinforce security. We actually have that
capability in the X86 chips but nobody’s programming them, and they should be,
but they’re not. So that’s one example of hardware reinforcing
security with the software. The same thing has just
happened recently with BIOS, digitally signed BIOS, where
you can’t execute the bootstrap program that boots in the operating system
unless the digital signature on the BIOS code checks
out with the hardware. It’s another example of
getting more reinforcement. Okay, I’m going to skip down
to the interplanetary Internet. Some of you heard about this so it’s just an update
on the status. When we started,
we were interested in providing better
networking for exploring Mars. This was in 1998 just after
the Pathfinder project landed a small rover on Mars
very successfully. The information that was
coming back went directly from that little rover
all the way back to Earth through the Deep Space Network. And so my colleagues and I at the Jet Propulsion
Lab were thinking, “What should we be doing that
we will need 25 years from now in order to provide
better networking for the spacefaring nations?” And we very quickly concluded
we should use rich networking like the Internet. And then we said, “Well
maybe we can use TCP/IP. It works on earth, so maybe
it would work on Mars too.” Of course it will work on
Mars, but it doesn’t work between the planets because
of the variable delay, which could be up to 20 minutes one
way, 40 minutes round-trip. And also disruption. The planets are rotating, and we haven't figured out how to stop that. So if you're talking to something on the surface and the planet rotates, or
maybe there’s an orbiter, you can’t talk to it until
it comes back around again. So we end up with
this variable delay and disruption environment. TCP is brittle
in that regard. So we developed a new
suite of protocols, which we call the "bundle protocol." You know, we're engineers,
we didn’t come up with a very good name, but if we had done
Kentucky Fried Chicken, we would’ve called it “hot
dead birds,” so you know. We’re just not really
good at naming. So the bundle protocol is,
in the interplanetary world, what the Internet protocol
is on planet Earth. It’ll run on basically anything
that’ll move the packet of bits from one place to another. It runs on top of TCP, runs on
top of UDP, runs on top of IP, runs on top of point-to-point
links. Here’s what we did. After we went through four
generations of the design of the system, we installed
it for experimental purposes. Well, actually, let me go back. There were, as you know, in
2004, rovers that were landed in January, two of
them, very successfully, Spirit and Opportunity. One of them, I forget
which, Spirit, has died. One of them is dead, and
the other one is still producing data. But when they were first
landed, they were supposed to transmit data directly back
to earth the way Pathfinder did. But those radios
overheated, and the result is that they needed to reduce the duty cycle
to keep it from harming itself or any other equipment. Well it was originally 28
kilobits a second and that was, you know, scientists
weren’t happy about that. When we told them we have to reduce the duty cycle
they were even more unhappy about that. And so one of the
engineers said, “Well, there’s an X-Man
radio on the rover. There’s an X-Man radio on the
orbiter,” which had been put in place in order to
survey the planet to decide where the rovers should go. But they’d finished that task
so they were just in orbit. And they had available
communications computing in memory. So the orbiters and the
rovers were reprogrammed to take data they had collected
and squirt it out to the orbiter when it got into the right place, and hold onto the data until it transmitted the data back
to the deep space net on Earth. And the data rate going over the X-band radio was 128 kilobits a second. In fact, it's been upgraded now to 256. And the orbiters, since they
had larger solar arrays and didn't have any problem with even the thin Mars atmosphere, could transmit data back at more
than 128 kilobits a second. So all the data that
was coming back from Mars was going store-and-forward,
which is packet switching. So that’s a demonstration
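That store-and-forward pattern can be sketched in a few lines, as an illustration of the delay-tolerant idea rather than the real Bundle Protocol implementation:

```python
# Delay/disruption-tolerant store-and-forward: a relay (say, a Mars
# orbiter) takes custody of data and holds it until a contact with the
# next hop opens, instead of assuming an end-to-end path exists the way
# TCP does.

from collections import deque

class Node:
    def __init__(self, name):
        self.name = name
        self.stored = deque()  # bundles held awaiting a contact window

    def receive(self, bundle):
        # Take custody: keep the bundle in storage rather than dropping
        # it when the onward link happens to be down.
        self.stored.append(bundle)

    def contact(self, next_hop):
        # A contact window opened: forward everything we were holding.
        while self.stored:
            next_hop.receive(self.stored.popleft())

orbiter, earth = Node("orbiter"), Node("earth")
orbiter.receive("science-data-1")  # rover squirts data up; Earth not in view yet
orbiter.receive("science-data-2")
orbiter.contact(earth)             # orbiter comes around; relay to the ground
print(list(earth.stored))          # ['science-data-1', 'science-data-2']
```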
of how well that worked. The Phoenix that landed in
May 2008 was at the North Pole; it didn't have a configuration that went straight back to Earth, so they went through the relay again. So then we uploaded the protocols
to the EPOXI spacecraft, which is in orbit around the sun and had rendezvoused with two comets. It was called "Deep Impact" originally, and the new mission renamed it "EPOXI." So it has the protocols on board. The rovers have the
protocols on board. The orbiters have the
protocols on board. The International Space Station has the protocols on board. We're operating basically a kind of rudimentary interplanetary
network now. And so what’s happening is
that at NASA there’s efforts to standardize these protocols, internationally through
the consultative committee on space data systems. We’re hoping that everybody
will adopt protocols, use them in their missions,
or even if they don’t use them on their missions, at
least have them on board. So when those missions are done
we can repurpose those devices as nodes in an interplanetary backbone. So we might actually see something like that grow over, you know, many decades. Now some people have
said, “Well, you know, why are you doing this?” And of course one answer
is, “To provide rich options for more complex missions with
multiple spacecraft in orbit, in tandem flying in orbit
around the sun, for example, or flying out of
the solar system, or possibly on the surface.” In fact we did an example of
that a couple of months ago: the European Space Agency
in cooperation with the US, did a test from the
international space station using the interplanetary
protocols to control a rover on the ground in Germany. And so what we were
demonstrating is what it would be like if you were in orbit
around Mars and you needed to control the rover remotely. It was close enough that it
was real time interaction. So the protocols, although they
can deal with variable delay, are also perfectly
okay, just like UDP, with [inaudible] environment. So you may be wondering,
“Well, all right, so that might help build
more complex missions.” But I have one other
reason for doing this. DARPA released a contract
last year for half a million dollars to a consortium to design
a spacecraft that can get to the nearest star
in 100 years. That’s 4.4 light years away. It’s Alpha Centauri. And so I’m lucky to be a
part of the consortium. And there are three problems. Problem number one
is propulsion. It turns out that the current
propulsion systems would take about 65,000 years to get
from Earth to Alpha Centauri, which is a little long,
even for a DARPA project. So the question is,
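As a back-of-the-envelope check on those figures (a sketch using rounded constants):

```python
# What cruise speed does "65,000 years to Alpha Centauri" imply, and
# how does it compare to the 20%-of-lightspeed target?

LY_M = 9.4607e15   # metres in one light year
YEAR_S = 3.156e7   # seconds in one year
C = 2.998e8        # speed of light, m/s

dist = 4.4 * LY_M                   # Earth to Alpha Centauri
v_today = dist / (65_000 * YEAR_S)  # speed implied by a 65,000-year trip

print(f"implied speed: {v_today/1000:.0f} km/s ({v_today/C:.1e} c)")
# Roughly 20 km/s -- Voyager-class speeds, about 1/3000th of 0.2c.
```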
“How do I get up to about 20 percent
the speed of light?” And oh by the way we
need to, you know, slow down before we
get to the other end, otherwise we’ll get one
image, then, you know, it's an expensive image. So the idea would be to slow down and go into orbit, and so we have to reach a speed
of about 20 percent the speed of light at midpoint, 50
years, and then be able to slow down to get into
orbit at the other end. So that’s one problem. Navigation is another problem. Imagine, you know, that when we
do interplanetary projects now, usually the spacecraft gets to certain points in the mission, and you have to do
midcourse corrections, and we do that interactively. Not literally real-time, but we send the correction
information and we get back confirmation about the change
in the flight path. Now imagine if you’re doing
this with a spacecraft that's a light year away. So it takes a year for the
information to get there and another year for
you to find out whether or not it worked. You know, this didn't feel like it was really interactive at all. So I got worried about
that until somebody pointed out that the proper motion
of the stars that are within about 10 light years of our solar system is
actually pretty well-known, which means that we can do
autonomous navigation most likely for that [inaudible]. Because we can use stars that
are much farther away as a way of calibrating where
we are and then use that to make the
midcourse corrections. So that part was easy. Then there’s communication. How the heck do you generate
a signal that’ll be detectable on earth from four
light years away? Also you have to
deliver the equipment to four light years away,
so you have some limitations on mass that you can send. So I’ve been thinking, “Well maybe femtosecond
lasers." You know, imagine that you
have 100 watt power supply and you compress 100 watt signal
into 10 to the minus 15 seconds. That should give you
pretty good signal strength. But now even if it’s a laser
beam, it’s going to spread over a four light-year distance, so now you get this very
high-power pulse attenuated over an area about the size of
the solar system. So now you know why I need
the interplanetary backbone. It’s so I can build a
synthetic aperture receiver to detect the signal
coming from Alpha Centauri. Now one of the physicists
said, “Wait a minute. Wait a minute. There’s another way to do this.” What is that? And he said, “Well you know
gravity will bend light, that’s how we figured out Albert
Einstein’s theory was right.” So he said, if you go 550
astronomical units away from the sun, you begin
to get to the point where the Sun’s gravity
will form a focal plane. So if we can just get
[inaudible] 550 AU out from earth, then we can
use the gravitational lens of the sun in order to focus
the signal that’s coming from Alpha Centauri. So that’s sort of the
current idea behind building an interstellar system. Intergalactic is
somebody else’s problem. All right, well I’m sorry
I went on so damn long, but you can tell I’m
enthusiastic about this. I’m happy to spend some time
on Q&A, but I will forgive you if you all decide to run away. But I thank you in any case for
coming to Google this evening. [ Pause ]>>Unknown Speaker: Mr. Cerf, thank you so much
for a great speech. I remember another great
speech you gave last year at the Freedom to Connect
conference at the AFI. You also talked about
cyber security then. You said there was a
lack of understanding. That people were caught up in
buzzwords like cybercrime–>>Dr. Vint Cerf: Cyber-warfare
and things like that. Those are real. Those problems are real. The problem is that if
we confine our discussion to those words, we
actually end up thinking in ways that are not general enough. So [inaudible]–>>Unknown Speaker: Since
you gave that speech a year ago now, do you
think government and business have taken a different approach or mentality towards cyber
security that you’ve seen?>>Dr. Vint Cerf: I have
actually had some interactions with the Department
of Homeland Security which is actually
doing an experiment with a cyber-safe notion. So I don’t know how
far they’ve gotten in releasing the functionality
to make it available, but my understanding is
that they see the need for the ordinary public
and the small businesses to have a place to
go to get help. Because the only help
they get today tends to be antivirus software they
can download, which does help to some degree but all of
your sure are very familiar with day-zero attacks which
are increasingly common. The other thing that I worry about specifically is the
response to a cyber-attack. If you have a distributed
denial of service attack, if your response is, “I’m going
to wipe out the disk drives of all the machines that
are attacking me,” it turns out those machines are
probably owned by citizens who don’t know their
machines are infected. And so if you wipe out the
disk drives of a million people, some of those machines were
important for daily operation, and suddenly we shot
ourselves in the foot. So that can’t be
the right answer. We have to think our way through
a lot more carefully about how do we attribute the source and
what do we do about the case where the actual
attacking devices are not actually under the control
of the users. So there’s still a lot
of work to be done.>>Unknown Speaker:
I was saying, I think in the beginning you
mentioned the next big thing, possibly is people using the
devices in unison to do X, Y and Z, like play music all
in the same room, play video, record like, surveillance,
and stuff. Maybe you can share more
thoughts about that. I’m working on that myself. So it’s cool to hear you
say that [laughs] so…>>Dr. Vint Cerf: So, you
know, one of the things that I find interesting, one thing that your question
suggests, imagine you’re driving in the car and you are
asking questions like, “How do I get to
this destination?” The voice message may go to a
speech-understanding system, but the answer may come back on the navigational
display on the car. And here you wound up using
two devices at the same time. The way that the navigational
system knows what is the address of the navigational display is that the system that’s taking
your voice is also telling the destination to the
other devices in the car. Well how about you
walk into a hotel room and your mobile is turned on, and the mobile discovers
you have a high resolution display available. Instead of showing everything
on the tiny, little screen on the mobile, now suddenly
put it up on the large, high resolution screen. So these are all
the kinds of things that I imagine we could do. Access control is
clearly an issue. You don’t want to be, well,
maybe you do, but the other guy in the room next door doesn’t
want you looking at what’s on his mobile on your
high resolution display, so it’s clear that we
have to do something to make sure things are
properly controlled. We have a question over
here, but I don’t know where the microphone just went.>>Unknown Speaker: Thank you
for being here and thank you for your contributions, Mr. Cerf. My question is pertaining
to TCP. And I’ve been following the
web protocols as they evolve. We’re seeing TCP underlying
both web sockets and HTTP 2.0. I’m curious whether a
single stream of transport with consistent in order
messaging is still what you feel is the right choice of
underlying protocols for being able to communicate
on the World Wide Web.>>Dr. Vint Cerf: So, first
of all, the answer is that one of the reasons that
the Internet is so interesting is you
can invent new protocols. And so many of them, RTP is
one for example; HTTP is sitting on top of TCP, but others
are adjacent to TCP. So to answer your
question, no I don’t think that TCP is the only
thing that should be used. It’s astonishing to me that it
has actually served us as well as it has when you
think about its set of constraints and
everything else. But a lot of what
happens on the net that uses TCP does
not necessarily use it for subsequent action, so
there are UDP streams, or RTP streams, that are
activated by the web; HTTP activates a stream delivery
which is not over TCP. It may be over a
different protocol, and that’s all the right thing. The right thing here is to have
protocols that are appropriate to the application that
you’re looking for. So I actually encourage
people to take advantage of the fact the Internet is
so open to try things out. There’s nothing that
stops you theoretically, from building a whole
new set of applications. And we see this. Skype is a good example of that, where they simply download
an application [inaudible], and you use it instead
of, you know, anything else that might have been available. So frankly I’d be happy to see
people invent new protocols that do a better job for
some things than TCP can do. It’s still amazing to me
that it’s been so robust over this period of time.>>Unknown Speaker: [inaudible].>>Dr. Vint Cerf:
You’re welcome.>>Unknown Speaker:
I have a question. So how do you think
about privacy? Like, Google owns our time series of
search keywords and email data. In the future that probably can be the most
valuable asset of Google. And do you think Obama
will push for some legislation on the usage of this data?>>Dr. Vint Cerf: Well, first
of all, we are acutely aware of the importance of
preserving privacy. And we paid dearly for a couple
of mistakes that happened, like the incidental collection of information while we were
doing Street View. I think though that privacy
is becoming increasingly hard to protect. The reason is not because
Google has information. We will fight very hard
to maintain your privacy. We have resisted
government demands for information more actively
than many other companies. But a lot of information, we do it to ourselves;
we take pictures with our mobiles, we
upload them to a website. Some people are discovered
being in places where, you know, they weren’t even the people
you took the pictures of. You took the picture of
your friend but adjacent to your friend is somebody who
is now tagged on, you know, Facebook or something. And somebody else says,
“Oh Joe was at that party, he said he wasn’t there.” Now Joe’s in trouble
because you put a picture up. So the big issue here is
learning what our social convention should be to deal
with the fact that we have such privacy invasive
technology available to us. On the Google side, we do use
the data to help figure out how to target ads and we’re
very upfront about that. But we don’t give the
data to anybody else. Some people imagine our
business model is to sell data about you to somebody else. That’s not what we do. Your data is inside our
system and it stays there. You can get all your
data back if you want to. But what is important is
that we use the information to deliver advertising,
which is our business model, and everybody knows that. The hope is that the information
that gets seen will actually be of interest to you. So the purpose behind
the information that we’re accumulating
is to deliver advertising which is more relevant
to you than just, you know, blanket ads. And for many people that kind of advertising doesn’t
even feel like advertising. It feels like useful
information.>>Unknown Speaker: [inaudible].>>Dr. Vint Cerf: I’m
sorry, I can’t hear him. Can you give him the microphone?>>Unknown Speaker: In the
future, if Google decides to sell its data, like, do
you think government should –>>Dr. Vint Cerf: I think–>>Unknown Speaker: [inaudible].>>Dr. Vint Cerf:
Well, let’s see, if I were a politician I would
say that’s a hypothetical and I don’t want to go there. But it seems to me that
the data is so important and so it would easily be
treated as personal data, I mean, it’s your
email, for example. We would be in a lot of
trouble if we did that. I don’t think the company
would survive the response. So I’m doubtful, certainly the
current management absolutely would not want to do that. [ Pause ]>>Unknown Speaker:
Thanks again for the talk, a [inaudible] as always. My question actually
follows up on one from a couple ago regarding
the future of, sort of, Internet protocols. I have a question about
dynamic mesh networks. Certainly, you know, most of
us have mobile phones here, but if the GSM
network or Wi-Fi goes down, they’re kind of useless. And certainly for
places where, you know, Egypt where the Internet,
you know, backbone was taken down during a crisis situation, where the infrastructure
isn’t available. The phones and most of these
devices are not programmed to talk to things that
are right next to them. So, I mean, yeah
from a pipe standpoint. You know, what would sort of be
involved in getting that going?>>Dr. Vint Cerf: So I really
love the idea, in fact, while I was still chairman
of the Visiting Committee on Advanced Technology
at NIST, we prepared a report on
public safety networking and that was exactly one of
the things we pointed out. The current public safety
net vector is to use LTE. And when things are normal,
that’s probably all right. But at some point when you
have Katrina or you have Sandy or you have some other kind of
event, man-made or otherwise, the infrastructure may
not be there at all. For example, maybe the equipment
is there but you can’t get any of the gasoline or oil
or, you know, diesel fuel, to run the generators. I really love the idea that our
devices could be linked together in a mesh-based network. I think that that’s how the
system I use at home works. I use a mesh network and it’s
automatically self forming. I’m a big fan of incorporating that capability into
our devices. Now keep in mind
that privacy suggests that we should be encrypting
end-to-end in order to take advantage of that. You certainly don’t want
to use a mesh network in which everybody gets to
take a bite of the hot dog, to mix the metaphors,
while you’re trying to get one at the baseball game. So you really want to be able
to use end-to-end capability, but I really like the idea
of incorporating that. That would make for a much
more resilient system than the one we have now. [ Pause ]>>Unknown Speaker: So following
up on that it seems that most of our programming languages
are actually antiquated. They’re geared towards
programming a single clock on a single computer. And not on a distributed,
not taking advantage of distribution, and that adds
levels of complexity on there. I heard of a great
language today, Bloom, coming out of Berkeley, or at least
an experimental language. And I’m wondering what else,
and I’m personally a big fan of Erlang, which
is 20 years old. And everybody seems
to be trying to catch up to Erlang,
so what do you see?>>Dr. Vint Cerf: So I actually
see a number of things going on. First of all, we
have to learn how to program multicore systems. And that in itself
is a challenge. The clock speeds are not
going to keep going up, otherwise chips will melt because too much
power goes into them. So we have to deal
with that already. We do that at Google as well. The other thing is that
I really like the idea of doing parallel processing,
but look at what we’ve been able to do in a very odd way
where we split some tasks up into multiple parallel streams that don’t require any
special programming to do that. SETI@home and the
protein folding activity are simple, single-processor software. It’s just that it’s been
executed simultaneously on a wide number of systems. A lot of our applications
are the same way. We take a task and we break
it up into large numbers of parallel processes that
don’t require simultaneous use of cores. Each core is running
its own operation. We’ve been able to get away
with that for the kinds of applications we had,
but when we start getting into large scale simulations,
now it gets to be very different because now you end up having
either a single instruction stream running against a whole
variety of data, that’s sort of SIMD: single instruction
stream, multiple data streams. Or what you’re thinking
about is MIMD, which is multiple instruction streams
that are distinct, running on multiple data streams.
are really archaic. One of them was called
[inaudible] that ran on the [inaudible] in the 1970s. Nobody could figure
out how to program it. It was really hard. [inaudible] sounds
like something that very few people
could even speak. It sounds like a [inaudible]
language from 1000 A.D. so I suspect that
there is real room for serious software
language development. And it’s a challenge for ACM to
start pushing in that direction. So I’m actually hopeful that
the platforms that are going to become available over
time are going to force us into thinking about new kinds
of programming paradigms. Some of them are the result
of higher level languages which hide some of the underlying
complexities of the system. Maybe you have some ideas to
pursue that, but if you don’t and others do I sure hope that you’ll make them known
and publish them in CACM so we can
all learn from it. One more, and then we
really need to break.>>Unknown Speaker: I was just
going to follow up on some of the things you
were mentioning about intellectual property and
I’m curious about your views on the role and sort
of opportunities of the government
releasing, you know, more of its own,
essentially, IP. Because there’s this
new, I guess, yet another executive order
that came from the White House about releasing open data and
sort of the economic potential and all the opportunities
around open data. There’s been a lot of
activity on that front. And there’s a lot more activity
around open source in government as well, but it can
be kind of mixed. And so I guess just, you know,
as an example, I was reading some of the history
of ARPANET and there was an example of BBN,
which I guess was one of the, sort of, main companies
involved, being hesitant to release the source
code behind the, you know, the I guess the IMPs, the
packet switching computers. And eventually, you know, that
turned around and it was open. But I’m curious, sort of, if you’re the government,
how do you find the balance between those opportunities
to establish an open platform that people can build
economic capital on top of and other opportunities
versus maybe something small and you just, you know,
it’s just a commodity piece of software, it’s
less important. What you see as sort
of the opportunities and the balance in
that equation?>>Dr. Vint Cerf: Well,
first of all on openness, I’m a big fan of open access to publications,
open access to data. And the government is saying
that it wants to find ways to do that, not just us at ACM,
and all the other publishers. I also am a big fan of trying
to preserve information. I mentioned that earlier. The digital Vellum
idea is one of those. I think that increasingly we’re
going to find people are capable of writing their own software. Remember when spreadsheets
came out back in the early 80s, everybody became
programmers of spreadsheets, and they hadn’t been formally
trained to write software. Well don’t you imagine that as
we work our way up into higher and higher level
languages that more and more people will
have the ability either to write their own software,
or share it with each other. So I think unless we get into this bad loop of
preventing people from sharing their software or sharing their
intellectual property, we’ll have a higher probability
of being able to take advantage of everybody’s skills by
writing software that’s open. That’s why I like the
open source model so much. I don’t know if that gets at
anything you were getting at. [ Pause ]>>Unknown Speaker: I mean
there’s some companies that write software that are
largely funded by government and the software is not open.>>Dr. Vint Cerf: Oh, you
are saying it’s not open.>>Unknown Speaker: Right.>>Dr. Vint Cerf: I can tell
you that just to cut this short because everybody probably
wants to get out of here, the government’s already
spoken at least at NSF, and the White House, OSTP, saying that government-sponsored
stuff needs to be accessible. And, you know, there’s a
whole bunch of questions about when is it accessible,
and how long does it take, do you get to retain
the data for a period of time for use on your own. But commercial rights are
an argument that shows up when you have government
contracts and grants and things of that kind. And that’s a negotiated thing. If the government goes the
way it seems to be going, I think there will be increasing pressure
for people who are working under government grants to make
available what they’ve done to the general public. There’ll still be an argument
over commercial rights. And so this is not a
trivial thing to respond to, but it’s certainly
the trend right now. It looks like it’s
in that direction. Okay that’s all the time we’ve
got, and I need to scoot, but thank you very much.
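The interstellar figures mentioned in the talk can be sanity-checked with a few lines of Python. Only the 100 watt supply, the 10 to the minus 15 second pulse, and the 0.2c peak speed come from Cerf's remarks; the 17 km/s probe speed, 1 micron laser wavelength, and 10 cm transmitter aperture are illustrative assumptions added here for the sketch.

```python
# Rough check of the interstellar numbers discussed in the talk.
# The 17 km/s probe speed, 1 micron wavelength, and 10 cm aperture
# are illustrative assumptions, not figures from the talk.

C = 2.998e8         # speed of light, m/s
YEAR = 3.156e7      # seconds per year
LY = 9.461e15       # one light year, m
AU = 1.496e11       # one astronomical unit, m
DIST = 4.37 * LY    # distance to Alpha Centauri, m

# Today's chemical propulsion: roughly Voyager-class speed.
chemical_years = DIST / 17e3 / YEAR

# Peak 0.2c at midpoint with braking afterward, so ~0.1c average.
cruise_years = DIST / (0.1 * C) / YEAR

# Femtosecond pulse: 100 J gathered over one second at 100 W,
# then compressed into a 1e-15 s pulse.
peak_power_w = 100.0 / 1e-15

# Diffraction-limited spread: divergence ~ wavelength / aperture.
spot_radius_au = (1e-6 / 0.1) * DIST / AU

print(f"chemical propulsion: ~{chemical_years:,.0f} years")
print(f"0.2c peak, 0.1c average: ~{cruise_years:.0f} years")
print(f"peak pulse power: {peak_power_w:.1e} W")
print(f"beam spot radius at 4.37 ly: ~{spot_radius_au:.1f} AU")
```

Under these assumptions the chemical-propulsion trip comes out in the tens of thousands of years, the same order of magnitude as the 65,000-year figure in the talk, the 0.2c mission lasts a few decades, and the compressed pulse peaks around 10 to the 17 watts while spreading to solar-system scale at the far end, which is the motivation given for a synthetic aperture receiver built on an interplanetary backbone.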
