How the Internet came to be: On scaling

by Vinton Cerf, as told to Bernard Aboba
Copyright (C) 1993 Vinton Cerf. All rights reserved. May be reproduced in any medium for noncommercial purposes.


The somewhat embarrassing thing is that the network address space is under pressure now. The original design of 1973 and 1974 contemplated a total of 256 networks. There was only one LAN at PARC, and all the other networks were regional or nationwide networks. We didn't think there would be more than 256 research networks involved. When it became clear there would be a lot of local area networks, we invented the concept of Class A, B, and C addresses. In Class C there were several million network IDs. But the problem that was not foreseen was that the routing protocols and Internet topology were not well suited for handling an extremely large number of network IDs. So people preferred to use Class B and subnetting instead. We have a rather sparsely allocated address space in the current Internet design, with Class B allocated to excess and Class A and C allocated only lightly. The lesson is that there is a complex interaction between routing protocols, topology, and scaling, and that determines what Internet routing structure will be necessary for the next ten to twenty years.
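The classful scheme Cerf describes divided the 32-bit address by its leading bits into network and host fields. A minimal sketch of that classification (the function name and example addresses are illustrative, not from the interview):

```python
def address_class(ip: str) -> str:
    """Classify a dotted-quad IPv4 address under the original
    classful scheme, by inspecting the leading bits of the
    first octet."""
    first = int(ip.split(".")[0])
    if first < 128:      # leading bit 0: Class A, 8-bit network ID
        return "A"       # at most 126 usable networks, ~16M hosts each
    elif first < 192:    # leading bits 10: Class B, 16-bit network ID
        return "B"       # 16,384 networks, up to 65,534 hosts each
    elif first < 224:    # leading bits 110: Class C, 24-bit network ID
        return "C"       # ~2 million networks, 254 hosts each
    else:
        return "D/E"     # multicast and reserved ranges

print(address_class("10.0.0.1"))     # A
print(address_class("172.16.0.1"))   # B
print(address_class("192.0.2.1"))    # C
```

The comments show the tension Cerf points to: Class C offered millions of network IDs, but each was too small for many sites, so organizations requested Class B networks and subnetted them, exhausting Class B while leaving A and C lightly used.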

When I was chairman of the Internet Activities Board and went to the IETF and IAB to characterize the problem, it was clear that the solution had to be incrementally deployable. You can deploy something in parallel, but then how do the new and old interwork? We are seeing proposals of varying kinds to deal with the problem. Some kind of backward compatibility is highly desirable until the 32-bit address space can no longer be assigned. Translating gateways have the defect that when you're halfway through the transition, half the community has transitioned and half hasn't, and all the traffic between the two has to go through the translating gateway, and it's hard to provision enough resources to do this.

It's still a little early to tell how well the alternatives will satisfy the requirements. We are dealing not only with the scaling problem, but also with the need not to foreclose important new features, such as flows, multicasting, and accounting.

I think that as a community we sense varying degrees of pressure for a workable set of solutions. The people who will be most instrumental in this transition will be the vendors of routing equipment and host software, and the providers of Internet services. It's the service providers who have the greatest stake in assuring that Internet operation continues without loss of connectivity, since the value of their service is a function of how many places you can communicate with. The deployability of alternative solutions will determine which is the most attractive. So the transition process is very important.


