Engineering the Politics of TCP/IP and the Enabling Framework of the Internet
Posted on September 22, 2021
The Internet was designed with a particular architecture – an open system that would accept any computer and connect it to any other computer. A set of data networking protocols allowed any application on any device to communicate through the network with another application on another device. Email, web files, text messages, and data from sensors could be sent quickly over the Internet without using significant power or other resources on the device. The technical “architecture” of the Internet was designed to empower the network’s edges – the users and their hardware. Its power has been borne out as those edges are no longer just large mainframes and supercomputers but have expanded to include PCs, laptops, smartphones, and the tiniest of sensors in the emerging Internet of Things (IoT).
This post explores the “political engineering” of the Internet protocols and the subsequent policy framework for a national and global data communications network that empowered users and created an open environment for competition, social interaction, and innovation. This system has been challenged over the years by programmatic advertising, oligopolistic ISPs, security breaches, and social media. But it’s still a powerful communications system that has changed commerce, entertainment, and politics worldwide.
The Power of Protocols
What gives communicative power to the Internet’s architecture are the protocols that shape the flows of data. With acronyms like TCP, IMAP, SMTP, HTTP, FTP, as well as UDP, BGP, and IP, these protocols formed the new data networks that would slowly become the dominant venue for social participation, e-commerce, and entertainment. These protocols were largely based on a certain philosophy – that computer hosts should talk to computer hosts, that networks were unreliable and prone to failure, and that hosts should confirm with other hosts that the information was passed to them successfully. The “TCP/IP suite” of protocols emerged to enact this philosophy and propel the development of the Internet.[1]
TCP/IP protocols allow packets of data to move from application to application, or from web “clients” to “servers” and back again. They gather content such as keystrokes from an application and package it for transport through the network. Computer devices use TCP to turn information into packets of data – 1s and 0s – sent independently through the network using the Internet Protocol (IP). Each packet has the address of its destination, the address of its source, and the “payload,” such as part of an email or video.
The nodes in the network “route” the packets to the computer where they are headed. Destinations have IP addresses that are listed in routing tables, which are regularly updated in routers across the Internet. This involves some “handshaking” – acknowledging the connections and the packets received between what we have been alternately calling the edges, devices, hosts, applications, and processes.
More specifically, a “process” on an application on one device talks to a “process” on an application on another device. So, for example, a text application like Kakao, Line, WhatsApp, or WeChat communicates with the same application on another device. Working with the device’s operating system, TCP takes data from the application and sends it into the Internet.
There, it gets directed through network routers to its final destination. The data is checked on the receiving side, and if errors are found, the receiver requests that the data be sent again. IMAP and SMTP retrieve and move email messages through the Internet, and most people will recognize HTTP (Hypertext Transfer Protocol) from accessing web pages. This protocol sets up a connection, requests a file from a distant server, and then terminates the connection when the file downloads successfully. Connecting quickly to a far-off resource, sometimes internationally, and being able to sever the link when finished is one of the features that makes the web so successful.
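To make the process-to-process idea more concrete, here is a minimal sketch in Python of a “server” process and a “client” process exchanging a short message over TCP sockets. It is only an illustration – the loopback address and port number are arbitrary choices for the example, and the two functions would be run in separate processes or terminals.

```python
import socket

HOST, PORT = "127.0.0.1", 50007  # hypothetical address and port for this sketch

# --- Server process: waits for a connection and acknowledges what it receives ---
def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()          # TCP handshake completes here
        with conn:
            data = conn.recv(1024)         # bytes arrive reassembled, in order
            conn.sendall(b"ACK: " + data)  # acknowledge back to the sender

# --- Client process: opens a connection, sends data, reads the reply ---
def run_client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello from the edge")
        print(cli.recv(1024).decode())
```

TCP handles breaking the message into packets, retransmitting anything lost, and reassembling the data in order; the application code never sees the routers in between.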
HTTP is at the center of what has been called the World Wide Web (WWW). Mostly called the “web” these days, it combined the server with the “browser” to provide a powerful new utility – the website. Hypertext Markup Language (HTML) enabled the browser to present text and images on a 2-D color screen. The WWW empowered the “dot.com” era and allowed many people to develop computer skills to produce websites. Every organization had to have an online “presence” to remain viable, and new organizations were started to take advantage of the fantastic reach of the web. Soon, server-side software empowered a myriad of new possibilities on the net, including browser-based email, e-commerce, search, and social media.
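The connect-request-terminate pattern described above can be sketched with Python’s standard http.client module. The host name below is just a placeholder; any web server would behave similarly.

```python
import http.client

# Open a TCP connection to a web server (port 443 for HTTPS)
conn = http.client.HTTPSConnection("example.com")

# Request a file -- here, the site's home page
conn.request("GET", "/")
response = conn.getresponse()

print(response.status, response.reason)  # e.g. 200 OK
body = response.read()                   # the HTML a browser would render

# Sever the link once the transfer is finished
conn.close()
```

A browser does essentially the same thing, then uses the returned HTML to lay out text and images on the screen.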
Devices connect to, or “access,” an Internet Service Provider (ISP) from a home or school connection, or over Wi-Fi at a café or on a public network in a train or park. Mobile subscriptions allow access to a wireless cell tower with a device antenna and SIM card. Satellite service is becoming more available, primarily through HughesNet, ViaSat, and increasingly SpaceX’s Starlink as more low-orbit satellites are launched. Starlink is teaming up with T-Mobile in the US to connect smartphones directly to the low-orbit satellite network.
Physical media make a difference in good Internet access by providing the material connection to the ISP. Various types of wires and fiber optic cables, or combinations of them, provide the critical “last mile” connection from the campus, home premises, or enterprise. Ethernet connections or wireless routers connect to a modem and router from your cable company or telco ISP to begin and end communication with the edge devices.
Conceptually, the Internet has been divided into layers, sometimes referred to as the protocol stack. These are:
- Application
- Transport
- Network
- Link
- Physical
The Internet layers schematic outlasted the Open Systems Interconnection (OSI) model by offering a more efficient representation that simplified the process of developing applications. Layers help conceptualize the Internet’s architecture for instruction, service, and innovation. They visualize the services that one layer of the Internet provides to another using protocols and Application Programming Interfaces (APIs). They provide discrete modules that are distinct from the other levels and serve as a guideline for application development and for network design and maintenance.
The Internet’s protocol stack makes creating new applications easier because software needs to be written only for the applications at the endpoints (client and server), not for the network core infrastructure. Developers use APIs to connect to sockets, a doorway from the Application layer to the next layer of the Internet. Developers have some control over the socket interface software, with buffers and variables, but do not have to code for the network routers. The network is meant to remain neutral to the packets running through it.
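As a rough sketch of what that control looks like, the Python snippet below tunes a couple of socket parameters – a receive buffer and a timeout – through the standard socket API. The values are illustrative only; the point is that the application touches the socket “doorway,” not the routers beyond it.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Developer-tunable parameters at the socket interface:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)  # receive buffer (bytes)
sock.settimeout(5.0)                                          # give up after 5 seconds

# What the network core does with the packets is not programmable from here;
# the application only hands data to, and reads data from, the socket.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()
```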
The Network layer is where the Internet Protocol (IP) does its work. At this layer, the packets are repackaged or “encapsulated” into larger packets called datagrams. These also carry an address that might look like 192.45.96.88. The computers and networks only use numerical addresses, so they need the Domain Name System (DNS) if the address is an alphabetical name like apennings.com.
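For instance, resolving an alphabetical name into a numerical address can be sketched with one call from Python’s standard library (the address actually returned will vary and is not shown here).

```python
import socket

# Ask the Domain Name System for the IP address behind a human-readable name
host = "apennings.com"
ip_address = socket.gethostbyname(host)

print(f"{host} resolves to {ip_address}")
```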
Large networks have many possible paths, and routers’ algorithms pick the best routes to move the data along to the receiving host. Cisco Systems became the dominant supplier of network routers during the 1990s.
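The route-selection idea can be illustrated with a toy “longest prefix match” lookup – the basic rule a router applies when several entries in its routing table could carry a packet. The table entries and next-hop names below are invented for the sketch; real routing tables are built and updated by protocols such as BGP.

```python
import ipaddress

# A toy routing table: destination prefix -> next hop (all values hypothetical)
routing_table = {
    "0.0.0.0/0":      "gateway-A",   # default route
    "192.45.0.0/16":  "gateway-B",
    "192.45.96.0/24": "gateway-C",
}

def route(packet_destination: str) -> str:
    """Pick the most specific (longest) prefix that contains the destination."""
    dest = ipaddress.ip_address(packet_destination)
    best_len, best_hop = -1, None
    for prefix, hop in routing_table.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and net.prefixlen > best_len:
            best_len, best_hop = net.prefixlen, hop
    return best_hop

print(route("192.45.96.88"))  # -> gateway-C, the most specific match
```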
Although the central principle of the Internet is the primacy of the end-to-end connection and verification – hosts talk to hosts and verify the successful movement of data – the movement of the data through the network is also critical. The network layer in the TCP/IP model transparently routes packets from a source device to a destination device. The job of the ISPs is to take the data encapsulated at the transport and network layers and transport it – sometimes over long distances via microwave towers, fiber optic cables, or satellites. The term “net neutrality” has emerged to protect the end-to-end principle and restrict ISPs from interfering with the packets at the network layer. If ISPs are allowed to examine data from the Application layer, they could alter speed, pricing, or even content based on different protocols.
The diffusion of the TCP/IP protocols was not inevitable. Computer companies like IBM, Honeywell, and DEC developed their own proprietary data communications systems. Telecommunications companies had already established X.25 protocols for packet-switched data communications, with X.75 gateway protocols used by international banks and other major companies. TCP looked like a long shot, but the military’s decision in 1982 to mandate it and the National Science Foundation’s NSFNET support secured momentum for TCP/IP. Then, in 1986, the Internet Advisory Board (IAB) began to promote TCP/IP standards with publications and vendor conferences about its features and advantages. By the time the NSFNET was decommissioned in 1995, the protocols were well established.
The Philosophy of TCP
The military began to conceptualize the decentralized network as part of its defense against nuclear attack in the early 1960s. Conceived primarily by Paul Baran at RAND, packet-switching was developed as a way of moving communications around nodes in the network that were destroyed or rendered inoperable by attack. Packets could be routed around any part of the network that was congested or disabled. If packets going from San Francisco to New York City could not get through a node in Chicago, they could be routed around the Windy City through nodes in other cities. As networks were being considered for command and control operations, planners also had to anticipate that computers would eventually be not only in fixed installations but in airplanes, mobile vehicles, and ships at sea. The Defense Advanced Research Projects Agency (DARPA) funded Vint Cerf and others to create what became the TCP and IP protocols to connect them.
The Internet was also informed by a “hacker ethic” that emerged at MIT in the late 1950s and early 1960s as computers moved away from punch-cards and began to “time-share” their resources. Early hacking stressed openness, decentralization, and sharing information. In addition, hackers championed merit, digital aesthetics, and the possibilities of computers in society. Ted Nelson’s Computer Lib/Dream Machines (1974) was influential as the computer world moved to California’s Silicon Valley.
The counter-culture movement, inspired by opposition to the Vietnam War, was also important. Apple founders Steve Jobs and Steve Wozniak were sympathetic to the movement, and their first invention was a “blue box” device to hack the telephone system. Shortly after, the Apple founders merged hacktivism with the entrepreneurial spirit, emphasizing personal empowerment through technology in developing the Apple II and Macintosh.
The term “hackers” has fallen out of favor because computers are so pervasive and people don’t like to be “hacked” and have their private data stolen or vandalized. But the hacker movement started with noble intentions and continues to be part of web culture. [2]
Developing an Enabling Policy Framework
Although the Internet was birthed in the military and nurtured as an academic and research network, it was later commercialized with the intention of providing an enabling framework for economic growth, education, and new sources of news and social participation. The Clinton-Gore administration was looking for a strategy to revitalize the struggling economy. “It’s the Economy, Stupid” was the mantra of the 1992 campaign that defeated President George H.W. Bush, and the administration needed to make good on the promise. Early conceptualizations of data networks as information highways framed them as infrastructure and earned the information and telecommunications sectors both government and private investment.
Initially, Vice-President Gore made the case for “information highways” as part of the National Information Infrastructure (NII) plan and encouraged government support to link up schools and universities around the US. He had been supporting similar projects as one of the “Atari Democrats” since the early 1980s, including the development of the NSFNET and the supercomputers it connected.
As part of the National Information Infrastructure (NII) plan, the US government handed over interconnection to four Network Access Points (NAPs) in different parts of the country. It contracted with big telecommunications companies to provide the backbone connections. These allowed ISPs to connect users to a national infrastructure, provide new e-business services, link classrooms, and create electronic public squares for democratic debate.
The US took an aggressive stance in both controlling the development of the Internet and pressing that agenda around the world. After the election, Gore promoted the Global Information Infrastructure (GII) worldwide, designed to encourage competition both in the US and globally. This offensive resulted in a significant decision by the World Trade Organization (WTO) that reduced tariffs on IT and network equipment. Later, the WTO encouraged the breakup of the national post, telegraph, and telephone administrations (PTTs) that dominated national telecommunications systems. The Telecommunications Act of 1996 and the administration’s Framework for Global Electronic Commerce were additional key policy positions on Internet policy. The result of this process was essentially the global Internet structure that gives us relatively free international data, phone, and video service.
Summary
As Lotus and Electronic Frontier Foundation founder Mitch Kapor once said, “Architecture is politics.” He added, “The structure of a network itself, more than the regulations which govern its use, significantly determines what people can and cannot do.” The technical “architecture” of the Internet was primarily designed to empower the network’s edges – the users and their hardware. Its power has been borne out as those edges are no longer just large mainframes and supercomputers but laptops, smartphones, and the tiniest of sensors in the emerging Internet of Things (IoT). Many of these devices have as much or more processing power than the computers the Internet was invented and developed on. The design of the Internet turned out to be a unique project in political engineering.
Citation APA (7th Edition)
Pennings, A.J. (2021, Sep 22). Engineering the Politics of TCP/IP and the Enabling Framework of the Internet. apennings.com. https://apennings.com/telecom-policy/engineering-tcp-ip-politics-and-the-enabling-framework-of-the-internet/
Notes
[1] Larsen, Rebekah (2012). “The Political Nature of TCP/IP.” Momentum, 1(1), Article 20. Available at: https://repository.upenn.edu/momentum/vol1/iss1/20
[2] Steven Levy described more specific hacker ethics and beliefs in chapter 2 of Hackers: Heroes of the Computer Revolution. These include openness, decentralization, free access to computers, improving the world, and upholding democracy.
Ⓒ ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Originally from New York, he started his academic career at Victoria University in Wellington, New Zealand before returning to New York to teach at Marist College and spending most of his career at New York University. He has also spent time at the East-West Center in Honolulu, Hawaii. When not in the Republic of Korea, he lives in Austin, Texas.
Tags: global e-commerce > Net Neutrality > Network Layers > NII > NREN > TCP/IP > Telecommunications Act of 1996