Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

ICTs for SDG 7: Twelve Ways Digital Technologies can Support Energy Access for All

Posted on | September 29, 2023 | No Comments

Digital Technologies, often known as Information and Communication Technologies (ICTs), are crucial in supporting energy development and access in numerous ways. ICTs can enhance energy production, distribution, and consumption, as well as promote energy efficiency and help facilitate the transition to clean and sustainable energy sources. ICTs are accorded a significant role in supporting the achievement of the United Nations’ Sustainable Development Goals (SDGs), including SDG 7, which aims to ensure access to affordable, reliable, sustainable, and modern energy for all. To harness the full potential of ICTs for energy development, it is essential to invest in grid infrastructure and equipment, cybersecurity and data privacy, and digital literacy and skills.

Energy grids

Here are twelve ways that ICTs can support energy development and access:

    1) Smart Electrical Grids
    2) Renewable Energy Integration
    3) Energy Monitoring and Management
    4) Demand Response Programs
    5) Energy Efficiency
    6) Energy Storage
    7) Predictive Maintenance
    8) Remote Monitoring and Control
    9) Electric Vehicle (EV) Charging Infrastructure
    10) Energy Access in Remote Areas
    11) Data Analytics and Predictive Modeling, and
    12) Research and Development.

1) ICTs enable the implementation of smart grids, which are intelligent electricity distribution systems that allow for real-time monitoring, control, and automation of grid operations. Smart grids use sensors, digital communication lines, and advanced analytics to monitor and manage electricity flows in real time. These Internet of Things (IoT) networks can optimize energy distribution, reduce energy losses, and integrate renewable energy sources more effectively. IoT sensors in energy infrastructure enable remote monitoring, maintenance, and early detection of faults, which reduces downtime and improves energy availability.
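As a minimal sketch of how IoT sensing supports early fault detection, the snippet below checks incoming grid readings against expected operating bands and raises alerts on excursions. The sensor names, bands, and readings are hypothetical, not drawn from any particular utility system.

```python
# Minimal sketch: flag grid sensor readings that fall outside an expected
# operating band. Sensor names, bands, and readings are hypothetical examples.

EXPECTED_RANGES = {
    "line_voltage_kv": (110.0, 120.0),    # acceptable transmission voltage band
    "transformer_temp_c": (20.0, 85.0),   # acceptable transformer temperature
}

def check_reading(sensor, value):
    """Return an alert string if the reading is outside its expected range."""
    low, high = EXPECTED_RANGES[sensor]
    if not (low <= value <= high):
        return f"ALERT: {sensor} reading {value} outside {low}-{high}"
    return None

if __name__ == "__main__":
    readings = [
        ("line_voltage_kv", 114.2),
        ("transformer_temp_c", 91.5),     # simulated overheating fault
    ]
    for sensor, value in readings:
        print(check_reading(sensor, value) or f"{sensor}: {value} OK")
```

In a real deployment, the alert would feed a control-room dashboard or trigger a maintenance work order rather than a console print.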

2) ICTs facilitate the integration of renewable energy sources, such as solar, wind, geothermal, and hydroelectric energy, into energy grids. They provide real-time data on energy generation, storage, and consumption, allowing grid operators to balance supply and demand efficiently. Two main types of renewable energy generation resources need to be integrated: distributed generation, which refers to small-scale renewable generation close to a distribution grid; and centralized, utility-scale generation, which refers to larger projects that connect to major grids through transmission lines (See above image). Generating electricity using renewable energy resources rather than fossil fuels (coal, oil, and natural gas) can help reduce greenhouse gas emissions (GHGs) from the power generation sector and help address climate change.

3) Smart meters and energy management systems use ICTs to provide consumers with real-time information about their energy usage, replacing the electromechanical meters that were unreliable and easy to tamper with. These devices empower individuals and businesses to make informed decisions that reduce energy consumption and costs. Smart meters allow for instantaneous monitoring of energy consumption, enabling utilities to optimize energy distribution and consumers to track and manage their usage. The Asian Development Bank has been very active in supporting the transition to smart meters, helping countries meet their carbon commitments under the Paris Agreement, a legally binding international treaty on climate change adopted by 196 Parties at the UN Climate Change Conference (COP21) in Paris, France, in December 2015.

4) ICTs enable demand response programs that encourage consumers to adjust their energy usage during peak demand periods in response to price signals and grid conditions. Utilities can send signals to smart devices (such as electric vehicles) to reduce energy consumption when necessary, avoiding blackouts and reducing the need to engage additional power plants. The New York Independent System Operator (NYISO), other electric distribution utilities, and wholesale system operators offer demand response programs to help avoid overload, keep prices down, reduce emissions, and avoid expensive equipment upgrades.
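The logic of a demand response program can be pictured as a simple rule: when forecast demand approaches available capacity, signal enrolled devices to shed load until the gap is covered. The sketch below illustrates that rule with invented capacities and device names; it is not the dispatch logic of NYISO or any actual utility.

```python
# Toy demand response dispatch: ask enrolled devices to shed load when forecast
# demand approaches available capacity. All figures below are hypothetical.

CAPACITY_MW = 1000.0
TRIGGER_RATIO = 0.95              # signal curtailment above 95% of capacity

ENROLLED_DEVICES = {              # device id -> load it can shed (MW)
    "ev_fleet_depot": 30.0,
    "office_hvac_block": 12.0,
    "cold_storage_site": 8.0,
}

def dispatch(forecast_demand_mw):
    """Return the devices signaled to curtail, largest shed capability first."""
    signaled = []
    threshold = TRIGGER_RATIO * CAPACITY_MW
    if forecast_demand_mw < threshold:
        return signaled               # no action needed off-peak
    shortfall = forecast_demand_mw - threshold
    for device, shed_mw in sorted(ENROLLED_DEVICES.items(),
                                  key=lambda kv: kv[1], reverse=True):
        if shortfall <= 0:
            break
        signaled.append(device)
        shortfall -= shed_mw
    return signaled

print(dispatch(985.0))   # ['ev_fleet_depot', 'office_hvac_block']
```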

ICTs can also deliver related energy education and awareness campaigns through websites, mobile apps, dashboards, and social media to inform consumers about energy-saving practices and sustainable energy choices. Mobile payment platforms can also facilitate access to prepaid energy services, making it easier for people to pay for electricity and monitor their energy usage. Digital platforms can connect consumers with renewable energy providers, allowing individuals and businesses to purchase renewable energy certificates or even invest in community solar projects.

5) ICTs can be used to monitor and control energy-consuming devices and systems, such as HVAC (heating, ventilation, and air conditioning), lighting, and appliances, to optimize energy efficiency. Building management systems and home automation solutions are examples of ICT applications in this area. Energy-efficient homes, offices, and manufacturing facilities use less energy to heat, cool, and run appliances, electronics, and equipment. Energy-efficient production facilities use less energy to produce goods, resulting in price reductions. Key principles of EU energy policy on efficiency focus on producing only the energy that is really needed, avoiding investments in assets destined to be stranded, and reducing and managing demand for energy in a cost-effective way.
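A small sketch of the building-management idea, assuming a single zone: relax the comfort band when the space is unoccupied so the HVAC system spends less energy conditioning it. The setpoints and occupancy scenario below are hypothetical.

```python
# Simplified building-management rule: widen the HVAC comfort band when a zone
# is unoccupied so less heating/cooling energy is used. Values are hypothetical.

OCCUPIED_BAND_C = (21.0, 24.0)     # comfort band while people are present
UNOCCUPIED_BAND_C = (17.0, 28.0)   # relaxed band that saves energy overnight

def hvac_command(temp_c, occupied):
    low, high = OCCUPIED_BAND_C if occupied else UNOCCUPIED_BAND_C
    if temp_c < low:
        return "heat"
    if temp_c > high:
        return "cool"
    return "idle"   # within band: no energy spent on conditioning

# Overnight (unoccupied) the same 19 C reading needs no heating at all.
print(hvac_command(19.0, occupied=True))    # -> "heat"
print(hvac_command(19.0, occupied=False))   # -> "idle"
```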

The utilization of more electrical energy technologies will assist the transition to more efficient energy sources while reducing greenhouse gases and other potential pollutants. Heat pumps, for example, are an exciting addition that operate like air conditioning in reverse. Heat pumps are used in EVs and are making inroads into homes and businesses.

6) ICTs support the management and optimization of energy storage systems, including batteries known as BESS (Battery Energy Storage Systems) and pumped hydro storage. The latter moves water to higher elevations when power is available and runs it down through generators to produce electricity. These technologies store excess energy when it’s abundant and release it when demand is high. Tesla’s Megapack and Powerwalls use energy software platforms called Opticaster and Virtual Machine Mode that manage energy storage products and assist with efficient electrical transmission over long grid lines.

Tesla Master Plan 3
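The core idea behind managing a battery energy storage system can be sketched simply: charge when generation exceeds demand and discharge when demand exceeds generation, subject to capacity limits. The toy simulation below illustrates this rule with invented numbers; it is not a representation of Tesla’s Opticaster or any commercial platform.

```python
# Toy battery dispatch: charge on surplus, discharge on deficit, within capacity.
# The capacity, efficiency, and hourly profile below are invented for illustration.

CAPACITY_KWH = 100.0
ROUND_TRIP_EFFICIENCY = 0.9       # fraction of stored energy recoverable later

def step(state_kwh, generation_kw, demand_kw):
    """Advance one hour; return (new state of charge, grid import needed)."""
    surplus = generation_kw - demand_kw          # kWh over a one-hour step
    if surplus >= 0:                             # charge with whatever fits
        charge = min(surplus, CAPACITY_KWH - state_kwh)
        return state_kwh + charge * ROUND_TRIP_EFFICIENCY, 0.0
    deficit = -surplus                           # discharge to cover the gap
    discharge = min(deficit, state_kwh)
    return state_kwh - discharge, deficit - discharge

soc = 20.0
for gen, load in [(80, 40), (90, 45), (10, 60), (0, 55)]:   # sunny midday, then evening
    soc, grid_import = step(soc, gen, load)
    print(f"state of charge {soc:5.1f} kWh, grid import {grid_import:4.1f} kWh")
```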

7) In energy production facilities, ICTs can be used to monitor the condition of equipment and predict when maintenance is needed. This reduces downtime, extends equipment lifespan, and improves overall efficiency. ICT-based weather and renewable energy forecasting models improve the accuracy of predicting renewable energy generation, aiding grid operators in planning and resource allocation. Robust ICT networks can also ensure timely communication during energy-related emergencies, helping coordinate disaster response and recovery efforts.
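A minimal sketch of the predictive maintenance idea, assuming a single monitored variable: flag readings that drift well above their recent rolling baseline. The vibration figures and threshold below are invented; real systems combine many signals and far richer models.

```python
# Minimal predictive-maintenance idea: flag equipment when a monitored value
# drifts well above its recent baseline. Readings and thresholds are hypothetical.

from statistics import mean

def drift_alerts(readings, window=5, factor=1.3):
    """Return indices where a reading exceeds `factor` times the rolling mean."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = mean(readings[i - window:i])
        if readings[i] > factor * baseline:
            alerts.append(i)
    return alerts

# Simulated turbine vibration (mm/s): stable, then a sharp rise suggesting wear.
vibration = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 3.4, 3.9, 4.5]
print(drift_alerts(vibration))   # [7, 8, 9] -> schedule an inspection
```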

8) ICTs enable remote monitoring and control of energy infrastructure, such as power plants and substations. These processes use a combination of hardware and software to track key metrics and overall performance. The equipment mix includes IoT-enabled sensors that capture relevant data, while software solutions produce dashboards of alerts, trends, and updates that can also enhance the safety and reliability of energy production and distribution.

9) Digital technologies play a critical role in managing EV charging infrastructures. They can help distribute electricity efficiently to both stationary and wireless charging stations. Mobile apps provide users with real-time information about charging availability, compatibility, and costs. They can also keep drivers and passengers entertained and productive while waiting for the charging to conclude.

10) ICT can facilitate the development of microgrids in off-grid or remote areas, providing access to reliable electricity through localized energy generation and distribution systems. These alternate grids use ICTs to support the deployment of standalone renewable energy systems, providing access to electricity and related clean energy sources such as geothermal, hydroelectric, solar, and wind. Renewable energy, innovative financing, and an ecosystem approach can work together to provide innovative solutions to rural areas.

11) ICTs enable data analytics and predictive modeling to forecast energy consumption patterns, grid behavior, and the impact of impending weather conditions. Analyzing and interpreting vast amounts of data allows energy companies to optimize power generation through real-time monitoring of energy components, cost forecasting, fault detection, consumption analysis, and predictive maintenance.

These insights can inform energy planning and policy decisions. The ICT-enabled data collection, analysis, and reporting on energy access and usage can help policymakers and organizations track progress toward SDG 7 targets.
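As a very small illustration of predictive modeling, the sketch below fits a least-squares trend to hypothetical monthly consumption data and projects the next month. Production forecasts blend weather, calendar, and behavioral features, but the principle of learning from past consumption is the same.

```python
# Tiny forecasting sketch: fit a least-squares trend line to past monthly
# consumption and project the next month. The data are invented for illustration.

def fit_line(ys):
    """Return (slope, intercept) of the least-squares line over x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return slope, y_mean - slope * x_mean

monthly_kwh = [310, 322, 318, 335, 341, 352]     # hypothetical household usage
slope, intercept = fit_line(monthly_kwh)
print(f"projected next month: {slope * len(monthly_kwh) + intercept:.0f} kWh")
```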

12) ICTs support research and development efforts in the energy sector by facilitating simulations and the testing of new technologies and energy solutions. Energy and fuel choices are critical determinants of economic prosperity, environmental quality, and national security and need to be central to academic and commercial research.

To fully address ICT for SDG 7, it’s essential to confront digital divides, expand internet access, and promote digital literacy in underserved communities. Collaboration among governments, utilities, technology providers, research institutions, and civil society is vital to advancing energy access and sustainability, and to integrating ICTs into the energy sector to ensure sustainable and reliable energy development for all.

Notes

[1] Some of the categories and text for this essay were generated by ChatGPT, edited with the assistance of Grammarly, and written in line with my expertise and knowledge from teaching an ICT and SDGs course for six years.

Citation APA (7th Edition)

Pennings, A.J. (2023, Sept 29). ICTs for SDG 7: Twelve Ways Digital Technologies can Support Energy Access for All. apennings.com https://apennings.com/science-and-technology-studies/icts-for-sdg-7-twelve-ways-digital-technologies-can-support-energy-access-for-all/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches courses in ICT for sustainable development as well as broadband networks and sensing technologies. From 2002 to 2012, he was on the faculty of New York University, and he also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

The Increasing Value of Science, Technology, and Society Studies (STS)

Posted on | August 27, 2023 | No Comments

I regularly teach a course called Introduction to Science, Technology, and Society Studies (STS). It investigates how science and technology both shape and are shaped by society. The course seeks to understand their cultural, economic, ethical, historical, and political dimensions by investigating the dynamic interplay between these key factors of modern life.

Below I outline class topics, list major universities offering similar programs, and introduce some general areas of STS research. The scholarship produced by STS is used worldwide by engineers, journalists, legislators, policy-makers, as well as managers and other industry actors. It also has relevance to the general public engaged in climate, health, digital media, and other societal issues arising from science and technology adoption.

In class, we cover the following topics: Artificial Intelligence, Biomedicine, Cyberspace, Electric Vehicles and Smart Grids, Nanotechnology, Robotics, and even Space Travel. Tough subjects, but just as challenging is the introduction of perspectives from business, cognitive science, ethics, futurism, humanities, and social sciences like politics that can provide insights into relationships between science, technology, and society.[1]

STS is offered by many of the most highly-rated universities, often in Engineering programs but also in related Environment, Humanities, and Medical programs.

Although I teach in South Korea, the program was developed at Stony Brook University (SBU) in New York as part of the Department of Technology and Society (DTS), offering BS, MS, and PhD degrees at the College of Engineering and Applied Sciences (CEAS). The DTS motto is “Engineering has become much too important to be left to the engineers,” which paraphrases James Bryant Conant, who wrote after examining the results of the atomic bomb on Hiroshima that, “Science is much too important to be left to the scientists.” DTS programs prepare graduates with the technical and science capacity to collaborate productively with sister engineering and science departments at SBU and SUNY Korea while applying social science expertise and humanistic sensibility to holistic engineering education.

Our DTS undergraduate program at SUNY Korea has an additional emphasis on information and communication technologies (ICT) due to Korea’s leadership in this area.

Notable STS Programs

– Massachusetts Institute of Technology (MIT) has a Program in Science, Technology, and Society and is considered a founder of the field, dating to the early 1970s.

– Stanford University’s Program in Science, Technology, and Society explores scientific and technological developments’ social, political, and ethical dimensions.

– University of California, Berkeley’s Science, Technology, Medicine, and Society Center addresses the social, cultural, and political implications of science and technology. It is known for its engagement with critical theory and social justice issues.

– Harvard University’s Program on Science, Technology, and Society is part of the John F. Kennedy School of Government and provides a platform for examining the societal impact of science and technology through various courses and research opportunities.

– Cornell University, New York: Cornell’s Department of Science and Technology Studies was an early innovator in this area and offers undergraduate and graduate programs focusing on the history, philosophy, and social aspects of science and technology.

– The University of Edinburgh’s Science, Technology, and Innovation Studies Department in the United Kingdom is known for its research and teaching in the field.

– University of California, San Diego’s Science Studies Program is part of the Department of Literature and offers interdisciplinary courses that examine the cultural, historical, and ethical dimensions of science and technology.

– The University of Twente in the Netherlands has a renowned Science, Technology, and Society Studies program emphasizing a multidisciplinary approach.

– In Sweden, Lund University’s Department of Sociology offers a strong STS program that covers topics such as the sociology of knowledge, science communication, and the ethical aspects of technology.

– York University in Canada has a Science and Technology Studies program that encourages critical thinking about the role of science and technology in contemporary culture.

While this list is incomplete, let me mention the Department of Technology and Society (DTS) and its history at Stony Brook University, which also dates back to the early 1970s. The Department of Technology and Society at SUNY Korea offers its degrees from DTS in New York, including BS and MS degrees in Technological Systems Management and a PhD in Technology, Policy, and Innovation.

Many engineering programs have turned to STS to provide students with conceptual tools to think about engineering problems and solutions in more sophisticated ways. It is often allied with Technology Management programs that include business perspectives and information technology practices. Some programs feature standalone courses on the sociocultural and political aspects of technology and engineering, often taught by faculty from outside the engineering school. Others incorporate STS material into traditional engineering courses, e.g., by making ethical or societal impact assessments part of capstone projects.

So, Science, Technology, and Society Studies (STS) scholars study the complex interplay between these domains to understand how they influence each other and impact human life.

Key Research Inquiries

Historical Context – STS scholars often delve into the past developments of scientific discoveries and technological innovations, as well as the social and cultural contexts in which they emerged. They readily explore the historical development of scientific and technical knowledge to uncover how specific ideas, inventions, and discoveries have emerged and changed over time. This exercise helps to contextualize the current state of science and technology and understand their origins.

Social Implications – STS emphasizes the social consequences of scientific and technological advancements. This examination relates to ethics, equity, power dynamics, and social justice issues. For instance, STS might analyze how certain technologies disproportionately affect different groups within society or how they might be used to reinforce existing inequalities.

Policy and Governance – STS researchers analyze how scientific and technological innovations are regulated, legislated, and governed by local, national, and international policies. They explore how scientific expertise, public opinion, industry interests, and political considerations influence policy decisions. They also assess the effectiveness of these policies in managing potential risks and benefits.

Public Perception and Communication – STS studies also explore how scientific information and technological advancements are communicated to the public. These inquiries involve investigating how public perceptions and attitudes towards science and technology are formed. They recognize that media narratives and communication channels influence these perceptions.

Public Engagement and Input – STS emphasizes the importance of involving the public in discussions about scientific and technological matters. It examines how scientific knowledge is communicated to the public, how public perceptions influence scientific research, and how public input can shape technological development.

Social Construction of Science and Technology – STS emphasizes that science and technology are not solely products of objective inquiry or innovation but are also influenced by social and cultural factors. It examines how scientific knowledge is constructed, contested, and accepted within different communities and how economic, political, and cultural forces shape technologies.

Ethical and Moral Considerations – The STS field often addresses scientific and technological advancements’ ethical and moral implications. This analysis includes discussions on the responsible development of new technologies, the potential for unintended consequences, and the distribution of benefits and risks across different social groups.

Innovation Studies – STS scholars also study innovation processes, including how scientific knowledge translates into technological applications, how creative ecosystems are established, and how collaboration between researchers, policymakers, and industry actors contributes to technological progress.

Environmental Analysis – Science, Technology, Society, and Environment (STSE) Studies interrogate how scientific innovations, technology investments, and industrial applications affect human society and the natural environment. Education is an important component as people make decisions that often guide the environmental work of scientists and engineers. STS also investigates how scientific and technological tools can help create more climate-resilient urban and rural infrastructures.

Technological Determinism – STS often confronts the idea of technological determinism, which suggests that technology significantly drives social change. It investigates the institutional factors and human agency that significantly shape technological development and its impacts while recognizing science and technology’s driving forces.

Digital Culture and Network Infrastructure – Some STS scholars study the overall media environment, the culture it engenders, and its enabling frameworks. This work considers the interaction and interdependence of various media forms and the linkages within and between networks. These scholars explore how communication speed, information storage, and digital processing influence human perception, culture, economics, and the environment. Their inquiries often include raising questions about privacy, censorship, propaganda, and the responsible use of media technologies.

Energy and Carbon Dependence – While engineers study the chemical, electrical, electromagnetic, mechanical, and thermodynamic aspects of energy, STS scholars examine the central role of these energies in modern life. Again, they take various multidisciplinary, social-scientific perspectives on energy and environment, analyzing the economic, political, and social aspects of the production and consumption of energy, including the controversies, domestication, and innovation of new forms of energy.

Interdisciplinary Approach – As mentioned throughout this post, STS is inherently interdisciplinary, drawing on insights from sociology, anthropology, history, philosophy, political science, and more. This multidisciplinary perspective allows for a comprehensive examination of the complex relationships between science, technology, and society.

Conclusion

STS is relevant in addressing contemporary issues such as artificial intelligence, biotechnology, environmental challenges, privacy concerns, and more. It encourages a holistic understanding of the complex interactions between science, technology, and society, crucial for making informed decisions and policies in an increasingly technologically driven world. It is research-driven, using both quantitative and qualitative methods. By studying STS interactions, it aims to contribute to more informed decision-making, responsible innovation, and a better understanding of the role of science and technology in modern societies.

Notes

[1] I have previously described how I use the 4 Cs of the cyberpunk genre for techno-social analysis.
[2] Some of the categories and text for this essay were generated by ChatGPT and edited with the use of Grammarly, in line with additional knowledge from my teaching the STS course for six years.

Citation APA (7th Edition)

Pennings, A.J. (2023, Aug 27). The Increasing Value of Science, Technology, and Society Studies (STS). apennings.com https://apennings.com/technologies-of-meaning/the-value-of-science-technology-and-society-studies-sts/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he started programs in Digital Communications and Information Systems Management while teaching digital economics. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Pressing Global Standards for Internet Protocols

Posted on | July 14, 2023 | No Comments

Developing new standards that would allow computers to connect and exchange information was central to the growth of international data networks. Computers and telecommunications networks are routinized codes and procedures constituted and shaped technologically by economic, engineering, and political decisions. Telecommunications requires agreement about the practices of interconnection, the interaction protocols, and the technical standards needed to couple disparate nodes. This post looks at the importance of standards development as the Internet emerged. Several powerful contenders developed competing designs before TCP/IP eventually emerged as the global solution. The OSI (Open Systems Interconnection) model developed under the International Organization for Standardization (ISO), for example, was influential but turned out to be a fatal distraction for many companies.

Standards sometimes emerge out of functionality, sometimes out of cooperation, and often out of pure economic power. Each of these conditions was present in the fight to develop telecommunications equipment for international data communications during the early 1970s and into the 1980s. Standards allow different types of equipment to work together. Choices of standards involve the exclusion of some specifications and the inclusion of others. Standards create competitive advantages for companies. Ultimately, these standards determine whose equipment will be used and whose will either be scrapped or never get off the design board. This is even more the case when standards bridge national boundaries, where the protocols and equipment have to be harmonized for effective technical communications.

Users (usually international corporations), international coordinating bodies, and computer equipment manufacturers were all starting to react to the new economic conditions of the 1970s. The movement to floating foreign exchange rates and the increased demand for ICT were especially problematic. Banks and other financial institutions such as the New York Stock Exchange (NYSE) were also very keen to develop data solutions to expand their scope over wider market areas, speed up “back office” data processing services, and provide new services.

Meanwhile, the ITU began soliciting the positions of its member nations and associated corporations regarding their plans to develop data communications and possible joint solutions. Perhaps most importantly, IBM’s Systems Network Architecture (SNA), a proprietary network, had the force of the monolithic computer corporation behind it. SNA was a potential de facto standard for international data communications because of the company’s overwhelming market share in computers.

Several other companies came out with proprietary networks as well during the mid-1970s. Burroughs, Honeywell, and Xerox all drew on ARPANET technology but designed their networks to work only with the computers they manufactured.[1] As electronic money and other desired services emerged worldwide, these three stakeholders (users, the ITU, and computer manufacturers) attempted to develop the conduits for the world’s new wealth.

International organizations were also key to standards development in the arena of international data communications. The ITU and the ISO initiated international public standards on behalf of their member states and telecommunications agencies. The ITU’s Consultative Committee on International Telegraphy and Telephony (CCITT) was responsible for coordinating computer communication standards and policies among its member Post, Telephone, and Telegraph (PTT) organizations. This committee produced “Recommendations” for standardization, which usually were accepted readily by its member nations.[2] As early as 1973, the ITU started to develop its X-series of telecommunications protocols for data packet transfer (X indicated data communications in the CCITT’s taxonomy).

Another important standards body mentioned above is the International Organization for Standardization (ISO). The ISO was formed in 1946 to coordinate standards in a wide range of industries. In this case, it represented primarily the telecommunications and computer equipment manufacturers. ANSI, the American National Standards Institute, represented the US.

Controversy emerged in October 1974 over IBM’s SNA network, with which the Canadian PTT had taken issue. The Trans-Canada Telephone System (TCTS) wanted to produce and promote its own packet-switching network that it called Datapac. It had been developing its own protocols and was concerned that IBM would develop monopolistic control over the data communications market if allowed to continue to build its own transborder private networks. Although most computers connected at the time were IBM, the TCTS wanted circuitry that would allow other types of computers to use the network.

Both sides came to a “standoff” in mid-1975 as IBM wanted the PTT to use its SNA standards and the carrier tried to persuade IBM to conform to Canada’s requirements. The International Telecommunications Union attempted to resolve the situation by forming an ad hoc group to come up with universal standards for connecting “public” networks. Britain, Canada, and France, along with BBN spin-off Telenet from the US, started to work on what was to become the X.25 data networking standard.

The ITU’s CCITT, which represented the interests of the PTT telecommunications carriers, proposed the X.25 and X.75 standards out of a sense of mutual interest among its members in retaining their monopoly positions. US representatives, including the US Defense Communications Agency, were pushing the new TCP/IP protocols developed by ARPANET because of their inherent network and management advantages for computer operators. Packet-switching broke up information and repackaged it in individual packets of bits that needed to be passed through the telecommunications circuit to the intended destination. TCP gave data processing managers more control because it was responsible for initiating and setting up the connection between hosts.

For this to work, all the packets must arrive safely and be placed in the proper order. To get reliable information, a data-checking procedure needs to catch packets that are lost or damaged. TCP placed this responsibility at the computer host, while X.25 placed it within the network, and thus under the control of the network provider. The US pushed hard for the TCP/IP standard in the CCITT proceedings but was refused by the PTTs, who had other plans.[3]

Tensions increased due to a critical timeframe. The CCITT met only every four years to vote on new standards, and it wanted to specify a protocol in time for the plenary coming together in September 1976. Otherwise, it would have to wait until 1980.

The X.25 standards were developed and examined throughout the summer of 1976 and approved by the CCITT members in September. The hastily contrived protocol was approved over the objections of US representatives who wanted TCP/IP institutionalized. The PTTs and other carriers argued that TCP/IP was unproven and that requiring its implementation on all the hosts they would serve was unreasonable. Given ARPANET hosts’ difficulty implementing TCP/IP by 1983, their concerns had substance. X.25 and another standard, X.75, put the PTTs in a dominant position regarding datacom, despite the robustness of computer innovations and the continuing call by corporations for better service.

The ARPANET’s packet-switching techniques made it into the commercial world with the help of the X-series of protocols defined by the ITU in conjunction with some former ARPANET employees. A store-and-forward technology rooted in telegraphy, it passed data packets over a special network to find the quickest route to their destination. What was needed was an interface to connect a corporation’s or research institute’s computer to the network.

The X.25 protocol was created to provide the connection from the computer to the data network. At the user’s firm, “dumb” terminals, word processors, mainframes, and minicomputers (known in the vernacular as DTE, or Data Terminal Equipment) could be connected to the X.25 interface equipment with technology called PADs (Packet Assemblers/Disassemblers). The conversion of data from the external device to the X.25 network was transparent to the terminal and would not affect the message. An enterprise could build its own network by installing a number of switching computers connected by high-speed lines (usually 56 kbps up to the late 1980s).

X.25 connected these specially designed computers to the data network. The network could also be set up by a separate company or government organization to provide data networking services to customers. In many cases a hybrid network could be set up combining private facilities with connections to a public-switched data network.[4]

Developed primarily by Larry Roberts from ARPA, who later went to work with Telenet’s value-added networks, X.25 was a compromise that provided basic data communications for transnational users while keeping the carriers in charge. The standard was eagerly anticipated by the national PTTs, which were beginning to realize the importance of data communications and the danger of allowing computer manufacturers to monopolize the standards process by developing proprietary networks. What was surprising, though, was the endorsement of X.25 by the transnational banks and other major users of computer communications. As Schiller explained:

  • What is unusual is that U.S. transnational corporations, in the face of European intransigence, seem to have endorsed the X.25 standard. In a matter of a few months, Manufacturers Hanover, Chase Manhattan, and Bank of America announced their support for X.25, the U.S. Federal Reserve bruited the idea of acceptance, and the Federal Government endorsed an X.25-based interim standard for its National Communications System. Bank of America, which on a busy day passes $20 billion in assets through its worldwide network “cannot stall its expansion planning until IBM gives its blessing to a de facto international standard,” claims one report. Yet even more unusual, large users’ demands found their mark even over the interests of IBM, with its tremendous market share of the world’s computer base. In summer, 1981, IBM announced its decision to support the X.25 standard within the United States.[5]

Telenet subsequently filed an application with the FCC to extend its domestic value-added services internationally using the X.25 standard, and a number of PTTs, such as France’s Transpac, Japan’s DDX, and the British Post Office’s PSS, also converted to the new standard. Computer equipment manufacturers were forced to develop equipment for the new standard. This was not universally criticized, as the standards provided a potentially large audience for new equipment.

Although the X-series did not resolve all of the issues for transnational data networking users, it did provide a significant crack in the limitations on international data communications and a system that worked well enough for the computers of the time. Corporate users as well as the PTTs were temporarily placated. A number of privately owned network service providers such as Cybernet and Tymnet used the new protocols, as did new publicly owned networks such as Uninet, Euronet, and Nordic Data Network.

In another attempt to preclude US dominance in networking technology, the British Standards Institute proposed to the ISO in 1977 that the global data communications infrastructure needed a standard architecture. The move was controversial because of the recent work and subsequent unhappiness over X.25. The next year, members of the International Organization for Standardization (ISO), namely Japan, France, the US, Britain, and Canada, set out to create a new set of standards they called Open Systems Interconnection (OSI), using generic components that many different equipment manufacturers could offer. Most equipment for telecommunications networks was built by national electronics manufacturers for domestic markets, but the internationalization of communications required a different approach because multiple countries needed to be connected, and that required compatibility. Work on OSI was done primarily by Honeywell Information Systems, which actually drew heavily on IBM’s SNA (Systems Network Architecture). The layered model was initially favored by the EU, which was suspicious of the predominant US protocols.

Libicki describes the process:

  • “The OSI reference model breaks down the problem of data communications into seven layers; this division, in theory, is simple and clean, as show in Figure 4. An application sends data to the application layer, which formats them; to the presentation layer, which specifies byte conversion (e.g. ASCII, byte-ordered integers); to the session layer, which sets up the parameters for dialogue, to the transport layer, which puts sequence numbers on and wraps checksums around packets; to the network layer, which adds addressing and handling information; to the data-link layer, which adds bytes to ensure hop-to-hop integrity and media access; to the physical layer, which translates bits into electrical (or photonic) signals that flow out the wire. The receiver unwraps the message in reverse order, translating the signals into bits, taking the right bits off the network and retaining packets correctly addressed, ensuring message reliability and correct sequencing , establishing dialogue, reading the bytes correctly as characters, numbers, or whatever, and placing formatted bytes into the application. This wrapping and unwrapping process can be considered a flow and the successive attachment and detachment of headers. Each layer in the sender listens only to the layer above it and talks only to the one immediately below it and to a parallel layers in the receiver. It is otherwise blissfully unaware of the activities of the other layers.”[6]
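To make the wrapping and unwrapping Libicki describes concrete, here is a minimal sketch that adds one text "header" per OSI layer on the way down the stack and strips them in reverse on the way up. It is purely illustrative; real stacks exchange binary headers carrying addressing, sequencing, and checksum state.

```python
# Illustrative only: wrap a message in one header per OSI layer on the way down,
# then strip the headers in reverse on the way up, as the quoted passage describes.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def send(payload):
    for layer in LAYERS:                       # top of the stack downward
        payload = f"[{layer}]{payload}"
    return payload                             # what goes out "on the wire"

def receive(frame):
    for layer in reversed(LAYERS):             # bottom of the stack upward
        frame = frame.removeprefix(f"[{layer}]")
    return frame

wire = send("GET /index.html")
print(wire)              # [physical][data-link][network]...[application]GET /index.html
print(receive(wire))     # GET /index.html
```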

Specifying protocols before their actual implementation turned out to be bad policy. Unfortunately for Japan and Europe, which had large domestic equipment manufacturers and did not want the US to control international telecommunications equipment markets, the opposite happened. These countries lost valuable time developing products with OSI standards while the computer networking community increasingly used TCP/IP. As the Internet took off, the manufacturing winners were companies like Cisco and Lucent. They ended up years ahead of other telecom equipment manufacturers and gave the US the early advantage in Internetworking.[7]

In another post, I explore the engineering of a particular political philosophy into TCP/IP.

Citation APA (7th Edition)

Pennings, A.J. (2023, July 14). Pressing Global Standards for Internet Protocols. apennings.com https://apennings.com/digital-coordination/pressing-global-standards-for-internet-protocols/


Notes

[1] Abbate, J. (1999) Inventing the Internet. Cambridge, MA: The MIT Press. p. 149.
[2] Abbate, J. (1999) Inventing the Internet. Cambridge, MA: The MIT Press. p. 150.
[3] ibid, p. 155.
[4] Helmers, S.A. (1989) Data Communications: A Beginner’s Guide to Concepts and Technology. Englewood Cliffs, NJ: Prentice Hall. p. 180.
[5] Schiller, D. (1982) Telematics and Government. Norwood, NJ: Ablex Publishing Corporation. p. 109.
[6] Libicki, M.C. (1995) “Standards: The Rough Road to the Common Byte.” In Kahin, B. and Abbate, J. Standards Policy for Information Infrastructure. Cambridge, MA: The MIT Press. pp. 46-47.
[7] Abbate, p. 124.

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches broadband technologies and policy. From 2002 to 2012, he was on the faculty of New York University, where he taught digital economics while managing programs addressing information systems and telecommunications. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Public and Private Goods: Social and Policy Implications

Posted on | May 24, 2023 | No Comments

In a previous related post, I wrote about how digital content and services can be considered “misbehaving economic goods” because most don’t conform to the standard product that is individually owned and consumed in its entirety. In this post, I expand that analysis to a wider continuum of different types of public and private goods.

Economics is mainly based on the assumption that when a good or service is consumed, it is used up wholly by its one owner. But not all goods and services fit this standard model. This post looks at four different types of economic goods:

    – private goods,
    – public goods,
    – common goods, and
    – club goods.

Each of these economic product types displays varying degrees of “rivalry” and “excludability.” These terms refer to 1) the degree of consumption or “subtractability,” and 2) whether non-paying consumers can be excluded from consumption. In other words, does the product disappear as it is consumed? And how easy is it to protect the product from unauthorized consumption? Understanding the characteristics of various goods and services can help guide economic processes towards more sustainable practices while maintaining high standards of living.
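Rivalry and excludability define a simple two-by-two grid, and the four categories in this post are its quadrants. Here is a minimal sketch of that grid, using the examples discussed below.

```python
# The rivalry/excludability grid that organizes the rest of this post.

def classify(rivalrous, excludable):
    if rivalrous and excludable:
        return "private good"       # e.g., an apple
    if rivalrous and not excludable:
        return "common good"        # e.g., fish in the open ocean
    if not rivalrous and excludable:
        return "club good"          # e.g., a movie screening
    return "public good"            # e.g., broadcast television

print(classify(rivalrous=True, excludable=True))    # private good
print(classify(rivalrous=False, excludable=False))  # public good
```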

Media products like a cinema showing or a television program have different characteristics. They are not consumed by an individual owner, but how difficult is it to restrict non-paying viewers from enjoying them? Cinemas can project one movie to large groups because it is not diminished by any one viewer, although they need walls and security to keep non-payers out.

TV and radio began by broadcasting a signal out to a large population. Anyone with a receiver could enjoy the broadcast. Cable distribution and encryption techniques allowed more channels and the ability to monetize more households. These variations raise a number of questions about the ownership and consumption of different types of products and their economic analysis.

Over-the-top (OTT) television services have introduced new models. They don’t broadcast but require a subscription. Programming is not subtracted, and the platforms provide some exclusionary techniques to keep unauthorized viewers out. It is possible to share passwords among viewers, and most OTT services have warned consumers that they will charge more if passwords are shared outside the “household,” a particular concern for college students leaving home for campus.

So, the characteristics of goods and services raise many questions about how society should organize itself to offer these different types of economic products. One issue is government regulation. Media products and services have traditionally required a fair amount of government regulation and sometimes government ownership of key resources. This is primarily because the main economic driver was the electromagnetic spectrum, which governments claimed for the public good. Should the government claim control over the sources of water too? Some goods, like fish, are mainly harvested from resources like lakes, rivers, and oceans that prosper if they are protected and access is restricted to prevent overuse or pollution.

This section looks at different types of goods in more detail.

Private Goods

The standard category for economic goods is private goods. Private goods are rivalrous and excludable. For example, a person eating an apple consumes that particular fruit, which is not available for rivals to eat. An apple can be cut up and shared, but it is ultimately “subtracted” from the economy. Having lived in orchard country, I know you can enter and steal an apricot or apple, but because it’s bulky, you are not likely to take much.

Economists like to use the term households, partially because many products, such as a refrigerator or a car, are shared among a small group. Other examples of private goods include food items like ice cream, clothing, and durable goods like a television.

Common Goods

Common goods are rivalrous but non-excludable, which means they can be subtracted from the economy, but it may be difficult to exclude others. Public libraries loan out books, making them unavailable to others. Table space and comfortable chairs at libraries can also be occupied, although it is difficult to exclude people from them.

Fishing results in catches that are consumed as sashimi or other fish fillets. But the openness of lakes, rivers, and oceans makes it challenging to exclude people from fishing them. Similarly, groundwater can be drilled and piped to the surface, but it isn’t easy to keep others from consuming water from the same source.

Oil, then, is a common good. In the US, if you own the property rights to the land where you can drill, you can claim ownership of all you pump. Most other countries have nationalized their oil production and cut deals with major drilling and distribution companies to extract, refine, and sell the oil. Russia privatized its oil industries after the collapse of the communist USSR but has re-nationalized much of its control under Rosneft, a former state enterprise that is now a publicly traded monopoly.

Oil retrieved from the ground and used in an automobile is rivalrous, of course. An internal combustion engine explodes the hydrocarbons to push a piston that turns an axle and spins the wheels. The gas or petrol is consumed in the production of the energy. However, when the energy is released, by-products like carbon monoxide and carbon dioxide enter the atmosphere. This imposes a cost on others and is called an externality.

Club Goods

Club goods are non-rivalrous and excludable. In other words, they are not used up by consumption, but it is possible to exclude consumers who do not pay. A movie theater can exclude people from attending the movie, but the film is not consumed by the audience. It is not subtracted from the economy. The audience doesn’t compete for the cinematic experience; it shares the experience. That is why club goods are often called “collective goods.” These goods are usually made artificially scarce to help produce revenue.

Software is cheaply reproduced and not consumed by a user. However, the history of this product is fraught with the challenges of making it excludable. IBM did not try to monetize software and focused on selling large mainframes and “support” that included the software. But Micro-Soft (its original spelling) made excludability a major concern and developed several systems to protect software use from non-licensees.

Microsoft only recently moved to a more “freemium” model with Windows 10. Freemium became particularly attractive with the digital economy and the proliferation of apps. A limited version of an app can be offered for free to get a consumer to try it. If they like it enough, they can pay for the full application. This strategy takes advantage of network effects and makes sure the app gets out to the maximum number of people.

Public Goods

The other category to consider is those products that are not subtracted from the economy when consumed and whose characteristics make it difficult to exclude nonpaying customers. Broadcast television shows or radio programs transmitted by electromagnetic waves were early examples. Carrying media content to whoever could receive the signals, television broadcasts were not consumed by any one receiver. It was also difficult to exclude anyone who had the right equipment from enjoying the programs.

The technological exploitation of radio waves presented challenges for monetization and profitability. While some countries like Britain and New Zealand charged a fee on a device for a “licence” to receive content, advertising became an important source of income for broadcasters. It had been pioneered by broadsheets and newspapers as well as billboards and other types of public displays. As radio receivers became popular during the 1920s, it became feasible to advertise on their signals. In 1922, WEAF, a New York-based radio station, charged US$50 for a ten-minute “toll broadcast” about the merits of a Jackson Heights apartment complex. These later became known as commercials and were adopted by television as well.

Cable television delivered programming that was originally non-rivalrous but developed techniques to exclude non-paying viewers. Operators delivered content to paying subscribers via radio frequency (RF) signals transmitted through coaxial cables or light pulses emitted within fiber-optic cables. Set-top boxes were needed to de-scramble and decode cable channels and allow subscribers to view a single channel.

Unfortunately, this has led to monopoly privileges and has resulted in many viewers “cutting the cord” to cable TV. Cable TV is being challenged by streaming services that easily exclude non-paying members. Or do they? Netflix is trying to limit access for people sharing their plans with others.

Generally recognized public goods also include firework displays, flood defenses, sanitation collection infrastructure, sewage treatment plants, national defense, radio frequencies, the Global Positioning System (GPS), and crime control.

Public goods are subject to the “free-rider” phenomenon. A person living in a zone that floods regularly but who doesn’t pay the taxes that go into levees or other protections gets a “free ride.” Perhaps a better example is national defense.

Anti-Rival Goods

What happens when a product actually becomes more valuable when it is used? Is it possible for an economic good not only to remain unsubtracted but to increase in value when it is used, and to increase in value as more people use it? A text application has no value by itself, but as more people join the service, it becomes more valuable. This is an established principle called network effects.
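Network effects are often approximated with heuristics such as Metcalfe's law, which values a network in proportion to the number of possible connections among users, n(n-1)/2. The toy calculation below shows why each additional user adds more than one user's worth of value; the scaling constant is arbitrary.

```python
# Metcalfe-style heuristic: a network's value grows with the number of possible
# user-to-user connections, n*(n-1)/2. The scaling constant is arbitrary.

def network_value(users, value_per_link=1.0):
    return value_per_link * users * (users - 1) / 2

for n in (2, 10, 100, 1000):
    print(f"{n:5d} users -> value {network_value(n):>10,.0f}")
# Doubling the user base roughly quadruples the number of possible links.
```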

Merit Goods

Merit goods are goods and services that society deems valuable but that the market system does not readily supply. Healthcare and education, child care, public libraries, public spaces, and school meals are examples. Merit goods can generate positive externalities that circulate as positive effects on society. Knowledge creates positive externalities; it spills over to some who were not involved in its creation or consumption.

These are not necessarily all public goods. While medical knowledge is becoming more readily available, a surgeon can operate on only one person’s heart at a time, and her resources are not available to others. Hospital beds are limited, and medical drugs are subtracted when used. An emerging issue is medical knowledge produced through data science techniques. The notion of public goods is increasingly being used to guide policy development around clinical data.

Economic Goods and Social Policy

Market theory is based on a standard model where products are brought to market and are bought and consumed by an individual buyer, whether a person or a corporate entity. But as mentioned in a previous post, some products are misbehaving economic goods. A variety of goods do not fit this economic model and, as a result, present a number of problems for economic theory, technological innovation, and public policy.

Much political debate about economic issues quickly divides between free-market philosophies that champion enterprise and market solutions on the one hand, and economic management by government on the other. The former may be best for private goods, but other goods and services may require alternative solutions to balance production and social concerns.

Much of US technological development was ushered in during the New Deal, which recognized the role of public utilities in offering goods like electricity, telephones, and clean water for sanitation and drinking. The move to deregulation that started in the 1970s quickly became more ideological than practical, except for telecommunications. Digital technologies emerged within market philosophies, but practical questions have challenged the pure free enterprise orthodoxy.

Summary

Modern economics is largely based on the idea that goods are primarily private goods. But as we move towards a society based more on sustainable and digital processes, we need to examine the characteristics of the goods and services we value. We need to design systems of production and distribution around their characteristics.

Citation APA (7th Edition)

Pennings, A.J. (2023, May 25). Public and Private Goods: Social and Policy Implications. apennings.com https://apennings.com/media-strategies/public-and-private-goods-social-and-policy-implications/

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught digital economics and comparative political economy from 2002 to 2012 at New York University. He has also spent time as an intern and then a fellow working with a team of development economists at the East-West Center in Honolulu, Hawaii.

Remote Sensing Technologies for Disaster Risk Reduction

Posted on | May 22, 2023 | No Comments

I teach a graduate class: EST 561 – Sensing Technologies for Disaster Risk Reduction. Most of the focus is on remote sensing satellites, but drones, robots, and land-based vehicles are also important. Broadly speaking, we say sensing technologies involve ‘acquiring information at a distance.’ This could be a satellite sensing the quality of the crops in a valley, or a commercial airline using radar to determine the weather ahead. It could also be a car navigating through hazardous snow or fog.

Remote sensing is a key technology for disaster risk reduction (DRR) and can be useful in disaster management situations as well. The Sendai Agreement, signed in Japan in 2015, was developed in response to the increasing frequency and severity of disasters around the world, such as droughts, hurricanes, fires, and floods. These events have resulted in significant loss of life, damage to infrastructure, and economic losses. The Sendai Agreement stressed disaster understanding, governance, and investing in resilience and preparedness for effective response.

Sensing technologies can provide valuable information about potential hazards, assess their impact, and support response and recovery efforts. This information can support decision-makers and emergency responders before, during, and after disasters. By providing high-resolution maps and imagery (either real-time or archived for analysis over time), they can identify vulnerable areas and monitor changes in the environment, such as changes in land use, crop health, deforestation, and urbanization. They can also monitor structural damage in buildings and infrastructure. For example, Seattle-based drone maker BRINC sent drones to Turkey’s earthquake-ridden Antakya region to view areas that first responders couldn’t reach.

Remote sensing is conducted from a stable platform and observes targets from a distance. This information provides early warning signs of natural hazards, such as floods, wildfires, and landslides, by monitoring changes in environmental conditions, such as rainfall, temperature, and vegetation health. Sensing technology can quickly assess damage after a disaster, allowing decision-makers to prioritize response efforts and allocate resources more effectively. Remote sensing can also provide baseline data on the environment and infrastructure, which can be used to identify potential disaster risks and plan for response and recovery efforts.
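One common way remote sensing converts recorded energy into information about vegetation health is the Normalized Difference Vegetation Index (NDVI), computed from the red and near-infrared bands of an image. The sketch below applies the formula to invented pixel reflectance values; operational workflows run it over calibrated satellite imagery.

```python
# NDVI = (NIR - Red) / (NIR + Red): healthy vegetation reflects strongly in the
# near-infrared and absorbs red light, so values near +1 suggest dense, healthy
# cover while values near 0 suggest bare soil or stressed crops.
# The pixel reflectance values below are invented for illustration.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

pixels = {
    "irrigated field": (0.55, 0.08),
    "drought-stressed field": (0.30, 0.18),
    "bare soil": (0.20, 0.17),
}
for name, (nir, red) in pixels.items():
    print(f"{name:>24}: NDVI = {ndvi(nir, red):+.2f}")
```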

This post provides context for our entry into the seven processes of remote sensing that are useful for identifying discrete strategies for analytical purposes. These involve understanding sources of illumination, possible interference in the atmosphere, interactions with the target, recording of energy by the sensor, processing of the information, interpretation, and analysis, as well as the application of data in disaster risk reduction and management situations.

Citation APA (7th Edition)

Pennings, A.J. (2023, May 22). Remote Sensing Technologies for Disaster Risk Reduction. apennings.com https://apennings.com/digital-geography/remote-sensing-technologies-for-disaster-risk-reduction/


© ALL RIGHTS RESERVED



Anthony J. Pennings, Ph.D. is a Professor in the Department of Technology and Society, State University of New York, Korea, where he manages an undergraduate program with a specialization in ICT4D. After getting his PhD from the University of Hawaii, he moved to New Zealand to teach at Victoria University in Wellington. From 2002 to 2012, he was on the faculty of New York University. His US home is in Austin, Texas.

It’s the E-Commerce, Stupid

Posted on | May 7, 2023 | No Comments

The U.S. developed a comparative advantage in Internet e-commerce partly because of its policy stance. It recognized from early on, when it amended the charter of the National Science Foundation to allow commercial traffic, that the Internet had vast economic potential. Consequently, a strategic U.S. policy agenda on e-commerce emerged as a high priority in the mid-1990s.

On July 1, 1997, the Clinton administration held a ceremony in the East Room of the White House to announce its new initiative, A Framework for Global Electronic Commerce. Essentially, it was a hands-off approach to online business, guided by the following five principles:

– The private sector should lead the development of the Internet and electronic commerce.
– Government should avoid undue restrictions on electronic commerce.
– Where government is needed, its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.
– Governments should recognize the unique qualities of the Internet.
– Electronic commerce over the Internet should be facilitated on a global basis.

Clinton also asked Treasury Secretary Robert Rubin to prevent "discriminatory taxes on electronic commerce" and U.S. Trade Representative Charlene Barshefsky to petition the World Trade Organization to make the Internet a free-trade zone within the year. On February 19, 1998, the U.S. submitted a proposal to the WTO General Council requesting that bit-based electronic transmissions continue to be spared burdensome tariffs.[1]

The WTO adopted the Declaration on Global Electronic Commerce on May 20, 1998. Members agreed to “continue their current practice of not imposing customs duties on electronic transmissions.” They also set out to study the trade-related aspects of global Internet commerce, including the needs of developing countries and related work in other international forums.[2]

Later that year, the OECD held a ministerial meeting on electronic commerce in Ottawa, Canada, and the WTO General Council adopted its Work Programme on Electronic Commerce at a September meeting. The work program mandated the WTO's Council for Trade in Services to examine and report on the treatment of electronic commerce under the GATS legal framework.

The WTO had already protected e-commerce from customs duties until 2001 by the time of the "Battle for Seattle" ministerial meeting in Washington state in late 1999. But concerns were growing as e-commerce took off during the "dot-com craze" of the late 1990s. In particular, the E.U. and other trading partners worried about the unfettered ability of software products to be downloaded across borders. France also went after Yahoo! because its auction site trafficked in Nazi memorabilia; it got the search company to remove the items and established a precedent for a nation-state to police a website based in another country. The WTO's own definition of e-commerce suggests some of the difficulties in developing meaningful trade policy.

The WTO defined e-commerce as "the production, advertising, sale, and distribution of products via telecommunications networks." This broad characterization has made it challenging to classify e-commerce under the GATT, GATS, or TRIPS agreements. Each has different parameters that influenced the rollout of e-commerce technologies such as the Internet and Internet Protocol Television (IPTV). Nevertheless, the long-awaited convergence of digital technologies required an overarching multilateral trade framework.

Notes

[1] WTO information on e-commerce from Patrick Grady and Kathleen MacMillan’s (1999) Seattle and Beyond: The WTO Millennium Round. Ottawa, Ontario: Global Economics Ltd.
[2] The Geneva Ministerial Declaration on Global Electronic Commerce, Second Session, Geneva, 18 and 20 May 1998. http://www.wto.org/english/tratop_e/ecom_e/mindec1_e.htm

Citation APA (7th Edition)

Pennings, A.J. (2023, May 7). It’s the E-Commerce, Stupid. apennings.com https://apennings.com/uncategorized/its-the-e-commerce-stupid/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD, is a professor in the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he taught comparative political economy, digital economics, and traditional macroeconomics. He also taught in the Digital Media MBA program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Deregulating U.S. Data Communications

Posted on | May 6, 2023 | No Comments

While deregulation has largely been seen as a phenomenon of the 1980s, tensions in the telecommunications policy structure can be traced back to 1959, when the FCC announced its Above 890 Decision. This determination held that carriers other than AT&T were free to use the radio spectrum above 890 MHz. The FCC maintained that the spectrum above that level was large enough and technically available for other potential service providers. The fact that no one applied to use the frequencies until many years later does not lessen its significance: the FCC was responding to those who might use the spectrum for data communications and other new services. Because of the Above 890 Decision, part of the telecommunications spectrum was now deregulated for non-AT&T use.[1]

The Bell organization responded to the FCC's decision with a discounted bulk private line service called Telpak. Although the FCC would declare AT&T's low-end offerings (12 and 24 voice circuits) discriminatory in 1976 and illegal because of their extremely low pricing, the larger Telpak bundles (60 and 240 voice circuits) would persist until the middle of 1981. Users became accustomed to them and developed property rights in them over the years as they became part of their infrastructure. AT&T had offered Telpak to deter large users from building their own private telecommunications systems. The Above 890 Decision meant that large corporations such as General Motors could use the frequencies to connect facilities in several locations and even link with clients and suppliers. Telpak's low tariffs, however, effectively undercut the costs of the private systems and convinced users to stay with Ma Bell.[2]

The Above 890 Decision took its toll on the major carriers, however, and it would have far-reaching consequences for the development of national and international telecommunications. One consequence was an alliance between potential manufacturers of microwave equipment that could operate on these frequencies and the potential new bandwidth users. The National Association of Manufacturers had a special committee on radio manufacturing that lobbied hard for permission to produce equipment operating in these ranges. Retailers such as Montgomery Ward, for example, were investigating the potential of installing their own networks to connect their mail-order houses, catalog stores, and retail stores, which were dispersed widely around the country.[4]

The biggest success, however, occurred when a small company called MCI received permission to set up a microwave system between St. Louis and Chicago. The FCC was impressed with MCI's market research, which indicated that large numbers of lower-volume users were not being served by AT&T's Telpak services. So, despite objections from AT&T, the FCC granted the tariffs for the new routes, covering both voice and customized data circuits. The MCI startup was the largest private venture initiative in Wall Street's history up to that time.[5]

The Data Transmission Company (DATRAN) made a subsequent application to provide a nationwide data communication network of the kind the Bell System was not offering. Other than a leased circuit over which the user could choose to transmit data, AT&T offered no specific data service. DATRAN provided its private line service in December 1973 and a switched data service in early 1975.[6] It ran into financial trouble, however, and never became a threat to the Bell System; unable to obtain funding, it ceased business in 1976. What it did do was stimulate AT&T into making a significant data-oriented response. In fact, AT&T initiated a crash program at Bell Labs to develop data transmission solutions and soon came up with Data-Under-Voice, an adequate solution for the time that required only minor adjustments to its existing long-line microwave systems.[7]

The term "online" emerged partly as a way to avoid the FCC's mandate to regulate all communications. While the nascent computer industry was experimenting with data transfer over telephone lines, it was coming to the attention of the FCC, whose purview under the Communications Act of 1934 was to regulate "all communication by air or wire."[8]

The agency initiated a series of "Computer Inquiries" to determine what stance, if any, it should take regarding data communications. The First Computer Inquiry, initiated during the 1960s, investigated whether data communications should be excluded from government regulation. Just as importantly, it provided an early voice for computer users to initiate change in the telecommunications network structure. It was, after all, a time when the only things attached to the telephone network were black rotary phones and a few basic modems sanctioned by the Bell System. Computer One's verdict in the early 1970s was to grant more power to corporate users to design and deploy a data communications infrastructure that would best suit their needs. The FCC subsequently created a distinction between unregulated computer services and regulated telecommunications.

Such a differentiation did not, however, ensure the successful growth and modernization of network services for eager corporate computer users. A Second Computer Inquiry was initiated in 1976 amid widespread adoption of computer technologies by the Fortune 500, but these firms still needed to use the basic telecommunications infrastructure, which had been largely built by AT&T. Although AT&T's Bell Labs had invented the transistor and connected SAGE's radars over long distances to their central computers, the company was not moving fast enough for corporate users. The Bell telephone network was preoccupied with offering universal telephone service and did not, at first, see connecting large mainframes as a major market. Its hesitancy was also the result of previous regulation: the Consent Decree of 1956 had restricted AT&T from entering the computer business as well as from engaging in any international activities.

The FCC's decision at the conclusion of the Second Computer Inquiry allowed AT&T to move into data communications through an unregulated subsidiary. However, the ultimate fate of domestic data communications would require the resolution of a 1974 antitrust suit against AT&T. In 1982, the Justice Department settled the case with a consent decree that broke up the domestic blue-chip monopoly. This action had a dramatic influence on the shaping of data communications and the Internet until the Telecommunications Act of 1996 created a whole new regulatory model.

In retrospect, Computer One and Computer Two determined that the FCC would continue to work in the interests of the corporate users and the development of data communications, even if that meant ruling against the dominant communications carrier.

Notes

[1] Schiller, D. (1982) Telematics and Government. Norwood, NJ: Ablex Publishing Corporation, p. 38.
[2] ibid, p. 42.
[3] Martin, J. (1976) Telecommunications and the Computer. New York: Prentice Hall, p. 348.
[4] Phister, M. (1979) Data Processing Technology and Economics. Santa Monica, CA: Digital Press.
[5] Phister, M. (1979) Data Processing Technology and Economics. Santa Monica, CA: Digital Press. p.79.
[6] ibid, p. 549.
[7] McGillem, C.D. and McLauchlan, W.P. (1978) Hermes Bound. West Lafayette, IN: Purdue University Office of Publications. p. 173.
[8] The transition to “online” from Schiller, D. (1982) Telematics and Government. Norwood, NJ: Ablex Publishing Corporation.

Citation APA (7th Edition)

Pennings, A.J. (2023, May 6). Deregulating U.S. Data Communications. apennings.com https://apennings.com/how-it-came-to-rule-the-world/deregulating-telecommunications/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD, is a Professor in the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he taught comparative political economy, digital economics, and traditional macroeconomics. He also taught in the Digital Media MBA program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

“Survivable Communications,” Packet-Switching, and the Internet

Posted on | April 29, 2023 | No Comments

In 1956, President Eisenhower won reelection in a landslide a few months after he signed the legislation for the national interstate highway system. Although heavily lobbied for by the auto companies, Eisenhower justified the expensive project in terms of civil defense, arguing that major urban areas needed to be evacuated quickly in case of a USSR bomber attack. The year before, the USSR had detonated a hydrogen bomb with over 1,000 times the destructive force of the atomic bomb dropped on Hiroshima. A significant concern was the possibility of a Soviet attack taking out crucial communications capabilities and leaving U.S. commanders without the ability to coordinate civil defense and the armed forces. In particular, crucial points of the national communications system could be destroyed, bringing down significant parts of the communications network.

The need for a national air defense system fed the development of the first digital network in the 1950s, the Semi-Automatic Ground Environment (SAGE), which linked a system of radar sites to a centralized computer system developed by IBM and MIT. The air defense command it served, later known as NORAD, found itself burrowed into the granite of Colorado's Cheyenne Mountain. The multibillion-dollar project also created the rudiments of the modern computer industry and helped AT&T enter the data communications business.

Lincoln had set up a telegraph room in the War Department outside the White House, but President McKinley was the first U.S. president to centralize electronic information in the White House itself. During the Spanish-American War, at the turn of the century, McKinley followed activities in both the Caribbean and the Pacific through text messages coming into Washington, DC over telegraph lines. The Cuban Missile Crisis of 1962 made obvious the need for a new command and control system to coordinate military activities and intelligence effectively. President Kennedy's face-off with Nikita Khrushchev over the deployment of Soviet missiles in Cuba, off the coast of Florida, sparked increasing interest in using computers to centralize and control information flows.

The result was the Worldwide Military Command and Control System (WWMCCS), a network of centers worldwide organized into a hierarchy for moving information from dispersed military activities and sensors to the top executive. WWMCCS used leased telecommunications lines, although data rates were still so slow that information was often put on magnetic tape and transported over land or by aircraft.[1] Unfortunately, the system failed during the Six-Day War between Egypt and Israel in 1967. Orders were sent by the Joint Chiefs of Staff to move the USS Liberty away from the Israeli coastline, but despite high-priority messages sent to the ship through WWMCCS, none were received for over 13 hours. By that time, the Israelis had attacked the ship, and 34 of the crew were killed.

In strategic terms, this communications approach had a fundamental weakness: conventional or nuclear attacks could cut communication lines and disrupt the chain of command. Political leadership was essential to the "flexible response" strategy of nuclear war, which relied on adapting and responding tactically to an escalating confrontation. The notion of "survivable communications" began to circulate in the early 1960s as a way of ensuring centralized command and control, as well as of decreasing the temptation to launch a preemptive first strike.

Paul Baran of RAND, an Air Force-sponsored think tank, took an interest in this problem and set out to design a distributed network of switching nodes. Baran had worked with Hughes Aircraft during the late 1950s, helping to create SAGE-like systems for the Army and Navy using transistor technology.[2]

Baran's eleven-volume On Distributed Communications (1964) set out a plan to develop a store-and-forward message-switching system with redundant communication links that would be used automatically if others went out of commission. Store-and-forward techniques had been used successfully by telegraph companies, which devised methods for storing incoming messages on paper tape at intermediate stations before sending them on to their destination, or to the next intermediate stop, when a line was free. At first this was done manually, but by the time Baran confronted the issue, the telegraph companies were already beginning to use computers.[3] This, however, was only one part of the solution that would form the foundation of the Internet.

While Baran's work focused more on making communication links redundant, its trajectory increasingly pointed toward computer-oriented solutions. AT&T had already built a distributed voice network for the Department of Defense, organized in "polygrids" to address survivability. Called AUTOVON, the network tried to protect itself by locating its switching centers in hardened underground facilities away from urban areas. Baran studied this system and found three major weaknesses. The first was that although AT&T's switching nodes were dispersed, the decision to switch was still located in a single operations control center.

The second problem was that the system was largely manual. Operators monitored the network from a control center, and if traffic needed to be rerouted, they would give the proper instructions to operators at the switching nodes. The third problem was maintaining the quality of the transmission: a message would have to be rerouted many times before it reached its final destination, increasing the chances of transmission problems. Baran's solution was a computerized network with both digital switching and digital transmission. Instead of routing decisions coming from a staffed control center, the nodes would make the switching determinations themselves. The messages would need to be broken up into discrete packages that could be routed separately and resent if a problem occurred.

The proposed solution was packet-switching technology. This yet-to-be-devised equipment would transmit data via addressed "packets," or what Paul Baran initially called "message blocks." Digital bits were organized into individual blocks of information that could travel separately. Instead of single dedicated lines continuously transmitting data, packets could be routed over different telecommunications paths. Still using the store-and-forward abstraction, packets were stored briefly at each node and switched onto the best route to their destination. Each packet carried an address as well as part of the message content, which could eventually include voice, video, or computer data.
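
To make the store-and-forward and packet ideas above a little more tangible, here is a minimal Python sketch, not a reconstruction of Baran's actual design: a message is split into addressed packets, each packet is routed independently over the best surviving path in a small mesh, and when a link is knocked out the remaining packets are rerouted automatically. The five-node topology, the packet size, and the failed link are hypothetical values chosen for illustration, and for brevity the route is computed per packet at the source rather than hop by hop at every node, as a true store-and-forward network would do.

    from collections import deque

    # A small redundant mesh, expressed as an adjacency list (hypothetical topology).
    NETWORK = {
        "A": {"B", "C"},
        "B": {"A", "D"},
        "C": {"A", "D"},
        "D": {"B", "C", "E"},
        "E": {"D"},
    }

    def route(network, src, dst):
        """Breadth-first search: return a shortest hop-by-hop path, or None if cut off."""
        frontier, seen = deque([[src]]), {src}
        while frontier:
            path = frontier.popleft()
            if path[-1] == dst:
                return path
            for nxt in network[path[-1]] - seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
        return None

    def send(network, src, dst, message, packet_size=8):
        """Split a message into addressed packets and route each one independently."""
        packets = [message[i:i + packet_size] for i in range(0, len(message), packet_size)]
        for seq, payload in enumerate(packets):
            path = route(network, src, dst)  # best surviving route at send time
            print(f"packet {seq} to {dst} via {path}: {payload!r}")

    if __name__ == "__main__":
        send(NETWORK, "A", "E", "SURVIVABLE COMMUNICATIONS")
        NETWORK["B"].discard("D")   # simulate a destroyed link...
        NETWORK["D"].discard("B")   # ...in both directions
        print("link B-D is down; traffic reroutes")
        send(NETWORK, "A", "E", "SURVIVABLE COMMUNICATIONS")

The point of the sketch is Baran's own: it is the redundancy of the mesh, not the hardening of any single node or control center, that keeps the message moving.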

The term "packet switching" was actually coined by Donald Davies of the National Physical Laboratory (NPL) in England. The British government had spearheaded computer technology to win the Second World War, but in its aftermath it sought to "win the peace" by cutting back on its military and nationalizing key industries. Having booted Winston Churchill out of the Prime Minister's office, the country started down a long road toward rebuilding a war-torn nation.

Britain was concerned about using computers efficiently and subsidized programs to develop data communications, but it could not compete with the U.S. Cold War mobilization that shaped computing and data communications through massive budgets, the G.I. Bill, the Space Race, and the fear of nuclear attack. The British initiatives were soon outpaced by a newly created U.S. military agency, ARPA, dedicated to researching and developing new technologies for all the branches of the U.S. Armed Forces. It would contract out the realization of Baran's ideas on survivable communications, creating the ARPANET, the first operational packet-switching network.[4]

Citation APA (7th Edition)

Pennings, A.J. (2023, Apr 29). “Survivable Communications,” Packet-Switching, and the Internet. apennings.com https://apennings.com/how-it-came-to-rule-the-world/the-cold-war/survivable-communications-packet-switching-and-the-internet/

Notes

[1] Abbate, J. (1999) Inventing the Internet. Cambridge, MA: The MIT Press. pp. 31, 134.
[2] Founded by the enigmatic Howard Hughes during the Great Depression, Hughes Aircraft was a major government contractor. Stewart Brand interview with Paul Baran, Wired, March 2001, p. 146.
[3] Abbate, J. pp. 9-21.
[4] Abbate, J. pp. 9-21.


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD, is a Professor in the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he taught comparative political economy and digital media. He also taught in the Digital Media MBA program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

  • Referencing this Material

    Copyrights apply to all materials on this blog but fair use conditions allow limited use of ideas and quotations. Please cite the permalinks of the articles/posts.
    Citing a post in APA style would look like:
    Pennings, A. (2015, April 17). Diffusion and the Five Characteristics of Innovation Adoption. Retrieved from https://apennings.com/characteristics-of-digital-media/diffusion-and-the-five-characteristics-of-innovation-adoption/
    MLA style citation would look like: "Diffusion and the Five Characteristics of Innovation Adoption." Anthony J. Pennings, PhD. Web. 18 June 2015. The date would be the day you accessed the information. View the Writing Criteria link at the top of this page to link to an online APA reference manual.

  • About Me

    Professor at State University of New York (SUNY) Korea since 2016. Moved to Austin, Texas in August 2012 to join the Digital Media Management program at St. Edwards University. Spent the previous decade on the faculty at New York University teaching and researching information systems, digital economics, and strategic communications.

    You can reach me at:

    apennings70@gmail.com
    anthony.pennings@sunykorea.ac.kr

  • Disclaimer

    The opinions expressed here do not necessarily reflect the views of my employers, past or present.