Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Pressing Global Standards for Internet Protocols

Posted on | July 14, 2023 | No Comments

Developing new standards that would allow computers to connect and exchange information was central to the growth of international data networks. Computers and telecommunications networks are routinized codes and procedures, constituted and shaped technologically by economic, engineering, and political decisions. Telecommunications requires agreement about the practices of interconnection, the interaction protocols, and the technical standards needed to couple disparate nodes. This post looks at the importance of standards development as the Internet emerged. Several powerful contenders developed competing designs before TCP/IP eventually emerged as the global solution. The OSI (Open Systems Interconnection) model developed under the International Organization for Standardization (ISO), for example, was influential, but turned out to be a fatal distraction for many companies.

Standards sometimes emerge out of functionality, sometimes out of cooperation, and often out of pure economic power. Each of these conditions was present in the fight to develop telecommunications equipment for international data communications during the early 1970s and into the 1980s. Standards allow different types of equipment to work together. Choices of standards involve the exclusion of some specifications and the inclusion of others. Standards create competitive advantages for companies. Ultimately, these standards determine whose equipment will be used and whose will either be scrapped or never get off the design board. This is even more the case when networks bridge national boundaries, where protocols and equipment have to be harmonized for effective technical communication.

Users (usually international corporations), international coordinating bodies, and computer equipment manufacturers were all starting to react to the new economic conditions of the 1970s. The movement to floating foreign exchange rates and the increased demand for ICT were especially problematic. Banks and other financial institutions such as the New York Stock Exchange (NYSE) were also very keen to develop data solutions to expand their scope over wider market areas, speed up “back office” data processing services, and provide new services.

Meanwhile, the ITU began soliciting the positions of its member nations and associated corporations regarding their plans to develop data communications and possible joint solutions. Perhaps most importantly, IBM’s Systems Network Architecture (SNA), a proprietary network, had the force of the monolithic computer corporation behind it. SNA was a potential de facto standard for international data communications because of the company’s overwhelming market share in computers.

Several other companies came out with proprietary networks as well during the mid-1970s. Burroughs, Honeywell, and Xerox all drew on ARPANET technology, but designed their networks to work only with the computers they themselves manufactured.[1] As electronic money and other desired services emerged worldwide, these three stakeholders (users, the ITU, and computer OEMs) attempted to develop the conduits for the world’s new wealth.

International organizations were also key to standards development in the arena of international data communications. The ITU and the ISO initiated international public standards on behalf of their member states and telecommunications agencies. The ITU’s Consultative Committee on International Telegraphy and Telephony (CCITT) was responsible for coordinating computer communication standards and policies among its member Post, Telephone, and Telegraph (PTT) organizations. This committee produced “Recommendations” for standardization, which were usually accepted readily by its member nations.[2] As early as 1973, the ITU started to develop its X-series of telecommunications protocols for data packet transfer (X indicated data communications in the CCITT’s taxonomy).

Another important standards body, mentioned above, is the International Organization for Standardization (ISO). The ISO was formed in 1946 to coordinate standards in a wide range of industries. In this case, it primarily represented the telecommunications and computer equipment manufacturers. ANSI, the American National Standards Institute, represented the US.

Controversy emerged in October 1974, revolving around IBM’s SNA network, which the Canadian PTT had taken issue with. The Trans-Canada Telephone System (TCTS) wanted to produce and promote its own packet-switching network, called Datapac. It had been developing its own protocols and was concerned that IBM would gain monopolistic control over the data communications market if allowed to continue building its own transborder private networks. Although most of the computers connected at the time were IBM machines, the TCTS wanted circuitry that would allow other types of computers to use the network.

The two sides came to a “standoff” in mid-1975 as IBM wanted the PTT to use its SNA standards and the carrier tried to persuade IBM to conform to Canada’s requirements. The International Telecommunication Union attempted to resolve the situation by forming an ad hoc group to come up with universal standards for connecting “public” networks. Britain, Canada, and France, along with BBN spin-off Telenet from the US, started to work on what was to become the X.25 data networking standard.

The ITU’s CCITT, which represented the interests of the PTT telecommunications carriers, proposed the X.25 and X.75 standards out of a sense of mutual interest among its members in retaining their monopoly positions. US representatives, including the US Defense Communications Agency, pushed the new TCP/IP protocols developed for the ARPANET because of their inherent network and management advantages for computer operators. Packet-switching broke up information and repackaged it in individual packets of bits that needed to be passed through the telecommunications circuit to the intended destination. TCP gave data processing managers more control because it was responsible for initiating and setting up the connection between hosts.

In order for this to work, all the packets must arrive safely and be placed in the proper order. To get reliable information, a data-checking procedure needs to catch packets that are lost or damaged. TCP placed this responsibility at the computer host, while X.25 placed it within the network, and thus under the control of the network provider. The US pushed hard for the TCP/IP standard in the CCITT proceedings but was refused by the PTTs, who had other plans.[3]
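
To make the host-based approach concrete, here is a toy sketch in Python (not the actual TCP algorithm, just an illustration of the idea) in which the receiving host uses sequence numbers and checksums to spot lost or damaged packets and reassemble the message in order, the responsibility TCP assigned to the endpoints and X.25 kept inside the network:

```python
import zlib

def make_packet(seq, payload):
    # The sender attaches a sequence number and a checksum to each chunk.
    return {"seq": seq, "payload": payload, "checksum": zlib.crc32(payload)}

def reassemble(packets, expected_count):
    """Host-side reliability: verify checksums, detect gaps, and put the
    chunks back in order (the kind of work TCP assigns to the endpoints
    and X.25 pushed into the network itself)."""
    good = {}
    for p in packets:
        if zlib.crc32(p["payload"]) != p["checksum"]:
            print(f"packet {p['seq']} damaged -> ask sender to retransmit")
            continue
        good[p["seq"]] = p["payload"]
    missing = [s for s in range(expected_count) if s not in good]
    if missing:
        print(f"packets {missing} never arrived -> ask sender to retransmit")
        return None
    return b"".join(good[s] for s in sorted(good))

# Three packets arrive out of order, and one is corrupted in transit.
chunks = [b"WIRE ", b"FUNDS ", b"NOW"]
packets = [make_packet(i, c) for i, c in enumerate(chunks)]
packets[1]["payload"] = b"GARBLED"          # simulate line noise
print(reassemble([packets[2], packets[0], packets[1]], expected_count=3))
```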

Tensions were heightened by a tight timeframe. The CCITT wanted to specify a protocol by 1976 because it met only every four years to vote on new standards. The group had to work quickly to have something ready for the CCITT plenary convening in September 1976; otherwise, it would have to wait until 1980.

The X.25 standards were developed and examined throughout the summer of 1976 and approved by the CCITT members in September. The hastily contrived protocol was approved over the objections of US representatives, who wanted TCP/IP institutionalized. The PTTs and other carriers argued that TCP/IP was unproven and that requiring its implementation on all the hosts they would serve was unreasonable. Given the difficulty ARPANET hosts had implementing TCP/IP by 1983, their concerns had substance. X.25 and another standard, X.75, put the PTTs in a dominant position regarding datacom, despite the robustness of computer innovations and the continuing call by corporations for better service.

The ARPANET’s packet-switching techniques made it into the commercial world with the help of the X-series of protocols defined by the ITU in conjunction with some former ARPANET employees. A store-and-forward technology rooted in telegraphy, packet-switching passed data packets over a special network, finding the quickest route to their destination. What was needed was an interface to connect a corporation’s or research institute’s computer to the network.

The X.25 protocol was created to provide the connection from the computer to the data network. At the user’s firm, “dumb” terminals, word processors, mainframes, and minicomputers (known in the vernacular as DTE, or Data Terminal Equipment) could be connected to the X.25 interface equipment with devices called PADs (Packet Assemblers/Disassemblers). The conversion of data from the external device to the X.25 network was transparent to the terminal and would not affect the message. An enterprise could build its own network by installing a number of switching computers connected by high-speed lines (usually 56 kbps until the late 1980s).

X.25 connected these specially designed computers to the data network. The network could also be set up by a separate company or government organization to provide data networking services to customers. In many cases, a hybrid network could be set up, combining private facilities with connections to a public switched data network.[4]

Developed primarily by Larry Roberts from ARPA, who later went to work with Telenet’s value-added networks, X.25 was a compromise that provided basic data communications for transnational users while keeping the carriers in charge. The standard was eagerly anticipated by the national PTTs who were beginning to realize the importance of data communications and the danger of allowing computer manufacturers to monopolize the standards process by developing proprietary networks. What was surprising though, was the endorsement of X.25 by the transnational banks and other major users of computer communications. As Schiller explained:

  • What is unusual is that U.S. transnational corporations, in the face of European intransigence, seem to have endorsed the X.25 standard. In a matter of a few months, Manufacturers Hanover, Chase Manhattan, and Bank of America announced their support for X.25, the U.S. Federal Reserve bruited the idea of acceptance, and the Federal Government endorsed an X.25-based interim standard for its National Communications System. Bank of America, which on a busy day passes $20 billion in assets through its worldwide network “cannot stall its expansion planning until IBM gives its blessing to a de facto international standard,” claims one report. Yet even more unusual, large users’ demands found their mark even over the interests of IBM, with its tremendous market share of the world’s computer base. In summer, 1981, IBM announced its decision to support the X.25 standard within the United States.[5]

Telenet subsequently filed an application with the FCC to extend its domestic value-added services internationally using the X.25 standard, and a number of PTT networks, such as France’s Transpac, Japan’s DDX, and the British Post Office’s PSS, also adopted the new standard. Computer equipment manufacturers were forced to develop equipment for the new standard. This was not universally criticized, as the standard promised a potentially large market for new equipment.

Although the X-series did not resolve all of the issues for transnational data networking users, it did open a significant crack in the limitations on international data communications and provided a system that worked well enough for the computers of the time. Corporate users as well as the PTTs were temporarily placated. A number of privately owned network service providers such as Cybernet and Tymnet used the new protocols, as did new publicly owned networks such as Uninet, Euronet, and the Nordic Data Network.

In another attempt to preclude US dominance in networking technology, the British Standards Institution proposed to the ISO in 1977 that the global data communications infrastructure needed a standard architecture. The move was controversial because of the recent work and subsequent unhappiness over X.25. The next year, members of the International Organization for Standardization (ISO), namely Japan, France, the US, Britain, and Canada, set out to create a new set of standards called Open Systems Interconnection (OSI), using generic components that many different equipment manufacturers could offer. Most equipment for telecommunications networks was built by national electronics manufacturers for domestic markets, but the internationalization of communications required a different approach, because multiple countries needed to be connected, and that required compatibility. Work on OSI was done primarily by Honeywell Information Systems, which drew heavily on IBM’s SNA (Systems Network Architecture). The layered model was initially favored in Europe, where there was suspicion of the predominant US protocols.

Libicki describes the process:

  • “The OSI reference model breaks down the problem of data communications into seven layers; this division, in theory, is simple and clean, as shown in Figure 4. An application sends data to the application layer, which formats them; to the presentation layer, which specifies byte conversion (e.g. ASCII, byte-ordered integers); to the session layer, which sets up the parameters for dialogue; to the transport layer, which puts sequence numbers on and wraps checksums around packets; to the network layer, which adds addressing and handling information; to the data-link layer, which adds bytes to ensure hop-to-hop integrity and media access; to the physical layer, which translates bits into electrical (or photonic) signals that flow out the wire. The receiver unwraps the message in reverse order, translating the signals into bits, taking the right bits off the network and retaining packets correctly addressed, ensuring message reliability and correct sequencing, establishing dialogue, reading the bytes correctly as characters, numbers, or whatever, and placing formatted bytes into the application. This wrapping and unwrapping process can be considered a flow and the successive attachment and detachment of headers. Each layer in the sender listens only to the layer above it and talks only to the one immediately below it and to a parallel layer in the receiver. It is otherwise blissfully unaware of the activities of the other layers.”[6]
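
A minimal sketch of the wrapping and unwrapping Libicki describes, written here in Python purely for illustration (no real protocol stack works this simply): each layer attaches its own header on the way down and strips only its peer’s header on the way up.

```python
# Illustrative only: the seven OSI layers modeled as simple header wrapping.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def send(data):
    """Each layer wraps what it receives from the layer above in its own
    header and hands the result down; the outermost wrapper is what
    'flows out the wire'."""
    message = data
    for layer in LAYERS:
        message = f"[{layer}]{message}"
    return message

def receive(message):
    """The receiver unwraps in reverse order; each layer strips only its
    peer's header and passes the rest up, unaware of the other layers."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert message.startswith(header), f"missing {layer} header"
        message = message[len(header):]
    return message

on_the_wire = send("quarterly sales figures")
print(on_the_wire)            # [physical][data-link]...[application]quarterly sales figures
print(receive(on_the_wire))   # quarterly sales figures
```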

Specifying protocols before their actual implementation turned out to be bad policy. Unfortunately for Japan and the European countries, which had large domestic equipment manufacturers and did not want the US to control international telecommunications equipment markets, the opposite happened. These countries lost valuable time developing products with OSI standards while the computer networking community increasingly used TCP/IP. As the Internet took off, the manufacturing winners were companies like Cisco and Lucent. They ended up years ahead of other telecom equipment manufacturers and gave the US the early advantage in Internetworking.[7]

In another post, I explore the engineering of a particular political philosophy into TCP/IP.

Citation APA (7th Edition)

Pennings, A.J. (2023, July 14). Pressing Global Standards for Internet Protocols. apennings.com https://apennings.com/digital-coordination/pressing-global-standards-for-internet-protocols/


Notes

[1] Abbate, J. (1999) Inventing the Internet. Cambridge, MA: The MIT Press. p. 149.
[2] Abbate, J. (1999) Inventing the Internet. Cambridge, MA: The MIT Press. p. 150.
[3] ibid, p. 155.
[4] Helmers, S.A. (1989) Data Communications: A Beginner’s Guide to Concepts and Technology. Englewood Cliffs, NJ: Prentice Hall. p. 180.
[5] Schiller, D. (1982) Telematics and Government. Norwood, NJ: Ablex Publishing Corporation. p. 109.
[6] Libicki, M.C. (1995) “Standards: The Rough Road to the Common Byte.” In Kahin, B. and Abbate, J. Standards Policy for Information Infrastructure. Cambridge, MA: The MIT Press. pp. 46-47.
[7] Abbate, p. 124.

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD, is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches broadband technologies and policy. From 2002 to 2012, he was on the faculty of New York University, where he taught digital economics while managing programs addressing information systems and telecommunications. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Public and Private Goods: Social and Policy Implications

Posted on | May 24, 2023 | No Comments

In a previous related post, I wrote about how digital content and services can be considered “misbehaving economic goods” because most don’t conform to the standard product that is individually owned and consumed in its entirety. In this post, I expand that analysis to a wider continuum of different types of public and private goods.

Economics is mainly based on the assumption that when a good or service is consumed, it is used up wholly by its one owner. But not all goods and services fit this standard model. This post looks at four different types of economic goods:

    – private goods,
    – public goods,
    – common goods, and
    – club goods.

Each of these economic product types displays varying degrees of “rivalry” and “excludability.” These terms refer to 1) the degree of consumption or “subtractability,” and 2) whether non-paying consumers can be excluded from consumption. In other words, does the product disappear in its consumption? And how easy is it to protect the product from unauthorized consumption? Understanding the characteristics of various goods and services can help guide economic processes towards more sustainable practices while maintaining high standards of living.
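
As a quick illustration of this two-by-two classification (a simplified sketch; in practice many goods fall on a continuum rather than into clean boxes), the snippet below maps combinations of rivalry and excludability onto the four categories discussed in this post:

```python
def classify(rivalrous, excludable):
    """Map the two questions to the four categories: is the good used up
    by consumption (rivalry/subtractability), and can non-payers be kept
    out (excludability)?"""
    if rivalrous and excludable:
        return "private good"      # e.g., an apple
    if rivalrous:
        return "common good"       # e.g., fish in the ocean
    if excludable:
        return "club good"         # e.g., a cinema showing
    return "public good"           # e.g., a broadcast TV signal

examples = {
    "apple":               (True, True),
    "ocean fishery":       (True, False),
    "cinema showing":      (False, True),
    "broadcast TV signal": (False, False),
}
for good, (rival, exclude) in examples.items():
    print(f"{good:20} -> {classify(rival, exclude)}")
```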

Media products like a cinema showing or a television program have different characteristics. They are not consumed by an individual owner, but it can be difficult to restrict non-paying viewers from enjoying them. Cinemas can project one movie to large groups because the film is not diminished by any one viewer, although a theater needs walls and security to keep non-payers out.

TV and radio began by broadcasting a signal out to a large population. Anyone with a receiver could enjoy the broadcast. Cable distribution and encryption techniques allowed more channels and the ability to monetize more households. These variations raise a number of questions about the ownership and consumption of different types of products and their economic analysis.

Over-the-top (OTT) television services have introduced new models. They do not broadcast, but require a subscription. Programming is not subtracted, and the platforms provide some exclusionary techniques to keep unauthorized viewers out. It is still possible to share passwords among viewers, and most OTT providers have warned consumers that they will charge more if passwords are shared outside the “household,” a particular concern for college students leaving home for campus.

So, the characteristics of goods and services raise many questions about how society should organize itself to offer these different types of economic products. One issue is government regulation. Media products and services have traditionally required a fair amount of government regulation and sometimes government ownership of key resources. This is primarily because the main economic driver was the electromagnetic spectrum, which governments claimed for the public good. Should the government claim control over the sources of water too? Some goods, like fish, are mainly harvested from resources like lakes, rivers, and oceans that prosper if they are protected and access is restricted to prevent overuse or pollution.

This section looks at different types of goods in more detail.

Private Goods

The standard category for economic goods is private goods. Private goods are rivalrous and excludable. For example, a person eating an apple consumes that particular fruit, which is not available for rivals to eat. An apple can be cut up and shared, but it is ultimately “subtracted” from the economy. Having lived in orchard country, I know you can enter and steal an apricot or apple, but because fruit is bulky, you are not likely to take much.

Economists like to use the term households, partially because many products, such as a refrigerator or a car, are shared among a small group. Other examples of private goods include food items like ice cream, clothing, and durable goods like a television.

Common Goods

Common goods are rivalrous but non-excludable, which means they can be subtracted from the economy, but it may be difficult to exclude others from them. Public libraries loan out books, making them unavailable to others. Table space and comfortable chairs at libraries can also be occupied, although it is difficult to exclude people from them.

Fishing results in catches that are consumed as sashimi or other fish fillets. But the openness of lakes, rivers, and oceans makes it challenging to exclude people from fishing them. Similarly, groundwater can be drilled and piped to the surface, but it isn’t easy to keep others from consuming water from the same source.

Oil, then, is a common good. In the US, if you own the property rights to the land where you can drill, you can claim ownership of all you pump. Most other countries have nationalized their oil production and cut deals with major drilling and distribution companies to extract, refine, and sell the oil. Russia privatized its oil industries after the collapse of the communist USSR, but has re-nationalized much of the industry under Rosneft, a former state enterprise that is now a publicly traded monopoly.

Oil retrieved from the ground and used in an automobile is rivalrous, of course. An internal combustion engine explodes the hydrocarbons to push a piston that turns an axle and spins the wheels. The gas or petrol is consumed in the production of the energy. However, when the energy is released, by-products like carbon monoxide and carbon dioxide enter the atmosphere. This imposes a cost on others and is called an externality.

Club Goods

Club goods are non-rivalrous and excludable. In other words, they are not used up by consumption, but it is possible to exclude consumers who do not pay. A movie theater can exclude people from attending the movie, but the film is not consumed by the audience. It is not subtracted from the economy. The audience doesn’t compete for the cinematic experience; it shares the experience. That is why club goods are often called “collective goods.” These goods are usually made artificially scarce to help produce revenue.

Software is cheaply reproduced and not consumed by a user. However, the history of this product is fraught with the challenges of making it excludable. IBM did not try to monetize software and focused on selling large mainframes and “support” that included the software. But Micro-Soft (its original spelling) made excludability a major concern and developed several systems to protect software use from non-licensees.

Microsoft only recently moved to a more “freemium” model with Windows 10. Freemium became particularly attractive with the digital economy and the proliferation of apps. A limited version of an app can be offered for free to get consumers to try it. If they like it enough, they can pay for the full application. This strategy takes advantage of network effects and makes sure the app gets out to the maximum number of people.

Public Goods

The other category to consider is products that are not subtracted from the economy when consumed and whose characteristics make it difficult to exclude nonpaying customers. Broadcast television shows or radio programs transmitted by electromagnetic waves were early examples. Carrying media content to whoever could receive the signals, television broadcasts were not consumed by any one receiver. It was also difficult to exclude anyone who had the right equipment from enjoying the programs.

The technological exploitation of radio waves presented challenges for monetization and profitability. While some countries like Britain and New Zealand charged a fee on a device for a “licence” to receive content, advertising became an important source of income for broadcasters. Advertising had been pioneered by broadsheets and newspapers as well as billboards and other types of public displays. As radio receivers became popular during the 1920s, it became feasible to advertise over their signals. In 1922, WEAF, a New York-based radio station, charged US$50 for a ten-minute “toll broadcast” about the merits of a Jackson Heights apartment complex. These later became known as commercials and were adopted by television as well.

Cable television delivered programming that was not rivalrous, but cable operators developed techniques to exclude non-paying viewers. They transmitted content to paying subscribers via radio frequency (RF) signals through coaxial cables, or light pulses within fiber-optic cables. Set-top boxes were needed to de-scramble and decode cable channels and allow subscribers to view individual channels.

Unfortunately, this has led to monopoly privileges and has resulted in many viewers “cutting the cord” to cable TV. Cable TV is being challenged by streaming services that can more easily exclude non-paying viewers. Or can they? Netflix is trying to limit access by people sharing their plans with others.

Generally recognized public goods also include firework displays, flood defenses, sanitation collection infrastructure, sewage treatment plants, national defense, radio frequencies, the Global Positioning System (GPS), and crime control.

Public goods are susceptible to the “free-rider” phenomenon. A person living in a zone that floods regularly but who doesn’t pay the taxes that fund levees or other protections gets a “free ride.” Perhaps a better example is national defense, which protects everyone within a country’s borders whether or not they pay taxes.

Anti-Rival Goods

What happens when a product actually becomes more valuable when it is used? Is it possible for an economic good not only to escape subtraction but to increase in value when it is used, and to increase in value further as more people use it? A text application has no value by itself, but as more people join the service, it becomes more valuable. This is an established principle called network effects.
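
Metcalfe’s law is not discussed in this post, but it is one common, and contested, heuristic for quantifying network effects: if every pair of users can potentially connect, the number of possible connections grows roughly with the square of the number of users.

```python
def possible_connections(users):
    # Metcalfe-style heuristic: every pair of users is a potential link.
    return users * (users - 1) // 2

for n in (2, 10, 100, 1000):
    print(f"{n:5} users -> {possible_connections(n):7} possible connections")
```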

Merit Goods

Merit goods are goods and services that society deems valuable but that the market system does not readily supply. Healthcare and education, child care, public libraries, public spaces, and school meals are examples. Merit goods can generate positive externalities that circulate as positive effects on society. Knowledge creates positive externalities; it spills over to people who were not involved in its creation or consumption.

These are not necessarily all public goods. While medical knowledge is becoming more readily available, a surgeon can operate on only one person’s heart at a time, and her resources are not available to others. Hospital beds are limited, and medical drugs are subtracted when used. An emerging issue is medical knowledge produced through data science techniques. The notion of public goods is increasingly being used to guide policy development around clinical data.

Economic Goods and Social Policy

Market theory is based on a standard model where products are brought to market and are bought and consumed by a single buyer, whether an individual or a corporate entity. But as mentioned in a previous post, some products are misbehaving economic goods. A variety of goods do not fit this economic model and, as a result, present a number of problems for economic theory, technological innovation, and public policy.

Much political debate about economic issues quickly divides between free-market philosophies that champion enterprise and market solutions on the one hand, and economic management by government on the other. The former may be best for private goods, but other goods and services may require alternative solutions to balance production and social concerns.

Much of US technological development was ushered in during the New Deal, which recognized the role of public utilities in offering goods like electricity, telephones, and clean water for sanitation and drinking. The move to deregulation that started in the 1970s quickly became more ideological than practical, except in telecommunications. Digital technologies emerged within market philosophies, but practical questions have challenged pure free-enterprise orthodoxy.

Summary

Modern economics is largely based on the idea that goods are primarily private goods. But as we move towards a society based more on sustainable and digital processes, we need to examine the characteristics of the goods and services we value. We need to design systems of production and distribution around their characteristics.

Citation APA (7th Edition)

Pennings, A.J. (2023, May 25). Public and Private Goods: Social and Policy Implications. apennings.com https://apennings.com/media-strategies/public-and-private-goods-social-and-policy-implications/

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD, is a Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught digital economics and comparative political economy from 2002 to 2012 at New York University. He also spent time as an intern and then a fellow working with a team of development economists at the East-West Center in Honolulu, Hawaii.

Remote Sensing Technologies for Disaster Risk Reduction

Posted on | May 22, 2023 | No Comments

I teach a graduate class: EST 561 – Sensing Technologies for Disaster Risk Reduction. Most of the focus is on remote sensing satellites, but drones, robots, and land-based vehicles are also important. Broadly speaking, we say sensing technologies involve ‘acquiring information at a distance.’ This could be a satellite sensing the quality of the crops in a valley, or a commercial airliner using radar to determine the weather ahead. It could also be a car navigating through hazardous snow or fog.

Remote sensing is a key technology for disaster risk reduction (DRR) and can be useful in disaster management situations as well. The Sendai Agreement, signed in Japan in 2015, was developed in response to the increasing frequency and severity of disasters around the world, such as droughts, hurricanes, fires, and floods. These events have resulted in significant loss of life, damage to infrastructure, and economic losses. The Sendai Agreement stressed understanding disaster risk, strengthening governance, and investing in resilience and preparedness for effective response.

Sensing technologies can provide valuable information about potential hazards, assess their impact, and support response and recovery efforts. This information can support decision-makers and emergency responders before, during, and after disasters. By providing high-resolution maps and imagery (either real-time or archived for analysis over time), they can identify vulnerable areas and monitor changes in the environment, such as changes in land use, crop health, deforestation, and urbanization. They can also monitor structural damage in buildings and infrastructure. For example, Seattle-based drone maker BRINC sent drones to Turkey’s earthquake-stricken Antakya region to view areas that first responders couldn’t reach.

Remote sensing is conducted from a stable platform that observes targets from a distance. The information gathered can provide early warning signs of natural hazards, such as floods, wildfires, and landslides, by monitoring changes in environmental conditions, such as rainfall, temperature, and vegetation health. Sensing technology can quickly assess damage after a disaster, allowing decision-makers to prioritize response efforts and allocate resources more effectively. Remote sensing can also provide baseline data on the environment and infrastructure, which can be used to identify potential disaster risks and plan for response and recovery efforts.

This post provides context for our entry into the seven processes of remote sensing, which are useful for identifying discrete strategies for analytical purposes. These involve the source of illumination, possible interference from the atmosphere, interactions with the target, the recording of energy by the sensor, the processing of the information, interpretation and analysis, and the application of the data in disaster risk reduction and management situations.
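
As one concrete illustration of the “processing” and “interpretation” steps (the index itself is not covered in this post, but it is a standard tool for the crop-health and vegetation monitoring mentioned above), the sketch below computes the Normalized Difference Vegetation Index (NDVI) from a sensor’s red and near-infrared bands, using made-up reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: healthy vegetation reflects
    strongly in near-infrared (NIR) and absorbs red light."""
    return (nir - red) / (nir + red)

# Illustrative reflectance values for a few pixels (not real scene data).
pixels = {
    "irrigated cropland":     (0.55, 0.08),
    "drought-stressed field": (0.30, 0.20),
    "bare soil and debris":   (0.20, 0.18),
}
for label, (nir, red) in pixels.items():
    print(f"{label:24} NDVI = {ndvi(nir, red):+.2f}")
```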

Citation APA (7th Edition)

Pennings, A.J. (2023, May 22). Remote Sensing Technologies for Disaster Risk Reduction. apennings.com https://apennings.com/digital-geography/remote-sensing-technologies-for-disaster-risk-reduction/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD, is a Professor in the Department of Technology and Society, State University of New York, Korea, where he manages an undergraduate program with a specialization in ICT4D. After getting his PhD from the University of Hawaii, he moved to New Zealand to teach at Victoria University in Wellington. From 2002 to 2012, he was on the faculty of New York University. His US home is in Austin, Texas.

It’s the E-Commerce, Stupid

Posted on | May 7, 2023 | No Comments

The U.S. developed a comparative advantage in Internet e-commerce partly because of its policy stance. It recognized from early on, when it amended the charter of the National Science Foundation to allow commercial traffic, that the Internet had vast economic potential. Consequently, a strategic U.S. policy agenda on e-commerce emerged as a high priority in the mid-1990s.

On July 1, 1997, the Clinton administration held a ceremony in the East Room of the White House to announce its new initiative, A Framework for Global Electronic Commerce. Essentially, it was a hands-off approach to net business, guided by the following five principles:

– The private sector should lead the development of the Internet and electronic commerce.
– Government should avoid undue restrictions on electronic commerce.
– Where government is needed, its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.
– Governments should recognize the unique qualities of the Internet.
– Electronic commerce over the Internet should be facilitated on a global basis.

Clinton also asked Treasury Secretary Robert Rubin to prevent “discriminatory taxes on electronic commerce” and the U.S. Trade Representative, Charlene Barshefsky, to petition the World Trade Organization to make the Internet a free-trade zone within the year. On February 19, 1998, the U.S. submitted a proposal to the WTO General Council requesting that bit-based electronic transmissions continue to be spared arduous tariffs.[1]

The WTO adopted the Declaration on Global Electronic Commerce on May 20, 1998. Members agreed to “continue their current practice of not imposing customs duties on electronic transmissions.” They also set out to study the trade-related aspects of global Internet commerce, including the needs of developing countries and related work in other international forums.[2]

Later that year, the OECD held a ministerial meeting on electronic commerce in Canada, and the WTO General Council adopted the Work Program on Electronic Commerce. The September meeting of the General Council mandated the WTO’s Council for Trade in Services to examine and report on the treatment of electronic commerce within the GATS legal framework.

By the time of the “Battle for Seattle” ministerial meeting in the state of Washington, the WTO had already protected e-commerce from customs duties through 2001. But concerns were growing as e-commerce took off during the “dot-com craze” of the late 1990s. In particular, the E.U. and other trading partners worried about the unfettered downloading of software products. France also attacked Yahoo! because its auction site trafficked in Nazi memorabilia. It got the search company to remove the items and established a precedent for a nation-state to police a website in another country. The WTO produced a definition of e-commerce that suggests some of the difficulties in developing meaningful trade policy.

The WTO defined e-commerce as “the production, advertising, sale, and distribution of products via telecommunications networks.” This extensive characterization has made it challenging to classify e-commerce as falling under the framework of the GATT, GATS, or TRIPS agreements. Each had different parameters that influenced the rollout of e-commerce technologies such as the Internet and Internet Protocol Television (IPTV). Nevertheless, the long-awaited convergence of digital technologies required an overarching multilateral trade framework.

Notes

[1] WTO information on e-commerce from Patrick Grady and Kathleen MacMillan’s (1999) Seattle and Beyond: The WTO Millennium Round. Ottawa, Ontario: Global Economics Ltd.
[2] The Geneva Ministerial Declaration on Global Electronic Commerce. Second Session Geneva, 18 and 20 May 1998 at http://www.wto.org/english/tratop_e/ecom_e/mindec1_e.htm

Citation APA (7th Edition)

Pennings, A.J. (2023, May 7). It’s the E-Commerce, Stupid. apennings.com https://apennings.com/uncategorized/its-the-e-commerce-stupid/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD, is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he taught comparative political economy, digital economics, and traditional macroeconomics. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Deregulating U.S. Data Communications

Posted on | May 6, 2023 | No Comments

While deregulation has largely been seen as a phenomenon of the 1980s, tensions in the telecommunications policy structure can be traced back to 1959, when the FCC announced its Above 890 Decision. This determination held that carriers other than AT&T were free to use the radio spectrum above 890 MHz. The FCC maintained that the spectrum above that level was large enough and technically available for other potential service providers. The fact that no one applied to use the frequencies until many years later does not lessen its significance: the FCC was responding to those who might use the spectrum for data communications and other new services. Because of the Above 890 Decision, part of the telecommunications spectrum was now deregulated for non-AT&T use.[1]

The Bell organization responded to the FCC’s decision with a discounted bulk private line service called Telpak. Although the FCC would declare AT&T’s low-end services (12 and 24 voice circuits) discriminatory in 1976 and illegal because of their extremely low pricing, Telpak (60 and 240 voice circuits) would persist until the middle of 1981. Users became accustomed to it and developed property rights in it over the years as it became part of their infrastructure. AT&T had offered Telpak to deter large users from building their own private telecommunications systems. The Above 890 Decision meant that large corporations such as General Motors could use the frequencies to connect facilities in several locations and even with clients and suppliers. The low tariffs, however, effectively undercut the costs of the private systems and convinced users to stay with Ma Bell.[2]

The Above 890 Decision took a toll on the major carriers, however, one that would have far-reaching consequences for the development of national and international telecommunications. One consequence was an alliance between potential manufacturers of microwave equipment that could operate on these frequencies and the potential new bandwidth users. The National Association of Manufacturers had a special committee on radio manufacture that lobbied hard for permission to produce equipment that operated in these ranges. Retailers such as Montgomery Ward, for example, were investigating the potential of installing their own networks to connect their mail order houses, catalog stores, and retail stores, which were dispersed widely around the country.[4]

The biggest success, however, occurred when a small company called MCI received permission to set up a microwave system between St. Louis and Chicago. The FCC was impressed with MCI’s market research, which indicated that large numbers of lower-volume users were not being served by AT&T’s Telpak services. So, despite objections from AT&T, the FCC granted the tariffs for the new routes with both voice and customized data circuits. The MCI startup was the largest private venture initiative in Wall Street’s history up to that time.[5]

The Data Transmission Company (DATRAN) made a subsequent application to provide a nationwide data communication network that the Bell System was not offering. Other than providing a leased circuit over which the user could choose to transmit data, AT&T was not offering any specific data service. DATRAN provided its private line service in December 1973 and a switched data service in early 1975.[6] It ran into financial trouble and never became a threat to the Bell System. Unable to obtain funding, it ceased business in 1976. What it did do was stimulate AT&T into making a significant data-oriented response. AT&T initiated a crash program at Bell Labs to develop data transmission solutions and soon came up with Data-Under-Voice, an adequate solution for the time that required only minor adjustments to its existing long-line microwave systems.[7]

The term “online” emerged as a way to avoid the FCC’s requirement to regulate all communications. While the nascent computer industry was experimenting with data transfer over telephone lines, it was coming to the attention of the FCC, whose purview according to the Communications Act of 1934 was to regulate “all communication by air or wire.”[8]

The agency initiated a series of “Computer Inquiries” to determine what stance, if any, it should take regarding data communications. The First Computer Inquiry, initiated during the 1960s, investigated whether data communications should be excluded from government regulations. But just as important, it provided an early voice for computer users to initiate change in the telecommunications network structure. It was, after all, a time in which the only things attached to the telephone network were black rotary phones and a few basic modems sanctioned by the Bell System. Computer One’s verdict in the early 1970s was to grant more power to corporate users to design and deploy a data communications infrastructure that would best suit their needs. The FCC subsequently created a distinction between unregulated computer services and regulated telecommunications.

Such a differentiation did not, however, ensure the successful growth and modernization of network services for eager corporate computer users. A Second Computer Inquiry was initiated in 1976 amidst widespread adoption of computer technologies by the Fortune 500. But these users needed the basic telecommunications infrastructure, which had largely been built by AT&T. Although AT&T’s Bell Labs had invented the transistor and connected SAGE’s radars over long distances to their central computers, the company was not moving fast enough for corporate users. The Bell telephone network was preoccupied with offering universal telephone service and did not, at first, see connecting large mainframes as a major market. Its hesitancy was also the result of previous regulation. The Consent Decree of 1956 had restricted AT&T from entering the computer business as well as engaging in any international activities.

The FCC’s decision at the conclusion of the Second Computer Inquiry allowed AT&T to move into the data communications area through an unregulated subsidiary. However, the ultimate fate of domestic data communications would require the resolution of a 1974 antitrust suit against AT&T. In 1982, a Consent Decree with the Justice Department settled the case against the domestic blue-chip monopoly and broke up the company. This action had a dramatic influence on the shaping of data communications and the Internet until the Telecommunications Act of 1996 created a whole new regulatory model.

In retrospect, Computer One and Computer Two determined that the FCC would continue to work in the interests of the corporate users and the development of data communications, even if that meant ruling against the dominant communications carrier.

Notes

[1] Schiller, D. (1982) Telematics and Government. Norwood, NJ: Ablex Publishing Corporation, p. 38.
[2] ibid, p. 42.
[3] Martin, J. (1976) Telecommunications and the Computer. New York: Prentice Hall, p. 348.
[4] Phister, M. (1979) Data Processing Technology and Economics. Santa Monica, CA: Digital Press.
[5] Phister, M. (1979) Data Processing Technology and Economics. Santa Monica, CA: Digital Press. p.79.
[6] ibid, p. 549.
[7] McGillem, C.D. and McLauchlan, W.P. (1978) Hermes Bound. West Lafayette, IN: Purdue University Office of Publications. p. 173.
[8] The transition to “online” from Schiller, D. (1982) Telematics and Government. Norwood, NJ: Ablex Publishing Corporation.

Citation APA (7th Edition)

Pennings, A.J. (2023, May 6). Deregulating U.S. Data Communications. apennings.com https://apennings.com/how-it-came-to-rule-the-world/deregulating-telecommunications/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD, is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he taught comparative political economy, digital economics, and traditional macroeconomics. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

“Survivable Communications,” Packet-Switching, and the Internet

Posted on | April 29, 2023 | No Comments

In 1956, President Eisenhower won reelection in a landslide a few months after he signed the legislation for a national interstate highway system. Although heavily lobbied for by the auto companies, Eisenhower justified the expensive project in terms of civil defense, arguing that major urban areas needed to be evacuated quickly in case of a USSR bomber attack. The year before, the USSR had detonated the infamous hydrogen bomb, with over 1,000 times the destructive force of the atomic bomb dropped on Hiroshima. A significant concern dealt with the possibility of a Soviet attack taking out crucial communications capabilities and leaving U.S. commanders without the ability to coordinate civil defense and armed forces. In particular, crucial points of the national communications system could be destroyed, bringing down significant parts of the communications network.

The need for a national air defense system fed the development of the first digital network in the 1950s, the Semi-Automatic Ground Environment (SAGE), which linked a system of radar sites to a centralized computer system developed by IBM and MIT. Later called NORAD, it found itself burrowed into the granite of Colorado’s Cheyenne Mountain. The multibillion-dollar project also created the rudiments of the modern computer industry and helped AT&T enter the data communications business.

Lincoln had set up a telegraph room in the War Department outside the White House, but President McKinley was the first U.S. president to centralize electronic information in the White House. During the Spanish-American War, at the turn of the century, McKinley followed activities in both the Caribbean and the Pacific through text messages coming into Washington, DC, over telegraph lines. The Cuban Missile Crisis in 1962 made obvious the need for a new command and control system to effectively coordinate military activities and intelligence. President Kennedy’s face-off with Nikita Khrushchev over the deployment of Russian missiles in Cuba, off the coast of Florida, sparked increasing interest in using computers for centralizing and controlling information flows.

The result was the Worldwide Military Command and Control System (WWMCCS), a network of centers worldwide organized into a hierarchy for moving information from dispersed military activities and sensors to the top executive. WWMCCS used leased telecommunications lines, although data rates were still so slow that the information was often put on magnetic tapes or disks and transported over land or via aircraft.[1] Unfortunately, this system failed during the Six-Day War between Egypt and Israel in 1967. Orders were sent by the Joint Chiefs of Staff to move the USS Liberty away from the Israeli coastline. Despite high-priority messages sent to the ship through WWMCCS, none were received for over 13 hours. By that time, the Israelis had attacked the ship, and 34 of the crew were killed.

In strategic terms, this communications approach suggested a fundamental weakness. Conventional or nuclear attacks could cut communication lines, resulting in the chain of command being disrupted. Political leadership was needed for the flexible response strategy of nuclear war that relied on adapting and responding tactically to an escalating confrontation. The notion of “survivable communications” began to circulate in the early 1960s as a way of ensuring centralized command and control as well as decreasing the temptation to launch a preemptive first strike.

Paul Baran of RAND, an Air Force-sponsored think tank, took an interest in this problem and set out to design a distributed network of switching nodes. Baran had worked with Hughes Aircraft during the late 1950s, helping to create SAGE-like systems for the Army and Navy using transistor technology.[2]

Baran’s eleven-volume On Distributed Communications (1964) set out a plan to develop a store-and-forward message-switching system with redundant communication links that would be automatically used if the others went out of commission. Store-and-forward techniques had been used successfully by telegraph companies. They devised methods for storing incoming messages on paper tape at transitional stations before sending them to their destination or the next intermediate stop when a line was free. At first, this was done manually, but by the time Baran confronted this issue, the telegraph companies were already beginning to use computers.[3] But this was only one part of the solution that would form the foundation for the Internet.

While Baran’s work focused more on making communication links redundant, the trajectory of his work increasingly encountered the need for computer-oriented solutions. AT&T had already built a distributed voice network for the Department of Defense, organized in “polygrids” to address survivability. Called AUTOVON, the network tried to protect itself by locating its switching centers in highly protected underground facilities away from urban areas. Baran studied this system and discovered three major weaknesses. The first was that although AT&T’s distributed system had dispersed switching nodes, the decision to switch was still located in a single operations control center.

The second problem was that the system was largely manual. Operators monitored the network from a control center, and if traffic needed to be rerouted, they would pass the proper instructions to operators at the switching nodes. The third problem was maintaining the quality of the transmission. A message would have to be rerouted many times before it reached its final destination, increasing the chances of transmission problems. His solution was a computerized network with both digital switching and digital transmission. Instead of the routing decisions coming from a staffed control center, the nodes would make the switching determinations themselves. The messages would need to be broken up into discrete packages that could be routed separately and resent if a problem occurred.

The proposed solution was packet-switching technology. This yet-to-be-devised equipment would transmit data via addressed “packets,” or what Paul Baran initially called “message blocks.” Digital bits were organized into individual blocks of information that could travel separately. Instead of single dedicated lines continuously transmitting data, packets could be routed through different telecommunications lines. Still using the abstraction of store-and-forward, packets were stored briefly at each node and switched to the best route to their destination. Each packet carried an address as well as a portion of the message content, which could eventually be voice, video, or computer data.
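
A highly simplified sketch of the idea, in illustrative Python rather than anything resembling Baran’s actual design: a message is broken into addressed blocks, and each node forwards a block along whichever surviving path is shortest, so traffic can flow around destroyed nodes.

```python
from collections import deque

# A small mesh of switching nodes with redundant links between them.
LINKS = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def route(src, dst, failed=frozenset()):
    """Store-and-forward routing: each block is passed node to node along
    the shortest surviving path, flowing around any knocked-out nodes."""
    frontier, paths = deque([src]), {src: [src]}
    while frontier:
        node = frontier.popleft()
        if node == dst:
            return paths[node]
        for nxt in LINKS[node]:
            if nxt not in paths and nxt not in failed:
                paths[nxt] = paths[node] + [nxt]
                frontier.append(nxt)
    return None   # destination unreachable

# Break a message into addressed blocks and route them with node B destroyed.
message = "FLASH TRAFFIC FOLLOWS"
blocks = [{"dest": "D", "seq": i, "data": message[i:i + 7]}
          for i in range(0, len(message), 7)]
for block in blocks:
    print(block["seq"], block["data"], "via", route("A", block["dest"], failed={"B"}))
```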

The term “packet-switching” was actually coined by Donald Davies of the National Physical Laboratory (NPL) in England. The British government had spearheaded computer technology to win the Second World War, but in its aftermath, it sought to “win the peace” by cutting down on its military and nationalizing key industries. Having booted Winston Churchill out of the Prime Minister’s office, the country started down a long road toward rebuilding after the devastation of the war.

The British government was concerned about using computers efficiently and subsidized programs to develop data communications, but the country could not compete with the U.S.’s Cold War mobilization, which shaped computer and data communications through its massive budgets, the G.I. Bill, the Space Race, and fear of nuclear attack. The British initiatives were soon outpaced by a newly created military agency called ARPA, dedicated to researching and developing new military technologies for all the branches of the U.S. Armed Forces. ARPA would contract out for the realization of Baran’s ideas on survivable communications, creating the ARPANET, the first operational packet-switching network.[4]

Citation APA (7th Edition)

Pennings, A.J. (2023, Apr 29). “Survivable Communications,” Packet-Switching, and the Internet. apennings.com https://apennings.com/how-it-came-to-rule-the-world/the-cold-war/survivable-communications-packet-switching-and-the-internet/

Notes

[1] Abbate, J. (1999) Inventing the Internet. Cambridge, MA: The MIT Press. pp. 31, 134.
[2] Founded by the enigmatic Howard Hughes during the Great Depression, the company was a major government contractor. Stewart Brand interview with Paul Baran, in Wired, March 2001. P. 146.
[3] Abbate, J. pp. 9-21.
[4] Abbate, J. pp. 9-21.


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD, is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he taught comparative political economy and digital media. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

The Digital Spreadsheet: Interface to Space-Time, and Beyond?

Posted on | April 16, 2023 | No Comments

“I must confess that I was not able to find a way to explain the atomistic character of nature. My opinion is that … one has to find a possibility to avoid the space-time continuum altogether. But I have not the slightest idea what kind of elementary concepts could be used in such a theory.” — Albert Einstein (1954)

As an avid bike rider, I’m intrigued by the perception of reality. Accurately perceiving road conditions, nearby traffic, and even the dynamics of staying balanced on my bike all seem crucial to avoiding unpleasant experiences. I’m also intrigued by the “perceptual” characteristics of digital spreadsheets. What do they let us see? What are the power implications of these perceptions? And, at another level, what are the calculative and predictive qualities of spreadsheet formulas? Do the mathematics of spreadsheets correspond with reality? Are they illusions? Or do they create new realities?

This post examines connections between my investigation of spreadsheets and some of the cutting-edge theories of quantum physics, neuroscience, and how the human perceptual apparatus interacts with the world. This is not my usual fare, and it’s a big gap to traverse, so this post is exploratory and more of a search for concepts and language to frame the connection. It may not produce the intelligible results I’m hoping for, which is an understanding of how language, numbers, and mathematics operating in the grids of the digital spreadsheet are a source of productivity and power in the world. Still, a few valuable ideas may emerge about understanding digital spreadsheets and their interaction with the objective world, and possibly beyond.

Historically, I’m theoretically influenced by remediation theory, the notion that new media incorporate old media as they try to improve on previous technologies and “heal” our perception of reality. It investigates how these remediated technologies converge and produce a more “authentic” version of the world.[1] Television, for example, not only remediates the sound of radio and the optics of film to improve its experience, but now also the windowed look of computers, especially on financial channels. Even the microcomputer (PC), which went from the command-line interface in early versions to the graphical user interface (GUI) known as WIMP (“windows, icons, menus, and pointer”), became a more rectified and easier-to-use computer experience.

I've been using this framework to explore spreadsheet components and how they come together to create a pretty incredible tool or "interface" with the world. Previous work on remediation confirmed that a central objective is to examine each medium's relationship with reality. Each component or medium in the spreadsheet (writing, lists, tables, cells, formulas) introduces its own utility, and with it, a type of perceptual and organizing power. Furthermore, the components pair up or work integratively to create a more complex cognitive/media experience.

To start off, the list is an ancient writing technology that has proven its power to organize armies, monasteries, and palaces.[2] The book and movie Schindler's List (1993) showed how lists can operate as a technology of social control, as well as a technology of resistance. The list is integrated into the "table-oriented interface" of the spreadsheet, which displays and organizes information in a perceptual geometric space to show categories, cells, and relationships between data sets. The spreadsheet was the "killer app" for the PC, meaning people started to buy the personal computer just to run the application.

Spreadsheets produce a more "abstracted" version of reality. But do we really know what this means? Remediation involves cognitive/social media tools such as language, writing, numerals, and other forms of media and representation, such as simulations and mathematical formulations. In basic counting, fingers represent items that can then be displaced. Likewise, written numerals represent and aggregate large quantities. With zero and positional notation, the numbers can get very large and still be manageable, and they can be manipulated through operations that produce intelligible and meaningful data, including information that can inform policy and strategy at the corporate and governmental level. The abstraction process renders events and inventories into qualities, dealing with ideas rather than events and items.
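To illustrate the leverage that zero and positional notation provide, here is a minimal Python sketch; the digits are an arbitrary example, not anything drawn from the discussion above:

```python
# Positional notation: each digit's contribution depends on its position (a power of the base).
# The zero acts as a placeholder, letting ten symbols express arbitrarily large quantities.

def positional_value(digits, base=10):
    """Convert a list of digits (most significant first) into an integer."""
    value = 0
    for digit in digits:
        value = value * base + digit
    return value

# 4,075 = 4*10^3 + 0*10^2 + 7*10^1 + 5*10^0; the zero keeps the 4 in the thousands place.
print(positional_value([4, 0, 7, 5]))  # 4075
```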

Words play an important function in organizing the spreadsheet. They provide both context and content. Lists are labeled, and labels give context to other words and numbers. Words identify things, start to substitute for them, and create relationships between them. Words become abstractions as they represent concepts or ideas rather than tangible physical objects. Words allow things to disappear and yet still linger. What and where is the relationship between items in cells and rows? Words and labels transform data into aggregated or recategorized information.

Here, I want to consider the implications of Donald Hoffman's theories of the biological and cognitive construction of space-time reality for understanding the power of spreadsheets. First, I'm struck by Hoffman's contention, shared by many physicists such as Nima Arkani-Hamed, that space-time is doomed and dissolving into more fundamental structures. This contention found support in the work of the 2022 Nobel Prize winners in Physics, whose entanglement experiments vindicated quantum theory over Einstein's local realist expectations.[3] The framework of reality sculpted by Isaac Newton, James Clerk Maxwell, Albert Einstein, and others has been incredibly useful as a rigorous exploration/extension of our corporeal space-time capabilities, but is it a sufficient explanation of full reality?

Donald David Hoffman is an American cognitive psychologist and professor at the University of California, Irvine. Interestingly, he also holds joint appointments in the School of Computer Science, the Department of Philosophy, and the Department of Logic and Philosophy of Science. He works on cognitive and mathematical theories that tie perception to how we construct an objective world. His "Interface Theory of Perception" (ITP) is central to this post's inquiry into the power of the spreadsheet.[4]

Hoffman contends that new orders of objective complexity and structure are made perceptible by the mathematical computations of formulas. He argues his theory is further supported by Kurt Gödel's famous "Incompleteness Theorem," which showed that any consistent mathematical system contains propositions that cannot be proved or disproved within the system. Therefore, digital spreadsheets can possibly be viewed as an interface to multiple and successive stages of reality, including the levels of the objective space-time paradigm, but perhaps infinitely beyond.

Wolfgang Smith echoed Arthur Eddington's mathematics in conjecturing that the act of measurement itself summons the quantum into corporeal reality. Once you measure, you invoke the reality. Smith also contends that the act of measurement brings what he calls the physical (quantum) world into the corporeal (perceptual) world. Mathematical measurement triggers the transition from the physical to the corporeal. It brings the subcorporeal potential into the world of objective possibilities.

The world we see with our five senses is real and consequential, but our language and mathematical models invoke the quantum world in a process Smith calls vertical causation. Unlike horizontal causation, which occurs in space-time, vertical causation instantaneously links the physicist's world with the corporeal world. But, by distinguishing two ontological planes, the lived world and the physical world, you can observe certain discontinuities. Drawing on the famous Heisenberg uncertainty principle, Smith argues that the act of observation interrupts the multilocality of particles, splits them from their multiple potential realities, and brings them to a precise ontological place, the corporeal instrument. So, do the observational characteristics of the spreadsheet disrupt the multilocational potentials of the quantum world and bring them to an exact corporeal location?

Hoffman claims that this world is quite different from the world we construct with our perceptual apparatus. He likes to use the example of the desktop icon of a document on a personal computer. The icon is not the document. But you also don't want to drag it over to a trash can unless you are prepared to part with that document, and probably lose hours of work. The desktop icon hides the reality of the computing device because that much information is not necessary for using it effectively, but it still gives us an indexical connection. In other words, the icons we perceive have an indexical connection with reality; they are connected, but they are not indicative of all the possible domains of that reality.

We don't generally interact with the mechanics of the computer, just as we don't generally interact with the quantum mechanics of reality. A long line of thinkers, from the Greek atomist Democritus, to Descartes's mind-body dualism, to Alfred North Whitehead's critique of bifurcation, has addressed this issue. But maps have proven to be tethered to reality. Hoffman suggests that what we see in the world is a construction, but nevertheless one that has payoffs and consequences.[5] Iconic representations can guide useful behaviors, such as crossing the street without being hit by a BMW icon.

So, let's consider the spreadsheet to be an interface. An interface is a site where independent and often unrelated systems interact. They transcend a boundary and connect, act on, or communicate with each other. For spreadsheet use, those systems are the graphical "gridmatic" display of the application on the screen and the "reality" it interfaces with. Reality is a big term, but the Newton-Maxwell-Einstein notion of space-time applies, to a point. So, the digital spreadsheet's interface can be examined systematically for its power using a "formal" analysis of the spreadsheet. That requires examining the meaning-making capabilities of the parts (the writing, lists, tables, and formulas) and of the whole working together.

Getting to the latter part of the interface is difficult, but starting with the former, we can explore the perceptual aspects. The spreadsheet has several media components that begin to address the rift between the objective and post-spacetime worlds. The spreadsheet experience is initially structured with the words and/or numbers in cells that indicate corresponding indexical and symbolic connections. The base-10 Indo-Arabic numeral system, with zero used to create positional notation, has been largely accepted worldwide as the accounting standard. Words and numbers also create categories for lists and rows. Tables provide visualizations of 2-D matrix relationships. One critique of Hoffman's interface analogy is interesting but gets it wrong: it's not just that the timetable gives some indication of when the bus is coming; the timetable comes before the bus and sets up the whole transportation framework. Most intriguing is the vast array of constructions that are produced by the myriad of formulas incorporated into spreadsheets.

The remediation contention is that the spreadsheet emerged to give us a more healed or rectified representation of reality. It does this by successfully integrating several media components, including formulas. Given that arrangement, it should be possible to continue to examine the media components and formulas as providing particular points of view that produce knowledge about specific aspects of “reality.” The components combine and build up to provide increased utility. Here, we may not get evolutionary payoffs, but power in social contexts. Formulas should also confer a utility or power that is measurable.

Take, for example, the power of the ratio. A ratio sets up relationships across time and space. It is a technique that "fixes" or freezes a relationship in order to construct a moment of reality. Ratios have analytic capacity: they can identify high- and low-performing assets, track overall employee performance, and evaluate profitability.
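As a minimal sketch of how a ratio "freezes" a relationship for analysis, consider the following Python example; the asset names and figures are hypothetical, not drawn from any real data:

```python
# Hypothetical assets with revenue and cost figures, as they might sit in spreadsheet rows.
assets = {
    "Asset A": {"revenue": 120_000, "cost": 90_000},
    "Asset B": {"revenue": 80_000, "cost": 75_000},
    "Asset C": {"revenue": 200_000, "cost": 130_000},
}

# A ratio fixes a relationship between two quantities; here, profit margin = profit / revenue.
for name, figures in assets.items():
    profit = figures["revenue"] - figures["cost"]
    margin = profit / figures["revenue"]
    print(f"{name}: profit margin {margin:.1%}")

# Sorting by the ratio identifies high- and low-performing assets at a glance.
best = max(assets, key=lambda n: (assets[n]["revenue"] - assets[n]["cost"]) / assets[n]["revenue"])
print("Highest margin:", best)
```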

This type of analysis requires media strategies that analyze the components and formulas of spreadsheets as signifying or meaning-making processes. The spreadsheet needs to be examined closely as an interacting media interface, while strongly considering the insights provided by a range of social and physical sciences on the other side of the interface. While Hoffman's ocular-centric approach does not apply intimately to the spreadsheet, his insistence on a scientific approach is worthy of note and study.

Conclusion

This is a quest for information about how spreadsheets operate effectively in the world, so ultimately, it is quite agnostic about many mathematical, philosophical, and scientific debates. For example, I venture into the realms of discourse initiated by the Greek philosopher Plato. It starts with remediation theory's observation that new media incorporate older media to produce a more "healed" or authentic version of reality. Digital spreadsheets integrate several important media components towards this end, each of which organizes aspects of reality. The PC and its windowed graphical user interface empowered the spreadsheet to combine words, numbers, lists, tables, and mathematical formulas.

The digital spreadsheet, a visual interface, organizes language and numerical information for intelligibility and uses media and mathematical formulas to initiate horizontal and vertical causation. The spreadsheet interfaces the conscious agent with space-time and the quantum realm, operating in space-time reality while "summoning" new realities through the acts of writing and measurement. According to Hoffman, consciousness has an "earthsuit" (a biological machine hosting the ghost) with a set of perceptual tools shaped by evolutionary forces that helps me ride a bike safely. Perhaps it now has an additional tool or interface, the digital spreadsheet.

A last note refers to the overall social impact. How much of the social change since 1979 can be attributed to the digital spreadsheet? A professor of mine in graduate school, Majid Tehranian, used to refer to the rise of “spreadsheet capitalism.” To what extent can we attribute the massive shift towards financialization in the early 1980s to the technology of the PC-assisted digital spreadsheet? Can we say it “conjured” a new epoch into existence?[8] If so, what were the implications of this historical shift?

Notes

[1] Bolter, J. D, and Grusin, R.A. Remediation: Understanding New Media. Cambridge, Mass: MIT Press, 1999. Print.
[2] Jack Goody's The Logic of Writing and the Organization of Society (1986) is informative on the use of the list as a historical management tool.
[3] Alain Aspect, John Clauser, and Anton Zeilinger won the 2022 Nobel Prize in Physics for entanglement experiments that demonstrated the quantum nature of reality, building on the thought experiment Einstein and his colleagues had configured to challenge quantum mechanics. Smith argues this supports his notion of vertical causality.
[4] The Hoffman conceptualization consists of three interconnected theories. The first is the evolutionary natural selection view that "Fitness Beats Truth" (FBT): reproduction comes first, and accurately gauging the domains of reality is not a necessary requirement. The second is the "Interface Theory of Perception" (ITP) that I draw on here, although I find his ocular-centric (vision-focused) perspective limited, as shown below. The third is "Conscious Realism," a fascinating set of ideas that proposes the primacy of conscious agents in the world rather than an objective space-time reality. Evolutionary game theory is where Hoffman stays to keep his theory in the scientific regime. Our biological interface with the world is evolutionarily gained. Our eyes and brains, for example, are designed (or purposely shaped) for reproductive payoffs, staying alive to procreate. I have extraordinary capabilities for surviving my bike rides (so far). But evolution has no particular interest in the truth of reality. Biological agents don't need to handle that much complexity. They are only interested in reproductive and survival payoffs.
[5] Hoffman, D.D. (2019). The Case Against Reality: Why Evolution Hid the Truth from Our Eyes. New York: W. W. Norton & Company. Print.
[6] According to Dr. Wolfgang Smith, the act of measurement brings what he calls the physical (quantum) world into the corporeal (perceptual) world. Quantities and qualities are real. The world we see with our five senses is real, but our language and mathematical models invoke the quantum world. Measurement is the transition from the physical to the corporeal. It brings the subcorporeal potential into the world of possibilities. He says that in one world the grass is green, and in the other, "it's all quantum stuff."
[7] Drawing on mathematician William Dembski's "complex specified information," Smith suggests this is how complex designs are formed. It is meant to show mathematically that improbable, specified events, such as the writing of a book, are not feasible by chance alone. He explains, "A single letter of the alphabet is specified without being complex. A long sentence of random letters is complex without being specified. A Shakespearean sonnet is both complex and specified." Unfortunately, Dembski was embraced by the "intelligent design" movement, which ultimately caused him much distress and hampered his career success.
[8] I start to use terms like "summon," "conjure," and "evoke" because they have a mystical, almost magical resonance. This is on purpose, but not without recognizing a potential cost in terms of acceptability and viability. There is also a touch of numeromancy and even numerology here that I want to avoid.

I've often approached spreadsheets from what I call a techno-epistemological inquiry. It recognizes the unique knowledge-producing capabilities that emerged with digital technologies, particularly databases and spreadsheets. This strategy has been influenced by post-structuralism and "deconstruction" methods that expose the instability of meaning and how power centers in society use language and other signifying practices to produce and fix meaning. For example, VisiCalc and Lotus 1-2-3 began to be used in the early 1980s financial revolution to perform complex business calculations and interact with the data to evaluate different scenarios. Digital spreadsheets allowed financial analysts to inventory and value the assets of corporations and state-owned enterprises (SOEs). Subsequently, the information could be used to borrow money and take over other businesses, as well as enable government agencies and SOEs to be privatized and sold in global capital markets.
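A rough sketch of the kind of "what-if" recalculation that VisiCalc and Lotus 1-2-3 made routine might look like the following Python example; the cash flows, growth rates, and discount rates are invented purely for illustration:

```python
# A simple what-if model: recalculate an asset valuation under different scenarios,
# the way an analyst might change a few spreadsheet cells and watch the totals update.

def discounted_value(cash_flow, growth, discount_rate, years=5):
    """Sum the present value of a growing annual cash flow over a fixed horizon."""
    return sum(cash_flow * (1 + growth) ** t / (1 + discount_rate) ** t
               for t in range(1, years + 1))

scenarios = {
    "base case":   {"growth": 0.03, "discount_rate": 0.10},
    "optimistic":  {"growth": 0.06, "discount_rate": 0.08},
    "pessimistic": {"growth": 0.00, "discount_rate": 0.12},
}

for name, assumptions in scenarios.items():
    value = discounted_value(cash_flow=1_000_000, **assumptions)
    print(f"{name}: {value:,.0f}")
```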

Citation APA (7th Edition)

Pennings, A.J. (2023, April 16). The Digital Spreadsheet: Interface to Space-Time and Beyond? apennings.com https://apennings.com/technologies-of-meaning/the-digital-spreadsheet-interface-to-space-time-and-beyond/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Although his major focus is on ICT, he sometimes teaches quantum theory in his introduction to science and technology class. From 2002 to 2012, he was on the faculty of New York University, where he taught comparative political economy, digital economics, and traditional macroeconomics. He also taught in the Digital Media MBA program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Zeihan’s Global Prognostics and Sustainable Development, Part II: Implications of Tesla’s Master Plan 3

Posted on | March 19, 2023 | No Comments

“Prediction is very difficult, especially if it’s about the future.”

– Niels Bohr (and Yogi Berra)

This post continues the examination of Peter Zeihan's geopolitical forecasting by contrasting it with Tesla's guidance on achieving a sustainable global energy economy as presented in its Master Plan 3. Tesla faces extraordinary supply chain challenges in achieving its goals for building electric vehicles (EVs), charging stations, and battery packs for homes, businesses, and electric grids. Zeihan has warned about material constraints that will threaten sustainable development amid what he considers will be the end of globalization. Nevertheless, Tesla argues that the "sustainable energy economy" is achievable in our lifetime, and a desirable goal.[1]

Tesla's recent "Investor Day" presentation differed from previous events as it started by highlighting the so-called Master Plan 3. More a vision for sustainable energy than a venue for future products, the Austin event wasn't a big hit for investors looking for short-term guidance. Rather, it provided a macro view of a potential sustainable energy economy and Tesla's contribution to that future. The plan included information on the total amount of energy currently being used worldwide, how much of that comes from sustainable renewable sources, and how much comes from fossil fuels.

Tesla Master Plan 3

Not surprisingly, it highlighted solutions that mainly favored Tesla's product lines: large-scale batteries for homes, industrial mini-grids, and electricity producers; solar panels for houses and commercial properties; and electric vehicles for consumers and semis for long-haul transport. Particularly interesting was the management of charging and the construction of a network infrastructure of Superchargers. The rest of the event elaborated on the innovations for profitable production and recycling systems, as well as the efficiencies in geographical range and charging times for EVs.

Less attention was paid to the political context of reconstituting supply chains for needed materials and minerals in the post-Covid, high-inflation, Russia-Ukraine war environment. This is more the domain of Peter Zeihan, a geostrategist with four books to his credit, mostly about the political economy of global energy flows. He worked in Austin, Texas, for a decade as part of Stratfor, a forecasting company. In 2012, he started his own firm, Zeihan on Geopolitics. His primary areas of expertise are economic geography, population studies (demographics), and energy, but he considers himself a generalist who designs forecasts for specific clients. He has become a popular YouTube figure primarily because of his statements about the possible end of globalization and its impacts on various countries and the supply challenges of different industries.

He also gained traction with his analysis of the Russian war on Ukraine and the consequences of continued fighting, particularly its implications for sustainable development worldwide. With both countries preoccupied or sanctioned, we face losing major suppliers of critical materials needed for the green economy. Russia is the second-largest exporter of crude and refined petroleum products, the largest source of palladium, and the second-largest source of platinum group metals (ruthenium, rhodium, palladium, osmium, iridium, and platinum), which often occur together in the same mineral deposits. We also risk losing the third-largest sources of copper, titanium, and aluminum. Russia and particularly Ukraine, in combination, are also the leading suppliers of neon gas, which is critical for laser technology. All of these are critical for the green revolution and the continued development of the information and communication technologies (ICT) revolution.

Zeihan is also concerned that the withdrawal of political support for the US Navy operating worldwide to ensure freedom of navigation will be problematic for global trade. During World War II, the Battle of the Atlantic took over 5,000 commercial ships and 60,000 lives from Allied and Axis powers. Lack of maritime protection could collapse the intricate supply chains for the materials and sophisticated technologies needed for the sustainable energy revolution. Is that something we could see in our modern era of globalization?

While not a critic of sustainable energy, Zeihan is less confident than Tesla about its prospects. Claiming to have solar panels on his Colorado house, he is particularly concerned about the geographical distribution of good sunlight and wind energy. He points to Germany’s attempts to go green as particularly problematic. It has poor solar and wind potential and was recently cut off from Russian natural gas and oil. As a result, it has been forced to return to coal and lignite, both significant emitters of carbon dioxide and other pollutants that threaten its climate and pollution goals.

Zeihan points out that the border area around Mexico and the southwest US has significant solar and wind potential. The region can provide the new efficiencies of smaller electrical grids run by renewables while still having ready access to the hydrocarbons needed for paints, plastics, PVC resins, and other carbon-based industrial materials. Even companies from Germany are moving to the area to take advantage of cheap energy and carbon.

This geographic advantage appears to be no mystery to Tesla, as it built a major "Gigafactory" in Austin, Texas, with rooftop solar panels spelling out its logo so that it can be seen from space. It also announced a new Gigafactory across the border in Monterrey, Mexico, rumored to be designed to produce a new $25,000 consumer EV. Tesla is also building a major lithium refinery in Corpus Christi designed to support 50 GWh a year of storage capacity and easily draw in the abundant metal from significant producers in Central and South America. In addition, Musk's related company, SpaceX, has been building and testing rockets in Boca Chica on the coast of the Gulf of Mexico for years now, very close to the border.

Looking back at Tesla's Master Plan 2 from 2016, it emphasized several important strategies. These included solar roofs with integrated battery storage, expansion of the EV product line, and developing full self-driving (FSD) that is 10x safer due to massive machine learning from vehicle fleets on the road sending back information. It also suggested opportunities for using a car with FSD as a "robo-taxi" when the owner wasn't using it. FSD is still a work in progress, but Tesla's Dojo supercomputers are collecting "big data" from over 400,000 participating drivers who have driven over 100 million miles. Tesla estimates that some 6 billion miles of driving will be needed to make FSD relatively foolproof. Modeling with digital twin vehicles is taking up some of the slack in self-driving testing, but FSD is not universally accepted nor fully tested for its impact on sustainability.

In retrospect, Tesla's Powerwalls are now in many homes, and its Megapacks (Megapack, Megapack 2, Megapack 2XL) are making a significant difference in both mini and major electric grids. For the latter, the Megapacks have drawn praise for their Virtual Machine Mode (VMM) firmware, which smooths out oscillations in long-range electrical grid transmissions. In addition, Megapacks have been standardized to an optimal size based on the legal requirements for transporting them over common road infrastructure. This standard means they can be easily and quickly transported and deployed in various storage arrangements that can be scaled up quickly.

Master Plan 3 has a more global macro-perspective, examining what Tesla thinks is needed for a sustainable civilization. It proposes that the sustainable energy economy is achievable and within reach during our lifetimes. It starts with some calculations dealing with quantities of electricity produced. The Master Plan suggests that the world needs some 30 TW of ongoing renewable power generation capacity and a backup of 240 TWh of battery storage capacity. This storage number is a lot of battery capacity, but the trend is toward reducing cobalt, nickel, and even lithium. Instead, using more abundant materials like iron, manganese, and phosphorus provides safer and longer-lasting energy storage. The TWh (terawatt-hour) is used in measuring energy and is equal to a million (1,000,000) megawatt-hours (MWh) or a billion (1,000,000,000) kilowatt-hours (kWh). A Tesla uses about 34 kWh of electricity to go 100 miles. That's 34,000 kWh per 100,000 miles of travel. Those figures are hard to fathom, and keep in mind that petawatt-hour (PWh) units are even larger than TWh.
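The unit arithmetic can be sketched in Python, using the rounded consumption figure cited above:

```python
# Energy unit ladder: 1 TWh = 1,000,000 MWh = 1,000,000,000 kWh; 1 PWh = 1,000 TWh.
KWH_PER_TWH = 1_000_000_000
KWH_PER_PWH = 1_000 * KWH_PER_TWH

# Roughly 34 kWh per 100 miles for a Tesla, per the figure above.
kwh_per_100_miles = 34
kwh_per_100k_miles = kwh_per_100_miles * 1_000
print(kwh_per_100k_miles)                # 34,000 kWh per 100,000 miles
print(kwh_per_100k_miles / KWH_PER_TWH)  # a tiny fraction (3.4e-05) of one TWh
```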

PWh units (a trillion kWh) are helpful when considering fossil fuels. The world uses some 165 PWh/year, roughly 80% of which comes from combusting extracted hydrocarbons. Furthermore, about two-thirds of that is wasted. For example, an ICE car only uses about 25% of the energy of the fuel pumped into it at the fuel station. Factor in the mining and transportation of carbon-based fuels, and you get even less efficiency. So Tesla argues that instead of the 165 PWh/yr of current energy consumption (of which 20% is renewable), the world only needs 82 PWh/yr of energy if a transition occurs to sustainable sources. That means the world needs about half as much energy as current consumption levels if it converts to an electric economy, because of the waste factor in hydrocarbon combustion.
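The halving argument is back-of-the-envelope arithmetic. A Python sketch with the rounded figures above lands near, though not exactly at, the 82 PWh/yr figure, since Tesla's own accounting of electrification efficiencies is more detailed:

```python
# Rough arithmetic behind the "half as much energy" claim, using the rounded figures above.
current_primary_energy_pwh = 165      # PWh/yr of current primary energy consumption
fossil_share = 0.80                   # roughly 80% from combusting hydrocarbons
waste_fraction = 2 / 3                # about two-thirds of that fossil energy is lost as waste heat

fossil_pwh = current_primary_energy_pwh * fossil_share      # ~132 PWh/yr
useful_fossil_pwh = fossil_pwh * (1 - waste_fraction)       # ~44 PWh/yr of useful work
non_fossil_pwh = current_primary_energy_pwh - fossil_pwh    # ~33 PWh/yr

# An electrified economy needs roughly the useful energy, not the rejected waste heat.
electrified_need_pwh = useful_fossil_pwh + non_fossil_pwh
print(round(electrified_need_pwh))    # ~77 PWh/yr, in the neighborhood of Tesla's 82 PWh/yr
```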

Significant transitions will likely occur in the following areas. First, the switch to renewable sources for electricity grids (instead of coal, natural gas, and oil) is expected (by Tesla) to replace 46 PWh/yr of fossil fuel combustion (35%). This transition is already occurring quite rapidly. Sixty percent of new investments in the existing grid have been in renewables.

EVs will replace 28 PWh/yr (21%), and high-temperature heat (stored heat for industrial purposes) and hydrogen will displace another 22 PWh/yr (7%). Bill Gates has invested heavily in micro-grids using mirrors to concentrate solar energy and heat liquids to temperatures of over 1,000 degrees Celsius. Geothermal and hydrogen are used as well. Replacing fossil fuels in boats and aircraft would reduce another 7 PWh/yr (5%). Electric vertical take-off and landing vehicles (eVTOLs) are on pace to reconfigure certain supply runs and delivery speeds, bypassing trucking and trains. Shipping is already energy efficient compared to other types of transportation but still accounts for 3% of CO2 greenhouse gas (GHG) emissions. Finally, heat pumps in buildings will be vital to the move to sustainable energy. These work like air conditioning (AC) units in reverse, moving heat with circulating refrigerants rather than generating it by combustion. Tesla has no current plans to produce them, but they could replace some 29 PWh/yr of fossil fuels, primarily natural gas.

Another issue is real estate. Tesla says that only about 0.2% of the Earth's land area is required. The Earth's surface covers 510 million square kilometers, of which less than 30%, or roughly 153 million square kilometers, is land. So 0.2% of that is about 306,000 square km. Tesla further calculates that the direct land area needed for solar is 0.14%, while the direct land area for wind is 0.03%. That is still a lot of land, but this is energy we are talking about, and it is absolutely critical for modern life.
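The land-area arithmetic, sketched in Python with the figures above:

```python
# Land-area arithmetic from the figures above.
earth_surface_km2 = 510_000_000
land_fraction = 0.30                          # a bit under 30% of the surface is land
land_km2 = earth_surface_km2 * land_fraction  # ~153,000,000 km^2

required_km2 = land_km2 * 0.002               # 0.2% of land area
print(round(required_km2))                    # ~306,000 km^2

# Tesla's split: ~0.14% of land for solar and ~0.03% for wind.
print(round(land_km2 * 0.0014), round(land_km2 * 0.0003))  # ~214,200 and ~45,900 km^2
```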

Lastly, Tesla suggests this can be done with roughly USD 10 trillion in capital investment. Most of that (70%) will be required to switch to EVs, and another 10% for planes and ships. USD 2 trillion will be needed for the renewable energy grid, heat pumps for buildings, and high-temperature heating for industrial processes. Ten trillion dollars is about 10% of the 2022 World GDP of USD 95 trillion.

Tesla’s Investor Day presentation broadly sketched a vision for a sustainable energy economy and how the company would contribute to that plan. However, Peter Zeihan’s work suggests a tougher road ahead with limited premium locations for solar and wind. Furthermore, a deglobalization trend and geopolitical conflict threaten access to critical resources needed for a green energy revolution.

Notes

[1] This post contains links, but most of the information comes from either Tesla or Zeihan. Additional examination of the energy numbers, geopolitical concepts, and technological possibilities is needed.

Citation APA (7th Edition)

Pennings, A.J. (2023, Mar 19). Zeihan’s Global Prognostics and Sustainable Development, Part II: Implications of Tesla’s Master Plan 3. apennings.com https://apennings.com/dystopian-economies/zeihans-global-prognostics-and-sustainable-development-part-ii-implications-of-teslas-master-plan-3/

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he taught comparative political economy, digital economics, and traditional macroeconomics. He also taught in the Digital Media MBA program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

  • Referencing this Material

    Copyrights apply to all materials on this blog but fair use conditions allow limited use of ideas and quotations. Please cite the permalinks of the articles/posts.
    Citing a post in APA style would look like:
    Pennings, A. (2015, April 17). Diffusion and the Five Characteristics of Innovation Adoption. Retrieved from https://apennings.com/characteristics-of-digital-media/diffusion-and-the-five-characteristics-of-innovation-adoption/
    MLA style citation would look like: "Diffusion and the Five Characteristics of Innovation Adoption." Anthony J. Pennings, PhD. Web. 18 June 2015. The date would be the day you accessed the information. View the Writing Criteria link at the top of this page to link to an online APA reference manual.

  • About Me

    Professor at State University of New York (SUNY) Korea since 2016. Moved to Austin, Texas in August 2012 to join the Digital Media Management program at St. Edwards University. Spent the previous decade on the faculty at New York University teaching and researching information systems, digital economics, and strategic communications.

    You can reach me at:

    apennings70@gmail.com
    anthony.pennings@sunykorea.ac.kr


  • Disclaimer

    The opinions expressed here do not necessarily reflect the views of my employers, past or present.