Anthony J. Pennings, PhD


Networking Connected Vehicles in the Automatrix

Posted on | January 15, 2024

Networking of connected vehicles draws on a combination of public-switched wireless communications, GPS and other satellites, and Vehicular Ad hoc Networks (VANETs) that directly connect autos with each other and with roadside infrastructure.[1] Connections to 4G LTE and 5G (and, in some cases, even 3G and 2.5G) provide access to the wider world of web devices and resources. Satellites provide geo-location, emergency, and broadcast entertainment services. VANETs enable vehicles to communicate with each other and with roadside infrastructure to improve road safety and traffic efficiency and to provide various applications and services.

This image shows an early version of a connected automatrix infrastructure, including a VANET.[2]

This post outlines the major ways connected cars and other vehicles use broadband data communications. It builds on earlier work I started on the idea of the Automatrix, beginning with “Google: Monetizing the Automatrix” and “Google You Can Drive My Car.” It is also written in anticipation of a continued discussion on net neutrality and connected vehicles, although that is beyond the scope of this post.

Public-Switched Wireless Communications

Wireless communications involve radio connectivity, cellular network architecture, and a “home” orientation. This infrastructure differs significantly from the fixed broadband Internet and World Wide Web model designed around stationary “edge” devices with single Internet Protocol (IP) addresses. Mobile devices have been able to utilize the wireless cellular topology for unprecedented connectivity by supplementing the IP address with a different identifier, the IMSI, that identifies the subscriber and maintains a link to a home network, usually a paid service plan with a cellular provider, e.g., Verizon, Orange, Vodafone.

The digital signal transmission codes have changed over time, allowing for better signal quality, reduced interference, and improved capacity for handling voice and data services. These have included Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Code Division Multiple Access (CDMA), all of which support both voice and data services. GSM became the most widely adopted standard for public-switched wireless communications, but both GSM and CDMA have been largely replaced by fourth-generation (4G) Long-Term Evolution (LTE) networks and by shorter-range, more energy-hungry fifth-generation (5G) networks. With LTE, traditional voice calls became digital, and users could access a variety of data services, including text messaging, mobile Internet, and multimedia content based on Internet Protocol (IP).

The public-switched wireless network divides a geographic coverage area into “cells,” where each spatial division is served by a base station or cell tower that manages the electromagnetic spectrum transmissions and supports mobility as users move between cells. As a mobile device transitions from one cell to another, a “handoff” occurs that ensures uninterrupted connectivity. Roaming agreements between different carriers enable users to maintain connectivity even when outside their home network coverage area. Digital switching systems are employed in the core network infrastructure to handle call routing, signaling, and management.
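
To make the handoff idea concrete, here is a minimal sketch, in Python, of the hysteresis logic a network might use to decide when a moving device should switch cells: hand off only when a neighboring tower's signal beats the serving tower's by a margin, which avoids rapid back-and-forth switching at cell edges. The function name and the 3 dB threshold are illustrative assumptions, not values from any standard.

```python
# Hypothetical sketch of a hysteresis-based cell handoff decision.
# Names and thresholds are illustrative, not from any real standard.

def should_hand_off(current_rssi_dbm: float,
                    candidate_rssi_dbm: float,
                    hysteresis_db: float = 3.0) -> bool:
    """Hand off only when the candidate cell is stronger than the
    serving cell by more than a hysteresis margin, which prevents
    rapid 'ping-pong' handoffs at cell boundaries."""
    return candidate_rssi_dbm > current_rssi_dbm + hysteresis_db

# A phone moving away from its serving tower:
print(should_hand_off(-95.0, -90.0))  # candidate 5 dB stronger -> True
print(should_hand_off(-95.0, -94.0))  # only 1 dB stronger -> False
```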

A key concept in the wireless public network is the notion of “home” with mobile devices typically using SIM cards with an international mobile subscriber identity (IMSI) number to authenticate and identify users on the network. SIM cards store subscriber information, including user credentials and network preferences.
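
As a rough illustration of how the IMSI encodes this "home" relationship, the 15-digit number breaks into a Mobile Country Code (MCC), a Mobile Network Code (MNC) identifying the home carrier, and a subscriber number (MSIN). The Python sketch below splits an IMSI into those fields; note that real MNC length varies by country (two or three digits), so it is taken as a parameter here, and the sample IMSI is made up apart from 310 being a US country code.

```python
# Minimal sketch of splitting an IMSI into its standard fields:
# a 3-digit Mobile Country Code (MCC), a 2- or 3-digit Mobile
# Network Code (MNC), and the subscriber number (MSIN).
# The MNC length depends on the country's numbering plan;
# here it is a parameter for simplicity.

def parse_imsi(imsi: str, mnc_digits: int = 3) -> dict:
    assert len(imsi) == 15 and imsi.isdigit(), "IMSI is 15 decimal digits"
    return {
        "mcc": imsi[:3],                  # country
        "mnc": imsi[3:3 + mnc_digits],    # home carrier
        "msin": imsi[3 + mnc_digits:],    # subscriber number
    }

# 310 is an MCC for the United States; the remaining digits are made up.
print(parse_imsi("310150123456789"))
```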

Wireless communications incorporate security measures to protect user privacy and data. Encryption and authentication mechanisms help secure communication over the wireless networks.


Satellites

Satellites play a crucial role in enhancing the capabilities of connected cars by providing various services and functionalities. They extend connectivity to areas with limited or no terrestrial network coverage, allowing access for connected cars traveling through remote or rural locations where traditional cellular coverage may be sparse. GPS satellites provide accurate location information, enabling navigation systems in cars to determine the vehicle’s position, calculate routes, and provide turn-by-turn directions.

Satellites also support a range of location-based services providing real-time traffic information, points of interest, and location-based notifications, enhancing the overall navigation experience. Satellite connectivity facilitates remote diagnostics and maintenance monitoring for connected vehicles, and it has long enabled remote monitoring and management of vehicle fleets. Fleet operators can track vehicle locations, monitor driving behavior, manage fuel efficiency, and schedule maintenance using satellite-based telematics solutions.

Satellites contribute to enhanced safety features in connected cars by enabling automatic crash notification systems. In the event of a collision, the vehicle can send an automatic distress signal with its location to emergency services, facilitating a quicker response. In the case of theft or emergency, satellite communication can be used to remotely disable the vehicle, track its location, or provide assistance to drivers.

Satellites also play a role in delivering over-the-air (OTA) updates to connected cars, allowing manufacturers to use satellite communication to send software updates, firmware upgrades, and map updates directly to the vehicles, ensuring they remain up-to-date with the latest features and improvements. They can also remotely assess vehicle health, identify potential issues, and schedule maintenance, reducing the need for physical visits to service centers.

Lastly, satellites support the delivery of entertainment and infotainment services to connected cars. Satellite radio services, for example, provide a wide range of channels with music, news, and other content, accessible to drivers and passengers in areas with limited terrestrial radio coverage.

Satellites can contribute to Vehicle-to-Everything (V2X) communication by providing a reliable and wide-reaching communication infrastructure. V2X communication allows connected cars to exchange information with other vehicles, infrastructure (such as traffic signals), and even pedestrians, enhancing safety and traffic efficiency.

The integration of satellite technology enhances the overall connectivity, safety, and functionality of connected cars, contributing to a more advanced and intelligent automatrix.

Vehicular Ad hoc Networks (VANETs)

VANETs play a significant role in enhancing communication and connectivity among vehicles and with roadside infrastructure. VANETs have no base stations, and devices can transmit only to other devices in close proximity, such as other cars, emergency vehicles (ambulances, police, etc.), and roadside units.

Here are some key characteristics of vehicular networks:

– A dynamic and rapidly changing network topology due to the constant movement of vehicles. Nodes (vehicles) enter and leave the network frequently, leading to a highly active environment.
– Direct communication between vehicles, allowing them to share information such as speed, position, and other relevant data. V2V communication plays a crucial role in enhancing road safety and traffic efficiency.
– Interactions between vehicles and roadside infrastructure, such as traffic lights, road signs, and sensors, enable vehicles to receive real-time information about traffic conditions and other relevant data.
– In the absence of a fixed infrastructure for communication, vehicles act as both nodes and routers, forming an ad hoc network where communication links are established based on proximity.
– Broadcast mode disseminates information about traffic warnings, road conditions, and emergency alerts to nearby vehicles.
– Low-latency communication supports real-time applications like collision avoidance systems and emergency alerts. Timely information exchange is crucial for the effectiveness of these applications.
– Security and privacy techniques for authentication, confidentiality, and data integrity.
– Connected vehicles support various traffic safety applications, including collision and lane-switching warnings, as well as collaborative cruise control. These applications aim to enhance overall road safety.
– Vehicular communication is influenced by signal fading and attenuation, especially in urban environments with obstacles. These factors need to be overcome for reliable communication.[3]
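
The V2V broadcast characteristic above can be sketched in a few lines: each vehicle periodically broadcasts a basic safety message with its position and speed, and only vehicles within radio range receive it. This is a toy Python model; the message fields and the 300-meter range are illustrative assumptions, not values from the DSRC or C-V2X standards.

```python
# Toy model of V2V broadcast in a VANET: a vehicle broadcasts a basic
# safety message and only nearby vehicles (within radio range) hear it.
import math
from dataclasses import dataclass

@dataclass
class SafetyMessage:
    sender_id: str
    x_m: float        # position on a flat local grid, in meters
    y_m: float
    speed_mps: float

def in_range(msg: SafetyMessage, x: float, y: float,
             radius_m: float = 300.0) -> bool:
    return math.hypot(msg.x_m - x, msg.y_m - y) <= radius_m

def receivers(msg: SafetyMessage, vehicles: dict) -> list:
    """vehicles: id -> (x, y). Returns ids that hear the broadcast."""
    return [vid for vid, (x, y) in vehicles.items()
            if vid != msg.sender_id and in_range(msg, x, y)]

fleet = {"car_a": (0.0, 0.0), "car_b": (120.0, 50.0), "car_c": (900.0, 0.0)}
msg = SafetyMessage("car_a", 0.0, 0.0, 27.0)
print(receivers(msg, fleet))  # car_b is close enough; car_c is not
```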

VANETs play a crucial role in the development of Intelligent Transportation Systems (ITS) and contribute to creating safer, more efficient, and connected road networks. Due to the rapid mobility of vehicles, the Automatrix may experience frequent connectivity disruptions. Protocols and mechanisms are important to cope with intermittent connectivity.

One of the reasons I liked the category of the Automatrix was that the attention was on the context, not exclusively the individual vehicles. When it comes to connected cars, the implications of net neutrality are significant and can influence various aspects of their functionality and services.[4]

Connected cars contribute to the broader concept of the Internet of Things (IoT) by creating an interconnected network where vehicles, infrastructure, and users communicate and collaborate to enhance safety, efficiency, and the overall driving experience. These connected vehicles leverage various sensors, embedded systems, internal Ethernet networks, and communication protocols to tether devices via Bluetooth and access mobile cellular and satellite services.


[1] Wahid, I., Tanvir, S., Ahmad, M., Ullah, F., AlGhamdi, A. S., Khan, M., & Alshamrani, S. S. (2022, July 23). Vehicular ad hoc networks routing strategies for intelligent transportation system. Electronics, 11(15), 2298.
[2] Image from Hakim Badis, Abderrezak Rachedi, in Modeling and Simulation of Computer Networks and Systems, 2015

Citation APA (7th Edition)

Pennings, A.J. (2024, Jan 15). Networking Connected Vehicles in the Automatrix.



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, teaching broadband policy and ICT for sustainable development. From 2002 to 2012 he was on the faculty of New York University, where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Net Neutrality and the Use of Virtual Private Networks (VPNs)

Posted on | November 26, 2023

Net neutrality regulations strive to treat VPNs (Virtual Private Networks) neutrally, meaning that Internet Service Providers (ISPs) should not discriminate against or block the use of VPN services. As a regulatory principle, net neutrality advocates for equal treatment of all data on the Internet, regardless of the type of content, application, or service. A VPN is a technology that establishes an encrypted connection over the Internet, allowing users to access a private network remotely. This connection provides anonymity, privacy, and security, but it may also be used in sensitive activities, including bypassing geographical restrictions imposed by licensing agreements, ISPs, or regional authorities.

In this post, I investigate the complexities of VPNs and their implications for both content providers and ISPs. First, I describe how VPNs work. Then I explore how content service providers like video streaming platforms treat VPNs. Next, I do a similar analysis of different strategies used by ISPs when they want to hamper VPN use. Lastly, I return to the VPNs’ relationship to net neutrality.

VPNs are widely used for personal and business purposes to protect sensitive data and enable secure remote access to private networks. In many cases, ISPs and other carriers, as well as OTT (Over-the-Top) content providers, may attempt to block or restrict the use of Virtual Private Networks (VPNs). However, the extent to which VPNs are blocked can vary depending on the region, the specific ISP, and local regulations.

How does a VPN work?

A VPN works by creating a secure and encrypted connection between the user’s device and a VPN server. When a user contacts a VPN, they are authenticated, typically by entering a username and password, often automatically through VPN client software. Some VPNs may also use additional authentication methods, such as multi-factor authentication, for enhanced security. When the connection is authenticated, the communication between the user’s device (computer, smartphone, etc.) and the VPN server is encrypted for security.

The encrypted data moving between user and server is encapsulated with a process known as tunneling. This creates a private and protected pathway for data to travel between the user’s device and the VPN server. Various tunneling protocols, such as OpenVPN, L2TP/IPsec, or IKEv2/IPsec, are used to establish this secure connection. The VPN server then assigns the user’s device a new IP address, replacing the device’s original IP address. This is often a virtual IP address within a range managed by the VPN server.

All Internet traffic to the user’s device is then routed through the VPN server. This means that websites, services, and online resources such as a streaming service, perceive the user’s location as that of the VPN server rather than the user’s actual location. Users can access content that may be geo-restricted or censored in their physical location by connecting to a VPN server in a different geographic location. This allows them to appear as if they are accessing the Internet from the location of the VPN server.
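
The tunneling and re-addressing steps described above can be illustrated with a toy encapsulation: the user's original packet is "encrypted" and becomes the payload of a new outer packet addressed to the VPN server, so an outside observer sees only client-to-server traffic. This Python sketch is purely conceptual; the reverse-the-bytes "cipher" is a placeholder for the real cryptography negotiated by protocols like OpenVPN or IPsec, and the addresses are reserved documentation examples.

```python
# Conceptual sketch of VPN tunneling: the inner packet is encrypted and
# wrapped in an outer packet addressed to the VPN server.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    payload: bytes

def fake_encrypt(data: bytes) -> bytes:
    # Placeholder only -- NOT real cryptography. A real VPN would use a
    # cipher negotiated by its tunneling protocol (OpenVPN, IPsec, etc.).
    return data[::-1]

def encapsulate(inner: Packet, client_ip: str, vpn_server_ip: str) -> Packet:
    """Wrap the inner packet so an observer sees only client <-> VPN server."""
    return Packet(src_ip=client_ip, dst_ip=vpn_server_ip,
                  payload=fake_encrypt(inner.payload))

inner = Packet("10.8.0.2", "203.0.113.7", b"GET / HTTP/1.1")
outer = encapsulate(inner, client_ip="198.51.100.5", vpn_server_ip="192.0.2.10")
print(outer.dst_ip)  # the ISP sees only traffic to the VPN server
```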

Anti-VPN Technologies Used by Content Providers

VPNs become a net neutrality issue when they are targeted by either content providers or ISPs. Some content providers and streaming services may block access from known VPN IP addresses to enforce regional restrictions on their content. Streaming services negotiate licensing agreements with content providers to distribute content only in specific regions. Other concerns include copyright infringement by other content providers and the quality of service of traffic routed through multiple servers. Complicated data packet routes can cause latency or buffering issues, which degrade the streaming experience. Nevertheless, VPNs can circumvent this blocking by masking the user’s real IP address and making it appear as if they are connecting from a different location.

Content services employ various techniques to detect the use of VPNs and proxy servers. They maintain databases of IP addresses associated with VPNs and proxy servers and compare the user’s IP address against these databases to check for matches. If the detected IP address is on the list of known VPN servers, the streaming service may block access or display an error message.
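
A minimal version of that IP-blocklist check can be written with Python's standard ipaddress module: test whether a client's address falls inside any network range attributed to VPN providers. The ranges below are reserved documentation prefixes, not a real provider's addresses.

```python
# Sketch of the IP-blocklist check a streaming service might run:
# compare the client's address against known VPN server ranges.
import ipaddress

KNOWN_VPN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # documentation prefix
    ipaddress.ip_network("198.51.100.0/25"),  # documentation prefix
]

def looks_like_vpn(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in KNOWN_VPN_RANGES)

print(looks_like_vpn("203.0.113.44"))  # inside a listed range -> True
print(looks_like_vpn("8.8.8.8"))       # not listed -> False
```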

Content providers such as video streaming services may also analyze user behavior to detect patterns indicative of VPN usage. For example, if a user rapidly connects from different geographical locations, it may raise suspicion and trigger additional checks to determine if a VPN is in use. VPN detection may also involve checking for DNS (Domain Name System) leaks that reveal DNS requests, or for vulnerabilities in WebRTC (Web Real-Time Communication) protocols, which provide real-time guarantees but can reveal client credentials. These leaks can expose the user’s actual IP address, allowing the content services to identify VPN usage.

Streaming services may decide to block entire IP ranges associated with data centers or hosting providers commonly used by VPN services. This approach helps prevent access from a broad range of VPN users sharing similar IP addresses. Streaming services regularly use geolocation services to determine the physical location of an IP address. If the detected location does not match the expected geographical area based on the user’s account information, it may trigger suspicion of VPN use.

VPN connections often exhibit different speed characteristics compared to regular links. Streaming services may analyze the connection speed and behavior to identify patterns associated with VPN usage. Lastly, some streaming services may employ captcha challenges or additional verification steps when they detect suspicious activity, such as rapid and frequent connection attempts from different locations. This targeting can inconvenience users but serves to identify and block VPN usage.

How ISPs treat VPNs

Net neutrality principles call for ISPs to treat all data packets on the Internet equally. They can prohibit ISPs from discriminating against specific online services, applications, or providers, including the data packets generated by VPN services. This norm means that ISPs should not block or throttle VPN traffic just because it is VPN traffic. VPN providers, like any other online service, should be able to reach users without facing unfair restrictions.

Nevertheless, ISPs may employ various techniques to block or throttle VPN traffic. These measures are often implemented for network management, compliance with regional regulations, or enforcing content restrictions. Deep Packet Inspection (DPI) is a technology that allows ISPs to inspect the content of data packets passing through their networks. By analyzing the characteristics of the traffic, including protocol headers and content payload, DPI can identify patterns associated with VPN traffic. ISPs may use DPI to detect and block specific VPN protocols or to throttle VPN traffic. Some advanced filtering technologies can detect and block VPN traffic. However, this approach is more common in regions with strict Internet censorship.

ISPs can block or restrict traffic on specific ports commonly associated with VPN protocols. For example, they might block traffic on ports used by OpenVPN (e.g., UDP or TCP port 1194) or other well-known VPN protocols. By blocking these ports, ISPs aim to prevent VPN connections from being established. ISPs may also maintain lists of IP addresses associated with known VPN servers and block traffic to and from these addresses. This method targets specific VPN servers or services rather than attempting to identify VPN traffic based on its characteristics.
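
A toy packet-filter rule in the spirit of that port blocking might look like the following Python sketch: drop any TCP or UDP packet to or from port 1194, OpenVPN's default. The field names and single blocked port are illustrative; real filters operate on parsed packet headers in routers or middleboxes.

```python
# Toy ISP packet-filter rule: block traffic on a VPN-associated port.
# Field names are illustrative, not from any real firewall syntax.

BLOCKED_PORTS = {1194}  # OpenVPN's default port

def filter_packet(proto: str, src_port: int, dst_port: int) -> str:
    if proto in ("tcp", "udp") and (src_port in BLOCKED_PORTS
                                    or dst_port in BLOCKED_PORTS):
        return "DROP"
    return "ACCEPT"

print(filter_packet("udp", 51515, 1194))  # OpenVPN handshake -> DROP
print(filter_packet("tcp", 51515, 443))   # ordinary HTTPS -> ACCEPT
```

One consequence visible even in this sketch is why VPN providers fall back to TCP port 443: traffic there is indistinguishable from ordinary HTTPS by port alone, forcing ISPs toward deeper inspection.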

Some VPN protocols obfuscate or disguise their traffic, making it more challenging for ISPs to detect and block them. This subterfuge includes techniques like adding a layer of encryption or using obfuscated protocols that resemble regular HTTPS traffic. ISPs may also analyze traffic patterns and behaviors to identify characteristics associated with VPN usage. For example, rapid and frequent connection attempts from different locations might trigger suspicion and lead to traffic restrictions.

DNS filtering blocks access to specific domain names associated with VPN services. This method aims to prevent users from resolving the domain names of VPN servers, making it more difficult for them to establish connections. ISPs may implement filtering at the application layer to identify and block VPN traffic based on the behavior and characteristics of specific VPN applications. Instead of outright blocking VPN traffic, some ISPs may employ bandwidth throttling to reduce the speed of VPN connections. This slowing can make VPN usage less practical or effective for users, especially when attempting to stream high-quality video or engage in other bandwidth-intensive activities.

The effectiveness of these methods can vary, and users often find workarounds to bypass VPN restrictions. VPN providers may also respond by developing new techniques to evade detection. The cat-and-mouse game between VPN providers and ISPs is ongoing, with each side adapting its strategies to stay ahead. Users who encounter VPN restrictions may explore alternative VPN protocols, use obfuscation features, or consider other means to maintain privacy and access unrestricted Internet content.

Net neutrality aims to prevent anti-competitive practices by ISPs. While some telecom entities block VPNs for legitimate reasons, such as maintaining network integrity or complying with local regulations, their actions can also violate user privacy and restrict the free flow of information. If ISPs were to block or throttle VPN traffic selectively, it could impact competition by favoring certain online services over others. This interference could be particularly concerning if ISPs were to prioritize their own VPN services over those provided by third-party VPN providers. Advocates for net neutrality argue that it is crucial for maintaining a level playing field on the Internet, fostering competition, innovation, and the free flow of information.

However, the specific regulations and enforcement mechanisms related to net neutrality can differ, and debates on this topic continue in various jurisdictions. In some countries, governments or ISPs may implement restrictions on the use of VPNs as part of broader Internet censorship efforts. These restrictions can be aimed at controlling access to certain websites, services, or content deemed inappropriate or against local laws. While net neutrality principles provide a foundation for treating VPNs fairly, the actual implementation and regulatory landscape can vary by country. Some regions have specific regulations that address net neutrality, while others may not. Additionally, the status of net neutrality can change based on regulatory decisions and legislative developments.

Citation APA (7th Edition)

Pennings, A.J. (2023, Nov 26). Net Neutrality and the Use of Virtual Private Networks (VPNs).




Deep Packet Inspection of Internet Traffic and Net Neutrality

Posted on | November 4, 2023

With a 3-2 shift in the Federal Communications Commission (FCC) leaning towards restoring net neutrality, advocates are again arguing for the equal treatment of all data traffic by Internet service providers (ISPs). Net neutrality principles strive to prevent ISPs such as AT&T, Comcast Xfinity, Korea Telecom, Vodafone, etc., from engaging in practices that could stifle competition, limit consumer choice, or infringe on the free flow of information online.

Deep Packet Inspection (DPI) is a network technology used to inspect and analyze the contents of data packets running through the Internet. It is a critical component of many network security, monitoring, and optimization solutions.[1] However, DPI can be used in ways that violate net neutrality principles, such as by degrading or blocking specific types of content, devices, services, or applications. In such cases, DPI is directly at odds with net neutrality or the “Open Internet,” which encompasses a broader range of principles and values related to maintaining a free, accessible, and inclusive Internet environment for all users.

The importance of DPI in relation to net neutrality depends on how it is used and the specific context in which it is applied. It can be both important and controversial in the context of net neutrality. When ISPs employ DPI to discriminate against or favor certain types of traffic, it can undermine the open and neutral character of the Internet. This intrusion can lead to anti-competitive behavior and harm consumers’ access to a diverse and free Internet.

DPI can also be used for legitimate network management and security purposes. For instance, it can help identify and mitigate distributed denial-of-service (DDoS) attacks, detect malware, and manage network congestion. In these cases, DPI serves to protect the integrity and security of the network without violating net neutrality.

Deep Packet Inspection is used for examining the contents of data packets as they pass through a network. This involves prioritizing or limiting specific types of traffic to optimize network performance. Several technologies are essential for deep packet inspection to fulfill its various functions, including network management, security, application optimization, quality of service (QoS), and traffic shaping. Advanced DPI systems may incorporate machine learning and artificial intelligence (AI) algorithms to improve accuracy in identifying new or unknown applications and to detect evolving threats by analyzing network behavior over time.

DPI begins with the acquisition of data packets from network traffic. This can be achieved using packet capture technologies, such as network taps, port mirroring, or packet sniffers. These tools intercept and copy data packets for analysis. Once captured, the data packets are parsed to extract relevant information. This process involves breaking down the packets into their constituent parts, such as headers and payloads. DPI may perform content analysis to extract valuable information from packets, such as identifying files, images, video, or text within network traffic. Once packets are captured, they must be processed efficiently. High-performance technologies, such as multi-core CPUs or specialized hardware accelerators, are essential for quickly analyzing and processing packets.
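
As a small, hedged example of the packet-parsing step, the Python sketch below unpacks the fixed 20-byte IPv4 header from captured bytes and extracts fields a DPI engine would key on, such as the protocol number and the source and destination addresses. The header layout follows the IPv4 specification; the sample packet is hand-built with documentation addresses and a zeroed checksum.

```python
# Parse the fixed 20-byte IPv4 header from a captured packet, as a DPI
# engine's parsing stage would, using Python's struct module.
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, cksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "header_len_bytes": (version_ihl & 0x0F) * 4,
        "ttl": ttl,
        "protocol": proto,              # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built header: version 4, IHL 5, TTL 64, protocol 6 (TCP),
# 192.0.2.1 -> 198.51.100.7, checksum left as zero for the sketch.
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                  socket.inet_aton("192.0.2.1"),
                  socket.inet_aton("198.51.100.7"))
print(parse_ipv4_header(hdr))
```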

DPI systems may classify network flows based on various criteria, such as source/destination IP addresses, ports, or traffic characteristics. Flow classification is essential for monitoring and controlling different types of traffic effectively and is useful for security, compliance, and traffic optimization purposes. These classifications can also be used to block or throttle (slow down) specific websites or services.
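
Flow classification by the classic 5-tuple (source IP, destination IP, source port, destination port, protocol) can be sketched in a few lines of Python; the packet records below are made up for illustration.

```python
# Group packets into flows by their 5-tuple, the basic classification
# step a DPI or monitoring system performs before deeper analysis.
from collections import defaultdict

def classify_flows(packets):
    """packets: iterable of dicts with src, dst, sport, dport, proto.
    Returns a mapping from 5-tuple to the packets in that flow."""
    flows = defaultdict(list)
    for p in packets:
        key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
        flows[key].append(p)
    return flows

pkts = [
    {"src": "10.0.0.1", "dst": "192.0.2.9", "sport": 40000, "dport": 443, "proto": "tcp"},
    {"src": "10.0.0.1", "dst": "192.0.2.9", "sport": 40000, "dport": 443, "proto": "tcp"},
    {"src": "10.0.0.2", "dst": "192.0.2.9", "sport": 53001, "dport": 53, "proto": "udp"},
]
flows = classify_flows(pkts)
print(len(flows))  # two distinct flows
```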

DPI systems also need to understand various network protocols, such as HTTP, SMTP, FTP, or proprietary protocols used by specific applications. Protocol decoding engines are necessary to extract and interpret protocol-specific information. They can decode and analyze the data exchanged within these protocols, making it possible to identify the applications and services being used.

DPI relies on pattern matching algorithms to identify specific content within packets. Regular expressions, string matching, or more advanced techniques like Aho-Corasick algorithms are used to detect patterns associated with threats, protocols, or applications. Sophisticated DPI algorithms are used to analyze packet payloads, extract data, and identify application behavior, even if it uses non-standard ports or encryption.[2]

DPI often employs signature-based analysis, where patterns in packet contents are matched against a database of known patterns associated with specific applications or threats. This allows for the identification of applications, services, or security risks. DPI can also employ behavioral analysis techniques to identify anomalies or suspicious activities within network traffic. For example, it can detect unusual patterns in data transfer or deviations from expected behavior. DPI systems rely on extensive signature databases that contain patterns, behaviors, or attributes associated with specific applications, malware, or network threats. To remain effective, these databases must be updated regularly to account for new applications, protocols, and emerging threats, which requires efficient mechanisms for signature updates and database management.
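
Signature-based matching can be illustrated with a toy scan of packet payloads against a small pattern database. Real DPI engines compile thousands of signatures into automata such as Aho-Corasick to match them in a single pass; Python's substring operator stands in for that here, and the signatures, while based on well-known protocol prefixes, are a deliberately simplified sample.

```python
# Toy signature-based DPI: scan a payload against a small database of
# byte patterns associated with known protocols.

SIGNATURES = {
    b"\x16\x03\x01": "tls-client-hello",          # TLS handshake record prefix
    b"BitTorrent protocol": "bittorrent-handshake",
    b"GET ": "http-request",
}

def match_signatures(payload: bytes) -> list:
    """Return the labels of all signatures found in the payload."""
    return [label for sig, label in SIGNATURES.items() if sig in payload]

print(match_signatures(b"GET /index.html HTTP/1.1"))    # http-request
print(match_signatures(b"\x13BitTorrent protocol..."))  # bittorrent
```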

It’s important to note that DPI technology raises important considerations related to user privacy and network neutrality. The use of DPI for deep inspection of user traffic often involves monitoring the content of communications without user consent or proper safeguards. DPI systems must incorporate strong security and privacy measures to protect the data they handle and to ensure compliance with legal and regulatory requirements.

Since DPI involves the inspection of data content, it must be performed securely. Data encryption and privacy measures are crucial to protect the confidentiality of network traffic and user data. DPI systems generate logs and reports for monitoring, compliance, and troubleshooting purposes. Robust reporting and logging mechanisms are essential. Ensuring that DPI respects user privacy rights is crucial in any context.

Encrypted traffic poses a challenge for DPI. Some systems incorporate SSL/TLS decryption capabilities to inspect encrypted data, although this must be done with care to protect user privacy and maintain compliance with data protection regulations.

The use of DPI for legitimate security and network management purposes should be balanced with privacy concerns and adhere to relevant laws and regulations. DPI technology may need to integrate with other network security and monitoring solutions, such as firewalls, intrusion detection systems (IDS), and intrusion prevention systems (IPS).

Net neutrality regulations often require ISPs to be transparent about their traffic management practices, and DPI can be a tool to monitor and enforce these rules. In this context, DPI can play a positive role in upholding net neutrality by ensuring that ISPs are following the established regulations.

In summary, the importance of DPI for net neutrality largely depends on how it is applied and the specific goals it serves. When used in ways that violate net neutrality principles, such as blocking, degrading, or throttling certain content or devices, DPI is detrimental to the open Internet. However, when it is employed for network management, security, and ensuring ISP compliance with net neutrality regulations, it can be an important tool for maintaining a free, fast, and open Internet while still safeguarding the network’s integrity and security. Balancing these interests and ensuring proper oversight and transparency is essential in the discussion of DPI and net neutrality.

Citation APA (7th Edition)

Pennings, A.J. (2023, Nov 4). Deep Packet Inspection of Internet Traffic and Net Neutrality.


[1] See Pennings, A.J. (2021, May 16). US Internet Policy, Part 5: Trump, Title I, and the End of Net Neutrality.

[2] Çelebi, M., & Yavanoglu, U. (2023). Accelerating pattern matching using a novel multi-pattern-matching algorithm on GPU. Applied Sciences, 13(14), 8104.




US Internet Policy, Part 7: Net Neutrality Discussion Returns with New FCC Democratic Majority

Posted on | October 9, 2023

The election of Joe Biden as US president in 2020 significantly impacted Internet policy discussions. After the Georgia senatorial runoff that shifted the balance of power to the Democrats, preparation at the Federal Communications Commission (FCC) began to target many issues that were dismissed or ignored during the Trump administration.

But plans stalled as Gigi Sohn, President Biden’s nominee to the FCC, was subject to an intense lobbying effort from the telecom industry to block her seat at the commission. The former FCC staffer, longtime consumer broadband advocate, and first openly LGBTIQ+ nominee for commissioner eventually withdrew from consideration for the post in March 2023. Democrats finally regained majority control of the FCC when a new nominee, Anna Gomez, was confirmed by the US Senate on September 7, 2023.[1]

Pending Internet Policy Issues

– More, better, and cheaper broadband access and connectivity through mobile, satellite, and wireline facilities, especially in rural areas.
– Antitrust concerns about cable and telco ISPs, including net neutrality.
– Privacy and the collection of behavioral data by platforms to predict, guide, and manipulate online user actions.
– Section 230 reform for Internet platforms and content producers, including assessing social media companies’ legal responsibilities for user-generated content.
– Security issues, including ransomware and other threats to infrastructure, such as Border Gateway Protocol (BGP) security between countries.
– Deep fakes, memes, and other issues of misrepresentation, including fake news.
– eGovernment and digital money, particularly the role of blockchain, CBDCs, and cryptocurrencies.
– Formation of Web 3.0, where services are monetized but ownership democratized with new trust-based protocols using blockchain technologies, the core technologies of cryptocurrencies and NFTs.

Addressing Net Neutrality

FCC Chairwoman Jessica Rosenworcel has scheduled October 19 for a vote on how to proceed with new rulemaking and address some issues that have come to the forefront of public scrutiny. With two other Biden appointments, the FCC is poised to act on the party’s priorities, including restoring net neutrality regulations. Such rules barred broadband providers from interfering with web traffic but were gutted by Republican commissioners during the administration of President Donald Trump.

Net neutrality is the legal principle that Internet Service Providers (ISPs) should treat all data and online content equally. It derives from commercial law that strives to treat all customers equally. For example, a hotel should not be able to restrict certain people from lodging at their facilities. It was applied to railroad law to ensure towns along a train route would not be excluded from sending their goods, such as cattle or wheat, to market. The common carrier precedent was applied to telegraph and later to telephone regulation. The principle has been bandied back and forth in the FCC for many years, reflecting different philosophies and sympathies for lobbying arguments.

My previous posts reviewed the issues of wired broadband net neutrality, focusing on FCC rulemaking under the Communications Act of 1934, which emphasized common carriage, the commercial obligation to serve all customers equally and fairly. Historically, these legislated guidelines allowed the US telecommunications system to dramatically expand voice communications from the 1930s through the 1970s.[2]

The FCC later decided that data communications and computer processing service providers operating on top of the telco infrastructure would be better served as lightly regulated Title I “enhanced” companies. This designation allowed the Internet to take off in the 1990s and fostered the growth of thousands of Internet Service Providers (ISPs). For example, it allowed dial-up phone users to connect to ISPs to connect to the Internet for long durations without paying extra toll charges. This dynamic would change as competition heated up to provide “broadband” for the Internet and interactive television.

Consolidation Under Deregulated “Information Services”

Under GOP-leaning Michael Powell’s FCC chairmanship, the ISP market structure consolidated dramatically with deregulation for both cable TV companies and traditional Plain Old Telephone Service (POTS) companies, allowing them to enter new markets. Cable television companies had developed broadband capabilities in the late 1990s with cable modems and coaxial cables to connect to the Internet. Likewise, the Regional Bell Operating Companies (RBOCs) that had carved up America’s telecommunications after the breakup of AT&T in the 1980s developed Asymmetric Digital Subscriber Line (ADSL or DSL) broadband technologies to provide high-speed services to households over copper lines. This service uses faster fiber optic lines to transmit to a local node or curb and then copper lines into the premises. These companies had envisioned developing joint “information highways” going back to the Bell Atlantic/Tele-Communications, Inc. (TCI) deal that was announced in October 1993. That deal collapsed, but the acquisition of TCI was finally consummated by AT&T on March 9, 1999, in an all-stock deal worth about $48 billion.

AT&T wanted those cable lines from TCI to expand their local phone service, which it was already doing in another agreement with Time Warner. The merger would allow them to extend their markets and combine infrastructure for cost savings and efficiencies. This combination could provide a significant competitive advantage against other telephone providers and new entrants like satellite or wireless providers. It would also allow them to offer a broader range of services, including bundled packages. But AT&T and RBOCs were limited by the FCC’s ruling on the Telecommunications Act of 1996 that distinguished between Title II common carrier services and Title I deregulated information services. FCC decisions in 2005 facilitated significant changes in the market structure of the Internet.

In 2005, both cable and phone companies suddenly became deregulated ISPs. This change allowed significant consolidation as telephone and cable companies, competing to provide “triple play” (TV, broadband, and voice) services to households, frantically merged with other telecommunications companies to dominate “broadband.” AT&T and Verizon, traditional telephone companies, merged with cable companies (and mobile) to create telecom behemoths. The roadkill included thousands of smaller ISPs that eventually were no longer able to compete or even interconnect with the larger companies.

Two things led to sweeping deregulation. First, a U.S. Supreme Court decision (National Cable & Telecommunications Association v. Brand X Internet Services) upheld the FCC’s 2002 ruling that cable modem service (i.e., cable television broadband Internet) is an interstate information service. This decision meant that cable companies were confirmed in June of 2005 as subject to the less stringent Title I of the Communications Act of 1934. Two months later, Powell’s FCC allowed the former Bell telephone companies to become Title I “information services” during George W. Bush’s administration. The RBOCs, which had built out their ADSL broadband technologies for the “information highways,” suddenly became deregulated ISPs.

Although there are currently 2,940 Internet service providers in the United States, the top 8 companies have over 90 percent of the subscribers. These are the top 8 Internet providers in the U.S. as of June 2023:

– AT&T 22%
– Spectrum 20%
– Xfinity 19%
– Verizon 6%
– Cox 5%
– T-Mobile 5%
– CenturyLink 2%
– Frontier 2%

The Internet and its World Wide Web were designed to allow devices like PCs, laptops, and mobile phones to talk to each other without much interference from the intermediate network that moves their data. Net neutrality strives to ensure that all online content, services, and applications running through that network are treated equally, regardless of their source. This equality promotes free access to information and prevents ISPs from blocking or throttling (slowing down) specific websites or services. Net neutrality allows users to choose which websites and services they access, without interference from ISPs. Users can explore a diverse range of content and make their own decisions about what to consume. It also ensures that nonprofit organizations, activists, and community groups have equal access to the Internet, allowing them to advocate for social and political causes without discrimination. The danger is that ISPs could examine and manipulate users’ Internet traffic, compromising their privacy and secure communication.

However, the current reality is that net neutrality is not being enforced. It was defeated in the 2017 FCC decision by another vote of 3-2. Pai’s FCC was concerned that net neutrality regulations would discourage ISPs from investing in network infrastructure and improving Internet speeds since they cannot charge content providers for prioritized access. Many net neutrality critics argued that without paid prioritization, the quality of some services would suffer, particularly during peak times when networks become congested.

Big ISPs argued that without the ability to create tiered service plans or charge content providers for faster access, they would struggle to manage network traffic and recoup the costs of infrastructure investments. They suggested that net neutrality rules limit ISPs’ ability to manage and optimize network traffic efficiently, potentially affecting all users’ service quality. The general argument was that government regulation of the Internet stifles innovation and imposes unnecessary bureaucratic burdens on ISPs that hinder user performance.

It’s important to note that net neutrality is a complex policy principle, and its impact on underserved and economically disadvantaged communities depends on effective enforcement and regulatory oversight. Additionally, while net neutrality works to ensure equitable access to the Internet, broader efforts, such as affordable broadband access programs and digital literacy initiatives, are critical to addressing the digital divide and promoting digital inclusion for all, including those with lower incomes.


[1] The Federal Communications Commission (FCC) is meant to be an independent agency of the United States government responsible for regulating communications by wire and radio in the United States. It is designed to operate independently of partisan politics. The FCC comprises five commissioners appointed by the President of the United States and confirmed by the Senate. No more than three commissioners can be members of the same political party by law. The political affiliation of FCC commissioners can vary depending on the presidential administration in power during their appointments. As a result, the FCC’s policies and priorities may shift with changes in leadership and the political makeup of the commission. Therefore, the FCC’s stance on various issues, including telecommunications, broadband regulation, net neutrality, and media ownership, can change over time based on the views and priorities of the commissioners appointed by the current administration. It is important to recognize that a combination of legal mandates, policy considerations, public input, and the political environment at the time influences the FCC’s actions and decisions.

Citation APA (7th Edition)

Pennings, A.J. (2023, Oct 9). US Internet Policy, Part 7: Net Neutrality Discussion Returns with New FCC Democratic Majority.

[2] List of Previous Posts in this Series

Pennings, A.J. (2022, Jun 22). US Internet Policy, Part 6: Broadband Infrastructure and the Digital Divide.

Pennings, A.J. (2021, May 16). US Internet Policy, Part 5: Trump, Title I, and the End of Net Neutrality.

Pennings, A.J. (2021, Mar 26). Internet Policy, Part 4: Obama and the Return of Net Neutrality, Temporarily.

Pennings, A.J. (2021, Feb 5). US Internet Policy, Part 3: The FCC and Consolidation of Broadband.

Pennings, A.J. (2020, Mar 24). US Internet Policy, Part 2: The Shift to Broadband.

Pennings, A.J. (2020, Mar 15). US Internet Policy, Part 1: The Rise of ISPs.

Related Posts

Pennings, A.J. (2023, May 6). Deregulating US Data Communications.

Pennings, A.J. (2021, Sep 22). Engineering the Politics of TCP/IP and the Enabling Framework of the Internet.

Pennings, A.J. (2019, Nov 26). The CDA’s Section 230: How Facebook and Other ISPs Became Exempt from Third Party Liabilities.

Pennings, A.J. (2018, Oct 17). Potential Bill on Net Neutrality and Deep Pocket Inspection.

Pennings, A.J. (2016, Nov 15). Broadband Policy and the Fall of the ISPs.

Pennings, A.J. (2011, Jan 31). Comcast and General Electric Complete NBC Universal Deal.



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002-2012 he was on the faculty of New York University, starting programs in Digital Communications and Information Systems Management while teaching digital economics and policy. He also helped set up the Digital Media Management program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

ICTs for SDG 7: Twelve Ways Digital Technologies can Support Energy Access for All

Posted on | September 29, 2023 | No Comments

Digital Technologies, often known as Information and Communication Technologies (ICTs), are crucial in supporting energy development and access in numerous ways. ICTs can enhance energy production, distribution, and consumption, as well as promote energy efficiency and help facilitate the transition to clean and sustainable energy sources. ICTs are accorded a significant role in supporting the achievement of the United Nations’ Sustainable Development Goals (SDG), including SDG 7, which aims to ensure access to affordable, reliable, sustainable, and modern energy for all. To harness the full potential of ICTs for energy development, it is essential to invest in grid infrastructure and equipment, cybersecurity and data privacy, as well as digital literacy and skills.

Energy grids

Here are twelve ways that ICTs can support energy development and access:

    1) Smart Electrical Grids
    2) Renewable Energy Integration
    3) Energy Monitoring and Management
    4) Demand Response Programs
    5) Energy Efficiency
    6) Energy Storage
    7) Predictive Maintenance
    8) Remote Monitoring and Control
    9) Electric Vehicle (EV) Charging Infrastructure
    10) Energy Access in Remote Areas
    11) Data Analytics and Predictive Modeling, and
    12) Research and Development.

1) ICTs enable the implementation of smart grids, which are intelligent electricity distribution systems that allow for real-time monitoring, control, and automation of grid operations. Smart grids use sensors, digital communication lines, and advanced analytics to monitor and manage electricity flows in real time. These Internet of Things (IoT) networks can optimize energy distribution, reduce energy losses, and integrate renewable energy sources more effectively. IoT sensors in energy infrastructure enable remote monitoring, maintenance, and early detection of faults, reducing downtime and improving energy availability.
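The fault-detection side of smart-grid monitoring can be sketched in a few lines. The example below is a minimal, hypothetical illustration (the nominal voltage, tolerance band, and sensor names are invented), showing how a monitoring system might flag readings that drift outside acceptable bounds:

```python
# Minimal sketch: flag grid sensor readings outside a nominal voltage band,
# the kind of real-time check a smart-grid monitoring platform might run.
NOMINAL_VOLTAGE = 230.0   # volts; hypothetical distribution-level nominal
TOLERANCE = 0.05          # +/- 5% band before an alert is raised

def check_reading(sensor_id, voltage):
    """Return a record marking whether the voltage drifts outside the band."""
    deviation = abs(voltage - NOMINAL_VOLTAGE) / NOMINAL_VOLTAGE
    return {"sensor": sensor_id, "voltage": voltage, "alert": deviation > TOLERANCE}

# Simulated readings from three feeders; only out-of-band ones become alerts.
readings = [("feeder-1", 231.2), ("feeder-2", 245.8), ("feeder-3", 212.0)]
alerts = [r for r in (check_reading(s, v) for s, v in readings) if r["alert"]]
```

In a real deployment, the readings would stream in over the grid's communication network and the alerts would feed an operator dashboard or automated switching logic.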

2) ICTs facilitate the integration of renewable energy sources, such as solar, wind, geothermal, and hydroelectric energy, into energy grids. They provide real-time data on energy generation, storage, and consumption, allowing grid operators to balance supply and demand efficiently. Two main types of renewable energy generation resources need to be integrated: distributed generation, which refers to small-scale renewable generation close to a distribution grid; and centralized, utility-scale generation, which refers to larger projects that connect to major grids through transmission lines (See above image). Generating electricity using renewable energy resources rather than fossil fuels (coal, oil, and natural gas) can help reduce greenhouse gas emissions (GHGs) from the power generation sector and help address climate change.

3) Smart meters and energy management systems use ICTs to provide consumers with real-time information about their energy usage, replacing the electromechanical meters that were unreliable and easy to tamper with. These devices empower individuals and businesses to make informed decisions that reduce energy consumption and costs. Smart meters allow for instantaneous monitoring of energy consumption, enabling utilities to optimize energy distribution and consumers to track and manage their usage. The Asian Development Bank has been very active in supporting the transition to smart meters, helping countries meet their carbon commitments under the Paris Agreement, a legally binding international treaty on climate change adopted by 196 Parties at the UN Climate Change Conference (COP21) in Paris, France, in December 2015.
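One thing interval data from smart meters makes possible is time-of-use billing. The sketch below illustrates the idea with invented rates and an invented peak window; actual tariffs vary by utility:

```python
# Illustrative time-of-use billing from hourly smart-meter readings.
# The rates and peak window are hypothetical, not any utility's tariff.
PEAK_HOURS = range(17, 21)               # 5pm-9pm, assumed peak window
PEAK_RATE, OFF_PEAK_RATE = 0.30, 0.12    # $/kWh, illustrative only

def bill(hourly_kwh):
    """Sum the cost of one day's hourly readings under a two-tier tariff."""
    total = 0.0
    for hour, kwh in hourly_kwh.items():
        rate = PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE
        total += kwh * rate
    return round(total, 2)

day = {8: 1.0, 12: 1.5, 18: 2.0, 19: 2.5}  # hour of day -> kWh consumed
cost = bill(day)
```

The same interval data lets a consumer see that shifting the 6pm-7pm usage to midday would cut the bill, which is exactly the behavioral feedback loop smart metering is meant to create.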

4) ICTs enable demand response programs that encourage consumers to adjust their energy usage during peak demand periods in response to price signals and grid conditions. Utilities can send signals to smart devices (such as electric vehicles) to reduce energy consumption when necessary, avoiding blackouts and reducing the need to engage additional power plants. The New York Independent System Operator (NYISO), other electric distribution utilities, and wholesale system operators offer demand response programs to help avoid overload, keep prices down, reduce emissions, and avoid expensive equipment upgrades.
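The decision logic on the device side of a demand-response program can be illustrated simply. In this hypothetical sketch (the price threshold and load names are invented), a smart controller receives a price signal and decides which flexible loads to curtail, largest first:

```python
# Sketch of a demand-response decision: a smart controller receives a
# price signal from the utility and picks flexible loads (kW) to shed.
# The threshold and device names are hypothetical.
def respond(price_per_kwh, flexible_loads):
    """Return the loads to curtail, largest first, when the price spikes."""
    THRESHOLD = 0.25  # $/kWh above which curtailment kicks in (assumed)
    if price_per_kwh <= THRESHOLD:
        return []
    # Shed the biggest flexible loads first (e.g., EV charger before dishwasher).
    return sorted(flexible_loads, key=flexible_loads.get, reverse=True)

loads = {"ev_charger": 7.2, "water_heater": 4.5, "dishwasher": 1.2}
curtail = respond(0.42, loads)   # a peak-price signal arrives
```

Real programs like NYISO's layer enrollment, telemetry, and settlement on top of this basic signal-and-respond pattern.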

ICTs can also deliver related energy education and awareness campaigns through websites, mobile apps, dashboards, and social media to inform consumers about energy-saving practices and sustainable energy choices. Mobile payment platforms can also facilitate access to prepaid energy services, making it easier for people to pay for electricity and monitor their energy usage. Digital platforms can connect consumers with renewable energy providers, allowing individuals and businesses to purchase renewable energy certificates or even invest in community solar projects.

5) ICTs can be used to monitor and control energy-consuming devices and systems, such as HVAC (heating, ventilation, and air conditioning), lighting, and appliances, to optimize energy efficiency. Building management systems and home automation solutions are examples of ICT applications in this area. Energy-efficient homes, offices, and manufacturing facilities use less energy to heat, cool, and run appliances, electronics, and equipment. Energy-efficient production facilities use less energy to produce goods, resulting in price reductions. Key principles of the EU energy policy on efficiency focus on producing only the energy that is really needed, avoiding investments in assets destined to be stranded, and reducing and managing demand for energy in a cost-effective way.

Wider adoption of electrical energy technologies will assist the transition to more efficient energy sources while reducing greenhouse gases and other potential pollutants. Heat pumps, for example, are an exciting addition that operate like air conditioners running in reverse. Heat pumps are used in EVs and are making inroads into homes and businesses.

6) ICTs support the management and optimization of energy storage systems, including Battery Energy Storage Systems (BESS) and pumped hydro storage. The latter moves water to higher elevations when power is available and runs it down through generators to produce electricity. These technologies store excess energy when it’s abundant and release it when demand is high. Tesla’s Megapack and Powerwalls use energy software platforms called Opticaster and Virtual Machine Mode that manage energy storage products and assist efficient electrical transmission over long grid lines.

Tesla Master Plan 3
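The charge-when-abundant, discharge-when-scarce rule at the heart of storage management can be expressed as a simple dispatch function. This is a toy illustration with invented numbers; commercial platforms such as Tesla's Opticaster solve a far richer optimization over prices, forecasts, and battery health:

```python
# Illustrative dispatch rule for a battery energy storage system (BESS):
# charge on surplus generation, discharge on deficit, within capacity limits.
# One time step is treated as one hour, so kW and kWh line up numerically.
def dispatch(generation_kw, demand_kw, soc_kwh, capacity_kwh):
    """Return (action, amount) for this step given state of charge (SoC)."""
    surplus = generation_kw - demand_kw
    if surplus > 0:                           # excess energy: charge
        charge = min(surplus, capacity_kwh - soc_kwh)  # respect headroom
        return ("charge", charge)
    deficit = -surplus
    discharge = min(deficit, soc_kwh)         # shortfall: discharge what we have
    return ("discharge", discharge)

# A sunny hour: 120 kW generated against 80 kW demand, battery nearly full.
action = dispatch(generation_kw=120.0, demand_kw=80.0,
                  soc_kwh=90.0, capacity_kwh=100.0)
```

Even this crude rule shows why ICT matters: the dispatch decision depends on real-time generation, demand, and state-of-charge data flowing into one place.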

7) In energy production facilities, ICTs can be used to monitor the condition of equipment and predict when maintenance is needed. This reduces downtime, extends equipment lifespan, and improves overall efficiency. ICT-based weather and renewable energy forecasting models improve the accuracy of predicting renewable energy generation, aiding grid operators in planning and resource allocation. Robust ICT networks can also ensure timely communication during energy-related emergencies, helping coordinate disaster response and recovery efforts.
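A minimal form of condition-based maintenance is a rolling average over sensor readings with an alarm limit. The sketch below is hypothetical (the vibration figures, window, and limit are invented); production systems would use trained models over many sensor channels:

```python
# Minimal predictive-maintenance sketch: flag a turbine bearing for service
# when its vibration readings trend above an alarm limit. Data, window,
# and limit are all hypothetical.
from statistics import mean

def needs_service(vibration_mm_s, window=3, limit=4.5):
    """True if the rolling average of the most recent readings exceeds the limit."""
    if len(vibration_mm_s) < window:
        return False  # not enough data yet to judge a trend
    return mean(vibration_mm_s[-window:]) > limit

healthy = [2.1, 2.3, 2.2, 2.4, 2.3]   # stable readings, no action needed
wearing = [2.2, 2.8, 3.9, 4.7, 5.2]   # rising trend crosses the limit
```

The point of the rolling window is to react to a sustained trend rather than a single noisy spike, which is what lets maintenance be scheduled before a failure instead of after.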

8) ICTs enable remote monitoring and control of energy infrastructure, such as power plants and substations. These processes use a combination of hardware and software to track key metrics and overall performance. Their equipment mix includes IoT-enabled sensors that track relevant data, while software solutions produce a dashboard of alerts, trends, and updates that can also enhance the safety and reliability of energy production and distribution.

9) Digital technologies play a critical role in managing EV charging infrastructures. They can help distribute electricity efficiently to both stationary and wireless charging stations. Mobile apps provide users with real-time information about charging availability, compatibility, and costs. They can also keep drivers and passengers entertained and productive while waiting for the charging to conclude.
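Behind a charging app's "find a charger" feature sits a simple filter-and-rank step. The sketch below uses made-up station data to illustrate the idea: keep only stations with a free, compatible connector, then take the nearest:

```python
# Sketch of an EV charging app's station finder: filter by availability
# and connector type, then rank by distance. All station data is invented.
stations = [
    {"name": "Mall Garage",  "km": 2.4, "free": 0, "connector": "CCS"},
    {"name": "City Lot",     "km": 1.1, "free": 2, "connector": "CCS"},
    {"name": "Highway Stop", "km": 8.0, "free": 5, "connector": "CHAdeMO"},
]

def best_station(stations, connector):
    """Return the nearest station with a free, compatible port, or None."""
    usable = [s for s in stations
              if s["free"] > 0 and s["connector"] == connector]
    return min(usable, key=lambda s: s["km"]) if usable else None

pick = best_station(stations, "CCS")   # nearest available CCS charger
```

A real service would add live occupancy updates, pricing, and routing, but the core data problem, matching a moving vehicle to scarce charging capacity, is the one shown here.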

10) ICT can facilitate the development of microgrids in off-grid or remote areas, providing access to reliable electricity through localized energy generation and distribution systems. These alternate grids use ICTs to support the deployment of standalone renewable energy systems, providing access to electricity and related clean energy sources such as geothermal, hydroelectric, solar, and wind. Renewable energy, innovative financing, and an ecosystem approach can work together to provide innovative solutions to rural areas.

11) ICTs enable data analytics and predictive modeling to forecast energy consumption patterns, grid behavior, and the impact of impending weather conditions. Analyzing and interpreting vast amounts of data allows energy companies to optimize power generation through real-time monitoring of energy components, cost forecasting, fault detection, consumption analysis, and predictive maintenance.

These insights can inform energy planning and policy decisions. The ICT-enabled data collection, analysis, and reporting on energy access and usage can help policymakers and organizations track progress toward SDG 7 targets.
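At its simplest, consumption forecasting can start from a naive seasonal baseline: predict the next value for a given hour as the average of that same hour on previous days. The figures below are illustrative; utility-grade models add weather, calendar, and economic inputs on top of baselines like this:

```python
# A toy forecasting step: predict the next hour's load as the average of
# the same hour over previous days, a naive seasonal baseline that real
# utility models refine. All figures are illustrative.
def naive_forecast(history_kwh):
    """Average of past observations for the same hour of day."""
    return round(sum(history_kwh) / len(history_kwh), 2)

# kWh drawn on a hypothetical feeder at 6pm over the last four days
six_pm_history = [310.0, 295.0, 320.0, 305.0]
forecast = naive_forecast(six_pm_history)
```

Baselines like this also serve a policy role: measured progress toward SDG 7 targets depends on exactly this kind of systematic collection and aggregation of consumption data.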

12) ICTs support research and development efforts in the energy sector by facilitating simulations and the testing of new technologies and energy solutions. Energy and fuel choices are critical determinants of economic prosperity, environmental quality, and national security and need to be central to academic and commercial research.

To fully address ICT for SDG 7, it’s essential to confront digital divides, expand internet access, and promote digital literacy in underserved communities. Collaboration among governments, utilities, technology providers, research institutions, and civil society is vital to advancing the integration of ICTs into the energy sector and ensuring sustainable and reliable energy development for all.


[1] Some of the categories and text for this essay were generated by ChatGPT, edited with the assistance of Grammarly, and written in line with my expertise and knowledge from teaching an ICT and SDGs course for six years.

Citation APA (7th Edition)

Pennings, A.J. (2023, Sept 29). ICTs for SDG 7: Twelve Ways Digital Technologies can Support Energy Access for All.



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches courses in ICT for sustainable development as well as broadband networks and sensing technologies. From 2002-2012 he was on the faculty of New York University, and he also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

The Increasing Value of Science, Technology, and Society Studies (STS)

Posted on | August 27, 2023 | No Comments

I regularly teach a course called Introduction to Science, Technology, and Society Studies (STS). It investigates how science and technology both shape and are shaped by society. The course seeks to understand their cultural, economic, ethical, historical, and political dimensions by investigating the dynamic interplay between these key factors of modern life.

Below I outline class topics, list major universities offering similar programs, and introduce some general areas of STS research. The scholarship produced by STS is used worldwide by engineers, journalists, legislators, policy-makers, as well as managers and other industry actors. It also has relevance to the general public engaged in climate, health, digital media, and other societal issues arising from science and technology adoption.

In class, we cover the following topics: Artificial Intelligence, Biomedicine, Cyberspace, Electric Vehicles and Smart Grids, Nanotechnology, Robotics, and even Space Travel. Tough subjects, but just as challenging is the introduction of perspectives from business, cognitive science, ethics, futurism, humanities, and social sciences like politics that can provide insights into relationships between science, technology, and society.[1]

STS is offered by many of the most highly-rated universities, often in Engineering programs but also in related Environment, Humanities, and Medical programs.

Although I teach in South Korea, the program was developed at Stony Brook University (SBU) in New York as part of the Department of Technology and Society (DTS), offering BS, MS, and PhD degrees at the College of Engineering and Applied Sciences (CEAS). The DTS motto is “Engineering has become much too important to be left to the engineers,” which paraphrases James Bryant Conant, who wrote after examining the results of the atomic bomb on Hiroshima that, “Science is much too important to be left to the scientists.” DTS programs prepare graduates with the technical and science capacity to collaborate productively with sister engineering and science departments at SBU and SUNY Korea while applying social science expertise and humanistic sensibility to holistic engineering education.

Our DTS undergraduate program at SUNY Korea has an additional emphasis on information and communication technologies (ICT) due to Korea’s leadership in this area.

Notable STS Programs

– Massachusetts Institute of Technology (MIT) has a Program in Science, Technology, and Society and is widely considered a founder of the field in the early 1970s.

– Stanford University’s Program in Science, Technology, and Society explores scientific and technological developments’ social, political, and ethical dimensions.

– University of California, Berkeley’s Science, Technology, Medicine, and Society Center addresses the social, cultural, and political implications of science and technology. It is known for its engagement with critical theory and social justice issues.

– Harvard University’s Program on Science, Technology, and Society is part of the John F. Kennedy School of Government and provides a platform for examining the societal impact of science and technology through various courses and research opportunities.

– Cornell University, New York: Cornell’s Department of Science and Technology Studies was an early innovator in this area and offers undergraduate and graduate programs focusing on the history, philosophy, and social aspects of science and technology.

– The University of Edinburgh’s Science, Technology, and Innovation Studies Department in the United Kingdom is known for its research and teaching in the field.

– University of California, San Diego’s Science Studies Program is part of the Department of Literature and offers interdisciplinary courses that examine the cultural, historical, and ethical dimensions of science and technology.

– The University of Twente in the Netherlands has a renowned Science, Technology, and Society Studies program emphasizing a multidisciplinary approach.

– In Sweden, Lund University’s Department of Sociology offers a strong STS program that covers topics such as the sociology of knowledge, science communication, and the ethical aspects of technology.

– York University in Canada has a Science and Technology Studies program that encourages critical thinking about the role of science and technology in contemporary culture.

While this list is incomplete, let me mention the Department of Technology and Society (DTS) and its history at Stony Brook University, which also dates back to the early 1970s. The Department of Technology and Society at SUNY Korea offers its degrees from DTS in New York, including BS and MS degrees in Technological Systems Management and a PhD in Technology, Policy, and Innovation.

Many engineering programs have turned to STS to provide students with conceptual tools to think about engineering problems and solutions in more sophisticated ways. It is often allied with Technology Management programs that include business perspectives and information technology practices. Some programs feature standalone courses on the sociocultural and political aspects of technology and engineering, often taught by faculty from outside the engineering school. Others incorporate STS material into traditional engineering courses, e.g., by making ethical or societal impact assessments part of capstone projects.

So, Science, Technology, and Society Studies (STS) scholars study the complex interplay between these domains to understand how they influence each other and impact human life.

Key Research Inquiries

Historical Context – STS scholars often delve into the past developments of scientific discoveries and technological innovations, as well as the social and cultural contexts in which they emerged. They readily explore the historical development of scientific and technical knowledge to uncover how specific ideas, inventions, and discoveries have emerged and changed over time. This exercise helps to contextualize the current state of science and technology and understand their origins.

Social Implications – STS emphasizes the social consequences of scientific and technological advancements. This examination relates to ethics, equity, power dynamics, and social justice issues. For instance, STS might analyze how certain technologies disproportionately affect different groups within society or how they might be used to reinforce existing inequalities.

Policy and Governance – STS researchers analyze how scientific and technological innovations are regulated, legislated, and governed by local, national, and international policies. They explore how scientific expertise, public opinion, industry interests, and political considerations influence policy decisions. They also assess the effectiveness of these policies in managing potential risks and benefits.

Public Perception and Communication – STS studies also explore how scientific information and technological advancements are communicated to the public. These inquiries involve investigating how public perceptions and attitudes towards science and technology are formed. They recognize that media narratives and communication channels influence these perceptions.

Public Engagement and Input – STS emphasizes the importance of involving the public in discussions about scientific and technological matters. It examines how scientific knowledge is communicated to the public, how public perceptions influence scientific research, and how public input can shape technological development.

Social Construction of Science and Technology – STS emphasizes that science and technology are not solely products of objective inquiry or innovation but are also influenced by social and cultural factors. It examines how scientific knowledge is constructed, contested, and accepted within different communities and how economic, political, and cultural forces shape technologies.

Ethical and Moral Considerations – The STS field often addresses scientific and technological advancements’ ethical and moral implications. This analysis includes discussions on the responsible development of new technologies, the potential for unintended consequences, and the distribution of benefits and risks across different social groups.

Innovation Studies – STS scholars also study innovation processes, including how scientific knowledge translates into technological applications, how creative ecosystems are established, and how collaboration between researchers, policymakers, and industry actors contributes to technological progress.

Environmental Analysis – Science, Technology, Society, and Environment (STSE) Studies interrogate how scientific innovations, technology investments, and industrial applications affect human society and the natural environment. Education is an important component as people make decisions that often guide the environmental work of scientists and engineers. STS also investigates how scientific and technological tools can help create more climate-resilient urban and rural infrastructures.

Technological Determinism – STS often confronts the idea of technological determinism, which suggests that technology significantly drives social change. It investigates the institutional factors and human agency that shape technological development and its impacts, while recognizing that science and technology are themselves powerful driving forces.

Digital Culture and Network Infrastructure – Some STS scholars study the overall media environment, the culture it engenders, and its enabling frameworks. They consider the interaction and interdependence of various media forms and the linkages within and between networks. They explore how communication speed, information storage, and digital processing influence human perception, culture, economics, and the environment. These inquiries often raise questions about privacy, censorship, propaganda, and the responsible use of media technologies.

Energy and Carbon Dependence – While engineers study the chemical, electrical, electromagnetic, mechanical, and thermodynamic properties of energy, STS scholars examine the central role of these energies in modern life. Again, they take various multidisciplinary, social-scientific perspectives on energy and the environment, analyzing the economic, political, and social aspects of the production and consumption of energy, including the controversies, domestication, and innovation of new forms of energy.

Interdisciplinary Approach – As mentioned throughout this post, STS is inherently interdisciplinary, drawing on insights from sociology, anthropology, history, philosophy, political science, and more. This multidisciplinary perspective allows for a comprehensive examination of the complex relationships between science, technology, and society.


STS is relevant in addressing contemporary issues such as artificial intelligence, biotechnology, environmental challenges, and privacy concerns. It encourages a holistic understanding of the complex interactions between science, technology, and society, which is crucial for making informed decisions and policies in an increasingly technologically driven world. The field is research-driven, using both quantitative and qualitative methods. By studying these interactions, it aims to contribute to more informed decision-making, responsible innovation, and a better understanding of the role of science and technology in modern societies.


[1] I have previously described how I use the 4 Cs of the cyberpunk genre for techno-social analysis.
[2] Some of the categories and text for this essay were generated by ChatGPT and edited with the use of Grammarly, in line with additional knowledge from my teaching the STS course for six years.

Citation APA (7th Edition)

Pennings, A.J. (2023, Aug 27). The Increasing Value of Science, Technology, and Society Studies (STS).



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he started programs in Digital Communications and Information Systems Management while teaching digital economics. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Pressing Global Standards for Internet Protocols

Posted on | July 14, 2023 | No Comments

Developing new standards that would allow computers to connect and exchange information was central to the growth of international data networks. Computers and telecommunications networks are routinized codes and procedures, constituted and shaped technologically by economic, engineering, and political decisions. Telecommunications requires agreement about the practices of interconnection, the interaction protocols, and the technical standards needed to couple disparate nodes. This post looks at the importance of standards development as the Internet emerged. Several powerful contenders developed competing designs before TCP/IP eventually emerged as the global solution. The Open Systems Interconnection (OSI) model developed through the International Organization for Standardization (ISO), for example, was influential, but turned out to be a fatal distraction for many companies.

Standards sometimes emerge out of functionality, sometimes out of cooperation, and often out of pure economic power. Each of these conditions was present in the fight to develop telecommunications equipment for international data communications during the early 1970s and into the 1980s. Standards allow different types of equipment to work together. Choices of standards involve the exclusion of some specifications and the inclusion of others. Standards create competitive advantages for companies. Ultimately, these standards determine whose equipment will be used and whose will either be scrapped or never get off the design board. This is even more the case when standards bridge national boundaries, where protocols and equipment have to be harmonized for effective technical communications.

Users (usually international corporations), international coordinating bodies, and computer equipment manufacturers were all starting to react to the new economic conditions of the 1970s. The movement to floating foreign exchange rates and the increased demand for ICT were especially problematic. Banks and other financial institutions such as the New York Stock Exchange (NYSE) were also very keen to develop data solutions to expand their scope over wider market areas, speed up “back office” data processing services, and provide new services.

Meanwhile, the ITU began soliciting the positions of its member nations and associated corporations regarding their plans to develop data communications and possible joint solutions. Perhaps most importantly, IBM’s Systems Network Architecture (SNA), a proprietary network, had the force of the monolithic computer corporation behind it. SNA was a potential de facto standard for international data communications because of the company’s overwhelming market share in computers.

Several other companies also came out with proprietary networks during the mid-1970s. Burroughs, Honeywell, and Xerox all drew on ARPANET technology but designed their networks to work only with the computers they manufactured.[1] As electronic money and other desired services emerged worldwide, these three sets of stakeholders (users, the ITU, and computer manufacturers) attempted to develop the conduits for the world's new wealth.

International organizations were also key to standards development in the arena of international data communications. The ITU and the ISO initiated international public standards on behalf of their member states and telecommunications agencies. The ITU's Consultative Committee on International Telegraphy and Telephony (CCITT) was responsible for coordinating computer communication standards and policies among its member Post, Telephone, and Telegraph (PTT) organizations. This committee produced "Recommendations" for standardization, which usually were accepted readily by its member nations.[2] As early as 1973, the ITU started to develop its X-series of telecommunications protocols for data packet transfer (X indicated data communications in the CCITT's taxonomy).

Another important standards body mentioned above is the International Organization for Standardization (ISO). The ISO was formed in 1947 to coordinate standards in a wide range of industries. In this case, it represented primarily the telecommunications and computer equipment manufacturers. ANSI, the American National Standards Institute, represented the US.

Controversy emerged in October 1974, revolving around IBM's SNA network, with which the Canadian carriers had taken issue. The Trans-Canada Telephone System (TCTS) wanted to produce and promote its own packet-switching network, which it called Datapac. It had been developing its own protocols and was concerned that IBM would develop monopolistic control over the data communications market if allowed to continue building its own transborder private networks. Although most computers connected at the time were IBM machines, the TCTS wanted circuitry that would allow other types of computers to use the network.

Both sides came to a "standoff" in mid-1975, as IBM wanted the PTT to use its SNA standards and the carrier tried to persuade IBM to conform to Canada's requirements. The International Telecommunication Union attempted to resolve the situation by forming an ad hoc group to come up with universal standards for connecting "public" networks. Britain, Canada, and France, along with the BBN spin-off Telenet from the US, started to work on what was to become the X.25 data networking standard.

The ITU's CCITT, which represented the interests of the PTT telecommunications carriers, proposed the X.25 and X.75 standards out of a sense of mutual interest among its members in retaining their monopoly positions. US representatives, including the US Defense Communications Agency, pushed the new TCP/IP protocols developed for the ARPANET because of their inherent network and management advantages for computer operators. Packet-switching broke up information and repackaged it in individual packets of bits that needed to be passed through the telecommunications circuit to the intended destination. TCP gave data processing managers more control because it was responsible for initiating and setting up the connection between hosts.

For this to work, all the packets must arrive safely and be placed in the proper order. To get reliable information, a data-checking procedure needs to catch packets that are lost or damaged. TCP placed this responsibility at the computer host, while X.25 placed it within the network, and thus under the control of the network provider. The US pushed hard for the TCP/IP standard in the CCITT proceedings but was refused by the PTTs, who had other plans.[3]
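The host-based reliability model that TCP pioneered can be sketched in a few lines of Python. This is a toy illustration of the general idea (sequence numbers for reordering, a checksum for damage detection), not the actual TCP algorithm, which uses a 16-bit one's-complement checksum and sliding windows:

```python
# Toy sketch of host-based reliability: sequence numbers let the receiving
# host reorder packets, and a checksum catches corrupted ones.

def checksum(payload: bytes) -> int:
    """Toy checksum: sum of the payload bytes modulo 65536."""
    return sum(payload) % 65536

def make_packet(seq: int, payload: bytes) -> dict:
    """Wrap a payload with a sequence number and checksum."""
    return {"seq": seq, "payload": payload, "checksum": checksum(payload)}

def reassemble(packets: list[dict]) -> bytes:
    """Drop corrupted packets, then restore the original order."""
    valid = [p for p in packets if checksum(p["payload"]) == p["checksum"]]
    valid.sort(key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in valid)

# Packets may arrive out of order over the network.
pkts = [make_packet(1, b"lo, "), make_packet(0, b"Hel"), make_packet(2, b"world")]
assert reassemble(pkts) == b"Hello, world"
```

Under X.25, the analogue of this checking logic lived inside the carrier's network rather than on the customer's host, which is precisely what the PTTs wanted.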

Tensions increased due to a critical timeframe. The CCITT wanted to specify a protocol by 1976, as it met only every four years to vote on new standards. They had to work quickly to have a standard ready for the CCITT plenary coming together in September 1976. Otherwise, they would have to wait until 1980.

The X.25 standards were developed and examined throughout the summer of 1976 and approved by the CCITT members in September. The hastily contrived protocol was approved over the objections of US representatives, who wanted TCP/IP institutionalized. The PTTs and other carriers argued that TCP/IP was unproven and that requiring its implementation on all the hosts they would serve was unreasonable. Given the difficulty ARPANET hosts had implementing TCP/IP by 1983, their concerns had substance. X.25 and another standard, X.75, put the PTTs in a dominant position in data communications, despite the robustness of computer innovations and the continuing call by corporations for better service.

The ARPANET’s packet-switching techniques made it into the commercial world with the help of the X-series of protocols defined by the ITU in conjunction with some former ARPANET employees. A store-and-forward technology rooted in telegraphy, it passed data packets over a special network to find the quickest route to its destination. What was needed was an interface to connect the corporation or research institute’s computer to the network.

The X.25 protocol was created to provide the connection from the computer to the data network. At the user's firm, "dumb" terminals, word processors, mainframes, and minicomputers (known in the vernacular as DTE, or Data Terminal Equipment) could be connected to the X.25 interface equipment with technology called PADs (Packet Assemblers/Disassemblers). The conversion of data from the external device to the X.25 network was transparent to the terminal and would not affect the message. An enterprise could build its own network by installing a number of switching computers connected by high-speed lines (usually 56 kbps up to the late 1980s).

X.25 connected these specially designed computers to the data network. The network could also be set up by a separate company or government organization to provide data networking services to customers. In many cases a hybrid network could be set up combining private facilities with connections to a public-switched data network.[4]

Developed primarily by Larry Roberts from ARPA, who later went to work with Telenet’s value-added networks, X.25 was a compromise that provided basic data communications for transnational users while keeping the carriers in charge. The standard was eagerly anticipated by the national PTTs who were beginning to realize the importance of data communications and the danger of allowing computer manufacturers to monopolize the standards process by developing proprietary networks. What was surprising though, was the endorsement of X.25 by the transnational banks and other major users of computer communications. As Schiller explained:

  • What is unusual is that U.S. transnational corporations, in the face of European intransigence, seem to have endorsed the X.25 standard. In a matter of a few months, Manufacturers Hanover, Chase Manhattan, and Bank of America announced their support for X.25, the U.S. Federal Reserve bruited the idea of acceptance, and the Federal Government endorsed an X.25-based interim standard for its National Communications System. Bank of America, which on a busy day passes $20 billion in assets through its worldwide network “cannot stall its expansion planning until IBM gives its blessing to a de facto international standard,” claims one report. Yet even more unusual, large users’ demands found their mark even over the interests of IBM, with its tremendous market share of the world’s computer base. In summer, 1981, IBM announced its decision to support the X.25 standard within the United States.[5]

Telenet subsequently filed an application with the FCC to extend its domestic value-added services internationally using the X.25 standard and a number of PTTs such as France’s Transpac, Japan’s DDX, and the British Post Office’s PSS also converted to the new standard. Computer equipment manufacturers were forced to develop equipment for the new standard. This was not universally criticized, as the standards provided a potentially large audience for new equipment.

Although the X-series did not resolve all of the issues for transnational data networking users, it did provide a significant crack in the limitations on international data communications and a system that worked well enough for the computers of the time. Corporate users as well as the PTTs were temporarily placated. A number of privately owned network service providers such as Cybernet and Tymnet used the new protocols, as did new publicly owned networks such as Uninet, Euronet, and the Nordic Data Network.

In another attempt to preclude US dominance in networking technology, the British Standards Institute proposed to the ISO in 1977 that the global data communications infrastructure needed a standard architecture. The move was controversial because of the recent work and subsequent unhappiness over X.25. The next year, members of the International Organization for Standardization (ISO), namely Japan, France, the US, Britain, and Canada, set out to create a new set of standards they called Open Systems Interconnection (OSI), using generic components that many different equipment manufacturers could offer. Most equipment for telecommunications networks was built by national electronics manufacturers for domestic markets, but the internationalization of communications required a different approach, because multiple countries needed to be connected, and that required compatibility. Work on OSI was done primarily by Honeywell Information Systems, which actually drew heavily on IBM's SNA (Systems Network Architecture). The layered model was initially favored by European countries that were suspicious of the predominant US protocols.

Libicki describes the process:

  • “The OSI reference model breaks down the problem of data communications into seven layers; this division, in theory, is simple and clean, as show in Figure 4. An application sends data to the application layer, which formats them; to the presentation layer, which specifies byte conversion (e.g. ASCII, byte-ordered integers); to the session layer, which sets up the parameters for dialogue, to the transport layer, which puts sequence numbers on and wraps checksums around packets; to the network layer, which adds addressing and handling information; to the data-link layer, which adds bytes to ensure hop-to-hop integrity and media access; to the physical layer, which translates bits into electrical (or photonic) signals that flow out the wire. The receiver unwraps the message in reverse order, translating the signals into bits, taking the right bits off the network and retaining packets correctly addressed, ensuring message reliability and correct sequencing , establishing dialogue, reading the bytes correctly as characters, numbers, or whatever, and placing formatted bytes into the application. This wrapping and unwrapping process can be considered a flow and the successive attachment and detachment of headers. Each layer in the sender listens only to the layer above it and talks only to the one immediately below it and to a parallel layers in the receiver. It is otherwise blissfully unaware of the activities of the other layers.”[6]
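The wrapping-and-unwrapping flow Libicki describes can be sketched as successive header attachment and detachment. This is a deliberately simplified illustration with made-up text headers, not real protocol formats:

```python
# Each layer wraps the message from the layer above with its own header;
# the receiver peels the headers off in reverse order.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def send(message: str) -> str:
    """Wrap the message on the way down the stack (physical ends up outermost)."""
    for layer in LAYERS:
        message = f"[{layer}]" + message
    return message

def receive(frame: str) -> str:
    """Unwrap from the wire back up to the application, checking each header."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert frame.startswith(header), f"missing {layer} header"
        frame = frame[len(header):]
    return frame

assert receive(send("hello")) == "hello"
```

Each function touches only adjacent layers' wrappings, mirroring Libicki's point that every layer is "blissfully unaware" of the others.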

Specifying protocols before their actual implementation turned out to be bad policy. Unfortunately for Japan and Europe, whose large domestic equipment manufacturers did not want the US to control international telecommunications equipment markets, the opposite happened. These countries lost valuable time developing products with OSI standards while the computer networking community increasingly used TCP/IP. As the Internet took off, the manufacturing winners were companies like Cisco and Lucent. They ended up years ahead of other telecom equipment manufacturers and gave the US the early advantage in Internetworking.[7]

In another post, I explore the engineering of a particular political philosophy into TCP/IP.

Citation APA (7th Edition)

Pennings, A.J. (2023, July 14). Pressing Global Standards for Internet Protocols.



[1] Janet Abbate. (1999) Inventing the Internet. Cambridge, MA: The MIT Press. p. 149.
[2] Janet Abbate. (1999) Inventing the Internet. Cambridge, MA: The MIT Press. p. 150.
[3] Ibid., p. 155.
[4] Helmers, S.A. (1989) Data Communications: A Beginner’s Guide to Concepts and Technology. Englewood Cliffs, NJ: Prentice Hall. p. 180.
[5] Schiller, D. (1982) Telematics and Government. Norwood, NJ: Ablex Publishing Corporation. p. 109.
[6] Libicki, M.C. (1995) “Standards: The Rough Road to the Common Byte.” In Kahin, B. and Abbate, J. Standards Policy for Information Infrastructure. Cambridge, MA: The MIT Press. pp. 46-47.
[7] Abbate, p. 124.


Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches broadband technologies and policy. From 2002 to 2012, he was on the faculty of New York University, where he taught digital economics while managing programs addressing information systems and telecommunications. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Public and Private Goods: Social and Policy Implications

Posted on | May 24, 2023 | No Comments

In a previous related post, I wrote about how digital content and services can be considered "misbehaving economic goods" because most don't conform to the standard product that is individually owned and consumed in its entirety. In this post, I expand that analysis to a wider continuum of different types of public and private goods.

Economics is mainly based on the assumption that when a good or service is consumed, it is used up wholly by its one owner. But not all goods and services fit this standard model. This post looks at four different types of economic goods:

    – private goods,
    – public goods,
    – common goods, and
    – club goods.

Each of these economic product types displays varying degrees of "rivalry" and "excludability." These refer to 1) the degree of consumption, or "subtractability," and 2) whether non-paying consumers can be excluded from consumption. In other words, does the product disappear as it is consumed? And how easy is it to protect the product from unauthorized consumption? Understanding the characteristics of various goods and services can help guide economic processes towards more sustainable practices while maintaining high standards of living.
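The two dimensions form a simple 2×2 classification, which can be sketched in code. The example goods in the comments are the ones discussed below:

```python
def classify(rivalrous: bool, excludable: bool) -> str:
    """Map the two dimensions onto the four categories of goods."""
    if rivalrous and excludable:
        return "private good"   # e.g., an apple
    if rivalrous and not excludable:
        return "common good"    # e.g., fish in the ocean
    if not rivalrous and excludable:
        return "club good"      # e.g., a cinema showing
    return "public good"        # e.g., broadcast television

assert classify(True, True) == "private good"
assert classify(False, False) == "public good"
```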

Media products like a cinema showing or a television program have different characteristics. They are not consumed by an individual owner, but it can be difficult to restrict non-paying viewers from enjoying them. Cinemas can project one movie to large groups because it is not diminished by any one viewer, although they need walls and security to keep non-payers out.

TV and radio began by broadcasting a signal out to a large population. Anyone with a receiver could enjoy the broadcast. Cable distribution and encryption techniques allowed more channels and the ability to monetize more households. These variations raise a number of questions about the ownership and consumption of different types of products and their economic analysis.

Over-the-top (OTT) television services have introduced new models. OTT doesn't broadcast but requires a subscription. Programming is not subtracted, and the platforms provide some exclusionary techniques to keep unauthorized viewers out. It is still possible to share passwords among viewers, and most OTT services have warned consumers that they will charge more if passwords are shared outside the "household," a particular concern for college students leaving home for campus.

So, the characteristics of goods and services raise many questions about how society should organize itself to offer these different types of economic products. One issue is government regulation. Media products and services have traditionally required a fair amount of government regulation and sometimes government ownership of key resources. This is primarily because the main economic driver was the electromagnetic spectrum, which governments claimed for the public good. Should the government claim control over sources of water too? Some goods, like fish, are mainly harvested from resources like lakes, rivers, and oceans that prosper if they are protected and access is restricted to prevent overuse or pollution.

This section looks at different types of goods in more detail.

Private Goods

The standard category for economic goods is private goods. Private goods are rivalrous and excludable. For example, a person eating an apple consumes that particular fruit, which is not available for rivals to eat. An apple can be cut up and shared, but it is ultimately "subtracted" from the economy. Having lived in orchard country, I know you can enter and steal an apricot or apple, but because the fruit is bulky, you are not likely to take much.

Economists like to use the term households, partially because many products, such as a refrigerator or a car, are shared among a small group. Other examples of private goods include food items like ice cream, clothing, and durable goods like a television.

Common Goods

Common goods are rivalrous but non-excludable, which means they can be subtracted from the economy, but it may be difficult to exclude others from them. Public libraries loan out books, making them unavailable to others. Table space and comfortable chairs at libraries can also be occupied, although it is difficult to exclude people from them.

Fishing results in catches that are consumed as sashimi or other fish fillets. But the openness of lakes, rivers, and oceans makes it challenging to exclude people from fishing them. Similarly, groundwater can be drilled and piped to the surface, but it isn’t easy to keep others from consuming water from the same source.

Oil, then, is a common good. In the US, if you own the property rights to the land where you can drill, you can claim ownership of all you pump. Most other countries have nationalized their oil production and cut deals with major drilling and distribution companies to extract, refine, and sell the oil. Russia privatized its oil industry after the collapse of the communist USSR but has re-nationalized much of it under Rosneft, a former state enterprise that is now a publicly traded monopoly.

Oil retrieved from the ground and used in an automobile is rivalrous, of course. An internal combustion engine explodes the hydrocarbons to push a piston that turns an axle and spins the wheels. The gas or petrol is consumed in the production of the energy. However, when the energy is released, by-products like carbon monoxide and carbon dioxide enter the atmosphere. This imposes a cost on others, and it is called an externality.

Club Goods

Club goods are non-rivalrous and excludable. In other words, they are not used up by consumption, but it is possible to exclude consumers who do not pay. A movie theater can exclude people from attending the movie, but the film is not consumed by the audience. It is not subtracted from the economy. The audience doesn't compete for the cinematic experience; it shares the experience. That is why club goods are often called "collective goods." These goods are usually made artificially scarce to help produce revenue.

Software is cheaply reproduced and not consumed by a user. However, the history of this product is fraught with the challenges of making it excludable. IBM did not try to monetize software and focused on selling large mainframes and "support" that included the software. But Micro-Soft (its original spelling) made excludability a major concern and developed several systems to protect software use from non-licensees.

Microsoft only recently moved to a more "freemium" model with Windows 10. Freemium became particularly attractive with the digital economy and the proliferation of apps. A limited version of an app could be offered for free to get consumers to try it. If they like it enough, they can pay for the full application. This strategy takes advantage of network effects and makes sure the app gets out to the maximum number of people.

Public Goods

The other category to consider is those products that are not subtracted from the economy when consumed and whose characteristics make it difficult to exclude nonpaying customers. Broadcast television shows and radio programs transmitted by electromagnetic waves were early examples. Carrying media content to whoever could receive the signals, television broadcasts were not consumed by any one receiver. It was also difficult to exclude anyone who had the right equipment from enjoying the programs.

The technological exploitation of radio waves presented challenges for monetization and profitability. While some countries like Britain and New Zealand charged a fee on a device for a "licence" to receive content, advertising became an important source of income for broadcasters. Advertising had been pioneered by broadsheets and newspapers as well as billboards and other types of public displays. As radio receivers became popular during the 1920s, it became feasible to advertise over the airwaves. In 1922, WEAF, a New York-based radio station, charged US$50 for a ten-minute "toll broadcast" about the merits of a Jackson Heights apartment complex. These later became known as commercials and were adopted by television as well.

Cable television delivered programming that was originally not rivalrous but developed techniques to exclude non-paying viewers. Operators broadcast content to paying subscribers via radio frequency (RF) signals transmitted through coaxial cables, or light pulses emitted within fiber-optic cables. Set-top boxes were needed to descramble and decode cable channels and allow subscribers to view a single channel.

Unfortunately, this has led to monopoly privileges and has resulted in many viewers "cutting the cord" to cable TV. Cable TV is being challenged by streaming services that easily exclude non-paying members. Or do they? Netflix is trying to limit access by people sharing their plans with others.

Generally recognized public goods also include firework displays, flood defenses, sanitation collection infrastructure, sewage treatment plants, national defense, radio frequencies, Global Positioning Satellites (GPS) and crime control.

Public goods are susceptible to the "free-rider" phenomenon. A person living in a zone that floods regularly but who doesn't pay the taxes going into levees or other protections gets a "free ride." Perhaps a better example is national defense, which protects everyone in a territory whether or not they pay taxes.

Anti-Rival Goods

What happens when a product actually becomes more valuable when it is used? Is it possible for an economic good to be not only not subtracted but to increase in value when it is used, and to increase further as more people use it? A text-messaging application has no value by itself, but as more people join the service, it becomes more valuable. This is an established principle called network effects.
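Network effects are often illustrated with Metcalfe's law, a rough heuristic that values a network in proportion to the number of possible pairwise connections among its users:

```python
def metcalfe_value(users: int) -> int:
    """Possible pairwise connections among n users: n(n-1)/2."""
    return users * (users - 1) // 2

# Value grows much faster than the user count itself.
assert metcalfe_value(2) == 1
assert metcalfe_value(10) == 45
assert metcalfe_value(100) == 4950
```

A messaging service with one user has zero connections and zero value; each additional user adds value for everyone already on the network.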

Merit Goods

Merit goods are goods and services that society deems valuable but that the market system does not readily supply. Healthcare, education, child care, public libraries, public spaces, and school meals are examples. Merit goods can generate positive externalities that circulate as positive effects on society. Knowledge creates positive externalities; it spills over to some who were not involved in its creation or consumption.

These are not necessarily all public goods. While medical knowledge is becoming more readily available, a surgeon can operate on only one person's heart at a time, and her resources are not available to others. Hospital beds are limited, and medical drugs are subtracted when used. An emerging issue is medical knowledge produced through data science techniques. The notion of public goods is increasingly being used to guide policy development around clinical data.

Economic Goods and Social Policy

Market theory is based on a standard model where products are brought to market and are bought and consumed by an individual buyer, whether a person or an organization. But as mentioned in a previous post, some products are misbehaving economic goods. A variety of goods do not fit this economic model and as a result present a number of problems for economic theory, technological innovation, and public policy.

Much political debate about economic issues quickly divides between free-market philosophies that champion enterprise and market solutions on the one hand, and economic management by government on the other. The former may be best for private goods, but other goods and services may require alternative solutions to balance production and social concerns.

Much of US technological development was ushered in during the New Deal, which recognized the role of public utilities in offering goods like electricity, telephones, and clean water for sanitation and drinking. The move to deregulation that started in the 1970s quickly became more ideological than practical, except for telecommunications. Digital technologies emerged within market philosophies, but practical questions have challenged pure free-enterprise orthodoxy.


Modern economics is largely based on the idea that goods are primarily private goods. But as we move toward a society based more on sustainable and digital processes, we need to examine the characteristics of the goods and services we value, and to design systems of production and distribution around those characteristics.

Citation APA (7th Edition)

Pennings, A.J. (2023, May 25). Public and Private Goods: Social and Policy Implications.


Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught digital economics and comparative political economy at New York University from 2002 to 2012. He has also spent time as an intern and then a fellow working with a team of development economists at the East-West Center in Honolulu, Hawaii.
