Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

How Do Artificial Intelligence and Big Data Use APIs and Web Scraping to Collect Data? Implications for Net Neutrality

Posted on | January 19, 2024 | No Comments

One of the books I use in a course called EST 202 – Introduction to Science, Technology, and Society Studies is Michio Kaku’s Physics of the Future (2011). Despite its age, it’s a great starting point for teaching topics like Computers, Robotics, Nanotechnology, Space Travel, and Energy. It also has a chapter on Artificial Intelligence (AI) that I use with the caveat that it doesn’t cover a major change in AI that was emerging around the time it was published: the importance of data networking for AI data collection and learning. High-speed broadband networks have become fundamental to new AI and also “Big Data” because the success of these services now depends on their ability to scour the Internet and other networked data sources to find useful information.[1]

[Image: web scraping]

This post looks at how collecting information from various structured and “unstructured” data sources has become an essential process for procuring information resources for AI and Big Data.[2] In particular, it looks at two strategies used to search networked sources for relevant data. It then discusses some ramifications for net neutrality, a regulatory stance that seeks to prevent Internet Service Providers (ISPs) from discriminating against data content providers, including generative AI.

Broadband communications enable the transfer of data between different applications on sensors, smart devices and cloud locations, contributing to the overall effectiveness of AI models and Big Data analytics. AI encompasses various technologies and approaches, including machine learning (ML), neural networks, natural language processing, expert systems, and robotics.[See 3] Big Data technologies include tools and frameworks designed to process, store, and analyze large datasets.

Technologies like MapReduce and Hadoop at Google and Yahoo! created the programming framework that led to applications like Apache Spark, NoSQL databases, and various data warehousing solutions. These are general-purpose cluster computing systems, with programs written in Scala, Java, and Python, that make parallel jobs easy to write and manage. These processing engines direct workloads, perform queries, conduct analyses, and support computation graphs at an entirely new scale. They work across a wide range of low-cost servers, collecting information from mobile devices, PCs, and Internet of Things (IoT) devices such as autos, cash registers, and building environmental systems. Information from these data sources becomes fodder for analysis and innovative value creation.
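To make the cluster-computing idea concrete, here is a minimal PySpark sketch of a parallel aggregation job; the file path and the device_type field are hypothetical placeholders, not references to any specific dataset.

```python
# Minimal PySpark sketch (hypothetical path and schema) showing how a cluster
# engine expresses a parallel aggregation over many low-cost servers.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("DeviceEventCounts").getOrCreate()

# Read semi-structured logs gathered from mobile devices, PCs, and IoT sources.
events = spark.read.json("hdfs:///data/device_events/*.json")  # hypothetical path

# Count events per device type; Spark distributes the work across the cluster.
counts = (events
          .groupBy("device_type")
          .agg(F.count("*").alias("event_count"))
          .orderBy(F.desc("event_count")))

counts.show()
spark.stop()
```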

APIs (Application Programming Interfaces) and web scraping collect information from data networks, including the Internet. APIs are instrumental in integrating data into AI applications and machine learning models. APIs are also crucial in facilitating Big Data collection by providing a relatively standardized way for different software applications to communicate and exchange data. Web scraping matters to both AI and Big Data because extracting information from HTML- and CSS-coded websites yields large volumes of usable data.

What are the Differences between Big Data and AI?

While AI and Big Data are distinct concepts, they often intersect as AI systems frequently rely on large datasets for training and learning. Big Data technologies play a crucial role in managing the data requirements of AI applications, providing the necessary infrastructure for processing and analyzing vast amounts of information needed to build and continually train AI models.

The purpose of AI is to enable digital machines to perform tasks that would typically require human-like intelligence. This includes areas such as natural language processing, computer vision, machine learning, and robotics. AI systems can be designed to perform specific tasks, learn from experience, and adapt to changing situations.

AI applications are diverse and can be found in areas such as virtual assistants, image and speech recognition, recommendation engines, autonomous vehicles, and healthcare diagnostics. They strive to tackle tasks such as problem-solving, learning, reasoning, perception, and language understanding.

We are far from attributing human intelligence and consciousness to AI, but data networking appears to be key to ML. Kaku (2011) suggested three traits that would be a good start for theorizing consciousness in AI:

1. sensing and recognizing the environment
2. self-awareness
3. planning for the future by setting goals and plans, that is, simulating the future and plotting strategy

Accepting these characteristics, it would be useful to examine the role of online data collection in each of them, and in all three collectively, in the context of AI.

The purpose of Big Data is to handle and analyze massive volumes of data to derive valuable insights and identify patterns or correlations within the data. It draws on the substantial amount of data that organizations generate, process, and store. Big Data technologies enable organizations to manage and extract value from the datasets to produce meaningful insights, identify patterns, and understand trends that can inform decision-making processes.

Big Data applications span various industries and use cases, including business analytics, financial analysis, healthcare informatics, scientific research, and predictive modeling. Big Data focuses on the efficient handling of large volumes of data that involves data storage, retrieval, processing, and analysis.

Why AI and Big Data Use APIs for Data Collection

An API is a set of rules and tools that allows developers to access the functionality or data of a web service. APIs facilitate Big Data collection and AI machine learning models by providing a communication interface between applications and data networks. APIs allow applications to interact with each other, access external services, and integrate seamlessly into broader systems. (Image from [4])
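As a simple illustration, the following Python sketch shows how an application might call a web service API over HTTP to pull records into a dataset; the endpoint, parameters, and API key are hypothetical.

```python
# Minimal sketch of calling a web service API with Python's requests library.
# The endpoint, query parameters, and API key below are hypothetical.
import requests

API_URL = "https://api.example.com/v1/posts"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                      # hypothetical credential

response = requests.get(
    API_URL,
    params={"topic": "net neutrality", "limit": 50},
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()        # fail loudly on HTTP errors
records = response.json()          # structured JSON, ready to add to a dataset
print(f"Collected {len(records)} records")
```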

For example, APIs provided by cloud platforms, such as Google Cloud AI, Microsoft Azure Cognitive Services, and Amazon AI, allow developers to access pre-trained AI models for image recognition, natural language processing, and speech recognition. APIs provided by social media and video platforms also enable AI applications to access real-time streams, including posts, comments, and user interactions.

Many online platforms, including social media, e-commerce, and financial services, offer APIs that enable developers to use machine learning capabilities without managing the underlying infrastructure. Services like Amazon SageMaker, Google Cloud AI, and Azure Machine Learning provide APIs for training, deploying, and running machine learning models.

Big Data applications use APIs to collect and funnel large volumes of data into comprehensive datasets. Many governments and organizations release datasets publicly as part of open data initiatives, and these can feed models that produce classifications or make predictions about human behavior. Big Data applications can access these datasets over the Internet to support tasks like urban planning, healthcare analytics, and environmental monitoring.

Likewise, APIs are instrumental in integrating machine learning (ML) models into AI applications. APIs and web scraping can be employed to gather relevant and diverse sets of data from the Internet. For example, web scraping can collect images from various sources for image recognition tasks, which are then processed with Convolutional Neural Networks (CNNs), a type of deep learning architecture designed specifically for processing pixel data. CNNs consist of layers with learnable filters (kernels) that detect image patterns like edges, textures, and more complex features. CNNs automatically learn and extract hierarchical features from images that help to identify and recognize objects.
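Below is a minimal PyTorch sketch of such a CNN, shown only to illustrate the layered, learnable-filter structure described above; the layer sizes, input resolution, and ten-class output are illustrative assumptions, not a production architecture.

```python
# Minimal PyTorch CNN sketch; sizes and the 10-class output are assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learnable filters pick up edges/textures
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), # deeper layers learn more complex features
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input images

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = SmallCNN()
dummy_batch = torch.randn(4, 3, 224, 224)  # e.g., four scraped images resized to 224x224
print(model(dummy_batch).shape)            # torch.Size([4, 10])
```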

Many AI and ML platforms provide APIs that allow developers to access pre-trained AI models they can use without extensive training. These are deep learning models trained on large datasets that find patterns or make predictions to accomplish specific tasks. They can be used as is or further fine-tuned to fit an application’s particular needs. These models, often made by Google, Meta, Microsoft, and NVIDIA, can perform specific tasks such as creative (art, games, media) workflows, cybersecurity, image recognition, natural language processing, and sentiment analysis.

APIs enable integrating data from diverse sources, allowing Big Data applications to pull data from multiple locations and create a comprehensive dataset. APIs are used for real-time data streaming from sources such as social media platforms, financial markets, or IoT devices. Real-time APIs enable continuous data ingestion, enabling Big Data systems to analyze and respond to events as they happen.

Big Data systems often interact with databases to collect structured data. Many databases use APIs to enable programmatic access for querying and retrieving data. This practice is common in scenarios where relational databases or NoSQL databases are part of the data collection process.
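For example, a Big Data pipeline might query a database programmatically along these lines; the sketch uses Python’s built-in sqlite3 module as a stand-in for a production database API, and the table and columns are hypothetical.

```python
# Minimal sketch of programmatic database access; sqlite3 stands in for a
# production database API, and the table/columns are hypothetical.
import sqlite3

conn = sqlite3.connect("sales.db")   # hypothetical database file
cur = conn.cursor()

# Structured query: pull recent transactions for downstream analytics.
cur.execute(
    "SELECT store_id, SUM(amount) FROM transactions "
    "WHERE sale_date >= ? GROUP BY store_id",
    ("2024-01-01",),
)
for store_id, total in cur.fetchall():
    print(store_id, total)

conn.close()
```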

Cloud providers offer APIs to access their services and resources. Big Data applications can leverage APIs to collect and process data in cloud-based storage and analytics services. This capacity facilitates scalability and flexibility in handling large datasets.

The Internet of Things (IoT) relies on APIs to enable data collection and integration between multiple devices, sensors, and applications. IoT devices collectively generate vast amounts of data that APIs collect and manage. For example, MQTT is a lightweight messaging protocol designed for low-bandwidth, high-latency, or unreliable networks and is commonly used for real-time communication in IoT environments. RESTful APIs, meanwhile, are used to build scalable, stateless web services and to communicate between IoT devices and backend cloud servers. IoT applications requiring data retrieval, updates, and management commonly use APIs to provide a standardized way for AI and Big Data applications to collect data from connected devices, such as in home automation and smart city projects.
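A minimal subscriber sketch using the paho-mqtt library (classic 1.x callback style) gives a sense of how IoT readings flow into a collection pipeline; the broker address and topic are hypothetical placeholders.

```python
# Minimal MQTT subscriber sketch (paho-mqtt 1.x callback style); the broker
# address and topic are hypothetical placeholders.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code", rc)
    client.subscribe("buildings/+/temperature")  # '+' wildcard: any building ID

def on_message(client, userdata, msg):
    # Each message is a small sensor reading that feeds a Big Data pipeline.
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883, keepalive=60)  # hypothetical broker
client.loop_forever()
```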

Some companies and services that specialize in aggregating data from various sources offer APIs for accessing their aggregated datasets. Big Data applications can use these APIs to access pre-processed and curated data relevant to their analysis, such as aggregated banking data.

AI both guides and uses ETL (Extract, Transform, Load) data aggregation processes. These processes often use APIs in the extraction phase but also for data transformation and enrichment. For example, data collected from one source may be enriched with additional information from another source using their respective APIs. ETL cleans and organizes raw data and prepares it for data analytics and machine learning in data warehouse environments.
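A toy ETL sketch in Python with pandas illustrates the extract-transform-load flow described here; the file names, the enrichment API, and the column names are all hypothetical.

```python
# Toy ETL sketch with pandas; file names, the enrichment API, and columns are
# hypothetical.
import pandas as pd
import requests

# Extract: raw records from a CSV export and a JSON API.
orders = pd.read_csv("raw_orders.csv")  # hypothetical file
fx = pd.DataFrame(
    requests.get("https://api.example.com/fx", timeout=10).json()  # hypothetical API
)

# Transform: clean, join, and enrich.
orders["order_date"] = pd.to_datetime(orders["order_date"])
orders = orders.dropna(subset=["customer_id", "amount"])
enriched = orders.merge(fx, on="currency", how="left")
enriched["amount_usd"] = enriched["amount"] * enriched["usd_rate"]

# Load: write the analysis-ready table to the warehouse staging area.
enriched.to_parquet("warehouse/orders_enriched.parquet", index=False)
```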

APIs often include mechanisms for authentication and authorization, ensuring that only authorized users or applications can access specific data. This is crucial for maintaining data security and privacy while collecting information for Big Data analysis.

In summary, APIs provide a standardized and efficient means for Big Data applications to collect data from many sources, ranging from online platforms and databases to IoT devices and cloud services. They enable interoperability between different systems and contribute to the integration of diverse datasets for analysis and decision-making.

How AI and Big Data Use Web Scraping

AI and machine learning (ML) can utilize web scraping as a method for collecting data from websites. They use web scraping for: training datasets and machine learning, text and content analysis, market research, resume parsing, price monitoring, social media monitoring and data aggregation, image and video collection, financial data extraction, healthcare data acquisition, and weather data retrieval.

Natural Language Processing (NLP) models, a subset of AI and ML, benefit from gathering text data for training. Web scraping is used to extract textual content from websites, enabling the creation of datasets for tasks such as sentiment analysis, named entity recognition, or language modeling.
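A minimal scraping sketch with requests and BeautifulSoup shows how such a text corpus might be gathered; the URL and CSS selector are hypothetical, and a real scraper should respect robots.txt and site terms of service.

```python
# Minimal web-scraping sketch with requests and BeautifulSoup; the URL and the
# CSS selector are hypothetical. Real scrapers should honor robots.txt and
# site terms of service.
import requests
from bs4 import BeautifulSoup

url = "https://news.example.com/technology"   # hypothetical page
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Collect paragraph text as raw material for an NLP training corpus.
corpus = [p.get_text(strip=True) for p in soup.select("article p")]
print(f"Scraped {len(corpus)} paragraphs")
```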

AI applications involved in market analysis or competitor tracking use web scraping to collect data from competitors’ websites. This data can be analyzed to gain insights into market trends, pricing strategies, and product features. AI applications use web scraping to monitor product prices, availability, and customer reviews from e-commerce websites. This data can inform marketing strategies and enhance recommendation algorithms.

AI-powered recruitment and job matching systems utilize web scraping to extract job postings from various websites. This acquired dataset provides a view of the job market, salary ranges, and in-demand skills. This information can be used to make informed decisions about talent acquisition, workforce planning, and skill development. Additionally, web scraping can be employed to parse resumes and extract relevant information for matching candidates with job opportunities.

AI models that analyze social media trends, sentiments, or user behavior can utilize web scraping to collect data from platforms like X, Facebook, or Instagram. This data is valuable for training models in social media analytics.

Web scraping can gather relevant and diverse datasets of imagery from the web. For image recognition tasks, web scraping can collect graphics and pictures from various sources. AI applications, especially those dealing with computer vision tasks, often use web scraping to collect image and video datasets. This is common in tasks such as object detection, image classification, and facial recognition. Full self-driving (FSD) systems, for example, draw on camera imagery to label potential dangers and obstacles.

AI and ML models in finance leverage web scraping to collect financial data, news, or market updates from financial websites. This data can be used for predicting financial market trends or making investment decisions.

Some AI applications in healthcare use web scraping to collect medical literature, patient reviews, and information about healthcare providers. This data can be utilized for building models related to healthcare analytics or patient sentiment analysis.

AI models predicting weather patterns may use web scraping to collect real-time weather data from various sources, including weather websites. This data is crucial for training accurate and up-to-date weather prediction models. Scraped feeds are also economically efficient, allowing many news sources to gather weather information from all over the planet without having to collect it themselves.

Web scraping should be conducted responsibly and ethically, respecting the terms of service of websites and relevant legal regulations. Additionally, websites may have varying degrees of resistance to web scraping, and proper measures should be taken to ensure compliance and minimize any negative impact on the targeted websites.

Implications for Net Neutrality

I’m currently reviewing new technologies and devices to consider their implications for broadband policy. These include connected cars as part of my Automatrix series, Virtual Private Networks (VPNs), and Deep Packet Inspection (DPI). I intend to readdress broadband policy issues in light of the FCC’s new emphasis on net neutrality and take a more critical look at content providers. These platforms and websites collect huge amounts of data on human behavior to influence economic and political decisions.[5] It is too early to draw substantive conclusions about the amount of data traffic that AI will produce. Still, I wanted to explain the predominant collection processes and raise some issues.

Net neutrality principles have typically advocated equal treatment of data traffic and regulations restricting ISP discrimination against content providers operating at the Internet’s edge. The Internet and its World Wide Web (WWW) were designed to prioritize capability at the “host” level – the clouds, devices, and platforms at the network’s edges. AI also operates at the edges. Following historical and legal precedents that reach back to the telegraph and even railroads, the regulatory regime for telecommunications has been codified for the carrier to move information commodities and content with transparency and non-interference.

ISPs have pushed back in the computer age, looking to use the increasing intelligence in their telecommunications networks to extract additional value from informational exchanges. They argue that the capital-intensive nature of their service provision requires them to invest in the newest technologies. They further contend that their investments can also support value-added services that benefit their customers, such as IPTV and search engines. Content competitors have complained that this gives the ISPs a competitive and potentially dangerous advantage.

Although it’s early in the era of AI and Big Data collection, we can expect that they will have a major impact on network resources. Congestion issues are a major concern for ISPs, which risk losing customer confidence if traffic slows, videos buffer, and games lag. Will data collection seriously affect broadband usage? APIs and large-scale web scraping, particularly when conducted by big entities, might disproportionately tax network capacity. API-based data collection and web scraping practices should be mindful of their impact on the broader networked world.

Notes

[1] Pennings, A.J. (2013, Feb 15). Working Big Data – Hadoop and the Transformation of Data Processing. apennings.com https://apennings.com/data-analytics-and-meaning/working-big-data-hadoop-and-the-transformation-of-data-processing/ and Pennings, A.J. (2011, Dec 11). The New Frontier of Big Data. apennings.com https://apennings.com/technologies-of-meaning/the-new-frontier-of-big-data/ Image of web scraping from https://prowebscraping.com/web-scraping/ offering related services.

[2] Data retrieval has historically drawn from the records of structured databases. IBM has made the distinction between structured and unstructured data where structured data is sourced from “GPS sensors, online forms, network logs, web server logs, OLTP systems, etc., whereas unstructured data sources include email messages, word-processing documents, PDF files, etc.” IBM’s Watson for example, was heavily dependent on the structured information model in its early days. See Pennings, A.J. (2014, Nov 11). IBM’s Watson AI Targets Healthcare. apennings.com https://apennings.com/data-analytics-and-meaning/ibms-watson-ai-targets-healthcare/

[3] AI encompasses various technologies and approaches, including machine learning, neural networks, natural language processing, expert systems, and robotics. Machine learning (ML), a subset of AI, involves algorithms that allow systems to learn from data. Neural networks teach computers to process data with deep learning that uses interconnected nodes or neurons in a layered structure that was inspired by the human brain. Natural language processing is machine learning technology that teaches computers to comprehend, interpret, and manipulate human language. Expert systems use AI to simulate the expertise, judgment, and experience of a human or an organization in a particular field. Robotics is the field of creating intelligent machines that can assist humans in a variety of ways.

[4] Heus, P. (2023, Jun 23). AI, APIs, metadata, and data: the digital knowledge and machine intelligence ecosystem. https://blog.postman.com/author/pascal-heus/ https://blog.postman.com/ai-apis-metadata-data-digital-knowledge-and-machine-intelligence-ecosystem/

[5] Large-scale web scraping often involves the extraction of personal data from websites, and this can raise privacy concerns. If not done responsibly, scraping personal or sensitive information might violate privacy regulations. Net neutrality discussions often extend to privacy considerations, emphasizing the need for responsible and ethical data practices. ISPs might be tempted to intervene in web scraping activities by implementing measures such as blocking or throttling, especially if the scraping activity is seen as detrimental to their networks or if it violates terms of service. Such interventions could raise questions about net neutrality, as they involve discriminatory actions against specific types of traffic.

Note: ChatGPT was used for parts of this post. Multiple prompts were used and parsed.

Citation APA (7th Edition)

Pennings, A.J. (2024, Jan 19). How Do Artificial Intelligence and Big Data Use APIs and Web Scraping to Collect Data? Implications for Net Neutrality. apennings.com https://apennings.com/technologies-of-meaning/how-do-artificial-intelligence-and-big-data-use-apis-and-web-scraping-to-collect-data-implications-for-net-neutrality/

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches broadband and cloud policy for sustainable development. From 2002 to 2012, he was on the faculty of New York University, teaching comparative political economy and digital economics. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.

Networking Connected Vehicles in the Automatrix

Posted on | January 15, 2024 | No Comments

Networking of connected vehicles draws on a combination of public-switched wireless communications, GPS and other satellites, and Vehicular Ad hoc Networks (VANETs) that directly connect autos with each other and with roadside infrastructure.[1] Connecting to 4G LTE, 5G, and even 3G and 2.5G in some cases provides access to the wider world of web devices and resources. Satellites provide geo-location services, emergency services, and broadcast entertainment. VANETs enable vehicles to communicate with each other and with roadside infrastructure to improve road safety and traffic efficiency and to provide various applications and services.

This image shows an early version of a connected Automatrix infrastructure, including a VANET.[2]

This post outlines the major ways connected cars and other vehicles use broadband data communications. It builds on some earlier work I started on the idea of the Automatrix, starting with “Google: Monetizing the Automatrix” and “Google You Can Drive My Car.” It is also written in anticipation of a continued discussion on net neutrality and connected vehicles, although that is beyond the scope of this post.

Public-Switched Wireless Communications

Wireless communications include radio connectivity, cellular network architecture, and a “home” orientation. This infrastructure differs significantly from the fixed broadband Internet and World Wide Web model designed around stationary “edge” devices with single Internet Protocol (IP) addresses. Mobile devices have been able to utilize the wireless cellular topology for unprecedented connectivity by replacing the IP address with a new number, the IMSI, that identifies the device and maintains a link to a home network, usually a paid service plan with a cellular provider, e.g., Verizon, Orange, Vodafone.

The digital signal transmission codes have changed over time, allowing for better signal quality, reduced interference, and improved capacity for handling voice and data services. These have included Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Code Division Multiple Access (CDMA), which support both voice and data services. GSM was a widely adopted standard for public-switched wireless communications, but it has been largely replaced by CDMA and Long-Term Evolution (LTE) fourth-generation (4G) networks and by more energy-hungry, shorter-range fifth-generation (5G) networks. With LTE, traditional voice calls became digital, and users could access a variety of data services, including text messaging, mobile internet, and multimedia content based on Internet Protocols (IP).

The public-switched wireless network divides a geographic coverage area into “cells” where each spatial division is served by a base station or cell tower that manages the electromagnetic spectrum transmissions and supports mobility as users move between cells. As a mobile device transitions from one cell to another, a “handoff” occurs that ensures uninterrupted connectivity as users move across different cells. Roaming agreements between different carriers enable users to maintain connectivity even when outside their home network coverage area. Digital switching systems are employed in the core network infrastructure to handle call routing, signaling, and management.

A key concept in the wireless public network is the notion of “home” with mobile devices typically using SIM cards with an international mobile subscriber identity (IMSI) number to authenticate and identify users on the network. SIM cards store subscriber information, including user credentials and network preferences.

Wireless communications incorporate security measures to protect user privacy and data. Encryption and authentication mechanisms help secure communication over the wireless networks.

Satellites

Satellites play a crucial role in enhancing the capabilities of connected cars by providing various services and functionalities. They extend connectivity to areas with limited or no terrestrial network coverage, allowing access for connected cars traveling through remote or rural locations where traditional cellular coverage may be sparse. GPS satellites provide accurate location information, enabling navigation systems in cars to determine the vehicle’s position, calculate routes, and provide turn-by-turn directions.

Satellites also support a range of location-based services providing real-time traffic information, points of interest, and location-based notifications, enhancing the overall navigation experience. Satellite connectivity facilitates remote diagnostics and maintenance monitoring for connected vehicles. Satellites have provided remote monitoring and management of vehicle fleets. Fleet operators can track vehicle locations, monitor driving behavior, manage fuel efficiency, and schedule maintenance using satellite-based telematics solutions.

Satellites contribute to enhanced safety features in connected cars by enabling automatic crash notification systems. In the event of a collision, the vehicle can send an automatic distress signal with its location to emergency services, facilitating a quicker response. In the case of theft or emergency, satellite communication can be used to remotely disable the vehicle, track its location, or provide assistance to drivers.

Satellites also play a role in delivering over-the-air (OTA) updates to connected cars, allowing manufacturers to use satellite communication to send software updates, firmware upgrades, and map updates directly to the vehicles, ensuring they remain up-to-date with the latest features and improvements. They can also remotely assess vehicle health, identify potential issues, and schedule maintenance, reducing the need for physical visits to service centers.

Lastly, satellites support the delivery of entertainment and infotainment services to connected cars. Satellite radio services, for example, provide a wide range of channels with music, news, and other content, accessible to drivers and passengers in areas with limited terrestrial radio coverage.

Satellites can contribute to Vehicle-to-Everything (V2X) communication by providing a reliable and wide-reaching communication infrastructure. V2X communication allows connected cars to exchange information with other vehicles, infrastructure (such as traffic signals), and even pedestrians, enhancing safety and traffic efficiency.

The integration of satellite technology enhances the overall connectivity, safety, and functionality of connected cars, contributing to a more advanced and intelligent Automatrix.

Vehicular Ad hoc Networks (VANETs)

VANETs play a significant role in enhancing communication and connectivity among vehicles and with roadside infrastructure. VANETs have no base stations, and devices can only transmit to other devices in close proximity, such as other cars, emergency vehicles (ambulances, police, etc.), and roadside devices.

Here are some key characteristics of vehicular networks:

– A dynamic and rapidly changing network topology due to the constant movement of vehicles. Nodes (vehicles) enter and leave the network frequently, leading to a highly active environment.
– Direct communication between vehicles, allowing them to share information such as speed, position, and other relevant data. V2V communication plays a crucial role in enhancing road safety and traffic efficiency.
– Interactions between vehicles and roadside infrastructure, such as traffic lights, road signs, and sensors, enable vehicles to receive real-time information about traffic conditions and other relevant data.
– In the absence of a fixed infrastructure for communication, vehicles act as both nodes and routers, forming an ad hoc network where communication links are established based on proximity.
– Broadcast mode disseminates information about traffic warnings, road conditions, and emergency alerts to nearby vehicles.
– Low-latency communication supports real-time applications like collision avoidance systems and emergency alerts. Timely information exchange is crucial for the effectiveness of these applications.
– Security and privacy techniques for authentication, confidentiality, and data integrity.
– Connected vehicles support various traffic safety applications, including collision and lane-switching warnings, as well as collaborative cruise control. These applications aim to enhance overall road safety.
– Vehicular communication is influenced by signal fading and attenuation, especially in urban environments with obstacles. These factors need to be overcome for reliable communication.[3]

VANETs play a crucial role in the development of Intelligent Transportation Systems (ITS) and contribute to creating safer, more efficient, and connected road networks. Due to the rapid mobility of vehicles, the Automatrix may experience frequent connectivity disruptions, so robust protocols and mechanisms are needed to cope with intermittent connectivity.

One of the reasons I liked the category of the Automatrix was that the attention was on the context, not exclusively the individual vehicles. When it comes to connected cars, the implications of net neutrality are significant and can influence various aspects of their functionality and services.[4]

Connected cars contribute to the broader concept of the Internet of Things (IoT) by creating an interconnected network where vehicles, infrastructure, and users communicate and collaborate to enhance safety, efficiency, and the overall driving experience. These connected vehicles leverage various sensors, embedded and internal Ethernet systems, and communication protocols to tether to devices via Bluetooth and to access mobile cellular and satellite services.

Notes

[1] Wahid I, Tanvir S, Ahmad M, Ullah F, AlGhamdi AS, Khan M, Alshamrani SS. (23 July 2022) Vehicular Ad Hoc Networks Routing Strategies for Intelligent Transportation System. Electronics 2022, 11(15), 2298; https://www.mdpi.com/2079-9292/11/15/2298
[2] Image from Hakim Badis, Abderrezak Rachedi, in Modeling and Simulation of Computer Networks and Systems, 2015 https://www.sciencedirect.com/topics/computer-science/vehicular-ad-hoc-network
[3] https://www.emqx.com/en/blog/connected-cars-and-automotive-connectivity-all-you-need-to-know
[4] https://edition.cnn.com/2023/09/26/tech/fcc-net-neutrality-internet-providers/index.html

Citation APA (7th Edition)

Pennings, A.J. (2024, Jan 15). Networking Connected Vehicles in the Automatrix. apennings.com https://apennings.com/telecom-policy/networking-in-the-automatrix/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Net Neutrality and the Use of Virtual Private Networks (VPNs)

Posted on | November 26, 2023 | No Comments

Net neutrality regulations strive to treat VPNs (Virtual Private Networks) neutrally, meaning that Internet Service Providers (ISPs) should not discriminate against or block the use of VPN services. As a regulatory principle, net neutrality advocates for equal treatment of all data on the Internet, regardless of the type of content, application, or service. A VPN is a technology that establishes an encrypted connection over the Internet, allowing users to access a private network remotely. This connection provides anonymity, privacy, and security, but it may also be used for sensitive activities, including bypassing geographical restrictions imposed by licensing agreements, ISPs, or regional authorities.

In this post, I investigate the complexities of VPNs and their implications for both content providers and ISPs. First, I describe how VPNs work. Then I explore how content service providers like video streaming platforms treat VPNs. Next, I do a similar analysis of different strategies used by ISPs when they want to hamper VPN use. Lastly, I return to the VPNs’ relationship to net neutrality.

VPNs are widely used for personal and business purposes to protect sensitive data and enable secure remote access to private networks. In many cases, ISPs and other carriers, as well as OTT (Over-the-Top) content providers, may attempt to block or restrict the use of Virtual Private Networks (VPNs). However, the extent to which VPNs are blocked can vary depending on the region, the specific ISP, and local regulations.

How does a VPN work?

A VPN works by creating a secure and encrypted connection between the user’s device and a VPN server. When a user contacts a VPN, they are authenticated, typically by entering a username and password, often automatically through VPN client software. Some VPNs may also use additional authentication methods, such as multi-factor authentication, for enhanced security. When the connection is authenticated, the communication between the user’s device (computer, smartphone, etc.) and the VPN server is encrypted for security.

The encrypted data moving between user and server is encapsulated with a process known as tunneling. This creates a private and protected pathway for data to travel between the user’s device and the VPN server. Various tunneling protocols, such as OpenVPN, L2TP/IPsec, or IKEv2/IPsec, are used to establish this secure connection. The VPN server then assigns the user’s device a new IP address, replacing the device’s original IP address. This is often a virtual IP address within a range managed by the VPN server.
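As a toy illustration of the encrypt-and-encapsulate idea, the sketch below uses symmetric encryption from Python’s cryptography library; real tunneling protocols such as OpenVPN or IPsec also handle key exchange, integrity checks, and routing, which this sketch does not attempt.

```python
# Toy illustration of the encrypt-and-encapsulate idea behind VPN tunneling,
# using symmetric encryption from the cryptography library. Real tunneling
# protocols (OpenVPN, IPsec, IKEv2) handle key exchange, integrity, and
# routing in ways this sketch does not attempt.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in a real VPN, keys come from a handshake
tunnel = Fernet(key)

inner_packet = b"GET /video HTTP/1.1\r\nHost: streaming.example.com\r\n\r\n"
encapsulated = tunnel.encrypt(inner_packet)  # what the ISP sees: opaque bytes

# ...the encapsulated bytes travel to the VPN server, which decrypts and forwards.
print(tunnel.decrypt(encapsulated) == inner_packet)  # True
```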

All Internet traffic from the user’s device is then routed through the VPN server. This means that websites, services, and online resources, such as a streaming service, perceive the user’s location as that of the VPN server rather than the user’s actual location. Users can access content that may be geo-restricted or censored in their physical location by connecting to a VPN server in a different geographic location. This allows them to appear as if they are accessing the Internet from the location of the VPN server.

Anti-VPN Technologies Used by Content Providers

VPNs become a net neutrality issue when they are targeted by either content providers or ISPs. Some content providers and streaming services may block access from known VPN IP addresses to enforce regional restrictions on their content. Streaming services negotiate licensing agreements with content providers to distribute content only in specific regions. Other concerns include copyright infringement by other content providers and the quality of service of traffic routed through multiple servers. Complicated data packet routes can cause latency or buffering issues, which degrade the streaming experience. Nevertheless, VPNs can circumvent this blocking by masking the user’s real IP address and making it appear as if they are connecting from a different location.

Content services employ various techniques to detect the use of VPNs and proxy servers. They maintain databases of IP addresses associated with VPNs and proxy servers and compare the user’s IP address against these databases to check for matches. If the detected IP address is on the list of known VPN servers, the streaming service may block access or display an error message.
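Conceptually, that IP-database check can be as simple as the following Python sketch; the address ranges are made-up documentation examples, not real VPN ranges.

```python
# Minimal sketch of checking a client IP against known VPN address ranges with
# Python's ipaddress module; the ranges below are made-up examples.
import ipaddress

KNOWN_VPN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # documentation-range placeholder
    ipaddress.ip_network("198.51.100.0/24"),  # documentation-range placeholder
]

def looks_like_vpn(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in KNOWN_VPN_RANGES)

print(looks_like_vpn("203.0.113.42"))  # True  -> block or show an error page
print(looks_like_vpn("192.0.2.10"))    # False -> allow
```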

Content providers such as video streaming services may also analyze user behavior to detect patterns indicative of VPN usage. For example, if a user rapidly connects from different geographical locations, it may raise suspicion and trigger additional checks to determine if a VPN is in use. VPN detection may also involve checking for DNS (Domain Name System) leaks that reveal DNS requests, or for vulnerabilities in WebRTC (Web Real-Time Communication) protocols, which provide real-time communication but can reveal client network details. These leaks can expose the user’s actual IP address, allowing the content services to identify VPN usage.

Streaming services may decide to block entire IP ranges associated with data centers or hosting providers commonly used by VPN services. This approach helps prevent access from a broad range of VPN users sharing similar IP addresses. Streaming services regularly use geolocation services to determine the physical location of an IP address. If the detected location does not match the expected geographical area based on the user’s account information, it may trigger suspicion of VPN use.

VPN connections often exhibit different speed characteristics compared to regular links. Streaming services may analyze the connection speed and behavior to identify patterns associated with VPN usage. Lastly, some streaming services may employ captcha challenges or additional verification steps when they detect suspicious activity, such as rapid and frequent connection attempts from different locations. This targeting can inconvenience users but serves to identify and block VPN usage.

How ISPs treat VPNs

Net neutrality principles call for ISPs to treat all data packets on the Internet equally. It can prohibit ISPs from discriminating against specific online services, applications, or providers, including the data packets generated by VPN services. This norm means that ISPs should not block or throttle VPN traffic just because it is VPN traffic. VPN providers, like any other online service, should be able to reach users without facing unfair restrictions.

Nevertheless, ISPs may employ various techniques to block or throttle VPN traffic. These measures are often implemented for network management, compliance with regional regulations, or enforcing content restrictions. Deep Packet Inspection (DPI) is a technology that allows ISPs to inspect the content of data packets passing through their networks. By analyzing the characteristics of the traffic, including protocol headers and content payload, DPI can identify patterns associated with VPN traffic. ISPs may use DPI to detect and block specific VPN protocols or to throttle VPN traffic. Some advanced filtering technologies can detect and block VPN traffic. However, this approach is more common in regions with strict Internet censorship.

ISPs can block or restrict traffic on specific ports commonly associated with VPN protocols. For example, they might block traffic on ports used by OpenVPN (e.g., TCP port 1194 or UDP port 1194) or other well-known VPN protocols. By blocking these ports, ISPs aim to prevent users from establishing VPN connections. ISPs may also maintain lists of IP addresses associated with known VPN servers and block traffic to and from these addresses. This method targets specific VPN servers or services rather than attempting to identify VPN traffic based on its characteristics.

Some VPN protocols obfuscate or disguise their traffic, making it more challenging for ISPs to detect and block them. This subterfuge includes techniques like adding a layer of encryption or using obfuscated protocols that resemble regular HTTPS traffic. ISPs may also analyze traffic patterns and behaviors to identify characteristics associated with VPN usage. For example, rapid and frequent connection attempts from different locations might trigger suspicion and lead to traffic restrictions. VPNs can circumvent this blocking by masking the user’s actual IP address and making it appear as if they are connecting from a different location.

DNS filtering blocks access to specific domain names associated with VPN services. This method aims to prevent users from resolving the domain names of VPN servers, making it more difficult for them to establish connections. ISPs may implement filtering at the application layer to identify and block VPN traffic based on the behavior and characteristics of specific VPN applications. Instead of outright blocking VPN traffic, some ISPs may employ bandwidth throttling to reduce the speed of VPN connections. This slowing can make VPN usage less practical or effective for users, especially when attempting to stream high-quality video or engage in other bandwidth-intensive activities.
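A minimal sketch of the DNS-filtering idea: refuse to resolve hostnames on a blocklist before any lookup happens. The blocked domains below are hypothetical.

```python
# Minimal sketch of DNS-based filtering: refuse to resolve domains on a
# blocklist before the lookup happens. The domains are hypothetical.
import socket

BLOCKED_DOMAINS = {"vpn-provider.example", "fastvpn.example"}

def resolve(hostname: str) -> str:
    if hostname in BLOCKED_DOMAINS or any(
        hostname.endswith("." + d) for d in BLOCKED_DOMAINS
    ):
        raise PermissionError(f"{hostname} is blocked by DNS filtering policy")
    return socket.gethostbyname(hostname)

print(resolve("example.com"))       # resolves normally
# resolve("vpn-provider.example")   # would raise PermissionError
```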

The effectiveness of these methods can vary, and users often find workarounds to bypass VPN restrictions. VPN providers may also respond by developing new techniques to evade detection. The cat-and-mouse game between VPN providers and ISPs is ongoing, with each side adapting its strategies to stay ahead. Users who encounter VPN restrictions may explore alternative VPN protocols, use obfuscation features, or consider other means to maintain privacy and access unrestricted Internet content.

Net neutrality aims to prevent anti-competitive practices by ISPs. While some telecom entities block VPNs for legitimate reasons, such as maintaining network integrity or complying with local regulations, their actions can also violate user privacy and restrict the free flow of information. If ISPs were to block or throttle VPN traffic selectively, it could impact competition by favoring certain online services over others. This interference could be particularly concerning if ISPs were to prioritize their own VPN services over those provided by third-party VPN providers. Advocates for net neutrality argue that it is crucial for maintaining a level playing field on the Internet, fostering competition, innovation, and the free flow of information.

However, the specific regulations and enforcement mechanisms related to net neutrality can differ, and debates on this topic continue in various jurisdictions. In some countries, governments or ISPs may implement restrictions on the use of VPNs as part of broader Internet censorship efforts. These restrictions can be aimed at controlling access to certain websites, services, or content deemed inappropriate or against local laws. While net neutrality principles provide a foundation for treating VPNs fairly, the actual implementation and regulatory landscape can vary by country. Some regions have specific regulations that address net neutrality, while others may not. Additionally, the status of net neutrality can change based on regulatory decisions and legislative developments.

Citation APA (7th Edition)

Pennings, A.J. (2023, Nov 25). Net Neutrality and the Use of Virtual Private Networks (VPNs). apennings.com https://apennings.com/telecom-policy/net-neutrality-and-the-use-of-virtual-private-networks-vpns/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.

Deep Packet Inspection of Internet Traffic and Net Neutrality

Posted on | November 4, 2023 | No Comments

With a 3-2 shift in the Federal Communications Commission (FCC) leaning towards restoring net neutrality, advocates are again arguing for the equal treatment of all data traffic by Internet service providers (ISPs). Net neutrality principles strive to prevent ISPs such as AT&T, Comcast Xfinity, Korea Telecom, Vodafone, etc., from engaging in practices that could stifle competition, limit consumer choice, or infringe on the free flow of information online. This post describes Deep Packet Inspection (DPI) and how it can influence the capability of ISPs and nations to potentially discriminate against certain network traffic.

Deep Packet Inspection (DPI) is a network technology used to inspect and analyze the contents of data packets running through the Internet. It is a critical component of many network security, monitoring, and optimization solutions.[1] However, DPI can be used in ways that violate net neutrality principles, such as by degrading or blocking specific types of content, devices, services, or applications. In such cases, DPI is directly at odds with net neutrality or the “Open Internet,” which encompasses a broader range of principles and values related to maintaining a free, accessible, and inclusive Internet environment for all users.

The importance of DPI in relation to net neutrality depends on how it is used and the specific context in which it is applied. It can be both important and controversial in the context of net neutrality. When ISPs employ DPI to discriminate against or favor certain types of traffic, it can undermine the open and neutral character of the Internet. This intrusion can lead to anti-competitive behavior and harm consumers’ access to a diverse and free Internet.

DPI can also be used for legitimate network management and security purposes. For instance, it can help identify and mitigate distributed denial-of-service (DDoS) attacks, detect malware, and manage network congestion. In these cases, DPI serves to protect the integrity and security of the network without violating net neutrality.

Deep Packet Inspection is used for examining the contents of data packets as they pass through a network. This involves prioritizing or limiting specific types of traffic to optimize network performance. Several technologies are essential for deep packet inspection to fulfill its various functions, including network management, security, application optimization, quality of service (QoS), and traffic shaping. Advanced DPI systems may incorporate machine learning and artificial intelligence (AI) algorithms to improve accuracy in identifying new or unknown applications and to detect evolving threats by analyzing network behavior over time.

DPI begins with the acquisition of data packets from network traffic. This can be achieved using packet capture technologies, such as network taps, port mirroring, or packet sniffers. These tools intercept and copy data packets for analysis. Once captured, the data packets are parsed to extract relevant information. This process involves breaking down the packets into their constituent parts, such as headers and payloads. DPI may perform content analysis to extract valuable information from packets, such as identifying files, images, video, or text within network traffic. Once packets are captured, they must be processed efficiently. High-performance technologies, such as multi-core CPUs or specialized hardware accelerators, are essential for quickly analyzing and processing packets.
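A minimal capture-and-parse sketch using the scapy library illustrates this first DPI stage; it requires root or administrator privileges, and the interface name is a hypothetical placeholder.

```python
# Minimal packet capture-and-parse sketch using scapy (requires root/admin
# privileges); the interface name is a hypothetical placeholder.
from scapy.all import sniff, IP, TCP

def inspect(packet):
    # Parse headers first; payload analysis would happen in a later DPI stage.
    if packet.haslayer(IP) and packet.haslayer(TCP):
        ip, tcp = packet[IP], packet[TCP]
        print(f"{ip.src}:{tcp.sport} -> {ip.dst}:{tcp.dport}  {len(packet)} bytes")

# Capture 20 packets from the given interface and run the parser on each.
sniff(iface="eth0", prn=inspect, count=20)
```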

DPI systems may classify network flows based on various criteria, such as source/destination IP addresses, ports, or traffic characteristics. Flow classification is essential for monitoring and controlling different types of traffic effectively. This is useful for security, compliance, and traffic optimization purposes. These can be used to block or throttle (slow down) specific websites or services.

DPI systems also need to understand various network protocols, such as HTTP, SMTP, FTP, or proprietary protocols used by specific applications. Protocol decoding engines are necessary to extract and interpret protocol-specific information. They can decode and analyze the data exchanged within these protocols, making it possible to identify the applications and services being used.

DPI relies on pattern matching algorithms to identify specific content within packets. Regular expressions, string matching, or more advanced techniques like Aho-Corasick algorithms are used to detect patterns associated with threats, protocols, or applications. Sophisticated DPI algorithms are used to analyze packet payloads, extract data, and identify application behavior, even if it uses non-standard ports or encryption.[2]

DPI often employs signature-based analysis, where patterns in packet contents are matched against a database of known patterns associated with specific applications or threats. This allows for the identification of applications, services, or security risks. DPI can also employ behavioral analysis techniques to identify anomalies or suspicious activities within network traffic. For example, it can detect unusual patterns in data transfer or deviations from expected behavior. DPI systems rely on extensive signature databases that contain patterns, behaviors, or attributes associated with specific applications, malware, or network threats. To remain effective, these databases must be updated regularly to account for new applications, protocols, and emerging threats, which requires efficient mechanisms for signature updates and database management.
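A simplified signature-matching sketch over a packet payload, using regular expressions, conveys the basic idea; the signatures are illustrative, and production DPI engines use multi-pattern algorithms such as Aho-Corasick over far larger databases.

```python
# Simplified signature matching over a packet payload using regular
# expressions; signatures are illustrative, and production DPI engines use
# multi-pattern algorithms (e.g., Aho-Corasick) over much larger databases.
import re

SIGNATURES = {
    "http_request": re.compile(rb"^(GET|POST|PUT|DELETE) /\S* HTTP/1\.[01]"),
    "tls_client_hello": re.compile(rb"^\x16\x03[\x01-\x04]"),
    "bittorrent_handshake": re.compile(rb"^\x13BitTorrent protocol"),
}

def classify(payload: bytes) -> str:
    for name, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return name
    return "unknown"

print(classify(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"))  # http_request
```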

It’s important to note that DPI technology raises significant considerations related to user privacy and network neutrality. The use of DPI for deep inspection of user traffic often involves monitoring the content of communications without user consent or proper safeguards. DPI systems must incorporate strong security and privacy measures to protect the data they handle and to ensure compliance with legal and regulatory requirements.

Since DPI involves the inspection of data content, it must be performed securely. Data encryption and privacy measures are crucial to protect the confidentiality of network traffic and user data. DPI systems generate logs and reports for monitoring, compliance, and troubleshooting purposes. Robust reporting and logging mechanisms are essential. Ensuring that DPI respects user privacy rights is crucial in any context.

Encrypted traffic poses a challenge for DPI. Some systems incorporate SSL/TLS decryption capabilities to inspect encrypted data, although this must be done with care to protect user privacy and maintain compliance with data protection regulations.

The use of DPI for legitimate security and network management purposes should be balanced with privacy concerns and adhere to relevant laws and regulations. DPI technology may need to integrate with other network security and monitoring solutions, such as firewalls, intrusion detection systems (IDS), and intrusion prevention systems (IPS).

Net neutrality regulations often require ISPs to be transparent about their traffic management practices, and DPI can be a tool to monitor and enforce these rules. In this context, DPI can play a positive role in upholding net neutrality by ensuring that ISPs are following the established regulations.

In summary, the importance of DPI for net neutrality largely depends on how it is applied and the specific goals it serves. When used in ways that violate net neutrality principles, such as blocking, degrading, or throttling certain content or devices, DPI is detrimental to the open Internet. However, when it is employed for network management, security, and ensuring ISP compliance with net neutrality regulations, it can be an important tool for maintaining a free, fast, and open Internet while still safeguarding the network’s integrity and security. Balancing these interests and ensuring proper oversight and transparency is essential in the discussion of DPI and net neutrality.

Citation APA (7th Edition)

Pennings, A.J. (2023, Nov 4). Deep Packet Inspection of Internet Traffic and Net Neutrality. apennings.com https://apennings.com/technologies-of-meaning/deep-packet-inspection-of-internet-traffic-and-net-neutrality/

Notes

[1] See Pennings, A.J. (2021, May 16). US Internet Policy, Part 5: Trump, Title I, and the End of Net Neutrality. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-5-trump-title-i-and-the-end-of-net-neutrality/

[2] Çelebi, M., & Yavanoglu, U. (2023). Accelerating Pattern Matching Using a Novel Multi-Pattern-Matching Algorithm on GPU. Applied Sciences, 13(14), 8104. https://doi.org/10.3390/app13148104


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor at Stony Brook University. He teaches broadband policy and ICT for sustainable development. Previously he taught digital economics and information systems management at New York University’s Department of Management and Technology. He also taught in the Digital Media Management MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

US Internet Policy, Part 7: Net Neutrality Discussion Returns with New FCC Democratic Majority

Posted on | October 9, 2023 | No Comments

The election of Joe Biden as US president in 2020 significantly impacted Internet policy discussions. After the Georgia senatorial runoff that shifted the balance of power to the Democrats, preparation at the Federal Communications Commission (FCC) began to target many issues that were dismissed or ignored during the Trump administration.

But plans stalled as Gigi Sohn, President Biden’s nominee to the FCC, was subjected to an intense lobbying effort from the telecom industry to block her seat at the commission. The former FCC staffer, longtime consumer broadband advocate, and first openly LGBTIQ+ nominee for commissioner eventually withdrew from consideration for the post in March 2023. Democrats finally regained majority control of the FCC when a new nominee, Anna Gomez, was confirmed by the US Senate on September 7, 2023.[1]

Pending Internet Policy Issues

– More, better, and cheaper broadband access and connectivity through mobile, satellite, and wireline facilities, especially in rural areas.
– Antitrust concerns about cable and telco ISPs, including net neutrality.
– Privacy and the collection of behavioral data by platforms to predict, guide, and manipulate online user actions.
– Section 230 reform for Internet platforms and content producers, including assessing social media companies’ legal responsibilities for user-generated content.
– Security issues, including ransomware and other threats to infrastructure, including Border Gateway Protocol (BGP) security between countries.
– Deep fakes, memes, and other issues of misrepresentation, including fake news.
– eGovernment and digital money, particularly the role of blockchain, CBDCs, and cryptocurrencies.
– Formation of Web 3.0, where services are monetized but ownership is democratized with new trust-based protocols using blockchain technologies, the core technologies of crypto and NFTs.

Addressing Net Neutrality

FCC Chairwoman Jessica Rosenworcel has scheduled October 19 for a vote on how to proceed with new rulemaking and address some issues that have come to the forefront of public scrutiny. With two other Biden appointments, the FCC is poised to act on the party’s priorities, including restoring net neutrality regulations. Such rules barred broadband providers from interfering with web traffic but were gutted by Republican commissioners during the administration of President Donald Trump.

Net neutrality is the legal principle that Internet Service Providers (ISPs) should treat all data and online content equally. It derives from commercial law that strives to treat all customers equally. For example, a hotel should not be able to restrict certain people from lodging at their facilities. It was applied to railroad law to ensure towns along a train route would not be excluded from sending their goods, such as cattle or wheat, to market. The common carrier precedent was applied to telegraph and later to telephone regulation. The principle has been bandied back and forth in the FCC for many years, reflecting different philosophies and sympathies for lobbying arguments.

My previous posts reviewed the issues of wired broadband net neutrality and the FCC's rulemaking under the Communications Act of 1934, which emphasized common carriage, the commercial obligation to serve all customers equally and fairly. Historically, these legislated guidelines allowed the US telecommunications system to dramatically expand voice communications from the 1930s through the 1970s.[2]

The FCC later decided that data communications and computer processing service providers operating on top of the telco infrastructure would be better served as lightly regulated Title I "enhanced" service providers. This designation allowed the Internet to take off in the 1990s and fostered the growth of thousands of Internet Service Providers (ISPs). For example, it allowed dial-up phone users to connect to ISPs and stay on the Internet for long durations without paying extra toll charges. This dynamic would change as competition heated up to provide "broadband" for the Internet and interactive television.

Consolidation Under Deregulated “Information Services”

Under GOP-leaning Michael Powell's FCC chairmanship, the ISP market structure consolidated dramatically with deregulation for both cable TV companies and plain old telephone (POTS) companies, allowing them to enter new markets. Cable television companies had developed broadband capabilities in the late 1990s with cable modems and coaxial cables to connect to the Internet. Likewise, the Regional Bell Operating Companies (RBOCs) that had carved up America's telecommunications after the breakup of AT&T in the 1980s developed Asymmetric Digital Subscriber Line (ADSL or DSL) broadband technologies to provide high-speed services to households over copper lines. This service uses faster fiber optic lines to transmit to a local node or curb and then copper lines into the premises. These companies had envisioned developing joint "information highways" going back to the Bell Atlantic/Tele-Communications, Inc. (TCI) deal that was announced in October 1993. That deal died in 1997 but was finally consummated by AT&T on March 9, 1999, in an all-stock deal worth about $48 billion.

AT&T wanted those cable lines from TCI to expand its local phone service, which it was already doing in another agreement with Time Warner. The merger would allow it to extend its markets and combine infrastructure for cost savings and efficiencies. This combination could provide a significant competitive advantage against other telephone providers and new entrants like satellite or wireless providers. It would also allow AT&T to offer a broader range of services, including bundled packages. But AT&T and the RBOCs were limited by the FCC's rulings under the Telecommunications Act of 1996, which distinguished between Title II common carrier services and Title I deregulated information services. FCC decisions in 2005 facilitated significant changes in the market structure of the Internet.

In 2005, both cable and phone companies suddenly became deregulated ISPs. This change allowed significant consolidation as telephone and cable companies, competing to provide “triple play” (TV, broadband, and voice) services to households, frantically merged with other telecommunications companies to dominate “broadband.” AT&T and Verizon, traditional telephone companies, merged with cable companies (and mobile) to create telecom behemoths. The road kill included thousands of smaller ISPs that eventually were no longer able to compete or even interconnect with the larger companies.

Two things led to sweeping deregulation. First, a U.S. Supreme Court decision (National Cable & Telecommunications Association v. Brand X Internet Services) upheld the FCC's 2002 ruling that cable modem service (i.e., cable television broadband Internet) is an interstate information service. This June 2005 decision confirmed that cable companies were subject to the less stringent Title I of the Communications Act of 1934. Two months later, during George W. Bush's administration, the FCC allowed the former Bell telephone companies to be classified as Title I "information services" as well, and the RBOCs that had built ADSL broadband for their "information highways" suddenly became deregulated ISPs.

Although there are currently about 2,940 Internet service providers in the United States, the top eight companies have over 90 percent of the subscribers. These are the top eight Internet providers in the U.S. as of June 2023, by approximate share of subscribers:

– AT&T 22%
– Spectrum 20%
– Xfinity 19%
– Verizon 6%
– Cox 5%
– T-Mobile 5%
– Century Link 2%
– Frontier 2%

The Internet and its World Wide Web were designed to allow devices like PCs, laptops, and mobile phones to talk to each other without much interference from the intermediate network that moves their data. Net neutrality strives to ensure that all online content, services, and applications running through that network are treated equally, regardless of their source. This equality promotes free access to information and prevents ISPs from blocking or throttling (slowing down) specific websites or services. Net neutrality allows users to choose which websites and services they access, without interference from ISPs. Users can explore a diverse range of content and make their own decisions about what to consume. It also ensures that nonprofit organizations, activists, and community groups have equal access to the Internet, allowing them to advocate for social and political causes without discrimination. The danger is that ISPs could examine and manipulate users’ Internet traffic, compromising their privacy and secure communication.

However, the current reality is that net neutrality is not being enforced. It was repealed in the 2017 FCC decision by another 3-2 vote. Pai's FCC was concerned that net neutrality regulations would discourage ISPs from investing in network infrastructure and improving Internet speeds since they could not charge content providers for prioritized access. Many net neutrality critics argued that without paid prioritization, the quality of some services would suffer, particularly during peak times when networks become congested.

Big ISPs argued that without the ability to create tiered service plans or charge content providers for faster access, they would struggle to manage network traffic and recoup the costs of infrastructure investments. They suggested that net neutrality rules limit an ISP's ability to manage and optimize network traffic efficiently, potentially affecting all users' service quality. The general argument was that government regulation of the Internet stifles innovation and imposes unnecessary bureaucratic burdens on ISPs that hinder performance for users.

It’s important to note that net neutrality is a complex policy principle, and its impact on underserved and economically disadvantaged communities depends on effective enforcement and regulatory oversight. Additionally, while net neutrality works to ensure equitable access to the Internet, broader efforts, such as affordable broadband access programs and digital literacy initiatives, are critical to addressing the digital divide and promoting digital inclusion for all, including those with lower incomes.

Notes

[1] The Federal Communications Commission (FCC) is meant to be an independent agency of the United States government responsible for regulating communications by wire and radio in the United States. It is designed to operate independently of partisan politics. The FCC comprises five commissioners appointed by the President of the United States and confirmed by the Senate. No more than three commissioners can be members of the same political party by law. The political affiliation of FCC commissioners can vary depending on the presidential administration in power during their appointments. As a result, the FCC’s policies and priorities may shift with changes in leadership and the political makeup of the commission. Therefore, the FCC’s stance on various issues, including telecommunications, broadband regulation, net neutrality, and media ownership, can change over time based on the views and priorities of the commissioners appointed by the current administration. It is important to recognize that a combination of legal mandates, policy considerations, public input, and the political environment at the time influences the FCC’s actions and decisions.

Citation APA (7th Edition)

Pennings, A.J. (2023, Oct 9). US Internet Policy, Part 7: Net Neutrality Discussion Returns with New FCC Democratic Majority. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-7-net-neutrality-discussion-returns-with-new-fcc-democratic-majority/

[2] List of Previous Posts in this Series

Pennings, A.J. (2022, Jun 22). US Internet Policy, Part 6: Broadband Infrastructure and the Digital Divide. apennings.com https://apennings.com/telecom-policy/u-s-internet-policy-part-6-broadband-infrastructure-and-the-digital-divide/

Pennings, A.J. (2021, May 16). US Internet Policy, Part 5: Trump, Title I, and the End of Net Neutrality. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-5-trump-title-i-and-the-end-of-net-neutrality/

Pennings, A.J. (2021, Mar 26). Internet Policy, Part 4: Obama and the Return of Net Neutrality, Temporarily. apennings.com https://apennings.com/telecom-policy/internet-policy-part-4-obama-and-the-return-of-net-neutrality/

Pennings, A.J. (2021, Feb 5). US Internet Policy, Part 3: The FCC and Consolidation of Broadband. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-3-the-fcc-and-consolidation-of-broadband/

Pennings, A.J. (2020, Mar 24). US Internet Policy, Part 2: The Shift to Broadband. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-2-the-shift-to-broadband/

Pennings, A.J. (2020, Mar 15). US Internet Policy, Part 1: The Rise of ISPs. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-1-the-rise-of-isps/

Related Posts

Pennings, A.J. (2023, May 6). Deregulating US Data Communications. apennings.com https://apennings.com/how-it-came-to-rule-the-world/deregulating-telecommunications/

Pennings, A.J. (2021, Sep 22). Engineering the Politics of TCP/IP and the Enabling Framework of the Internet. apennings.com https://apennings.com/telecom-policy/engineering-tcp-ip-politics-and-the-enabling-framework-of-the-internet/

Pennings, A.J. (2019, Nov 26). The CDA's Section 230: How Facebook and other ISPs became Exempt from Third Party Liabilities. apennings.com https://apennings.com/telecom-policy/the-cdas-section-230-how-facebook-and-other-isps-became-exempt-from-third-party-content-liabilities/

Pennings, A.J. (2018, Oct 17). Potential Bill on Net Neutrality and Deep Pocket Inspection apennings.com https://apennings.com/telecom-policy/potential-bill-on-net-neutrality-and-deep-pocket-inspection/

Pennings, A.J. (2016, Nov 15). Broadband Policy and the Fall of the ISPs. apennings.com https://apennings.com/global-e-commerce/broadband-and-the-fall-of-the-us-internet-service-providers/

Pennings, A.J. (2011, Jan 31). Comcast and General Electric Complete NBC Universal Deal. apennings.com https://apennings.com/media-strategies/comcast-and-general-electric-complete-nbc-universal-deal/




Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002-2012 he was on the faculty of New York University, starting programs in Digital Communications and Information Systems Management while teaching digital economics and policy. He also helped set up the Digital Media Management program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

ICTs for SDG 7: Twelve Ways Digital Technologies can Support Energy Access for All

Posted on | September 29, 2023 | No Comments

Digital technologies, often known as Information and Communication Technologies (ICTs), are crucial in supporting energy development and access in numerous ways. ICTs can enhance energy production, distribution, and consumption, as well as promote energy efficiency and help facilitate the transition to clean and sustainable energy sources. ICTs are accorded a significant role in supporting the achievement of the United Nations' Sustainable Development Goals (SDGs), including SDG 7, which aims to ensure access to affordable, reliable, sustainable, and modern energy for all. To harness the full potential of ICTs for energy development, it is essential to invest in grid infrastructure and equipment, cybersecurity and data privacy, as well as digital literacy and skills.

Energy grids

Here are twelve ways that ICTs can support energy development and access:

    1) Smart Electrical Grids
    2) Renewable Energy Integration
    3) Energy Monitoring and Management
    4) Demand Response Programs
    5) Energy Efficiency
    6) Energy Storage
    7) Predictive Maintenance
    8) Remote Monitoring and Control
    9) Electric Vehicle (EV) Charging Infrastructure
    10) Energy Access in Remote Areas
    11) Data Analytics and Predictive Modeling, and
    12) Research and Development.

1) ICTs enable the implementation of smart grids, intelligent electricity distribution systems that allow for real-time monitoring, control, and automation of grid operations. Smart grids use sensors, digital communication lines, and advanced analytics to monitor and manage electricity flows in real time. These Internet of Things (IoT) networks can optimize energy distribution, reduce energy losses, and integrate renewable energy sources more effectively. IoT sensors in energy infrastructure enable remote monitoring, maintenance, and early detection of faults. This can reduce downtime and improve energy availability.
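
As a rough illustration of this kind of real-time monitoring, the short Python sketch below checks simulated feeder readings against rated capacity and flags lines that are overloaded or close to it. The feeder names, capacities, and thresholds are hypothetical, not values from any actual utility system.

    # Minimal sketch of smart-grid line monitoring; all names and numbers are illustrative.
    LINE_CAPACITY_KW = {"feeder_A": 500.0, "feeder_B": 750.0, "feeder_C": 300.0}

    def check_readings(readings_kw, warn_ratio=0.9):
        """Compare measured load on each line against its rated capacity."""
        alerts = []
        for line, load in readings_kw.items():
            capacity = LINE_CAPACITY_KW.get(line)
            if capacity is None:
                alerts.append((line, "unknown line"))
            elif load >= capacity:
                alerts.append((line, "OVERLOAD: shed load or reroute"))
            elif load >= warn_ratio * capacity:
                alerts.append((line, "warning: approaching capacity"))
        return alerts

    if __name__ == "__main__":
        sample = {"feeder_A": 470.0, "feeder_B": 300.0, "feeder_C": 310.0}
        for line, status in check_readings(sample):
            print(f"{line}: {status}")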

2) ICTs facilitate the integration of renewable energy sources, such as solar, wind, geothermal, and hydroelectric energy, into energy grids. They provide real-time data on energy generation, storage, and consumption, allowing grid operators to balance supply and demand efficiently. Two main types of renewable energy generation resources need to be integrated: distributed generation, which refers to small-scale renewable generation close to a distribution grid; and centralized, utility-scale generation, which refers to larger projects that connect to major grids through transmission lines (See above image). Generating electricity using renewable energy resources rather than fossil fuels (coal, oil, and natural gas) can help reduce greenhouse gas emissions (GHGs) from the power generation sector and help address climate change.
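
To picture what "balancing supply and demand" means in practice, here is a minimal Python sketch that subtracts distributed and utility-scale renewable output from hourly demand to estimate the residual that dispatchable plants must still cover. The hourly figures are invented for illustration, not real grid data.

    # Minimal sketch: residual demand after renewables, hour by hour (illustrative numbers).
    def residual_demand(demand_mw, distributed_mw, utility_mw):
        residual = []
        for d, dg, ug in zip(demand_mw, distributed_mw, utility_mw):
            shortfall = d - (dg + ug)
            residual.append(max(shortfall, 0.0))  # a surplus means no dispatchable need
        return residual

    hourly_demand   = [820, 900, 1010, 980]   # MW
    distributed_gen = [60, 95, 140, 110]      # rooftop solar and small wind near the distribution grid
    utility_gen     = [300, 340, 420, 380]    # transmission-connected wind and solar farms

    print(residual_demand(hourly_demand, distributed_gen, utility_gen))
    # -> [460.0, 465.0, 450.0, 490.0]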

3) Smart meters and energy management systems use ICTs to provide consumers with real-time information about their energy usage, replacing the electromechanical meters that were unreliable and easy to tamper with. These devices empower individuals and businesses to make informed decisions that reduce energy consumption and costs. Smart meters allow for instantaneous monitoring of energy consumption, enabling utilities to optimize energy distribution and consumers to track and manage their usage. The Asian Development Bank has been very active in supporting the transition to smart meters, in the process helping countries meet their carbon commitments under the Paris Agreement, a legally binding international treaty on climate change adopted by 196 Parties at the UN Climate Change Conference (COP21) in Paris, France, in December 2015.
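
A toy Python example of what a smart meter's interval data makes possible: rolling 15-minute readings up into a daily total and an estimated charge that a consumer dashboard could display. The tariff and readings are made up for illustration.

    # Minimal sketch: summarize a day of 15-minute smart-meter readings (illustrative values).
    TARIFF_PER_KWH = 0.145  # hypothetical flat rate in USD

    def daily_summary(interval_kwh):
        total = sum(interval_kwh)
        return total, total * TARIFF_PER_KWH

    # 96 fifteen-minute intervals in a day; a flat 0.35 kWh each keeps the example simple
    intervals = [0.35] * 96
    kwh, cost = daily_summary(intervals)
    print(f"Usage: {kwh:.1f} kWh  Estimated cost: ${cost:.2f}")
    # -> Usage: 33.6 kWh  Estimated cost: $4.87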

4) ICTs enable demand response programs that encourage consumers to adjust their energy usage during peak demand periods in response to price signals and grid conditions. Utilities can send signals to smart devices (such as electric vehicles) to reduce energy consumption when necessary, avoiding blackouts and reducing the need to engage additional power plants. The New York Independent System Operator (NYISO), other electric distribution utilities, and wholesale system operators offer demand response programs to help avoid overload, keep prices down, reduce emissions, and avoid expensive equipment upgrades.
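
The decision logic behind a demand response event can be sketched in a few lines of Python: when the operator's price signal crosses a threshold, the largest flexible loads are curtailed first. The device names, sizes, and threshold are hypothetical.

    # Minimal sketch of a demand-response decision (illustrative devices and threshold).
    FLEXIBLE_LOADS_KW = {"ev_charger": 7.2, "water_heater": 4.5, "hvac_precool": 3.0}
    PRICE_THRESHOLD = 0.30  # USD/kWh at which curtailment begins

    def respond_to_signal(price_per_kwh):
        if price_per_kwh < PRICE_THRESHOLD:
            return []  # normal operation, nothing curtailed
        # curtail the largest flexible loads first
        return sorted(FLEXIBLE_LOADS_KW, key=FLEXIBLE_LOADS_KW.get, reverse=True)

    print(respond_to_signal(0.12))  # -> []
    print(respond_to_signal(0.42))  # -> ['ev_charger', 'water_heater', 'hvac_precool']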

ICT can also deliver related energy education and awareness campaigns through websites, mobile apps, dashboards, and social media to inform consumers about energy-saving practices and sustainable energy choices. Mobile payment platforms can also facilitate access to prepaid energy services, making it easier for people to pay for electricity and monitor their energy usage. Digital platforms can connect consumers with renewable energy providers, allowing individuals and businesses to purchase renewable energy certificates or even invest in community solar projects.

5) ICTs can be used to monitor and control energy-consuming devices and systems, such as HVAC (heating, ventilation, and air conditioning), lighting, and appliances to optimize energy efficiency. Building management systems and home automation solutions are examples of ICT applications in this area. Energy-efficient homes, offices, and manufacturing facilities use less energy to heat, cool, and run appliances, electronics, and equipment. Energy-efficient production facilities use less energy to produce goods, resulting in price reductions. Key principles of the EU energy policy on efficiency focus on producing only the energy that is really needed, avoiding investments in assets that are destined to be stranded, and reducing and managing demand for energy in a cost-effective way.

The utilization of more electrical energy technologies will assist the transition to more efficient energy sources while reducing greenhouse gases and other potential pollutants. Heat pumps, for example, are an exciting addition that operate like air conditioning, only in reverse. Heat pumps are used in EVs and are making inroads into homes and businesses.

6) ICTs support the management and optimization of energy storage systems, including batteries called BESS (Battery Energy Storage Systems) and pumped hydro storage. The latter moves water to higher elevations when power is available and runs it down through generators to produce electricity. These technologies store excess energy when it’s abundant and release it when demand is high. Tesla’s Megapack and Powerwalls use energy software platforms called Opticaster and Virtual Machine Mode that manage energy storage products as well as assist efficient electrical transmission over long grid lines.
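
A simplified Python sketch of storage dispatch logic shows the basic idea: charge the battery when renewable supply exceeds demand and discharge it to cover shortfalls, within capacity and rate limits. The sizes and hourly series are invented; this is a conceptual illustration, not how Tesla's Opticaster or any other commercial platform actually works.

    # Minimal sketch of battery (BESS) dispatch; capacities and series are illustrative.
    CAPACITY_KWH = 200.0
    MAX_RATE_KW = 50.0  # per-hour charge/discharge limit

    def dispatch(supply_kw, demand_kw, soc_kwh=100.0):
        """Step through hourly supply and demand, returning the state of charge each hour."""
        history = []
        for s, d in zip(supply_kw, demand_kw):
            surplus = s - d
            if surplus > 0:   # charge with surplus renewable output
                soc_kwh = min(CAPACITY_KWH, soc_kwh + min(surplus, MAX_RATE_KW))
            else:             # discharge to cover the shortfall
                soc_kwh = max(0.0, soc_kwh - min(-surplus, MAX_RATE_KW))
            history.append(round(soc_kwh, 1))
        return history

    print(dispatch([120, 140, 60, 40], [100, 90, 110, 130]))
    # -> [120.0, 170.0, 120.0, 70.0]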

Tesla Master Plan 3

7) In energy production facilities, ICTs can be used to monitor the condition of equipment and predict when maintenance is needed. This reduces downtime, extends equipment lifespan, and improves overall efficiency. ICT-based weather and renewable energy forecasting models improve the accuracy of predicting renewable energy generation, aiding grid operators in planning and resource allocation. Robust ICT networks can also ensure timely communication during energy-related emergencies, helping coordinate disaster response and recovery efforts.
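
Condition monitoring of this kind often reduces to anomaly detection. The Python sketch below flags a sensor reading when it drifts several standard deviations away from its recent rolling window; the vibration values and thresholds are illustrative, not drawn from any real plant.

    # Minimal sketch of predictive-maintenance anomaly detection (illustrative data).
    from statistics import mean, stdev

    def find_anomalies(readings, window=10, z_limit=3.0):
        anomalies = []
        for i in range(window, len(readings)):
            recent = readings[i - window:i]
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(readings[i] - mu) / sigma > z_limit:
                anomalies.append((i, readings[i]))
        return anomalies

    vibration_mm_s = [2.1, 2.0, 2.2, 2.1, 2.0, 2.3, 2.1, 2.2, 2.0, 2.1, 2.2, 6.8, 2.1]
    print(find_anomalies(vibration_mm_s))  # the 6.8 spike at index 11 is flagged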

8) ICTs enable remote monitoring and control of energy infrastructure, such as power plants and substations. These processes use a combination of hardware and software to track key metrics and the overall performance. Their equipment mix includes IoT-enabled sensors that track relevant data, while software solutions produce a dashboard of alerts, trends, and updates that can also enhance the safety and reliability of energy production and distribution.

9) Digital technologies play a critical role in managing EV charging infrastructures. They can help distribute electricity efficiently to both stationary and wireless charging stations. Mobile apps provide users with real-time information about charging availability, compatibility, and costs. They can also keep drivers and passengers entertained and productive while waiting for the charging to conclude.

10) ICT can facilitate the development of microgrids in off-grid or remote areas, providing access to reliable electricity through localized energy generation and distribution systems. These alternate grids use ICTs to support the deployment of standalone renewable energy systems, providing access to electricity and related clean energy sources such as geothermal, hydroelectric, solar, and wind. Renewable energy, innovative financing, and an ecosystem approach can work together to provide innovative solutions to rural areas.

11) ICTs enable data analytics and predictive modeling to forecast energy consumption patterns, grid behavior, and the impact of impending weather conditions. Analyzing and interpreting vast amounts of data allows energy companies to optimize power generation through real-time monitoring of energy components, cost forecasting, fault detection, consumption analysis, and predictive maintenance.

These insights can inform energy planning and policy decisions. The ICT-enabled data collection, analysis, and reporting on energy access and usage can help policymakers and organizations track progress toward SDG 7 targets.
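
As a deliberately naive example of this kind of forecasting, the Python sketch below fits a least-squares trend line to a few months of invented consumption figures and projects the next period. Real utility models incorporate weather, seasonality, and customer mix, so this is only a conceptual illustration.

    # Minimal sketch: project next month's consumption from a linear trend (made-up data).
    def fit_trend(values):
        n = len(values)
        xs = range(n)
        x_mean = sum(xs) / n
        y_mean = sum(values) / n
        slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values)) / \
                sum((x - x_mean) ** 2 for x in xs)
        return slope, y_mean - slope * x_mean

    monthly_gwh = [410, 415, 422, 430, 428, 437]
    slope, intercept = fit_trend(monthly_gwh)
    forecast = slope * len(monthly_gwh) + intercept
    print(f"Forecast for next month: {forecast:.1f} GWh")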

12) ICTs support research and development efforts in the energy sector by facilitating simulations and the testing of new technologies and energy solutions. Energy and fuel choices are critical determinants of economic prosperity, environmental quality, and national security and need to be central to academic and commercial research.

To fully address ICT for SDG 7, it's essential to confront digital divides, expand internet access, and promote digital literacy in underserved communities. Collaboration among governments, utilities, technology providers, research institutions, and civil society is vital to advancing the integration of ICTs into the energy sector and ensuring sustainable, reliable energy access for all.

Notes

[1] Some of the categories and text for this essay were generated by ChatGPT, edited with the assistance of Grammarly, and written in line with my expertise and knowledge from teaching an ICT and SDGs course for six years.

Citation APA (7th Edition)

Pennings, A.J. (2023, Sept 29). ICTs for SDG 7: Twelve Ways Digital Technologies can Support Energy Access for All. apennings.com https://apennings.com/science-and-technology-studies/icts-for-sdg-7-twelve-ways-digital-technologies-can-support-energy-access-for-all/




Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches courses in ICT for sustainable development as well as broadband networks and sensing technologies. From 2002-2012 he was on the faculty of New York University, and he also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

The Increasing Value of Science, Technology, and Society Studies (STS)

Posted on | August 27, 2023 | No Comments

I regularly teach a course called Introduction to Science, Technology, and Society Studies (STS). It investigates how science and technology both shape and are shaped by society. The course seeks to understand their cultural, economic, ethical, historical, and political dimensions by investigating the dynamic interplay between these key factors of modern life.

Below I outline class topics, list major universities offering similar programs, and introduce some general areas of STS research. The scholarship produced by STS is used worldwide by engineers, journalists, legislators, policy-makers, as well as managers and other industry actors. It also has relevance to the general public engaged in climate, health, digital media, and other societal issues arising from science and technology adoption.

In class, we cover the following topics: Artificial Intelligence, Biomedicine, Cyberspace, Electric Vehicles and Smart Grids, Nanotechnology, Robotics, and even Space Travel. Tough subjects, but just as challenging is the introduction of perspectives from business, cognitive science, ethics, futurism, humanities, and social sciences like politics that can provide insights into relationships between science, technology, and society.[1]

STS is offered by many of the most highly-rated universities, often in Engineering programs but also in related Environment, Humanities, and Medical programs.

Although I teach in South Korea, the program was developed at Stony Brook University (SBU) in New York as part of the Department of Technology and Society (DTS), offering BS, MS, and PhD degrees at the College of Engineering and Applied Sciences (CEAS). The DTS motto is “Engineering has become much too important to be left to the engineers,” which paraphrases James Bryant Conant, who wrote after examining the results of the atomic bomb on Hiroshima that, “Science is much too important to be left to the scientists.” DTS programs prepare graduates with the technical and science capacity to collaborate productively with sister engineering and science departments at SBU and SUNY Korea while applying social science expertise and humanistic sensibility to holistic engineering education.

Our DTS undergraduate program at SUNY Korea has an additional emphasis on information and communication technologies (ICT) due to Korea’s leadership in this area.

Notable STS Programs

– Massachusetts Institute of Technology (MIT) has a Program in Science, Technology, and Society and is considered the initial founder of the field in the early 1970s.

– Stanford University’s Program in Science, Technology, and Society explores scientific and technological developments’ social, political, and ethical dimensions.

– University of California, Berkeley’s Science, Technology, Medicine, and Society Center addresses the social, cultural, and political implications of science and technology. It is known for its engagement with critical theory and social justice issues.

Harvard University’s Program on Science, Technology, and Society is part of the John F. Kennedy School of Government and provides a platform for examining the societal impact of science and technology through various courses and research opportunities.

– Cornell University, New York: Cornell’s Department of Science and Technology Studies was an early innovator in this area and offers undergraduate and graduate programs focusing on the history, philosophy, and social aspects of science and technology.

– The University of Edinburgh’s Science, Technology, and Innovation Studies Department in the United Kingdom is known for its research and teaching in the field.

– University of California, San Diego's Science Studies Program is part of the Department of Literature and offers interdisciplinary courses that examine the cultural, historical, and ethical dimensions of science and technology.

– The University of Twente in the Netherlands has a renowned Science, Technology, and Society Studies program emphasizing a multidisciplinary approach.

– In Sweden, Lund University’s Department of Sociology offers a strong STS program that covers topics such as the sociology of knowledge, science communication, and the ethical aspects of technology.

– York University in Canada has a Science and Technology Studies program that encourages critical thinking about the role of science and technology in contemporary culture.

While this list is incomplete, let me mention the Department of Technology and Society (DTS) and its history at Stony Brook University, which also dates back to the early 1970s. The Department of Technology and Society at SUNY Korea offers its degrees from DTS in New York, including BS and MS degrees in Technological Systems Management and a PhD in Technology, Policy, and Innovation.

Many engineering programs have turned to STS to provide students with conceptual tools to think about engineering problems and solutions in more sophisticated ways. It is often allied with Technology Management programs that include business perspectives and information technology practices. Some programs feature standalone courses on the sociocultural and political aspects of technology and engineering, often taught by faculty from outside the engineering school. Others incorporate STS material into traditional engineering courses, e.g., by making ethical or societal impact assessments part of capstone projects.

So, Science, Technology, and Society Studies (STS) scholars study the complex interplay between these domains to understand how they influence each other and impact human life.

Key Research Inquiries

Historical Context – STS scholars often delve into the past developments of scientific discoveries and technological innovations, as well as the social and cultural contexts in which they emerged. They readily explore the historical development of scientific and technical knowledge to uncover how specific ideas, inventions, and discoveries have emerged and changed over time. This exercise helps to contextualize the current state of science and technology and understand their origins.

Social Implications – STS emphasizes the social consequences of scientific and technological advancements. This examination relates to ethics, equity, power dynamics, and social justice issues. For instance, STS might analyze how certain technologies disproportionately affect different groups within society or how they might be used to reinforce existing inequalities.

Policy and Governance – STS researchers analyze how scientific and technological innovations are regulated, legislated, and governed by local, national, and international policies. They explore how scientific expertise, public opinion, industry interests, and political considerations influence policy decisions. They also assess the effectiveness of these policies in managing potential risks and benefits.

Public Perception and Communication – STS studies also explore how scientific information and technological advancements are communicated to the public. These inquiries involve investigating how public perceptions and attitudes towards science and technology are formed. They recognize that media narratives and communication channels influence these perceptions.

Public Engagement and Input – STS emphasizes the importance of involving the public in discussions about scientific and technological matters. It examines how scientific knowledge is communicated to the public, how public perceptions influence scientific research, and how public input can shape technological development.

Social Construction of Science and Technology – STS emphasizes that science and technology are not solely products of objective inquiry or innovation but are also influenced by social and cultural factors. It examines how scientific knowledge is constructed, contested, and accepted within different communities and how economic, political, and cultural forces shape technologies.

Ethical and Moral Considerations – The STS field often addresses scientific and technological advancements’ ethical and moral implications. This analysis includes discussions on the responsible development of new technologies, the potential for unintended consequences, and the distribution of benefits and risks across different social groups.

Innovation Studies – STS scholars also study innovation processes, including how scientific knowledge translates into technological applications, how creative ecosystems are established, and how collaboration between researchers, policymakers, and industry actors contributes to technological progress.

Environmental Analysis – Science, Technology, Society, and Environment (STSE) Studies interrogate how scientific innovations, technology investments, and industrial applications affect human society and the natural environment. Education is an important component as people make decisions that often guide the environmental work of scientists and engineers. STS also investigates how scientific and technological tools can help create more climate-resilient urban and rural infrastructures.

Technological Determinism – STS often confronts the idea of technological determinism, which suggests that technology significantly drives social change. It investigates the institutional factors and human agency that significantly shape technological development and its impacts while recognizing science and technology’s driving forces.

Digital Culture and Network Infrastructure – Some STS scholars study the overall media environment, the culture it engenders, and its enabling frameworks. It considers the interaction and interdependence of various media forms and linkages within and between networks. They explore how communication speed, information storage, and digital processing influence human perception, culture, economics, and the environment. These inquiries often include raising questions about privacy, censorship, propaganda, and the responsible use of media technologies.

Energy and Carbon Dependence – While engineers study the chemical, electrical, electromagnetic, mechanical, and thermodynamics of energy, STS scholars examine the central role of these energies in modern life. Again, they take various multidisciplinary, social-scientific perspectives on energy and environment, analyzing the economic, political, and social aspects of the production and consumption of energy, including the controversies, domestication, and innovation of new forms of energy.

Interdisciplinary Approach – As mentioned throughout this post, STS is inherently interdisciplinary, drawing on insights from sociology, anthropology, history, philosophy, political science, and more. This multidisciplinary perspective allows for a comprehensive examination of the complex relationships between science, technology, and society.

Conclusion

STS is relevant in addressing contemporary issues such as artificial intelligence, biotechnology, environmental challenges, privacy concerns, and more. It encourages a holistic understanding of the complex interactions between science, technology, and society, crucial for making informed decisions and policies in an increasingly technologically driven world. It is research-driven, using both quantitative and qualitative methods. By studying STS interactions, it aims to contribute to more informed decision-making, responsible innovation, and a better understanding of the role of science and technology in modern societies.

Notes

[1] I have previously described how I use the 4 Cs of the cyberpunk genre for techno-social analysis.
[2] Some of the categories and text for this essay were generated by ChatGPT and edited with the use of Grammarly, in line with additional knowledge from my teaching the STS course for six years.

Citation APA (7th Edition)

Pennings, A.J. (2023, Aug 27). The Increasing Value of Science, Technology, and Society Studies (STS). apennings.com https://apennings.com/technologies-of-meaning/the-value-of-science-technology-and-society-studies-sts/




Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002-2012 he was on the faculty of New York University, where he started programs in Digital Communications and Information Systems Management while teaching digital economics. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Pressing Global Standards for Internet Protocols

Posted on | July 14, 2023 | No Comments

Developing new standards that would allow computers to connect and exchange information was central to the growth of international data networks. Computers and telecommunications networks are routinized codes and procedures constituted and shaped technologically by economic, engineering, and political decisions. Telecommunications requires agreement about the practices of interconnection, the interaction protocols, and the technical standards needed to couple disparate nodes. This post looks at the importance of standards development as the Internet emerged. Several powerful contenders developed competing designs before TCP/IP eventually emerged as the global solution. The Open Systems Interconnection (OSI) model developed through the International Organization for Standardization (ISO), for example, was influential but turned out to be a fatal distraction for many companies.

Standards sometimes emerge out of functionality, sometimes out of cooperation, and often out of pure economic power. Each of these conditions was present in the fight to develop telecommunications equipment for international data communications during the early 1970s and into the 1980s. Standards allow different types of equipment to work together. Choices of standards involve the exclusion of some specifications and the inclusion of others. Standards create competitive advantages for companies. Ultimately, these standards determine whose equipment will be used and whose will either be scrapped or never get off the design board. This is even more the case when standards bridge national boundaries, where the protocols and equipment have to be harmonized for effective technical communications.

Users (usually international corporations), international coordinating bodies, and computer equipment manufacturers were all starting to react to the new economic conditions of the 1970s. The movement to floating foreign exchange rates and the increased demand for ICT were especially problematic. Banks and other financial institutions such as the New York Stock Exchange (NYSE) were also very keen to develop data solutions to expand their scope over wider market areas, speed up “back office” data processing services, and provide new services.

Meanwhile, the ITU began soliciting the positions of its member nations and associated corporations regarding their plans to develop data communications and possible joint solutions. Perhaps most importantly, IBM’s Systems Network Architecture (SNA), a proprietary network, had the force of the monolithic computer corporation behind it. SNA was a potential de facto standard for international data communications because of the company’s overwhelming market share in computers.

Several other companies came out with proprietary networks as well during the mid-1970s. Burroughs, Honeywell, and Xerox all drew on ARPANET technology but designed their networks to work only with the computers they manufactured.[1] As electronic money and other desired services emerged worldwide, these three stakeholders (users, the ITU, and computer OEMs) attempted to develop the conduits for the world's new wealth.

International organizations were also key to standards development in the arena of international data communications. The ITU and the ISO initiated international public standards on behalf of their member states and telecommunications agencies. The ITU's Consultative Committee on International Telegraphy and Telephony (CCITT) was responsible for coordinating computer communication standards and policies among its member Post, Telephone, and Telegraph (PTT) organizations. This committee produced "Recommendations" for standardization, which usually were accepted readily by its member nations.[2] As early as 1973, the ITU started to develop its X-series of telecommunications protocols for data packet transfer (X indicated data communications in the CCITT's taxonomy).

Another important standards body, mentioned above, is the International Organization for Standardization (ISO). The ISO was formed in 1946 to coordinate standards across a wide range of industries. In this case, its members primarily represented the telecommunications and computer equipment manufacturers. ANSI, the American National Standards Institute, represented the US.

Controversy emerged in October 1974 and revolved around IBM’s SNA network, which the Canadian PTT had taken issue with. The Trans-Canada Telephone System (TCTS) wanted to produce and promote its own packet-switching network that it called Datapac. It had been developing its own protocols and was concerned that IBM would develop monopolistic control over the data communications market if allowed to continue to build its own transborder private networks. Although most computers connected at the time were IBM, the TCTS wanted circuitry that would allow other types of computers to use the network.

Both sides came to a "standoff" in mid-1975 as IBM wanted the PTT to use its SNA standards and the carrier tried to persuade IBM to conform to Canada's requirements. The International Telecommunication Union attempted to resolve the situation by forming an ad hoc group to come up with universal standards for connecting "public" networks. Britain, Canada, and France, along with BBN spin-off Telenet from the US, started to work on what was to become the X.25 data networking standard.

The ITU’s CCITT, who represented the interests of the PTT telecommunications carriers, proposed X.25 and X.75 standards out of a sense of mutual interest among its members in retaining their monopoly positions. US representatives, including the US Defense Communications Agency were pushing the new TCP/IP protocols developed by ARPANET because of its inherent network and management advantages for computer operators. Packet-switching broke up information and repackaged it in individual packets of bits that needed to be passed though the telecommunications circuit to the intended destination. TCP gave data processing managers more control because it was responsible for initiating and setting up the connection between hosts.

In order for this to work, all the packets must arrive safely and be placed in the proper order. To get reliable information, a data-checking procedure needs to catch packets that are lost or damaged. TCP placed this responsibility at the computer host, while X.25 placed it within the network, and thus under the control of the network provider. The US pushed hard for the TCP/IP standard in the CCITT proceedings but was refused by the PTTs, who had other plans.[3]
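
To make the host-versus-network distinction concrete, here is a heavily simplified Python sketch of what end-to-end reliability at the host looks like: the receiving computer reassembles packets by sequence number and reports the gaps it needs retransmitted. Real TCP works with byte offsets, acknowledgments, and timers; X.25, by contrast, placed this bookkeeping inside the carrier's network.

    # Minimal sketch of host-based reliability in the spirit of TCP (greatly simplified).
    def reassemble(received, expected_count):
        """received: list of (sequence_number, payload) pairs in arrival order."""
        buffer = dict(received)  # the latest copy of a duplicate packet wins
        missing = [seq for seq in range(expected_count) if seq not in buffer]
        in_order = [buffer[seq] for seq in sorted(buffer) if seq < expected_count]
        return "".join(in_order), missing

    packets = [(0, "TRANS"), (2, "ER CO"), (3, "MPLET"), (5, "D!")]  # segments 1 and 4 were lost
    message, gaps = reassemble(packets, expected_count=6)
    print(message)                                       # partial data, reassembled in order
    print("request retransmission of segments:", gaps)   # -> [1, 4]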

Tensions increased due to a critical timeframe. The CCITT wanted to specify a protocol by 1976, as it met only every four years to vote on new standards. The group had to work quickly in order to vote on the standards at the 1976 CCITT plenary coming together in September. Otherwise, it would have to wait until 1980.

The X.25 standards were developed and examined throughout the summer of 1976 and approved by the CCITT members in September. The hastily contrived protocol was approved over the objections of US representatives who wanted TCP/IP institutionalized. The PTTs and other carriers argued that TCP/IP was unproven and that requiring its implementation on all the hosts they would serve was unreasonable. Given ARPANET hosts' difficulty implementing TCP/IP by 1983, their concerns had substance. X.25 and another standard, X.75, put the PTTs in a dominant position regarding data communications, despite the robustness of computer innovations and the continuing call by corporations for better service.

The ARPANET’s packet-switching techniques made it into the commercial world with the help of the X-series of protocols defined by the ITU in conjunction with some former ARPANET employees. A store-and-forward technology rooted in telegraphy, it passed data packets over a special network to find the quickest route to its destination. What was needed was an interface to connect the corporation or research institute’s computer to the network.

The X.25 protocol was created to provide the connection from the computer to the data network. At the user's firm, "dumb" terminals, word processors, mainframes, and minicomputers (known in the vernacular as DTE, or Data Terminal Equipment) could be connected to the X.25 interface equipment with technology called PADs (Packet Assemblers/Disassemblers). The conversion of data from the external device to the X.25 network was transparent to the terminal and would not affect the message. An enterprise could build its own network by installing a number of switching computers connected by high-speed lines (usually 56 kbps up to the late 1980s).

X.25 connected these specially designed computers to the data network. The network could also be set up by a separate company or government organization to provide data networking services to customers. In many cases a hybrid network could be set up combining private facilities with connections to a public-switched data network.[4]
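
The PAD's role can be pictured with a small Python sketch: it buffers the character stream coming from a dumb terminal and hands the network complete packets, transparently to the device. The 128-byte payload size is a common X.25 default, but the framing here is purely conceptual, not the actual protocol format.

    # Conceptual sketch of a PAD (Packet Assembler/Disassembler); framing is illustrative only.
    def assemble(char_stream, payload_size=128):
        packets, buffer = [], []
        for ch in char_stream:
            buffer.append(ch)
            if len(buffer) == payload_size or ch == "\r":  # packet full or end of terminal line
                packets.append("".join(buffer))
                buffer = []
        if buffer:                                         # flush whatever characters remain
            packets.append("".join(buffer))
        return packets

    terminal_input = "BALANCE INQUIRY ACCT 00412\rTRANSFER 2500 TO ACCT 00977\r"
    for i, pkt in enumerate(assemble(terminal_input)):
        print(f"packet {i}: {pkt!r}")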

Developed primarily by Larry Roberts from ARPA, who later went to work with Telenet’s value-added networks, X.25 was a compromise that provided basic data communications for transnational users while keeping the carriers in charge. The standard was eagerly anticipated by the national PTTs who were beginning to realize the importance of data communications and the danger of allowing computer manufacturers to monopolize the standards process by developing proprietary networks. What was surprising though, was the endorsement of X.25 by the transnational banks and other major users of computer communications. As Schiller explained:

  • What is unusual is that U.S. transnational corporations, in the face of European intransigence, seem to have endorsed the X.25 standard. In a matter of a few months, Manufacturers Hanover, Chase Manhattan, and Bank of America announced their support for X.25, the U.S. Federal Reserve bruited the idea of acceptance, and the Federal Government endorsed an X.25-based interim standard for its National Communications System. Bank of America, which on a busy day passes $20 billion in assets through its worldwide network “cannot stall its expansion planning until IBM gives its blessing to a de facto international standard,” claims one report. Yet even more unusual, large users’ demands found their mark even over the interests of IBM, with its tremendous market share of the world’s computer base. In summer, 1981, IBM announced its decision to support the X.25 standard within the United States.[5]

Telenet subsequently filed an application with the FCC to extend its domestic value-added services internationally using the X.25 standard, and a number of PTT networks, such as France's Transpac, Japan's DDX, and the British Post Office's PSS, also converted to the new standard. Computer equipment manufacturers were forced to develop equipment for the new standard. This was not universally criticized, as the standard provided a potentially large market for new equipment.

Although the X-series did not resolve all of the issues for transnational data networking users, it did provide a significant crack in the limitations on international data communications and a system that worked well enough for the computers of the time. Corporate users as well as the PTTs were temporarily placated. A number of privately owned network service providers such as Cybernet and Tymnet used the new protocols, as did new publicly owned networks such as Uninet, Euronet, and the Nordic Data Network.

In another attempt to preclude US dominance in networking technology, the British Standards Institution proposed to the ISO in 1977 that the global data communications infrastructure needed a standard architecture. The move was controversial because of the recent work and subsequent unhappiness over X.25. The next year, members of the ISO, namely Japan, France, the US, Britain, and Canada, set out to create a new set of standards they called Open Systems Interconnection (OSI), using generic components that many different equipment manufacturers could offer. Most equipment for telecommunications networks was built by national electronics manufacturers for domestic markets, but the internationalization of communications required a different approach, because multiple countries needed to be connected and that required compatibility. Work on OSI was done primarily by Honeywell Information Systems, which drew heavily on IBM's SNA (Systems Network Architecture). The layered model was initially favored in Europe, where there was suspicion of the predominant US protocols.

Libicki describes the process:

  • “The OSI reference model breaks down the problem of data communications into seven layers; this division, in theory, is simple and clean, as show in Figure 4. An application sends data to the application layer, which formats them; to the presentation layer, which specifies byte conversion (e.g. ASCII, byte-ordered integers); to the session layer, which sets up the parameters for dialogue, to the transport layer, which puts sequence numbers on and wraps checksums around packets; to the network layer, which adds addressing and handling information; to the data-link layer, which adds bytes to ensure hop-to-hop integrity and media access; to the physical layer, which translates bits into electrical (or photonic) signals that flow out the wire. The receiver unwraps the message in reverse order, translating the signals into bits, taking the right bits off the network and retaining packets correctly addressed, ensuring message reliability and correct sequencing , establishing dialogue, reading the bytes correctly as characters, numbers, or whatever, and placing formatted bytes into the application. This wrapping and unwrapping process can be considered a flow and the successive attachment and detachment of headers. Each layer in the sender listens only to the layer above it and talks only to the one immediately below it and to a parallel layers in the receiver. It is otherwise blissfully unaware of the activities of the other layers.”[6]
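
The wrapping and unwrapping Libicki describes can be reduced to a few lines of Python: each layer prepends its own header on the way down the stack, and the receiver strips them in reverse order on the way up. The layer names follow the OSI model, but the header strings are placeholders rather than real protocol fields.

    # Minimal sketch of OSI-style encapsulation; headers are placeholders, not real fields.
    LAYERS = ["application", "presentation", "session",
              "transport", "network", "data-link", "physical"]

    def send(payload):
        for layer in LAYERS:              # wrap from the top of the stack down to the wire
            payload = f"[{layer}-hdr]{payload}"
        return payload

    def receive(frame):
        for layer in reversed(LAYERS):    # unwrap from the wire back up to the application
            prefix = f"[{layer}-hdr]"
            assert frame.startswith(prefix), f"malformed frame at the {layer} layer"
            frame = frame[len(prefix):]
        return frame

    wire_frame = send("GET /quote?symbol=IBM")
    print(wire_frame)
    print(receive(wire_frame))            # the original payload is restored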

Specifying protocols before their actual implementation turned out to be bad policy. Unfortunately for Japan and European countries, which had large domestic equipment manufacturers and did not want the US to control international telecommunications equipment markets, the opposite happened. These countries lost valuable time developing products to OSI standards while the computer networking community increasingly used TCP/IP. As the Internet took off, the manufacturing winners were companies like Cisco and Lucent. They ended up years ahead of other telecom equipment manufacturers and gave the US the early advantage in internetworking.[7]

In another post, I explore the engineering of a particular political philosophy into TCP/IP.

Citation APA (7th Edition)

Pennings, A.J. (2023, July 14). Pressing Global Standards for Internet Protocols. apennings.com https://apennings.com/digital-coordination/pressing-global-standards-for-internet-protocols/


Notes

[1] Abbate, J. (1999) Inventing the Internet. Cambridge, MA: The MIT Press. p. 149.
[2] Abbate, J. (1999) Inventing the Internet. Cambridge, MA: The MIT Press. p. 150.
[3] ibid, p. 155.
[4] Helmers, S.A. (1989) Data Communications: A Beginner’s Guide to Concepts and Technology. Englewood Cliffs, NJ: Prentice Hall. p. 180.
[5] Schiller, D. (1982) Telematics and Government. Norwood, NJ: Ablex Publishing Corporation. p. 109.
[6] Libicki, M.C. (1995) “Standards: The Rough Road to the Common Byte.” In Kahin, B. and Abbate, J. Standards Policy for Information Infrastructure. Cambridge, MA: The MIT Press. pp. 46-47.
[7] Abbate, p. 124.




Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches broadband technologies and policy. From 2002-2012 he was on the faculty of New York University, where he taught digital economics while managing programs addressing information systems and telecommunications. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.
