Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Digital Spreadsheets – Part 5 – Numeracy and the Power of Zero

Posted on | February 17, 2020 | No Comments

Previously, I explored the electronic spreadsheet as a meaning-making application that was central to the financial explosion of the 1980s and its economic aftershocks. Spreadsheets framed and produced information and meaning consequential to monetary and organizational practices as they became part of the daily routines of information workers. Although initially the purview of accountants and bookkeepers, spreadsheet usage became ubiquitous throughout work practices in conjunction with word processing and databases. The spreadsheet incorporated numerical formulations and innovations with systems of categorization and inventorying to become a tool of productivity and power over people and resources.

In my last post, I explored the importance of symbolic representation systems, mainly writing, in the workings of a spreadsheet. Written alphanumerical symbols have shaped Western society. For example, tables and lists are epistemological technologies that have historically organized administrative knowledge in castles, military camps, and monasteries. With the invention of ASCII characters, PC-based applications, including the spreadsheet, became powerful tools for organizing written and numerical facts in modern corporations and other organizations worldwide.

In this post, I will focus specifically on the power of numeracy, with a special emphasis on the role of zero. The zero is an extraordinary cognitive invention that has been central to the quantitative workings of the spreadsheet. In conjunction with Indo-Arabic numerals and double-entry accounting techniques, the spreadsheet has been crucial to the rise of modern capitalism and that peculiar historical manifestation, the corporation.

Although still used on occasion for style, Roman numerals have been mathematically obsolete for several hundred years. Initially based on scratching or tallying systems for sheep and other items, Roman numerals most likely represented hand figurations or gestures. For example, the number 10 or X probably represented two thumbs crisscrossed. Addition and subtraction were relatively straightforward, but division and multiplication were not as “easy or obvious.” Roman numerals included thousands (“M”), but the system never developed a representation for a million or beyond.
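To see why multiplication and division were awkward, here is a minimal Python sketch (my illustration, not from the original post) that decodes Roman numerals. Each symbol carries a fixed value, and a smaller symbol before a larger one subtracts; there is no zero and no positional structure for arithmetic to exploit.

```python
# Fixed symbol values; note the absence of any symbol for zero.
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        # A smaller symbol before a larger one subtracts (the I in IV).
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value
        else:
            total += value
    return total

print(roman_to_int("MCMXCIV"))  # 1994
```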

The modern system of numeration is based on representations using ten different digits 0, …, 9 imported from the Middle East and Asia; it will be called Indo-Arabic in this post. These numerals are said to have been designed based on the number of angles each numeral contained, and over the years the way they are written has rounded out. The Arabian interest in Indian numerals based on zero arose to solve practical problems such as inheritances, purchases, sales contracts, tax collection, and wills. As they moved from India to the Middle East and finally to Europe, Indo-Arabic numerals became crucial for accounting and financial systems that have since become global standards and key ingredients in spreadsheet formulations.

Evidence dates the zero (also known as the naught, or nil) back some 2,000 years, with early inscriptions found in Cambodia; it is generally recognized that India refined its use around 500 AD, and it came to Europe in the early 1200s from Arabia. Muhammed ibn-Musa al-Khwarizmi, or “Algorismus,” as his name was Latinized, was probably one of the most influential mathematicians in the transfer of this knowledge to the West. The Persian scholar taught in Baghdad sometime between 800 and 850. He wrote a book on the Hindu number system that was translated into Latin as De numero indorum, or “On the Hindu numbers.” He later wrote another seminal book, Al-jabr w’al muqabalah, which became known in Europe as Algebra. His Latinized name, meanwhile, is the root of the English word “algorithm.”

One of the mathematicians who introduced these numbers to Europe was Leonardo of Pisa, or Leonardo Pisano, famously known as “Fibonacci,” short for filius Bonacci, the son of Bonaccio. In his book Liber abaci (Book of the Abacus or Book of Calculating), completed in 1202, he showed how the Indo-Arabic numbers could be used. The book was divided into four parts. The first introduced the Indo-Arabic numbers, especially zephirum, which became zefiro in Italian and zero in the Venetian dialect. The second section showed how calculations dealing with currency conversions, compound interest, and the determination of profit could benefit businesses. The third and fourth sections addressed a number of mathematical problems, including irrational numbers and the Fibonacci sequence that the author is most known for today.
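The sequence itself comes from Liber abaci’s famous rabbit-breeding problem and is easy to generate; here is a brief Python sketch (my illustration) in which each term is the sum of the previous two.

```python
def fibonacci(n: int) -> list[int]:
    """Return the first n terms of the Fibonacci sequence from Liber abaci."""
    terms = [1, 1]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])  # each term sums the previous two
    return terms[:n]

print(fibonacci(12))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```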

It was the development of the zero and the related positional system that made modern “Western” calculation systems so effective. It has become necessary for a variety of mathematical purposes, including decimals, sets, and, quite significantly, the mathematical systems that make it easier to work with larger quantities. Using the place-holding system, nine numbers plus zero can represent an infinity of figures. The same symbol, such as 7, takes on different meanings (7, 70, 700, etc.) depending on its location within the representation of the number. The positional base-10 system using ten different digits 0, …, 9 has been globally accepted as the primary mathematical standard for human calculation.
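A short Python sketch (again my illustration) makes the positional principle concrete: a numeral is evaluated by multiplying each digit by a power of the base, with zero holding the empty places open.

```python
def positional_value(digits: str, base: int = 10) -> int:
    """Evaluate a digit string positionally: each place is a power of the base."""
    value = 0
    for d in digits:
        value = value * base + int(d)  # shift one place left, then add the digit
    return value

# The same symbol 7 contributes 7, 70, or 700 depending on its position.
for numeral in ["7", "70", "700"]:
    print(numeral, "=", positional_value(numeral))
```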

The base-10 positional system probably emerged from counting on our fingers, but it is well suited to arithmetical computations, as it needs only ten different symbols and uses the zero to mark the place of a power of the base not actually occurring. Think of a car’s odometer, in which every ten miles turns the next dial, and milestones such as 10,000 and 100,000 are markers of a car’s age to which we often ascribe significance.

One of the strengths of the spreadsheet is its ability to combine complex calculations with the human understanding of the base-10 mathematical system. While the “alien intelligence” of computers can now handle more complex base systems, such as the duodecimal (base-12) and sexagesimal (base-60) place-holding systems used in time and geographic calculations, base-10 is useful because, quite frankly, humans are used to it. Although computers use a base-2 system with nearly infinite combinations of 1s and 0s, the positional base-10 system has been globally accepted as the mathematical standard for human calculation and a key component of spreadsheet usability.
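The same positional logic works in any base; a brief sketch (my illustration) expresses one quantity in binary and in sexagesimal, the base-60 system we still use for time.

```python
def to_base(n: int, base: int) -> list[int]:
    """Express n as a list of digits in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % base)  # the remainder is the next digit
        n //= base
    return digits[::-1]

print(to_base(3661, 2))   # [1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1] -- as a computer stores it
print(to_base(3661, 60))  # [1, 1, 1] -- 1 hour, 1 minute, 1 second
```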

The calculative abilities of zero with other Indo-Arabic numbers brought new levels of certainty and confidence to commerce and eventually science in the West. By 1300, zero-based accounting and other numerical techniques were being adopted by the merchant classes. Double-entry accounting techniques emerged first for tracking resources and checking for errors, but later resulted in the conceptual separation of a business from its owner, a precursor condition for the emergence of the modern corporation.
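The error-checking idea at the heart of double-entry is simple enough to sketch in a few lines of Python (a minimal illustration, with hypothetical account names): every transaction posts equal debits and credits, so a mistake surfaces as an out-of-balance ledger.

```python
from collections import defaultdict

ledger = defaultdict(int)

def post(debit_account: str, credit_account: str, amount: int) -> None:
    """Record one transaction as an equal debit and credit."""
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

post("Inventory", "Cash", 500)  # buy goods for 500
post("Cash", "Sales", 800)      # sell them for 800
assert sum(ledger.values()) == 0  # the defining double-entry invariant
print(dict(ledger))  # {'Inventory': 500, 'Cash': 300, 'Sales': -800}
```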

The spreadsheet drew on this history of numerical innovation to become a tool of organizational productivity and power. As part of my “formulating power” project, I will continue to examine the ways spreadsheets construct techno-epistemological knowledge using combinations of algorithms and other meaning-making practices.


© ALL RIGHTS RESERVED

Anthony J. Pennings, PhD is Professor at the Dept of Technology and Society at the State University of New York (SUNY) in Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He also taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii in the 1990s.

“It’s the Infrastructure, Stupid”

Posted on | February 9, 2020 | No Comments

Infrastructure, at the deepest level, is a techno-socio bond that brings together new communications technologies, new energy sources, new modes of mobility and logistics, and new built environments, enabling communities to more efficiently manage, power, and move their economic activity, social life, and governance. – Jeremy Rifkin, The Green New Deal.[1]

The 1992 US presidential election was notable for the phrase “It’s the Economy, Stupid,” as William Jefferson Clinton attacked incumbent president George H. Bush for ignoring economic and social problems at home. President Bush had won a dramatic military victory that drove the Iraqis out of Kuwait with “Desert Storm,” but the economy at home languished with high unemployment and unprecedented debt. After the election, the new Clinton-Gore administration implemented the Global Information Infrastructure (GII) that helped the obscure data networks used by academia and research institutes blossom into the Internet and its World Wide Web.

The framework combined with rapidly emerging technology and financial resources to create the famous “Bull Run” economic revival of the late 1990s. Starting with the Netscape IPO in August 1995, investment poured into “dot-coms” and telecoms. This information infrastructure continues as the dominant economic engine and social dynamic of the 21st century. However, opportunities exist to scale the networks and to include energy and mobility.

This week, I had a chance to attend the Japan-Texas Infrastructure Investment Forum in Austin. Texas and Japan are prolific trade partners and got together to discuss more “public goods” like transportation, water, and the possibilities of “smart cities.”

In this post, I want to connect the current imperative to build infrastructure in the US with green and smart technologies and this series’ emphasis on the New Deal. New thinking and infrastructure investments offer the opportunity to create enabling environments for sustainable and replenishing economic activity. These frameworks require examining political and juridical structures to open up avenues for new investments and evaluation systems. We may not want to reinvent the New Deal, but it’s a point of reference for examining the way forward.

If Texas were a country, it would have the 10th largest economy in the world, bigger than Australia, Canada, and even Russia. Japan, of course, is number 3, after the US and China. Japan is moving many of its US operations to Texas due to the Lone Star State’s business environment, rich resources, and cosmopolitan cities. Texas exports chemicals, food, oil and gas, and electronic products.

I was primarily interested in seeing the presentations on Smart City developments, but was intrigued by the talks on water management in Texas (I’m Dutch, it’s in my genes) and transportation. I didn’t know, for example, that Texas desalinates over 150 million gallons of water every day. Also, Texans drive over 550 million miles a day. What would be the implications of renewable-powered desalination for agriculture and general water needs? How do you manage that much road maintenance? What alternatives are available to the traditional internal combustion vehicle? Just a few tidbits that set the context for the day’s presentations on building infrastructure in Texas.

One of the objectives for Japan was to pursue contracts for their Shinkansen, the high-speed railroad that makes Japan fairly easy to traverse. It’s only a matter of time before Dallas, Austin, and Houston are connected with more efficient lines, and Japan wants to get in on that.

They even brought in the Japan Overseas Infrastructure Investment Corporation for Transport & Urban Development (JOIN) and the Japan Bank for International Cooperation (JBIC) to support the funding of the operation. Having both driven the routes to Dallas and Houston as well as taken the railroad in Japan, I certainly would enjoy the Shinkansen bullet train for my next Texas trip.

Texas is unique because it is a major carbon-extracting and exporting state. But like Saudi Arabia, it recognizes the importance of pursuing infrastructural projects with a green tinge. Growth in Texas is expected to increase substantially over the next several decades, and that means new strategies for mobility, water availability, and disaster risk reduction.

Now we have to confront and analyze the implications of a post-petroleum age. Exploring the New Deal gives us a better perspective on the size of that task and how deep, and sometimes intrusive, the process will be. The New Deal was a monumental, multi-decade endeavor that made America great and will not be easily matched.

Roosevelt’s New Deal was primarily an infrastructure program. Facing economic collapse and massive unemployment, FDR promised “a new deal for the American people.”[2] In the first 100 days of the FDR administration, some 15 bills were passed to assist the recovery and put people to work. Some of the major projects were the Civilian Conservation Corps (CCC), the Public Works Administration (PWA), the Tennessee Valley Authority (TVA), and the related Rural Electrification Act. These were all designed to get people back to work and build an infrastructure that would support the new energy paradigm – hydrocarbons and electricity.

While economic recovery sputtered throughout the 1930s, federally-funded infrastructure projects built the roads, tunnels, and bridges for cars and trucks, the dams and coal-fired utilities for electrification, and some 125,000 public buildings, including schools, hospitals, and government facilities. Even airports like LaGuardia outside Manhattan were a product of the New Deal.

The Hoover Dam tapped the Colorado River and provided electricity for the entire Southwest, including Los Angeles and a small town called Las Vegas. When the dam was completed in 1936, it was the largest electricity-producing facility in the world, providing power to Arizona, California, and Nevada. It electrified homes, entertainment, industry, and agriculture.

Another big infrastructure project for the New Deal was the interstate highway effort begun in 1938, which eventually laid down almost 47,000 miles of public roads. Before the attack on Pearl Harbor, Roosevelt appointed a National Interregional Highway Committee to study the need for several cross-country interstate highways. The building of the “Autobahn” in Nazi Germany was a major source of motivation. In its report, Interregional Highways, the committee recommended constructing a 40,000-mile (64,000 km) interstate highway system. It was an extraordinary push for the mobilization and motorization of the US. It provided extraordinary interconnection between cities and was the “killer app” for the automobile.

Vice-President Gore was influenced by his father, Senator Al Gore Sr., who co-authored the Federal-Aid Highway Act of 1956 infrastructure program during the Eisenhower Administration. Dwight Eisenhower had studied the German Reichsautobahnen as he plotted the invasion of Europe during World War II and was committed to building a nationwide highway system in the US. It created a network of roadways that sparked the US economy, eventually reaching some 46,000 miles. The son used the inspiration to conceptualize the National Information Infrastructure plan that turned the NSFNET into the Internet.

Citation APA (7th Edition)

Pennings, A.J. (2020, Feb 9). It’s the Infrastructure, Stupid. apennings.com https://apennings.com/democratic-political-economies/from-new-deal-to-green-new-deal-part-3-its-the-infrastructure-stupid/

Notes

[1] The quote “It’s the Infrastructure, Stupid” in the title also draws on Jeremy Rifkin, The Green New Deal. “It’s the Economy, Stupid” was coined by campaign strategist James Carville during the Clinton-Gore 1992 presidential campaign to counter George H. Bush’s success with the first Iraq War.
[2] Blitz, M. (2017, November 20). When America’s Infrastructure Saved Democracy. Retrieved February 8, 2020, from https://www.popularmechanics.com/technology/infrastructure/a24692/fdr-new-deal-wpa-infrastructure/

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

FROM NEW DEAL TO GREEN NEW DEAL, Part 2: The Failure of the National Industrial Recovery Act of 1933

Posted on | February 6, 2020 | No Comments

This is the second of an ongoing series about how the New Deal restructured the American political economy and what lessons it has for transitioning to a Green New Deal. The first post dealt with the New Deal emerging from the wake of the Great Depression and the immediate policy responses by the Roosevelt administration to deal with the banking and financial crisis. This post looks at a failure, the National Industrial Recovery Agency’s (NIRA) attempts to administer a wide range of codes and practices for business and industry. What are the lessons of the NIRA for the Green New Deal?

The green energy revolution strives for zero emissions of harmful carbon by-products and near-zero marginal costs after installation. Hydrocarbons from coal or oil are incredible sources of energy, but they emit deadly carbon monoxide and climate-threatening carbon dioxide. They are also used up in consumption and require constant replenishment. That is good for petro-states like Russia and Saudi Arabia, but a constant currency drain for the rest of the world. Renewables produce a constant supply of energy. They don’t last forever and eventually require replacement, but their economics are extraordinary and will make possible exciting new opportunities like the desalination and purification of saltwater.

The Green New Deal will reach deep into the commerce and industrialization of the global economy. While the movement is gaining momentum, a few sold Teslas and some homes with solar panels do not a revolution make. (Although I recently had my first ride in a Tesla, and it was awesome.)

The Green New Deal will need to continue to build the Smart Grid technologically while changing utility regulation to allow smaller microgrids that can utilize local resources, including buying energy from residences and small businesses. Broadband networks and the Internet of Things (IoT) will be crucial to this convergence, providing the “smart” aspects of the grid. Other industries that will be affected include agriculture, architecture, automobiles, construction, supply chain logistics, the military, etc. [1]

How will they all work together, not only between different industries but between different companies within the same sector? How will winners emerge? What will happen to losers? Solyndra, for example, became a major political issue when it filed for bankruptcy in 2011. It was a manufacturer of innovative thin-film solar cells based in Fremont, California. Solyndra received significant subsidies in guaranteed loans from the Department of Energy as part of the economic stimulus plan. But it still couldn’t compete with more traditional solar cell technology companies, especially from China. What are we to make of situations like this?

The Green New Deal faces many significant and complicated issues. What are the electrical standards, for example? The building codes? The sewage interconnections? How will networks of automobile recharging (or hydrogen refueling) stations be established? Can prices within an industry be regularized without penalizing consumers? Does labor organization need to be revived from its decimation during the Reagan years?

A step back for the New Deal…

In his larger attempt at industrial structuring, President Roosevelt sent a plan to Congress that became the National Industrial Recovery Act of 1933. Congress passed it into law on June 16 of that year. The Act created the National Industrial Recovery Agency (NIRA) to administer codes of practice for business and industry. The Act was “a clear victory for the many prominent businessmen who were backing cartelization as a solution to the nation’s industrial problems.” [2]

The Act allowed industries to create “codes of fair practice.” By suspending antitrust laws, these codes allowed corporations in particular sectors to set prices, restrict output, and increase profits. These were agreements that enabled the NIRA to administer, with the dominant trade associations, what were in effect national cartels. Although the Act was later declared unconstitutional by the Supreme Court, the codes were carried into later legislation and became part of the national restructuring of the US economy.

While Hoover had limited his activism to the establishment of the Reconstruction Finance Corporation that served as a lender of last resort to banks and the railroads, he opposed cartelization and thus alienated himself from many business leaders. The NIRA placated most of the big business concerns. However, for political reasons, Roosevelt’s plan was designed to serve many other constituencies. He had made concessions to liberal conservatives to reassure them that socialism was not anywhere near the path he was taking, nor was he forwarding “monopoly.” But opposition did mount from small business people and farmers who saw the codes being dominated by big business.

Finally, the Act started to antagonize large corporations because of Section 7a, which encouraged labor organization and had led to a series of violent strikes.

A poultry company from Brooklyn, NY, sued the NIRA, and the case went all the way to the Supreme Court. In Schechter Poultry Corp. v. United States, the U.S. Supreme Court rejected the compulsory-code system. SCOTUS argued that the NIRA improperly delegated legislative powers to the executive and that regulating poultry codes did not meet the standards of interstate commerce, a constitutional requirement for Federal regulation. In May 1935, the Supreme Court declared the Act unconstitutional.

The New Deal shows us how massive and complicated a major economic reorganization can be. The Green New Deal should seriously study the issues that FDR confronted to revive the economy and chart a new course for the US that avoided revolution. The case of the NIRA gives us some idea of the scale of the transition and the challenges of government intervention in the economy.

Notes

[1] Rifkin, J. (2020) The Green New Deal. Retrieved from

[2] McQuail, K. (1982) Big Business and Presidential Power. NY: William Morrow and Company. p. 27.


© ALL RIGHTS RESERVED



Anthony J. Pennings, Ph.D. is Professor in the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University. Previously, he taught at Hannam University in South Korea, Marist College in New York, and Victoria University in New Zealand. He keeps his American home in Austin, Texas, where he has taught in the Digital Media MBA program at St. Edwards University. He joyfully spent 9 years at the East-West Center in Honolulu, Hawaii.

YouTube Meaning-Creating (and Money-Making) Practices

Posted on | February 2, 2020 | No Comments

Note: This is required reading for my Visual Rhetoric and IT class.

YouTube has emerged as the primary global televisual medium, attracting about 1.3 billion viewers from countries around the world, with over 5 billion videos watched every day. People watch some 3.25 billion hours of YouTube videos each month, and over ten thousand YouTube videos have generated over 1 billion views since they were posted. YouTube content ranges from homemade DIY videos to professional high-definition television productions.

Alphabet recently announced that YouTube made $15 billion in advertising revenues in 2019, growing 36% over 2018. That is a lot of money to spread around.

YouTube provides opportunities for new publishers or “vloggers” covering a wide range of topics. Every minute, some 400 hours of video are uploaded to YouTube from all around the world. Not many of those creators get rich, but some have done extraordinarily well. Together, the world’s 10 highest-paid YouTube stars made $180 million in the year between June 1, 2017, and June 1, 2018, almost double the year before.

One big star to emerge on YouTube is Daniel Middleton (DanTDM), who made US$18.5 million in 2018. Middleton is a British professional gamer, and his videos primarily cover games like Minecraft, Plants vs. Zombies, and other favorite games that DanTDM’s primary audience, young kids, enjoy, including reviews of the massive hit Fortnite.

What makes DanTDM’s YouTube videos successful? What does he do to keep viewers interested in his content, and what keeps his audience coming back for more? How does he create entertainment and meaning for those who watch his show?

Even more extraordinary is Ryan ToysReview (now called Ryan’s World!). Ryan is the 7-year-old host of the show, and his was the top-earning YouTube channel at $22.5 million for the year up to June 1, 2018.

What knowledge can we gather about Ryan’s World!? What observations can we make about his show and other popular channels?

This series of posts will set out to explore a crucial relationship in (digital) media studies – between cultural/technical production practices and the meanings, emotions, and feelings that are produced by those practices. Media production involves a combination of equipment and processes to capture and construct various images, edit sequences, and integrate audio and sound effects to produce specific results. These are the meaning-making techniques that construct our blockbuster movies, our Netflix binge favorites, our local newscasts, and also the YouTube channels that educate and entertain us. Can we use some of the same analytical techniques to “interrogate” YouTube channels?

A good deal of related work has been done on film and television. By exploring camera shots (close-ups, zooms, pans, shot composition) as well as montage (cutting rates, parallel editing, reaction shots, wipes, etc.), film studies and even television studies have given us a literacy for understanding the power of these media. These important meaning-making practices can also be discerned in the realm of YouTube videos.

Social media apps like YouTube present significant new complications in understanding the power of the global mediasphere. One area of concern is the metrics associated with YouTube. Ratings were always a significant part of television services for determining the value of programming. YouTube measures “views” and adds likes, dislikes, shares, play-listing, and subscribers to measure the credibility and commercial viability of a channel. But vulnerabilities in the system allow many of these numbers to be tweaked by the “fake-view” ecosystem that has grown around YouTube.
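To make the idea of combining such metrics concrete, here is a hypothetical Python sketch; the weighting scheme and field names are my own illustration, not YouTube’s actual (undisclosed) ranking formula.

```python
def engagement_score(views: int, likes: int, dislikes: int,
                     shares: int, subscribers_gained: int) -> float:
    """Hypothetical channel metric: interactions per view, scaled by sentiment."""
    interactions = likes + dislikes + shares + subscribers_gained
    rate = interactions / views if views else 0.0   # interactions per view
    sentiment = (likes - dislikes) / ((likes + dislikes) or 1)
    return rate * (1 + sentiment)

print(engagement_score(views=1_000_000, likes=40_000, dislikes=2_000,
                       shares=5_000, subscribers_gained=1_500))
```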

YouTube has become a new frontier for media studies. The opportunity exists now to pioneer strategies for understanding this intriguing visual medium and the sets of meanings it creates. What techniques are used in YouTube “channels”? What types of persuasive techniques are effective on YouTube channels? How do they differ from techniques used in film and television? Who is driving the narration of the video, and what voices are they using?

But there are broader issues to address as well. What are the cultural, economic, and social implications of YouTube? What new ideas and cultural forms diffuse via YouTube? What economic activities and opportunities are made available through the platform? What impact will YouTube have on existing institutions?

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

FROM NEW DEAL TO GREEN NEW DEAL, Part 1: Roosevelt Saves Capitalism

Posted on | February 2, 2020 | No Comments

“I pledge you, I pledge myself, to a new deal for the American people.” – FDR in the summer of 1932.

Recent discussions about a proposed Green New Deal encouraged me to review some of my notes on the original New Deal and to consider whether it could provide relevant insights into our current situation. The New Deal began in the early 1930s as a response to the economic crash of the late 1920s. It built up momentum during the “Great Depression” of the next decade. It ended, arguably, just before the end of the millennium with the repeal of the Glass-Steagall Act on November 12, 1999, when President Clinton signed the Financial Services Modernization Act. The New Deal was one of the most influential sets of legislation in American history, and it set the course of the modern US political economy.

On October 1, 1928, the Dow Industrial Average closed at 240.07. Earlier that year, “the Dow” was increased to thirty stocks from the traditional 12 stocks. The “Roaring Twenties” had been a good decade for many investors. But that was about to end for most.

The Dow would continue to rise over the next year to 381 before beginning its dramatic descent. In October 1929, the Dow began its steep decline as investors hastened to liquidate their positions. From Thursday, October 24 to Tuesday, October 29, the stock market crashed dramatically, eventually falling to just 41.

Capitalism had run into deep trouble. Never before had public confidence been shaken so thoroughly. Unemployment was estimated to have risen to 25% in the US and England. It was even worse in Germany, which had been strapped with war reparations by the Treaty of Versailles. Political unrest was brewing in democratic political economies around the world. It was little more than a decade earlier that a Communist revolution had occurred in Russia, and many believed that the problems of the industrial economy needed a communist or socialist solution.

In the US, protest marches became frequent. Rent riots broke out as people organized to prevent home evictions and farm foreclosures, often physically. The country was in such dire straits that many believed it could have gone in any political direction. This turmoil carried into the term of the next President, Franklin D. Roosevelt, who beat Republican incumbent Herbert Hoover and was inaugurated as President on March 4, 1933.[1]

Roosevelt moved immediately to save the banking system that had been experiencing significant drains on deposits. He closed the banks until the federal auditors could review the books. Although many were deemed insolvent, he decided to save the remaining banks by signing the Bank Moratorium that suspended acknowledgment of their demise.

After deliberating with Treasury Department officials, former Hoover advisers, and several leading bankers to develop intervention measures and reform practices, Roosevelt reopened many banks. However, he decided to leave them in the hands of their original owners instead of nationalizing them as many thought he might.

The Emergency Banking Act of March 9 allowed the Federal Reserve to make loans to businesses and nonmember banks against the assets that they were allowed to define very broadly. The Reconstruction Finance Corporation, Hoover’s singular response to the economic collapse, was authorized to buy stock in banks and thus provide them with working capital. Three days later, Roosevelt made his first “fireside chat,” in which he made a plea for citizens to redeposit their money. The legislation plus his encouragement were a success, and by the end of the month, he had saved the banking system.

Immediately after the inauguration, Roosevelt took controversial measures to stop the hoarding of gold. In April, Roosevelt took the US off the gold standard and called on Americans to turn in their gold coins for paper currency. On January 31, 1934, the US devalued the dollar, raising the official price of gold from $20.67 to $35 an ounce.

In 1935, amidst concerns about the threat of fascism growing in Europe, Roosevelt decided to move US gold reserves from New York to Fort Knox in Kentucky. Moving the gold past the Appalachian Mountains and next to a military tank battalion and training facility reduced the risk of a Nazi attack on Manhattan that might capture significant amounts of gold bullion. Central bank vaults were primary targets of the Nazis in their early invasions of Czechoslovakia and Poland, as gold was essential to their efforts to procure oil and other critical supplies internationally for the war effort.

Despite the boldness and swiftness with which FDR carried out his plan, it was basically a very conservative response to the national disaster. The program he embarked on was essentially to save capitalism. At the time, Russia was firmly in the grip of its Communist leaders, and Hitler and the National Socialists (Nazis) had strengthened their hold on Germany. Many business leaders, steeped in the rhetoric of laissez-faire and free-enterprise economics, were slow to realize the conservativeness of the New Deal. They chastised Roosevelt in the newspapers and on the radio. But he was very popular with the people, including Ronald Reagan, the future US president who would later become one of the New Deal’s fiercest critics.

Over the next few years, the administration established a reformed capitalist system based on the rationalization of business, finance, and labor practices and focused on long-term stability. Banks were separated from their other activities, such as investment banking and stock brokerage, through the Glass-Steagall Act of 1933. The Securities Exchange Act, passed the next year, created the Securities and Exchange Commission (SEC) to oversee and prevent manipulation and rigging in the stock markets. The Federal Reserve Board in Washington was also given greater powers to oversee the regional Reserve Banks. The Federal Deposit Insurance Corporation (FDIC) was instituted to prevent further bank panics and restore depositor confidence.[2]

Hopefully, the transition to a green economy will not take such a dramatic event as the Great Depression. Tragically, we might be facing a bigger threat with climate change and environmental pollution. The New Deal was central to transitioning to a carbon-based industrial model still heavily reliant on manual labor. Coal and oil were central to the New Deal, as was the process of electrification.

A very crucial component of the new economic transition will be “green finance”: how will the Green New Deal be paid for, and what are the ideological implications of the process? Will the current banking system suffice? Can pension funds and other wealth management funds be sufficiently incentivized to make the crucial investments? Will the political will be developed to stop subsidizing fossil fuels? Perhaps most importantly, we’ll have to reconcile the roles of government and the private sector. Specifically, will the Green New Deal be “socialist,” or can capitalism be harnessed for the transition?

Part 2 of this series will discuss an early failure of the New Deal, the National Industrial Recovery Agency’s (NIRA) attempts to administer the economy and a wide range of codes and practices for business and industry.

Notes

[1] The Three Roosevelts. (n.d.). Retrieved February 16, 2020.
[2] McQuail, K. (1982) Big Business and Presidential Power. NY: William Morrow and Company.


© ALL RIGHTS RESERVED



Anthony J. Pennings, Ph.D. is Professor in the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University. Previously, he taught at Hannam University in South Korea, Marist College in New York, and Victoria University in New Zealand. He keeps his American home in Austin, Texas, where he has taught in the Digital Media MBA program at St. Edwards University. He joyfully spent 9 years at the East-West Center in Honolulu, Hawaii.

Factors Supporting Early Computerization and Data Communications

Posted on | January 15, 2020 | No Comments

Several factors contributed to the development of computers and data networking in the early post-World War II era. This post looks at major influences that created the modern realm of computerization and networking of data and information that has transformed the world.

During the war, the first computers were created to calculate tables for artillery ballistics and to help decode encrypted messages. The British developed computing technology to break the Germans’ Enigma-based codes. Eventually, a giant vacuum-tube computer, the ENIAC (Electronic Numerical Integrator and Computer), emerged as the first completely electronic, general-purpose computer. Although it was completed too late to impact the outcome of the war significantly, the promise of its potential made it a celebrity in the mid-1940s.

One major factor that supported post-war economic development was the availability of an electronics infrastructure on the East Coast. Wartime funding, primarily for the development of radar, helped build a complex of industrial organizations and expertise that provided a foundation for the computer and electronics industries. Centered in Boston, it stretched out to IBM’s Hudson River facilities, down through Manhattan and New Jersey’s Bell Laboratories, and to Philadelphia’s University of Pennsylvania. MIT emerged as the primary center of innovation with the development of the early real-time computers, monitors, modems, and time-sharing technology.

A second factor was the invention and sharing of the transistor by Bell Labs. This technology provided the seminal means for the miniaturization of processing power, leading to the integrated circuit and later the microprocessor. What helped the process was that AT&T was facing anti-trust action and the divestiture of its manufacturing arm, Western Electric. It consequently decided to share its technology with other companies to avoid serious government intervention in its affairs. Companies such as Texas Instruments, Fairchild, and Motorola were a few of the new licensees. They set out to capitalize on the new invention. The result was a wide variety of products, starting with missile guidance systems and computers and later consumer goods like transistor radios and calculators.

The escalation of tensions with Communist China and the USSR was a third factor. The “Cold War” provided a permanent stream of funding for the development and maturation of information technologies and created the impetus to institutionalize a trajectory that President Eisenhower called the “Military-Industrial Complex.” The support for the industry was extensive, especially from the newly created Central Intelligence Agency (CIA) and the ultra-secretive National Security Agency (NSA). They literally built hectares of big mainframe computers, as did the Office of Naval Research and government organizations such as the Atomic Energy Commission at Los Alamos.

Related, but still deserving of a separate mention, was the desire to create an early warning defense system linking computers via telecommunications to an extensive grid of radars around the US and Canada. Designed by MIT in the 1950s and built in IBM’s Poughkeepsie facilities, it would later become NORAD and be located deep within Cheyenne Mountain in Colorado. The project cost the US government billions of dollars at a time when that was considered big money. Called the Semi-Automatic Ground Environment, or SAGE, it created the foundation of the computer industry by supporting the IBM AN/FSQ-7. Burroughs, DEC, and Honeywell were also spinoffs that became viable business computer suppliers. The SAGE technology was instrumental in the development of the IBM System/360 mainframe, released in the mid-1960s to become the re-programmable business computer of choice. It also helped develop the data communications modem and the “survivable” communication technology concepts that would later be crucial for the Internet.

Fifth, the formation of ARPA in reaction to the USSR’s Sputnik satellite helped seed computer science departments throughout the US and directly funded the ARPANET, the precursor to the Internet. Specifically, its Information Processing Techniques Office (IPTO) spearheaded an aggressive attempt to develop interactive computer technologies. At first, timesharing technologies were developed that shared a mainframe among numerous terminals. Later, resource-sharing was created, allowing a terminal to access many different computers in different locations. Add TCP/IP and hypertext protocols in the next two decades, and we would have the modern Internet.

Sixth, the formation of the US space program leading to NASA propelled the development of US rockets and the capability to launch satellites into high orbits. Drawing on Nazi Germany’s V-2 rocketry, the US overcame a weak start to become the leader in the “space race.” The advent of human-crewed space flights served the dual purpose of developing rockets capable of carrying heavy payloads for nuclear warheads and satellites, as well as for harnessing the popular imagination for the future funding of the space program. A global network of communications satellites in the geosynchronous “Clarke Belt” orbit around the Earth and the “New Look” reconnaissance program with remote sensing and surveillance spacecraft effectively utilized space for military and commercial purposes.

The Communications Satellite Act of 1962 committed the US to the establishment of Intelsat, an international consortium for satellite communications. Intelsat mobilized national telecommunications organizations from around the world to invest in Arthur C. Clarke’s vision of a global communications system based on satellites placed in orbit 22,300 miles in space. Intelsat was quickly transformed into a workable commercial system for voice, video, and later data communications.

Seventh, the refinement of transistors into new “integrated circuits” that put several transistors on a single “chip” furthered the miniaturization of information processing. At first, the process was heavily subsidized by government and military projects. The goal to put humans on the moon created the need for a “fourth crewmember,” the Apollo Guidance Computer (AGC). It was a new set of “miniaturized” guidance technologies utilizing advanced ICs that could control major functions on the Command and Lunar Modules. Also, the implementation of a new defense policy that came to be called MAD (Mutually Assured Destruction) required a buildup of Minuteman intercontinental missiles. It also gave a significant boost to the subsidization and refinement of new ICs and the eventual development of semiconductor microprocessors with transistors etched into the material.

Eighth, the bureaucracies of the “Great Society” created new needs for information processing technologies. Just as the New Deal helped IBM survive the Great Depression by passing the Social Security Act, the growth of the civilian government allowed new companies to prosper. Ross Perot’s Electronic Data Systems (EDS), for example, earned billions of dollars from Medicaid contracts. Xerox was another company that profited extensively from the Great Society, placing its photocopying machines in a wide variety of bureaucracies. It took part of its profits and invested them in a crucial technological incubation center, the Palo Alto Research Center (PARC).

PARC hired many of the best ARPA-sponsored computer scientists from around the US. It produced many of the seminal technologies crucial for graphical user interfaces, laser printers, local area networking, and object-oriented programming. In exchange for the opportunity to buy Apple shares, it gave Apple Computer much of the technology that went into the Lisa and the Macintosh. Other PARC alumni went to Microsoft and 3Com, helping produce Windows and Ethernet.

These factors reflect the extraordinary investments by the US government during the Cold War and Space Race in building a new infrastructure based on information and data communications technologies. Later, global finance made significant investments to advance these technologies for its own purposes. With the introduction of the Internet and World Wide Web, the infrastructure was developed enough to attract widespread commercial investment.


© ALL RIGHTS RESERVED



Anthony J. Pennings, Ph.D. is Professor in the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University. Previously, he taught at Hannam University in South Korea, Marist College in New York, and Victoria University in New Zealand. He keeps his American home in Austin, Texas, where he has taught in the Digital Media MBA program at St. Edwards University. He joyfully spent 9 years at the East-West Center in Honolulu, Hawaii.

Four Futures: One Humanity

Posted on | January 2, 2020 | No Comments

I’ve had a long-term interest in an area of research called “Futures Studies.” I read Alvin Toffler’s Future Shock in high school and The Third Wave in college, and reading science fiction actually led to my studies in biochemistry and other sciences as an undergraduate. But the future is hard to predict, and it was coming on so fast that by the 1990s, one of my favorite authors, William Gibson, quipped appropriately, “the best science fiction is on CNN.” So I wrote my PhD dissertation on electric money in dystopias, and I continue to use “cy-fi” or cyberpunk as a mode of investigation for social and technological changes.

This post looks at a few of my favorite futurists and a book I recently found intriguing, which presents four visions of the future.

One of my favorite futurists was Buckminster Fuller. He mixed science with politics and had a unique view on economics. Comprehensive Anticipatory Design Science was the name he gave to his approach to the future.

It was, in part, based on his “Synergetics” science that would eventually lead to a significant discovery in chemistry called Fullerenes or “Bucky Balls.” These are small soccer ball-like molecules that are leading to new materials and medicines. One of his first inventions based on Synergetics was the geodesic dome, which provided durable protection for radar installations starting in the Cold War and is still used in the design of many buildings.

Fuller challenged the status quo with his peculiar use of language, which seemed to counter common sense and yet was surprisingly insightful. He often used the metaphor of a bow and arrow to describe the importance of history in the study of the future. The further back you can pull the bow’s string, he explained, the farther the arrow will travel. Likewise, he believed the further back you take your historical analysis, the better you can project into the future. Fuller promoted a utopian future based on technologies, provided those technologies followed synergetic design principles and did more with less.

Another major influence was Jim Dator, a professor at the University of Hawaii while I was working on my PhD. He dissuaded his students from the idea of one true future whose probability could be calculated with positivistic certainty, and suggested we use a futures visioning process to envision and develop several alternative scenarios.

Dator has been working on the analysis of four types of future scenarios: Continued Growth, Decline and Collapse, Limits and Discipline, and Transformation. It’s possible to use a standard S-shaped growth curve to visualize these potential paths, as shown below.

Continued Growth projects the current emphasis on economic development and its social and environmental implications.

Decline and Collapse suggests a catastrophic turnaround due to natural or human-made disasters. Pollution and changes associated with massive carbon dioxide and methane releases are current concerns as they are linked with massive weather changes influencing droughts, floods, and wildfires.

Limits and Discipline appeals to a society that values precious places, processes, or values that are threatened by the existing economic and social trajectory. In this scenario, it is believed that life should be “disciplined” around a set of fundamental cultural, ideological, scientific, or religious values, including “green” solutions such as recycling or social distancing and mask-wearing in pandemic times.

Finally, a Transformative society anticipates a radical makeover of society based on technological or biological revolutions. A “singularity” of network-connected humans and AI is one projected scenario. The creation of new genetically reconfigured “posthuman” bodies is another vision, perhaps due to the viral innovations of COVID-19 research. This scenario posits entirely redesigned sets of global economic and political structures.

[Figure: The four future scenarios visualized against a standard S-shaped growth curve.]
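For readers who want to reproduce the curve, here is a minimal Python sketch of the logistic function that underlies such S-curves; the parameter names and values are illustrative only, not from Dator’s work.

```python
import math

def logistic(t: float, capacity: float = 1.0,
             growth_rate: float = 1.0, midpoint: float = 0.0) -> float:
    """Standard logistic (S-shaped) growth curve."""
    return capacity / (1 + math.exp(-growth_rate * (t - midpoint)))

# Growth starts slowly, accelerates, then levels off near the capacity --
# the bend where the four scenarios (growth, collapse, discipline,
# transformation) diverge.
for t in range(-6, 7, 2):
    print(f"t={t:+d}  level={logistic(t):.3f}")
```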

So I was immediately drawn to Four Futures: Life After Capitalism (2016) by Peter Frase when it was recommended by a former classmate. It is an extremely interesting read. His publisher, Verso, advertises it as “an exhilarating exploration into the utopias and dystopias that could develop from present society.”

For Frase, something new is coming, and it’s based on two main drivers: climate change and automation. These issues are bringing both problems and promises for humanity and will likely result in one of four scenarios:

  • a society of equality and abundance (Communism);
  • a society of hierarchy and abundance (Rentism);
  • a society of equality and scarcity (Socialism); and
  • a society of hierarchy and scarcity (Exterminism).

Or at least we can use these four “ideal types” to think about the future and plan strategies for approximating a preferred one.[1] They set up contrasting visions of the future and work to produce an analytical structure that provides provocative ideas and insights. Granted, some of these terms are quite charged in contemporary society. Communism is the boogeyman of the right; Exterminism is the great fear of the left. Rentism is probably the most unfamiliar term but maybe the most relevant.

Socialism is a future that has not kicked its hydrocarbon habits but decides to share the misery. Frase puts socialism at the conjunction of equality and scarcity. Or, as China’s Deng Xiaoping pondered after he replaced Mao, “I can distribute poverty or I can distribute wealth.” Contemporary socialism operates within the limits of hydrocarbon access on the one hand, and the extreme ecological damage it creates on the other.

Addressing the resultant environmental issues may create new conditions for democratic governance and distribution. Moving from carbon scarcity to electric abundance is the major economic and technological challenge of our time, but a post-scarcity future without provisions for equality presents its own dangers.

Exterminism is a future of both scarcity and inequality. Automation has made labor redundant, and environmental damage has made living conditions dangerous. The rich hide in “enclave societies” behind gates, or perhaps “off-world,” and become increasingly desensitized to the conditions of the poor. Just as tabulating machines, punched cards, and tattooed prisoners enabled the Nazis’ Final Solution, social media and big data technologies are available for identifying those classified as unwanted by a society. Immigrants, refugees, gender deviants, as well as poor people in general, could be easily targeted.

Rentism is a future that is produced when strong intellectual property (IP) laws persist and dominate over a new era of manufactured commodities and creative products. These are produced by 3-D printers and Star Trek-like replicators as well as digital cameras and other ICTs. We already live in a world economy dominated by supply chains that produce major international flows of royalty payments. Copyrights and patents bestow rents to the owner of these intellectual properties, making them the newly rich and also creating a new era of scarcity. Legalism will proliferate to keep track of all the IP uses, but AI will largely replace lawyers.

Communism, the fourth scenario, is not your father’s Communism. This is a highly automated future with increased leisure time and a new abundance of resources based on renewable energy. Frase sees it as the combination of social equality and economic abundance. I prefer the term Utopianism, as it’s differently loaded with meaning.

While some think this only comes with great political mobilization and struggle, Frase believes this process will be facilitated by technological change and the institutional responses that come with regulatory adjustments. Those technological changes will also be the catalyst for a stronger democracy.

Frase grounds his work in The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014) by Erik Brynjolfsson and Andrew McAfee. The first machine age was made possible by the application of steam power to industrial processes, which led to subsequent innovations in energy and other technologies, changing work, society, and the economy. Brynjolfsson and McAfee argued that the second machine age is based on digital technologies. These technologies produce information that has little to no marginal cost when reproduced and shared, continues to double in processing power roughly every two years, and recombines with other innovations through its “combinatorial” power.
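The doubling claim compounds quickly; a back-of-envelope Python sketch (my illustration, not from the book) shows the growth factor it implies.

```python
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Capacity multiplier if processing power doubles every doubling_period years."""
    return 2 ** (years / doubling_period)

print(growth_factor(10))  # 32x in a decade
print(growth_factor(20))  # 1024x in two decades
```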

Music, for example, can be reproduced at little cost and distributed to smartphones with incredible abilities to provide high-quality sound, produce playlists, and supply lyrics and other artist information. That same device records data, takes and stores pictures, and makes phone calls. The authors call this “bounty”: massive benefits allowing us to do Bucky Fuller’s more with less, like talking or videoconferencing overseas for hours at virtually no cost.

Lastly, the term “spread” refers to the increasing inequality that is also resulting from the widespread adoption of new technology. Automation will continue to eliminate routine jobs and, at the least, keep wages stagnant in certain areas. Furthermore, networked technologies tend to create winner-take-all markets, and the globally linked stock markets have dramatically improved the wealth of investors.

These digital technologies produce more: more education, more entertainment, more health care, more travel, etc. Still, the future of the social and political institutions that they will produce is yet to be determined. Futures studies is an interesting exercise in thinking about available directions and choices to be made.

Notes

[1] The notion of ideal types comes primarily from Max Weber.


© ALL RIGHTS RESERVED



Anthony J. Pennings, Ph.D. is a Professor in the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University. Previously, he taught at Hannam University in South Korea, Marist College in New York, and Victoria University in New Zealand. He keeps his American home in Austin, Texas, where he has taught in the Digital Media MBA program at St. Edwards University. He joyfully spent 9 years at the East-West Center in Honolulu, Hawaii.

The CDA’s Section 230: How Facebook and other ISPs became Exempt from Third Party Content Liabilities

Posted on | November 26, 2019 | No Comments

“The rewrite of the communications law that emerged by early 1996 was driven by the appetite of the Bell legatees to position themselves as central providers of both content and conduit for the information age.” – Patricia Aufderheide [1]

Facebook and other Internet Service Providers (ISPs) face criticism over the legitimacy of the third-party content they carry and their attempts to manage it. Political advertising has been a major issue, and more recently, the legitimacy of President Trump’s tweets. Another issue is the social media activity on Robinhood and Reddit that created a spike in the share price of GameStop. This was exactly the type of activity that prompted Congress to create legislation to protect “publishers.”

This post discusses how legislation in the 1990s gave web platforms the power to censor or take down third-party content from their sites. According to Title V of the Telecommunications Act of 1996, online intermediaries such as ISPs and telcos were legally protected from what users and publishers might do or say. The legislation was passed to enhance these service providers’ ability to monitor and even delete content without becoming publishers. In an emerging age of user-curated and user-generated content, the legislation has specific implications for news and social media provision in general.

Specifically, Section 230 of the Communications Decency Act (CDA) stated, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."[2] Section 230 also offered protection to bulletin boards and, later, to bloggers who host comments. Bloggers and vloggers, such as YouTube channels, were not liable for comments left by readers or tips sent by email.[3] Although the Supreme Court struck down the CDA's anti-indecency provisions as too restrictive of free speech, Section 230 survived and continued to shape Internet services.

The Clinton Administration and its appointed Federal Communications Commission (FCC) Chairman worked with the Republican-controlled Congress to pass the Telecom Act of 1996, including Section 230 of the CDA. President William Jefferson Clinton, Vice President Al Gore, and the new FCC Chair, Reed Hundt, drove the policy process, intending to enact telecommunications re-regulation that would help revive the economy through a strategy based on the centrality of information technology. Any agreement depended on winning support from the new Republican Congress.

Republicans grouped around Newt Gingrich, a former history professor from Georgia who had been elected to the US House of Representatives in 1978. "Newt" pursued a strategy of extreme partisanship, encouraging a total lack of cooperation with Democrats. He attacked House Speaker Jim Wright on ethics charges involving bribery and unreported book royalties, eventually driving him from office. Often invoking the memory of Ronald Reagan, Gingrich helped move the political landscape significantly to the right.

Gingrich became the chief architect of the infamous "Contract with America," an attempt to revive the Reagan Revolution. The Contract sought to legislate the Republican right's radical agenda: cutting back welfare, forcing a balanced budget, eliminating public television, and phasing out government regulations on the media and telecommunications industries. Gingrich also wanted to get rid of the FCC and to relax accounting and securities rules on corporations.

On November 8, 1994, the Republicans won the House majority with a flood of new freshman Congressmen, including Sonny Bono, the former Mayor of Palm Springs and the slightly less talented half of the "Sonny and Cher" act. Another new Congressman, Joe Scarborough, later became the host of "Morning Joe," an MSNBC morning cable news show. With the Republican victory, Gingrich became Speaker of the House, coordinating the legislative agenda and stepping up his divisive efforts.

On the legislative docket was a significant reform of the Communications Act of 1934, eagerly pursued by both Democrats and Republicans. Central for Gingrich was abolishing the FCC and rolling back anti-monopoly regulation. Tom DeLay (R-TX) worked with some 350 industry lobbyists to draft the telecom deregulation bills. Called Project Relief, the secretive group organized campaign contributions for the legislation's supporters while charting a course for a new era of oligopoly-controlled media content distribution.

It was the Democrats who took the more prurient course. Over Gore's objection, Senator J. James Exon, a Democrat from Nebraska, inserted the Communications Decency Act, which criminalized offensive content online. Section 230 was not part of the original Senate legislation. It took shape in negotiations with the House of Representatives, where it had been separately introduced by Congressmen Christopher Cox (R-CA) and Ron Wyden (D-OR) as the Internet Freedom and Family Empowerment Act, which passed by a near-unanimous vote.

One of the key issues behind Section 230 goes back to a lawsuit, Stratton Oakmont v. Prodigy. Stratton Oakmont was a brokerage firm, and Prodigy was an online service that ran a chat room and offered several other services, such as news and weather. An anonymous participant in Prodigy's chat room posted statements tarnishing the financial firm's good name.

Because the person who posted the information could not be identified, Stratton Oakmont sued Prodigy instead. It won a 1995 New York Supreme Court decision holding that online service providers could be held liable for their users' speech. The court deemed Prodigy a publisher because it had been filtering offensive content.

Congress, with encouragement from the telcos, disagreed and worked to overturn the Prodigy decision.[4] It used the Communications Decency Act to establish immunity for ISPs that publish third-party information. The Act was added as Title V, "Obscenity and Violence," and introduced into the Telecommunications Act of 1996 by the Senate Committee on Commerce, Science, and Transportation.

Written by Senators James Exon (D-NE) and Slade Gorton (R-WA), Title V attempted to regulate both the exposure of children to indecency and obscenity online. It was added to the "Telecom Act" in the Senate on June 15, 1995, by a vote of 81–18. Title V effectively immunized both ISPs and Internet users from torts[5] committed by others using their online services. The exemption was designed to protect the service provider even if it failed to take action after receiving complaints about harmful or offensive content.

Postscript

Early in the 2020 presidential election season, President Trump issued an executive order directing the National Telecommunications and Information Administration (NTIA) to examine the scope of Section 230 and the possibilities for limiting the power of social media platforms. The action came after Twitter placed fact-checking warnings on two of his tweets about election fraud, in which he claimed, without credible evidence, that voting by mail-in ballot would result in election fraud.

It’s not likely that President Trump can muster up the political support in Congress to make significant regulatory changes. The Department of Justice, led by Attorney General William Barr, has drafted legislation to narrow the guidelines for liability protections. Of particular concern were posts and submissions that violated criminal laws, but critics worry about a slippery slope.

Weeks before the election, Trump drafted an executive order that would have required the Republican-led FCC to provide additional guidelines for web publishers. Trump's election loss ended his quest to change Section 230, but the issue is being discussed again as social media has been weaponized to counter short sellers in the financial markets.

Citation APA (7th Edition)

Pennings, A.J. (2019, Nov 26). The CDA’s Section 230: How Facebook and other ISPs became Exempt from Third Party Content Liabilities. apennings.com https://apennings.com/telecom-policy/the-cdas-section-230-how-facebook-and-other-isps-became-exempt-from-third-party-content-liabilities/

Notes

[1] Aufderheide, Patricia (1998) Communications Policy and the Public Interest: The Telecommunications Act of 1996. NY: Guilford Publications, Inc. p. 37.
[2] (47 U.S.C. § 230). U.S. Code Title 47. TELECOMMUNICATIONS Chapter 5. WIRE OR RADIO COMMUNICATION Subchapter II. COMMON CARRIERS Part I. Common Carrier Regulation Section 230.
[3] Mackey, Aaron, et al. "Section 230 of the Communications Decency Act." Electronic Frontier Foundation, www.eff.org/issues/cda230. Accessed November 25, 2019.
[4] See H.R. Conf. Rep. 104-58, at 194. “information provided by another information content provider,” 47 U.S.C. § 230(c)(1).
[5] A tort is a legal liability for a civil wrong done by a person that unfairly causes someone else to suffer some harm or loss.


© ALL RIGHTS RESERVED



