Anthony J. Pennings, PhD

WRITINGS ON AI POLICY, DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL E-COMMERCE

Not Like 1984: GUI and the Apple Mac

Posted on | May 27, 2017 | No Comments

In January 1984, during the Super Bowl, America’s most popular sporting event, Apple announced the release of the Macintosh computer with a commercial that was shown only once, causing a stir and earning millions of dollars in free publicity afterward. The TV ad was directed by Ridley Scott, whose credits at the time included movies like Alien (1979) and Blade Runner (1982).

Scott drew on iconography from Fritz Lang’s Metropolis (1927) and George Orwell’s classic novel 1984 to produce a stunning dystopian metaphor of what life would be like under what was suggested as a monolithic IBM with a tinge of Microsoft. As human drones file into a techno-decrepit auditorium, they become transfixed by a giant “telescreen” filled with the close-up of a man eerily reminiscent of an older Bill Gates. Intense eyes peer through wire-rimmed glasses and glare down on the transfixed audience as lettered captions transcribe mind-numbing propaganda:

    We are one people with one will, one resolve, one cause.
    Our Enemies will talk themselves to death, and we will bury them with their confusion.
    For we shall prevail.

However, from down a corridor, a brightly-lit female figure emerges. She runs into the theater and down the aisle. Finally, she winds up and hurls a large sledgehammer into the projected face. The televisual screen explodes, and the humans are startled out of their slumbered daze. The ad fades to white, and the screen lights up: “On January 24th, Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984.’” The reason, of course, was what Steve Jobs called the “insanely great” new technology of the Macintosh unveiled by Apple. Against a black background, the ad ends with the famed logo, a rainbow-striped Apple with a bite out of the right side.

By 1983, Apple needed a new computer to compete with the IBM PC. Steve Jobs went to work, utilizing mouse and GUI (Graphical User Interface) technology developed at Xerox PARC in the late 1970s. In exchange for being allowed to buy 100,000 shares of Apple stock before the company went public, Xerox opened its R&D at PARC to Jobs.[17] Xerox was a multi-billion dollar company with a near monopoly on the copier needs of the Great Society’s great bureaucratic structures. To leverage its position to dominate the “paperless office,” Xerox sponsored the research and development of many computer innovations, but the Xerox leaders never understood the potential of the technology developed under their roofs.

One of these innovations was a powerful but expensive microcomputer called the Alto that integrated many of the new interface technologies that would become standard on personal computers. The Alto had a mouse, networking capability, and even a laser printer. It combined several PARC innovations, including bitmapped displays, hierarchical and pop-up menus, overlapped windows, tiled windows, scroll bars, push buttons, checkboxes, cut/move/copy/delete, multiple fonts, and text styles. Xerox didn’t know quite how to market the Alto, so it gave its microcomputer technology to Apple in exchange for the opportunity to buy the young company’s stock.

Apple took this technology and created the Lisa computer, an expensive but impressive prototype of the Macintosh. In 1983, the same year Lotus officially released its Lotus 1-2-3 spreadsheet program, Apple released Lisa Calc with six other applications – LisaWrite, LisaList, LisaProject, LisaDraw, LisaPaint, and LisaTerminal. It was the first spreadsheet program to use a mouse, but at a price approaching $10,000, the Lisa proved less than economically feasible. However, it inspired Apple to develop a lower-cost Macintosh and software companies such as Microsoft to begin to prepare software for the new style of computer.

The Apple Macintosh was based on the GUI, often called WIMP for its Windows, Icons, Mouse, and Pull-down menus. The Apple II and IBM PC were still based on a command-line interface, a “black hole” next to a > prompt that required commands to be typed and executed. This system required extensive prior knowledge and/or access to technical documentation. The GUI, however, allowed you to point to information already on the screen or to menu categories that contained subsets of commands. Eventually, menu categories such as File, Edit, View, Tools, and Help were standardized at the top of GUI screens.
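The difference is easy to see in a toy sketch (Python, purely hypothetical, with no relation to the actual Mac or DOS software): a command interpreter acts only on words the user already knows, while a menu turns recall into recognition by displaying the available choices before anything is typed.

    # Toy contrast between the two interaction styles. Illustration only;
    # the command names and menu layout are invented for the example.
    actions = {"copy": lambda: "copied", "paste": lambda: "pasted"}

    def run_command(typed):
        # Command line: nothing on screen hints that "copy" even exists.
        return actions.get(typed, lambda: "unknown command")()

    def run_menu(choice):
        # Menu: options are listed first, like a pull-down list to point at.
        names = sorted(actions)
        for i, name in enumerate(names, 1):
            print(f"{i}. {name}")
        return actions[names[choice - 1]]()

    print(run_command("copy"))  # works only if you already knew the word
    print(run_menu(2))          # picks item 2 ("paste"), as if clicked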

A crucial issue for the Mac was good third-party software that could work in its GUI environment, especially a spreadsheet. Representatives from Jobs’ Macintosh team visited the fledgling companies that had previously supplied microcomputer software. Good software came from companies like Telos Software, which produced the picture-oriented FileVision database, and Living Videotext, which sold an application called ThinkTank that created “dynamic outlines.” Smaller groupings, such as a collaboration by Jay Bolter, Michael Joyce, and John B. Smith, created a program called Storyspace that was a hit with writers and English professors.

PARC was a research center supported by Xerox’s near monopoly on paper-based copying, which grew tremendously with the expansion of corporate, military, and government bureaucracies during the 1960s. Interestingly, in 1958, IBM had passed up an opportunity to buy the young company that developed the new copying technology called “xerography.” The monopoly allowed Xerox to set up a relatively unencumbered research center to lead the company into the era of the “paperless office.” One of the outcomes of this research was the GUI technology.

Unfortunately, Xerox failed to capitalize on these new technologies. Subsequently, it sold the technology in exchange for the right to buy millions of dollars in Apple stock. Jobs and Apple used the technology to design and market the Lisa computer with GUI technology and then, during the 1984 Super Bowl, dramatically announced the Macintosh. The “Mac” was a breath of fresh air for consumers who were intimidated by the “command-line” techno-philosophy of the IBM computer and its clones.

Citation APA (7th Edition)

Pennings, A.J. (2017, May 27) Not Like 1984: GUI and the Apple Mac. apennings.com https://apennings.com/how-it-came-to-rule-the-world/not-like-1984-gui-and-the-apple-mac/

Notes

[17] Rose, Frank (1989) West of Eden: The End of Innocence at Apple Computer. NY: Viking Penguin Group. p. 47.


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

A First Pre-VisiCalc Attempt at Electronic Spreadsheets

Posted on | May 25, 2017 | No Comments

Computerized spreadsheets were conceived in the early 1960s, when Richard Mattessich at the University of California at Berkeley conceptualized the electronic simulation of business accounting techniques in his Simulation of the Firm through a Budget Computer Program (1964). Mattessich envisaged the use of “accounting matrices” to provide a rectangular array of bookkeeping figures that would help analyze a company through numerical modeling. His thinking predated popular spreadsheet programs like VisiCalc, Lotus 1-2-3, Excel, and Google Sheets.

The first actual computerized spreadsheet was developed from an algebraic model written by Mattessich and implemented by two of his assistants, Tom Schneider and Paul Zitlau. Together, they created a working prototype in the Fortran IV programming language. It contained the basic ingredients of the digital spreadsheet, including the crucial feature that each figure displayed in a cell was supported by the calculative formula behind the entry.[1]
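To make that idea concrete, here is a minimal sketch of the principle in Python (a hypothetical illustration only; Mattessich’s actual prototype was written in Fortran IV): each cell stores the formula behind the figure, not just the figure itself, so changing one entry updates every dependent cell.

    # Toy spreadsheet: each cell holds a formula, not just a displayed figure.
    # Hypothetical sketch; Mattessich's 1964 prototype was written in Fortran IV.
    class Sheet:
        def __init__(self):
            self.formulas = {}  # cell name -> function that computes the cell

        def set(self, cell, formula):
            # Accept a constant or a callable that computes the value.
            self.formulas[cell] = formula if callable(formula) else (lambda s, v=formula: v)

        def value(self, cell):
            return self.formulas[cell](self)  # recalculated on every read

    s = Sheet()
    s.set("sales", 1000.0)
    s.set("costs", 600.0)
    s.set("profit", lambda s: s.value("sales") - s.value("costs"))
    print(s.value("profit"))  # 400.0
    s.set("sales", 1200.0)    # change one figure...
    print(s.value("profit"))  # 600.0 ...and the dependent cell follows

This persistence of formulas behind the entries is what separates a spreadsheet from a mere table of numbers: the model can be re-run with new figures at any time.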

Mattessich was sanguine about his invention, recognizing that it came at a time when computers were being considered for several simulation projects, so it was “reasonable to exploit this idea for accounting purposes.”[2] These other projects included the modeling of ecological and weather systems as well as national economies. In fact, Mattessich’s concurrent work on national accounting systems was better received, and his book on the topic became a classic in its own right.[3] His spreadsheet attempts soon fell into obscurity, however, because mainframe technology was not as powerful or interactive as the microprocessor-powered personal computer that came later.

The computer systems of the mid-1960s were not conducive to the type of interactivity that would make spreadsheets so attractive in the 1980s. Computers were big, secluded, and attended to by a slew of programmer acolytes who religiously protected their technological and knowledge domains. They were the territory of the electronic data processing (EDP) department, removed from all but the highest management by procedures, receptionists, and security precautions.

Computers ran their programs in groups or “batches” of punched cards delivered to highly sequestered data processing centers, with the results picked up sometime later, sometimes hours, sometimes days. Batch processing was used primarily for payroll, accounts payable, and other accounting processes that could be done on a scheduled basis. The introduction of minicomputers using integrated circuits, such as DEC’s PDP-8, meant more companies could afford computers, but they did not significantly change their accounting procedures or herald the use of Mattessich’s spreadsheet.

Although the earliest PCs were weaker than their bigger contemporaries, the mainframes, and even the minicomputers, they had several advantages that increased their usefulness. Their main advantage was immediacy; the microcomputer was characterized in part by its accessibility: it was small, relatively cheap, and available via several retail outlets. It used a keyboard for human input, a cathode ray monitor to view data, and a newly invented floppy disk for storage.

But just as important was the fact that it bypassed the traditional data processing organization that was constantly striving to keep up with new processing requests. One implication was that frustrated accountants would go out and buy their own computers and software packages over the objections or indifference of the EDP department. It also meant a new flexibility in terms of the speed and amount of information immediately available. Levy recounts the following comments from a Vice-President of Data Processing at Connecticut Mutual who eventually bought one of the earliest microcomputer spreadsheet programs to do his own numerical analyses:

    DP always has more requests than it can handle. There are two kinds of backlog – the obvious one, of things requested, and a hidden one. People say, “I won’t ask for the information because I won’t get it anyway.” When those two guys designed VisiCalc, they opened up a whole new way. We realized that in three or four years, you might as well take your big minicomputer out on a boat and make an anchor out of it. With spreadsheets, a microcomputer gives you more power at a tenth the cost. Now people can do the calculations themselves, and they don’t have to deal with the bureaucracy.[4]

Despite the increasing processing power of the mainframes and minis, and the new interactivity afforded by timesharing, keyboards, and cathode-ray screens, the use of computerized spreadsheets did not take off until the introduction of the personal computer. It was only after the spreadsheet idea was rediscovered in the context of the microprocessing leap of the next decade that Mattessich’s ideas would be acknowledged.

Citation APA (7th Edition)

Pennings, A.J. (2017, May 25) A First Pre-VisiCalc Attempt at Electronic Spreadsheets. apennings.com https://apennings.com/enterprise-systems/a-first-pre-visicalc-attempt-at-electronic-spreadsheets/

Notes

[1] Mattessich and Galassi credit assistants Tom Schneider and Paul Zitlau with the development of the first actual computerized spreadsheet based on an algebraic model written by Mattessich. It was reported in “The History of Spreadsheet: From Matrix Accounting to Budget Simulation and Computerization”, a paper presented at the 8th World Congress of Accounting Historians in Madrid, August 2000. See Mattessich, Richard, and Giuseppe Galassi. “History of the Spreadsheet: From Matrix Accounting to Budget Simulation and Computerization.” Accounting and History: A Selection of Papers Presented at the 8th World Congress of Accounting Historians: Madrid-Spain, 19–21 July 2000. Asociación Española de Contabilidad y Administración de Empresas, AECA, 2000. See also George J. Murphy’s “Mattessich, Richard V. (1922-),” in Michael Chatfield and Richard Vangermeersch, eds., The History of Accounting–An International Encyclopedia (New York: Garland Publishing Co., Inc, 1997): 405.
[2] This case of Richard Mattessich developing the first electronic spreadsheets has been made extensively including in “The History of Spreadsheet: From Matrix Accounting to Budget Simulation and Computerization”, a paper presented at the 8th World Congress of Accounting Historians in Madrid, August 2000 by Richard Mattessich and Giuseppe Galassi.
[3] Mattessich’s Accounting and Analytical Methods—Measurement and Projection of Income and Wealth in the Micro and Macro Economy (1964) was published by Irwin and was part of that movement toward national accounting systems. A later version was published as Accounting and Analytical Methods: Measurement and Projection of Income and Wealth in the Micro- and Macro-Economy. Scholars Book Company, 1977. It was mentioned in Chapter 3: Statistics: The Calculating Governmentality of my PhD dissertation, Symbolic Economies and the Politics of Global Cyberspaces (1993).
[4] Levy, S. (1989) “A Spreadsheet Way of Knowledge,” in Computers in the Human Context: Information Technology, Productivity, and People. Tom Forester (ed) Oxford, UK: Basil Blackwell.

Here are the hypertext links from the text, formatted in APA (7th Edition) style:

Wikipedia contributors. (n.d.). Richard Mattessich. In Wikipedia. Retrieved January 22, 2025, from https://en.wikipedia.org/wiki/Richard_Mattessich

Obliquity. (n.d.). Fortran IV programming language history. Retrieved January 22, 2025, from https://www.obliquity.com/computer/fortran/history.html

Pennings, A. (n.d.). Steven Levy’s a spreadsheet way of knowledge. Retrieved January 22, 2025, from https://apennings.com/enterprise-systems/steven-levys-a-spreadsheet-way-of-knowledge/


Cisco Systems: From Campus to the World’s Most Valuable Company, Part Three: Pushing TCP/IP

Posted on | April 20, 2017 | No Comments

Len Bosack and Sandy Lerner combined several technologies being developed at Stanford University and in the Silicon Valley area to form the networking behemoth Cisco Systems. However, success was by no means foreseen in the early years. The key moment came when the pair obtained access to Bill Yeager’s source code for the multiple-protocol “Blue Box” router in 1985. Yeager’s software became the foundation for the Cisco router operating system and a major stimulant for the adoption of the Transmission Control Protocol and Internet Protocol (TCP/IP) in data communications systems around the world.

In 1980, Yeager became responsible for networking computers at the Stanford Medical School. A number of devices were proliferating on campus, such as DEC10, PDP11/05, and VAX systems, and especially a number of Xerox machines, including PARC Lisp machines and Alto file servers and printers. The Xerox influence was substantial, as the project started with routing Parc Universal Packet (PUP) for the Xerox PARC systems and mainframes. It was later configured to include IP addresses for the new VAX750s and Xerox’s own proprietary network protocol, Xerox Network Systems (XNS).

After some controversy, the Stanford Office of Technology Licensing eventually decided to allow Cisco to use the technology. The venerable institution ultimately decided it would try “to make the best of a bad situation,” and on April 15, 1987, licensed the router software and two computer boards to Cisco for $19,300 in cash and royalties of $150,000, as well as product discounts. It refused equity in Cisco Systems as “a matter of policy.”[1] The agreement named Yeager as the principal developer of the source code, making him one of the unsung heroes of the Internet age.

The couple initially ran the company from their home at 199 Oak Grove Avenue in Atherton, CA. Using their credit cards for startup capital, they set up an office and started assembling routers in their living room. Self-financing, they even installed a mainframe computer in their garage for $5000. They needed the big computer to stay on the ARPANET and take orders for their network equipment, making them an early e-commerce innovator.[2] Bosack focused on technology and Lerner on marketing. Lerner’s ad meme “IP Everywhere” was ahead of its time.

The early years were tough. Venture capital was difficult to acquire, and the couple reportedly made over 70 visits to potential investors to find money for their company. At that time, only semiconductor companies were being funded, and the Internet was barely known outside of academia. Finally, Sequoia Capital stepped in for a considerable percentage of the business. Don Valentine had passed on Steve Jobs and Apple Computer and consequently had developed a more open mind to new innovations. Sequoia put in $2.5 million for one-third of the company and the ability to make major management decisions.

By the end of 1986, the company was growing rapidly. Although it took two years to get out of the garage, the computer and communications revolution was taking off. PCs were becoming commonplace and, even more important, becoming networked. TCP/IP was also gaining traction. The Department of Defense mandated its use at the center of the ARPANET and funded projects that coded TCP/IP implementations for IBM machines and operating systems such as UNIX. The emerging NSFNET also required TCP/IP protocols and compliant networking technologies. By mid-1985, almost 2,000 computers hosted TCP/IP technology.

Despite the growing enthusiasm for TCP/IP in the military/academic/research sphere, the major manufacturers of computer communications equipment were focused on the OSI model and believed market forces would eventually move in its favor. However, in 1986, advocates of TCP/IP took action to improve and promote their protocols. The Internet Activities Board (IAB) began to implement a two-part strategy. The first part was to encourage more participation in TCP/IP standards development. It resulted in the May 1986 publication of RFC 985, “Requirements for Internet Gateways,” and other recommendations. The second was to inform equipment vendors about the features and advantages of TCP/IP. This involved organizing several vendor conferences, including the “TCP/IP Vendors Workshop” on August 25-27, 1986 and the “TCP/IP Interoperability Conference” in March 1987.

While some vendors were disappointed that no certification and testing process was forthcoming, the conferences allowed the advocates of TCP/IP to provide some guidance for equipment manufacturers incorporating the protocols. It was under the leadership of Dan Lynch that these conferences were started, and in 1988 they came under the heading of INTEROP. As the industry took shape with innovations like the Simple Network Management Protocol (SNMP), introduced at INTEROP ’88, Cisco was one of its main beneficiaries.

In 1986, Cisco introduced the F Gateway Server (FGS) remote access server and its first commercial multi-protocol network router, the Advanced Gateway Server (AGS). The “Massbus-Ethernet Interface Subsystem” was an interface card made for DEC computers that could bridge local area networks running different protocols, such as TCP/IP and PUP; the highest line rate on the system was 100 Mbps FDDI. The AGS was capable of connecting multiple LAN and WAN networks, with a throughput of 10 Mbits per second and the ability to process 300 packets a second. It supported 200 routing tables and cost approximately $5,550. The AGS was quickly adopted in networks such as CSUNet at Colorado State University. Soon, universities all around the country were calling and emailing about the equipment.
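As a rough sketch of what the forwarding core of any IP router does (an illustration only, not Cisco’s code; the interface names are invented for the example), a routing-table lookup chooses the most specific matching prefix for each destination address:

    # Illustrative longest-prefix-match lookup, the heart of IP forwarding.
    # Not Cisco's implementation; interface names are made up for the example.
    import ipaddress

    routes = {
        ipaddress.ip_network("10.0.0.0/8"): "Ethernet0",
        ipaddress.ip_network("10.1.0.0/16"): "Serial0",
        ipaddress.ip_network("0.0.0.0/0"): "Serial1",  # default route
    }

    def next_interface(addr):
        dest = ipaddress.ip_address(addr)
        # Of all routes containing the destination, pick the longest prefix.
        matches = [net for net in routes if dest in net]
        return routes[max(matches, key=lambda net: net.prefixlen)]

    print(next_interface("10.1.2.3"))   # Serial0 (the /16 beats the /8)
    print(next_interface("10.9.9.9"))   # Ethernet0
    print(next_interface("192.0.2.1"))  # Serial1 (default route)

A multi-protocol router like the AGS ran a lookup of this general kind for each protocol family it carried, whether TCP/IP, PUP, or XNS.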

In November 1986, Cisco moved to their first office, 1360 Willow Road, Menlo Park, CA. Revenues had reached $250,000 a month. By May of 1988, sales had doubled, and then just three months later, they doubled again. By 1989, Cisco had three products and 115 employees and reported revenues of $27 million.[2]

Notes

[1] Information on Cisco’s origins is relatively scarce and dominated by Cisco public relations. Tom Rindfleisch of Stanford University and Bill Yeager, a Senior Staff Engineer at Sun Micro Systems, Inc. present a larger story at http://smi-web.stanford.edu/people/tcr/tcr-cisco.html.
[2] Information on Cisco’s habitats from Segaller, S. (1999) Nerds 2.0.1: A Brief History of the Internet. New York: TV Books. pp. 240-247.


Lotus 1-2-3 – A Star is Born

Posted on | April 7, 2017 | No Comments

It was in November of 1982, on the giant floor of the Comdex show in Las Vegas, that Lotus 1-2-3 first made its mark. While VisiCalc for the Apple II had shown the viability of both digital spreadsheets and the new “microcomputers,” Lotus 1-2-3 showed that spreadsheets would become indispensable for modern organizations and global digital finance.

Initially Jonathan Sachs was worried. He had spent nearly a year working with Mitch Kapor and his small startup company called Lotus Development Corporation on a new spreadsheet program for the recently released IBM PC. A former programmer at Data General Corp, he was apprehensive about the prospects of his new creation. It was too difficult for its intended audience, he thought, and would scare users away.

With questions in his mind and an updated resume in hand, Sachs took his baby to the crowds on the Las Vegas show floor. Lotus had begun to publicize its new spreadsheet nearly half a year before the Comdex show, and a Wall Street Journal article just before the event was beneficial. The new application proved almost as popular as the Apple II had been at its debut at the West Coast Computer Faire. Lotus 1-2-3 turned out to be a big hit with the swarming exhibit crowd.

“By the time the workers started to tear down the exhibit stalls at the end of the Comdex show, Lotus had taken about $3 million in orders based on the demo alone. Little did Sachs know his creation would change the computer industry forever.”[1] Nor did he know it would change the world of finance.

But that was just the start of their success. Soon, Lotus 1-2-3 would top the sales list of all computer software. After the new electronic spreadsheet was officially released on January 26, 1983, the company logged sales of some 60,000 copies in its first month. Because software can be reproduced and packaged quickly after it is developed, they were just barely able to keep up with demand. Within a few months, Lotus 1-2-3 was heading distributor Softsel Computer Products Inc.’s best-seller list, where it would stay for the next two years. Lotus 1-2-3 would become the number-one best-selling computer software application of the 1980s.

Lotus 1-2-3’s success was based on great programming skills and market savvy. Sachs and Kapor decided to create a spreadsheet that would improve on the functionality and speed of VisiCalc, the tremendously popular application designed for the Apple II. Kapor had created graphics capabilities for VisiCalc, while Sachs had written for minicomputers at Data General. Because of his experience with the popular market, Kapor called the marketing shots, while Sachs, who had the stronger programming experience, worked to realize Kapor’s conceptual software ideas.

With startup funds from Ben Rosen, who had left Morgan Stanley to become a venture capitalist, the small firm began to work on a new product for the business market. Together, they decided to create a new microcomputer spreadsheet product by using a faster programming language than most of the competition, such as Context MBA and SuperCalc. Their target was a software application for the newly released IBM PC market.


Instead of the easier Pascal programming language, Kapor and Sachs decided to use the more tedious assembly language to construct their new software package. Assembly language works much closer to the machine language of the computer. That made it much harder to program, but it could provide a faster and more robust final spreadsheet package. They spent 10 months developing the application, which grew from a core product into the final 1-2-3 through a series of decisions to add various features.

Step by step, they built and tested the new prototype, at one point dropping a word processor module because it was too difficult. As Sachs said afterward, “From a programmer’s standpoint, it was a mind-boggling challenge to write that much code in assembly language that fast and get it to be really solid.” The finished product, Lotus 1-2-3 Release 1 for DOS, which included a spreadsheet, graphing capabilities, database storage, and a macro language, required only 192K of RAM. Because of the efficiency of assembly language, it was much faster than its major competitors, Context MBA, Multiplan, and VisiCalc, despite including the extra features.

Just as VisiCalc helped Apple’s sales, Lotus 1-2-3’s popularity helped IBM’s PC sales take off. Launched in the late summer of 1981, the IBM PC faced stiff competition from the Apple II and a host of new computer manufacturers using the CP/M operating system. Although IBM had name recognition, particularly in the business world, its PC still needed the kind of practical application that would justify the expense. Lotus 1-2-3 supplied the incentive to put a PC “on top of every desk in the business world.” Sachs reminisced on the impact of Lotus 1-2-3 on the IBM PC: “It was pretty amazing because a factor of five more PCs got sold once that software was available. It was very closely tuned to the original IBM PC and pretty much used all of its features.”[2] Lotus could also run on “IBM-compatible” machines such as the simultaneously developed Compaq portable computer.

Both Sachs and Kapor left Lotus in the mid-1980s. The fast growth had taken its toll and created many problems that diminished their enthusiasm. Lotus faced many formidable challenges: supply shortages, labor problems, and disputes with distributors wore down the original cast. Sachs left in 1985 after the introduction of Release 2 for MS-DOS, which included add-in support, better memory management, more rows, and support for math coprocessors. Kapor left in 1987 after Release 2.01 was introduced.

But neither left before Lotus 1-2-3, with its combination of graphics, spreadsheets, and data management, caught the eye of business entrepreneurs and corporate executives. They saw the value of a computer program that simplified the monumental amount of numerical calculation and manipulation needed by the modern corporation. By October 1985, CFO magazine reported that “droves of middle managers and most financial executives are crunching numbers with spreadsheet programs such as Lotus 1-2-3.”[3]

Citation APA (7th Edition)

Pennings, A.J. (2017, Apr 7) Lotus 1-2-3 – A Star is Born. apennings.com https://apennings.com/financial-technology/digital-spreadsheets/lotus-1-2-3-a-star-is-born/

Notes

[1] Information about Lotus at Comdex from “From the floor at Comdex/Fall in 1982, Lotus 1-2-3 hit the ground running and has not slowed down”. CMP Media LLC. Accessed on August 26, 2003.
http://www.crn.com/sections/special/supplement/816/816p71_hof.asp
[2] Quotes attributed to Sachs were taken from “From the floor at Comdex/Fall in 1982, Lotus 1-2-3 hit the ground running and has not slowed down”. CMP Media LLC. Accessed on August 26, 2003.
http://www.crn.com/sections/special/supplement/816/816p71_hof.asp
[3] Quote from CFO on the impact of Lotus 1-2-3 in the corporate world from David M. Katz, “The taking of Lotus 1-2-3? Blame Microsoft.” CFO.com. December 31, 2002.


That Remote Look: History of Sensing Satellites

Posted on | March 27, 2017 | No Comments

    We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.

    – President John F. Kennedy, September 12, 1962

During U.S. President Kennedy’s speech at Rice University, where he dedicated the new Manned Spacecraft Center in nearby Houston, he stressed that not only would the US go to the Moon, but it would “do the other things.” He mentioned:

“Within these last 19 months at least 45 satellites have circled the Earth. Some 40 of them were made in the United States of America. Transit satellites are helping our ships at sea to steer a safer course. TIROS satellites have given us unprecedented warnings of hurricanes and storms, and will do the same for forest fires and icebergs.”

The TIROS-1 satellite was launched on April 1, 1960, from Cape Canaveral, Florida, and carried two TV cameras and two video recorders. RCA, a major TV and radio manufacturer, primarily built the satellite. Short for Television InfraRed Observational Satellite, TIROS weighed 122 kg and stayed up for only 78 days. Nevertheless, it showed the practicality of using electromagnetic sensing to view cloud formations and observe patterns for weather event prediction.

President Dwight Eisenhower had been secretly coordinating the space program as part of the Cold War since the early 1950s. He had become accustomed to the valuable photographic information obtained from spy planes. When his administration took office in early 1953, tensions with Communist countries increased rapidly. After the USSR conducted successful atomic and hydrogen bomb tests, he considered satellites a crucial new Cold War technology.

Eisenhower’s “New Look” policy identified aerospace as a decisive component of future US military strategy. The successful D-Day invasion of Europe, which he had managed as head of the Allied Forces during World War II, had been meticulously reconnoitered with low- and high-altitude photography from various reconnaissance aircraft. Given the growing nuclear capacity of the USSR, he particularly wanted satellites that could assess how rapidly the Communists were producing their long-range bombers and where they were being stationed. In addition, as the Soviets began to deploy rocket technology siphoned from defeated Nazi Germany, locating and monitoring launchpads with nuclear ballistic missiles became important.

The top-secret Corona spy program was the first attempt to map the Earth from space with satellites. The Corona spacecraft were built by Lockheed for the CIA and Air Force and equipped with 70mm “Keyhole” cameras. These started with an imaging resolution of approximately 40 feet, enough to locate airfields and large rockets.

The destruction of an American U-2 spy plane during a USSR overflight on May 1, 1960, accelerated the need for satellite-based surveillance. President Eisenhower had proposed an “open skies” plan at a 1955 summit conference in Geneva with England, France, and Russia that would allow each country to make flights over the others’ sovereign territory to inspect launchpads capable of rocketing Intercontinental Ballistic Missiles (ICBMs) into space. Soviet leader Nikita Khrushchev had refused the proposal and ordered missiles to bring down the high-altitude US spy plane. Khrushchev took pleasure in displaying the wreckage for the international press and in the show trial that followed for pilot Francis Gary Powers. The U-2 would once again show its value when it detected Soviet missiles in Cuba, but the new competition to conquer space would dramatically improve aerospace technology and the ability to see from space.

Like most of the early US attempts to achieve space flight, the first Keyhole-equipped satellites failed to achieve orbit or suffered other technical failures. The US had also obtained its rocket technology from the Nazis, and early adaptations such as the A-4 and Redstone rockets required much testing before reliable launches occurred. This knowledge was applied to the next-generation Thor-Agena rockets that were used as launch vehicles for Corona spy satellites from June 1959. By the late summer of 1960, a capsule containing the first Keyhole film stock was retrieved in mid-air by an Air Force cargo plane as it parachuted back down to Earth. By 1963, Keyhole resolution had improved to 10 feet, and to 5 feet by 1967.

The USSR, though, set a precedent for orbital overflight with its Sputnik satellites. While Eisenhower had sought at the Geneva summit to assure the world of America’s peaceful intentions in space, the Soviets launched an R-7 ICBM 100 km into space two years later with a beach-ball-sized payload called Sputnik. It is still a matter of speculation whether Eisenhower baited the USSR into going into orbital space first, but when the US and other countries around the world failed to protest the overflight of Sputnik, it set the legal precedent for satellites flying over other countries.

As the “Space Race” heated up during the mid-1960s, rocket capabilities improved and new applications were being conceived. The Mercury and Gemini space capsules began to use innovative photographic technologies to capture Earth images. Weather satellites like the TIROS-1 had been monitoring Earth’s atmosphere since 1960, and the idea of sensing land and ocean terrains was being developed. Although the details of the spy satellites were highly classified, enough information about the possibilities of high-altitude sensing of earth terrains circulated in the scientific community. In 1965, William Pecora, the director of the U.S. Geological Survey (USGS), proposed that a satellite program could gather information about the natural resources of our planet. The idea of remote sensing was born, and the USGS would partner with NASA to take the lead.

NASA, the National Aeronautics and Space Administration, had been created in 1958 to engage the public’s imagination and support for the civilian uses of spacecraft. The Apollo program was conceived as early as 1960 and would eventually reach the Moon. The program also sparked reflection, not just on reaching the apex of an extraordinary human journey, but on the origins of that trip. We went to the Moon, but we also discovered our home planet, what Buckminster Fuller called “Spaceship Earth.”

History was made on August 23, 1966, when the first photo of the Earth from the perspective of the Moon was transmitted by NASA’s Lunar Orbiter I and received at the NASA tracking station at Robledo De Chavela near Madrid, Spain. The image was taken during the spacecraft’s 16th orbit and was the first view of Earth taken by a spacecraft from the vicinity of the Moon. The Lunar Orbiter was a series of five unmanned missions designed to help select landing sites for the Apollo program. In mapping the Moon’s surface, they pioneered some of the earliest remote sensing techniques.

In 1966, the USGS and the Department of the Interior (DOI) began working together to produce their own Earth-observing satellite program. Unfortunately, they faced many obstacles, including budget problems due to the increasing costs of the war in Vietnam. But they persevered, and on July 23, 1972, the Earth Resources Technology Satellite (ERTS) was launched. It was soon called Landsat 1, the first of the series of satellites launched to observe and study the Earth’s land masses. It carried a system of cameras built for remote sensing by the Radio Corporation of America (RCA) called the Return Beam Vidicon (RBV). Three independent cameras sensed different spectral wavelengths to obtain visible and near-infrared (IR) photographic images of the Earth. RBV data was processed to 70 millimeter (mm) black and white film rolls by NASA’s Goddard Space Flight Center and then analyzed and archived by the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center.

The second device on Landsat 1 was the Multispectral Scanner (MSS), built by the Hughes Aircraft Company. It provided radiometric images of the Earth through its ability to distinguish very slight differences in energy, and multispectral scanning continues to be a major contributor to Earth-sensing data.

The Landsat satellite program has been the longest-running program for the acquisition and archiving of satellite-based images of Earth. Since the early 1970s, Landsat satellites have constantly circled the Earth, taking pictures, collecting “spectral information,” and storing them for scientific and emergency management services. These images serve various uses – from gauging global agricultural production to monitoring the risks of natural disasters.

A successful partnership between NASA and the U.S. Geological Survey (USGS), the Landsat program plays a critical role in monitoring, analyzing, and managing the earth resources needed for sustainable human environments. It manages and provides the largest archive of remotely sensed land data in the world, both current and historical. Landsat uses a passive approach, measuring light and other energy reflected or emitted from the Earth. Much of this light is scattered by the atmosphere, but techniques have been developed for the Landsat space vehicles to improve image quality dramatically. Each day, Landsat 8 adds another 700 high-resolution images to an unparalleled database, allowing researchers to assess changes in Earth’s landscape over time. Landsat 9, launched into space in September 2021 to replace Landsat 7, has even more sophisticated technologies. It has better radiometric and geometric imaging capacity than previous Landsats, adding about 1,400 scenes per day to the Landsat global land archive publicly available from USGS.

Since 1960, the National Oceanic and Atmospheric Administration (NOAA) and its predecessor agencies have worked with NASA to build and operate two fleets of satellites to monitor the Earth. One is the Polar-orbiting Environmental Satellites (POES), which fly north and south over the Arctic and Antarctic regions. These make about 14 orbits a day, with each rotation covering a different band of the Earth.

The other is the Geostationary Operational Environmental Satellite (GOES) system, which operates in the higher geosynchronous “Clarke Belt.” This position allows the satellites to measure reflected radiation and some Earth-emitted energies from a single stationary position over set locations, recording a wide range of atmospheric and terrestrial information for weather forecasting and potential disaster warnings.

Both the Landsat satellites and the GOES satellites provide a constant stream of data and imagery to help understand weather events and earth resources. Both are vital to observing current meteorological and land-based events that warrant monitoring, study, and reporting.

Citation APA (7th Edition)

Pennings, A.J. (2017, Mar 27). That Remote Look: History of Sensing Satellites. apennings.com https://apennings.com/digital-geography/that-remote-look-history-of-sensing-satellites/


How “STAR WARS” and the Japanese Artificial Intelligence (AI) Threat Led to the Internet, Part IV: Al Gore and the Internet

Posted on | March 23, 2017 | No Comments

This is the fourth part of my argument about how the Internet changed from a military network to a wide-scale global network of interconnected networks. Part I discussed the impact of the Strategic Defense Initiative (SDI), or “Star Wars,” on funding for the National Science Foundation’s adoption of the ARPANET. In Part II, I looked at how fears about nuclear war and Japan’s artificial intelligence (AI) initiatives propelled early funding of the Internet. In Part III, I introduced the “Atari Democrats” and their early role in crafting the legislation that created the Internet. This is a follow-up making some points about Al Gore’s role in the success of the Internet.

This story should probably have been laid to rest a while ago, but I was always intrigued by it. The issue says a lot about the way election campaigns operate, about the role of science and technology in the economy, and especially about the impact of governance and statecraft in economic and technological development.

I’m talking about the famous story about Al Gore and the “invention of the Internet” meme. The story started after the Vice President was interviewed by CNN’s Wolf Blitzer in 1999 and gained traction during the 2000 presidential campaign against George W. Bush. The accusation circulated that Gore claimed he “invented the Internet,” and the phrase was used to tag the Vietnam vet and Vice President as a “liar” and someone who couldn’t be trusted. Here is what he actually said.

Of course, the most controversial part of this interview about Vice President Gore’s plans to announce his candidacy was this statement, “During my service in the United States Congress, I took the initiative in creating the Internet.” That was turned into “inventing the Internet” and was used against him in the 2000 elections.

The meanings are quite different. Inventing suggests a combination of technical imagination and manipulation usually reserved for engineers. To “create” has a much more flexible meaning, indicating more of an art or a craft. There was no reason to say he “invented” the Internet except to frame it in a way that suggested he designed it technically and had patents to prove it, which does sound implausible. Gore would win the popular vote in 2000 but failed in his bid for the Presidency after the Supreme Court halted the Florida recount, deciding the state’s electoral votes in a close and controversial election.

The controversy probably says more about how little we understand technological development and how impoverished our conversation was about network infrastructure and information technologies. The history of information technologies, particularly communications networking, has been one of interplay between technical innovation, market dynamics, and intellectual leadership guiding policy actions. The communications infrastructure, probably the world’s largest machine, required a set of political skills to manifest. In addition to the engineering skills that created the famed data packets and their TCP/IP protocols, political skills were needed for the funding, the regulatory changes, and the power needed to guide the international frameworks.

Al Gore got support from the people generally considered to be the “real” inventors of the Internet. While Republicans continued to ridicule (or “swiftboat”) Gore for supposedly claiming he “invented the Internet,” many in the scientific community, including the engineers who designed the Internet’s protocols, Robert Kahn and Vinton Cerf, verified Gore’s role:

    As far back as the 1970s Congressman Gore promoted the idea of high speed telecommunications as an engine for both economic growth and the improvement of our educational system. He was the first elected official to grasp the potential of computer communications to have a broader impact than just improving the conduct of science and scholarship. Though easily forgotten, now, at the time this was an unproven and controversial concept. Our work on the Internet started in 1973 and was based on even earlier work that took place in the mid-late 1960s. But the Internet, as we know it today, was not deployed until 1983. When the Internet was still in the early stages of its deployment, Congressman Gore provided intellectual leadership by helping create the vision of the potential benefits of high speed computing and communication. As an example, he sponsored hearings on how advanced technologies might be put to use in areas like coordinating the response of government agencies to natural disasters and other crises. – Vint Cerf

Gore is a wealthy man with considerable economic and political success. He can defend himself and has probably come to terms with what happened. He is interesting because he has been a successful legislator and a mover of public opinion. He can also take much credit for bringing global warming or climate change to the public’s attention and mobilizing action on markets for carbon credits and the acceleration of developments in alternative energy. His actions and tactics are worth studying. We need more leaders who can create a positive future rather than obsessing over tearing down what we have.

In the 1980s, Congressman and then Senator Gore was heavily involved in sponsoring legislation to research and connect supercomputers. Gore, who had served in Vietnam as a military journalist, was an important member of the “Atari Democrats.” Along with Senator Gary Hart, Robert Reich, and other Democrats, he pushed forward “high-tech” ideas and legislation for funding and research. The meaning of the term varied, but “Atari Democrat” generally referred to a pro-technology, pro-free-trade social liberal. The term emerged around 1982 and generally linked them to the Democrats’ Greens and an emerging “neo-liberal” wing. It also suggested that they were “young moderates who saw investment and high technology as the contemporary answer to the New Deal.”

The New York Times also discussed the tensions that emerged during the 1980s between the traditional Democratic liberals and the Atari Democrats, who attempted to find a middle ground. High-speed processors and new software systems were recognized at the time as crucial components in developing a number of new military armaments, especially any space-based “Star Wars” technologies.

So what happened? The Supercomputer Network Study Act of 1986 directed the Office of Science and Technology Policy to study critical issues and options regarding communications networks for supercomputers at universities and Federal research facilities in the United States, and required the Office to report the results to Congress within a year. The bill was not voted on but was attached to Senate Bill S. 2184, the National Science Foundation Authorization Act for Fiscal Year 1987. Still, the report was produced and pointed to the potential role of the NSF. It also led to an agreement for the NSFNET backbone to be managed with Merit and IBM.

In October 1988, Gore sponsored additional legislation for “data superhighways” in the 100th Congress. S. 2918, the National High-Performance Computer Technology Act of 1988, was to fund a three-gigabit-per-second network. The Act directed the government to:

(1) work for the development of a three-gigabit-per-second national research computer network to link government, industry, and education communities;
(2) convene a committee to advise on network user needs; and
(3) determine the most efficient mechanism for assuring operating funds for the long-term maintenance and use of such a network.

It also directed the National Telecommunications and Information Administration to evaluate current telecommunications regulations and how they influence private industry participation in the data transmission field. Although no legislative action was taken on the bill, its ideas resurfaced in the High-Performance Computing Act of 1991.

The supercomputing legislation that followed supported the University of Illinois’ National Center for Supercomputing Applications (NCSA), where the Mosaic browser was released in April 1993. Mosaic drew on Tim Berners-Lee’s work at CERN on hypertext protocols and the first Web server; Berners-Lee is generally credited with developing the World Wide Web itself, while Mosaic helped popularize it.



Citation APA (7th Edition)

Pennings, A.J. (2017, Mar 23) How “STAR WARS” and the Japanese Artificial Intelligence (AI) Threat Led to the Internet, Part IV: Al Gore and the Internet. apennings.com https://apennings.com/how-it-came-to-rule-the-world/how-%e2%80%9cstar-wars%e2%80%9d-and-the-japanese-artificial-intelligence-ai-threat-led-to-the-internet-al-gor/


Space Shuttles, Satellites, and Competition in Launch Vehicles

Posted on | February 25, 2017 | No Comments

The NASA space shuttle program provided a valuable new launch vehicle for satellites. This post recounts the beginning of the US space shuttle development and its impact on satellite launches.

The notion of a reusable spacecraft had been a dream since the days of Flash Gordon in the 1930s, but a number of technical problems precluded its feasibility for NASA’s objectives. Foremost was the lack of sufficient insulation to protect the shuttle during multiple re-entries. NASA instead had relied on the expendable, single-use rocket model inherited from Nazi Germany’s V-2 program, whose rockets killed approximately 10,000 civilians in attacks on England but whose engineering team later helped land Americans on the Moon. With the Apollo program winding down in the early 1970s, new plans were developed and forwarded for a reusable space shuttle.

In January 1972, President Nixon announced that NASA would begin a program to build a Space Transportation System (STS), more commonly known as the Space Shuttle. During the previous summer of 1971, Nixon had been convinced by John Ehrlichman and Caspar Weinberger that the US should pursue the Space Shuttle.

NASA had floated various plans for a “Reusable Ground Launch Vehicle” as early as 1966, but in the wake of the public’s boredom with the Moon visits, enthusiasm for space exploration diminished. Democrats running for President in 1972 were critical of the billions of dollars needed for the “space truck.” Senator Edmund Muskie (D-ME) campaigned on the promise of shelving the space shuttle. Senator Walter Mondale (D-MN), another candidate for president, called the Space Shuttle program “ridiculous” during a nationally televised debate. The country felt that problems of housing, urban decay, and poor nutrition for children were higher priorities.

But the NASA budget that Congress passed 277-60 in the spring of 1972 included funding for the Space Shuttle, and Nixon’s resounding electoral victory later that year ensured the administration’s support, at least for a while.

The next ten years were challenging ones for NASA, which faced numerous funding and technical problems. The space agency made up for its diminishing budget by allocating more internal funds to the space shuttle project. Although enthusiasm for space exploration had diminished, the practical uses of space-based satellites were encouraging.

The space shuttle was to be launched on the back of a traditional rocket, maintain a relatively low orbit, and then glide down to a runway on Earth. This latter part was particularly difficult, as temperatures exceeded 3,000 degrees F during the descent. The problem was solved by gluing some 33,000 silica thermal tiles to the bottom of the vehicle.

As the STS descended at 25 times the speed of sound, it also needed a complex guidance system to direct it. The avionics (guidance, navigation, and control) system used four computers to coordinate data from star trackers, gyros, and accelerometers, along with inputs from ground-based laboratories, to guide the spacecraft. Whereas the Mercury flights were satisfied with landings within a mile of their pickup ships, the space shuttle required a precise landing on a specific runway after a glide of several thousand miles.[1]

On April 12, 1981, the space shuttle Columbia blasted off from the Kennedy Space Center on STS-1, its inaugural flight. After 54 hours and 37 Earth orbits, it landed safely at Edwards Air Force Base in California. (I remember the event because my little kitten, Marco Polo, went up to the TV and started to paw at the descending spacecraft.) During its initial flight, it successfully tested the cargo-bay doors that needed to open to launch satellites from the maximum shuttle orbit toward the geosynchronous orbit thousands of miles higher. During the next flight, the crew tested a Canadian remote manipulator arm designed to retrieve satellites from orbit and repair them.

The space shuttle provided a significant boost to the satellite industry. Columbia’s fifth flight successfully launched two commercial satellites, the Canadian Anik C and Satellite Business Systems’ (SBS) third satellite. The remote manipulator arm later proved useful when it retrieved and repaired the Solar Max satellite in April 1984 and, later, one of Indonesia’s Palapa satellites that had failed to reach the geosynchronous Clarke orbit.

The program ran very smoothly until the Challenger space shuttle blew up on a chilly January morning in 1986. Seventy-three seconds after launch, the spacecraft exploded, killing the entire crew. The disaster stopped shuttle launches for over two years. During this time, President Reagan announced that when the shuttle resumed service, it would carry few if any commercial satellites. Reagan’s intention was to privatize launch services and reserve the shuttle for military and scientific activities, including the infamous “Star Wars” program to create a space-based shield to protect the US from attack.[2]

Incidentally, the plan to privatize space launches proved disastrous for the US, as the Europeans and Chinese quickly captured a significant share of the market. The Ariane and Long March rockets proved to be viable alternatives to the space shuttle. The Bush and Clinton administrations continuously approved the launching of American satellites by other countries under pressure from the rapidly growing telecommunications industry and the transnational corporate users who needed the additional communications capacity.

Notes

[1] Space Shuttle’s development from http://history.nasa.gov/SP-4219/Chapter12.html. By Henry C. Dethloff. Accessed February 24, 2006.
[2] Winter, F. 1990. Rockets into Space. Cambridge, MA: Harvard University Press. pp. 113-126.


GOES-16 Satellite and its Orbital Gaze

Posted on | February 7, 2017 | No Comments

“With this kind of resolution, if you were in New York City and you were taking a picture of Wrigley Field in Chicago, you’d be able to see home plate.” So says Eric Webster, vice president and general manager of environmental solutions and space and intelligence systems for the Harris Corp. of Fort Wayne, Indiana, about the capabilities of the newly launched GOES-16 satellite (Geostationary Operational Environmental Satellite). But what this statement fails to convey is the comprehensive view of the Earth that the satellite provides and the extraordinary amount of information that can be gleaned from its images.

NASA launched GOES-16, formerly known as GOES-R, on November 19, 2016, and after testing, it became operational earlier this year. The satellite provides powerful new eyes for monitoring potential disasters, including floods and other weather-related dangers. It was built for the National Oceanic and Atmospheric Administration (NOAA) by Lockheed Martin in Denver, Colorado, with imagers by Harris, and was launched on an Atlas rocket.

With 16 different spectral channels and improved resolution, scientists can monitor a variety of events such as hurricanes, volcanoes, and even wildfires. The satellite’s two visible, four near-infrared, and ten infrared channels allow the identification and monitoring of a number of earth and atmospheric events. Unlike the earlier GOES-13, it can combine data from the ABI’s sixteen spectral channels to produce high-resolution composite images.

Operating from geosynchronous orbit roughly 35,800 km (22,240 miles) above the equator, the satellite can take images of the entire earth disk with its Advanced Baseline Imager (ABI) instrument. It can also focus on just a continent or a smaller region that may be impacted by a specific climate event. Parked at 89.5 degrees West longitude, the satellite has a good view of the Americas all the way to the coast of Africa. (A future GOES satellite will focus on the Pacific side.) It can take a full-disk image of the Earth every 15 minutes, a smaller image of the continental U.S. every 5 minutes, and a specific locale every 30 seconds.
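That altitude is not arbitrary; it follows from Kepler’s third law, since a geostationary satellite’s circular orbit must have a period of one sidereal day. A quick back-of-the-envelope check in Python (standard physical constants only, nothing GOES-specific):

    # Back-of-the-envelope check of the geostationary altitude.
    import math

    MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
    T = 86164.1           # one sidereal day, in seconds
    R_EARTH = 6.3781e6    # Earth's equatorial radius, in meters

    # Kepler's third law for a circular orbit: T^2 = 4 * pi^2 * r^3 / MU
    r = (MU * T**2 / (4 * math.pi**2)) ** (1 / 3)
    print(round((r - R_EARTH) / 1000))  # ~35786 km above the equator

A satellite parked any lower would circle the Earth faster than the planet rotates and drift out of position, which is why the GOES spacecraft all sit in this narrow “Clarke Belt.”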

Photo from the NOAA Photo Library

What is most significant is what the satellite can do to inform the public of weather events and potential disasters. It can monitor water vapor in the atmosphere and depict rainfall rates. It can gauge melting snowpacks, predict spreading wildfires and measure the poisonous sulfur dioxide emissions of volcanic eruptions. It can sense sea surface temperatures and provide real-time estimates of the intensity of hurricanes, including central pressure and maximum sustained winds.

One of the most valuable benefits will be monitoring the key ingredients of severe weather, like lightning and tornadoes. GOES-16 utilizes the Geostationary Lightning Mapper (GLM) to watch for severe conditions, primarily by detecting lightning. It uses high-speed cameras that take pictures 200 times per second, allowing it to detect cloud-to-ground lightning and also lightning between clouds. These features allow it to decrease the warning time for severe weather events.

GOES-16 will reduce the risks associated with weather and other potential disasters throughout the Americas and provide much-needed support for first responders as well as policy makers.


  • Referencing this Material

    Copyrights apply to all materials on this blog but fair use conditions allow limited use of ideas and quotations. Please cite the permalinks of the articles/posts.
    Citing a post in APA style would look like:
    Pennings, A. (2015, April 17). Diffusion and the Five Characteristics of Innovation Adoption. Retrieved from https://apennings.com/characteristics-of-digital-media/diffusion-and-the-five-characteristics-of-innovation-adoption/
    MLA style citation would look like: "Diffusion and the Five Characteristics of Innovation Adoption." Anthony J. Pennings, PhD. Web. 18 June 2015. The date would be the day you accessed the information. View the Writing Criteria link at the top of this page to link to an online APA reference manual.

  • About Me

    Professor at State University of New York (SUNY) Korea since 2016. Moved to Austin, Texas in August 2012 to join the Digital Media Management program at St. Edwards University. Spent the previous decade on the faculty at New York University teaching and researching information systems, digital economics, and strategic communications.

    You can reach me at:

    apennings70@gmail.com
    anthony.pennings@sunykorea.ac.kr

  • Disclaimer

    The opinions expressed here do not necessarily reflect the views of my employers, past or present.