Anthony J. Pennings, PhD

WRITINGS ON DIGITAL STRATEGIES, ICT ECONOMICS, AND GLOBAL COMMUNICATIONS

Not Like 1984: GUI and the Apple Mac

Posted on | May 27, 2017 | No Comments

In January 1984, during the Super Bowl, America’s most popular sporting event, Apple announced the release of the Macintosh computer with a commercial that aired only once, caused a stir, and earned millions of dollars in free publicity afterward. The TV ad was directed by Ridley Scott, whose credits at the time included the films Alien (1979) and Blade Runner (1982).

Scott drew on iconography from Fritz Lang’s Metropolis (1927) and George Orwell’s classic novel 1984 to produce a stunning dystopian metaphor of life under what was suggested to be a monolithic IBM with a tinge of Microsoft. As human drones file into a techno-decrepit auditorium, they become transfixed by a giant “telescreen” filled with the close-up of a man eerily reminiscent of an older Bill Gates. Intense eyes peer through wire-rimmed glasses and glare down on the captive audience as lettered captions transcribe mind-numbing propaganda:

    We are one people with one will, one resolve, one cause.
    Our Enemies will talk themselves to death, and we will bury them with their confusion.
    For we shall prevail.

However, a brightly lit female athlete emerges from a corridor. She runs into the theater and down the aisle. Finally, she winds up and hurls a large sledgehammer into the projected face. The televisual screen explodes, and the humans are startled out of their slumbering daze. The ad fades to white, and the screen lights up: “On January 24th, Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984.’” The reason, of course, was what Steve Jobs called the “insanely great” new technology of the Macintosh. Against a black background, the ad ends with the famed logo, a rainbow-striped Apple with a bite taken out of the right side.

By 1983, Apple needed a new computer to compete with the IBM PC. Steve Jobs went to work, utilizing mouse and GUI (Graphical User Interface) technology developed at Xerox PARC in the late 1970s. In exchange for being allowed to buy 100,000 shares of Apple stock before the company went public, Xerox opened its R&D at PARC to Jobs.[17] Xerox was a multi-billion-dollar company with a near monopoly on the copier needs of the Great Society’s great bureaucratic structures. In an attempt to leverage its position to dominate the “paperless office,” Xerox sponsored the research and development of a number of computer innovations, but its leaders never understood the potential of the technology developed under their roofs.

One of these innovations was a powerful but expensive microcomputer called the Alto that integrated many of the new interface technologies that would become standard on personal computers. The Alto’s GUI system included the mouse, networking capability, and even a laser printer. It combined a number of PARC innovations, including bitmapped displays, hierarchical and pop-up menus, overlapping and tiled windows, scroll bars, push buttons, check boxes, cut/move/copy/delete, and multiple fonts and text styles. Xerox never quite knew how to market the Alto, so it handed its microcomputer technology to Apple for the opportunity to buy the young company’s stock.

Apple took this technology and created the Lisa computer, an expensive but impressive precursor of the Macintosh. In 1983, the same year that Lotus officially released its Lotus 1-2-3 spreadsheet program, Apple released LisaCalc along with six other applications: LisaWrite, LisaList, LisaProject, LisaDraw, LisaPaint, and LisaTerminal. LisaCalc was the first spreadsheet program to use a mouse, but at a price approaching $10,000, the Lisa proved less than economically feasible. It did, however, inspire Apple to develop the Macintosh and software companies such as Microsoft to begin preparing software for the new style of computer.

The Apple Macintosh was based on the GUI, often called WIMP for its Windows, Icons, “Mouse,” and Pull-down menus. The Apple II and IBM PC were still based on a command-line interface, a “black hole” next to a > prompt that required commands to be typed and executed. This system demanded extensive prior knowledge and/or access to technical documentation. The GUI, on the other hand, allowed you to point to information already on the screen or to categories that contained subsets of commands. Eventually, menu categories such as File, Edit, View, Tools, and Help were standardized across the top of GUI screens.
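
Those pull-down menu conventions survive in today’s GUI toolkits. As a minimal sketch, here is a hypothetical example in Python’s standard Tkinter library (a modern stand-in for illustration only, not the original Macintosh toolbox):

    # A minimal pull-down menu sketch in Python's standard Tkinter toolkit.
    # Illustrative of the WIMP conventions described above, not the Mac APIs.
    import tkinter as tk

    root = tk.Tk()
    root.title("WIMP sketch")
    menubar = tk.Menu(root)

    # Two of the standardized top-level categories: File and Edit
    file_menu = tk.Menu(menubar, tearoff=0)
    file_menu.add_command(label="Open...", command=lambda: print("Open chosen"))
    file_menu.add_separator()
    file_menu.add_command(label="Quit", command=root.destroy)
    menubar.add_cascade(label="File", menu=file_menu)

    edit_menu = tk.Menu(menubar, tearoff=0)
    for item in ("Cut", "Copy", "Paste"):
        edit_menu.add_command(label=item, command=lambda i=item: print(i))
    menubar.add_cascade(label="Edit", menu=edit_menu)

    root.config(menu=menubar)
    root.mainloop()  # the user points and clicks; no command syntax to recall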

A crucial issue for the Mac was good third-party software that could work in its GUI environment, especially a spreadsheet. Representatives from Jobs’ Macintosh team visited the fledgling companies that had previously supplied microcomputer software. Good software came from companies like Telos Software, which produced the picture-oriented FileVision database, and Living Videotext, which sold an application called ThinkTank that created “dynamic outlines.” Smaller groups contributed as well; a collaboration among Jay Bolter, Michael Joyce, and John B. Smith created a program called Storyspace that was a hit with writers and English professors.

PARC was a research center supported by Xerox’s near monopoly on paper-based copying, a business that grew tremendously with the expansion of corporate, military, and government bureaucracies during the 1960s. Interestingly, in 1958, IBM had passed up an opportunity to buy the young company that developed the new copying technology called “xerography.” The monopoly gave Xerox the freedom to set up a relatively unencumbered research center to lead the company into the era of the “paperless office.” One of the outcomes of this research was GUI technology.

Unfortunately for Xerox, it failed to capitalize on these new technologies and sold them to Apple in exchange for the right to buy millions of dollars of the startup’s stock. Jobs and Apple used the technology to design and market the Lisa and then, during the 1984 Super Bowl, dramatically announced the Macintosh. The “Mac” was a breath of fresh air for consumers intimidated by the command-line techno-philosophy of the IBM computer and its clones.

Notes

[17] Rose, Frank (1989) West of Eden: The End of Innocence at Apple Computer. New York: Viking Penguin Group. p. 47.

Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edward’s University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

A First Pre-VisiCalc Attempt at Electronic Spreadsheets

Posted on | May 25, 2017 | No Comments

Computerized spreadsheets were conceived in the early 1960s, when Richard Mattessich at the University of California, Berkeley conceptualized the electronic simulation of business accounting techniques in his Simulation of the Firm through a Budget Computer Program (1964). Mattessich envisaged “accounting matrices” that would provide a rectangular array of bookkeeping figures to help analyze a company through numerical modeling. His thinking predated popular spreadsheet programs like VisiCalc, Lotus 1-2-3, Excel, and Google Sheets.

The first actual computerized spreadsheet was based on an algebraic model written by Mattessich and developed by two of his assistants, Tom Schneider and Paul Zitlau, who built a working prototype in the Fortran IV programming language. It contained the basic ingredients of the electronic spreadsheet, including the crucial feature of backing each individual figure in a cell with the calculative formula behind the entry.[1]
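
That crucial feature, a displayed figure recomputed from the formula behind the cell, can be sketched in a few lines of modern Python. This is a hypothetical toy model for illustration; Mattessich’s Fortran IV prototype is not reproduced here.

    # Toy spreadsheet model: each cell holds a constant or a formula (callable),
    # so every displayed figure is backed by the calculation behind the entry.
    class Sheet:
        def __init__(self):
            self.cells = {}

        def set(self, name, entry):
            self.cells[name] = entry

        def value(self, name):
            entry = self.cells[name]
            return entry(self) if callable(entry) else entry

    budget = Sheet()
    budget.set("A1", 1000)  # revenue
    budget.set("A2", 600)   # costs
    budget.set("A3", lambda s: s.value("A1") - s.value("A2"))  # profit formula
    print(budget.value("A3"))  # 400
    budget.set("A2", 700)      # change an input figure...
    print(budget.value("A3"))  # ...and the dependent figure recomputes: 300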

Mattessich was sanguine about his invention, recognizing that it came at a time when computers were being considered for a number of simulation projects, so it was “reasonable to exploit this idea for accounting purposes.”[2] These other projects included the modeling of ecological and weather systems as well as national economies. In fact, Mattessich’s concurrent work on national accounting systems was better received, and his book on the topic became a classic in its own right.[3] His spreadsheet attempts soon fell into obscurity because mainframe technology was not as powerful or as interactive as the microprocessor-powered personal computers that followed.

The computer systems of the mid-1960s were not conducive to the type of interactivity that would make spreadsheets so attractive in the 1980s. Computers were big, secluded, and attended to by a slew of programmer acolytes who religiously protected their technological and knowledge domains. They were the province of the EDP (electronic data processing) department and removed from all but the highest management by procedures, receptionists, and security precautions.

Computers ran their programs in groups, or “batches,” of punched cards delivered to highly sequestered data processing centers, with the results picked up some time later: sometimes hours, sometimes days. Batch processing was used primarily for payroll, accounts payable, and other accounting processes that could be done on a scheduled basis. The introduction of minicomputers using integrated circuits, such as DEC’s PDP-8, meant more companies could afford computers, but they did not significantly change their accounting procedures or herald the use of Mattessich’s spreadsheet.

Although the earliest PCs were weaker than their bigger contemporaries, the mainframes and even the minicomputers, they had several advantages that increased their usefulness. Their main advantage was immediacy; the microcomputer was characterized in part by its accessibility. It was small, relatively cheap, and available through a number of retail outlets. It used a keyboard for human input, a cathode-ray monitor to view data, and the newly invented floppy disk for storage.

But just as important was the fact that it bypassed the traditional data processing organization, which was constantly striving to keep up with new processing requests. One implication was that frustrated accountants would go out and buy their own computers and software packages over the objections or indifference of the EDP department. It also meant new flexibility in the speed and amount of information immediately available. Levy recounts the following comments from a vice president of data processing at Connecticut Mutual, who eventually bought one of the earliest microcomputer spreadsheet programs to do his own numerical analyses:

    DP always has more requests than it can handle. There are two kinds of backlog – the obvious one, of things requested, and a hidden one. People say, “I won’t ask for the information because I won’t get it anyway.” When those two guys designed VisiCalc, they opened up a whole new way. We realized that in three or four years, you might as well take your big minicomputer out on a boat and make an anchor out of it. With spreadsheets, a microcomputer gives you more power at a tenth the cost. Now people can do the calculations themselves, and they don’t have to deal with the bureaucracy.[4]

Despite the increasing processing power of mainframes and minis, and the new interactivity afforded by timesharing, keyboards, and cathode-ray screens, the use of computerized spreadsheets did not take off until the introduction of the personal computer. Only after the spreadsheet idea was rediscovered in the context of the microprocessing leap of the next decade would Mattessich’s ideas be acknowledged.

Notes

[1] Mattessich and Galassi credit assistants Tom Schneider and Paul Zitlau with developing the first actual computerized spreadsheet, based on an algebraic model written by Mattessich. See Mattessich, Richard, and Giuseppe Galassi, “History of the Spreadsheet: From Matrix Accounting to Budget Simulation and Computerization,” in Accounting and History: A Selection of Papers Presented at the 8th World Congress of Accounting Historians, Madrid, Spain, 2000 (Asociación Española de Contabilidad y Administración de Empresas, AECA, 2000). See also George J. Murphy, “Mattessich, Richard V. (1922- ),” in Michael Chatfield and Richard Vangermeersch, eds., The History of Accounting: An International Encyclopedia (New York: Garland Publishing, 1997): 405.
[2] The case for Richard Mattessich having developed the first electronic spreadsheets is made extensively in Mattessich and Galassi, “History of the Spreadsheet” (see note 1).
[3] Mattessich’s Accounting and Analytical Methods: Measurement and Projection of Income and Wealth in the Micro and Macro Economy (1964), published by Irwin, was part of that movement toward national accounting systems. A later version was published as Accounting and Analytical Methods: Measurement and Projection of Income and Wealth in the Micro- and Macro-Economy (Scholars Book Company, 1977). It was discussed in Chapter 3, “Statistics: The Calculating Governmentality,” of my PhD dissertation, Symbolic Economies and the Politics of Global Cyberspaces (1993).
[4] Levy, S. (1989) “A Spreadsheet Way of Knowledge,” in Tom Forester (ed.), Computers in the Human Context: Information Technology, Productivity, and People. Oxford, UK: Basil Blackwell.


CISCO SYSTEMS: FROM CAMPUS TO THE WORLD’S MOST VALUABLE COMPANY, PART THREE: PUSHING TCP/IP

Posted on | April 20, 2017 | No Comments

Len Bosack and Sandy Lerner combined several technologies being developed at Stanford University and in the Silicon Valley area to form the networking behemoth Cisco Systems, although success was by no means foreseen in the early years. The pair obtained access to Bill Yeager’s source code for the multiple-protocol “Blue Box” router in 1985. Yeager’s software became the foundation for the Cisco operating system and a major stimulant for the adoption of the Internet Protocol in data communications systems around the world.

In 1980, Yeager became responsible for networking computers at the Stanford Medical School. A number of devices were proliferating on campus, such as DEC10, PDP-11/05, and VAX systems, and especially a number of Xerox machines, including PARC Lisp machines and Alto file servers and printers. The Xerox influence was substantial, as the project started with routing the PARC Universal Packet (PUP) protocol for the Xerox PARC systems and mainframes. The router was later configured to handle IP addresses for the new VAX 750s as well as Xerox’s own proprietary networking protocol, XNS (Xerox Network Systems).

After some controversy, the Stanford Office of Technology Licensing eventually decided to allow Cisco to use the technology. The venerable institution ultimately decided to “make the best of a bad situation” and, on April 15, 1987, licensed the router software and two computer boards to Cisco for $19,300 in cash, $150,000 in royalties, and product discounts. It refused equity in Cisco Systems as “a matter of policy.”[1] The agreement named Yeager as the principal developer of the source code, making him one of the unsung heroes of the Internet age.

The couple initially ran the company from their home at 199 Oak Grove Avenue in Atherton, CA. Using their credit cards for startup capital, they set up an office and started assembling routers in their living room. Using credit, they even installed a $5,000 mainframe computer in their garage. They needed it to stay on the ARPANET and take orders for their network equipment, making them early e-commerce innovators.[2] Bosack focused on technology and Lerner on marketing. Lerner’s ad meme “IP Everywhere” was ahead of its time.

The early years were tough. Venture capital was difficult to acquire, and the couple reportedly made over 70 visits in search of money for their venture. At that time, only semiconductor companies were being funded, and the Internet was barely known outside of academia. Finally, Sequoia Capital stepped in for a considerable percentage of the business. Don Valentine had passed on Steve Jobs and Apple and consequently had developed a more open mind toward new innovations. Sequoia put in $2.5 million for one-third of the company and the ability to make major management decisions.

By the end of 1986, the company was growing rapidly. Although it took two years to get out of the garage, the computer and communications revolution was taking off. PCs were becoming commonplace and, even more important, becoming networked. TCP/IP was also gaining traction. The Department of Defense mandated its use at the center of the ARPANET and granted funding to projects that coded TCP/IP implementations for IBM machines and for operating systems such as UNIX. The emerging NSFNET also required TCP/IP protocols and compliant networking technologies. By mid-1985, almost 2,000 computers hosted TCP/IP technology.
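
To give a sense of what “hosting TCP/IP” meant in practice, here is a minimal echo exchange over a TCP socket. This sketch uses modern Python rather than the C implementations of the period, and it is illustrative only, with no relation to Cisco’s router code:

    # A minimal TCP echo exchange using Python's standard socket library.
    # Illustrates TCP/IP's end-to-end model, not any Cisco or period software.
    import socket
    import threading

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 9000))  # loopback address; port chosen arbitrarily
    srv.listen(1)

    def echo_once():
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(b"echo: " + conn.recv(1024))

    threading.Thread(target=echo_once, daemon=True).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", 9000))
        client.sendall(b"hello")
        print(client.recv(1024).decode())  # prints "echo: hello"
    srv.close()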

Despite the growing enthusiasm for TCP/IP in the military, academic, and research-institute sphere, the major manufacturers of computer communications equipment were focused on the OSI model and believed market forces would eventually move in its favor. However, in 1986, advocates of TCP/IP took action to improve and promote their protocols. The Internet Advisory Board (IAB) implemented a strategy in two parts. The first was to encourage more participation in TCP/IP standards development; it resulted in the May 1986 publication of RFC 985, “Requirements for Internet Gateways,” and other recommendations. The second was to inform equipment vendors about the features and advantages of TCP/IP. This involved organizing several vendor conferences, including the “TCP/IP Vendors Workshop” on August 25-27, 1986, and the “TCP/IP Interoperability Conference” in March 1987.

While some vendors were disappointed that no certification and testing process was forthcoming, the conferences allowed the advocates of TCP/IP to provide guidance to equipment manufacturers incorporating their protocols. It was under the leadership of Dan Lynch that these conferences were started, and in 1988 they came under the heading of INTEROP. As the industry took shape with innovations like the Simple Network Management Protocol (SNMP), introduced at INTEROP ’88, Cisco was one of its main beneficiaries.

In 1986, Cisco introduced the F Gateway Server (FGS) remote access server as well as its first commercial multi-protocol network router, the Advanced Gateway Server (AGS). The “Massbus-Ethernet Interface Subsystem” was an interface card made for DEC computers; it could bridge local area networks running different protocols, such as TCP/IP and PUP, and its highest line rate was 100-Mbps FDDI. The AGS was capable of connecting multiple LAN and WAN networks, with a throughput of 10 Mbits per second and the capacity to process 300 packets a second. It supported 200 routing tables and cost approximately $5,550. The AGS was quickly adopted in networks such as CSUNet at Colorado State University, and soon universities all around the country were calling and emailing Cisco about its equipment.

In November 1986, they moved to their first office at 1360 Willow Road, Menlo Park, CA. Revenues had reached $250,000 a month. By May 1988, sales had doubled, and just three months later, they doubled again. By 1989, Cisco had three products and 115 employees and reported revenues of $27 million.[2]

Notes

[1] Information on Cisco’s origins is relatively scarce and dominated by Cisco public relations. Tom Rindfleisch of Stanford University and Bill Yeager, a Senior Staff Engineer at Sun Microsystems, Inc., present a larger story at http://smi-web.stanford.edu/people/tcr/tcr-cisco.html.
[2] Information on Cisco’s habitats from Segaller, S. (1999) Nerds 2.0.1: A Brief History of the Internet. New York: TV Books. pp. 240-247.


Lotus 1-2-3 – A Star is Born

Posted on | April 7, 2017 | No Comments

It was in November 1982, on the giant floor of the Comdex show in Las Vegas, that Lotus 1-2-3 first made its mark. While VisiCalc for the Apple II had shown the viability of both digital spreadsheets and the new “microcomputers,” Lotus 1-2-3 showed that spreadsheets would become indispensable for modern organizations and global digital finance.

Initially, Jonathan Sachs was worried. He had spent nearly a year working with Mitch Kapor and his small startup company, Lotus Development Corporation, on a new spreadsheet program for the recently released IBM PC. A former programmer at Data General Corp., he was apprehensive about the prospects of his new creation. It was too difficult for its intended audience, he thought, and would scare users away.

With questions in his mind and an updated resume in hand, Sachs took his baby to the crowds on the Las Vegas show floor. Lotus had begun to publicize its new spreadsheet nearly half a year before the Comdex show, and a Wall Street Journal article just before the event was particularly helpful. The new application drew crowds comparable to Apple’s debut at the West Coast Computer Faire, and Lotus 1-2-3 turned out to be a big hit with the swarming exhibit attendees.

“By the time the workers started to tear down the exhibit stalls at the end of the Comdex show, Lotus had taken about $3 million in orders based on the demo alone. Little did Sachs know his creation would change the computer industry forever.”[1] Nor did he know it would change the world of finance.

But that was just the start of the company’s success. Soon Lotus 1-2-3 would top the sales lists for all computer software. After the new electronic spreadsheet was officially released on January 26, 1983, the company logged sales of some 60,000 copies in its first month. Because software can be reproduced quickly once developed, Lotus was able, barely, to keep up with demand. Within a few months, 1-2-3 was heading distributor Softsel Computer Products Inc.’s best-seller list, where it would stay for the next two years. Lotus 1-2-3 would go on to become the best-selling computer software application of the 1980s.

Lotus 1-2-3’s success was based on a combination of great programming skills and market savvy. Sachs and Kapor had decided to create a spreadsheet that would better the functionality and speed of VisiCalc, the tremendously popular application designed for the Apple II. Kapor had created graphics capabilities for VisiCalc, while Sachs had written software for minicomputers at Data General. Because of his experience with the popular market, Kapor called the marketing shots, while Sachs, who had the stronger programming background, worked to realize Kapor’s conceptual software ideas.

With startup funds from Ben Rosen, who had left Morgan Stanley to become a venture capitalist, the small firm began to work on a new product for the business market. Together they decided to create a new microcomputer spreadsheet using a faster programming language than competitors such as Context MBA and SuperCalc. Their target was a software application for the newly released IBM PC.

Instead of the easier Pascal programming language, Kapor and Sachs decided to use the more tedious assembly language to construct their new software package. Assembly worked closer to the machine language of the computer, which made it much harder to program but yielded a faster and more robust final spreadsheet package. They spent ten months developing the application, which grew from a core product into the final 1-2-3 through a series of decisions to add various features.

Step by step they built and tested the new prototype, at one point dropping a word processor module because it was too difficult. As Sachs said afterwards, “From a programmer’s standpoint, it was a mind-boggling challenge to write that much code in assembly language that fast and get it to be really solid.” The finished product, Lotus 1-2-3 Release 1 for DOS, which included a spreadsheet, graphing capabilities, database storage, and a macro language, required only 192K of RAM to run. Because of the efficiency of assembly language, it was much faster than its major competitors, Context MBA, Multiplan, and VisiCalc, despite including the extra features.

Just as VisiCalc had helped Apple’s sales, Lotus 1-2-3’s popularity helped IBM’s sales take off. Launched in the late summer of 1981, the IBM PC faced stiff competition from the Apple II and a host of new computer manufacturers using the CP/M operating system. Although IBM had the name recognition, particularly in the business world, its PC still needed the kind of practical application that would justify its expense. Lotus 1-2-3 supplied the incentive that would put a PC “on top of every desk in the business world.” Sachs reminisced about the impact of Lotus 1-2-3 on the IBM PC: “It was pretty amazing because a factor of five more PCs got sold once that software was available. It was very closely tuned to the original IBM PC and pretty much used all of its features.”[2] Lotus 1-2-3 could also run on “IBM-compatible” machines such as the simultaneously developed Compaq portable computer.

Both Sachs and Kapor left Lotus in the mid-1980s. The fast growth had taken its toll and created many problems that diminished their enthusiasm: supply shortages, labor problems, and disputes with distributors wore on the original cast. Sachs left in 1985, after the introduction of Release 2 for MS-DOS, which included add-in support, better memory management, more rows, and support for math coprocessors. Kapor left in 1987, after Release 2.01 was introduced.

But neither left before Lotus 1-2-3, with its combination of graphics, spreadsheets, and data management, caught the eye of business entrepreneurs and corporate executives who saw the value of a computer program that simplified the monumental amount of numerical calculation and manipulation needed by the modern corporation. By October 1985, CFO magazine was reporting that “droves of middle managers and most financial executives are crunching numbers with spreadsheet programs such as Lotus 1-2-3.”[3]

Notes

[1] Information about Lotus at Comdex from “From the floor at Comdex/Fall in 1982, Lotus 1-2-3 hit the ground running and has not slowed down”. CMP Media LLC. Accessed on August 26, 2003.
http://www.crn.com/sections/special/supplement/816/816p71_hof.asp
[2] Quotes attributed to Sachs were taken from “From the floor at Comdex/Fall in 1982, Lotus 1-2-3 hit the ground running and has not slowed down”. CMP Media LLC. Accessed on August 26, 2003.
http://www.crn.com/sections/special/supplement/816/816p71_hof.asp
[3] Quote from CFO on the impact of Lotus 1-2-3 in the corporate world from David M. Katz, “The taking of Lotus 1-2-3? Blame Microsoft.” CFO.com. December 31, 2002.


That Remote Look: History of Sensing Satellites

Posted on | March 27, 2017 | No Comments

We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.

– President John F. Kennedy, September 12, 1962

During U.S. President Kennedy’s speech at Rice University, where he dedicated the new Manned Spacecraft Center in nearby Houston, he stressed that not only would the US go to the Moon, but it would “do the other things.” He mentioned:

“Within these last 19 months at least 45 satellites have circled the Earth. Some 40 of them were made in the United States of America. Transit satellites are helping our ships at sea to steer a safer course. TIROS satellites have given us unprecedented warnings of hurricanes and storms, and will do the same for forest fires and icebergs.”

The TIROS-1 satellite was launched on April 1, 1960, from Cape Canaveral, Florida, and carried two TV cameras and two video recorders. The satellite was primarily built by RCA, a major TV and radio manufacturer. Short for Television InfraRed Observation Satellite, TIROS weighed 122 kg and stayed up for only 78 days. Nevertheless, it showed the practicality of using the dynamics of electromagnetism for viewing cloud formations and observing patterns for weather event prediction.

President Dwight Eisenhower had been secretly coordinating the space program as part of the Cold War since the early 1950s. He had become accustomed to the valuable photographic information obtained from spy planes. When his administration took office in early 1953, tensions with Communist countries were increasing rapidly. After the USSR conducted successful atomic and hydrogen bomb tests, he considered satellites a crucial new Cold War technology.

Eisenhower’s “New Look” policy identified aerospace as a decisive component of future US military strategy. The successful D-Day invasion of Europe, which he had managed as head of the Allied forces during World War II, had been meticulously reconnoitered with low- and high-altitude photography from a variety of reconnaissance aircraft. Given the growing nuclear capacity of the USSR, he particularly wanted satellites that could assess how rapidly the Communists were producing long-range bombers and where those bombers were being stationed. As the Soviets began to deploy rocket technology siphoned from defeated Nazi Germany, it became important to locate and monitor launchpads for nuclear ballistic missiles.

The top-secret Corona spy program was the first attempt to map the Earth from space with satellites. The Corona spacecraft were built by Lockheed for the CIA and the Air Force and equipped with 70mm “Keyhole” cameras. These started with an imaging resolution of approximately 40 feet, enough to locate airfields and large rockets.

The destruction of an American U-2 spy plane during a USSR overflight on May 1, 1960, accelerated the need for satellite-based surveillance. President Eisenhower had proposed an “open skies” plan at a 1955 summit conference in Geneva with England, France, and the USSR that would allow each country to fly over the others’ sovereign territory to inspect launchpads capable of rocketing Intercontinental Ballistic Missiles (ICBMs) into space. Soviet leader Nikita Khrushchev had refused the proposal and ordered missiles to bring down the high-altitude US spy plane. Khrushchev took pleasure in displaying the wreckage for the international press and in the subsequent show trial of pilot Francis Gary Powers. The U-2 would show its value once again when it detected Soviet missiles in Cuba, but the new competition to conquer space would dramatically improve aerospace technology and the ability to see from space.

Like most of the early US attempts to achieve space flight, the first Keyhole-equipped satellites failed to reach orbit or suffered other technical failures. The US had also obtained its rocket technology from the Nazis, and early adaptations such as the A-4 and Redstone rockets required much testing before reliable launches occurred. This knowledge was applied to the next-generation Thor-Agena rockets, which were used as launch vehicles for Corona spy satellites from June 1959. By the late summer of 1960, a capsule containing the first Keyhole film stock was retrieved in mid-air by an Air Force cargo plane as it parachuted back down to Earth. By 1963, Keyhole resolution had improved to 10 feet, and by 1967 to 5 feet.

It was the USSR, though, that set the precedent for orbital overflight with its Sputnik satellites. While Eisenhower had sought at the Geneva summit to assure the world of America’s peaceful intentions in space, the Soviets launched an R-7 ICBM 100 km into space two years later with a payload the size of a beach ball called Sputnik. It is still a matter of speculation whether Eisenhower baited the USSR into going into orbital space first, but when the US and other countries around the world failed to protest the overflight of Sputnik, it set the legal precedent for satellites flying over other countries.

As the “Space Race” heated up during the mid-1960s, rocket capabilities improved and new applications were conceived. The Mercury and Gemini space capsules began to use innovative photographic technologies to capture Earth images. Weather satellites like TIROS-1 had been monitoring Earth’s atmosphere since 1960, and the idea of sensing land and ocean terrains was being developed. Although the details of the spy satellites were highly classified, enough information about the possibilities of high-altitude sensing of Earth terrains circulated in the scientific community. In 1965, William Pecora, the director of the U.S. Geological Survey (USGS), proposed a satellite program to gather information about the natural resources of our planet. The idea of remote sensing was born, and the USGS would partner with NASA to take the lead.

NASA, the National Aeronautics and Space Administration, had been created in 1958 to engage the public’s imagination and support for the civilian uses of spacecraft. The Apollo program was conceived as early as 1960 and eventually reached the Moon. The program also sparked reflection, not just on reaching the apex of an extraordinary human journey, but on the origins of that trip. We went to the Moon, but we also discovered our home planet, what Buckminster Fuller called “Spaceship Earth.”

History was made on August 23, 1966, when the first photo of the Earth from the perspective of the Moon was transmitted by NASA’s Lunar Orbiter 1 and received at the NASA tracking station at Robledo de Chavela near Madrid, Spain. The image was taken during the spacecraft’s 16th orbit and was the first view of Earth captured by a spacecraft from the vicinity of the Moon. The Lunar Orbiter program comprised five unmanned missions designed to help select landing sites for Apollo. In mapping the Moon’s surface, the orbiters pioneered some of the earliest remote sensing techniques.

In 1966, the USGS and the Department of the Interior (DOI) began working together to produce their own Earth-observing satellite program. They faced a number of obstacles, including budget problems due to the increasing costs of the war in Vietnam. But they persevered, and on July 23, 1972, the Earth Resources Technology Satellite (ERTS) was launched. Soon renamed Landsat 1, it was the first in a series of satellites launched to observe and study the Earth’s landmasses. It carried a system of cameras built for remote sensing by the Radio Corporation of America (RCA) called the Return Beam Vidicon (RBV). Three independent cameras sensed different spectral wavelengths to obtain visible and near-infrared (IR) photographic images of the Earth. RBV data was processed to 70-millimeter (mm) black-and-white film rolls by NASA’s Goddard Space Flight Center and then analyzed and archived by the USGS Earth Resources Observation and Science (EROS) Center.

The second device on Landsat-1 was the Multispectral Scanner (MSS), built by the Hughes Aircraft Company. It provided radiometric images of the Earth through the ability to distinguish very slight differences in energy and continues to be a major contributor to Earth sensing data.

The Landsat satellite program has been the longest-running program for the acquisition and archiving of satellite-based images of Earth. Since the early 1970s, Landsat satellites have constantly circled the Earth, taking pictures, collecting “spectral information,” and storing it for scientific and emergency management services. These images serve a wide variety of uses, from gauging global agricultural production to monitoring the risks of natural disasters.

A successful partnership between NASA and the USGS, Landsat plays a critical role in monitoring, analyzing, and managing the earth resources needed for sustainable human environments. It manages and provides the largest archive of remotely sensed land data, current and historical, in the world. Landsat uses a passive approach, measuring light and other energy reflected or emitted from the Earth. Much of this light is scattered by the atmosphere, but techniques have been developed for the Landsat space vehicles that dramatically improve image quality. Each day, Landsat 8 adds another 700 high-resolution images to an unparalleled database, giving researchers the capability to assess changes in Earth’s landscape over time. Landsat 9 will carry even more sophisticated technologies when it is launched in 2020.

Since 1960, the National Oceanic and Atmospheric Administration (NOAA) and its predecessor agencies have worked with NASA to build and operate two fleets of satellites to monitor the Earth. One is the Polar-orbiting Environmental Satellites (POES), which fly north and south over the Arctic and Antarctic regions. These make about 14 orbits a day, with each rotation covering a different band of the Earth.

The other is the Geostationary Operational Environmental Satellites (GOES), which operate in the higher geosynchronous “Clarke Belt.” This position allows them to measure reflected radiation and some Earth-emitted energies from a single stationary vantage point over a set location, recording a wide range of atmospheric and terrestrial information for weather and potential disaster warnings.

Both the Landsat and GOES satellites provide a constant stream of data and imagery to help us understand weather events and earth resources. Both are vital to observing current meteorological and land-based events that warrant monitoring, study, and reporting.


Space Shuttles, Satellites, and Competition in Launch Vehicles

Posted on | February 25, 2017 | No Comments

The NASA space shuttle program provided a valuable new launch vehicle for satellites. This post recounts the beginning of the US space shuttle development and its impact on satellite launches.

The notion of a reusable spacecraft had been a dream since the days of Flash Gordon in the 1930s, but a number of technical problems precluded its feasibility for NASA’s objectives. Foremost was the lack of sufficient insulation to protect the shuttle during multiple re-entries. NASA had instead relied on the expendable rocket model inherited from Nazi Germany’s work on the V-2, the weapon that killed approximately 10,000 civilians in attacks on England but whose engineering team later helped put Americans on the Moon. With the Apollo program winding down in the early 1970s, new plans were developed and put forward for a reusable space shuttle.

In January 1972, President Nixon announced that NASA would begin a program to build a Space Transportation System (STS), more commonly known as the Space Shuttle. During the previous summer of 1971, Nixon had been convinced by John Ehrlichman and Caspar Weinberger that the US should pursue the Space Shuttle.

NASA had floated various plans for a “Reusable Ground Launch Vehicle” as early as 1966, but in the wake of the public’s boredom with the Moon visits, enthusiasm for space exploration diminished. Democrats running for president in 1972 were critical of the billions of dollars needed for the “space truck.” Senator Edmund Muskie (D-ME) campaigned on the promise of shelving the space shuttle. Senator Walter Mondale (D-MN), another candidate for president, called the Space Shuttle program “ridiculous” during a nationally televised debate. Much of the country felt that problems of housing, urban decay, and poor nutrition for children were higher priorities.

But the NASA budget that Congress passed in the spring of 1972 by a vote of 277-60 included funding for the Space Shuttle, and Nixon’s resounding electoral victory later that year ensured the administration’s support, at least for a while.

The next ten years were challenging ones for NASA which faced numerous funding and technical problems. The space agency made up for its diminishing budget by allocating more internal funds to the space shuttle project. Although enthusiasm for space exploration had diminished, the practical uses of space-based satellites were encouraging.

The space shuttle was to be launched on the back of a traditional rocket, maintain a relatively low orbit, and then glide down to a runway on Earth. The last part was particularly difficult, as temperatures exceeded 3,000 degrees Fahrenheit during the descent. This was solved by gluing some 33,000 silica thermal tiles to the bottom of the vehicle.

As the STS descended at 25 times the speed of sound, it also needed a complex guidance system to direct it. The avionics (guidance, navigation, and control) system used four computers to coordinate data from star trackers, gyros, and accelerometers, along with inputs from ground-based laboratories, to guide the spacecraft. Whereas the Mercury flights were satisfied with landings within a mile of their pickup ships, the space shuttle required a precise landing on a specific runway after a several-thousand-mile glide.[1]

On April 12, 1981, the space shuttle Columbia blasted off from the Kennedy Space Center on STS-1, its inaugural flight. After 54 hours and 37 Earth orbits, it landed safely at Edwards Air Force Base in California. (I remember the event because my little kitten, Marco Polo, went up to the TV and started to paw at the descending spacecraft.) During its initial flight, Columbia successfully tested the cargo doors that needed to open to launch satellites from the shuttle’s maximum orbit toward the geosynchronous orbit thousands of miles higher. On the next flight, the crew tested a Canadian remote manipulator arm designed to retrieve satellites from orbit and repair them.

The space shuttle provided a significant boost to the satellite industry. Columbia’s fifth flight successfully launched two commercial satellites, Canada’s Anik C and Satellite Business Systems’ (SBS) third satellite. The remote manipulator arm later proved useful when it retrieved and repaired the Solar Max satellite in April 1984 and, later, one of Indonesia’s Palapa satellites that had failed to reach the geosynchronous Clarke orbit.

The program ran very smoothly until the Challenger space shuttle blew up on a chilly January morning in 1986. Seventy-three seconds after launch, the spacecraft exploded, killing the entire crew. The disaster halted shuttle launches for over two years. During this time, President Reagan announced that when the shuttle resumed service, it would carry few if any commercial satellites. Reagan’s intention was to privatize launch services and reserve the shuttle for military and scientific activities, including the infamous “Star Wars” program to create a space-based shield to protect the US from attack.[2]

Incidentally, the plan to privatize space launches proved disastrous for the US, as the Europeans and Chinese quickly captured a significant share of the market. The Ariane and Long March rockets proved to be viable alternatives to the space shuttle. Under pressure from the rapidly growing telecommunications industry and the transnational corporate users who needed the additional communications capacity, the Bush and Clinton administrations repeatedly approved the launching of American satellites by other countries.

Notes

[1] Space Shuttle development details from Henry C. Dethloff, http://history.nasa.gov/SP-4219/Chapter12.html. Accessed February 24, 2006.
[2] Winter, F. 1990. Rockets into Space. Cambridge, MA: Harvard University Press. pp. 113-126.


GOES-16 Satellite and its Orbital Gaze

Posted on | February 7, 2017 | No Comments

“With this kind of resolution, if you were in New York City and you were taking a picture of Wrigley Field in Chicago, you’d be able to see home plate.” So says Eric Webster, vice president and general manager of environmental solutions and space and intelligence systems for the Harris Corp. of Fort Wayne, Indiana, about the capabilities of the newly launched GOES-16 satellite (Geostationary Operational Environmental Satellite). But what this statement fails to convey is the comprehensive view of the Earth that the satellite provides and the extraordinary amount of information that can be gleaned from its images.

NASA launched GOES-16, formerly known as GOES-R, on November 19, 2016, and after testing, it became operational earlier this year. The satellite provides powerful new eyes for monitoring potential disasters, including floods and other weather-related dangers. It was built for the National Oceanic and Atmospheric Administration (NOAA) by Lockheed Martin in Denver, Colorado, with imagers by Harris, and launched on an Atlas rocket.

With 16 different spectral channels and improved resolution, scientists can monitor a variety of events such as hurricanes, volcanoes, and even wildfires. The satellite’s two visible channels, ten infrared channels, and four near-infrared channels allow the identification and monitoring of a number of earth and atmospheric events. Unlike the earlier GOES-13, it can combine data from the sixteen spectral channels of its Advanced Baseline Imager (ABI) to produce high-resolution composite images.

Operating from geosynchronous orbit roughly 36,000 km (22,240 miles) above the equator, the satellite can take images of the entire Earth disk with the ABI instrument. It can also focus on just a continent or a smaller region that may be impacted by a specific weather event. Parked at 89.5 degrees west longitude, the satellite has a good view of the Americas all the way to the coast of Africa. (A future GOES satellite will focus on the Pacific side.) It can take a full-disk image of the Earth every 15 minutes, an image of the continental U.S. every 5 minutes, and an image of a specific locale every 30 seconds.
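
Those cadences imply a striking volume of imagery. The back-of-the-envelope count below, sketched in Python, treats each cadence independently; in practice the ABI’s scan modes interleave these sectors, so actual counts differ:

    # Idealized daily image counts implied by the scan cadences above.
    # Real ABI scan modes interleave the sectors, so actual totals differ.
    MINUTES_PER_DAY = 24 * 60

    full_disk = MINUTES_PER_DAY // 15  # full Earth disk every 15 min -> 96
    conus = MINUTES_PER_DAY // 5       # continental U.S. every 5 min -> 288
    mesoscale = MINUTES_PER_DAY * 2    # a locale every 30 seconds -> 2880

    print(full_disk, conus, mesoscale)  # 96 288 2880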

[Photo from the NOAA Photo Library]

What is most significant is what the satellite can do to inform the public of weather events and potential disasters. It can monitor water vapor in the atmosphere and depict rainfall rates. It can gauge melting snowpacks, predict spreading wildfires and measure the poisonous sulfur dioxide emissions of volcanic eruptions. It can sense sea surface temperatures and provide real-time estimates of the intensity of hurricanes, including central pressure and maximum sustained winds.

One of the most valuable benefits will be monitoring the key ingredients of severe weather, like lightning and tornadoes. GOES-16 also carries the Geostationary Lightning Mapper (GLM), which monitors the weather for severe conditions, primarily by detecting lightning. It uses high-speed cameras that take pictures 200 times per second, allowing it to detect cloud-to-ground lightning as well as lightning between clouds. These features allow it to decrease the warning time for severe weather events.

GOES-16 will reduce the risks associated with weather and other potential disasters throughout the Americas and provide much-needed support for first responders as well as policymakers.


How Schindler Used the List

Posted on | January 28, 2017 | No Comments

When Schindler’s List (1993) was released, I was living in Wellington, New Zealand, but I caught the film during the winter holidays in Hawaii. When I got back to Wellington, I read the book Schindler’s Ark and wrote this article for the city’s newspaper in anticipation of the movie’s NZ premiere in March. It appeared in The Evening Post on March 8, 1994. In it, I examine the political ideology and technology used by the Nazis.


Schindler’s List (1993), Steven Spielberg’s acclaimed movie on the Holocaust, premieres in Wellington on Friday. Dr Anthony Pennings backgrounds the reasons for the programme of mass genocide.

The cinematic adaptation by Steven Spielberg of Thomas Keneally’s 1982 novel Schindler’s Ark has won international acclaim as one of the best movies of the year. The story credits Oskar Schindler, the Austrian-born industrialist, with saving over 1,200 Jews from almost certain slaughter in Nazi death camps during the Second World War. By employing them as slave labour in his factories, he was able to harbour them from the mass genocide programme conducted throughout the German-occupied territories.

Although excellent narratives about Oskar Schindler, the book and movie lack adequate descriptions of why and how the Nazis conducted their murders. Not that any justification can be given for the killing of nearly six million Jews, but the popular stories lack the historical background needed to come to grips with the horrible actions of the Nazis. The rationales behind the Nazi extermination programme against the Jews are not as obscure as some people would think, though they are often hard to hear for our enlightened, liberal ears. The belief in “humanity” and the equality of races, although predominant in our time, is a rather new idea with a weak historical foundation.

One of the strongest challenges to the Enlightenment period that advanced these ideas was the German National Socialist movement, a parochial, tribal movement based on a belief in racial superiority. The Nazis believed that the Germans embodied the Aryan bloodlines, which gave them privileged access to a type of spiritual plane or electrical force that could make them living gods.

They sought to destroy communism, democracy, industrial capitalism, and other forces that supposedly threatened their Aryan bloodlines, and they favoured the rule of wise priest-kings imbued with mystical power.

They believed that any dilution of their gene pool through mixing with “lower races” would lock them out of their Garden of Eden. This deeply held mystical paganism was strengthened by the teachings of Darwinism and the pseudo-science of eugenics, which emerged in the late 19th century. These new beliefs gave the Nazis the rationalisation, however misguided, for their fears of mixing with outsiders.

The Nazis believed the Jewish race was the chief threat to the Germanic people. This belief can be traced back to the writings of Martin Luther, the first best-selling book author, who not only sparked the Protestant Reformation but also left a lasting anti-Semitic legacy with his later writings. According to Luther, Jews were second only to the devil in their capacity for evil.

The later Nazis also used metaphorical devices to denigrate the Jews, as in the film The Eternal Jew, which interspersed images of ghetto Jews with footage of rat hordes to suggest Jews were unsanitary and less than human.

Using a vast network of radio relays and loudspeakers dispersed throughout German cities, Adolf Hitler was able to preach his xenophobic version of the Jewish threat to millions of Germans. He argued that the ultimate goal of the Jew was world domination, and that the Jewish doctrine of Marxism, in particular, would mean the end of governance by the “aristocratic principle of nature,” the only hope for the German-Aryan bloodlines. Parliamentarism, the press, and the trade union movement were other conspiratorial techniques of the Jews, who would ultimately face the Aryan in a worldwide apocalyptic battle.

The Nazi Volkdom (the merging of race politics with the machinery of the State) became committed to eliminating the Jews (and other “sub-races” such as the Slavs) as a matter of national policy. Hitler’s elite warrior class, the black-uniformed SS (Schutzstaffel, or Defence Corps) became the main instrument for carrying out the Race and Resettlement Act, their euphemism for the extermination process.

Headed by Heinrich Himmler, this new group took charge of the secret police (the infamous Gestapo) and the concentration camps which were being built to hold political prisoners and other “anti-Reich” elements such as Bolsheviks and Freemasons. Pledged to give their lives to the Fuhrer, this treacherous and highly indoctrinated Teutonic brotherhood carried out the Holocaust orders.

Two groups, in particular, conducted most of the killings: the Totenkopfverbände, which bore the chilling death’s head insignia on its lapels, and the Einsatzgruppen, a special police force whose tactics shocked even many of the German generals. They combined precise military training and a high level of technocratic competence in pursuit of their ideal of a German racial utopia. The cost would be the lives of several million Jews from Western Europe, 1.7 million from the Soviet Union, and the incredible figure of three million from Poland, where most of the Schindler’s List story takes place.

What is so extraordinary about the Nazi pogrom is that the full force of modernity, with its technologies of chemical production, engineering design, information management, and logistical transport, was brought together under the management of a highly indoctrinated, or at least compliant, professional class. Bureaucratic and scientific advances were marshalled with chilling efficiency to carry out the ghastly killings.

The SS spread over the occupied territories to co-ordinate the corralling and transporting of Jews. From small villages, medium-sized cities, industrial centres, and other locations around Europe and the Soviet Union, millions of Jewish families were set in motion.

At first, the Jews were sent to ghettos in the large cities or to industrial factories and other sites of slave labour. As the war progressed, however, the “resettlement” process took priority. Competition arose between the Army and the SS over the use of the railroads, but the Army’s needs for supplies, reinforcements, and sometimes retreat were secondary to the ideological satisfaction of the Final Solution.

Even the war effort’s need for skilled labour gave way to Himmler and the SS, who, with Hitler’s blessing, only increased their extermination efforts as the prospects for winning the war dimmed. Trains flowed day and night with human cargo destined for the death camps at Auschwitz (an estimated 2,000,000 killed), Belzec (600,000), Chelmno (340,000), Majdanek (1,380,000), Sobibor (250,000), and Treblinka (800,000).

As a scholar of communications, I have been deeply influenced by Cambridge professor Jack Goody, whose The Logic of Writing and the Organization of Society (1986) has helped me understand some of the crucial relationships between information technology and the politics of modern life.

Innovators in bureaucracy and population technology, the Germans were leaders in the use of telegraph and teletype communications to control their national administrators and armies. By the turn of the twentieth century, the Germans had transformed British “political arithmetic” into “statistics” (state-istics), numerical techniques in the service of State and population administration. They used the Hollerith tabulating machines and punch cards originally designed for the US census to identify and control the population. These techniques were taken up by the SS in their management of the Final Solution.

From its first spoken word, “Name?”, Schindler’s List investigates the political technology used in the Holocaust. The census was an integral part of the process, as it allowed the Nazis to round up Jews and begin the continual sorting of who would be eligible for work, who would be transported to a concentration camp, and who would be killed. Everyone was assigned a number that was tattooed on their arm. Every number had an associated punch card. Every name had to be accounted for, registered, and given a position.

The list is an ancient political technology, which Spielberg chose as a major motif. It is linked to the film’s narrative in a meaningful way, reinforcing main themes such as the bureaucratic momentum of the Nazi machine. A striking example comes when Schindler’s trusted accountant, Itzhak Stern (played by Ben Kingsley), forgets to bring his working papers one day and winds up on a train awaiting deportation to an extermination camp. Schindler rushes down to the station to intervene but is told nothing can be done, as Stern is now on the list to be transported. Schindler only gets an exemption after he convinces the SS officer that he has the influence to have the officer sent to the Russian Front within weeks.

The list and its physical counterpart, the line, figure prominently throughout the film as media of control and efficiency. The line is a particularly brutal and yet effective political technology. It renders people passive and orderly. Disrupting or attempting to escape its smooth, linear surface is an invitation for punishment or death, as many Jews discover.

However, the list also becomes a technology of resistance and escape. With the Russians advancing, Schindler’s factory must yield to the Final Solution. He bribes enough Nazi officials, however, to transport 1,200 of his Jews to a new location near his hometown of Zwittau in Moravia. From within the Nazi bureaucratic maze, Schindler’s list emerges as a ticket to freedom for the Jews. The list is a manifest for getting on the train to Schindler’s new factory. Getting on this list is a matter of life and death.

For Keneally, it is a modern-day Noah’s Ark. As he writes in Schindler’s Ark about the legends that developed around Schindler’s list: “The list is an absolute good. The list is life.”

It is difficult to say whether Schindler’s List has a happy ending. Spielberg is much harder on Schindler than Keneally. Whereas the latter credits him with an early transformation, the filmmaker waits until nearly the end to acknowledge his attempts to put the welfare of the Jews ahead of his own self-interest. He invokes a Talmudic equation inscribed on the ring the Jews he saved offer him as a gift: “Whoever saves one life, saves the world entire.”

Schindler is overcome with grief at the end as he calculates the lives he could have saved with the money he wasted. Ultimately, we are left with this moral balance sheet.

Dr Anthony J Pennings is a political scientist and a lecturer in communications studies at Victoria University. He is not Jewish, but his parents lived under Nazi occupation in the Netherlands, a country that had 75 percent of its Jewish population shipped to Nazi concentration camps.

© ALL RIGHTS RESERVED



