The Landsat Legacy
Posted on | November 3, 2017 | No Comments
“It was the granddaddy of them all, as far as starting the trend of repetitive, calibrated observations of the Earth at a spatial resolution where one can detect man’s interaction with the environment.” – Dr. Darrel Williams, Landsat 7 Project Scientist
The Landsat satellite program is the longest-running program for sensing, acquiring, and archiving satellite-based images of Earth. Since the early 1970s, Landsat satellites have continuously circled the Earth, taking pictures, collecting “spectral information,” and storing it for scientific and emergency management services. These images serve a wide variety of uses, from gauging global agricultural production to monitoring the risks of natural disasters for organizations like the UNISDR. Landsat 7 and Landsat 8 are the current workhorses providing remote sensing services.[1]
The Landsat legacy began in the midst of the “space race,” when William Pecora, the director of the U.S. Geological Survey (USGS), proposed the idea of using satellites to gather information about the Earth and its natural resources. It was 1965, the U.S. was engaged in a highly charged Cold War with the Communist world, and space was seen as a strategic arena. Extensive resources had been applied to gathering satellite imagery for espionage, reconnaissance, and surveillance purposes, and some of that technology was also being shared with NASA. Pecora stated that the program was conceived “largely as a direct result of the demonstrated utility of the Mercury and Gemini orbital photography to Earth resource studies.” A remote sensing satellite program to gather facts about the natural resources of our planet was beginning to make sense.
In 1966, the USGS and its parent agency, the Department of the Interior (DOI), began working together to develop an Earth-observing satellite program. They faced a number of obstacles, including budget problems due to the increasing costs of the war in Vietnam. But they persevered, and on July 23, 1972, the Earth Resources Technology Satellite (ERTS) was launched. It was soon renamed Landsat 1, the first in the series of satellites launched to observe and study the Earth’s landmasses. It carried a set of cameras built for remote sensing by the Radio Corporation of America (RCA).
The Return Beam Vidicon (RBV) system consisted of three independent cameras that sensed different spectral wavelengths and could obtain visible and near-infrared (IR) photographic images of the Earth. RBV data was processed to 70-millimeter (mm) black-and-white film rolls by NASA’s Goddard Space Flight Center and then analyzed and archived by the USGS Earth Resources Observation and Science (EROS) Center.
The second device on Landsat-1 was the Multispectral Scanner (MSS), built by the Hughes Aircraft Company. The first five Landsats provided radiometric images of the Earth through the ability to distinguish very slight differences in energy. The MSS sensor responded to Earth-reflected sunlight in four spectral bands.
This video discusses remote sensing imagery, how it is created, and variations in resolution.
After 45 years of operation, Landsat, a partnership between NASA and the USGS, now manages and provides the largest archive of remotely sensed land data – current and historical – in the world. Its critical role has been monitoring, analyzing, and managing the Earth resources needed for sustainable human environments. Landsat uses a passive approach, measuring light and other energy reflected or emitted from the Earth. Much of this light is scattered by the atmosphere, but techniques have been developed to dramatically improve image quality. The following video discusses the Landsat program in some depth and how researchers can use its data.
Each day, Landsat 8 adds another 700 high-resolution images to this extraordinary database, giving researchers the capability to assess changes in Earth’s landscape over time. Landsat 9 will carry even more sophisticated technologies when it is launched into space in 2020.
Recently, major disasters due to hurricanes in the southern US states and the Caribbean have been monitored by the Landsat satellites. The year 2017 will be noted for major disasters, including Hurricane Maria in Puerto Rico, Hurricane Irma in the Caribbean, and Hurricane Harvey and its associated flooding in Texas.[2]
Notes
[1] Landsat 8 was launched in 2013 and images the entire Earth every 16 days. It is the eighth Landsat to be launched, although Landsat 6 failed to reach orbit and fell into the Indian Ocean after its launch in October 1993.
[2] USGS has released a recent report on all 4 major hurricanes of 2017 that adversely affected the southern US.
Russian Interference, Viral Sharing, and Friends Lying to Friends on Social Media in the 2016 Elections
Posted on | October 5, 2017 | Comments Off on Russian Interference, Viral Sharing, and Friends Lying to Friends on Social Media in the 2016 Elections
As discussed previously, social media is now a central part of modern democracies and their election processes. This was touted in the Obama presidential election in 2008 but became even more evident in the 2016 U.S. election, notably for the unexpected 304-232 Electoral College victory by Donald Trump. The real estate magnate and reality show TV star occupied the White House in January 2017, despite 2.8 million more US citizens voting for Hillary Clinton.
In this post, I look at some of the influence by Russia and other foreign entities on the recent election, as investigated and reported on by the Mueller Special Counsel’s Report on the Investigation into Russian Interference in the 2016 Presidential Election. My main concern, however, is the viral spreading of “memes” on social media by compliant voters who fail to read or make critical distinctions about the posts and articles they share with their “friends.” These collaborators contribute to the spreading of “fake news” from organizations like the Internet Research Agency (IRA), the notorious Russian “troll farm.” Emotionally disturbing messages, though, emerged from both political polarities and were meant to manipulate the behaviors and emotions of unsuspecting people on various social media platforms.
It is likely that Congress and the press, as well as the public, will be parsing this phenomenon over the next couple of months, if not years, as we all strive to understand where democracy is heading in the age of social media in politics.
In early September 2017, Facebook released the results of a preliminary investigation into possible Russian interference in the 2016 U.S. election. They announced that approximately US$100,000 in ad spending was associated with Russian profiles during the period from June 2015 to May 2017. They later announced to Congress that over 3,000 Facebook ads were believed to be from Russian sources. How powerful were these ads? How were they aided by overzealous Sanders supporters? Were shares by Trump supporters significant in influencing independent and GOP voters?
Did Russian operatives and agents carefully target the supporters of Bernie Sanders, Jill Stein, and Donald Trump with emotionally and politically charged ads? And how many of the targeted citizens “shared” Russian-produced propaganda over Facebook and other social media to their friends and others? The controversy has positioned Mark Zuckerberg, CEO of the $500 billion social media giant, as possibly the most important person in the world when it comes to the future of global democracy.
Zuckerberg is being proactive as some lawmakers are proposing to regulate social media by requiring all major digital media platforms with 1,000,000 or more users to monitor and keep public records of ad buys of more than $10,000. These digital platforms, as well as broadcast, cable and satellite providers, would also have to make reasonable efforts to ensure that ads and other electioneering communications are not purchased by foreign nationals, either directly or indirectly.
Some major questions to be addressed when it comes to the election interference issue are:
Who was providing the demographic data for targeting specific potential voters on the US side? We know that the key swing states of Michigan and Wisconsin were targeted, and that Trump carried Michigan by 10,700 votes and Wisconsin by 22,748 votes. These were extremely narrow margins. Add Pennsylvania’s 44,000-vote margin, and the election went to Trump on a combined margin of a mere 78,000 votes.
What keywords were they using to identify susceptible targets? For example, “Jew hater” and the N-word have actually been used to target specific audiences. Did they use specific words, phrases, or images that incite hate or a sense of unfairness? The Black Lives Matter movement was a particularly important target, as Google’s investigation of foreign intervention found.
What foreign entities were involved, and how extensive were the social media ad buys? What were they saying, and what kinds of images were being produced? Who was producing and designing the manufactured and posted “memes”? Memes are designed communications that combine provocative imagery with text that hooks the viewer. These captioned photos are shared easily on social media and are also associated with stories on websites. PropOrNot’s monitoring report identified more than 200 websites as routine peddlers of Russian propaganda during the 2016 election season. They had an audience of over 15 million Americans. PropOrNot estimates that stories planted or promoted on Facebook were viewed more than 213 million times.[1]
Even more significant are the viral metrics that characterize social media. Who was sharing this information, and how dynamic were the virality and network effects? In other words, how fast and wide were the messages being spread from individual to individual? This is similar to word of mouth (WOM) in the nonmedia world, and it is highly prized because it shows active and emotional involvement. Sharing can create a “snowball effect” that multiplies a post’s reach to friends of friends and beyond. With 2 billion active users a month, Facebook is clearly a concern, as are other platforms such as Twitter, Instagram, and Snapchat.
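To make the “snowball effect” concrete, here is a minimal back-of-the-envelope sketch. It is entirely my own illustration with invented parameters (share probability, friends reached per share), not data from any platform or study; it simply shows how small changes in the propensity to share separate a post that fizzles from one that cascades to millions.

```python
import random

def simulate_spread(seed_viewers=100, share_prob=0.05, friends_per_share=200,
                    rounds=5, rng=random.Random(42)):
    """Estimate total views after several rounds of probabilistic resharing."""
    viewers = seed_viewers          # people reached in the current round
    total_views = seed_viewers
    for _ in range(rounds):
        # Each current viewer independently decides whether to share the post.
        shares = sum(1 for _ in range(viewers) if rng.random() < share_prob)
        viewers = shares * friends_per_share   # new people exposed next round
        total_views += viewers
    return total_views

# With these assumed numbers, each viewer generates on average
# 0.05 * 200 = 10 new viewers, so reach multiplies round after round.
print(simulate_spread())                  # cascades into the millions
# Cut the share probability below 1 in 200 and the same post fizzles out.
print(simulate_spread(share_prob=0.004))
```

The point of the sketch is that virality is a threshold phenomenon: once the average number of new viewers generated per viewer crosses one, reach compounds with every round, which is why sharing behavior is watched so closely.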
An interesting case dealt with a fake news story about paid protesters being bused to an Anti-Trump demonstration in Austin, Texas. That story started on Twitter with hashtags #fakeprotests and #trump2016 and was quickly shared some 16,000 times and also migrated to Facebook where it was shared more than 350,000 times.
So we are entering an age when social media metrics are not just crucial for modern advertising and e-commerce but essential for political communication as well. We will see an increasing need for skilled digital media analysts who can measure social media metrics, ranging from simple counting measures of actions (check-ins, click-through rates, likes, impressions, numbers of followers, visits, etc.) to more important contextual metrics, including conversation volume, engagement, sentiment ratios, conversion rates, end action rates, and brand perception lifts.[3]
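As a rough illustration of the difference between counting measures and contextual metrics, here is a short sketch. The field names and figures are invented for the example and do not correspond to any platform’s actual reporting API.

```python
# Hypothetical post-level data; field names and numbers are illustrative only.
posts = [
    {"impressions": 12000, "likes": 340, "shares": 85, "comments": 40,
     "clicks": 220, "positive_mentions": 60, "negative_mentions": 15},
    {"impressions": 8000, "likes": 150, "shares": 30, "comments": 12,
     "clicks": 95, "positive_mentions": 20, "negative_mentions": 25},
]

def click_through_rate(post):
    """Simple counting-derived metric: clicks per impression."""
    return post["clicks"] / post["impressions"]

def engagement_rate(post):
    """Contextual metric: interactions (likes, shares, comments) per impression."""
    interactions = post["likes"] + post["shares"] + post["comments"]
    return interactions / post["impressions"]

def sentiment_ratio(post):
    """Contextual metric: share of classified mentions that are positive."""
    total = post["positive_mentions"] + post["negative_mentions"]
    return post["positive_mentions"] / total if total else 0.0

for i, post in enumerate(posts, start=1):
    print(f"Post {i}: CTR {click_through_rate(post):.2%}, "
          f"engagement {engagement_rate(post):.2%}, "
          f"sentiment {sentiment_ratio(post):.2f}")
```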
Countries interfering in the elections of other countries is not a new phenomenon. But rarely have they been able to enlist so many unwitting collaborators. As we continue to move into the era of cybernetic democracy, a new vigilance is required at many levels. National and local governments need to be wary, social media platforms like Facebook need to review content and sources, and users also need to take responsibility for reading what is being passed around and using ethics-based judgment before spreading rumors and materials that incite hate and social discord.
Notes
[1] PropOrNot’s monitoring report investigates social media stories planted or promoted by the disinformation campaigns.
[2] ThinkProgress recently began a series investigating foreign influence on social media. Founded in 2005, ThinkProgress is funded by the Center for American Progress Action and covers the connections and interactions between politics, policy, culture, and social justice.
[3] I got my start in teaching social media metrics with John Lovett’s (2011) Social Media Metrics Secrets (John Wiley and Sons). It is still highly relevant.
Tags: 2016 U.S. election > cybernetic democracy > Russian interference > Social Media Metrics
Assessing Digital Payment Systems
Posted on | August 29, 2017 | Comments Off on Assessing Digital Payment Systems
In digitally-mediated environments, new forms of currency and payment systems continue to gain public acceptance. Changes in technology, contactless options, and the ease of electronic payments make cash less desirable. While credit cards remain the most popular form of payment with some 70% of the market, other systems are emerging. PayPal, for example, is used for some 15% of online payments.
The major types of payment systems for B2C transactions are:
* Payment Cards – Credit, Charge, Debit
* e-Micropayments – less than $5
* Digital Wallets – AliPay, Apple Pay, Samsung Pay
* e-Checking – Representation of a check sent online
* Digital eCash
* Stored Value (Smart Cards, PayPal, Gift Cards, Debit Cards)
* Net Bank – Digital payments directly from your bank
* Cryptocurrencies – Peer-to-peer, blockchain-enabled digital coins (Bitcoin, Ethereum, Dash)
What factors determine the success of a digital payment system? Here is a list of important criteria to consider when assessing the viability of a currency/payment system, followed by a rough scoring sketch.
* Acceptability – Will merchants accept it?
* Anonymity – Identities and transactions remain private
* Independence – Doesn’t require extra hardware or software
* Interoperability – Works with existing systems
* Security – Reduced risks to payer and payee
* Durability – Not subject to physical damage
* Divisibility – Can be used for large as well as small purchases
* Ease of use – Is it easier to use than a credit card?
* Portability – Easy to carry around
* Uniformity – Bills, coins, and other units are consistent in size, shape, and value
* Limited supply – Currencies lose value if too abundant [1]
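One simple way to turn the criteria above into a comparative assessment is a weighted scorecard. The sketch below is only an illustration: the weights and the 0-10 ratings are invented for demonstration, not empirical measurements of any payment system.

```python
# Weights are assumptions for illustration; they sum to 1.0.
CRITERIA_WEIGHTS = {
    "acceptability": 0.20, "security": 0.20, "interoperability": 0.15,
    "ease_of_use": 0.15, "independence": 0.10, "anonymity": 0.05,
    "divisibility": 0.05, "portability": 0.05, "limited_supply": 0.05,
}

def viability_score(ratings):
    """Weighted sum of 0-10 ratings across the criteria (higher is better)."""
    return sum(weight * ratings.get(criterion, 0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

# Hypothetical ratings (0 = poor, 10 = excellent) for two systems.
credit_card = {"acceptability": 9, "security": 6, "interoperability": 9,
               "ease_of_use": 8, "independence": 8, "anonymity": 3,
               "divisibility": 8, "portability": 9, "limited_supply": 5}
crypto_coin = {"acceptability": 4, "security": 7, "interoperability": 5,
               "ease_of_use": 4, "independence": 5, "anonymity": 8,
               "divisibility": 9, "portability": 9, "limited_supply": 9}

print("Credit card:", round(viability_score(credit_card), 2))
print("Cryptocurrency:", round(viability_score(crypto_coin), 2))
```

The weights encode judgments about which criteria matter most in a given market; shifting weight toward anonymity or limited supply, for example, favors cryptocurrencies over card networks.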
What about payment systems for large transactions between businesses or governments? The major types of payment systems for B2B, B2G, or G2G transactions are:
* Electronic Funds Transfer – ACH, SWIFT, CHIPS
* Enterprise Invoice Presentment and Payment (EIPP)
* Wire Transfers – FEDWIRE
The Federal Reserve’s FEDWIRE transfers trillions of US dollars every day and is a major system for facilitating transactions between banks and other financial institutions.
Notes
[1] Types of payment systems and criteria drawn from “Functions of Money – The Economic Lowdown Podcast Series, Episode 9.” Economic Lowdown Podcasts, Education Resources, Federal Reserve Bank of St. Louis. Web. 15 July 2017.
Tags: ACH > Alipay > CHIPS > Fedwire > PayPal > SWIFT
Digital Content Flow and Life Cycle: The Value Chain
Posted on | July 14, 2017 | No Comments
In this post, I connect the idea of the digital content life cycle to the concept of a value chain. E-commerce and other digital media firms can use this process to compare and identify value-creating steps and prioritize them within the organization’s workflow. It also becomes important in anticipating and analyzing needed human competencies and digital skill sets.
The graphic below represents the key steps in the digital media production cycle, starting at one o’clock.
Long recognized by economists as part of the commodity production process where raw materials are processed into a final product via a series of value-adding activities, the value chain concept was refined by Michael Porter in 1985.
The idea of the value chain is based on the view of organizational outcomes as a series of transformative steps. Porter focused on the processes of organizations, seeing productive activities as a series of sequential steps. Central to this idea is seeing a manufacturing (or service) organization as a system made up of subsystems, each with inputs, transformation processes, and outputs.
Porter distinguished “primary activities” in the value chain from “secondary activities” in the sense that the former were more directly related to a final output. Primary activities include inbound logistics, production/operations, outbound logistics, marketing and sales, and services.[1] Raw materials come into the company through logistical processes and pass through production facilities that are coordinated via operations. These resources are transformed into a final product that is marketed to customers.
Activities such as procurement, human resource management, technological development, and infrastructure are secondary only in the sense that they provide support functions to the primary value chain. Departments such as accounting, finance, general management, government relations, legal, planning, public affairs, and quality assurance are necessary, but not primary. Porter didn’t diminish these activities but rather placed them in relation to output goals.
Mapping both primary and secondary activities can help identify value-creating steps and supporting activities and prioritize them within the organization’s workflow.
An analysis of the flow of digital content production can help companies determine the steps of web content globalization.
In the early days of the Internet, Rayport and Sviokla (1995) presented the idea of a virtual value chain in the Harvard Business Review. They listed five activities involving the “virtual world of information” that help generate competitive advantage. In their article they suggest that gathering, organizing, selecting, synthesizing, and distributing information help companies use content and value-added resources to build new products and services and enhance customer relationships.[2]
They even suggested that newspapers could use the five value-adding steps to construct new products with the digital photography and other information resources they had available. Remember, this was a time when the JPEG image format was fairly new and companies like RealNetworks were competing to become the video standard. No Facebook existed.
A digital value chain analysis involves diagramming and then analyzing the various value-creating activities in the content production, storage, monetization, distribution, consumption, and evaluation process. Each step can be analyzed and evaluated in terms of its contribution to the final outputs.
Furthermore, it involves an analysis of the “informating” process, the stream of data produced throughout the content life cycle. Digital content passes through a series of value-adding steps that prepare it for global deployment via data distribution channels to HDTV, mobile devices, and websites. Each of these steps is meticulously recorded and made available for analysis.[3]
The digital value chain is presented linearly here, but most data activities that add value operate as cycles. New ideas are conceived, new content is added, and questions are constantly being asked; finding new ways of monetizing content is also extremely important, as is understanding how audiences consume it. This leads the process back to the initial conception of content, informed by audience/consumer analytics and feedback.
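As a minimal sketch of that cycle (my own illustration, using the stage names from this post rather than any production system’s actual workflow), the life cycle can be represented as an ordered loop in which the analytics stage feeds back into creation:

```python
from itertools import cycle

# Stage names follow the life-cycle steps discussed in this post.
STAGES = [
    "creation_and_production",
    "storage_and_management",
    "e_commerce_and_monetization",
    "distribution_and_delivery",
    "usability_and_consumption",
    "analytics_and_critique",   # feedback loops back into creation
]

def walk_lifecycle(passes=1):
    """Walk the stages as a repeating cycle, logging 'informating' data per step."""
    informating_log = []
    for step, stage in zip(range(passes * len(STAGES)), cycle(STAGES)):
        # A real workflow would record per-stage metrics here (cost, time,
        # engagement, revenue) for later value-chain analysis.
        informating_log.append({"step": step, "stage": stage})
    return informating_log

for entry in walk_lifecycle(passes=1):
    print(entry["step"], entry["stage"])
```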
Through this analysis, the major value-creating steps of the content production and marketing process can be systematically identified and managed. In the graphic above, core steps in digital production and the linkages between them are identified in the context of advanced digital technologies. These are general representations but provide a framework to analyze the activities and flow of content production. Identifying these processes helps us understand the equipment, logistics, and skill sets involved in modern media production, as well as the processes that lead to waste.
In other posts I will discuss each of the following:
Content Creation and Production
Content Storage and Management
Content E-Commerce
Content Distribution and Delivery
Content Usability and Consumption
Content Analytics and Critique
Notes
[1] Porter, M. E. (1985). Competitive Advantage. New York: The Free Press. Ch. 1, pp. 11–15.
[2] A version of this article appeared in the November–December 1995 issue of the Harvard Business Review.
[3] Singh, Nitish. Localization Strategies for Global E-Business.
Tags: digital content lifecycle > digital content production > Michael Porter > value chain
Not Like 1984: GUI and the Apple Mac
Posted on | May 27, 2017 | No Comments
In January of 1984, during the Super Bowl, America’s most popular sporting event, Apple announced the release of the Macintosh computer. The announcement came in a commercial that was shown only once, causing a stir and gaining millions of dollars in free publicity afterward. The TV ad was directed by Ridley Scott, whose credits at the time included movies like Alien (1979) and Blade Runner (1982).
Scott drew on iconography from Fritz Lang’s Metropolis (1927) and George Orwell’s classic novel 1984 to produce a stunning dystopic metaphor of what life would be like under what was suggested as a monolithic IBM with a tinge of Microsoft. As human drones file into a techno-decrepit auditorium, they become transfixed by a giant “telescreen” filled with a close-up of a man, eerily reminiscent of an older Bill Gates. Intense eyes peer through wire-rimmed glasses and glare down on the transfixed audience as lettered captions transcribe mind-numbing propaganda:
We are one people with one will, one resolve, one cause.
Our Enemies will talk themselves to death, and we will bury them with their confusion.
For we shall prevail.
However, from down a corridor, a brightly-lit female emerges. She runs into the theater and down the aisle. Finally, she winds up and throws a large sledgehammer into the projected face. The televisual screen explodes, and the humans are startled out of their slumbered daze. The ad fades to white, and the screen lights up: “On January 24th, Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984.’” The reason, of course, was what Steve Jobs called the “insanely great” new technology of the Macintosh unveiled by Apple. Against a black background, the ad ends with the famed logo, a rainbow-striped Apple with a bite out of the right side.
By 1983, Apple needed a new computer to compete with the IBM PC. Steve Jobs went to work, utilizing mouse and GUI (Graphical User Interface) technology developed at Xerox PARC in the late 1970s. In exchange for being allowed to buy 100,000 shares of Apple stock before the company went public, Xerox opened its R&D at PARC to Jobs.[1] Xerox was a multi-billion dollar company with a near monopoly on the copier needs of the Great Society’s great bureaucratic structures. To leverage its position to dominate the “paperless office,” Xerox sponsored the research and development of many computer innovations, but the Xerox leaders never understood the potential of the technology developed under their roofs.
One of these innovations was a powerful but expensive microcomputer called the Alto that integrated many of the new interface technologies that would become standard on personal computers. The new GUI system had the mouse, networking capability, and even a laser printer. It combined several PARC innovations, including bitmapped displays, hierarchical and pop-up menus, overlapped windows, tiled windows, scroll bars, push buttons, checkboxes, cut/move/copy/delete, multiple fonts, and text styles. Xerox didn’t know quite how to market the Alto, so it gave its microcomputer technology to Apple for an opportunity to buy the young company’s stock.
Apple took this technology and created the Lisa computer, an expensive but impressive prototype of the Macintosh. In 1983, the same year Lotus officially released its Lotus 1-2-3 spreadsheet program, Apple released Lisa Calc with six other applications – LisaWrite, LisaList, LisaProject, LisaDraw, LisaPaint, and LisaTerminal. It was the first spreadsheet program to use a mouse, but at a price approaching $10,000, the Lisa proved less than economically feasible. However, it inspired Apple to develop a lower-cost Macintosh and software companies such as Microsoft to begin to prepare software for the new style of computer.
The Apple Macintosh was based on the GUI, often called WIMP, for its Windows, Icons, “Mouse,” and Pull-down menus. The Apple II and IBM PC were still based on a command line interface, a “black hole” next to a > prompt that required code to be entered and executed. This system required extensive prior knowledge and/or access to readily available technical documentation. The GUI, however, allowed you to point to information already on the screen or categories that contained subsets of commands. Eventually, menu categories such as File, Edit, View, Tools, and Help were standardized on the top of GUI screens.
A crucial issue for the Mac was good third-party software that could work in its GUI environment, especially a spreadsheet. Representatives from Jobs’ Macintosh team visited the fledgling companies that had previously supplied microcomputer software. Good software came from companies like Telos Software, which produced the picture-oriented FileVision database, and Living Videotext, which sold an application called ThinkTank that created “dynamic outlines.” Smaller groupings, such as a collaboration by Jay Bolter, Michael Joyce, and John B. Smith, created a program called Storyspace that was a hit with writers and English professors.
PARC was a research center supported by Xerox’s near monopoly on paper-based copying, which grew tremendously with the growth of corporate, military, and government bureaucracies during the 1960s. Interestingly, in 1958, IBM passed up an opportunity to buy a young company that had developed a new copying technology called “xerography.” The monopoly allowed Xerox to set up a relatively unencumbered research center to lead the company into the era of the “paperless office.” One of the outcomes of this research was the GUI technology.
Unfortunately, Xerox failed to capitalize on these new technologies. Subsequently, it sold the technology in exchange for the right to buy millions of dollars in Apple stock. Jobs and Apple used the technology to design and market the Lisa computer with its GUI, and then, during the 1984 Super Bowl, dramatically announced the Macintosh. The “Mac” was a breath of fresh air for consumers who were intimidated by the “command-line” techno-philosophy of the IBM computer and its clones.
Citation APA (7th Edition)
Pennings, A.J. (2017, May 27) Not Like 1984: GUI and the Apple Mac. apennings.com https://apennings.com/how-it-came-to-rule-the-world/not-like-1984-gui-and-the-apple-mac/
Notes
[1] Rose, Frank (1989) West of Eden: The End of Innocence at Apple Computer. NY: Viking Penguin Group. p. 47.
Tags: 512K “fat Mac” > Excel > Excel 2.2 for the Macintosh > Excel 5.0 > Great Society > GUI history > Lisa Calc > Lotus 1-2-3 > Microsoft’s Multiplan > Quattro Pro > Visual Basic Programming System > Windows 1.0 > Xerox PARC
A First Pre-VisiCalc Attempt at Electronic Spreadsheets
Posted on | May 25, 2017 | No Comments
Computerized spreadsheets were conceived in the early 1960s, when Richard Mattessich at the University of California at Berkeley conceptualized the electronic simulation of business accounting techniques in his Simulation of the Firm through a Budget Computer Program (1964). Mattessich envisaged the use of “accounting matrices” to provide a rectangular array of bookkeeping figures that would help analyze a company through numerical modeling. Mattessich’s thinking anticipated popular spreadsheet programs like VisiCalc, Lotus 1-2-3, Excel, and Google Sheets.
The first actual computerized spreadsheet was based on an algebraic model written by Mattessich and implemented by two of his assistants, Tom Schneider and Paul Zitlau, who created a working prototype in the Fortran IV programming language. It contained the basic ingredients of the digital spreadsheet, including the crucial support of each individual figure in a cell by the full calculative formula behind that entry.[1]
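That core idea, that a figure in a cell is backed by a stored formula that can be re-evaluated, can be shown with a toy sketch. This is my own illustration in Python, not a reconstruction of the original Fortran IV program, and the cell references and figures are invented.

```python
class Sheet:
    """A toy accounting matrix: each cell holds either a figure or a formula."""

    def __init__(self):
        self.cells = {}                       # e.g. {"A1": 1000, "A3": <formula>}

    def set_value(self, ref, value):
        self.cells[ref] = value

    def set_formula(self, ref, formula):
        # The formula (a function of the sheet) is stored behind the cell,
        # so the displayed figure can always be recomputed from its inputs.
        self.cells[ref] = formula

    def get(self, ref):
        entry = self.cells[ref]
        return entry(self) if callable(entry) else entry

sheet = Sheet()
sheet.set_value("A1", 1000)                                   # revenue
sheet.set_value("A2", 600)                                    # expenses
sheet.set_formula("A3", lambda s: s.get("A1") - s.get("A2"))  # net income

print(sheet.get("A3"))   # 400
sheet.set_value("A2", 700)
print(sheet.get("A3"))   # 300 -- the dependent cell recomputes from its formula
```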
Mattessich was sanguine about his invention, recognizing that it was a time when computers were being considered for several simulation projects, so it was “reasonable to exploit this idea for accounting purposes.”[2] These other projects included the modeling of ecological and weather systems as well as national economies. In fact, Mattessich’s concurrent work on national accounting systems was better received, and his book on the topic became a classic in its own right.[3] Still, Mattessich’s spreadsheet work soon fell into obscurity because mainframe technology was not as powerful or interactive as the microprocessor-powered personal computer would later be.
The computer systems of the mid-1960s were not conducive to the type of interactivity that would make spreadsheets so attractive in the 1980s. Computers were big, secluded, and attended to by a slew of programmer acolytes who religiously protected their technological and knowledge domains. They were the province of the EDP department and removed from all but the highest management by procedures, receptionists, and security precautions.
Computers ran their programs in groups or “batches” of punched cards delivered to highly sequestered data processing centers, with the results picked up sometime later, sometimes hours, sometimes days. Batch processing was used primarily for payroll, accounts payable, and other accounting processes that could be done on a scheduled basis. The introduction of minicomputers using integrated circuits, such as DEC’s PDP-8, meant more companies could afford computers, but they did not significantly change their accounting procedures or herald the use of Mattessich’s spreadsheet.
Although the earliest PCs were weaker than their bigger contemporaries, the mainframes, and even the minicomputers, they had several advantages that increased their usefulness. Their main advantage was immediacy; the microcomputer was characterized in part by its accessibility: it was small, relatively cheap, and available via several retail outlets. It used a keyboard for human input, a cathode ray monitor to view data, and a newly invented floppy disk for storage.
But just as important was the fact that it bypassed the traditional data processing organization that was constantly striving to keep up with new processing requests. One implication was that frustrated accountants would go out and buy their own computers and software packages over the objections or indifference of the EDP department. It also meant a new flexibility in terms of the speed and amount of information immediately available. Levy recounts the following comments from a Vice-President of Data Processing at Connecticut Mutual who eventually bought one of the earliest microcomputer spreadsheet programs to do his own numerical analyses:
DP always has more requests than it can handle. There are two kinds of backlog – the obvious one, of things requested, and a hidden one. People say, “I won’t ask for the information because I won’t get it anyway.” When those two guys designed VisiCalc, they opened up a whole new way. We realized that in three or four years, you might as well take your big minicomputer out on a boat and make an anchor out of it. With spreadsheets, a microcomputer gives you more power at a tenth the cost. Now people can do the calculations themselves, and they don’t have to deal with the bureaucracy.[4]
Despite the increasing processing power of the mainframes and minis, and the new interactivity due to timesharing and the use of keyboards and cathode-ray screens, the use of computerized spreadsheets never increased significantly until the introduction of the personal computer. It was only after the spreadsheet idea was rediscovered in the context of the microprocessing leap of the next decade that Mattessich’s ideas would be acknowledged.
Citation APA (7th Edition)
Pennings, A.J. (2017, May 25) A First Pre-VisiCalc Attempt at Electronic Spreadsheets. apennings.com https://apennings.com/enterprise-systems/a-first-pre-visicalc-attempt-at-electronic-spreadsheets/
Notes
[1] Mattessich and Galassi credit assistants Tom Schneider and Paul Zitlau with the development of the first actual computerized spreadsheet based on an algebraic model written by Mattessich. It was reported in “The History of Spreadsheet: From Matrix Accounting to Budget Simulation and Computerization”, a paper presented at the 8th World Congress of Accounting Historians in Madrid, August 2000. See Mattessich, Richard, and Giuseppe Galassi. “History of the Spreadsheet: From Matrix Accounting to Budget Simulation and Computerization.” Accounting and History: A Selection of Papers Presented at the 8th World Congress of Accounting Historians: Madrid-Spain, 19–21 July 2000. Asociación Española de Contabilidad y Administración de Empresas, AECA, 2000. See also George J. Murphy’s “Mattessich, Richard V. (1922-),” in Michael Chatfield and Richard Vangermeersch, eds., The History of Accounting–An International Encyclopedia (New York: Garland Publishing Co., Inc, 1997): 405.
[2] This case of Richard Mattessich developing the first electronic spreadsheets has been made extensively including in “The History of Spreadsheet: From Matrix Accounting to Budget Simulation and Computerization”, a paper presented at the 8th World Congress of Accounting Historians in Madrid, August 2000 by Richard Mattessich and Giuseppe Galassi.
[3] Mattessich’s Accounting and Analytical Methods—Measurement and Projection of Income and Wealth in the Micro and Macro Economy (1964) was published by Irwin and was part of that movement towards national accounting systems. Mattessich, Richard. A later version was published as Accounting and Analytical Methods: Measurement and Projection of Income and Wealth in the Micro- and Macro-Economy. Scholars Book Company, 1977. It was mentioned in Chapter 3: Statistics: the Calculating Governmentality of my PhD dissertation, Symbolic Economies and the Politics of Global Cyberspaces (1993).
[4] Levy, S. (1989) “A Spreadsheet Way of Knowledge,” in Computers in the Human Context: Information Technology, Productivity, and People. Tom Forester (ed) Oxford, UK: Basil Blackwell.
Hypertext links from the text, formatted in APA (7th Edition) style:
Wikipedia contributors. (n.d.). Richard Mattessich. In Wikipedia. Retrieved January 22, 2025, from https://en.wikipedia.org/wiki/Richard_Mattessich
Obliquity. (n.d.). Fortran IV programming language history. Retrieved January 22, 2025, from https://www.obliquity.com/computer/fortran/history.html
Pennings, A. (n.d.). Steven Levy’s a spreadsheet way of knowledge. Retrieved January 22, 2025, from https://apennings.com/enterprise-systems/steven-levys-a-spreadsheet-way-of-knowledge/
Tags: Excel > Google Sheets > Lotus 1-2-3 > Richard Mattessich > VisiCalc
Cisco Systems: From Campus to the World’s Most Valuable Company, Part Three: Pushing TCP/IP
Posted on | April 20, 2017 | No Comments
Len Bosack and Sandy Lerner combined several technologies being developed at Stanford University and in the Silicon Valley area to form the networking behemoth Cisco Systems. However, success was by no means foreseen in the early years. The key moment came when the pair obtained access to Bill Yeager’s source code for the multiple-protocol “Blue Box” router in 1985. Yeager’s software became the foundation for the Cisco router operating system and a major stimulant for the adoption of the Transmission Control Protocol and Internet Protocol (TCP/IP) in data communications systems around the world.
In 1980, Yeager became responsible for networking computers at the Stanford Medical School. A number of devices were proliferating on campus, such as DEC10, PDP11/05, and VAX systems, and especially a number of Xerox machines, including PARC Lisp machines and Altos file servers and printers. The Xerox influence was substantial, as the project started with routing the PARC Universal Packet (PUP) for the Xerox PARC systems and mainframes. It was later configured to include IP addresses for the new VAX750s and Xerox’s own proprietary network, Xerox Network Systems (XNS).
After some controversy, the Stanford Office of Technology Licensing eventually decided to allow Cisco to use the technology. The venerable institution ultimately decided they would try “to make the best of a bad situation,” and on April 15, 1987, licensed the router software and two computer boards to Cisco for $19,300 in cash and royalties of $150,000 as well as product discounts. It refused equity in Cisco Systems as “a matter of policy.”[1] The agreement named Yeager as the principal developer of the source code, making him one of the unsung heroes of the Internet age.
The couple initially ran the company from their home at 199 Oak Grove Avenue in Atherton, CA. Using their credit cards for startup capital, they set up an office and started assembling routers in their living room. Self-financing, they even installed a mainframe computer in their garage for $5000. They needed the big computer to stay on the ARPANET and take orders for their network equipment, making them an early e-commerce innovator.[2] Bosack focused on technology and Lerner on marketing. Lerner’s ad meme “IP Everywhere” was ahead of its time.
The early years were tough. Venture capital was difficult to acquire, and the couple reportedly made over 70 visits to potential investors to find money for their company. At that time, only semiconductor companies were being funded, and the Internet was barely known outside of academia. Finally, Sequoia Capital stepped in for a considerable percentage of the business. Don Valentine had passed on Steve Jobs and Apple Computer and consequently had developed a more open mind to new innovations. Sequoia put in $2.5 million for one-third of the company and the ability to make major management decisions.
By the end of 1986, the company was growing rapidly. Although it took two years to get out of the garage, the computer and communications revolution was taking off. PCs were becoming commonplace and, even more important, becoming networked. TCP/IP was also gaining traction. The Department of Defense mandated its use at the center of the ARPANET and granted funding to projects that coded TCP/IP implementations for IBM machines and operating systems such as UNIX. The emerging NSFNET also required TCP/IP protocols and compliant networking technologies. By mid-1985, almost 2,000 computers hosted TCP/IP technology.
Despite the growing enthusiasm for TCP/IP in the military/academic/research institute sphere, the major manufacturers of computer communications equipment were focused on the OSI model and believed market forces would eventually move in its favor. However, in 1986, advocates of TCP/IP took action to improve and promote their protocols. The Internet Advisory Board (IAB) began to implement a strategy in two parts. The first was to encourage more participation in TCP/IP standards development, which resulted in the May 1986 publication of RFC 985, “Requirements for Internet Gateways,” and other recommendations. The second was to inform equipment vendors about the features and advantages of TCP/IP. This involved organizing several vendor conferences, including the “TCP/IP Vendors Workshop” on August 25-27, 1986 and the “TCP/IP Interoperability Conference” in March 1987.
While some vendors were disappointed that no certification and testing process was forthcoming, the conferences allowed the advocates of TCP/IP to provide some guidance for equipment manufacturers incorporating their protocols. It was under the leadership of Dan Lynch that these conferences were started, and in 1988 they came under the heading of INTEROP. As the industry took shape with innovations like the Simple Network Management Protocol (SNMP), introduced at INTEROP ’88, Cisco was one of its main beneficiaries.
In 1986, Cisco introduced the F Gateway Server (FGS) remote access server and its first commercial multi-protocol network router, the Advanced Gateway Server (AGS). The “Massbus-Ethernet Interface Subsystem” was an interface card made for DEC computers that could bridge local area networks running different protocols, such as TCP/IP and PUP; its highest line rate was 100-Mbps FDDI. The AGS was capable of connecting multiple LAN and WAN networks. Its technical characteristics included a throughput of 10 Mbits per second, and it could process 300 packets per second. The AGS supported 200 routing tables and cost approximately $5,550. The AGS was quickly adopted in networks such as CSUNet at Colorado State University. Soon, universities all around the country were calling and emailing about the equipment.
In November 1986, Cisco moved to their first office, 1360 Willow Road, Menlo Park, CA. Revenues had reached $250,000 a month. By May of 1988, sales had doubled, and then just three months later, they doubled again. By 1989, Cisco had three products and 115 employees and reported revenues of $27 million.[2]
Notes
[1] Information on Cisco’s origins is relatively scarce and dominated by Cisco public relations. Tom Rindfleisch of Stanford University and Bill Yeager, a Senior Staff Engineer at Sun Microsystems, Inc., present a larger story at http://smi-web.stanford.edu/people/tcr/tcr-cisco.html.
[2] Information on Cisco’s habitats from Segaller, S. (1999) Nerds 2.0.1: A Brief History of the Internet. New York: TV Books. pp. 240-247.
Tags: standards
Lotus 1-2-3 – A Star is Born
Posted on | April 7, 2017 | No Comments
It was in November 1982, on the giant floor of the Comdex show in Las Vegas, that Lotus 1-2-3 first made its mark. While VisiCalc for the Apple II had shown the viability of both digital spreadsheets and the new “microcomputers,” Lotus 1-2-3 showed that spreadsheets would become indispensable for modern organizations and global digital finance.
Initially Jonathan Sachs was worried. He had spent nearly a year working with Mitch Kapor and his small startup company called Lotus Development Corporation on a new spreadsheet program for the recently released IBM PC. A former programmer at Data General Corp, he was apprehensive about the prospects of his new creation. It was too difficult for its intended audience, he thought, and would scare users away.
With questions in his mind and an updated resume in hand, Sachs took his baby to the crowds on the Las Vegas show floor. Lotus had begun to publicize its new spreadsheet nearly half a year before the Comdex show, and a Wall Street Journal article just before the event was beneficial. The new application proved almost as popular as Apple in its debut at the West Coast Computer Faire. Lotus 1-2-3 turned out to be a big hit with the swarming exhibit crowd.
“By the time the workers started to tear down the exhibit stalls at the end of the Comdex show, Lotus had taken about $3 million in orders based on the demo alone. Little did Sachs know his creation would change the computer industry forever.”[1] Nor did he know it would change the world of finance.
But that was just the start of their success. Soon, Lotus 1-2-3 would top the sales list of all computer software. After the new electronic spreadsheet was officially released on January 26, 1983, the company logged sales of some 60,000 copies in its first month. Because software can be reproduced and packaged quickly after it is developed, they were barely able to keep up with demand. In a few months, Lotus 1-2-3 was heading distributor Softsel Computer Products Inc.’s best-seller list, where it would stay for the next two years. Lotus 1-2-3 would become the number-one best-selling computer software application of the 1980s.
Lotus 1-2-3’s success was based on great programming skills and market savvy. Sachs and Kapor decided to create a spreadsheet that would improve the functionality and speed of VisiCalc, the tremendously popular application designed for the Apple II. Kapor had created graphics capabilities for VisiCalc, while Sachs wrote for minicomputers at Data General. Because of his experience with the popular market, Kapor called the marketing shots. At the same time, Sachs had the better programming experience and worked to realize Kapor’s conceptual software ideas.
With startup funds from Ben Rosen, who had left Morgan Stanley to become a venture capitalist, the small firm began to work on a new product for the business market. Together, they decided to create a new microcomputer spreadsheet product by using a faster programming language than most of the competition, such as Context MBA and SuperCalc. Their target was a software application for the newly released IBM PC market.
Instead of the easier Pascal programming language, Kapor and Sachs decided to use the more tedious Assembly language to construct their new software package. Assembly worked closer to the original machine language of the computer. That meant it was much harder to program, but it could provide a faster and more robust final spreadsheet package. They spent 10 months developing the application that grew from a core product to the final 1-2-3 through a series of decisions to add on various features.
Step by step, they built on and tested the new prototype, at one point dropping a word processor module because it was too difficult. As Sachs said afterward, “From a programmer’s standpoint, it was a mind-boggling challenge to write that much code in assembly language that fast and get it to be really solid.” The finished product, Lotus 1-2-3 Release 1 for DOS, which included a spreadsheet, graphing capabilities, database storage, and a macro language, required only 192K of RAM. Because it was written in assembly language, it was much faster than its major competitors, Context MBA, Multiplan, and VisiCalc, despite including the extra features.
Just as VisiCalc helped Apple’s sales, Lotus 1-2-3’s popularity helped IBM’s PC sales take off. Launched in the late summer of 1981, the IBM PC faced stiff competition from the Apple II and a host of new computer manufacturers using the CP/M operating system. Although IBM had name recognition, particularly in the business world, it still needed the kind of practical application that would justify the expense. Lotus 1-2-3 supplied the incentive to put a PC “on top of every desk in the business world.” Sachs reminisced on the impact of Lotus 1-2-3 on the IBM PC: “It was pretty amazing because a factor of five more PCs got sold once that software was available. It was very closely tuned to the original IBM PC and pretty much used all of its features.”[2] Lotus could also run on “IBM-compatible” machines such as the simultaneously developed Compaq portable computer.
Both Sachs and Kapor left Lotus Development Corporation in the mid-1980s. The fast growth had taken its toll and created many problems that diminished their enthusiasm. Lotus faced formidable challenges: supply shortages, labor problems, and disputes with distributors wore down the original cast. Sachs left in 1985 after the introduction of Release 2 for MS-DOS, which included add-in support, better memory management, more rows, and support for math coprocessors. Kapor left in 1987 after Release 2.01 was introduced.
But neither left before Lotus 1-2-3, with its combination of graphics, spreadsheets, and data management, caught the eye of business entrepreneurs and corporate executives. They saw the value of a computer program that simplified the monumental amount of numerical calculation and manipulation needed by the modern corporation. By October 1985, CFO magazine reported that “droves of middle managers and most financial executives are crunching numbers with spreadsheet programs such as Lotus 1-2-3.”[3]
Citation APA (7th Edition)
Pennings, A.J. (2017, Apr 7) Lotus 1-2-3 – A Star is Born. apennings.com https://apennings.com/financial-technology/digital-spreadsheets/lotus-1-2-3-a-star-is-born/
Notes
[1] Information about Lotus at Comdex from “From the floor at Comdex/Fall in 1982, Lotus 1-2-3 hit the ground running and has not slowed down”. CMP Media LLC. Accessed on August 26, 2003.
http://www.crn.com/sections/special/supplement/816/816p71_hof.asp
[2]Quotes attributed to Sachs were taken from “From the floor at Comdex/Fall in 1982, Lotus 1-2-3 hit the ground running and has not slowed down”. CMP Media LLC. Accessed on August 26, 2003.
http://www.crn.com/sections/special/supplement/816/816p71_hof.asp
[3] Quote from CFO on the impact of Lotus 1-2-3 in the corporate world from David M. Katz, “The taking of Lotus 1-2-3? Blame Microsoft.” CFO.com. December 31, 2002.
Tags: Apple II > Assembly language > Ben Rosen > Comdex > Context MBA > Jonathan Sachs > Lotus 1-2-3 > Mitch Kapor > Multiplan > SuperCalc > VisiCalc