Anthony J. Pennings, PhD


The Network is the Computer – UNIX and the SUN (Stanford University Network) Workstation

Posted on | July 1, 2013 | No Comments

When computers began using “third generation” integrated circuit technology, processing speeds took a giant leap forward, and new computer languages and applications were enabled. From the Dartmouth time-sharing initiative of the early sixties, built on General Electric hardware, came BASIC (Beginner’s All-purpose Symbolic Instruction Code), which allowed a new class of non-engineers and non-scientists to program the computer (including a high school freshman named Bill Gates in 1969). Bell Labs was able to regroup from its early software fiascoes when two quintessential 1960s computer gurus, long-haired and bearded, created UNIX.

Ken Thompson and Dennis M. Ritchie rethought the software problems of time-sharing systems and developed a more elegant, simpler solution. Rewritten in the new “C” language, the UNIX operating system became widely available in the mid-1970s, especially after more powerful versions were created at the University of California at Berkeley, Sun Microsystems, and Xerox. Unix was a key software innovation that enabled data networking to take off, and with it worldwide finance and the Internet.

The Unix operating system was developed originally at AT&T’s Bell Labs, where Thompson and Ritchie produced the first version in 1969. Both had worked on the pioneering time-sharing project Multics, but when AT&T pulled out of the project, they used their institutional freedom to pursue their own ideas for an operating system. They named the new OS “Unix” as a pun on Multics and strove over the next few years to develop a system that was more streamlined and could run on multiple computers.

The spread of minicomputers from vendors such as DEC, Data General, Prime Computer, and Scientific Data Systems made Unix attractive. Users were frustrated with the cumbersome and proprietary software developed for mainframes. As with the transistor before it, AT&T decided to disperse its computer operating system cheaply to avoid government antitrust action. Bell Labs allowed the Unix software to be distributed to universities and other computer users for a nominal fee, and by the late 1970s its diffusion had accelerated rapidly.[1]

In the early 1980s, SUN (Stanford University Network) Microsystems was incorporated by three Stanford alumni to provide a new type of computer system. The Sun-1 workstation was much smaller than mainframes and minicomputers but more powerful than the increasingly popular personal computers. It would have a major impact, especially on Wall Street, which was ripe for new digital technologies that could empower traders eager to use new calculative methods to enhance their trading profitability. Two innovations were crucial to the Stanford networking advances – the Unix operating system and the Alohanet-inspired Ethernet.

Through military funding, a new version of Unix was developed at the University of California at Berkeley that made its source code available, was cheap to license, and worked with many types of computers. UNIX 4.1BSD (Berkeley Software Distribution) was created when principal investigator Bob Fabry and the project’s lead programmer, Bill Joy, received additional ARPA funds in 1981 to create a version that supported the Internet protocols. The Berkeley version was designed to maximize performance over smaller Ethernet networks like those on a financial trading floor or a college campus. Berkeley then distributed the software to universities around the country for a small licensing fee.

The other factor was the Alto Aloha Network, named after the University of Hawaii’s wireless Alohanet system. The Alohanet pioneered random-access packet broadcasting – handling the collisions that occur when stations transmit at once – and inspired companies like Cisco Systems and Sun Microsystems to develop networking solutions. During the late 1970s, Alto computers developed at Xerox PARC were donated to Stanford University by the giant copier company. They were connected with local area networking technology that inventor Bob Metcalfe was calling “Ethernet,” after the hypothetical medium 19th-century scientists once believed essential to carry the movement of light.

Metcalfe had worked on the original ARPANET in Boston and traveled to Hawaii for several months before taking a job at Xerox PARC. Inspired by the Alohanet, he began working on networking when he got to PARC. Unlike the seminal University of Hawaii project, which used radio to transmit data packets between the islands, Ethernet connected computers through cables. Metcalfe worked with David Boggs and the inventors of the Alto (Chuck Thacker and Butler Lampson) to create a network card for the Alto computers, and soon they were experimenting with a high-speed local area network (LAN). Later they used Ethernet to connect Altos throughout the Stanford campus.
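The collision-handling idea behind Alohanet and Ethernet can be sketched in a few lines of Python: when a transmission fails, the sender waits a random interval drawn from a doubling window before retrying. This is a toy model of binary exponential backoff, not the actual CSMA/CD algorithm, and all names are invented.

```python
import random

def send_with_backoff(transmit_collides, max_attempts=10, rng=None):
    """Toy model of Ethernet-style retransmission: after each collision,
    wait a random number of slots drawn from a doubling window."""
    rng = rng or random.Random()
    for attempt in range(max_attempts):
        if not transmit_collides():         # transmission got through
            return attempt                  # number of collisions endured
        window = 2 ** (attempt + 1)         # binary exponential backoff
        wait_slots = rng.randrange(window)  # idle 0..window-1 slot times
        # (a real adapter would actually pause for wait_slots slots here)
    return None                             # give up after max_attempts

# Model a shared channel that collides twice, then clears.
outcomes = iter([True, True, False])
print(send_with_backoff(lambda: next(outcomes)))  # → 2
```

The doubling window is the key trick: the busier the channel, the more the senders spread themselves out in time.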

The Sun concept was based on the idea that “the network is the computer.” It started with a prototype 32-bit “workstation” (as opposed to a personal computer) built by Ph.D. student Andy Bechtolsheim, who originally wanted to create a machine that would meet the needs of faculty and students on the Stanford campus. Bechtolsheim based his computer on the UNIX operating system and envisioned the machines linked by Ethernet connections. Bill Joy, who was instrumental in the Berkeley revision of the UNIX code, also joined the Sun team.

The organization was brought together by Vinod Khosla, a Stanford MBA graduate originally from India. Khosla was impressed with Bechtolsheim’s prototype and convinced him to go into business. He also recruited his friend and former roommate Scott McNealy, who happened to be a former high school classmate of Microsoft’s Steve Ballmer. After raising $4 million in venture capital, Khosla and McNealy incorporated Sun Microsystems in February 1982. They marketed their first workstation, the Sun-1, later that summer.[2]

Sun Microsystems grew quickly, reaching sales of $9 million in 1983 and $39 million in 1984, largely because of McNealy’s manufacturing expertise. Within five years it would become a Fortune 500 company. Sun positioned its products to be cheaper than minicomputers but more sophisticated and expensive than PCs. The key was networking: the strength of the UNIX OS and its ability to work with TCP/IP. A leader in what would be called the “open systems” movement, Sun used high-quality, off-the-shelf components, openly licensed its key technologies, and developed strong relationships with key software developers. It used equipment like Motorola’s 68000 processor, Intel’s Multibus, and the new UNIX. Bill Joy’s version of Unix became the major operating system of the Internet’s hosts throughout the world, especially after the military ordered the integration of the TCP/IP protocols in all ARPANET hosts in 1982.

The Sun Workstation quickly emerged as a potent computing platform for academic institutions as well as companies in Hollywood and engineers at NASA. But nowhere would the impact be as dramatic as it would be on Wall Street and throughout the financial markets of the world that were rapidly deregulating.

Sun’s revenues would grow to $15.7 billion by 2000, and its stock would reach $130 before the crash. It would also become the number-one supplier of open network computing technologies around the world and the top Unix vendor in the banking, global trading, RDBMS, and securities markets. Sun was also responsible for developing Java, still one of the most popular programming languages, and distributing it for free.

Sun was sold to Oracle Corporation in 2010.


[1] Information on Unix from Campbell-Kelly, M. and Aspray, W. (1996) Computer: A History of the Information Machine. Basic Books. pp. 219-222.
[2] Information on Sun Microsystems from Segaller’s NERDS 2.0.1 starting on p. 229.
[3] Sales figures from an article on the BUSINESS WEEK website archives accessed on December 8, 2001. “Scott McNealy’s Rising Sun” was originally published in the same magazine on January 22, 1996. According to the article, McNealy had a lot of exposure to manufacturing. His father was a Vice-Chairman of American Motors Corp., and after failing to get into both Harvard’s and Stanford’s business schools, McNealy took a job as a foreman for Rockwell International.


Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Google Fiber in Austin

Posted on | May 5, 2013 | No Comments

Austin, Texas is getting Google Fiber, the one-gigabit digital broadband service from the advertising giant. With connections to individual homes and businesses transmitting up to 1,000 Megabits per second, it is 60 to 100 times faster than current services. What makes Google Fiber unique is that it uses digital signals moving through glass conduits at the speed of light – much, much faster than the copper lines traditionally used for telephone service or the coaxial cables that became the staple for broadcasting cable television and later for connecting cable modems. Also, Google connects its fiber optic cabling directly to the home (FTTH) rather than just running fiber to the neighborhood or to the curb in front of your house. Google Fiber will offer digital TV as well as a host of digital Internet services – to selected neighborhoods.[1]

Google has already begun rolling out its glass channels in Kansas City, creating what have been nicknamed “fiberhoods.” On March 30, 2011, Kansas City was chosen from among more than 1,000 US metropolitan applicants competing to be the first with the new service. Google also recently signed a deal to buy a municipal fiber-optic system in Provo, Utah that originally cost $39 million to build. Google is buying it for one dollar because the system is costing the city money. If the Kansas City model is followed, Google Fiber’s pricing structure will include free 300K Internet (with a construction fee), 1GB Internet ($70), and 1GB Internet plus TV ($120).


Telecommunications systems have lagged behind other technological innovations, particularly in implementation. Fiber optic communication was developed in the 1970s, and the first systems were installed by the mid-1980s, including Sprint’s nationwide backbone network. However, fiber is expensive to build out, especially through the “last mile” into homes and businesses. Landlines have lost some of their attraction as investment has shifted to mobile, driven by the demand for 4G services to feed smartphones and tablets. Verizon has scaled back its FiOS fiber-to-the-home (FTTH) service despite high consumer satisfaction, claiming that “Wall Street” punished it for expanding the service.

Telecom incumbents use a variety of competitive strategies to construct barriers to entry, including customer captivity through long-term contracts, strong lobbying of government regulators, and extensive investments in fixed costs that are difficult for any start-up to match. Google, though, is not just any start-up. One of Google’s major competitive advantages is its investment in fixed-cost capital assets, including data centers, proprietary advertising and “big data” technology, and high-speed telecommunications – and with $50 billion in annual revenues, its ability to invest and build is extensive.

Fiber has been an important part of Google’s strategy to connect searchers to its data servers faster, feeding its primary revenue source – search advertising. Google wants to make the process incredibly fast to hold off competitors Microsoft and Yahoo!. Recognizing this need, Google began purchasing fiber optic cabling in the wake of the “telecom crash” of 2002. Some of it was intercity cabling from Enron’s misguided broadband strategy, and some of it was undersea capacity from the now-defunct international carrier Global Crossing. Much of it was “dark fiber” that would allow Google to attach its own laser transmission and termination technology. Toward this end, the company began buying up key patents related to optical communications for its proprietary fiber optic technology. Fiber is so important to the Google strategy that it spent almost $2 billion for the old Port Authority building at 111 Eighth Avenue in Manhattan because it sits on top of a hub of fiber optic arteries that connect to the surrounding portions of New York City.

Texas assumed a national leadership role in 2005 when it centralized its cable franchising regulations, making it easier for companies such as San Antonio-based AT&T and Verizon to expand their digital video and broadband services in the state.[2] The proliferation of Internet Protocol Television (IPTV), as it was called at the time, was being stalled because cable TV had existed under monopoly conditions, subject to restrictive regulations and demands by local municipalities. In 2005, Rep. Phil King was the House sponsor of Texas Senate Bill 5, which encouraged competition by allowing new entrants to obtain state-issued, statewide cable and video franchises. No longer would exclusive franchises be granted. The bill was signed by Governor Perry on September 7, 2005, promising better services and economic benefits for Texas, as well as a model for other states.

So will Google Fiber influence economic development in the Austin area? A number of questions are worth raising. Will it attract new companies to Austin? Will it help new and existing firms become more efficient and productive? Can it help increase the rate of innovation needed to compete with other geographical areas? Can it spur competition in the digital services field and bring down prices for 1GB broadband? How will it influence Austin’s advantages in entertainment, government services, and its growing legion of high-tech companies?

One question, raised in Forbes magazine, asks: “what obligations do we have to provide basic services equally, regardless of income and social circumstances?” In “Will Poor People Get Google Fiber?” John McQuaid asks whether the Google model of broadband diffusion is the right one, or whether we should return to the telecommunications policy that brought us postal service and the telephone – universal service.

In the meantime, we will assess the Google model of rolling out digital services and any associated socio-economic development in the Lone Star State’s capital city, particularly in its cultural and creative industries.[3]


[1] The Google Fiber announcement to build out in Austin was made on Tuesday, April 12, 2013.
[2] I was following the Texas regulatory development as part of a project at NYU on broadband services and economic development. Part of which was written for a paper entitled “The Telco’s Brave New World: IPTV and the “Synthetic Worlds” of Multiplayer Online Games” for the Pacific Telecommunications Council Conference and Proceedings. January 15-18, 2005 Honolulu, Hawaii.
[3] The economic power of the creative industries has been calculated by the U.S. Bureau of Economic Analysis as part of a general revision of what produces economic growth.



Anthony J. Pennings, PhD is the Professor of Global Media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He also taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii during the 1990s.

Three Levels of Digital Media Metrics

Posted on | April 17, 2013 | No Comments

As the web transforms both user and institutional practices across the digital media sphere, the search for useful metrics intensifies. Traditional techniques for measuring eyeballs and eardrums for television and radio are insufficient in an environment where digital technologies offer so much more in terms of interaction and transaction capabilities. Social media has increasingly embedded itself into the fabric of both for-profit and nonprofit organizations, as senior management’s recognition of its importance has led to increased budget allocations for staff, technology, data collection, and advanced analytics. As the Internet becomes more complex, mobile, and socially oriented, understanding how your digital media is performing and contributing to organizational objectives becomes more complicated – but also extremely valuable.

So what metrics should we be planning to use and looking to analyze? This is not a simple question to answer, but I want to approach it by referencing two books I used in my undergraduate classes on social networking and digital analytics: Social Media Metrics Secrets by John Lovett and Digital Impact: The Two Secrets to Online Marketing Success by Vipin Mayar and Geoff Ramsey. (I guess we are in the unlocking-secrets phase of digital media.) Both books construct a hierarchy of digital media metrics and address many other issues, such as metrics for mobile, search, and online video. I draw on both as I grapple with my own understanding of a priority system for measuring digital media.

These metrics can be roughly organized into three levels:

1) At a fundamental level, social media metrics involve counting simple, short-term actions: check-ins, tweets, likes, impressions, visits, numbers of followers, click-through rates, etc. These measure the immediate impact of an action or a campaign and can provide simple but useful diagnostic numbers for gauging effectiveness. In general, they provide more tactical information and can also include less quantifiable involvement such as reviews and feedback.

John Lovett in the video below warns against an overemphasis on counting metrics and encourages collecting and evaluating metrics from a more strategic approach.[1]

2) At another level, you can begin to determine and measure more strategic calculations that provide benchmark numbers for future analysis or for evaluating a campaign in progress. These strategic measures give more context for your numbers and more insight into the actions of your audience. Key Performance Indicators (KPIs) are metrics that help identify and support people who advocate for your brand, share your content and widgets, and influence others in your key target markets.[2] Key strategic metrics include engagement, conversation volume, sentiment ratios, conversion rates, end-action rates, and brand perception lifts.

3) At a “higher” level are the metrics that relate to organizational sustainability. These include financial metrics that measure return on investment (ROI) and efficiencies such as cost per fan/tweet/post/vote, etc.[3] They connect to key concerns about the financial and legal risks involved in digital media activities and acknowledge the importance of social media across the range of corporate or nonprofit objectives involving legal, human resources, advertising, and marketing activities. They are of particular concern to upper management, who want to see the connections from social media to product development, service innovation, policy changes, market share, election votes, and/or stock market value.
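The three levels above reduce to simple ratios. A rough Python sketch follows; the function names and sample numbers are invented for illustration and come from neither book.

```python
def ctr(clicks, impressions):
    """Click-through rate: an immediate, tactical 'counting' metric."""
    return clicks / impressions

def sentiment_ratio(positive, negative, neutral=0):
    """Strategic metric: share of positive mentions among all mentions."""
    return positive / (positive + negative + neutral)

def cost_per_fan(campaign_cost, new_fans):
    """Sustainability-level efficiency metric tied back to budget."""
    return campaign_cost / new_fans

# Illustrative campaign numbers
print(round(ctr(420, 50_000), 4))               # → 0.0084
print(round(sentiment_ratio(300, 60, 140), 2))  # → 0.6
print(round(cost_per_fan(2_500.00, 1_000), 2))  # → 2.5
```

The point of the hierarchy is that each ratio answers a question for a different audience: the campaign manager, the strategist, and upper management.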

Like most analytics, digital media metrics require meaningful connection and context to be valuable. Wall Street stock prices became significantly more interesting after Charles Henry Dow and Edward Jones started charting trends over time in the Dow Jones Industrial Average (DJIA). Likewise, the number of social mentions or tweets becomes more meaningful when tracked over time and perhaps correlated with campaign events. Metrics in general need to be tied to specific goals and objectives to be useful, and not all results are likely to be tied to the bottom line.
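The Dow analogy can be made concrete: a simple moving average turns a noisy series of daily mention counts into a readable trend line. The counts below are hypothetical.

```python
def moving_average(series, window=3):
    """Smooth a daily-mentions series to expose the trend, the way
    the DJIA turned raw prices into a readable index."""
    return [round(sum(series[i - window + 1:i + 1]) / window, 2)
            for i in range(window - 1, len(series))]

mentions = [12, 15, 11, 30, 42, 38, 25]  # hypothetical daily counts
print(moving_average(mentions))  # → [12.67, 18.67, 27.67, 36.67, 35.0]
```

Spikes in the smoothed series can then be lined up against campaign events to see which actions actually moved the conversation.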

The three levels of digital and social media metrics mentioned above are part of a process of producing valuable information to understand the effectiveness and success of campaigns, products, and services as well as their contributions to organizational sustainability.


[1] I highly recommend John Lovett’s (2011) Social Media Metrics Secrets, John Wiley and Sons.
[2] Strategic metrics include both metrics and key performance indicators which Lovett characterizes respectively as the dataflow or “lifeblood” and the “vital signs” of digital analytics such as pulse and temperature.
[3] Another important book I use is Digital Impact: The Two Secrets to Online Marketing Success by Vipin Mayar and Geoff Ramsey. It has a useful perspective on financial metrics and particularly ROI.



Anthony J. Pennings, PhD is the Professor of Global Media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He also taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii in the 1990s.

New Developments in GPS and Geo-Location for Mobile Technology

Posted on | March 25, 2013 | No Comments

The ubiquity of mobile devices has focused renewed attention on the Global Positioning System (GPS), the constellation of space-based vehicles used to provide location data to users through their hand-carried mobile phones and tablets. GPS technologies were developed for use in aircraft, land vehicles, and ships. More recently, they have become crucial technologies for a wide variety of mobile devices. Global positioning has been used primarily for location tracking and turn-by-turn direction services, but what has become extraordinary are the new value-added services that continue to be built on the basic capabilities of this space-based system, which runs 24/7, through all weather conditions, and can reach an unlimited number of users.

Why GPS? While locations for mobile technology can be determined using cell towers, that data is less accurate than GPS. Approximate positions can be determined from cell towers based on the angle of approach, the strength of signals, and the time it takes a signal to reach various towers. However, mountains and other physical obstructions such as forests and buildings can interfere with location determination. These impediments can also interfere with GPS signals, but more options exist, as signals from only three or four of the 27 satellites are needed to determine a fairly accurate position.

The United States started the GPS program in the 1970s after the Cold War’s “Space Race” refined satellite and rocket launching capabilities to make them efficient and reliable. GPS was originally developed by the military and proved to be decisive in the first Gulf War when it enabled Allied troops to bypass Iraqi fortifications by venturing far into the featureless desert to outflank them. It has also been used for search and rescue operations and to provide targeting information and missile guidance as well as mapping strategic areas for facilities management and military engagement.

The basic GPS infrastructure consists of three major segments: the space segment (SS), consisting of 27 satellites that orbit the planet every 12 hours and transmit time-encoded information; the control segment (CS), which monitors and directs the satellites from the ground; and the user segment (US), which picks up signals from the system and produces useful information. The GPS satellites broadcast signals from space that are “triangulated” by user devices, although the more satellite signals that can be accessed, the better the coordinate information.
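To see what “triangulating” the signals involves, here is a flat, two-dimensional toy version in Python of the range-based positioning a receiver performs. Real GPS works in three dimensions, with an extra satellite needed to correct the receiver’s clock; the coordinates below are invented.

```python
import math

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for the (x, y) point at distance r_i from each known
    point p_i - a 2D toy version of what a GPS receiver does with
    satellite ranges in 3D."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting pairs of circle equations gives two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver at (1, 2); three "satellites" at known positions.
sats = [(0, 0), (10, 0), (0, 10)]
ranges = [math.dist((1, 2), s) for s in sats]
x, y = trilaterate_2d(sats[0], ranges[0], sats[1], ranges[1],
                      sats[2], ranges[2])
print(round(x, 6), round(y, 6))  # → 1.0 2.0
```

Each additional range measurement over-determines the system, which is why more visible satellites yield better coordinate information.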

Devices such as automobile GPS systems and GPS dog-tracking collars produce three-dimensional location information (latitude, longitude, and altitude) as well as the current time from the transmitted signals. Assisted GPS, however, which is used in Apple’s iOS devices such as the iPhone and iPad, combines standard GPS data with information derived from cellular towers and known Wi-Fi hotspots for faster and more accurate readings.

The United States Federal Communications Commission (FCC) required all phone manufacturers, service providers, and PSAPs (Public Safety Answering Points) to comply with specifications for its Enhanced 911 (E911) program by the end of 2005. This required all cell phones to transmit their phone number and location when dialing 911. More recently, the FCC strengthened 911 requirements for all mobile devices and issued new location accuracy rules for wireless carriers.

While GPS is currently the dominant provider of position data, other countries have been working on their own global positioning systems. Europe is testing its Galileo system, and China is working on the BeiDou system. The US has liberally allowed the use of its GPS system around the world and has voiced objections to these alternatives, as they might be used for military purposes against US interests.

The Russian GLONASS, an acronym for GLObalnaya Navigatsionnaya Sputnikovaya Sistema, is the most immediate complement/competitor to the US GPS. Development of GLONASS began in response to GPS in the mid-1970s during the Cold War. It was given new impetus during the presidency of Vladimir Putin, who substantially increased funding for the Russian Federal Space Agency. That did not stop three GLONASS-M satellites from falling into the Pacific Ocean in December of 2010, forcing the Russian government to use backup satellites. GLONASS is now operational and both complements and provides an alternative to the United States’ GPS.

Mobile devices have started to use the Russian GLONASS system for improved accuracy. Qualcomm was one of the first to develop chipsets that boost positioning performance with GLONASS signals. A receiver combining GPS with GLONASS can track not only the frequencies of all 27 GPS satellites but also the signals from the 24 GLONASS positioning satellites. Together they provide global coverage and superior precision.

The beauty of this government-developed and managed infrastructure is that it has enabled a wide variety of user segment devices that transform the satellite signals into productive information. GPS technology has unleashed a wave of product innovation that has become a somewhat unheralded share of the modern technology economy. The satellites emit a rather continuous set of navigation signals, while the user segment equipment, with its embedded microprocessor chips and display technology, provides the site of creativity. The result has been a wave of user segment equipment allowing a span of applications from vehicle fleet management and stolen car recovery to the tracking of cheating spouses and Alzheimer’s patients.



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Banner Years – The Resurgence of Online Display Ads

Posted on | March 5, 2013 | No Comments

Although second to keyword advertising, display ads continue to be a significant revenue source for web publishers.[1] While search-based keyword advertising continues its astonishing ascendancy, “banner ads” continue to be a workhorse for many marketing efforts. Revenues continue to rise, and the addition of Facebook as a potent new advertising vehicle has added a new competitive spirit into the mix.

Part of the argument is that display ads do more for brands than initially thought and that click-through rates (CTR) are not a full measure of their value. Display ads are also becoming more transparent, allowing advertisers to verify where ads run and account for ads that are never actually seen. Probably most important, the online infrastructure for selling and buying ads has become increasingly virtualized, with online markets connecting publishers and advertisers in computerized, real-time auction environments.

Online ads began with the web page, with its hypertext links and its ability to display .gif or .jpeg images. This combination soon gave birth to the controversial banner ad. Through hypertext markup language’s (HTML) IMG SRC tag, images could be presented, and with a little more coding they could contain links to other websites. Marc Andreessen proposed the IMG tag on February 25, 1993, when he was working on the NCSA Mosaic web browser, the precursor to the prolific Netscape browser that kickstarted the World Wide Web and the explosion of the “dot-com” companies.
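The mechanics were minimal: an IMG element nested inside a hypertext anchor. A small Python helper can assemble such a banner; the filenames and URL below are made up for illustration.

```python
def banner_ad(image_src, click_url, alt_text):
    """Assemble the two-tag combination behind the first banner ads:
    an IMG element nested inside a hypertext link."""
    return (f'<a href="{click_url}">'
            f'<img src="{image_src}" alt="{alt_text}"></a>')

# Hypothetical sponsor banner, in the spirit of the 1994 AT&T ad
html = banner_ad("banners/sponsor.gif",
                 "http://example.com/campaign",
                 "Have you ever clicked your mouse right HERE?")
print(html)
```

Everything else that follows in this story – impressions, ad networks, exchanges – was built on top of this one line of markup.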

The term “banner ad” was coined by HotWired when it sold the first web banner under its revenue model of “corporate sponsorship.” Designed to go to a site that promoted seven art museums, the first ad went online on October 27, 1994 and was paid for by the AT&T Corp. HotWired, an offshoot of Wired magazine, also pioneered “HotStats,” the first real-time web analytics. The original ad is presented below.[2]

First Banner Ad Paid for by ATT

Advertising on the Internet was not a sure thing. It was frowned upon early on because the ethos of the Internet community, which was either military or academic, opposed it. It was also officially restricted: the Internet rested on the National Science Foundation’s network (NSFNET), whose acceptable use policy explicitly forbade advertising. It took an act of Congress to allow commercial activity on the Internet.

As the web exploded throughout the rest of the 1990s, ad banners grew in popularity. They had the benefit of not only presenting brand information but also letting users click the link to go to a sponsor’s website. There, visitors could take an “end action” such as signing up for additional information, downloading an application, or even making a purchase.

As the population of websites grew, more advertisers saw value in purchasing ad space. The product – a display of an ad on a webpage – became known as an “impression” or ad view. A web page can contain several banners of different sizes, located in different parts of the browser window. The .gif file format was particularly useful, as it allowed several layers that could be timed into an animation.

Inefficiencies in connecting publishers and ad buyers quickly revealed themselves, and in response a number of third-party solutions emerged. Ad networks arose to aggregate blocks of similar content and market these packages to advertisers. This was particularly attractive to smaller websites with niche audiences, as they could be sold alongside other sites that appealed to the same groups. They had faults, though: advertisers complained about a lack of transparency and flexibility, as it was difficult to determine where their ads were being placed and to make campaign adjustments. Publishers complained because they couldn’t connect with the best advertisers and lost revenue to intermediaries. Overall metrics were also lacking for all parties to the transaction.

More recently, ad exchanges have proven to be a more nimble intermediary. These computerized markets directly connect advertisers and publishers and allow real-time bidding on more targeted ad spaces. Advertisers and agencies can be more selective in choosing the web publishers that reach their preferred audiences. Ad exchanges tend to be very technology-intensive, so it is not surprising that the larger advertising tech companies like Google, Microsoft, and Yahoo! have taken the lead.
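Under the hood, an exchange runs an auction for each impression as the page loads. Many exchanges used a second-price rule, which can be sketched in Python; the bidder names, prices, and floor below are invented.

```python
def run_auction(bids, floor=0.0):
    """Pick the highest bidder for an impression; the winner pays the
    second-highest bid (or the floor), a common exchange rule."""
    eligible = {b: p for b, p in bids.items() if p >= floor}
    if not eligible:
        return None  # impression goes unsold
    ranked = sorted(eligible.items(), key=lambda bp: bp[1], reverse=True)
    winner, _top = ranked[0]
    clearing = ranked[1][1] if len(ranked) > 1 else floor
    return winner, clearing

bids = {"agency_a": 2.10, "agency_b": 3.40, "agency_c": 1.25}  # CPM dollars
print(run_auction(bids, floor=1.00))  # → ('agency_b', 2.1)
```

The second-price rule encourages bidders to bid their true valuation, which is part of why real-time exchanges scaled so quickly.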

The major ad exchanges:

– AdBrite – ceased operations on February 1st, 2013.
– AdECN (Microsoft) became part of what is now Bing Ads (formerly Microsoft adCenter and MSN adCenter).
– ContextWeb merged with Datran to form PulsePoint.
– DoubleClick Ad Exchange was bought by Google.
– Facebook’s FBX draws on a billion users.
– Right Media was bought by Yahoo!.

Even Amazon announced plans to enter the ad exchange market in late 2012. Amazon drops cookies recording your visits into your browser, and these are acted upon when you visit other sites in Amazon’s exchange network, such as IMDb and DPReview, as well as other exchanges and publishers with relationships to Amazon.
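Mechanically, a retargeting cookie is just a Set-Cookie header that the browser stores and later echoes back. The sketch below, using Python’s standard library, shows the round trip; the identifier and domain are invented for illustration and are not Amazon’s actual implementation.

```python
from http.cookies import SimpleCookie

# A server "drops" a cookie by sending a Set-Cookie header with its response.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"  # hypothetical tracking identifier
cookie["visitor_id"]["domain"] = ".example-exchange.com"  # hypothetical domain
cookie["visitor_id"]["path"] = "/"
set_cookie_header = cookie["visitor_id"].OutputString()
print("Set-Cookie:", set_cookie_header)

# On later visits to sites in the network, the browser echoes the cookie
# back in its request, letting the exchange recognize the visitor and
# target ads accordingly.
returned = SimpleCookie()
returned.load("visitor_id=abc123")
print(returned["visitor_id"].value)  # abc123
```

Setting the cookie’s domain to a value shared across the network is what lets many different sites report the same visitor back to one exchange.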


[1] has useful statistics on the display ad market.
[2] is generally acknowledged as having the first banner ad although some contention exists.
[3] Information on Amazon and Facebook from




Anthony J. Pennings, PhD recently joined the Digital Media Management program at St. Edwards University in Austin TX, after ten years on the faculty of New York University.

Working Big Data – Hadoop and the Transformation of Data Processing

Posted on | February 15, 2013 | No Comments

One day Google downloaded the Internet, and wanted to play with it.

Well, that is my version of an admittedly mythologized origin story for what is now commonly called “Big Data.”

Early on, Google developed a number of new applications to manage a wide range of online services such as advertising, free email, blog publishing, and free search. Each required sophisticated telecommunications, storage, and analytical techniques to work and be profitable. In the wake of the dot-com bust and subsequent telecom crash, Google started to buy up cheap fiber optic lines from defunct companies like Enron and Global Crossing to increase connection and interconnection speeds. Google also created huge data centers to collect, store, and index this information. Its software success enabled it to become a major disruptor of the advertising and publishing industries and turned it into a major global corporation now making over US$50 billion a year in revenues. These innovations would also help drive the development of Big Data – the unprecedented use of massive amounts of information from a wide variety of sources to solve business and other problems.

Unable to buy the type of software it needed from any known vendor, Google developed its own solutions to fetch and manage the petabytes of information it was downloading from the World Wide Web on a regular basis. Like other Silicon Valley companies, Google drew on the competitive cluster’s rich sources of talent and ideas, including Stanford University. Other companies such as Teradata were also developing parallel-processing hardware and software for data centers, but Google was able to raise the investment capital and attract the talent to produce an extraordinary range of proprietary database technology. The Google File System was created to distribute files securely across its many inexpensive commodity server/storage systems. A program called Borg emerged as an automated way to distribute incoming workloads among its myriad machines, a process called “load-balancing.” Bigtable scaled data management and storage to enormous sizes. Perhaps the most critical part of the software equation was MapReduce, an almost Assembly-like piece of software that allowed Google to write applications that could take advantage of the large datasets distributed across its “cloud” of servers.[1] With these software solutions, Google began building huge warehouse-sized data centers to collect, store, and index information.
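To make the MapReduce idea concrete, here is a toy word-count sketch in Python. It mimics the map-shuffle-reduce pattern that Google described, but it is only an illustration of the programming model, running on one machine rather than across a cluster of servers.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map: turn each document into (word, 1) pairs."""
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into a final result."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(shuffle(pairs))
print(counts["the"])  # 3
```

The appeal of the model is that the map and reduce steps are independent per key, so the framework can scatter them across thousands of machines and gather the results without the programmer managing that distribution.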

When Google published the conceptual basis for MapReduce, most database experts didn’t comprehend its implications, but not surprisingly, a few at Yahoo! were very curious. By then, the whole field of data management and processing was facing new challenges, particularly for those managing data warehouses for hosting, search, and other applications. Data was growing exponentially; it was splintering into many different formats; data models or schemas were evolving; and, probably most challenging of all, data was becoming ever more useful and enticing for businesses and other organizations, including those in politics. While relational databases would continue to be used, a new framework for data processing was in the works. Locked in a competitive battle with Google, Yahoo! strove to catch up by developing its own parallel-processing power.[2]

Doug Cutting, who would bring his work to Yahoo!, was also developing software that could “crawl” the Web for content and then organize it so it could be searched. Called Nutch, his software agent or “bot” tracked down URLs and selectively downloaded webpages from thousands of hosts, where they would be indexed by another program he created called Lucene. Nutch could “fetch” data and run on clusters of hundreds of distributed servers. Nutch and Lucene led to the development of Hadoop, which drew on the concepts designed into Google’s MapReduce. With MapReduce providing the programming framework, Cutting separated the “data-parallel processing engine out of the Nutch crawler” to create Apache Hadoop, an open source project intended to make it faster, easier, and cheaper to process and analyze large volumes of data.[3]
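The crawl-then-index pipeline can be sketched in miniature. In the Python below, the link graph and page text are invented stand-ins for real web fetches; the code only illustrates the breadth-first fetching and inverted indexing that Nutch and Lucene perform at vastly larger scale.

```python
from collections import deque, defaultdict

# Invented stand-in for the Web: each URL maps to (page text, outgoing links).
pages = {
    "http://a.example": ("apache hadoop overview", ["http://b.example"]),
    "http://b.example": ("hadoop cluster tutorial", ["http://a.example"]),
}

def crawl(seed):
    """Breadth-first fetch of reachable URLs, like a crawler's fetch list."""
    seen, queue = set(), deque([seed])
    while queue:
        url = queue.popleft()
        if url in seen or url not in pages:
            continue
        seen.add(url)
        queue.extend(pages[url][1])  # follow outgoing links
    return seen

def build_index(urls):
    """Inverted index: each word maps to the set of URLs containing it."""
    index = defaultdict(set)
    for url in urls:
        for word in pages[url][0].split():
            index[word].add(url)
    return index

index = build_index(crawl("http://a.example"))
print(sorted(index["hadoop"]))  # both pages contain "hadoop"
```

The inverted index is what makes search fast: a query looks up its words directly rather than rescanning every downloaded page.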

Amr Awadallah of Cloudera is one of the best spokesmen for Hadoop.

By 2007, Hadoop began to circulate as a new open source software engine for Big Data initiatives. Built on the indexing and search technology of Google and Yahoo!, it was adopted by companies like Amazon, Facebook, Hulu, IBM, and the New York Times. Hadoop is, in a sense, a new type of operating system, directing workloads, performing queries, and conducting analyses, but at an unprecedented scale. It was designed to work across many low-cost storage/server systems, managing files spread over a wide range of servers and running applications on top of those files. Hadoop made use of data from mobile devices, PCs, and the whole Internet of “things,” such as cars, cash registers, and home environmental systems. Information from these grids of data collection increasingly became fodder for analysis and innovative value creation.

In retrospect, the rise of Big Data marked a major transition in the economics and technology of data. Instead of traditional database systems that saved information to archival media like magnetic tape, which made it expensive to retrieve and reuse, low-cost servers became available with central processing units that could run programs within an individual server and across an array of servers. Large data centers emerged with networked storage equipment that made it possible to perform operations across tens of thousands of distributed servers and produce immediate results. Hadoop and related software that could manage, store, and process these large datasets were developed to run data centers and access unstructured data, such as video files, from the larger world of the Internet. Big Data emerged from its infancy and began to farm the myriad of mobile devices and other data-producing instruments for a wide range of new analytical and commercial purposes.



[1] Steven Levy’s career of ground-breaking research includes this article on Google’s top secret data centers.
[2] Amr Awadallah listed these concerns at Cloud 2012 in Honolulu, June 24.
[3] Quote from Mike Olson, CEO of Cloudera.



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Apollo 13: The Write Stuff

Posted on | January 27, 2013 | No Comments

I recently had the chance to visit the Johnson Space Center near Houston, Texas, with my family. As I toured the facility, I couldn’t help remembering those famous words, “Houston, we have a problem,” uttered when the crew of Apollo 13 discovered “a main B bus undervolt,” indicating a loss of power from their fuel cells and an associated gas leak. These technical failures changed the spacecraft’s mission from exploration to survival. Their plight was cinematized by director Ron Howard in his movie Apollo 13 (1995) and is enjoying a resurgence with National Geographic’s new adaptation of The Right Stuff, Tom Wolfe’s 1979 book.


I wrote the essay below to draw attention to the new literacies and simulation techniques created and enhanced by NASA programs to guide and test the space vehicles on their historic journeys. The cybernetic process of guiding a spacecraft to the Moon is exemplified by some clever F/X and acting in this movie.

Still, more than that, it tells the story of a certain break with “reality” and a new trust in the techniques and instrumentalities of hyperreal simulation. Apollo 13 does this, as does the more recent Hidden Figures (2016), about the Black women who contributed so much of the engineering and mathematics needed for success in the space race.

Apollo 13 was the third mission scheduled to land humans on the Moon. Shortly after its liftoff on April 11, 1970, one of its oxygen tanks ruptured, destroying several fuel cells and causing a small leak in the other main oxygen tank. NASA immediately knew the mission would no longer land on the Moon. The problem became one of returning the astronauts to terra firma before they either froze to death or died of CO2 poisoning, not to mention the difficulties of navigating back with barely any electrical energy left to run the computer or even the radio.

Unlike the macho heroics of The Right Stuff (1983), based on Tom Wolfe’s 1979 book of the same name, Apollo 13 celebrated not just the obvious bravery of the endeavor but a new type of technical/scientific literacy. The “ground controllers” in Houston had to recalibrate the mission trajectories and develop a new set of procedures to be tested and written for the crew in space. This was done largely using the multimillion-dollar simulators the astronauts had trained in before the actual launch.

A fascinating example was when the ground crew developed the procedures for using additional lithium hydroxide canisters to take the CO2 out of the air. The astronauts faced a very real danger of being poisoned by their own exhalations because the square carbon dioxide scrubber canisters from the Command Module were not compatible with the round openings in the Lunar Module’s environmental system (the crew had been forced to move to the Lunar Module when the explosion in the Command Module occurred).

A group of the ground crew gathered all the supplies they knew were available in the spacecraft and devised a solution. They figured out a way to attach the square canisters to the round openings using plastic bags, cardboard, tape, and the like. Finally, they wrote up the procedures, which were transmitted to the crew just in time to avoid asphyxiation.

The movie is a very interesting historical representation of the use of systems and written procedures within an organization. To some extent, the Moon landings provided the triumphant background narrative for new developments in computers and simulation. Their successes lent the aura of certainty needed for a whole host of new technological investments, from CAD/CAM 3-D production and pilotless drone warfare to space-based remote sensing and mapping and the Bloomberg/Reuters worldwide system of electronic financial markets.



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Virality and the Diffusion of Music Videos

Posted on | January 10, 2013 | No Comments

I’m talking at the Viral Summit next week in Las Vegas so I thought I’d finish up on some topics I’ve been working on that address viral marketing and the music industry.

With over 1.148 billion views since July of 2012, the Gangnam Style music video has us all scratching our heads. The parody of South Korea’s ritzy Gangnam district in Seoul has rocketed its singer to immediate international stardom. Park Jae-sang, better known as PSY, has gone from a relatively well-known rapper in his home country to an international celebrity, even making an appearance at Madonna’s latest NYC concert.

Gangnam Style also highlights the power of viral marketing. With nearly 36 million shares since its release last summer, primarily via Facebook (33,886,323 shares) but also through Twitter (1,790,190 shares), it already ranks second on the all-time viral chart. The graph below tracks the Gangnam Style “epidemic.”[1]


Virality refers to the diffusion of messages through the help of cooperating individuals. Often described as a word-of-mouth (WOM) process, it has received new emphasis with the decline of broadcasting and the rise of network effects on the Internet. The name derives from the term “virus” and the epidemiological way viruses spread from person to person until a critical mass erupts into a major outbreak.
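The epidemic analogy can be illustrated with a toy branching-process simulation in Python. The audience sizes and share probability below are invented parameters, not measurements of any actual campaign; the point is simply that when each sharer generates more than one new sharer on average, shares compound round after round.

```python
import random

def simulate_shares(seed_sharers, friends_per_person, share_prob, rounds, rng):
    """Each sharer exposes some friends; each friend shares with some probability."""
    total = sharers = seed_sharers
    for _ in range(rounds):
        exposed = sharers * friends_per_person
        sharers = sum(1 for _ in range(exposed) if rng.random() < share_prob)
        total += sharers
    return total

# With 5 friends exposed per sharer and a 30% share rate, each sharer
# produces 1.5 new sharers on average -- above the epidemic threshold of 1.
rng = random.Random(42)
print(simulate_shares(10, 5, 0.3, 6, rng))
```

Below the threshold (for instance, a 10% share rate with the same five friends), the cascade fizzles out after a few rounds, which is why most videos never “go viral” at all.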

According to Unruly Media, the top spot on the list of all-time viral shares belongs to the video by Jennifer Lopez – On The Floor featuring Hispanic-American rapper Pitbull. The disco duet leads the virality list with 37,405,834 Facebook shares and 271,177 Twitter shares since March of 2011.

The success of a viral message depends on such factors as the interest in the item, the timing of the message, the network structures available, and the cost and ease of passing the message along. Good content is obviously key, and it should be no surprise that creative composition, humor, and sex appeal are important. Also important is taking advantage of topics that are trending. In addition, knowing how and where to seed content to a target audience on the web through opinion leaders is crucial to a successful viral campaign.[2]

The attention given to music videos had been in steady decline since their heyday during the 1980s on MTV and VH1, but social media has provided a fascinating new venue to entice audiences and distribute musical creations. YouTube has provided the main new distribution channel, but it has been Facebook and Twitter that have provided the network mechanism to propel music content out to its intended and unintended audiences.

Compared with traditional advertising, viral marketing offers music videos better audience targeting, lower communication costs, and faster diffusion. But will it make money? Music piracy has plagued the industry since Napster was introduced in the 1990s. A newer challenge is the number of software applications that allow MP3s to be ripped from YouTube, but iTunes, Amazon MP3, and Google Play now provide easy-to-use platforms to search, sample, and buy music. The real test for viral marketing is whether the sharing of music videos will circle consumers back to sites that monetize music products for the artists.


[1] Stats on viral shares from Unruly Media’s Viral Video Chart.
[2] Check out these tips on how to make a music video go viral.
[3] Mashable maintains a top viral media list.




Anthony J. Pennings, PhD recently joined the Digital Media Management program at St. Edwards University in Austin TX, after ten years on the faculty at New York University.
