Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Controversies in Intellectual Property – The Business Method Patent

Posted on | November 11, 2013 | No Comments

Disclaimer: The following is a brief overview related to business method patents and should not be considered legal advice.

One of the most controversial forms of intellectual property in the digital age is the business method patent (BMP). These patents cover particular systems or ways of conducting business and have gained a fair amount of notoriety lately as software-related processes have become integral to a wide range of commercial activities. From banking to manufacturing and retailing, business methods increasingly provide a competitive advantage, and companies understandably seek to limit others from using the same methods.

Patents in general are exclusive rights granted by the federal government to prevent others from using, making, or selling the same innovation or design for the term of the patent. Successful applications showing a utility of some sort are granted 20 years of protection, while a novel design is granted 14 years. Patents must meet the criteria that the innovation be useful, novel, and non-obvious. In the US, patents are granted by the United States Patent and Trademark Office (USPTO), often called the "PTO". Currently, it takes about three years to get a patent.

The US government has been awarding patents since George Washington signed off on the first one in 1790. One of the most notable information business patents was given to Herman Hollerith for The Art of Compiling Statistics in 1889. He was in the process of conducting the first automated census calculations with punched card tabulating machines he invented. When the company he created merged with several others and eventually became IBM, his patent provided the intellectual property foundation for its early success with tabulating machines.

In general, business method patents were not seriously considered by the PTO until the 1980s, when businesses started to develop their own in-house computer systems for automating a wide variety of payment, transaction, and trading capabilities. A 1998 decision by the Court of Appeals for the Federal Circuit “upheld a patent on a software program that was used to make mutual fund asset allocation calculations.”[1] Since that ruling, business method applications to the PTO and their approvals have increased dramatically.

The PTO has debated whether a BMP needs to involve a “technological art” or whether it is enough that the claimed innovation is a process, machine, manufacture, or composition of matter. What seems to be important is that it produces a tangible result. Other basic patent criteria apply as well. For example, Amazon’s One-Click application has been challenged on the grounds that it is obvious and that anyone in the business would eventually have developed it. The patent has been repeatedly denied in Europe although upheld in the US. Apple, for instance, did license the one-click technology from Amazon for its iTunes store.

Netflix, on the other hand, has been quite successful in patenting its business model, which covers its method of renting DVDs to customers in conjunction with the website interface that lists the viewer’s preferences, verifies returns based on subscription status, and delivers the next item in the queue to the subscriber.

For more information visit the Patents Business Methods site at the USPTO.

Notes

[1] Business Method Patents Online by William Fisher and Geri Zollinger. Accessed at http://cyber.law.harvard.edu/ilaw/BMP/ on November 11, 2013.

© ALL RIGHTS RESERVED

Anthony J. Pennings, PhD recently joined the Digital Media Management program at St. Edwards University in Austin TX, after ten years on the faculty of New York University.

Max Headroom’s Futuristic News Gathering

Posted on | November 5, 2013 | No Comments

One of my favorite TV shows from the 1980s was Max Headroom, a satire on network news done in a cyberpunk style. The show only lasted a year, but that is part of its mystique – it was too hot for a TV network to carry. Set in a dystopian near future, it showed a society suffering from harsh inequalities. One of the most interesting aspects of the show was its depiction of the future of journalism. It drew on the electronic news gathering (ENG) techniques of the time, such as video and satellite feeds, and added more futuristic computers and artificial intelligences to help the main characters solve political and social problems.

The show featured a famous futuristic news reporter named Edison Carter, who has a motorcycle accident trying to escape from some body-snatching baddies. He is knocked unconscious and is delivered to the head of his network’s research and development department, a teenage computer hacker/mad scientist who decides to digitize the reporter’s neural circuitry and download the data into a glitchy “talking head” artificial intelligence – Max Headroom. When Max comes to “life,” the last thing he remembers is Edison hitting the parking garage gate that warns MAX HEADROOM 2.3M.

Max becomes the electronic alter ego of Edison Carter, played by Canadian actor Matt Frewer. He soon partners with the reporter as well as Network 23’s star controller Theora Jones, a beautiful hacker played by Amanda Pays. Max provides comic relief and often helps solve the episode’s central problem due to his stealthy infiltration capabilities.

Some of my favorite capers included the times when Edison was accused of credit fraud (“that’s worse than murder”), when everyone became addicted to a TV game show, and when a politician tried to rig an election. The latter is particularly interesting today because in the show, the politicians are linked to TV networks, and the network with the highest ratings gets to have its politician in the driver’s seat. In contemporary terms, a politician connected to Fox or MSNBC would become Prime Minister if the associated network were ahead in the ratings. This is the original British pilot, Max Headroom: Twenty Minutes into the Future, on YouTube, in which Network 23’s new advertising technology inadvertently blows up inactive people (hey, it’s satire).

Max Headroom extrapolated some interesting trends in television journalism. Edison was what was called a “platypus” reporter, multitasking with multiple forms of equipment, particularly a rather large camcorder. By the 1980s, TV journalism had switched from film to electromagnetic video cameras. Film was difficult to transport and had to be developed before editing. Video cameras were originally developed in the 1950s for television studios, but portable models with sufficient quality for electronic news gathering, like the Betacam, were available by the time Max Headroom was conceived.

Competition was always fierce in television news, but the 24-hour news network introduced by CNN only intensified the need to get a story on air faster. Edison’s camera has a direct uplink to a satellite and down to the network controller. Satellite news gathering also became popular during the 1980s. With the space race came the global network of geosynchronous satellites first conceived by Arthur C. Clarke. That meant global capacity, and as early as 1964, the Olympics were broadcast from Tokyo. CNN was the first 24-hour news network and drew on the satellite expertise of Ted Turner’s WTBS, the first TV network with satellite-distributed programming via RCA’s Satcom vehicle.

As satellites became more powerful due to the advent of solid-state solar power, the corresponding earth stations got smaller. So small, in fact, that they could be installed on moving vehicles. Soon news reporters were being shown live, as wireless cameras and audio hookups to a mobile vehicle meant the signal could be transported via satellite to the TV studio. The TV show Nightline, hosted by Ted Koppel for 25 years, pioneered the use of satellites for “remote interviewing” during the coverage of the Iranian hostage crisis after the US embassy in Tehran was overrun. See Argo (2012).

The Network 23 news control room looks much like a modern military headquarters. Computers are able to access a variety of remote sensing satellites and local telemetry such as the floor plans of buildings. The controllers guide the reporters by accessing CCTV cameras and opening doors “literally” by cracking security systems. Max can also subvert security systems and get into difficult spots to help Carter.

Luckily for reporters, cameras have gotten a lot smaller, but reporters have rarely become network stars like Edison Carter. Instead, it has been the “talking heads”, much like Max Headroom, that achieved celebrity status. Max went on in “real life” to have his own show, interviewing celebrities like Jerry Hall, Michael Caine, and Sting, much like Rachel Maddow or Bill O’Reilly do on their TV shows.

Perhaps the real “platypus” reporters now are the public, with our smartphone cameras, blogs, Twitter accounts, and access to instant information sources like Wikipedia and “Googling.” So far this trend has not produced major social change, but who knows what the next “twenty minutes” might bring.


Citation APA (7th Edition)

Pennings, A.J. (2013, Nov 5). Max Headroom’s Futuristic News Gathering. apennings.com https://apennings.com/political-economies-in-sf/max-headrooms-futuristic-news-gathering/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012 he was on the faculty of New York University, where he taught digital economics and media. He also taught in the Digital Media MBA program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

What is Entertainment?

Posted on | October 30, 2013 | No Comments

I’m covering entertainment this week in an introduction to digital media class. The assigned textbook, which I generally like, is unsatisfying on this topic. I attribute this to the authors, who come from a journalistic background and probably carry some resentment towards the whole area of entertainment. The mixture of entertainment with news is certainly cause for concern, but what about the upside from injecting more entertainment into our daily information and media practices?

In general, I think we can learn a lot from a more systematic understanding of entertainment. I always thought that “entertainment studies” would be a good interdisciplinary pursuit for academia. Communications, Cultural Studies, English, Film, Media Studies, and Theatre all broach the subject in various ways. The University of Newcastle in Australia has a journal named Popular Entertainment Studies that is currently looking for submissions on entertainment during wartime.

Gaming programs are springing up and perhaps have the most pressing need for work in this area. I grabbed one of my favorites, Rules of Play: Game Design Fundamentals (2003) by Katie Salen and Eric Zimmerman, for a little “show and tell” today in class. Interestingly, though, it does not contain the word entertainment. I did see the word “entrainment,” which I’m adding to my list of words related to entertainment below:

entrainment
amusement
diversion
engagement
pleasure
sensual
attention
enjoyment
occupation
preoccupation
comical
adventure
challenge

This list is not meant to be exhaustive, but hopefully it is suggestive about the topic. In exploring the roots of the word “entertainment,” I found that it has been linked, through the Greek word enteron, to words for the bowel or intestine. In the Medieval Latin usage intertenere, it meant “to hold inside.” In Old French, the word entretenir similarly meant to “hold together” or “maintain,” as it does in more contemporary French.

Do these older meanings have any bearing on the contemporary connotations of the word entertainment? In English we often use the word in a phrase such as “entertain an idea,” which is closer to the sense of holding than of amusing. It is not a passive concept either, since entertaining an idea means at least thinking about it rather than dismissing it without consideration. Is entertainment a type of holding one’s attention? Is it a prolonged focus?

I’m intrigued by the more physiological connotations connected to the stomach area. In English medical terminology, “enteral,” as in enteral feeding or enteral nutrition, refers to tube feedings or the delivery of nutrients directly into the stomach or intestines. Does entertainment have something to do with the stomach rather than the head? Is it base rather than cerebral? To the extent that entertainment causes laughter or other emotional reactions, are they enteral reactions? Or does it still have strong cerebral connections?

Will the new emphasis on brain science and the techniques of scanning the brain provide additional insights into the dynamics of entertainment? Funding to solve contemporary social problems such as ADHD, sports concussion injuries, and long-term exposure to stress and injury by combat soldiers is making a number of imaging techniques available, such as:

Functional magnetic resonance imaging, or fMRI
Computed tomography (CT) scanning
Positron Emission Tomography (PET)
Magnetoencephalography (MEG)
Functional near-infrared spectroscopy (fNIRS)

What can we find out by exploring the activities in the brain that entertainment stimulates? Putting aside the Huxleyan implications echoed in Neil Postman‘s Amusing Ourselves to Death, could these techniques suggest ways entertainment might enhance education? Or ways to enhance the development of economic literacies? How about political discourse? How many people now get their civic information from Jon Stewart on Comedy Central’s Daily Show? Like the textbook authors, society is somewhat dismissive of entertainment, even though its consumption habits may suggest otherwise. Perhaps our society could use a little more “gamification,” a little more entrainment, a bit more challenge in our informational practices.

A good workable definition of entertainment is at Mashable. If I had to write this over, I would start with that. Or I may just write a sequel.

Citation APA (7th Edition)

Pennings, A.J. (2013, Oct 30). What is Entertainment? apennings.com https://apennings.com/meaningful_play/what-is-entertainment/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012 he was on the faculty of New York University, where he taught comparative political economy, digital economics, and traditional macroeconomics. He also taught in the Digital Media MBA program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

How to Use Facebook with an Online Course

Posted on | August 21, 2013 | No Comments

I taught an online course for New York University (NYU) called “New Technologies in Advertising and Public Relations” this summer that was totally asynchronous – meaning that we didn’t meet in person or online at the same time. I had taught the class at the Washington Square campus in Manhattan when I was a faculty member there, but recently began to teach it online from Austin. In order to increase participation and interaction among the students, I used Facebook (FB) to provide a forum for discussion and sharing information and links related to the class.

In this post I want to share some important points about using Facebook with a class, and particularly how to protect student privacy. My experience with online education is that the devil is in the details. Online applications vary immensely, and some work better than others. I was having trouble developing online interaction and participation. I won’t mention the system NYU uses as it’s several years old and probably due for an upgrade or replacement soon. We’ll call it Brand X.

(Update: NYU has recently changed to a new open source system called NYU Classes that is based on the Sakai learning management system. So far it seems much better than the previous system, Epsilen. NYU has even used NYU Classes/Sakai to replace Blackboard as the general learning management system we were required to use for all classes, onsite and online.)

The Brand X platform, which is not Blackboard, does have some good features, including online lessons, the electronic syllabus, dropboxes, and even a collaborative wiki that I like, but the forum/discussion facilities are particularly poor. NYU’s application has a discussion component, but I didn’t like it, primarily because it took nine clicks to get to a discussion topic while with FB it took only two. Also, FB clicks were immediate while the others were slow, and it was tempting to switch to another tab in between each click. In other words, sometimes I didn’t get back before the Brand X application timed out.

FB Groups vs. Page

One of the first things to do if you want to use Facebook is to set up the group. I first made the mistake of creating a “page” for the course. What I really needed to do was set up a “group” dedicated to the class. Note that I have two listings for the course Lrms1-Dc0954 New Tech Ads and PR in the image next to this text. The one with the image is the Facebook “page” that is open to everyone, while the one with just an icon is a closed group, which is most appropriate for maintaining privacy for the class. It hides the group members as well as their postings, comments, and likes – provided, of course, that the settings are on the highest privacy levels. Click on the radio button for Secret after everyone has joined the group. If you make it secret too early, the students will not be able to search for it, so start off with the “closed” setting.

Facebook settings

How do you grade the participation? I asked each student to post two links and make 10 comments every week. The posts needed to be related to the week’s readings, such as the modules on Search, Online Display Ads, Viral Communications, Online Video, and Mobile. These are contemporary topics, and we experienced almost a type of choice overload with so many blogs, magazine articles, and types of links available. Still, students uncovered, posted, and discussed interesting articles, such as this controversy about LinkedIn taking down ads for female engineers because it thought the images used did not represent actual female engineers.

Most students did not participate nearly as much as I projected. In retrospect, the comments are harder than the posts, as a student would have to read the posts, grasp the significance of the article, and be willing to make a “public” statement. One way to address this would be for the instructor to do all the posting, give context to the week’s discussion, and then see what the class could find. This is likely to be part of the solution, but students should have the opportunity to share links they think are interesting and pose their own questions. In retrospect, the participation rates probably mirror actual classroom dynamics, with some students dominating the discussion. But as the asynchronous class does not have an actual meeting environment, I think it’s appropriate to push the students to post and comment. “Likes,” however, are probably not a useful metric, as they are too easy to click without real engagement.

Calculating their contributions involved searching for the name of each student (see the search icon in the left image) and, at this time, tallying their posts and comments manually.
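For a larger class, that tally could be scripted. Below is a minimal sketch in Python that assumes a hypothetical activity.csv log (Facebook does not provide such an export for groups, so it would have to be compiled by hand or with a third-party tool) with student, week, and type columns; the two-post/ten-comment thresholds mirror the requirements described above.

```python
# Tally weekly Facebook group participation per student from a hypothetical CSV log.
# Expected columns: student, week, type  (type is "post" or "comment").
import csv
from collections import defaultdict

POSTS_REQUIRED = 2      # links each student should post per week
COMMENTS_REQUIRED = 10  # comments each student should make per week

def tally(path="activity.csv"):
    # counts[student][week] -> {"post": n, "comment": m}
    counts = defaultdict(lambda: defaultdict(lambda: {"post": 0, "comment": 0}))
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["type"] in ("post", "comment"):
                counts[row["student"]][row["week"]][row["type"]] += 1
    return counts

def report(counts):
    for student, weeks in sorted(counts.items()):
        for week, c in sorted(weeks.items()):
            met = c["post"] >= POSTS_REQUIRED and c["comment"] >= COMMENTS_REQUIRED
            print(f"{student:20} week {week}: {c['post']} posts, "
                  f"{c['comment']} comments {'OK' if met else 'short'}")

if __name__ == "__main__":
    report(tally())
```

The same counts could just as easily be kept in a spreadsheet; the point is simply to turn the manual search-and-count routine into a repeatable weekly report.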

I asked students to email me their comments about Facebook participation. Comments were generally favorable, but they found it hard to keep up with the demands specified above. Those who had taken other online courses with Brand X were particularly positive but pointed out that the other teachers did not require quite as much activity. Students usually complain about workloads and try to get them reduced, but in the spirit of fairness I will examine their concerns in more detail. On the positive side, using Facebook allowed many students to participate easily with smartphones. This is particularly useful for students who are working and are restricted from Facebook activity at work. At least Brand X looks sufficiently geeky that it might not attract attention or be outright banned.

Other concerns arise when you have students from all over the world. Although the privacy settings are pretty good, some students, particularly those from authoritarian countries, have significant concerns about their posts on FB being monitored. With the settings on Secret, most concerns about privacy were alleviated, but certainly the comfort of all students should be considered, especially foreign students. Although the course was mainly about technology, advertising and particularly public relations can bring up touchy topics.

I expect to continue to use Facebook to enhance participation and bring web sources into the course experience. I am using it this semester for an MBA class on digital convergence and innovation that meets in a regular classroom, primarily for the students to dig up related articles. If you want more information or have comments, please email me at anthony.pennings@gmail.com with the subject heading “Facebook Course”.

© ALL RIGHTS RESERVED


Anthony J. Pennings, PhD recently joined the Digital Media Management program at St. Edwards University in Austin TX, after ten years on the faculty of New York University.

The Network is the Computer – UNIX and the SUN (Stanford University Network) Workstation

Posted on | July 1, 2013 | No Comments

When computers began using “third generation” integrated circuit technology, processing speeds took a giant leap forward, and new computer languages and applications were enabled. From the time-sharing initiative by General Electric in the early sixties came BASIC (Beginners All-purpose Symbolic Instruction Code) that allowed a new class of non-engineers and scientists to program the computer (including a high school freshman named Bill Gates in 1969). Bell Labs was able to regroup from its early software fiascoes when two typical 1960s computer gurus, long-haired and bearded, created UNIX.

Ken Thompson and Dennis M. Ritchie rethought the software needed for time-sharing systems and developed a more elegant and streamlined solution. Written in the new “C” language, the UNIX operating system became widely available in the mid-1970s, especially after more powerful versions were created by the University of California at Berkeley, Sun Microsystems, and Xerox. Unix was a key software innovation that enabled data networking to take off, and with it went worldwide finance and the Internet.

The Unix operating system, developed originally at AT&T’s Bell Labs, was first made to run in 1969 by Thompson and Ritchie. Both had worked on the pioneering time-sharing project Multics, but when AT&T pulled out of the project, they decided to use their institutional freedom to pursue their own ideas for an operating system. They named their new OS “Unix” as a pun on Multics and strove over the next few years to develop an OS that was more streamlined and could run on multiple computers.

The spread of minicomputers by vendors such as DEC, Data General, Prime Computers, and Scientific Data Systems made Unix attractive. Users were frustrated with the cumbersome and proprietary software developed for mainframes. As with the transistor before it, AT&T decided to disperse its computer operating system cheaply to avoid government anti-trust action. Bell Labs allowed the Unix software to be distributed to universities and other computer users for a nominal fee, and by the late 1970s its diffusion was increasing rapidly.[1]

In the early 1980s, SUN (Stanford University Network) Microsystems was incorporated by three Stanford alumni to provide a new type of computer system. The Sun-1 workstation was much smaller than mainframes and minicomputers but more powerful than the increasingly popular personal computers. It would have a major impact, especially on Wall Street, which was ripe for new digital technologies that could empower traders eager to use new calculative methods to enhance their trading profitability. Two innovations were crucial to the Stanford networking advances – the Unix operating system and the Alohanet-inspired Ethernet.

Through military funding, a new version of Unix was developed at the University of California at Berkeley that made its source code available, was cheap to license, and worked with many types of computers. UNIX 4.1BSD (Berkeley Software Distribution) was created when the principal investigator, Bob Fabry, and the project’s lead programmer, Bill Joy, received additional ARPA funds in 1981 to create a new version that supported the Internet protocols. The Berkeley version was designed to maximize performance over smaller Ethernet networks like those on a financial trading floor or a college campus. Berkeley then distributed the software to universities around the country for a small licensing fee.

The other factor was the Alto Aloha Network, named after the University of Hawaii’s wireless Alohanet system. The Alohanet pioneered random-access packet broadcasting for data communications and inspired companies like Cisco Systems and Sun Microsystems to develop networking solutions. During the late 1970s, Alto computers developed at Xerox PARC were donated to Stanford University by the giant copier company. They were connected with local area networking technologies that inventor Bob Metcalfe was calling “Ethernet,” after the hypothetical medium 19th-century scientists once believed essential to carry the movement of light.

Metcalfe worked on the original ARPANET in Boston and traveled to Hawaii for several months before taking a job at Xerox PARC. Inspired by the Alohanet, he began working on networking when he got to PARC. Unlike the seminal University of Hawaii project that used radio to transmit data packets between the islands, Ethernet connected computers through cables. Metcalfe worked with David Boggs and the inventors of the Alto (Thacker and Lampson) to create a computer card for the Alto computers and soon they were experimenting with a high-speed local area network (LAN). Later they used Ethernet to connect Altos throughout the Stanford campus.

The Sun concept was based on the idea that the “network was the computer.” It started with a prototype 32-bit “workstation” (as opposed to a personal computer) built by Ph.D. student Andy Bechtolsheim, who originally wanted to create a computer that would meet the needs of faculty and students on the Stanford campus. Bechtolsheim based his computer on the UNIX operating system and envisioned the machines linked by Ethernet connections. Bill Joy, who was instrumental in the Berkeley revision of the UNIX code, also joined the Sun team.

The organization was brought together by Vinod Khosla, originally from India and a Stanford MBA graduate. Khosla was impressed with Bechtolsheim’s prototype and convinced him to go into business with him. He also recruited his friend and former roommate Scott McNealy, who was incidentally a former high school classmate of Microsoft’s Steve Ballmer. After raising $4 million in venture capital, Khosla and McNealy incorporated Sun Microsystems in February 1982. They marketed their first workstation, the Sun-1, later that summer.[2] The founders got together for this informative panel:

Sun Microsystems grew quickly, reaching sales of $9 million in 1983 and $39 million in 1984, largely because of McNealy’s manufacturing expertise.[3] In five years it would become a Fortune 500 company. Sun positioned its products to be cheaper than minicomputers and more sophisticated and expensive than PCs. The key was networking and the strength of the UNIX OS, with its ability to work with TCP/IP. A leader in what would be called the “open systems” movement, Sun used high-quality, off-the-shelf components, openly licensed its key technologies, and developed strong relationships with key software developers. It used equipment like Motorola’s 68000 processor, Intel’s Multibus, and the new UNIX. Bill Joy’s version of Unix became the major operating system of the Internet’s hosts throughout the world, especially after the military ordered the integration of the TCP/IP protocols in all hosts throughout the ARPANET in 1982.

The Sun Workstation quickly emerged as a potent computing platform for academic institutions as well as companies in Hollywood and engineers at NASA. But nowhere would the impact be as dramatic as it would be on Wall Street and throughout the financial markets of the world that were rapidly deregulating.

Sun’s revenues would grow to $15.7 billion by 2000, and its stock would reach $130 before the dot-com crash. It would also become the number one supplier of open network computing technologies around the world and the top Unix vendor in the banking, global trading, RDBMS, and securities markets. Sun was also responsible for developing Java, still one of the most popular programming languages, and distributing it for free.

Sun was sold to Oracle Corporation in 2010.

Notes

[1] Information on Unix from Campbell-Kelly, M. and Aspray, W. (1996) Computer: A History of the Information Machine. Basic Books. pp. 219-222.
[2] Information on Sun Microsystems from Segaller’s NERDS 2.0.1, starting on p. 229.
[3] Sales figures from an article in the BUSINESS WEEK website archives, accessed on December 8, 2001. “Scott McNealy’s Rising Sun” was originally published in the magazine on January 22, 1996. According to the article, McNealy had a lot of exposure to manufacturing. His father was a Vice-Chairman of American Motors Corp, and after failing to get into both Harvard’s and Stanford’s business schools, he took a job as a foreman for Rockwell International.

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Google Fiber in Austin

Posted on | May 5, 2013 | No Comments


Austin, Texas is getting Google Fiber, the one-gigabit digital broadband service from the advertising giant. With connections to individual homes and businesses transmitting up to 1,000 Megabits per second, it is from 60 to 100 times faster than current services. What makes Google Fiber unique is that it uses digital signals moving through glass conduits at the speed of light – speeds much, much faster than the copper lines traditionally used for telephone or the coaxial cables that became the staple for broadcasting cable television and later for connecting cable modems. Also, Google connects its fiber optic cabling directly to the home (FTTH) rather than just fiber to the neighborhood or even fiber to the curb in front of your house. Google Fiber will offer digital TV as well as a host of digital Internet services – to selected neighborhoods.[1]

Google has already begun rolling out its glass channels in Kansas, creating what have been nicknamed “fiberhoods.” On March 30, 2011, Kansas City was chosen over 1,000 US metropolitan applicants competing to be the first for the new service. Google also recently signed a deal to buy a municipal fiber-optic system in Provo, Utah that originally cost $39 million to build. Google is buying it for one dollar because the system is costing the city money. If the Kansas City model is followed, Google Fiber’s pricing structure will include free 300K Internet (with a construction fee), 1 Gbps Internet ($70), and 1 Gbps Internet plus TV ($120).


Telecommunications systems have lagged other technological innovations, particularly in implementation. Fiber optic communication was developed in the 1970s, and the first systems were installed by the mid-1980s, including Sprint’s nationwide backbone network. However, fiber is expensive to build out, especially through the “last mile” into homes and businesses. Landlines have lost some of their attraction as investment has shifted to mobile due to the demand for 4G services to feed smartphones and tablets. Verizon has scaled back its FiOS fiber-to-the-home (FTTH) services despite high consumer satisfaction, claiming that “Wall Street” punished it for expanding the service.

Telecom incumbents use a variety of competitive strategies to construct barriers to entry, including customer captivity through long-term contracts, strong lobbying of government regulators, and extensive investments in fixed costs that are difficult for any start-up to match. Google, though, is not just any start-up. One of Google’s major competitive advantages is its investment in fixed-cost capital assets. This includes data centers, proprietary advertising and “big data” technology, as well as high-speed telecommunications – and with $50 billion in annual revenues, its ability to invest and build is extensive.

Fiber has been an important part of Google’s strategy to connect searchers to its data servers faster so that it feeds its primary revenue source – search advertising. Google wants to make the process incredibly fast to hold off competitors Microsoft and Yahoo! Recognizing this need, it began purchasing fiber optic cabling in the wake of the “telecom crash” in 2002. Some of it was intercity cabling from Enron’s misguided broadband strategy, and some of it was undersea capacity from the now-defunct international carrier Global Crossing. Much of it was “dark fiber” that would allow Google to attach its own laser transmission and termination technology. Toward this end, Google began buying up key patents related to optical communications that are going into proprietary fiber optic technology. Fiber is so important to the Google strategy that the company spent almost $2 billion on the old Port Authority building at 111 Eighth Avenue in Manhattan because it sits on top of a hub of fiber optic arteries that connect to the surrounding portions of New York City.

Texas assumed a national leadership role in 2005 when it centralized its cable franchising regulations, making it easier for companies such as San Antonio-based AT&T and Verizon to expand their digital video and broadband services in the state.[2] The proliferation of Internet Protocol Television (IPTV), as it was called at the time, was being stalled because cable TV had existed under monopoly conditions and was subject to restrictive regulations and demands by local municipalities. In 2005, Rep. Phil King was the House sponsor of Texas Senate Bill 5, which encouraged competition by allowing new entrants to obtain state-issued, statewide cable and video franchises. No longer would exclusive franchises be granted. The Bill was signed by Governor Perry on September 7, 2005, promising to bring better services and economic benefits to Texas as well as to serve as a model for other states.

So will Google Fiber influence economic development in the Austin area? A number of questions are worth raising. Will it attract new companies to Austin? Will it help new and existing firms become more efficient and productive? Can it help increase the rate of innovation needed to compete with other geographical areas? Can it spur competition in the digital services field and bring down prices for 1 Gbps broadband? How will it influence Austin’s advantages in entertainment, government services, and its growing legion of high-tech companies?

One question raised in Forbes magazine asks, “What obligations do we have to provide basic services equally, regardless of income and social circumstances?” In “Will Poor People Get Google Fiber?” John McQuaid asks whether the Google model of broadband diffusion is the right one or whether we should return to the telecommunications policy that brought us postal service and the telephone – universal service.

In the meantime, we will assess the Google model of rolling out digital services and any associated socio-economic development in the Lone Star State’s capital city, particularly in its cultural and creative industries.[3]

Notes

[1] The Google Fiber announcement to build out in Austin was made on Tuesday, April 12, 2013.
[2] I was following the Texas regulatory developments as part of a project at NYU on broadband services and economic development, part of which was written up in a paper entitled “The Telco’s Brave New World: IPTV and the ‘Synthetic Worlds’ of Multiplayer Online Games” for the Pacific Telecommunications Council Conference and Proceedings, January 15-18, 2005, Honolulu, Hawaii.
[3] The economic power of the creative industries has been calculated by the U.S. Bureau of Economic Analysis as part of a general revision of what produces economic growth.


© ALL RIGHTS RESERVED

Anthony J. Pennings, PhD is the Professor of Global Media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He also taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii during the 1990s.

Three Levels of Digital Media Metrics

Posted on | April 17, 2013 | No Comments

As the web transforms both user and institutional practices across the digital media sphere, the search for useful metrics intensifies. Traditional techniques for measuring eyeballs and eardrums for television and radio are insufficient in an environment where digital technologies offer so much more in terms of interaction and transaction capabilities. Social media has increasingly embedded itself into the fabric of both profit and nonprofit organizations as senior management recognition of its importance has led to increased budget allocations for staff, technology, data collection and advanced analytics. As the Internet becomes more complex, mobile, and socially-oriented – understanding how your digital media is doing and contributing to organizational objectives becomes more complicated – but also extremely valuable.

So what are the metrics we should be planning to use and looking to analyze? This is not a simple question to answer, but I want to approach it by referencing two books that I used in my undergraduate classes (Social Networking and Digital Analytics): Social Media Metrics Secrets by John Lovett and Digital Impact: The Two Secrets to Online Marketing Success by Vipin Mayar and Geoff Ramsey. (I guess we are in the unlocking-secrets phase of digital media.) Both of these books construct a hierarchy of digital media metrics as well as address many other issues, such as metrics for mobile, search, and online video. I draw on both as I grapple with my own understanding of a priority system for measuring digital media.

These metrics can be roughly organized into three levels:

1) At a fundamental level, social media metrics involve counting simple, short-term actions like check-ins, tweets, likes, impressions, visits, numbers of followers, click-through rates, etc. They measure the immediate impact of an action or a campaign and can provide some simple but useful diagnostic numbers to gauge effectiveness. In general they provide more tactical information and can also include less quantifiable involvement such as reviews and feedback.

John Lovett in the video below warns against an overemphasis on counting metrics and encourages collecting and evaluating metrics from a more strategic approach.[1]

2) At another level you can start to determine and measure more strategic calculations that provide benchmark numbers for future analysis or for evaluating a campaign in progress. These strategic measures provide more context for your numbers and give more insights into the actions of your audience. Key Performance Indicators (KPIs) are metrics that help identify and support the people who advocate for your brand, share your content and widgets, and influence others in your key target markets.[2] Key strategic metrics include engagement, conversation volume, sentiment ratios, conversion rates, end action rates, and brand perception lifts.

3) At a “higher” level are the metrics that relate to organizational sustainability. These include financial metrics that measure return on investment (ROI) and efficiencies such as cost per fan/tweet/post/vote, etc.[3] They connect to key concerns about the financial and legal risks involved in digital media activities and acknowledge the importance of social media across the range of corporate or non-profit organizational objectives that involve legal, human resources, as well as advertising and marketing activities. They are of particular concern to upper management who want to see the connections from social media to product development, service innovation, policy changes, market share, election votes and/or stock market value.
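To make the three levels concrete, here is a small illustrative sketch in Python. All of the figures are hypothetical, and the formulas follow common industry definitions rather than anything prescribed by Lovett or Mayar and Ramsey.

```python
# Illustrative calculations for the three levels of digital media metrics.
# All figures are hypothetical, for a single campaign week.

# Level 1: simple counting metrics
impressions = 120_000
clicks = 1_800
likes, shares, comments = 950, 210, 140
positive_mentions, negative_mentions = 430, 70

# Level 2: strategic ratios that give the counts context
click_through_rate = clicks / impressions                    # CTR
engagement_rate = (likes + shares + comments) / impressions
sentiment_ratio = positive_mentions / (positive_mentions + negative_mentions)
conversions = 220                                             # e.g., newsletter sign-ups
conversion_rate = conversions / clicks

# Level 3: sustainability metrics tied to budgets and revenue
campaign_cost = 4_000.00
revenue_attributed = 9_500.00
cost_per_engagement = campaign_cost / (likes + shares + comments)
roi = (revenue_attributed - campaign_cost) / campaign_cost

print(f"CTR: {click_through_rate:.2%}, engagement rate: {engagement_rate:.2%}")
print(f"sentiment ratio: {sentiment_ratio:.2%}, conversion rate: {conversion_rate:.2%}")
print(f"cost per engagement: ${cost_per_engagement:.2f}, ROI: {roi:.0%}")
```

The point is less the arithmetic than the layering: the level-one counts feed the level-two ratios, which in turn feed the level-three financial measures that upper management cares about.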

Like most analytics, the metrics of digital media require meaningful connection and context to be valuable. Wall Street stock prices became significantly more interesting after Charles Henry Dow and Edward Jones started to chart trends over time in the Dow Jones Industrial Average (DJIA). Likewise, the number of social mentions or tweets becomes more meaningful when tracked over time and perhaps correlated with campaign events. Metrics in general need to be tied to specific goals and objectives to be useful, and not all the results are likely to be tied to bottom-line results.

The three levels of digital and social media metrics mentioned above are part of a process of producing valuable information to understand the effectiveness and success of campaigns, products, and services as well as their contributions to organizational sustainability.

Notes

[1] I highly recommend John Lovett’s (2011) Social Media Metrics Secrets, John Wiley and Sons.
[2] Strategic metrics include both metrics and key performance indicators which Lovett characterizes respectively as the dataflow or “lifeblood” and the “vital signs” of digital analytics such as pulse and temperature.
[3] Another important book I use is Digital Impact: The Two Secrets to Online Marketing Success by Vipin Mayar and Geoff Ramsey. It has a useful perspective on financial metrics and particularly ROI.


© ALL RIGHTS RESERVED

Anthony J. Pennings, PhD is the Professor of Global Media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He also taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii in the 1990s.

New Developments in GPS and Geo-Location for Mobile Technology

Posted on | March 25, 2013 | No Comments

The ubiquity of mobile devices has focused renewed attention on the Global Positioning Satellite System (GPS), the configuration of space-based vehicles that is used to provide location data to users through their hand-carried mobile phones and tablets. GPS technologies were developed for use in aircraft, land vehicles, and ships. More recently, they have become crucial technologies for a wide variety of mobile devices. Global positioning has been primarily used for location tracking and turn-by-turn direction services, but what has become extraordinary are the new value-added services that continue to be built on the basic capabilities of this space-based system that runs 24/7, through all weather conditions, and can reach an unlimited number of users.

Why GPS? While locations for mobile technology can be determined by using cell towers, this data is less accurate than GPS. Approximate positions can be determined from cell towers based on the angle of approach, the strength of signals, and the time it takes for the signal to reach various towers. However, mountains and other physical obstructions such as forests and buildings can interfere with location determination. These impediments can also interfere with GPS signals, but more options exist, as only three or four of the 27 satellites are needed to determine a fairly accurate position.

The United States started the GPS program in the 1970s after the Cold War’s “Space Race” refined satellite and rocket launching capabilities to make them efficient and reliable. GPS was originally developed by the military and proved to be decisive in the first Gulf War when it enabled Allied troops to bypass Iraqi fortifications by venturing far into the featureless desert to outflank them. It has also been used for search and rescue operations and to provide targeting information and missile guidance as well as mapping strategic areas for facilities management and military engagement.

The basic GPS infrastructure consists of three major segments: the space segment (SS) consisting of 27 satellites that orbit the planet every 12 hours and transmit time-encoded information; the control segment (CS) that monitors and directs the satellites from the ground; and a user segment (US) that picks up signals from the system and produces useful information. The GPS satellites broadcast signals from space that are ‘triangulated’ by the user devices, although the more satellite signals that are accessed, the better the coordinate information.
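As a rough illustration of what the user segment does with those signals, the sketch below solves for a receiver position with an iterative least-squares fit to pseudoranges. The satellite coordinates and pseudoranges are made up for the example; a real receiver also solves for its own clock bias, which is why four satellites are normally needed for a full three-dimensional fix.

```python
# Sketch of the user-segment position solution: Gauss-Newton least squares on
# pseudoranges from four satellites. Satellite ECEF coordinates (meters) and
# pseudoranges below are hypothetical values chosen only for illustration.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

sats = np.array([                      # hypothetical satellite positions
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
pseudoranges = np.array([21_110e3, 21_205e3, 20_950e3, 21_300e3])

# Unknowns: receiver x, y, z and clock bias expressed as a distance (c * dt).
x = np.zeros(4)
for _ in range(10):
    ranges = np.linalg.norm(sats - x[:3], axis=1)   # geometric distances
    predicted = ranges + x[3]                       # add the clock-bias term
    residuals = pseudoranges - predicted
    # Jacobian: unit vectors from satellites toward the receiver, plus 1 for bias
    H = np.hstack([-(sats - x[:3]) / ranges[:, None], np.ones((len(sats), 1))])
    dx, *_ = np.linalg.lstsq(H, residuals, rcond=None)
    x += dx
    if np.linalg.norm(dx) < 1e-4:                   # converged
        break

print("Estimated receiver position (m):", x[:3])
print("Estimated clock bias (microseconds):", x[3] / C * 1e6)
```

With more than four satellites in view, the same least-squares step simply uses more rows, which is why accessing additional signals improves the coordinate information.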

Devices such as automobile GPS systems and GPS dog-tracking collars produce three-dimensional location information (latitude, longitude, and altitude) as well as the current time from the transmitted signals. Assisted GPS, however, which is used with Apple’s iOS devices such as the iPhone and iPad, combines standard GPS data with information derived from cellular towers and known Wi-Fi hotspots for faster and more accurate readings.

The United States’ Federal Communications Commission (FCC) required all phone manufacturers, service providers, and PSAPs (Public Safety Answering Points) to comply with specifications for its Enhanced 911 (E911) program by the end of 2005. This required all cell phones to transmit their phone number and location when dialing 911. More recently, the FCC strengthened 911 requirements for all mobile devices and adopted new location accuracy rules for wireless carriers.

While GPS is currently the dominant provider of position data, other countries have been working on their own global positioning systems. Europe is testing its Galileo system, and China is working on the BeiDou system. The US has liberally allowed the use of their GPS system around the world and has voiced objections to these alternatives, as they might be used for military purposes against US interests.

The Russian GLONASS, an acronym for GLObalnaya Navigatsionnaya Sputnikovaya Sistema, is the most immediate complement/competitor to the US GPS. Development of GLONASS began in response to GPS in the mid-1970s during the Cold War. It was given new impetus during the presidency of Vladimir Putin, who substantially increased funding for the Russian Federal Space Agency. That did not stop three GLONASS-M satellites from falling into the Pacific Ocean in December of 2010, forcing the Russian government to use backup satellites. GLONASS is now operational and both complements and provides an alternative to the United States’ GPS.

Mobile devices have started to use the Russian GLONASS system for improved accuracy. Qualcomm was one of the first to develop chipsets that boost positioning performance with GLONASS signals. A receiver that combines GPS with GLONASS can track not only the frequencies of all 27 GPS satellites but also the signals from the 24 GLONASS positioning satellites. Together they provide global coverage and superior precision.

The beauty of this government-developed and managed infrastructure is that it has enabled a wide variety of user segment devices that transform the satellite signals into productive information. GPS technology has unleashed a wave of product innovation that has become a somewhat unheralded part of the modern technology economy. The satellites emit a set of continuous navigation signals, while the user segment equipment, with its embedded microprocessor chips and display technology, provides the site of creativity. The result has been a wave of user segment devices that allow a span of applications from vehicle fleet management and stolen car recovery to the tracking of cheating spouses and Alzheimer’s patients.


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

  • Referencing this Material

    Copyrights apply to all materials on this blog but fair use conditions allow limited use of ideas and quotations. Please cite the permalinks of the articles/posts.
    Citing a post in APA style would look like:
    Pennings, A. (2015, April 17). Diffusion and the Five Characteristics of Innovation Adoption. Retrieved from https://apennings.com/characteristics-of-digital-media/diffusion-and-the-five-characteristics-of-innovation-adoption/
    MLA style citation would look like: "Diffusion and the Five Characteristics of Innovation Adoption." Anthony J. Pennings, PhD. Web. 18 June 2015. The date would be the day you accessed the information. View the Writing Criteria link at the top of this page to link to an online APA reference manual.

  • About Me

    Professor at State University of New York (SUNY) Korea since 2016. Moved to Austin, Texas in August 2012 to join the Digital Media Management program at St. Edwards University. Spent the previous decade on the faculty at New York University teaching and researching information systems, digital economics, and strategic communications.

    You can reach me at:

    apennings70@gmail.com
    anthony.pennings@sunykorea.ac.kr



  • Disclaimer

    The opinions expressed here do not necessarily reflect the views of my employers, past or present.