AI and the Rise of Networked Robotics
Posted on June 22, 2024
The 2004 movie I, Robot was quite prescient. Directed by Alex Proyas and named after the short story collection by science fiction legend Isaac Asimov, the cyberpunkish tale, set in the year 2035, revolves around a policeman played by Will Smith. He is haunted by memories of being saved by a robot from drowning in a river after a car crash. His angst comes from watching a young girl from the other car drown as he is being saved; the robot calculated that the girl could not be saved, but the policeman could. Consequently, the policeman develops a prejudice against robots and a hatred of them that drives the movie’s narrative.
What was particularly striking about the movie was a then relatively new vision of robots as networked and, in this case, connected subjects of a cloud-based artificial intelligence (AI) named VIKI (Virtual Interactive Kinetic Intelligence). VIKI is the central computer of U.S. Robotics (USR), a major manufacturer of robots. One of its newest models is the humanoid NS-5, equipped with advanced artificial intelligence and speech recognition that allows it to communicate fluently and naturally with humans and with the central AI. “She” has been communicating with the NS-5s and sending them software updates over their persistent network connection, outside the oversight of USR management.
In this post, I examine the transition from autonomous robotics to networked, AI-enhanced robotics by revisiting Michio Kaku’s Physics of the Future (2011). I use the first two chapters, “Future of the Computer: Mind over Matter” and “Future of AI: Rise of the Machines,” as part of my Introduction to Science, Technology, and Society course. Both chapters address robotics and are insightful in many ways, but they lack a focus on networked intelligence. The book was published on the verge of the AI and robotics explosion driven by crowdsourcing, web scraping, and other networking techniques that can gather information for machine learning (ML).
The book tends to treat robotics and even AI as autonomous, stand-alone systems. A primary focus is ASIMO (Advanced Step in Innovative Mobility), Honda’s humanoid robot, which was recently discontinued after a storied history. ASIMO was animated to be very lifelike, but its actions were entirely prescribed by its programmers.
Beyond Turing
Kaku continues with concerns about AI’s common sense and consciousness issues, including discussions about reverse engineering animal and human brains to find ways to increase computerized intelligence. Below I recount some of Kaku’s important observations about AI and robotics, and go on to stress the importance of networked AI for robotics and the potential for the disruption of human labor practices in population-challenged societies.
One of the first distinctions Kaku made is between the traditional computing model based on Alan Turing’s conception of the general-purpose computer (input, central processor, output) and the learning models that characterize AI. NYU’s DARPA-funded LAGR project, for example, was guided by Hebb’s rule: whenever a correct decision is made, the network is reinforced.
Traditional computing is designed around developing a program to take data in, perform some function on the data, and output a result. LAGR’s (Learning Applied to Ground Robots) convolutional neural networks (CNNs) were instead trained to learn patterns and make decisions or predictions based on incoming data. Unlike the Turing computing model, which focuses on the theoretical aspects of computation, AI aimed to develop practical systems that can exhibit intelligent behavior and adapt to new situations.
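Hebb’s rule, reinforcing the connections that contribute to a firing output, can be sketched in a few lines. This is an illustrative toy, not LAGR’s actual training code; the two-input linear unit, initial weights, and learning rate are invented for the example.

```python
def activate(w, x):
    # Linear unit with a step output: "fire" (1) if the weighted sum is positive
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def hebbian_update(w, x, y, lr=0.1):
    # Hebb's rule: strengthen each weight in proportion to the joint
    # activity of its input (x) and the output it produced (y)
    return [wi + lr * xi * y for wi, xi in zip(w, x)]

# Repeatedly present the pattern [1, 0]; the weight on the active input
# is reinforced each time it fires, while the idle input's weight stays put.
w = [0.05, 0.05]
for _ in range(10):
    y = activate(w, [1, 0])
    w = hebbian_update(w, [1, 0], y)

print(w)  # the first weight has grown; the second is unchanged
```

The point of the sketch is the asymmetry: the connection that participated in the decision is the one that gets stronger, which is the learning dynamic the LAGR project exploited.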
Pattern Recognition and Machine Learning
Kaku pointed to two problems with AI and robotics: “common sense” and pattern recognition. Both are needed for automated tasks such as Full Self-Driving (FSD). He predicted common sense would be solved with the “brute force” of computing power and by the development of an “encyclopedia of thought” through endeavors such as CYC, a long-term AI project by Douglas B. Lenat, who founded Cycorp, Inc. CYC sought to capture common sense by assembling a comprehensive knowledge base of basic ontological concepts and rules. The Austin-based company focused on the kind of implicit knowledge humans take for granted, like how to walk or ride a bicycle. CYC eventually developed a powerful reasoning engine and natural language interfaces for enterprise applications such as medical services.
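The mechanics of a rule-based knowledge base like CYC’s can be suggested with a miniature forward-chaining reasoner. The facts and rules below are invented for illustration; CYC’s actual engine and its CycL representation language are far richer.

```python
def forward_chain(facts, rules):
    # Tiny forward-chaining reasoner: keep applying rules whose premises
    # are all satisfied until no new conclusions can be derived
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# A toy, CYC-flavored knowledge base of common-sense rules (hypothetical)
rules = [
    ({"is_liquid(water)"}, "can_spill(water)"),
    ({"can_spill(water)", "on_table(water)"}, "risk_wet(table)"),
]
facts = forward_chain({"is_liquid(water)", "on_table(water)"}, rules)
print(sorted(facts))
```

Chaining two such rules together is a microcosm of the “encyclopedia of thought” idea: explicit rules combine to yield conclusions no single rule states.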
Kaku went to MIT to explore the challenge of pattern recognition. Tomaso Poggio’s lab at MIT researched “immediate recognition,” where an AI must quickly recognize a branch falling or a cat crossing the street. The goal was the ability to recognize an object instantly, even before registering it in conscious awareness. That ability was a survival trait for early humans as hunters: life-and-death decisions are often made in milliseconds, and any AI driving our cars or other life-critical technology needs to operate within that timeframe. With some trepidation, Kaku recounts how the machine consistently scored higher than a human (including him) on a specific vision recognition test.
AI made significant advances in pattern recognition by developing and applying machine learning techniques roughly categorized as supervised, unsupervised, and reinforcement learning. These are, briefly: learning from labeled data to make predictions, identifying patterns in unlabeled data, and learning to make decisions through rewards and penalties in an interactive environment. In supervised learning, labeled data “supervises” the machine toward the desired output. Unsupervised learning is useful when patterns must be found in data without labels. Reinforcement learning is closest to human learning: the algorithm interacts with its environment and receives a positive or negative reward.
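The three paradigms can be caricatured in a few lines each. These are deliberately minimal toys, a mean-threshold “fit,” a one-step clustering, and an epsilon-greedy bandit, chosen to show the shape of each paradigm rather than any production algorithm.

```python
import random
random.seed(0)

# Supervised: learn a decision threshold from labeled examples
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
threshold = sum(x for x, _ in data) / len(data)  # crude fit: mean of inputs
predict = lambda x: 1 if x > threshold else 0

# Unsupervised: split unlabeled points into two groups (one k-means-style step)
points = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]
c1, c2 = min(points), max(points)
groups = [0 if abs(p - c1) < abs(p - c2) else 1 for p in points]

# Reinforcement: an epsilon-greedy agent learns which "arm" pays off better
values, counts = [0.0, 0.0], [0, 0]
for _ in range(500):
    explore = random.random() < 0.1
    arm = random.randrange(2) if explore else values.index(max(values))
    reward = 1.0 if arm == 1 else 0.2   # arm 1 is secretly the better choice
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]
```

The supervised toy needs labels, the unsupervised one finds structure without them, and the bandit learns only from the rewards its own actions produce, which is the distinction the paragraph above draws.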
The need for labeled data to train machine learning algorithms dates back to the early days of AI research. Researchers in pattern recognition, natural language processing, and computer vision have long relied on manually labeled datasets to develop and evaluate algorithms. Crowdsourcing platforms made obtaining labeled datasets easier, at relatively low cost and with quick turnaround times. Further refinements improved the accuracy, efficiency, speed, and scalability of AI labeling.
Companies and startups emerged to provide AI developers and organizations with data labeling services. These companies employed teams of annotators who manually labeled or annotated data according to specific requirements and guidelines, ensuring high-quality labeled datasets for machine learning applications. Improvements included developing semi-automated labeling tools, active learning algorithms, and methods for handling ambiguous data.
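One such improvement, active learning, can be sketched in miniature: instead of labeling data at random, route the example the model is least confident about to a human annotator first. The model scores and image names below are hypothetical.

```python
def most_uncertain(unlabeled, score):
    # Active-learning step: the example whose predicted probability is
    # closest to 0.5 is the one a human label would clarify the most
    return max(unlabeled, key=lambda x: -abs(score(x) - 0.5))

# Hypothetical model confidence scores for three unlabeled images
scores = {"img_a": 0.97, "img_b": 0.52, "img_c": 0.10}
pick = most_uncertain(scores, scores.get)
print(pick)  # -> "img_b", the borderline case
```

Spending annotator time on borderline cases like this is what lets semi-automated pipelines reach high-quality datasets with far fewer manual labels.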
Poggio’s machine at MIT represents an early example of machine learning and computer vision applied to recognition tasks. Tesla’s Full Self-Driving (FSD) system embodies a modern approach built on machine learning and real-world, networked data collection. Unlike earlier systems that relied on handcrafted features and rule-based algorithms, Tesla’s FSD uses a combination of neural networks, deep learning algorithms, and sensor data (primarily cameras, with radar and ultrasonic sensors in earlier configurations) to enable autonomous driving capabilities, including automated lane-keeping, self-parking, and traffic-aware cruise control. One controversial move is Tesla’s reliance on labeling video pixels from cameras alone, which have become the most cost-effective sensing option.
Tesla’s approach to autonomous driving emphasizes real-world data collection and crowdsourcing, learning from millions of miles of driving data collected over the network from its fleet of vehicles. This information is used to train and refine the FSD system’s algorithms, which still face challenges related to safety, reliability, regulatory approval, and edge cases. Tesla continues to leverage machine learning to acquire driving knowledge directly from the data and to improve performance over time through continuous training and updates.
Reverse Engineering the Brain
Reverse engineering became a popular concept after Compaq reverse engineered the IBM BIOS in the early 1980s to bypass IBM’s intellectual property protections on its Personal Computer (PC). The movie Paycheck (2003) explored a hypothetical scenario of reverse engineering. MIT’s James DiCarlo describes how reverse engineering the brain can be used to better understand vision, and how convolutional neural networks (CNNs) mimic the human visual system with networks that excel at finding patterns in images to recognize objects.
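The pattern-finding core of a CNN is convolution: a small filter slides over the image and responds strongly wherever the pixels match its pattern. A minimal sketch, using a hand-written vertical-edge filter rather than learned weights:

```python
def convolve2d(image, kernel):
    # Slide the kernel over the image (no padding, stride 1) and record
    # how strongly each patch matches the kernel's pattern
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A tiny image: dark on the left, bright on the right
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

# Vertical-edge filter: responds where brightness jumps left-to-right
kernel = [[-1, 1],
          [-1, 1]]

print(convolve2d(image, kernel))  # strong response only at the edge column
```

In a trained CNN the kernels are learned rather than hand-written, and layers of such filters build up from edges to textures to whole objects, which is the sense in which they mimic the visual cortex.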
Kaku addresses reverse engineering by asking whether AI should proceed by mimicking biological brain development or whether it would be more like James Martin’s “alien intelligence.” Kaku introduced IBM’s Blue Gene computer as a “quarter acre” of rows of jet-black steel cabinets, each rack about 8 feet tall and 15 feet long. Housed at Lawrence Livermore National Laboratory in California, it was capable of a combined speed of 500 trillion operations per second. Kaku visited the site because, he said, he was interested in Blue Gene’s ability to simulate thinking processes. A few years later, Blue Gene was operating at 428 teraflops.
Blue Gene worked toward the capability of a mouse brain, with its 2 million neurons, compared to the roughly 100 billion neurons of an average human. It was a difficult challenge because every neuron connects to many others, forming a dense, interconnected web that takes enormous computing power to replicate. Blue Gene was designed to simulate the firing of the neurons in a mouse brain, which it accomplished, but only for several seconds. It was Dawn, also at Livermore, that in 2007 could simulate an entire rat’s brain, which contains 55 million neurons, far more than the mouse brain. Blue Gene/L ran at a sustained speed of 36.01 teraflops, or trillions of calculations per second.
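What “simulating the firing of neurons” means can be conveyed with a single toy neuron, nowhere near Blue Gene’s scale. The leaky integrate-and-fire model below, with invented parameters, is one of the standard simplifications used in such simulations.

```python
def simulate_neuron(input_current, threshold=1.0, leak=0.9, steps=50):
    # Leaky integrate-and-fire neuron: the membrane potential leaks a
    # little each step, integrates the input, and emits a spike (then
    # resets) whenever it crosses the firing threshold
    v, spikes = 0.0, []
    for t in range(steps):
        v = v * leak + input_current
        if v >= threshold:
            spikes.append(t)
            v = 0.0  # reset after firing
    return spikes

spikes = simulate_neuron(0.3)
print(spikes)  # evenly spaced spike times: the neuron fires rhythmically
```

Simulating a mouse brain means stepping millions of such units, each coupled to thousands of others, which is why even a few seconds of simulated time consumed a supercomputer.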
What is Robotic Consciousness?
Kaku suggests at least three issues be considered when analyzing AI robotic systems. One is self-awareness: does the system recognize itself? Second, can it sense and recognize the environment around it? Boston Dynamics’ robotic “dog” Spot, for example, now uses SLAM (Simultaneous Localization and Mapping) to recognize its surroundings and map its own location.[3] Spot uses 360-degree cameras and LiDAR to sense the surrounding environment in three dimensions. It is being used in industrial environments to detect chemical and fire hazards, and it carries Nvidia chips and a built-in 5G modem so that data from the digital canine can be streamed over the network.
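Full SLAM estimates the map and the robot’s pose jointly; the sketch below shows only the localization half in one dimension, with a known landmark and numbers invented for illustration. The idea is the same: dead-reckoned motion estimates drift, and sensor observations pull them back.

```python
def localize(moves, landmark_at, observed_ranges):
    # Toy 1-D localization: integrate noisy odometry, then blend in a
    # range reading to a known landmark whenever one is available
    x = 0.0
    for move, obs in zip(moves, observed_ranges):
        x += move                    # predict: dead reckoning from odometry
        if obs is not None:          # correct: landmark range pulls us back
            x = 0.5 * x + 0.5 * (landmark_at - obs)   # simple 50/50 blend
    return x

# Odometry overshoots (true step is 1.0, reported 1.1); a landmark at 10.0
# is ranged at steps 5 and 10, where the true positions are 5.0 and 10.0
moves = [1.1] * 10
ranges = [None] * 4 + [5.0] + [None] * 4 + [0.0]
est = localize(moves, 10.0, ranges)
print(est)  # closer to the true 10.0 than raw odometry's 11.0
```

Real systems replace the 50/50 blend with a Kalman or particle filter and track a 3-D pose against a LiDAR point cloud, but the predict-then-correct loop is the heart of what Spot’s SLAM stack does.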
Another is simulating the future and plotting strategy. Can the system model causal relationships? If it recognizes the cat, can it predict the cat’s next actions, including crossing into the street? Finally, can it ask “What if?” Can it develop scenarios that extrapolate into the future, and strategies for achieving a desired outcome?
Kaku and the Singularity
Lastly, Kaku was intrigued with the concept of “singularity.” He traces this idea to his area of expertise, relativistic physics, where the singularity represents a point of extreme gravity, where nothing can escape, not even light. “Singularity” was popularized by the mathematician and computer scientist Vernor Vinge in his 1993 essay “The Coming Technological Singularity.” Vinge argued that the creation of superintelligent AI would surpass human intellectual capacity and mark the end of the human era. The term has since been used by enthusiasts such as Ray Kurzweil, who believes that the exponential growth of Moore’s Law will deliver the needed computing power for the singularity around 2045. He believes that humans will eventually merge with machines, leading to a profound transformation of society.
Kaku is cautious and conservative about the more extreme predictions of the singularity, particularly those that suggest a rapid and uncontrollable explosion of superintelligent machines. He acknowledges that computing power has been growing exponentially but doubts the trend will continue indefinitely. There are also significant challenges to achieving true artificial general intelligence (AGI): replicating or surpassing human intelligence, he argues, involves more than just increasing computational power.
Kaku believes that advancements in AI and related technologies will occur in incremental improvements that will enhance human life but not necessarily lead to a runaway intelligence explosion. Instead of envisioning a future dominated by superintelligent machines, Kaku imagines a more symbiotic relationship between humans and technology. He foresees humans enhancing their own cognitive and physical abilities through biotechnology and AI, leading to a more integrated coexistence.
But once again, he overlooks a networked singularity, one involving interconnected AI systems, distributed intelligence, enhanced human-AI integration, and advanced data networking infrastructure. Could the networked robot become the nexus of such a singularity? An interconnected future of this kind holds immense potential for solving complex global problems and enhancing human capabilities, even as it raises issues of security, privacy, regulation, and social equity.
The Robotic Future
The proliferation of machine learning algorithms and cloud computing platforms since the 2000s accelerated the integration of AI and now robotics with networking technologies. Machine learning models, trained on large datasets, can be deployed and accessed over networked systems, enabling AI-powered applications in areas such as image recognition, natural language processing, and autonomous systems. Cloud computing allows these AI models and robotic machines to be updated, maintained, and scaled efficiently, ensuring widespread access and utilization across various sectors.
The rise of the Internet of Things (IoT) in recent years has further expanded the scope of AI and robot communications at the edges of the network. AI algorithms can now be deployed on networked devices, enabling real-time data processing, analytics, and decision-making in distributed environments. This real-time capability is crucial for applications such as autonomous vehicles, smart cities, and industrial automation, where immediate responses are necessary.
To advance AI-enhanced robots, robust data networking infrastructure is essential. High-speed, low-latency networks facilitate the rapid transmission of large datasets and visual information necessary for training and operating AI and robots. Networking infrastructure supports the integration of AI across various devices and platforms, allowing seamless communication and data sharing.
Cloud-based computing provides the computational power required for sophisticated AI algorithms. It offers scalable resources that can handle the intensive processing demands of AI, from training complex models to deploying them at scale. Cloud platforms also enable collaborative efforts in AI research and development by providing a centralized repository for data and models, fostering innovation and continuous improvement.
The development of AI is deeply intertwined with advancements in robotics in conjunction with data networking, networking infrastructure, and cloud-based computing capabilities. These technological advancements enable the deployment of robotics in real-time applications such as healthcare, finance, and manufacturing by supporting decision-making and enhancing operational efficiency across various sectors. The continued development of AI networking is essential for the ongoing integration and expansion of robotic technologies in our daily lives.
Kaku envisions a future where technology solves major challenges such as disease, poverty, and environmental degradation. He advocates ongoing research and innovation while remaining vigilant about potential risks and unintended consequences, and he emphasizes the importance of a gradual, symbiotic relationship between humans and technology. Kaku highlighted the significance of Isaac Asimov’s Three Laws of Robotics, which are central to the plot of I, Robot, and praised the film for exploring these laws and their potential limitations. The Three Laws are designed to ensure that robots act safely and ethically, but the movie illustrates how they can be overridden in unexpected ways and should not be trusted on their own.
Citation APA (7th Edition)
Pennings, A.J. (2024, Jun 22). AI and the Rise of Networked Robotics. apennings.com https://apennings.com/technologies-of-meaning/the-value-of-science-technology-and-society-studies-sts/
Notes
[1] Javaid, S. (2024, Jan 3). Generative AI Data in 2024: Importance & 7 Methods. AIMultiple: High Tech Use Cases & Tools to Grow Your Business. https://research.aimultiple.com/generative-ai-data/#the-importance-of-private-data-in-generative-ai
[2] Kaku, M. (2011) Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100. New York: Doubleday.
[3] Lee, Y.-M., & Seo, Y.-D. (2009). Vision-based SLAM in augmented/mixed reality. Korea Multimedia Society, 13(3), 13–14.
Note: ChatGPT was used for parts of this post. Multiple prompts were used and parsed.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, teaching broadband policy and ICT for sustainable development. From 2002 to 2012 he was on the faculty of New York University, where he taught digital economics and information systems management. He also taught in the Digital Media MBA program at St. Edward's University in Austin, Texas, where he lives when not in the Republic of Korea.
Tags: Alan Turing > artificial general intelligence (AGI) > CYC > full self-driving (FSD) > I > neural networks > Poggio’s Machine > Singularity