The field of generative artificial intelligence has exploded into the public consciousness, accompanied by a dizzying lexicon of new terms: RAG, prompt engineering, agents, multi-agents, and deep learning are all now common parlance. These concepts are often presented as groundbreaking inventions, sparking a perception that GenAI is a field born of entirely new ideas. However, a critical examination reveals that much of this vocabulary is a clever repackaging of methods and theories that have been foundational to artificial intelligence and computer science for decades. This strategic rebranding, while not entirely without merit, arguably serves to create an aura of novelty that helps justify the massive influx of research funding and investment.
A prime example of this trend lies in the area of multi-agent systems. The concept of autonomous, interacting agents has been a cornerstone of AI research since at least the 1980s. Theoretical frameworks like game theory and practical architectures like BDI (Belief-Desire-Intention) have been meticulously studied and implemented for decades. Platforms such as JADE and languages like KQML (Knowledge Query and Manipulation Language) were developed to facilitate complex interactions between these agents long before the current GenAI boom. Yet, in today's discourse, multi-agent systems are frequently discussed as a fresh frontier, an emergent property of large language models, rather than an established field of study that these new models are now being integrated into. This reframing obscures the rich history and deep theoretical foundations that came before.
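The BDI deliberation cycle mentioned above can be sketched in a few lines. This is an illustrative toy, not the API of JADE or any real agent platform; all names here (the `perceive`/`deliberate`/`act` methods and the `achievable` flag) are hypothetical:

```python
# A minimal sketch of the classic BDI (Belief-Desire-Intention)
# deliberation cycle: revise beliefs, filter desires into intentions,
# then act on the committed intentions.

class BDIAgent:
    def __init__(self):
        self.beliefs = {}     # what the agent currently holds true
        self.desires = []     # goals it would like to achieve
        self.intentions = []  # goals it has committed to pursue

    def perceive(self, percept):
        # Belief revision: fold new observations into the belief base.
        self.beliefs.update(percept)

    def deliberate(self):
        # Option filtering: commit only to desires that current
        # beliefs mark as achievable (achievable by default).
        self.intentions = [d for d in self.desires
                           if self.beliefs.get(d, {}).get("achievable", True)]

    def act(self):
        # Means-ends reasoning, reduced here to one symbolic step
        # per committed intention.
        return [f"pursue:{i}" for i in self.intentions]


agent = BDIAgent()
agent.desires = ["deliver_package", "recharge"]
agent.perceive({"recharge": {"achievable": False}})
agent.deliberate()
print(agent.act())  # only the achievable goal survives filtering
```

Nothing in this loop requires a language model; the same perceive-deliberate-act skeleton has been implemented in agent frameworks since the 1990s.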
Similarly, the highly touted Retrieval-Augmented Generation (RAG) system is essentially a modern fusion of a generative model with a classic information retrieval (IR) engine. The core function of RAG—searching a large corpus of data for relevant information and then using that information to ground a response—is the very definition of what a search engine has been doing for a generation. Information retrieval, with its sophisticated methods for indexing, querying, and ranking documents, has been a mature field for decades, with roots stretching back to the 1950s. While the specific integration of a powerful language model with a knowledge base is an exciting development, the retrieval component is not a novel invention. Calling it RAG gives it a new, distinct identity, rather than acknowledging it as a sophisticated application of existing IR techniques.
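The retrieve-then-ground loop described above can be sketched using nothing but classic IR machinery. The corpus, the term-overlap scorer (a crude stand-in for TF-IDF or BM25), and the `generate()` stub below are all illustrative assumptions, not the API of any particular RAG framework:

```python
# A minimal sketch of the retrieve-then-generate pattern behind RAG:
# tokenize, score, rank (decades-old IR steps), then hand the top
# passages to a generator as grounding context.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    # Classic IR step: rank documents by term overlap with the query
    # (a crude stand-in for TF-IDF or BM25 scoring).
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)),
                    reverse=True)
    return ranked[:k]

def generate(query, passages):
    # In a real RAG system a language model produces the answer;
    # here we just show the grounded prompt that would be sent to it.
    context = " ".join(passages)
    return f"Answer '{query}' using: {context}"

corpus = [
    "Inverted indexes map terms to the documents containing them.",
    "BM25 is a classic ranking function from information retrieval.",
    "Backpropagation trains multi-layer neural networks.",
]
query = "ranking function information retrieval"
prompt = generate(query, retrieve(query, corpus))
print(prompt)
```

Only the final `generate()` call involves anything new; the retrieval half of the pipeline is a search engine in miniature.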
Even the term deep learning itself, which underpins the entire GenAI revolution, is an evolution of an old concept. The fundamental architecture of neural networks, with their layers of interconnected nodes, dates back to the 1950s. The backpropagation algorithm, crucial for training these networks, was popularized in the 1980s. What we now call deep learning is primarily a result of unprecedented computational power and vast datasets, allowing for the creation and training of neural networks with far more layers than were previously feasible. The core idea, however, remains the same. The "deep" qualifier emphasizes a quantitative leap in scale, but it should not entirely overshadow the legacy of the pioneers who laid the theoretical groundwork.
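To underline how old that core machinery is, here is backpropagation in miniature: a two-layer network trained by the chain rule and gradient descent, in plain Python. The toy task (learning y = 2x) and all variable names are illustrative assumptions, not any library's API:

```python
# A minimal sketch of 1980s-era backpropagation: forward pass,
# chain-rule gradients, gradient-descent update. Two stacked linear
# layers learn the mapping y = 2x from four sample points.
import random

random.seed(0)
w1, w2 = random.random(), random.random()  # one weight per layer
data = [(x, 2.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]
lr = 0.02

for epoch in range(300):
    for x, y in data:
        # Forward pass: h = w1*x, out = w2*h.
        h = w1 * x
        out = w2 * h
        # Backward pass: chain rule on the squared error (out - y)^2.
        d_out = 2.0 * (out - y)
        d_w2 = d_out * h        # gradient w.r.t. the output layer
        d_w1 = d_out * w2 * x   # gradient propagated to the first layer
        # Gradient-descent update.
        w2 -= lr * d_w2
        w1 -= lr * d_w1

print(w1 * w2)  # effective slope; should approach 2.0
```

Scale this up to millions of weights and nonlinear activations and you have the training loop of a modern deep network; the algorithm itself predates the "deep learning" label by decades.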
The GenAI landscape is a vibrant and exciting place, but it’s a mistake to view it as a completely new continent. It is, in many ways, a sophisticated city built upon the foundations of old-world architecture. The constant coinage of new terminology may give the impression of ceaseless, paradigm-shifting innovation, which can be an effective way to attract funding and talent. However, it risks creating a siloed perspective that ignores the decades of intellectual labor that made the current breakthroughs possible. True progress lies not just in new names, but in a clear-eyed understanding of how today’s innovations are intrinsically linked to the history of AI.