1 July 2023

AGI

For Artificial General Intelligence to see reality, the use of Large Language Models must be extended with short-term, long-term, and sensory memory, providing an abstraction of associative memory in both implicit and explicit forms. This will also need to extend into some form of representation in cognitive modeling, as well as quantum information that extends into the geometry of space and time. Above all, aspects of sapience, self-awareness, and sentience will need to be achieved for plausible AGI.

AGI refers to a combined effort between symbolic and sub-symbolic learning, so a natural cognitive architecture is a Hybrid AI in nature. In industry, however, symbolic learning has largely been ignored in favor of sub-symbolic learning, and sub-symbolic learning comes with many deficiencies from its focus on probabilistic methods. The machine neither understands these probabilities, nor can it explain its black-box outputs, nor is it able to interpret them into new forms of knowledge. Most so-called AI solutions are far from intelligent. Statistical methods have already been shown to be brittle, rigid, and uninterpretable. Statistics is a level of abstraction above logic that machines simply cannot seem to grasp as part of their programmable circuitry. And researchers should really stop muddying the waters with incorrect use of terms just to project false pretences of progress and secure funding.
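The hybrid symbolic/sub-symbolic architecture mentioned above can be sketched in miniature: a statistical scorer produces a probability, while an explicit rule layer contributes human-readable knowledge that can override it. This is only an illustrative toy, not any particular system; all names, rules, and vocabularies here are hypothetical.

```python
# Minimal sketch of a Hybrid AI: a sub-symbolic (statistical) scorer whose
# output can be overridden by a symbolic (rule-based) layer.
# All names, rules, and word lists are hypothetical illustrations.

def subsymbolic_score(text: str) -> float:
    """Toy statistical scorer: fraction of words drawn from a 'positive' vocabulary."""
    positive = {"good", "great", "excellent"}
    words = text.lower().split()
    return sum(w in positive for w in words) / max(len(words), 1)

# Explicit, inspectable knowledge: (condition, label) pairs.
SYMBOLIC_RULES = [
    (lambda text: "not" in text.lower().split(), "negated"),
]

def hybrid_classify(text: str) -> str:
    # Symbolic layer runs first: explicit rules override the statistics.
    for condition, label in SYMBOLIC_RULES:
        if condition(text):
            return label
    # Otherwise fall back to the sub-symbolic probability estimate.
    return "positive" if subsymbolic_score(text) > 0.5 else "negative"

print(hybrid_classify("not good"))    # rule fires -> "negated"
print(hybrid_classify("good great"))  # statistics -> "positive"
```

The point of the sketch is interpretability: the rule layer can be read, audited, and extended by a human, whereas the statistical score alone explains nothing.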

Generative AI

Generative AI is not really AI. The only thing generative is the application of deep learning methods, which is all statistics. The broader field of Machine Learning makes up only thirty percent of AI. A lot of incorrect terminology is floating around in academia, confusing people about AI progress. In the last fifty years there have not been any significant groundbreaking advancements in AI, apart from the renaming of fields and the reuse of methods that have been around for decades. For example, Deep Learning basically comes from reusing methods in Neural Networks. Large Language Models are also a trendy topic; however, LLMs are simply an engineering extension of embedding models, which fall under the sub-area of distributional semantics, another area that has been around for decades in information retrieval.

In most cases of Machine Learning methods, the machine develops no formal context or understanding apart from the use of an intermediate programming language to translate probabilities into logical form through computational syntax and semantics. If the machine developed any form of understanding, there would be no need to use a programming language to build a machine learning model.

The other significant issue in the field is the wrong types of people hired at organizations, who primarily come from math and statistics backgrounds. The right people to be conducting AI research should really come from computer science backgrounds, where the full spectrum of subject matter is formally taught in both theory and practice. Generative AI should really be called Generative Deep Learning, as that is pretty much the only area covered in application.
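The claim that embedding models descend from distributional semantics can be made concrete with a small sketch: words are represented by their co-occurrence counts, and words sharing contexts end up with similar vectors. The toy corpus and window size below are hypothetical choices for illustration only.

```python
# Minimal sketch of distributional semantics, the idea behind embedding
# models: words occurring in similar contexts receive similar vectors.
# The corpus and the +/-1 word window are hypothetical toy choices.
from collections import Counter
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "stocks rose on the market",
]

# Build co-occurrence vectors within a +/-1 word window.
vectors = {}
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        ctx = vectors.setdefault(w, Counter())
        for j in (i - 1, i + 1):
            if 0 <= j < len(words):
                ctx[words[j]] += 1

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" share contexts ("the", "sat"), so they score higher
# than the unrelated pair "cat" and "stocks".
print(cosine(vectors["cat"], vectors["dog"]) >
      cosine(vectors["cat"], vectors["stocks"]))  # True
```

Modern LLM embeddings replace raw counts with learned dense vectors, but the underlying principle, meaning from distribution over contexts, is the same decades-old one from information retrieval.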

Conference Index