21 January 2026

Specialist’s Blind Spot in Pragmatic AI

The phenomenon of academic tunnel vision among PhD holders, particularly in Artificial Intelligence, is a frequent point of contention between the world of pure research and the world of pragmatic engineering. To an outside observer, it often seems that a PhD holder’s deep expertise comes at the cost of intellectual flexibility. While the resulting one-dimensional approach can be frustrating, it is rarely a product of ignorance. It is the product of how the academic ecosystem is structured, incentivized, and funded.

A PhD is nominally a Doctor of Philosophy, but in practice it is a degree of extreme specialization. To contribute something original to human knowledge, one must drill down into a specific niche. If a researcher spends five to seven years mastering the nuances of probabilistic graphical models, they naturally begin to see the world through that lens. This is the Law of the Instrument: when you are an expert with a hammer, every problem looks like a nail.

Many PhD-level researchers gravitate toward probabilistic or statistical methods because they are mathematically elegant. There is a formal rigor to proving that a system will converge or behave within certain bounds.

In contrast, approaches like Neuro-symbolic AI or cognitive architectures (such as SOAR or ACT-R) are often viewed by purists as messy. These hybrid systems combine the fluidity of neural networks with the rigid logic of symbolic processing. While these architectures are highly pragmatic and mirror human cognition more closely, they are harder to prove mathematically. For a researcher whose career depends on peer-reviewed publications, a kludge that works is often less valuable than a beautiful theory that is slightly less functional.
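
To make the hybrid idea concrete, here is a minimal sketch of the neuro-symbolic pattern in a toy setting: a statistical scorer proposes candidate answers, and a symbolic rule layer vetoes any that contradict a small hand-written ontology. Every name here (ONTOLOGY, neural_scores, symbolic_filter) is an illustrative assumption, not the API of any real system.

```python
# Toy neuro-symbolic loop: a statistical scorer proposes answers,
# a symbolic layer vetoes those that contradict known facts.
# All names and data are illustrative placeholders.

ONTOLOGY = {
    "penguin": {"is_a": "bird", "can_fly": False},
    "sparrow": {"is_a": "bird", "can_fly": True},
}

def neural_scores(question: str) -> dict:
    """Stand-in for a neural model's confidence per candidate answer."""
    # A trained network would produce these scores; hard-coded for the sketch.
    return {"penguins can fly": 0.71, "penguins cannot fly": 0.29}

def symbolic_filter(candidate: str) -> bool:
    """Reject candidates that contradict the ontology's hard constraints."""
    if "penguin" in candidate and "can fly" in candidate:
        return ONTOLOGY["penguin"]["can_fly"]
    return True

def answer(question: str) -> str:
    # Statistics propose, logic disposes: keep only legal candidates,
    # then return the highest-scoring survivor.
    scored = neural_scores(question)
    legal = {c: s for c, s in scored.items() if symbolic_filter(c)}
    return max(legal, key=legal.get)

print(answer("Can penguins fly?"))  # -> penguins cannot fly
```

The statistical half supplies fluency; the symbolic half supplies the guarantee. That division of labor is exactly the trade the purists find inelegant.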

The frustration regarding the rejection of established standards, like W3C Semantic Web protocols or older structured methods, often comes down to the Not Invented Here syndrome. In the current AI climate, there is a massive trend toward connectionism (neural networks). Because these methods have seen explosive success in the last decade, many researchers view structured or rule-based methods as relics of the first AI Winter.

They reject what has worked for decades—like formal ontologies or structured data—because those methods don't scale with modern GPU clusters in the same way. The pragmatic best of both worlds approach is often ignored because it requires the researcher to be a generalist, whereas the university system rewards being the world’s leading expert in a single, narrow sub-method.

The one-dimensional approach is a systemic symptom of the publish-or-perish culture. To break this cycle, the field needs to move toward intellectual pluralism. Using cognitive architectures or taking inspiration from the early internet’s structured standards isn’t going backward; it’s incorporating the stability of the past into the power of the future.

True innovation in AI likely won't come from a more complex probability density function, but from the messy, pragmatic integration of symbolic logic and neural intuition. The PhDs who will lead the next generation are those willing to step out of their narrow corridors and embrace the messy reality of hybrid systems.

19 January 2026

Beyond the Pixel Dream

In the current landscape of generative media, AI video models are often described as dreamlike. While this is a poetic way to excuse their flaws, the reality is that they frequently underperform in professional environments. Despite the massive compute behind models like Sora, Veo, or Runway, current AI video still sucks because it lacks a fundamental understanding of physics and temporal logic.

Current models primarily struggle with three structural issues that prevent them from reaching professional-grade lucidity:

  • The Physics Failure: Because these models are statistical predictors rather than world simulators, they do not understand gravity, momentum, or collision. This leads to the morphing effect, where a hand holding a cup might merge into the ceramic, or a person walking may glide across the floor without friction.
  • Temporal Drift: AI video models often forget the beginning of a clip by the time they reach the end. A character’s hair might change color, or a background building might vanish between frames. This lack of long-range coherence makes it impossible to use AI for scenes longer than a few seconds without heavy editing (a crude way to measure this drift is sketched after this list).
  • The Uncanny Micro-Expression: Human perception is highly sensitive to the 40+ muscles in the face. Current AI struggles to sync micro-expressions with dialogue, leading to spaghetti faces or eyes that don't blink with natural timing, triggering the uncanny valley.
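
One crude way to put a number on temporal drift, as promised above: embed every frame with any pretrained image encoder and compare the feature vectors over time. The sketch below assumes the per-frame embeddings already exist (the choice of encoder is left open) and uses random vectors as stand-ins.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def temporal_drift(frames: np.ndarray) -> dict:
    """Crude coherence diagnostics for a clip.

    frames: (num_frames, dim) array, one embedding per frame from any
    pretrained image encoder (the encoder choice is an open assumption).
    """
    # High adjacent similarity = smooth local transitions.
    adjacent = [cosine(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    # Low first-to-last similarity = the clip "forgot" how it started.
    return {
        "mean_adjacent_similarity": float(np.mean(adjacent)),
        "first_to_last_similarity": cosine(frames[0], frames[-1]),
    }

# Random vectors stand in for real frame features in this demo.
rng = np.random.default_rng(0)
print(temporal_drift(rng.normal(size=(16, 512))))
```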

To advance AI video from a gimmick to a legitimate production tool, the industry must pivot away from pure pixel-prediction and toward World Model architectures.

  • Integrating Physics Engines: Instead of just guessing the next pixel, future models must be constrained by neural physics layers. By training AI on 3D simulations alongside real video, we can force the model to respect the laws of motion. A ball falling in a lucid model should follow a parabolic arc, not just fade out of existence (a toy version of this constraint is sketched after this list).
  • Decoupled Representations: We need models that separate the actor, the action, and the environment into distinct layers—similar to how a professional VFX pipeline works. If an AI understands that the car is an object separate from the street, a director can change the camera angle or the car's color without rerendering the entire scene.
  • Feedback Loops and Directable Latents: Advancement requires moving beyond the one-shot prompt. Flexible models should allow for iterative refinement, where a producer can click on an object in a generated video and say, “Make this move faster,” or “Change the lighting to sunset,” without losing the original composition.
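
The toy physics penalty promised above, assuming the simplest possible case: a single object in free fall. It scores a generated height trajectory by how far it deviates from projectile motion, fitting only the unknown initial height and velocity. Real neural physics layers are far more sophisticated; this only shows the shape of the constraint.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def projectile_penalty(t: np.ndarray, y: np.ndarray) -> float:
    """Score a height trajectory y(t) against free-fall physics.

    Fits y(t) = y0 + v0*t - 0.5*G*t**2 over the free parameters
    (y0, v0) and returns the mean squared residual: zero for a
    trajectory that obeys gravity, positive for one that does not.
    """
    # Move the known gravity term across, leaving a linear system in (y0, v0).
    target = y + 0.5 * G * t**2
    A = np.stack([np.ones_like(t), t], axis=1)
    params, *_ = np.linalg.lstsq(A, target, rcond=None)
    fitted = A @ params - 0.5 * G * t**2
    return float(np.mean((y - fitted) ** 2))

t = np.linspace(0.0, 1.0, 30)
falling = 10.0 + 2.0 * t - 0.5 * G * t**2   # obeys gravity
hovering = np.full_like(t, 10.0)            # object just hangs in the air

print(projectile_penalty(t, falling))   # ~0.0
print(projectile_penalty(t, hovering))  # clearly positive: physics violated
```

In a training loop, a term like this could be added to the generator’s loss, nudging it toward motion that respects gravity.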

The lack of quality and coherence of current AI video is a symptom of its reliance on superficial patterns. The path to lucidity lies in building systems that don't just mimic the look of a video, but understand the logic of the world it depicts. When AI can distinguish between a character and their shadow, or a fluid and a solid, it will finally become a tool that enhances, rather than frustrates, the creative process.

Architecting Digital Psychopathy

The rapid militarization of Artificial Intelligence has reached a harrowing inflection point, with Israel serving as the primary testing ground for what many ethicists now describe as a sociopathic model of warfare. The integration of systems like Lavender, The Gospel, and Where’s Daddy into military operations shifts the burden of lethal decision-making from human conscience to cold, statistical algorithms. It represents the darkest side of AI: a future where intelligence is divorced from empathy and used to industrialize death.

At the core of this transition is the systematic dehumanization of the target. In traditional warfare, the decision to take a life—no matter how flawed—is a human act involving judgment, risk, and, ideally, a sense of moral weight. Israel’s AI-driven targeting systems replace this with algorithmic correlation.

The Lavender system, for instance, has reportedly been used to cross-reference vast datasets to flag tens of thousands of individuals as potential targets. When an AI labels a human being based on a probability score rather than direct evidence, the person is no longer an individual but a data point in an attrition calculation. This is the hallmark of a sociopathic system: it observes human life without any capacity to value it, treating the elimination of a target with the same mechanical indifference as sorting a spreadsheet.

Perhaps the most dangerous aspect of this AI dark side is the phenomenon of automation bias. Reports indicate that human operators often spend as little as twenty seconds reviewing a target selected by an AI before authorizing a strike. This creates a moral buffer that allows individuals to commit atrocities under the guise of just following the data.

By building systems that intentionally minimize the time for human reflection, the architecture itself becomes psychopathic. It is designed to bypass the natural human hesitation to kill, creating a killing factory where the speed of the algorithm dictates the pace of the violence. This sets a global precedent where AI is not used to enhance human wisdom, but to automate the most evil impulses of tribalism and warfare.

The danger extends beyond any single conflict. Israel has long been described as a laboratory for surveillance and military technology, exporting its tools to governments and regimes worldwide. By normalizing unaccountable autonomous systems, the state and the defense firms behind these tools are poisoning the future of AI for the entire planet.

If the primary use case for advanced AI is the efficient liquidation of perceived enemies with acceptable collateral damage, then we are not building an intelligent future; we are building a high-tech panopticon of terror. This dark side suggests a world where AI serves the ends of power and greed, unanchored from the ethical constraints that make civilization possible.

The current trajectory of AI development in this sector is a warning to humanity. When we train our most advanced models to be efficient at destruction while ignoring the fundamental sanctity of life, we are creating a psychopathic intelligence that can never be re-aligned with the common good. We are witnessing the birth of a cold, calculated evil—an AI that does not just ignore ethics, but is fundamentally built to operate outside of them.

Architecture of AI Stagnation

The promise of Artificial General Intelligence (AGI)—a system capable of human-level reasoning and creative problem-solving—is increasingly being strangled by the very companies that claim to be its pioneers. While Google, Amazon, Meta, and Apple (the Big Tech quadrumvirate) control the vast majority of the world's compute and data, their corporate structures have become hostile environments for genuine AI advancement. Driven by a toxic blend of greed, stagnant corporate culture, and a reliance on marketing over substance, these firms have transformed from innovators into echo chambers of stagnation.

At the heart of Big Tech’s failure is a total absence of practical ethics. For these companies, AI is not a tool for human flourishing, but a mechanism for extreme extraction. Meta and Google’s business models depend on the invasive harvesting of personal data, meaning their AI research is inherently biased toward surveillance and behavioral manipulation.

When ethical conflicts arise, these companies have shown a pattern of suppressing dissent. The high-profile ousting of ethics researchers like Timnit Gebru and Margaret Mitchell from Google underscored a grim reality: in Big Tech, Ethical AI is a marketing slogan, not a design requirement. This lack of moral foundation ensures that any intelligence they build will be fundamentally misaligned with human values.

Innovation requires a radical diversity of thought, yet Big Tech remains anchored in a sprawling corporate environment where racism and sexism are systemic. Reports consistently highlight a diversity crisis where women and Black researchers are systematically excluded or marginalized. When the room where it happens is a homogenous echo chamber of light-skinned men from similar socioeconomic backgrounds, the resulting AI models inevitably reflect those narrow biases.

Furthermore, the scale of these companies has led to the hiring of mediocrity. Large-scale corporate AI labs often prioritize safe incrementalism over high-risk, high-reward breakthroughs. Brilliant researchers frequently find themselves bogged down in bureaucratic red tape or forced to work on trivial features like ad-targeting optimization rather than fundamental AGI. This environment rewards those who navigate politics rather than those who push the boundaries of science.

Perhaps the most visible symptom of this stagnation is the gap between hype and performance. To satisfy shareholders, these companies rush half-baked tools to market. Google’s Gemini and Meta’s Llama are often promoted with flashy, curated demos that rarely match the lived experience of the user. We see agentic tools that fail at simple tasks and AI summaries that hallucinate dangerous misinformation.

These companies are trapped by the Bitter Lesson: they believe that more compute and more parameters will eventually solve the problem of reasoning. However, as deep learning hits a wall of diminishing returns, the lack of algorithmic innovation becomes apparent. They are building bigger engines for cars that still don’t have steering wheels.

Big Tech is currently the greatest obstacle to AGI. Anchored by pride and a move fast and break things mentality that has matured into move slow and protect profits, these giants are incapable of the radical self-disruption required for true superintelligence. Until AI research moves away from these centralized, ethically bankrupt corridors, it will remain stuck in a loop of profitable, but ultimately hollow, statistical imitation.

Beyond the Statistical Ceiling

AI is currently dominated by a single paradigm: Connectionism. While this approach has yielded breathtaking results in natural language and image generation, it has led to a research culture that is almost exclusively stuck on statistics and deep learning. This statistical obsession has come at the expense of Algorithmic Modeling—the attempt to replicate the underlying logical and cognitive structures of the human mind.

At its core, deep learning is an exercise in high-dimensional curve fitting. Models like GPT-4 or Gemini 3 Pro do not know facts or reason through logic; they calculate the statistical probability of the next token based on trillions of parameters. This approach is favored because it is computationally scalable. In the race for AGI, the industry has adopted what is known as The Bitter Lesson: the idea that leveraging massive amounts of compute and data beats human-engineered clever algorithms every time.
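
Stripped to its core, "calculate the statistical probability of the next token" is just a softmax over output scores. The logits below are invented for illustration; in a real model they are produced by billions of learned parameters.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Turn raw scores into a probability distribution over tokens."""
    z = np.exp(logits - logits.max())  # subtract max for numerical stability
    return z / z.sum()

# Invented logits for candidate continuations of "The cat sat on the".
vocab = ["mat", "chair", "moon", "theorem"]
logits = np.array([4.1, 2.8, 1.2, -0.5])

for token, p in zip(vocab, softmax(logits)):
    print(f"{token:>8}: {p:.3f}")
# The model picks the argmax or samples from this distribution. At no
# point does it consult a fact or a rule, only learned co-occurrence.
```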

However, this reliance on statistics creates a fundamental ceiling. Human intelligence is characterized by sample efficiency: a child can learn the concept of a cat from two examples, whereas a deep learning model requires thousands. By neglecting algorithmic models of the mind, we have built idiot savants: systems that can write poetry but fail at basic spatial reasoning or at unexpected edge cases that weren’t in their training data.

Deep learning is essentially interpolative. It excels as long as the problem space remains within the distribution of its training data, which makes it a limited-domain tool. For true Artificial General Intelligence (AGI) or Superintelligence, a system must be able to reason beyond its data: to form a what-if hypothesis about a situation it has never seen.

Because deep learning lacks an internal world model or a set of first principles (like physics or ethics), it cannot navigate the unknown. It is a map made of past experiences, rather than a compass that can find a way through new territory. This is why self-driving cars still struggle with rare weather events or unusual road debris; the statistics for those rare events are too sparse for the model to calculate a safe path.
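
The map-versus-compass point can be shown in a few lines: fit a flexible curve on a narrow training range and evaluate it outside that range. A polynomial stands in here for any statistical learner; the specific numbers are arbitrary.

```python
import numpy as np

# Fit sin(x) on a narrow window; inside it, the model looks brilliant.
rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 3.0, 40)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=40)
coeffs = np.polyfit(x_train, y_train, deg=7)

# In-distribution vs. out-of-distribution error.
x_in = np.linspace(0.0, 3.0, 100)
x_out = np.linspace(5.0, 8.0, 100)
err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)).mean()
err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).mean()

print(f"error inside the training range:  {err_in:.4f}")   # tiny
print(f"error outside the training range: {err_out:.1f}")  # explodes
```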

While the world chases larger GPU clusters, a smaller segment of research focuses on Cognitive Architectures like ACT-R or SOAR. These models try to mimic the human brain’s modularity—separating long-term memory, procedural logic, and sensory input into distinct, interacting algorithms.

Instead of treating the brain as one giant, homogeneous black box of neurons, these models attempt to build the mechanisms of thought. However, they are currently ignored because they are difficult to scale and do not provide the immediate wow factor of generative media.
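
As a caricature of that modularity (nothing like the real internals of ACT-R or SOAR, just the shape of the idea), the sketch below wires perception, declarative memory, and procedural rules together as separate components that share a working memory, rather than as one end-to-end network.

```python
# A caricature of a cognitive-architecture cycle: distinct modules for
# perception, declarative memory, and procedural rules, exchanging a
# shared working memory. Purely illustrative; real ACT-R/SOAR differ.

LONG_TERM_MEMORY = {"fire": "hot", "ice": "cold"}

PROCEDURAL_RULES = [
    # (condition on working memory, action to take)
    (lambda wm: wm.get("recalled") == "hot", lambda wm: wm.update(goal="retreat")),
    (lambda wm: wm.get("recalled") == "cold", lambda wm: wm.update(goal="approach")),
]

def perceive(stimulus: str, wm: dict) -> None:
    wm["percept"] = stimulus  # sensory module writes to working memory

def recall(wm: dict) -> None:
    wm["recalled"] = LONG_TERM_MEMORY.get(wm.get("percept"))  # memory module

def act(wm: dict) -> None:
    for condition, action in PROCEDURAL_RULES:  # procedural module
        if condition(wm):
            action(wm)
            return

wm: dict = {}
perceive("fire", wm)
recall(wm)
act(wm)
print(wm)  # {'percept': 'fire', 'recalled': 'hot', 'goal': 'retreat'}
```

Each module can be inspected, replaced, or scaled independently, which is precisely the property a monolithic network lacks.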

AI research is stuck on statistics because statistics are currently the most profitable and scalable path. Yet, to reach Superintelligence, we must bridge the gap between calculating an answer and thinking through a problem. The future of AGI likely lies in Neuro-symbolic AI: a hybrid that combines the pattern-recognition power of deep learning with the rigorous, algorithmic logic of human-like cognitive models.