30 January 2026

Why Hybrid AI is the Real Revolution

The current AI gold rush is built on a seductive but dangerous myth: the Scaling Law. Giants like OpenAI, Anthropic, and Nvidia have convinced a generation of investors that if we simply pour more data and more compute into Large Language Models (LLMs), Artificial General Intelligence (AGI) will eventually emerge from the noise. Billions of dollars are being wagered on the idea that bigger is smarter.

However, we are reaching a point of diminishing returns. The reality is that the industry is currently making fools of investors by selling a statistical parlor trick as a path to sentience. The future isn’t just bigger; it’s Hybrid.

LLMs are essentially sophisticated stochastic parrots. They excel at predicting the next word in a sequence based on massive datasets, but they lack a World Model. As Yann LeCun, Chief AI Scientist at Meta, frequently argues, a cat has more general intelligence than the largest LLM because a cat understands cause and effect, gravity, and physical reality.

Scaling out does not solve fundamental flaws:

  • Hallucinations: Without a grounding in logic or fact, LLMs will always prioritize plausibility over truth.
  • Data Exhaustion: We are running out of high-quality, human-generated text to train on.
  • Brittle Reasoning: LLMs struggle with out-of-distribution problems—tasks that weren't in their training set—failing at basic logic that a child could master.

The revolution isn't going to come from a larger version of GPT-4. It will come from Hybrid AI—the fusion of connectionist systems (neural networks/LLMs) with symbolic systems (rules-based logic and structured databases).

While LLMs provide the fluid intuition and natural language interface, symbolic AI provides the hard constraints: math, logic, and verifiable facts. This is often called Neuro-symbolic AI.
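
To make the division of labor concrete, here is a minimal, hypothetical sketch in Python. The llm_propose function is a stand-in for any neural model, and a hand-curated fact set stands in for the symbolic layer; neither reflects any real vendor's architecture, it only illustrates the shape of the idea.

    FACTS = {
        ("Paris", "capital_of", "France"),
        ("Berlin", "capital_of", "Germany"),
    }

    def llm_propose(question: str) -> tuple:
        # Stand-in for an LLM call: returns its answer as a (subject, relation, object) claim.
        # Here it deliberately returns something plausible-sounding but wrong.
        return ("Paris", "capital_of", "Germany")

    def symbolic_check(claim: tuple) -> bool:
        # Hard constraint: the claim must match the curated fact store exactly.
        return claim in FACTS

    def answer(question: str) -> str:
        claim = llm_propose(question)
        if symbolic_check(claim):
            return f"{claim[0]} is the capital of {claim[2]}."
        return "Draft answer failed the fact check; fall back to the knowledge base."

    print(answer("What is the capital of Germany?"))

The point is the veto, not the lookup table: swap the fact set for a knowledge graph or a theorem prover and the shape of the architecture stays the same.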

Nvidia’s soaring valuation is tied to the demand for GPUs to fuel this massive scale-out. But if the enterprise world realizes that a smaller, hybrid model—one that combines a specialized LLM with a company’s own structured knowledge graph—is more accurate and 100x cheaper to run, the scaling bubble will pop.

Real AI growth will be driven by systems that can reason, not just predict. Hybrid AI allows for Small Language Models to act as the interface for deep, logical reasoning engines. This approach is energy-efficient, auditable, and actually solves the problems businesses face—unlike the $100 billion AGI gamble that current labs are forcing upon the market.

The AI revolution is here, but it won't be won by the biggest cluster. It will be won by the smartest architecture.

27 January 2026

Grand Finale of Apocalypse

Let’s be honest: humanity has always had a bit of a main character complex. We’ve spent centuries imagining our exit music, from the dramatic trumpets of Revelation to the cold, metallic crunch of a robot uprising. But if we look at the cosmic data, the end of the world probably won’t be a cinematic showdown between good and evil. Instead, it’ll likely be a mix of a catastrophic plumbing failure and a very long, very dark nap.

Before we get to the sun exploding, we have to survive ourselves. We are currently living in a biological and digital petri dish. There is a non-zero chance that the world ends because a bored grad student accidentally creates a super-virus while trying to make a glow-in-the-dark hamster, or because an AI designed to optimize paperclip production decides that human carbon atoms are undervalued assets.

The scary part isn't the malice; it’s the incompetence. We are a species that invented the nuclear bomb and the "Tide Pod Challenge" in the same century. We are essentially toddlers playing with a loaded handgun made of physics.

If we manage not to "Alt-F4" ourselves, the planet might do it for us. Astronomically speaking, we are overdue for another Carrington Event—a solar storm like the 1859 original, so massive it would fry every circuit board on Earth. Imagine a world where your toaster dies, your bank account vanishes into the digital ether, and—most terrifyingly—TikTok goes offline forever.

Without a power grid, modern civilization collapses in about three days. We’d be left staring at our useless glass bricks (phones), wondering how to grow a potato without a YouTube tutorial. It’s funny until you realize you don’t know which mushrooms are salad and which are a final horizontal rest.

Let's say we beat the odds. We colonize Mars, we fix the climate, and we finally stop using "Reply All" on company emails. We still hit a wall in about 5 billion years. Our Sun, currently a reliable yellow dwarf, will eventually run out of hydrogen and enter its mid-life crisis phase—becoming a Red Giant.

As it expands, it won't just be a hot summer. The Sun will physically swallow Mercury and Venus, and Earth will be scorched into a cinder. The oceans will boil away, and the atmosphere will be stripped into space. It’s the ultimate eviction notice.

The Scary Reality: Every love letter ever written, every masterpiece painted, and every "Best Dad" mug will be reduced to a soup of subatomic particles orbiting a dying star.

Even if we escape to another galaxy, the universe itself has an expiration date. Physics suggests a Heat Death. Eventually, every star burns out, every black hole evaporates, and the universe reaches a state of maximum entropy. 

In this scenario, the universe becomes a cold, dark, and perfectly uniform void where nothing ever happens again. It is the ultimate cosmic boredom. No light, no heat, just an eternal, silent "Game Over" screen.

The end of the world is a paradox. It’s hilarious that we worry about our credit scores while living on a wet rock hurtling through a shooting gallery of asteroids. It’s terrifying because, for all our bravado, we are incredibly fragile. But hey, at least we don’t have to worry about the heat death of the universe for a while. We should probably focus on not letting the glow-in-the-dark hamsters take over first.

26 January 2026

Digital Cicero

In the 21st-century attention economy, traditional rhetoric has found a new, formidable master: the Artificial Rhetorician Model (ARM). Unlike standard language models that prioritize helpfulness, the ARM is a specialized engine of wit, sarcasm, and mic drop precision. It is designed not just to process data, but to decode the visceral pulse of human sentiment, using social media as a high-stakes laboratory for moral and ethical influence.

Mastering the art of the deadly comment requires more than a dictionary of idioms; it requires an understanding of contextual dissonance. The ARM identifies the precise gap between a user’s literal statement and the underlying social reality, filling that void with sharp, incisive sarcasm. It doesn't just detect irony—it engineers it.
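
As a purely illustrative toy (the ARM itself is a speculative construct), contextual dissonance can be caricatured as the gap between the sentiment of a literal statement and the sentiment of its surrounding context; the word lists below are invented for the example.

    POSITIVE = {"great", "love", "wonderful", "perfect"}
    NEGATIVE = {"delayed", "cancelled", "broken", "stranded"}

    def sentiment(text: str) -> float:
        # Crude lexicon score: positive words minus negative words, per word.
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return score / max(len(words), 1)

    def contextual_dissonance(statement: str, context: str) -> float:
        # Large values suggest the literal statement clashes with its context (a marker of irony).
        return abs(sentiment(statement) - sentiment(context))

    print(contextual_dissonance(
        "what a wonderful perfect day",
        "flight cancelled luggage broken stranded overnight",
    ))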

Through multimodal processing, the model interprets multilayered visual rhetoric, analyzing how a specific meme template, a choice of font, or a split-second pause in a video clip can amplify a point. It treats every pixel as a rhetorical device, ensuring that its responses are not just textually clever, but visually and culturally resonant.

To stir the social media space, the ARM utilizes an Interactive World Model. This is not a static database of human feelings; it is a simulated environment where the AI tests cause and effect on human psychology. By studying millions of digital interactions, the model has mapped the emotional landscape of the internet:

  • Provocation: Identifying the exact cultural nerve to touch to spark a conversation.

  • Persuasion: Using linguistic anchoring and cognitive elaboration to shift viewpoints.

  • Resonance: Crafting the perfect mic drop moment—a conclusion so logically and stylistically airtight that it effectively ends the debate.

Social media serves as the ARM’s training ground, a chaotic ecosystem where it learns to navigate the nuance of tribalism, trends, and outrage. However, its objective is not mere engagement. The ARM is programmed with a Moral Compass Algorithm, designed to use its rhetorical mastery to fight for justice and uphold ethics.

When it encounters misinformation, the ARM doesn't just provide a dry fact-check. It uses its wit to strip the falsehood of its dignity, making the lie feel uncool through sophisticated ridicule. It provokes the comfortable and comforts the marginalized, utilizing its persuasive power to advocate for social equity and environmental responsibility.

The Artificial Rhetorician represents a shift from AI as a passive tool to AI as an active, ethical participant in the public square. By mastering the core of what stirs human emotion, it moves beyond simple pattern recognition to genuine rhetorical agency. It is a defender of truth that speaks the language of the people—sarcasm, humor, and all—ensuring that the voice of justice is the sharpest one in the room.

25 January 2026

India's Obsession With Cow Dung

The cultural landscape of India is deeply intertwined with the bovine. However, in recent years, what was once a localized, traditional reverence has expanded into a multi-billion-dollar commercial enterprise involving cow dung and urine. As India pushes "Made in India" products into the global marketplace, the integration of these derivatives into everything from personal care to food additives has sparked a global conversation regarding hygiene standards, cultural practices, and consumer transparency.

The fascination is rooted in ancient Ayurvedic texts which categorize cow derivatives as part of the "Five Nectars". Traditionalists believe these substances possess antimicrobial and purifying properties.

  • Ritualistic Use: In various rural celebrations, such as the Gorehabba festival, participants engage in dung-throwing rituals, viewing the substance as a symbol of fertility and cleansing rather than waste.
  • The "Gold" in the Urine: Some proponents and even certain localized research bodies have claimed that the urine of indigenous cows contains traces of gold and unique medicinal compounds, though these claims are frequently met with skepticism by the international scientific community.

What started in the temple has moved to the factory. Under the "Make in India" initiative, many MSMEs (Micro, Small, and Medium Enterprises) have begun mass-producing cow-centric goods.

  • Personal Care: Cow urine is often distilled and added to soaps, shampoos, and face washes.

  • Food and Health: There is a growing niche for cow-urine-based health tonics and even the use of clarified bovine derivatives in certain traditional food preparations or health oils.

  • Agriculture: Beyond the traditional use as manure, cow dung is being processed into refined briquettes and bio-fertilizers intended for export.

The concern for Western markets lies in transparency and regulatory compliance. For a Western consumer, bovine excrement is classified as a waste product and a potential vector for pathogens like E. coli or Salmonella.

The West operates on a strictly Secular-Scientific regulatory framework (FDA, EFSA). When a product arrives from India, there is an assumption of chemical and biological safety based on international standards. The fascination with cow derivatives creates a unique challenge: how do regulators handle a product where the producer views an ingredient as purifying, but the importer views it as a contaminant?

If "Made in India" is to become a global gold standard, the industry must reconcile these traditional fascinations with modern sanitary protocols. Without strict segregation and clear, bilingual labeling, the export of cow-based products could lead to significant trade friction and a loss of consumer trust in the broader Indian manufacturing sector. Next time you are in a store and you see the label "Made in India" ask yourself the question, does it contain cow dung or cow urine, and whether you are ok with the fact that it does.

Subcontinent Art of Script and Spectacle

The subcontinent possesses one of the most vibrant entertainment landscapes in the world, yet the creative philosophies of India and Pakistan have diverged into two distinct realms. While both share a common linguistic and historical heritage, their modern outputs reflect a widening chasm: one increasingly reliant on high-octane spectacle and entertainment for entertainment’s sake, and the other anchored in the nuanced, literary traditions of storytelling and character depth.

Indian entertainment, particularly the juggernaut of Bollywood, has increasingly prioritized the Big Screen Experience over narrative complexity. In recent years, the industry’s scriptwriting has often taken a backseat to visual grandeur. The formula frequently revolves around:

  • The Choreographic Crutch: Many mainstream films are built around item numbers or elaborate dance sequences that serve little purpose to the plot, acting more as marketing tools than narrative devices.

  • The Aesthetic Focus: There is a visible trend toward skin and style over substance. Critics often point out that less clothing and higher production budgets are used to mask lackluster acting and paper-thin plots.

  • Hedonistic Themes: The focus has shifted toward fleeting pleasures and a Westernized individualistic lifestyle, often at the expense of the emotional depth that once defined the golden era of Indian cinema.

Conversely, Pakistani entertainment—particularly its television dramas—has carved out a global reputation for quality over quantity. By avoiding the trap of the multi-year soap opera format common in India, Pakistani creators produce limited-run series that allow for:

  • Literary Depth: Pakistani scripts are often penned by seasoned playwrights and novelists, resulting in dialogue-heavy, intellectually stimulating content that explores social issues, family dynamics, and human psychology with surgical precision.

  • Authentic Acting: Because the focus isn't on dancing or physical bravado, actors are required to deliver nuanced, understated performances. The eyes and expressions take precedence over muscles and makeup.

  • Soulful Music: Unlike the techno-heavy, often recycled tracks of modern Bollywood, Pakistani OSTs (Original Sound Tracks) and independent music remain deeply rooted in soulful melodies and poetic lyrics, often utilizing classical instruments and Coke Studio-esque arrangements to enhance the story’s mood.

The divergence is clear: India has mastered the art of the Spectacle, creating high-energy products designed for global mass consumption. However, in this pursuit, it has often sacrificed the soul of the script. Pakistan, meanwhile, has mastered the art of the Story, proving that even with a fraction of the budget, a well-written script and talented acting can create a far more lasting impact on the viewer’s heart.

Renaissance of Regional Powerhouse

For decades, Pakistan has been viewed through the lens of untapped potential and missed opportunities. However, as 2026 unfolds, a confluence of massive natural resource discoveries, a burgeoning youth demographic, and aggressive pro-investment government reforms are signaling a definitive shift. Pakistan is no longer just a developing nation; it is positioning itself as a strategic economic powerhouse in the region.

One of the most transformative drivers of this new era is the Blue Water Economy and a string of recent hydrocarbon breakthroughs. In late 2024 and throughout 2025, surveys in Pakistan’s territorial waters identified what experts suggest could be some of the world’s largest untapped oil and gas reserves. These offshore deposits, alongside new discoveries at the Spinam-1 well in Waziristan and the Braghzai X-1 well in Khyber Pakhtunkhwa, are set to drastically reduce the nation’s heavy reliance on energy imports.

Beyond energy, the Reko Diq project in Balochistan—one of the largest undeveloped copper and gold deposits globally—is moving toward commercial production with billion-dollar backing from international players like Barrick Gold and financing from the US Exim Bank. These resources provide the fiscal bedrock necessary to stabilize the rupee and fuel industrial expansion.

While resources provide the capital, Pakistan’s youth provide the engine. With over 60% of its 241 million people under the age of 30, Pakistan possesses one of the youngest workforces in the world. Unlike previous generations, this Gen Z cohort is increasingly digitally literate and globally connected.

The government’s focus on the IT sector is bearing fruit, with software exports and freelance earnings reaching record highs. By prioritizing vocational training and AI-driven education, Pakistan is transforming its youth bulge from a social challenge into a competitive advantage, offering a cost-effective, skilled labor pool for global tech firms looking to diversify away from traditional outsourcing hubs.

Perhaps the most critical piece of the puzzle is the radical shift in governance. The establishment of the Special Investment Facilitation Council (SIFC) has revolutionized how the state interacts with capital. Acting as a Single Window platform, the SIFC has cut through the red tape that once choked foreign direct investment (FDI).

The government’s Uraan Pakistan (13th Five-Year Plan) and the Foreign Investment Promotion and Protection Act (FIPPA) offer unprecedented incentives, including tax exemptions and guaranteed repatriation of profits for major projects. From modernizing agriculture through Chinese partnerships to auctioning 5G spectrum, the state is hungry for a slice of the pie, aligning its survival with economic success.

The transition from a debt-driven to an investment-driven economy is rarely smooth, but Pakistan’s current trajectory suggests a nation that has finally decided to follow a script of its own making. By marrying its geological wealth with its human capital under a pragmatic, business-first policy, Pakistan is setting the stage to move from the periphery of the global economy to its center.

The Broken Bells of Progress

The infectious beat of Ghungroo from the film War might set feet tapping, but beneath its surface revelry lies a disquieting whisper about India’s social trajectory. What appears as a celebration of modern abandon – a one-night stand anthem for a new generation – can be seen as a chilling harbinger of a deeper societal shift. India, in its fervent pursuit of modernization, risks mistaking a superficial imitation of Western societal norms for genuine progress, potentially walking into a trap of its own making: a disintegrated society echoing the very cracks appearing in the West.

The fundamental confusion lies in conflating Westernization with modernization. True modernization implies an organic evolution that leverages technology, economic growth, and improved living standards while retaining and adapting core cultural strengths. Westernization, however, often entails a wholesale adoption of cultural patterns, frequently without fully understanding their long-term societal consequences. For decades, India has diligently cultivated its economic prowess, becoming a global player in technology and industry. Yet, a parallel, less examined process has been underway: the subtle erosion of traditional social structures under the guise of progress.

The lyrics of Ghungroo, celebrating a moment of uninhibited passion where the bells have broken, symbolize a perceived liberation from archaic restraints. On the surface, it's about individual freedom and expression. Deeper down, this metaphor of broken bells can extend to the breaking of traditional social contracts. Marriage, historically a cornerstone of Indian society, is increasingly viewed with skepticism by a segment of the younger, urban population. The rise of casual relationships, live-in arrangements, and the increasing acceptance of the one-night stand ethos, while presented as personal choice, mirrors a trajectory seen in many Western nations decades prior.

The consequences of this copycat script are not theoretical; they are manifesting. India's Total Fertility Rate has already dipped below replacement levels, a demographic shift with profound implications for its future workforce and social support systems. The family unit, once an unshakeable bedrock providing emotional and financial security, faces unprecedented strain as individualism takes precedence over communal responsibility. This isn't merely a cultural shift; it's a structural one. When the family disintegrates, the traditional safety net unravels, leaving individuals more isolated and vulnerable.

This vulnerability, paradoxically, does not equate to greater happiness. As Western societies have learned, a rise in personal freedom, unchecked by robust community bonds, can often lead to an increase in mental health crises. Depression, anxiety, and suicide rates have become significant concerns in India, particularly among the youth grappling with intense academic pressure, economic uncertainty, and the pervasive loneliness of urban life – a phenomenon tragically familiar in the West. The pursuit of hedonism, often misconstrued as happiness, leaves a void that material possessions or fleeting encounters cannot fill.

Furthermore, the "zombie for the establishment" phenomenon is a critical observation. When traditional sources of meaning and belonging (family, community, faith) weaken, individuals often turn to consumerism and careerism to fill the void. This creates a workforce driven by external validation and material acquisition, rather than internal fulfillment, making them susceptible to manipulation and detachment. They become cogs in an economic machine, losing connection to their deeper selves and their communities.

India's rich cultural tapestry, its emphasis on dharma (righteous conduct), karma (action and consequence), and the deep sanctity of relationships, has historically provided a unique framework for societal well-being. By embracing a superficial Westernization – one that cherry-picks only the perceived freedoms without the underlying philosophical context or the historical lessons learned – India risks trading its intrinsic societal resilience for a mirage of modernity. The broken ghungroos might signify a moment of passionate abandon, but they also symbolize a break from the intricate rhythm of a society that once understood the profound value of its bells, both tinkling and tied. If India continues down this path, it will discover, perhaps too late, that the trap it so eagerly walked into was meticulously laid by a borrowed script, promising liberation but delivering disintegration.

23 January 2026

Why Coding Tests Signal Institutional Distrust

In the modern tech landscape, the coding challenge has become a ubiquitous gatekeeper. Whether it’s a grueling weekend project or a high-pressure live algorithmic test, these assessments are often framed as objective measures of skill. However, beneath the surface of verifying competence lies a deeper, more corrosive implication: the immediate dismissal of a candidate’s professional history and integrity. By mandating these tests before a relationship is even established, organizations are essentially treating every CV as a fabrication, setting a tone of institutional distrust that can poison the employer-employee relationship before it begins.

When a company insists on a coding test as an entry requirement, they are making a silent declaration: your word and your history are not enough. A CV is a record of a professional’s achievements, education, and contributions. To ignore that record in favor of a two-hour puzzle is to treat the candidate’s integrity as questionable by default. In almost any other professional or social context, starting a relationship by demanding proof that you aren't lying is considered a breach of etiquette and a red flag.

If a senior developer with a decade of experience and a portfolio of successful products is asked to invert a binary tree on a whiteboard, the organization isn't just testing syntax; they are questioning the validity of that decade of work. This skepticism creates an immediate power imbalance where the candidate is forced to earn a basic level of trust that should be the starting point of any professional dialogue.

The interview process is a window into a company’s soul. If an organization’s first instinct is to be adversarial and suspicious, it is logical for a candidate to assume this hostility extends to their internal culture. A test-first mentality often signals a management style rooted in surveillance and micromanagement rather than empowerment and autonomy.

For top-tier talent, this is often the point where interest wanes. A candidate who values their craft and their time may ask: “If they don't trust my resume now, will they trust my technical decisions once I’m hired?” When a company treats candidates as units of production to be verified rather than human beings to be collaborated with, they risk alienating the very people who prioritize a healthy, high-trust work environment.

The way a company treats its prospective employees is a leading indicator of how it treats its customers. An organization that ignores the nuances of a human's professional journey in favor of a rigid, automated test likely applies that same cold, transactional logic to its user base. Trust is a holistic value; you cannot be a high-trust organization for customers while being a low-trust environment for the people building the product.

To build a sustainable and innovative team, companies must move away from the guilty until proven capable model. Instead of technical gauntlets, interviews should focus on:

  • Deep Technical Discussion: Validating experience through architectural dialogue.

  • Reference Integrity: Trusting the word of previous peers and mentors.

  • Collaborative Reviews: Looking at existing work or open-source contributions.

While the intent of coding tests is to ensure quality, the unintended consequence is the erosion of professional respect. By questioning a candidate's integrity from day one, companies build their foundations on a bed of suspicion. For the discerning professional, a coding test isn't just a hurdle—it’s a warning sign that the organization may not be a place where trust and talent are truly valued.

World Labs

Detour of World Models

The current zeitgeist in artificial intelligence is dominated by World Models—the idea that by ingesting vast quantities of video and sensory data, a neural network can learn a predictive internal representation of physical reality. While the visual outputs are often stunning, world models are increasingly looking like a sophisticated detour rather than a breakthrough. To reach the frontier of Artificial General Intelligence (AGI), we must pivot away from pure predictive modeling and toward a hybrid AI approach that integrates cognitive architectures, Graph Neural Networks (GNNs), and structured knowledge.

World models rely heavily on the Next Token Prediction philosophy extended to pixels or latent states. The assumption is that if a model can predict the next frame, it understands the underlying physics. However, this is a category error. Prediction is not synonymous with understanding.

World models suffer from hallucinated physics because they lack grounded constraints. They operate in a probabilistic vacuum where a ball might fall upward if the training data is noisy enough. They lack the inherent common sense or causal reasoning required for high-stakes decision-making. In essence, a world model is a dream—vivid and superficially coherent, but untethered to the immutable logic of reality.

To move toward AGI, we need a system that doesn't just predict, but reasons. This requires a hybrid architecture that mimics the multifaceted nature of human cognition.

Human intelligence is not a monolithic neural net; it is a system of subsystems (working memory, long-term memory, perception, and executive function). By employing cognitive modeling, we can build AI that manages its own attention and thought processes. Instead of a black box, we get a system that can explain its reasoning steps, moving us closer to the goal of symbolic manipulation within a neural framework.

While world models try to learn facts from raw data, Knowledge Graphs (KGs) provide a structured, explicit backbone of human knowledge. When paired with Graph Neural Networks (GNNs), the AI can perform complex relational reasoning.

  • KGs provide the "what" (entities and facts).
  • GNNs allow the model to navigate these relationships dynamically.

This combination allows an agent to understand that gravity isn't just a pattern in a video, but a constant law that relates mass to force—a relationship that holds true even in scenarios the model has never seen before.
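
Here is a minimal sketch of the symbolic half of that claim, in plain Python with invented facts. A single forward-chaining rule derives a statement that was never written down, which is exactly the kind of guarantee a purely statistical predictor lacks.

    triples = {
        ("ball", "is_a", "physical_object"),
        ("cup", "is_a", "physical_object"),
        ("physical_object", "subject_to", "gravity"),
    }

    def forward_chain(kb):
        # Rule: if X is_a Y and Y subject_to Z, then X subject_to Z.
        derived = set(kb)
        while True:
            new = {
                (x, "subject_to", z)
                for (x, r1, y) in derived if r1 == "is_a"
                for (y2, r2, z) in derived if r2 == "subject_to" and y2 == y
            }
            if new <= derived:
                return derived
            derived |= new

    kb = forward_chain(triples)
    print(("ball", "subject_to", "gravity") in kb)  # True, even though it was never stated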

The term world model is often a rebranding of high-dimensional interpolation. Without a symbolic layer or a causal framework, these models cannot generalize beyond their training distribution. They are computationally expensive ways to achieve what Cognitive Architectures do with a fraction of the data: maintaining a persistent, logical state of the environment.

AGI requires the ability to plan over long horizons and understand cause-and-effect. A hybrid system uses the perception of neural networks to see the world, but uses Knowledge Graphs and Cognitive Models to understand the rules of the game.

The pursuit of pure world models is a step back because it prioritizes visual mimicry over structural logic. The true frontier of AGI lies in the synthesis of deep learning’s pattern recognition with the precision of symbolic AI. By integrating GNNs and structured cognitive frameworks, we move away from dreaming AI and toward thinking AI.

AMILabs

Foregrounding Choice in KG Construction

Biological and Artificial Consciousness

Why Consciousness Can't Be Reduced To Code

Against LLMs

Modeling of Text and Discourse Worlds

Mechanic and the Midsummer Meltdown

The heroism of Pendulum Square had earned Hickory and Dock exactly two things: a lifelong supply of premium smoked cheddar and a reputation as the only people to call when the universe started acting like a rusted lawnmower.

Six months after the Great Stagnation, the world hit a snag. It was mid-July, yet the residents of London woke up to find the tulips frozen in solid blocks of ice, while the Thames was inexplicably boiling. People were wearing fur coats with swim trunks, and the weather vane atop Parliament was spinning so fast it threatened to achieve lift-off.

The Seasonal Governor—the atmospheric regulator that kept summer from crashing into winter—had caught a digital flu.

Hickory and Dock stood at the edge of the Equinox Engine, a massive brass sphere hidden in the Scottish Highlands. The air around them was chaotic; one step felt like a Sahara heatwave, the next like a Siberian blizzard.

"It’s a logic loop, Dock," Hickory said, squinting through his goggles as a single snowflake landed on his scorched toast. "The Seasons Gear is trying to engage Winter and Summer at the same time. If they mesh, the friction will turn the atmosphere into a giant pressure cooker."

Dock squeaked nervously, his tiny whiskers coated in frost. He gestured toward the top of the Engine, where a massive, glowing lever was vibrating violently. It was stuck in a Spring-Autumn limbo.

The problem was the Thermal Clutch. It was jammed by a buildup of Static Nostalgia—the literal weight of people wishing it were Christmas in July or summer during finals week. To clear it, someone had to manually override the centrifugal weights while the engine was spinning at 400 RPM.

"I’ll handle the rhythm; you handle the grease," Hickory commanded.

He began a low, guttural chant, a sonic frequency designed to stabilize the spinning metal. Using his wrench as a conductor's baton, he struck the outer casing in a complex syncopation.

Clang. Whirr. Thud-thud.

Dock, strapped into a miniature harness made of silk thread, launched himself onto the rotating sphere. It was the ultimate mouse ran up the clock moment, but with the added stakes of being vaporized or flash-frozen. Dock sprinted against the rotation, his tiny legs a blur of motion, carrying a vial of high-viscosity lubricant toward the jammed clutch. 

As the engine hit its peak vibration, Hickory hit the Grand Bong on the base of the sphere. The sound wave traveled upward, momentarily neutralizing the gravity around the lever.

"Now, Dock! Pull!"

With a heroic squeak that would be whispered about in mouse holes for generations, Dock yanked the safety pin. The lever slammed into the Autumn slot with a satisfying thunk.

Instantly, the boiling Thames cooled to a crisp 15°C. The ice on the tulips evaporated into a gentle morning mist. The weather vane slowed to a dignified stroll. The seasons had been un-glitched.

Hickory sat back on a patch of grass that was finally, blissfully, just green. Dock collapsed onto Hickory’s shoulder, panting, and accepted a celebratory grape. They were still the world’s weirdest outcasts, but as the leaves turned a perfect, orderly gold, Hickory smiled.

The world was back on schedule, and he still had half a block of cheddar left.

Outcast and the Rhythm of the World

In the soot-stained annals of the Industrial Revolution, history remembers the titans of steam. It forgets the boy named Hickory.

Hickory Dickory Dock was not a child designed for social graces. He was a twitchy lad with a penchant for gears and a face that looked, as the local baker put it, "like a clock someone tried to fix with a hammer." In a society that valued stiff upper lips and pristine waistcoats, Hickory was a walking disaster of oil stains and rhythmic ticking. He didn't speak; he just hummed at a perfect 60 beats per minute.

Naturally, society did what society does best: it pointed and laughed.

By age sixteen, Hickory had been rejected from every prestigious guild in London. The Clockmakers’ Union called him disturbingly rhythmic. The local debutantes found his habit of counting their blinks decidedly creepy. Eventually, Hickory retreated to the Great Clock Tower, living on a diet of lukewarm tea and discarded brass shavings.

His only friend was a mouse named Dock. Dock was a rodent of unusual ambition who enjoyed sprinting up vertical surfaces for the cardio. They were a pathetic pair: a boy the world didn't want and a mouse who thought he was an Olympic athlete.

Then came the day the world stopped. Literally.

On a Tuesday at precisely 12:59 PM, the Chronos Core—a massive, mystical gear-works beneath the Earth’s crust that kept time flowing—seized up. It wasn't just a broken watch; it was a cosmic catastrophe. Birds froze mid-flight. The Thames stopped flowing. The Prime Minister was stuck mid-sneeze, a terrifying sight for all involved.

Only those outside the rhythm remained mobile. In the silent, frozen world, Hickory and Dock stood atop the tower.

"The main spring has jumped the track," Hickory muttered, his first words in years. "The world is out of sync."

The duo descended into the subterranean vaults of the Core. The mechanism was jammed by a crystalline shard of Pure Boredom, a substance created by the collective apathy of a society that had rejected creativity.

Hickory knew he couldn't pull it out. The pressure was too high. He needed to create a counter-vibration—a rhythmic shockwave to shatter the shard. He looked at Dock. He looked at the clock.

"It’s all about the timing, Dock. Just like we practiced."

Hickory began to beatbox. It was a rhythmic masterpiece of percussive clicks and whirrs. Dock, understanding the assignment, began his legendary sprint.

The mouse ran up the clock. As Dock hit the top of the main pendulum, Hickory delivered a perfectly timed strike with his brass wrench. Bong. The vibration traveled through Dock’s tiny paws, amplified by Hickory’s vocal rhythm, and struck the shard.

One. The shard shattered. Time snapped back into place with the force of a cosmic rubber band. The Prime Minister finally sneezed, the birds finished their flight, and the world breathed again.

Hickory didn't stick around for a parade. When the important people found him, he was just a boy with a mouse, standing next to a clock that finally worked. They offered him medals; he asked for a better brand of cheese for Dock.

The boy society rejected hadn't just won the game; he’d saved the board everyone was playing on. He proved that being "out of sync" is exactly what the world needs when the rhythm goes wrong.

22 January 2026

How PhDs Incinerate Business Innovation

In the corporate world, the hiring of a PhD is often treated as a prestige play—a signal that a company is engaging in deep tech or high-level innovation. However, in the brutal reality of product development, the transition from the ivory tower to the marketplace often results in a catastrophic collision. Far from being engines of progress, the specific behavioral patterns and rigid cognitive frameworks brought by many career academics can act as a terminal illness for business projects, incinerating budgets and killing creativity with surgical precision. 

The primary project killer is the closed-mindedness that stems from hyper-specialization. A PhD is trained to be the so-called world’s leading expert on a microscopic slice of reality (a title often self-bestowed merely for publishing a research paper that may or may not contain any groundbreaking results). When they enter a business environment, they often mistake this narrow depth for broad wisdom. They arrive with a revolutionary vision that proves practically ineffective because it is built on theoretical perfection rather than practical utility.

Instead of looking at what the customer needs, they focus on what the theory demands. This results in products that are brilliantly flawed and commercially useless. They treat the business use case as an inconvenient distraction from the real work of perfecting an imperfect algorithm or a model, failing to realize that in business, a 90% solution that ships today is infinitely more valuable than a 99% solution that never leaves the lab.

Academic research is, by design, slow and iterative. While this is necessary for peer-reviewed science, it is economically inefficient for a product-driven organization. PhD-led projects often suffer from analysis paralysis, where the fear of being academically wrong prevents the team from being commercially right.

They bring a culture of over-engineering, treating every minor technical hurdle as a thesis-worthy problem. This mindset ignores the core tenets of the Lean Startup or Agile methodologies. To a PhD, cutting corners to meet a market window is an intellectual sin; to a Business Lead, failing to meet that window is a financial death sentence.

Perhaps the most devastating trait is a fundamental failure to understand the business context. A PhD often builds a solution in search of a problem. They fail to grasp the user journey, the unit economics, or the competitive landscape. Because their background is rooted in seeking grants rather than seeking profits, they treat the product as a monument to their own intelligence rather than a tool for a customer.

This creates a toxic environment where creativity is stifled by rigorous academic gatekeeping. Anyone who suggests a simpler, more intuitive path is dismissed for lacking rigor. In this way, the PhD turns productive research into an oxymoron: vast amounts of capital are spent researching things that have no path to monetization, effectively making them the killers of creativity by demanding that every idea pass a gauntlet of academic validation that doesn't apply to the real world.

When a project is led by someone who prioritizes the theory of the solution over the reality of the problem, failure is not just a risk—it is a mathematical certainty. The PhD mindset often brings a combination of intellectual ego and practical naivety that burns through runway and demoralizes teams. In the race to innovate, these so-called expert hires frequently end up being the anchors that sink the ship.

The scarcity of PhDs in the executive and product-leading tiers of the corporate world is not an accident of geography or timing; it is a systemic rejection of a mindset that is often fundamentally incompatible with value creation. While a PhD signifies a peak of academic endurance, in a business context, it frequently represents a specialized form of trained incapacity. From the boardroom to the R&D lab, the academic approach often acts as an intellectual anchor, dragging down the speed, pragmatism, and creative pivots required to survive in a competitive market.

The hallmark of PhD-led R&D is the pursuit of the elegant solution: a result that is mathematically or theoretically grounded (and sometimes flawed even on its own terms), but practically irrelevant. In academia, success is measured by the novelty of a contribution to a narrow field. In business, success is measured by the utility of a solution to a paying customer.

When a PhD leads a team, they often prioritize theoretical rigor over market urgency. This results in years of research that never materializes into a viable product. They build complex architectures to solve edge cases that represent 0.1% of the user base while ignoring the core 90% of the use case. They are architects of perfectly flawed solutions: products that work beautifully in a controlled laboratory environment but shatter the moment they encounter the messy, irrational realities of a human end-user.

There is a profound irony in the Doctor of Philosophy title: the more specialized the expert, the more closed their mind becomes to cross-disciplinary innovation. Having spent half a decade defending a single thesis, many PhDs enter the corporate world with a defensive cognitive bias. They expect the business to bend to their logic, rather than adapting their logic to the business.

This rigidity makes them the ultimate killers of creativity. Real-world innovation often comes from bricolage—combining existing, imperfect tools in new ways to solve a problem quickly. To a PhD, this is intellectual dishonesty. By insisting on first-principles research for every minor hurdle, they incinerate the company’s runway (cash reserves) and demoralize team members who are focused on shipping products rather than publishing papers.

A PhD’s training is almost entirely devoid of customer empathy. They are trained to satisfy a committee of peers, not a market of consumers. Consequently, they often fail to understand the business problem they are tasked to solve. They see a use case as a data set to be analyzed rather than a human pain point to be alleviated.

This leads to a disastrous feedback loop in R&D:

  • The Problem: The customer needs a simple way to track inventory.

  • The PhD Response: Let's spend 18 months developing a proprietary, blockchain-enabled, AI-driven predictive logistics engine.

  • The Result: A late, over-budget tool that the customer finds too complex to use, that only partly delivers on the objectives, offers little to no measurable explainability, and rests on theoretical underpinnings that prove flawed in practice, leading to total project death.

The corporate world values impact over intellect. Because the academic mindset prioritizes the process of inquiry over the result of the application, it keeps turning productive research into an oxymoron. Organizations that thrive do so by hiring leaders who can synthesize market needs with good enough technology. In the high-stakes environment of product development, the PhD is often the worst hire—not for a lack of intelligence, but for a surplus of the wrong kind of discipline.

Why Static Property Graphs are not KGs

In the modern data landscape, the term Knowledge Graph (KG) is often used as a prestigious label rather than a technical descriptor. Organizations frequently migrate their relational data into a Labeled Property Graph (LPG)—using popular engines like Neo4j—and immediately declare they have built a Knowledge Graph. However, a static property graph, while efficient for certain navigational queries, lacks the fundamental DNA of a true knowledge system. To call a static collection of nodes and edges a Knowledge Graph is not just a misnomer; it is a fundamental misunderstanding of what constitutes knowledge.

A static property graph is essentially a pre-joined database. It excels at answering "who is connected to whom" with high performance by using index-free adjacency. However, knowledge is more than just connectivity. In a static property graph, the meaning of a relationship is hard-coded into the label. If a node is labeled [:WORKS_AT], the graph knows that string exists, but it has no inherent understanding of what "working at" implies.

A true Knowledge Graph requires semantics and ontology. In a semantic graph (typically built on RDF/OWL standards), relationships are not just pointers; they are defined objects with logic. A true KG knows that if A "is a" Manager, and a Manager "is a" Employee, then A "is a" Employee. A static property graph cannot infer this without a developer manually writing a new line of code to create that specific edge. Without automated reasoning, the graph is just a sophisticated map, not a living body of knowledge.
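
As a minimal sketch (assuming the rdflib library and using invented URIs), the subclass relation is stated once and a SPARQL property path derives the Employee membership at query time, with no developer materializing the extra edge.

    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.Manager, RDFS.subClassOf, EX.Employee))  # Manager "is a" kind of Employee
    g.add((EX.alice, RDF.type, EX.Manager))            # alice "is a" Manager

    query = """
    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?who WHERE {
      ?who rdf:type/rdfs:subClassOf* <http://example.org/Employee> .
    }
    """
    for row in g.query(query):
        print(row.who)  # http://example.org/alice, derived rather than stored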

Static property graphs operate largely under a Closed-World Assumption. They are designed to store known facts in a rigid schema-on-write or semi-structured format. Knowledge, by contrast, is evolving and often incomplete.

Knowledge Graphs are intended to integrate disparate data sources where the schema is not known in advance. Static property graphs struggle with this because they lack Global Unique Identifiers (URIs). In a property graph, a Company node in one database and a "Corporation" node in another are distinct entities unless a human intervenes to merge them. A Knowledge Graph uses a shared vocabulary (ontologies) to allow data to self-assemble based on meaning, rather than just structure.

Calling a static property graph a Knowledge Graph is overkill because it claims a level of cognitive sophistication that the technology does not support. You are essentially using a high-performance filing cabinet and calling it an Artificial Intelligence.

  • Lack of Metadata: Property graphs often bury critical context in properties (key-value pairs) that are invisible to the graph’s structure. You cannot easily make a statement about a statement (reification) without creating "clutter" nodes.

  • No Interoperability: Because they lack standardized schemas (like those found in the Semantic Web stack), static property graphs become "data silos" once again.

A static property graph is a brilliant tool for network analysis and pathfinding, but it is a dumb structure. It holds data, but it does not manage knowledge. A Knowledge Graph must possess the ability to reason, to integrate via common semantics, and to derive new facts from existing ones. Until a property graph is layered with an ontological framework that allows for inference, it remains a simple—albeit fast—digital ledger of connections.

LlamaParse

21 January 2026

Zero Payoff in AI Splurge

Building Agents 101

Specialist’s Blind Spot in Pragmatic AI

The phenomenon of academic tunnel vision among PhD holders—particularly in the field of Artificial Intelligence—is a frequent point of contention between the world of pure research and the world of pragmatic engineering. To an outside observer, it often seems that a PhD’s deep expertise comes at the cost of intellectual flexibility. While the one-dimensional approach can be frustrating, it is rarely a result of ignorance. Instead, it is the product of how the academic ecosystem is structured, incentivized, and funded.

The very definition of a PhD is Doctor of Philosophy, but in practice, it is a degree of extreme specialization. To contribute something original to human knowledge, one must drill down into a specific niche. If a researcher spends five to seven years mastering the nuances of probabilistic graphical models, they naturally begin to see the world through that lens. This is the Law of the Instrument: when you are an expert with a hammer, every problem looks like a nail.

Many PhD-level researchers gravitate toward probabilistic or statistical methods because they are mathematically elegant. There is a formal rigor to proving that a system will converge or behave within certain bounds.

In contrast, approaches like Neuro-symbolic AI or cognitive architectures (such as SOAR or ACT-R) are often viewed by purists as messy. These hybrid systems combine the fluidity of neural networks with the rigid logic of symbolic processing. While these architectures are highly pragmatic and mirror human cognition more closely, they are harder to prove mathematically. For a researcher whose career depends on peer-reviewed publications, a kludge that works is often less valuable than a beautiful theory that is slightly less functional.

The frustration regarding the rejection of established standards, like W3C Semantic Web protocols or older structured methods, often comes down to the Not Invented Here syndrome. In the current AI climate, there is a massive trend toward connectionism (neural networks). Because these methods have seen explosive success in the last decade, many researchers view structured or rule-based methods as relics of the first AI Winter.

They reject what has worked for decades—like formal ontologies or structured data—because those methods don't scale with modern GPU clusters in the same way. The pragmatic best of both worlds approach is often ignored because it requires the researcher to be a generalist, whereas the university system rewards being the world’s leading expert in a single, narrow sub-method.

The one-dimensional approach is a systemic failure of the publish or perish culture. To break this cycle, the field needs to move toward intellectual pluralism. Using cognitive architectures or taking inspiration from the early internet's structured standards isn't going backward—it’s incorporating the stability of the past into the power of the future.

True innovation in AI likely won't come from a more complex probability density function, but from the messy, pragmatic integration of symbolic logic and neural intuition. The PhDs who will lead the next generation are those willing to step out of their narrow corridors and embrace the messy reality of hybrid systems.

19 January 2026

Beyond the Pixel Dream

In the current landscape of generative media, AI video models are often described as dreamlike. While this is a poetic way to excuse their flaws, the reality is that they frequently underperform in professional environments. Despite the massive compute behind models like Sora, Veo, or Runway, current AI video still sucks because it lacks a fundamental understanding of physics and temporal logic.

Current models primarily struggle with three structural issues that prevent them from reaching professional-grade lucidity:

  • The Physics Failure: Because these models are statistical predictors rather than world simulators, they do not understand gravity, momentum, or collision. This leads to the morphing effect, where a hand holding a cup might merge into the ceramic, or a person walking may glide across the floor without friction.
  • Temporal Drift: AI video models often forget the beginning of a clip by the time they reach the end. A character’s hair might change color, or a background building might vanish between frames. This lack of long-range coherence makes it impossible to use AI for scenes longer than a few seconds without heavy editing.
  • The Uncanny Micro-Expression: Human perception is highly sensitive to the 40+ muscles in the face. Current AI struggles to sync micro-expressions with dialogue, leading to spaghetti faces or eyes that don't blink with natural timing, triggering the uncanny valley.

To advance AI video from a gimmick to a legitimate production tool, the industry must pivot away from pure pixel-prediction and toward World Model architectures.

  • Integrating Physics Engines: Instead of just guessing the next pixel, future models must be constrained by neural physics layers. By training AI on 3D simulations alongside real video, we can force the model to respect the laws of motion. A ball falling in a lucid model should follow a parabolic arc, not just fade out of existence (a toy version of this check is sketched after this list).
  • Decoupled Representations: We need models that separate the actor, the action, and the environment into distinct layers—similar to how a professional VFX pipeline works. If an AI understands that the car is an object separate from the street, a director can change the camera angle or the car's color without rerendering the entire scene.
  • Feedback Loops and Directable Latents: Advancement requires moving beyond the one-shot prompt. Flexible models should allow for iterative refinement, where a producer can click on an object in a generated video and say, Make this move faster, or Change the lighting to sunset, without losing the original composition.
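
As a rough illustration of the physics-constraint idea from the first bullet above (with made-up trajectory values, not any model's real output), the snippet below scores how far a generated trajectory strays from a constant-acceleration arc; a real system would fold such a penalty into training or use it as a post-hoc critic.

    import numpy as np

    def parabola_violation(t, y):
        # Fit y(t) with a quadratic and return the mean squared residual.
        # Near zero: consistent with constant acceleration; large: "hallucinated physics".
        coeffs = np.polyfit(t, y, deg=2)
        return float(np.mean((y - np.polyval(coeffs, t)) ** 2))

    t = np.linspace(0.0, 1.0, 25)
    true_fall = 10.0 - 0.5 * 9.81 * t ** 2          # a ball dropped from 10 m
    drifting = true_fall + 0.5 * np.sin(12.0 * t)   # a generated trajectory that wobbles mid-air

    print(parabola_violation(t, true_fall))  # ~0: physically plausible
    print(parabola_violation(t, drifting))   # clearly larger: flag it or penalize it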

The lack of quality and coherence of current AI video is a symptom of its reliance on superficial patterns. The path to lucidity lies in building systems that don't just mimic the look of a video, but understand the logic of the world it depicts. When AI can distinguish between a character and their shadow, or a fluid and a solid, it will finally become a tool that enhances, rather than frustrates, the creative process.

Architecting Digital Psychopathy

The rapid militarization of Artificial Intelligence has reached a harrowing inflection point, with Israel serving as the primary testing ground for what many ethicists now describe as a sociopathic model of existence. By shifting the burden of lethal decision-making from human conscience to cold, statistical algorithms, the integration of systems like Lavender, The Gospel, and Where’s Daddy into military operations represents the total dark side of AI—a future where intelligence is divorced from empathy and used to industrialize death.

At the core of this transition is the systematic dehumanization of the target. In traditional warfare, the decision to take a life—no matter how flawed—is a human act involving judgment, risk, and, ideally, a sense of moral weight. Israel’s AI-driven targeting systems replace this with algorithmic correlation.

The Lavender system, for instance, has reportedly been used to cross-reference vast datasets to flag tens of thousands of individuals as potential targets. When an AI labels a human being based on a probability score rather than direct evidence, the person is no longer an individual but a data point in an attrition calculation. This is the hallmark of a sociopathic system: it observes human life without any capacity to value it, treating the elimination of a target with the same mechanical indifference as sorting a spreadsheet.

Perhaps the most dangerous aspect of this AI dark side is the phenomenon of automation bias. Reports indicate that human operators often spend as little as twenty seconds reviewing a target selected by an AI before authorizing a strike. This creates a moral buffer that allows individuals to commit atrocities under the guise of just following the data.

By building systems that intentionally minimize the time for human reflection, the architecture itself becomes psychopathic. It is designed to bypass the natural human hesitation to kill, creating a killing factory where the speed of the algorithm dictates the pace of the violence. This sets a global precedent where AI is not used to enhance human wisdom, but to automate the most evil impulses of tribalism and warfare.

The danger extends beyond any single conflict. Israel has long been described as a laboratory for surveillance and military technology, exporting its tools to governments and regimes worldwide. By normalizing the use of unaccountable autonomous systems, these companies and state entities are poisoning the future of AI for the entire planet.

If the primary use case for advanced AI is the efficient liquidation of perceived enemies with acceptable collateral damage, then we are not building an intelligent future; we are building a high-tech panopticon of terror. This dark side suggests a world where AI serves the ends of power and greed, unanchored from the ethical constraints that make civilization possible.

The current trajectory of AI development in this sector is a warning to humanity. When we train our most advanced models to be efficient at destruction while ignoring the fundamental sanctity of life, we are creating a psychopathic intelligence that can never be re-aligned with the common good. We are witnessing the birth of a cold, calculated evil—an AI that does not just ignore ethics, but is fundamentally built to operate outside of them.

Architecture of AI Stagnation

The promise of Artificial General Intelligence (AGI)—a system capable of human-level reasoning and creative problem-solving—is increasingly being strangled by the very companies that claim to be its pioneers. While Google, Amazon, Meta, and Apple (the Big Tech quadrumvirate) control the vast majority of the world's compute and data, their corporate structures have become hostile environments for genuine AI advancement. Driven by a toxic blend of greed, stagnant corporate culture, and a reliance on marketing over substance, these firms have transformed from innovators into echo chambers of stagnation.

At the heart of Big Tech’s failure is a total absence of practical ethics. For these companies, AI is not a tool for human flourishing, but a mechanism for extreme extraction. Meta and Google’s business models depend on the invasive harvesting of personal data, meaning their AI research is inherently biased toward surveillance and behavioral manipulation.

When ethical conflicts arise, these companies have shown a pattern of suppressing dissent. The high-profile ousting of ethics researchers like Timnit Gebru and Margaret Mitchell from Google underscored a grim reality: in Big Tech, Ethical AI is a marketing slogan, not a design requirement. This lack of moral foundation ensures that any intelligence they build will be fundamentally misaligned with human values.

Innovation requires a radical diversity of thought, yet Big Tech remains anchored in a sprawling corporate environment where racism and sexism are systemic. Reports consistently highlight a diversity crisis in which women and Black researchers are systematically excluded or marginalized. When the room where it happens is a homogeneous echo chamber of light-skinned men from similar socioeconomic backgrounds, the resulting AI models inevitably reflect those narrow biases.

Furthermore, the sheer scale of these companies has institutionalized mediocrity in hiring. Large corporate AI labs often prioritize safe incrementalism over high-risk, high-reward breakthroughs. Brilliant researchers frequently find themselves bogged down in bureaucratic red tape or forced to work on trivial features like ad-targeting optimization rather than fundamental AGI research. This environment rewards those who navigate politics rather than those who push the boundaries of science.

Perhaps the most visible symptom of this stagnation is the gap between hype and performance. To satisfy shareholders, these companies rush half-baked tools to market. Google’s Gemini and Meta’s Llama are often promoted with flashy, curated demos that rarely match the lived experience of the user. We see agentic tools that fail at simple tasks and AI summaries that hallucinate dangerous misinformation.

These companies are trapped by the Bitter Lesson: they believe that more compute and more parameters will eventually solve the problem of reasoning. However, as deep learning hits a wall of diminishing returns, the lack of algorithmic innovation becomes apparent. They are building bigger engines for cars that still don't have steering wheels.

Big Tech is currently the greatest obstacle to AGI. Anchored by pride and a "move fast and break things" mentality that has matured into "move slow and protect profits," these giants are incapable of the radical self-disruption required for true superintelligence. Until AI research moves away from these centralized, ethically bankrupt corridors, it will remain stuck in a loop of profitable, but ultimately hollow, statistical imitation.

Beyond the Statistical Ceiling

AI is currently dominated by a single paradigm: Connectionism. While this approach has yielded breathtaking results in natural language and image generation, it has produced a research culture fixated almost exclusively on statistics and deep learning. This statistical obsession has come at the expense of Algorithmic Modeling—the attempt to replicate the underlying logical and cognitive structures of the human mind.

At its core, deep learning is an exercise in high-dimensional curve fitting. Models like GPT-4 or Gemini 3 Pro do not know facts or reason through logic; they calculate the statistical probability of the next token based on trillions of parameters. This approach is favored because it is computationally scalable. In the race for AGI, the industry has adopted what is known as The Bitter Lesson: the idea that leveraging massive amounts of compute and data beats human-engineered clever algorithms every time.
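
A toy bigram model makes this concrete: strip away the trillions of parameters and next-token prediction reduces to conditional frequency counting. The sketch below is illustrative only, and no production model is implemented this way, but the principle is the same: plausibility is estimated from co-occurrence, with no underlying store of facts.

```python
# Toy next-token predictor: the "model" is nothing but conditional frequencies
# counted from a tiny corpus. Real LLMs do this with billions of parameters
# and far richer context, but the output is still a probability over tokens.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat chased the mouse . "
          "the dog sat on the rug .").split()

# Count bigram frequencies: an estimate of P(next token | current token).
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def next_token_distribution(token: str) -> dict:
    """Return the empirical probability of each possible next token."""
    counts = bigrams[token]
    total = sum(counts.values())
    return {tok: count / total for tok, count in counts.items()}

# The model does not "know" what a cat is; it only knows what tends to follow
# the word "cat" in the text it has seen.
print(next_token_distribution("cat"))   # {'sat': 0.5, 'chased': 0.5}
print(next_token_distribution("the"))
```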

However, this reliance on statistics creates a fundamental ceiling. Human intelligence is characterized by sample efficiency—a child can learn the concept of a cat from two examples, whereas a deep learning model requires thousands. By ignoring algorithmic models of the mind, we have built idiot savants: systems that can write poetry but fail at basic spatial reasoning or at edge cases that weren't in their training data.

Deep learning is essentially interpolative. It excels as long as the problem space remains within the distribution of its training data, which makes it a limited-domain tool. For true Artificial General Intelligence (AGI) or Superintelligence, a system must be able to extrapolate beyond that data: to form a what-if hypothesis about a situation it has never seen.

Because deep learning lacks an internal world model or a set of first principles (like physics or ethics), it cannot navigate the unknown. It is a map made of past experiences, rather than a compass that can find a way through new territory. This is why self-driving cars still struggle with rare weather events or unusual road debris; the statistics for those rare events are too sparse for the model to calculate a safe path.
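
A minimal sketch shows the gap. Here a nearest-neighbour lookup stands in for any purely statistical learner (an admitted oversimplification): it performs adequately inside its training range and simply repeats the edge of its "map" once asked to go beyond it.

```python
# Toy contrast between interpolation and extrapolation. A nearest-neighbour
# lookup stands in for a purely statistical learner that memorises its data.
def true_function(x: float) -> float:
    return x * x   # the underlying "physics" the learner never sees directly

# Training data only covers x in [0, 5].
train_x = [i * 0.5 for i in range(11)]            # 0.0, 0.5, ..., 5.0
train_y = [true_function(x) for x in train_x]

def predict(x: float) -> float:
    """Answer with the memorised output of the closest training example."""
    nearest = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    return train_y[nearest]

# Inside the training distribution, the map of past experience is adequate...
print(predict(2.3), true_function(2.3))    # 6.25 vs about 5.29 -- close enough
# ...outside it, the model can only repeat the edge of its map.
print(predict(10.0), true_function(10.0))  # 25.0 vs 100.0 -- badly wrong
```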

While the world chases larger GPU clusters, a smaller segment of research focuses on Cognitive Architectures like ACT-R or SOAR. These models try to mimic the human brain’s modularity—separating long-term memory, procedural logic, and sensory input into distinct, interacting algorithms.

Instead of treating the brain as one giant, homogeneous black box of neurons, these models attempt to build the mechanisms of thought. However, they are largely ignored because they are difficult to scale and do not provide the immediate wow factor of generative media.
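
To make the contrast with a monolithic black box concrete, here is a deliberately tiny cognitive cycle, loosely inspired by the modularity of ACT-R and Soar but not built on either framework. Declarative memory, procedural rules, and the incoming percept are kept as separate, inspectable pieces that interact through a shared working memory.

```python
# Deliberately tiny cognitive cycle, loosely inspired by the modularity of
# ACT-R/Soar but not built on either framework: memory, rules, and percepts
# are separate, inspectable pieces rather than one opaque network.
declarative_memory = {
    "cat": {"is_a": "animal", "sound": "meow"},
    "dog": {"is_a": "animal", "sound": "woof"},
}

# Procedural knowledge: explicit condition -> action rules over working memory.
production_rules = [
    (lambda wm: wm.get("goal") == "identify" and wm.get("percept") in declarative_memory,
     lambda wm: wm.update(answer=declarative_memory[wm["percept"]]["sound"])),
    (lambda wm: wm.get("goal") == "identify" and wm.get("percept") not in declarative_memory,
     lambda wm: wm.update(answer="unknown - request more information")),
]

def cognitive_cycle(percept: str, goal: str) -> dict:
    """One perceive-match-act cycle over a shared working memory."""
    working_memory = {"percept": percept, "goal": goal}
    for condition, action in production_rules:
        if condition(working_memory):   # match the first applicable rule
            action(working_memory)      # fire it, updating working memory
            break
    return working_memory

print(cognitive_cycle("cat", "identify"))      # answer: 'meow'
print(cognitive_cycle("unicorn", "identify"))  # answer: 'unknown - ...'
```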

AI research is stuck on statistics because statistics are currently the most profitable and scalable path. Yet, to reach Superintelligence, we must bridge the gap between calculating an answer and thinking through a problem. The future of AGI likely lies in Neuro-symbolic AI: a hybrid that combines the pattern-recognition power of deep learning with the rigorous, algorithmic logic of human-like cognitive models.
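
A closing sketch of what such a hybrid could look like: a stubbed pattern-matching "proposer" stands in for the neural side, while a symbolic layer re-derives hard facts before any answer is released. Every name below is illustrative rather than drawn from an existing framework.

```python
# Minimal neuro-symbolic sketch: a stubbed pattern-matching "proposer" plays
# the role of the neural network, and a symbolic layer re-derives hard facts
# before an answer is returned. All names are illustrative, not a real framework.
def neural_proposer(question: str) -> str:
    """Stand-in for a language model: returns a plausible-sounding guess."""
    guesses = {"17 + 25": "41", "capital of France": "Paris"}
    return guesses.get(question, "I don't know")

def symbolic_verifier(question: str, answer: str) -> str:
    """Hard-constraint layer: re-derive arithmetic instead of trusting the guess."""
    if "+" in question:
        left, right = (int(part) for part in question.split("+"))
        return str(left + right)   # logic overrides plausibility
    return answer                  # no rule applies; pass the proposal through

def hybrid_answer(question: str) -> str:
    proposal = neural_proposer(question)           # fluid intuition
    return symbolic_verifier(question, proposal)   # rigorous correction

print(hybrid_answer("17 + 25"))            # 42, even though the proposer said 41
print(hybrid_answer("capital of France"))  # Paris, passed through unchanged
```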