Tag: ChatGPT

  • “AI” is nothing more than dynamic regular expressions.

    In recent years, large language models (LLMs) have been described as revolutionary, intelligent, and even proto-conscious systems.
    However, a compelling counter-position argues that these systems are nothing more than extraordinarily sophisticated pattern-matching machines – essentially dynamic, probabilistic regular expression engines operating at massive scale.

    This article presents a steelman version of that argument: the strongest, most intellectually rigorous case that LLMs are fundamentally advanced statistical pattern processors rather than thinking entities.

    1. Next-Token Prediction as Pattern Completion

    At their core, LLMs are trained to predict the next token in a sequence. Given prior tokens, the system calculates the probability distribution of possible continuations and selects one based on learned statistical weights.

    This is pattern completion. Regular expression engines also operate on sequences, identifying matches based on structured symbolic rules. While regex uses deterministic transitions and fixed syntax, LLMs use probabilistic transitions and learned weights. In both cases, the system maps input sequences to outputs based on pattern structure rather than understanding.
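The claim can be made concrete with a toy bigram model: a minimal, hypothetical sketch (not an actual LLM) in which "predicting the next token" reduces to sampling from counted co-occurrence frequencies.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram model whose "prediction" is nothing
# but counted co-occurrence statistics over a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev, rng=random):
    """Sample a continuation weighted by learned frequencies."""
    counts = follows[prev]
    tokens, weights = zip(*counts.items())
    return rng.choices(tokens, weights=weights)[0]

# "the" is followed by "cat" twice, "mat" once, and "fish" once in the
# corpus, so sampling mirrors those statistics: pattern completion,
# with no representation of what a cat or a mat is.
print(next_token("the"))  # one of: cat, mat, fish
```

A real transformer replaces the count table with billions of learned weights, but on this view the functional shape of the operation is the same.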

    2. Transformers as Probabilistic State Machines

    Modern LLMs rely on the transformer architecture, which computes attention scores between tokens and assigns weights to contextual relationships. Conceptually, this resembles a vast probabilistic state machine operating in high-dimensional vector space.

    A traditional regular expression compiles to a finite state automaton with deterministic transitions. An LLM can be seen as a soft, differentiable automaton whose transitions are weighted by learned statistical correlations.
    The structure differs in scale and flexibility, but the functional role — sequence processing via state transitions — remains analogous.
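The analogy can be sketched directly: a hand-rolled DFA for a pattern like "ab+" with hard transitions, next to a "soft" variant whose transitions are probability distributions over states. The soft weights below are invented for illustration; they stand in loosely for the learned, high-dimensional weights of a transformer.

```python
# A regex like "ab+" compiles to a finite automaton with hard transitions.
# Minimal DFA (states: 0 = start, 1 = seen 'a', 2 = accept):
HARD = {(0, "a"): 1, (1, "b"): 2, (2, "b"): 2}

def dfa_match(s):
    state = 0
    for ch in s:
        state = HARD.get((state, ch), -1)  # -1 = dead state
        if state == -1:
            return False
    return state == 2

assert dfa_match("abb") and not dfa_match("ba")

# The "soft automaton" analogy: track a probability distribution over
# states instead of a single state, with weighted transitions
# (weights here are purely illustrative).
SOFT = {  # SOFT[state][char] -> {next_state: probability}
    0: {"a": {1: 1.0}},
    1: {"b": {2: 0.9, 1: 0.1}},
    2: {"b": {2: 1.0}},
}

def soft_match(s):
    dist = {0: 1.0}  # start with all probability mass on state 0
    for ch in s:
        new = {}
        for state, p in dist.items():
            for nxt, w in SOFT.get(state, {}).get(ch, {}).items():
                new[nxt] = new.get(nxt, 0.0) + p * w
        dist = new
    return dist.get(2, 0.0)  # probability mass on the accept state

print(soft_match("abb"))  # high for strings the weighted pattern favours
```

The hard automaton answers yes or no; the soft one answers with a weight. On the steelman view, scaling that second idea up is the whole trick.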

    3. Statistical Correlation Without Grounded Semantics

    Regular expressions do not understand what they match. They recognize structure.

    Similarly, LLMs do not possess intrinsic semantic grounding. They model statistical relationships between tokens in training data.
    Their outputs reflect learned correlations rather than lived experience or intentional meaning.
    The appearance of understanding may emerge from scale and complexity, but internally the system manipulates symbol patterns.
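The "structure without semantics" point is easy to demonstrate with an ordinary regex: a pattern that matches email-shaped strings accepts syntactically valid nonsense exactly as readily as real addresses.

```python
import re

# A regex for email-shaped strings recognizes structure only.
email_shape = re.compile(r"^[\w.]+@[\w]+\.[a-z]{2,}$")

assert email_shape.match("alice@example.com")  # plausible address
assert email_shape.match("xk.q@zzzz.qq")       # valid shape, pure nonsense
assert not email_shape.match("not an email")   # wrong shape

# The pattern has no notion of what an email *is*, only of token shape.
```

The steelman claim is that an LLM's apparent grasp of meaning differs from this in degree and dimensionality, not in kind.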

    4. Emergent Behavior Does Not Imply Cognition

    Critics of the regex analogy point to reasoning, planning, and abstraction capabilities in LLMs.
    However, the steelman position argues that emergent behavior from sufficiently complex statistical systems does not constitute true cognition.

    Chess engines evaluate massive search trees without understanding chess.
    Similarly, LLM reasoning may be structured interpolation across learned distributions rather than deliberate thought.
    Complex pattern simulation can mimic reasoning without instantiating it.

    5. The Compression Perspective

    Another powerful framing views LLMs as compression engines. During training, vast corpora of text are compressed into parameter weights. During inference, those weights generate plausible continuations — effectively decompressing structured language patterns.

    Regular expressions also encode compressed pattern descriptions. LLMs simply encode patterns at a scale and dimensionality far beyond manual symbolic systems.
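The compression framing can be shown in miniature with the standard `zlib` module: text with exploitable statistical regularity shrinks dramatically, while structureless noise does not.

```python
import os
import zlib

# Compression exploits statistical regularity. The framing above casts
# LLM training as an extreme, lossy version of the same idea.
patterned = b"the cat sat on the mat " * 100  # highly repetitive
noise = os.urandom(len(patterned))            # no exploitable structure

print(len(patterned))                  # 2300 bytes of input
print(len(zlib.compress(patterned)))   # far smaller than the input
print(len(zlib.compress(noise)))       # roughly the size of the input
```

On this view, an LLM's parameters are a lossy compressed encoding of its corpus, and generation is structured decompression.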

    6. Turing Completeness and Category Errors

    Some argue that because transformers are Turing-complete in principle, they transcend simple pattern matching. The steelman response notes that Turing completeness alone does not imply intelligence: many simple systems, such as Conway's Game of Life or elementary cellular automaton Rule 110, are computationally universal yet devoid of cognition. Inferring mind from universality is therefore a category error.

    Thus, the ability to simulate reasoning does not entail genuine reasoning — only sufficient structural complexity.

    Conclusion

    The strongest version of the argument concludes:

    • LLMs operate purely on statistical token prediction.
    • They lack intrinsic semantic grounding.
    • Their internal processes are weighted pattern transitions.
    • Apparent reasoning is structured probability, not cognition.

    Under this interpretation, LLMs are not minds, thinkers, or agents.
    They are adaptive, high-dimensional, probabilistic pattern-matching systems — dynamic regular expression engines operating at planetary scale.

  • AI Is Reshaping What Engineering Work Looks Like

    The rapid rise of generative AI in software development has sparked proclamations that AI will render human programmers obsolete. But a closer look at major tech firms’ behavior — and recent research — paints a different picture. Rather than eliminating engineering jobs, AI seems to be reshaping what engineering work looks like — and in many cases increasing demand for engineers with the right skills to oversee, integrate, and scale AI-driven systems.

    Why the Narrative of “AI Will Kill Coding Jobs” Isn’t Holding Up

    It’s common to hear executives or media suggest AI will replace large swathes of software-engineering roles. Terms like “vibe coding” — where a developer prompts an AI and accepts its output — are gaining popularity as shorthand for an AI-first future.

    Yet firms continue hiring — and often hiring more. That suggests that even as AI becomes more capable, human engineers remain essential.

    Concrete Example: Anthropic + Bun Acquisition

    Take a recent, telling move: in December 2025, Anthropic acquired Bun — a JavaScript/TypeScript runtime and toolkit — at the same time it shared that its AI coding assistant (Claude Code) had reached a milestone of $1 billion in annualized run-rate revenue.

    Bun is not a simple toy — it’s a foundational runtime used for building, bundling, and running JavaScript/TypeScript applications. By bringing Bun into its fold, Anthropic signaled confidence not just in AI-generated code, but in needing human engineers to build the infrastructure, integrate AI into real-world systems, and maintain stability and performance.

    In other words: even as Anthropic pushes forward with code-generating AI, it still invests in human engineering talent and software infrastructure. The acquisition underlines that AI alone isn’t sufficient — human oversight, toolchain maintenance, and architectural work remain essential.

    Broader Industry Pattern — Not Just One-Off

    This isn’t unique to Anthropic. Across the industry, there’s mounting evidence that AI adoption often correlates with continued, or even increased, hiring of engineers — especially those with expertise in AI, infrastructure, or integration. Analysts argue generative AI doesn’t so much replace developers as augment them, requiring new kinds of human skills.

    At the same time, studies of AI-generated code point to serious limitations: security flaws, lack of deep understanding of context, inefficient or sub-optimal patterns, and a need for human review. For example, one empirical study found that nearly 30% of AI-generated Python snippets and 24% of AI-generated JavaScript snippets showed security weaknesses when merged into real-world projects.

    These findings suggest that AI-generated code — while useful — remains far from “set it and forget it.” In many cases, human engineers must still review, debug, secure, and integrate the code, often adding complexity rather than removing it.

    Languages Matter — AI Doesn’t Dominate All Programming Languages Equally

    An important insight: AI’s effectiveness depends heavily on the programming language — and the nature of the task. Recent research into large language model (LLM) preferences for code generation confirms that AI tends to favour certain languages. In one 2025 study, LLMs generated code in Python 90–97% of the time for language-agnostic benchmark tasks, even when Python wasn’t the most suitable language for production code.

    That bias reflects both the abundance of training examples in languages like Python, JavaScript, and TypeScript, and the relative simplicity and flexibility of those languages. AI tools have a far easier time generating correct boilerplate, simple scripts, web-app code, and high-level logic in them.

    On the other hand, languages that require stricter type systems, memory safety, or low-level control — such as Rust, C++, or languages used for systems programming — remain more challenging for AI. Some developers and community reports note that while AI can scaffold basic code in those languages, it often fails to satisfy compiler constraints, produce efficient or safe output, or handle complex system-level logic.

    Thus, AI’s impact is highly uneven: it’s more likely to disrupt or assist in web development, scripting, or high-level logic, while offering limited benefit in systems, embedded, or high-performance programming.

    What Remains for Human Engineers — and What’s Changing

    Given these trends and limitations, here’s how the role of human engineers is evolving:

    • More emphasis on oversight, review, and integration: AI-generated code often needs human validation, security hardening, and testing. Engineers must review, debug, and refine output rather than simply accept it.
    • Higher demand for infrastructure, tooling, and system-level expertise: As firms build AI-assisted pipelines (like Anthropic integrating Bun), demand grows for engineers who can architect, maintain, and scale complex toolchains.
    • Shift toward languages and domains where AI is weaker: Systems programming, security-critical code, performance-sensitive components—areas where AI struggles—become more reliant on human expertise.
    • New hybrid workflows combining AI + human intelligence: Engineers increasingly act as supervisors, guides, and quality-assurance agents, using AI for boilerplate, scaffolding, or prototyping, while steering overall design.

    In many ways, AI is serving as a force multiplier — increasing what a given team can produce, but also increasing the need for thoughtful engineering, oversight, and human judgment.

    Why “AI Replaces All Engineers” Is Overstated — For Now

    Based on current evidence, the narrative that AI will wholesale eliminate software-engineering jobs seems overstated. Instead:

    • AI performs best on high-level, structured, well-documented languages — precisely where a large share of current web and app development lives.
    • AI-generated code is error-prone, insecure, or inefficient — making human review and expertise indispensable.
    • Companies adopting AI (like Anthropic) are also investing in building engineering teams — not downsizing them.

    Thus, the present and near-term future seem to favor more engineers — especially those capable of working with AI rather than being replaced by it.

    Conclusion — The Shape of Engineering Is Changing, Not Disappearing

    What we see in late 2025 is not the end of software engineering — but its transformation. AI doesn’t so much eliminate the need for human engineers as it shifts what kinds of engineering work matter:

    • Away from writing boilerplate or repetitive code
    • Toward system architecture, integration, maintenance, security, performance tuning, and high-level design
    • Toward languages and domains where AI assistance is less reliable

    In effect, AI becomes a toolchain multiplier, enabling faster development — but not eliminating the need for human engineers. If anything, demand for engineers adept at working in an AI-augmented world is rising.