LÆKA

  • Protocol
    • Monade
    • Symbiote
    • Architect
    • Empath
  • Products
    • Seahorse
    • Artefact
    • Cognitive Engine
    • Starpod
    • Hibou
    • Sherpa
  • Academy
  • Research
    • Publications
    • Blog
  • About
    • Laeka
    • Manifesto
CONTACT
  • The Attention Mechanism Was Named Right. We Just Forgot Why.
    Contemplative AI

    When Vaswani et al. published “Attention Is All You Need” in 2017, they borrowed a term from cognitive science. Then the field promptly forgot everything cognitive science knows about attention. That forgetting is costing…

  • Why Attentional Training Produces Better Training Data
    Contemplative AI

    The quality of AI training data is the biggest bottleneck in alignment research. Most DPO and RLHF datasets are generated by crowdworkers operating under time pressure, with vague guidelines and minimal cognitive training. The…

  • Integrated Cognition in Artificial Systems: Beyond Binary Processing
    Contemplative AI

    Current AI systems think in binaries. True or false. Positive or negative. Safe or unsafe. This works for classification tasks. For anything that matters, it fails. The limitation lies in how models are forced…

  • Controlled Hallucination: Anil Seth Was Talking About LLMs Too
    Contemplative AI

    Anil Seth didn’t set out to describe how language models work. He was describing the human brain. But the parallels are so precise they border on uncomfortable. Seth’s core argument is…

  • The Hallucination Problem Isn’t a Bug. It’s a Feature We Don’t Understand Yet.
    AI Safety & Ethics

    Every major AI lab is racing to eliminate hallucinations. They’re wrong. Not about the problem — about what hallucinations actually are. Hallucination Is Just Creativity Without a Leash When a language model generates text…

  • What Attentional Training Reveals About Language Model Alignment
    Contemplative AI

    Attention isn’t about emptying the mind. It’s about watching the mind do its thing — and choosing not to follow every impulse. That distinction matters enormously when you’re trying to align a language model….

  • A Neural Network Is a Neural Network. That’s the Whole Point.
    Contemplative AI

    Every few months, someone publishes a paper claiming neural networks aren’t really neural. They’re mathematical functions. They’re statistical models. They’re fancy curve fitters. And technically, they’re right. But they’re missing the point entirely. A…

  • Beyond Rule-Based AI Ethics: Why Structural Alignment Outperforms Behavioral Constraints
    AI Safety & Ethics

    AI ethics relies on rules. Don’t generate violent content. Don’t reveal personal information. Don’t discriminate. The problem: rule-based ethics doesn’t scale to the situations that matter most — the ambiguous, context-dependent cases where you…

  • Synthetic Data: Can AI Train AI? The Evidence Says Mostly No.
    Datasets & Curation

    The pitch is seductive. Running out of training data? Just have AI generate more. Use your existing model to create synthetic datasets, then train the next model on those. Problem solved. Except the evidence…

  • The Training Data Wall: Have We Used All the Internet?
    Datasets & Curation

    There’s a problem nobody in the AI industry likes to talk about publicly. We’re running out of training data. Not hypothetically. Not in some distant future. Now. The internet is big, but it’s not…

Page navigation: Previous · 1 … 3 4 5 6 7 … 12 · Next

© 2026 LÆKA — Open Source Intelligence Lab
