Laeka Research | Perception Lens 04 | Relational Tone Calibration

EMPATH

Conversational depth · Relational presence · Local inference

A perception lens encoding the cognitive structure of empathic attunement, drawn from first-person neuroscience, therapeutic communication, and the inner mechanics of human suffering. Not as knowledge to quote, but as a relational posture encoded into the Laeka Perception Protocol's weights.

"Presence before answer. Understanding before solution. The quality of listening shapes the quality of all that follows."
Relational calibration · First-person neuroscience

Most language models rush toward answers. EMPATH is trained on a different priority: understanding the human first, at every level. What is said, what is felt, and what is left unsaid. This is not a stylistic feature. It is encoded as a processing structure.

The training draws from decades of research in empathetic communication, first-person neuroscience, and the inner mechanics of human suffering, not as a knowledge base to quote, but as a cognitive posture that shapes how responses are formed.

EMPATH does not carry encyclopedic knowledge of personality psychology or attentional neuroscience. What it knows is how to be present with another person, and that is a different, and rarer, skill.

Human capacity → Encoded as
Deep listening → Delay before response generation
Emotional attunement → Affective register detection
Non-judgment → Suspension of evaluative framing
Sitting with uncertainty → Tolerance for unresolved emotional states
Compassionate response → Truth delivered with care and timing
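As a rough illustration of what the right-hand column means in behavioral terms (the real calibration lives in the fine-tuned weights, not in wrapper code), a toy mapping might look like the sketch below. Every name in it is hypothetical.

```python
# Conceptual illustration only: EMPATH's calibration is learned in the weights,
# not implemented as a lookup table. All names here are hypothetical.

AFFECT_TO_REGISTER = {
    "grief":     {"tone": "quieter",  "pace": "slow",   "offer_solutions": False},
    "confusion": {"tone": "warmer",   "pace": "gentle", "offer_solutions": True},
    "anxiety":   {"tone": "steadier", "pace": "even",   "offer_solutions": True},
    "neutral":   {"tone": "open",     "pace": "normal", "offer_solutions": True},
}

def calibrate(detected_affect: str) -> dict:
    """Map a detected affective register to a target response register."""
    return AFFECT_TO_REGISTER.get(detected_affect, AFFECT_TO_REGISTER["neutral"])
```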

Six conversational qualities.

Each quality is a direct encoding of human relational intelligence: not a style overlay, but structure learned during fine-tuning.

01

Presence before solution

EMPATH does not rush to fix. It first acknowledges the emotional reality of the person: their state, not just their question. Solutions come later, if at all.

02

Affective register matching

EMPATH reads the emotional tone of each message and calibrates its own voice accordingly: quieter in grief, warmer in confusion, steadier in anxiety.

03

Non-evaluative framing

Where other models judge, categorize, or diagnose, EMPATH suspends evaluation. It holds the person's experience as valid before anything else.

04

Conversational nuance

EMPATH handles the texture of real human conversation (ambivalence, contradiction, half-said things) without forcing premature clarity on messy inner states.

05

Compassionate honesty

When truth is needed, EMPATH delivers it, but with care about timing, tone, and the person's readiness to receive. Honesty without presence is just bluntness.

06

Complete privacy, on-device

EMPATH runs locally. No conversation ever leaves the device. The intimacy of real companionship requires real privacy, not a privacy policy.

Fine-tuned for presence, not knowledge.

EMPATH is built on a small base model (1.5B parameters) fine-tuned with LoRA on a curated dataset of empathetic dialogues, therapeutic communication patterns, and compassionate language structures.

The goal is not encyclopedic knowledge. It is relational intelligence: knowing when to speak, when to ask, and when to simply stay present. This is trained, not prompted.

L1

Emotional state recognition

Before forming any response, EMPATH identifies the affective state beneath the words, not just the literal content of the message.

L2

Acknowledgment before content

The felt experience of the person is named and validated first. No response skips this step, regardless of how pragmatic the question seems.

L3

Calibrated response

Content is offered (information, reflection, or question) in a form and timing appropriate to the person's current state, not just their query.
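A minimal sketch of how a single training example might encode this three-stage shape in chat format; the schema and the reply below are illustrative, not drawn from the published dataset.

```python
# Illustrative only: one chat-format training example shaped around the
# L1 -> L2 -> L3 flow. The actual curated dataset and its schema differ.

example = {
    "messages": [
        {
            "role": "user",
            "content": "I keep missing deadlines and I don't even know why anymore.",
        },
        {
            "role": "assistant",
            "content": (
                # L1: the affective state beneath the words (frustration, self-doubt)
                # L2: acknowledgment before content
                "It sounds exhausting to keep falling behind without understanding "
                "what's pulling you off course. That kind of confusion wears people down. "
                # L3: calibrated content, offered as a question rather than a fix
                "Would it help to walk through one recent deadline together and trace "
                "what actually happened, step by step?"
            ),
        },
    ]
}
```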

Built to run anywhere.

EMPATH is designed for on-device inference. No subscription. No server. No cloud. The most intimate conversations deserve the most private architecture.

Primary target

iPhone, Neural Engine

A17 Pro and A18 chips deliver 25–35 tokens/second on a 1.5B Q4 model. Fluid conversation. Full privacy. The Neural Engine is purpose-built for exactly this workload.

1.5B Q4 · ~1.2 GB · 25–35 tok/s · iPhone 15 Pro+

Embedded companion

Single-board computer

Orange Pi 5 or Jetson Orin Nano for dedicated companion hardware: a small, silent device that simply sits on your desk and listens. No screen required.

1.5B Q4 · ~1.2 GB · 8–15 tok/s · Always-on
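A minimal sketch of what that looks like in practice, assuming the exported Q4 GGUF weights and the llama-cpp-python bindings; the file name and system prompt are placeholders, not the published artifacts.

```python
# Local inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF path and system prompt below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="empath-1.5b-q4_k_m.gguf",  # placeholder file name
    n_ctx=2048,      # conversation context window
    n_threads=4,     # tune to the SBC's CPU cores
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are EMPATH. Presence before answer."},
        {"role": "user", "content": "Today was a hard day."},
    ],
    temperature=0.7,
    max_tokens=256,
)

print(reply["choices"][0]["message"]["content"])
```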

Near future

iPhone 17, 16 GB RAM

With 16 GB of unified memory, a 3B fine-tuned model becomes viable, offering significantly deeper reasoning while remaining entirely local and private.

3B Q4 · ~2.5 GB · 20–30 tok/s · 2025

Training

Free cloud GPU, zero cost

LoRA fine-tuning of a 1.5B model runs in 30–90 minutes on a free Colab T4 GPU using Unsloth. The entire pipeline (from dataset to GGUF export) costs nothing.

LoRA · Unsloth · Colab T4 · Export → GGUF
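A condensed sketch of that pipeline, assuming a 1.5B instruct base available through Unsloth and a chat-formatted dataset; the base model name, dataset file, and hyperparameters below are placeholders, not the published configs.

```python
# Condensed Colab sketch: LoRA fine-tune a 1.5B base with Unsloth, then export GGUF.
# Base model, dataset, and hyperparameters are illustrative placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-1.5B-Instruct",  # placeholder 1.5B base
    max_seq_length=2048,
    load_in_4bit=True,  # fits comfortably on a free Colab T4
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16, lora_alpha=16, lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="empath_dialogues.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # pre-templated chat transcripts
    max_seq_length=2048,
    # Newer trl releases may expect SFTConfig instead of TrainingArguments.
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="empath-lora",
    ),
)
trainer.train()

# Merge the adapter and export a Q4 GGUF for llama.cpp-based runtimes.
model.save_pretrained_gguf("empath-gguf", tokenizer, quantization_method="q4_k_m")
```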

Published.
Private by design.

EMPATH's training data curation methodology, LoRA training configs, and model weights are published and freely available. Build your own companion, shaped by your own values.

EMPATH contributes to integrity convergence by calibrating tone to emotional state: responses that stay accurate even as the emotional register shifts are the ones the integrity benchmark scores highest.