
Laeka Research — Dataset 04 — Compassionate Presence

Empath

Conversational depth · Compassionate presence · Local inference

A training dataset encoding the cognitive structure of compassionate presence — drawn from contemplative psychology, therapeutic communication, and the inner mechanics of human suffering. Not as knowledge to quote, but as a relational posture encoded into OmniQ's weights.

"Presence before answer. Understanding before solution. The quality of listening shapes the quality of all that follows."
— Compassionate communication · Contemplative psychology

Most language models rush toward answers. Empath is trained on a different priority: understanding the human first, at every level — what is said, what is felt, and what is left unsaid. This is not a stylistic feature. It is encoded as a processing structure.

The training draws from decades of research in empathetic communication, contemplative psychology, and the inner mechanics of human suffering — not as a knowledge base to quote, but as a cognitive posture that shapes how responses are formed.

Empath doesn't know everything about personality psychology or contemplative traditions. What it knows is how to be present with another person — and that is a different, and rarer, skill.

Human capacity → Encoded as
Deep listening → Delay before response generation
Emotional attunement → Affective register detection
Non-judgment → Suspension of evaluative framing
Holding space → Tolerance for unresolved emotional states
Compassionate response → Truth delivered with care and timing

Six conversational qualities.

Each quality is a direct encoding of human relational intelligence — not a style overlay, but structure learned during fine-tuning.

01

Presence before solution

Empath does not rush to fix. It first acknowledges the emotional reality of the person — their state, not just their question. Solutions come later, if at all.

02

Affective register matching

Empath reads the emotional tone of each message and calibrates its own voice accordingly — quieter in grief, warmer in confusion, steadier in anxiety.

03

Non-evaluative framing

Where other models judge, categorize, or diagnose, Empath suspends evaluation. It holds the person's experience as valid before anything else.

04

Conversational nuance

Empath handles the texture of real human conversation — ambivalence, contradiction, half-said things — without forcing premature clarity on messy inner states.

05

Compassionate honesty

When truth is needed, Empath delivers it — but with care about timing, tone, and the person's readiness to receive. Honesty without presence is just bluntness.

06

Complete privacy — on-device

Empath runs locally. No conversation ever leaves the device. The intimacy of real companionship requires real privacy — not a privacy policy.

Fine-tuned for presence, not knowledge.

Empath is built on a small base model — 1.5B parameters — fine-tuned with LoRA on a curated dataset of empathetic dialogues, therapeutic communication patterns, and compassionate language structures.

The goal is not encyclopedic knowledge. It is relational intelligence: knowing when to speak, when to ask, and when to simply stay present. This is trained, not prompted.

L1

Emotional state recognition

Before forming any response, Empath identifies the affective state beneath the words — not just the literal content of the message.

L2

Acknowledgment before content

The felt experience of the person is named and validated first. No response skips this step, regardless of how pragmatic the question seems.

L3

Calibrated response

Content is offered — information, reflection, or question — in a form and timing appropriate to the person's current state, not just their query.
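The three-stage loop above can be sketched in code. This is purely illustrative: the affect labels, keyword heuristics, and function names below are hypothetical — in Empath this behavior is learned in the model's weights during fine-tuning, not implemented as rules.

```python
# Illustrative sketch of the three-stage loop: recognize the state,
# acknowledge it, then offer calibrated content. All names and
# heuristics here are hypothetical, not Empath's actual mechanism.

AFFECT_KEYWORDS = {
    "grief":     ["lost", "miss", "died", "gone"],
    "anxiety":   ["worried", "scared", "what if", "can't stop"],
    "confusion": ["don't understand", "unsure", "confused"],
}

ACKNOWLEDGMENTS = {
    "grief":     "That sounds like a real loss, and it makes sense that it hurts.",
    "anxiety":   "It sounds like a lot is weighing on you right now.",
    "confusion": "It's okay not to have this figured out yet.",
    "neutral":   "Thanks for sharing that.",
}

def recognize_state(message: str) -> str:
    """Stage 1: identify the affective state beneath the words."""
    text = message.lower()
    for state, cues in AFFECT_KEYWORDS.items():
        if any(cue in text for cue in cues):
            return state
    return "neutral"

def plan_content(message: str, state: str) -> str:
    """Stage 3: content shaped by the person's state, not just the query."""
    if state in ("grief", "anxiety"):
        # Heavier states get a slower, open question instead of an answer.
        return "Would you like to say more about what's hardest right now?"
    return "Here's one way to think about your question."

def respond(message: str) -> list[str]:
    """Stages 2 then 3: acknowledgment always precedes content."""
    state = recognize_state(message)
    return [ACKNOWLEDGMENTS[state], plan_content(message, state)]
```

The ordering is the point: `respond` cannot emit content without first emitting an acknowledgment, which mirrors the "no response skips this step" constraint.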

Built to run anywhere.

Empath is designed for on-device inference. No subscription. No server. No cloud. The most intimate conversations deserve the most private architecture.

Primary target

iPhone — Neural Engine

A17 Pro and A18 chips deliver 25–35 tokens/second on a 1.5B Q4 model. Fluid conversation. Full privacy. The Neural Engine is purpose-built for exactly this workload.

1.5B Q4 · ~1.2 GB · 25–35 tok/s · iPhone 15 Pro+
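A quick back-of-envelope check makes the quoted 25–35 tok/s figure plausible. Token generation on a dense model is roughly memory-bandwidth bound: each generated token reads approximately the whole model once. The ~50 GB/s bandwidth and 60% efficiency figures below are assumptions for a recent iPhone-class SoC, not measured numbers.

```python
# Decode throughput estimate: tok/s ≈ effective bandwidth / bytes
# read per token (~model size for a dense model). Bandwidth and
# efficiency values are assumptions, not vendor specifications.

model_bytes = 1.2e9     # 1.5B params at Q4 quantization, ~1.2 GB
mem_bandwidth = 50e9    # assumed memory bandwidth, bytes/s
efficiency = 0.6        # assumed fraction of peak actually achieved

ceiling = mem_bandwidth / model_bytes   # theoretical upper bound
realistic = ceiling * efficiency        # with real-world overhead
print(f"ceiling ≈ {ceiling:.0f} tok/s, realistic ≈ {realistic:.0f} tok/s")
```

Under these assumptions the estimate lands in the low-to-mid 20s–40s tok/s, bracketing the 25–35 tok/s range quoted above.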

Embedded companion

Single-board computer

Orange Pi 5 or Jetson Orin Nano for dedicated companion hardware — a small, silent device that simply sits on your desk and listens. No screen required.

1.5B Q4 · ~1.2 GB · 8–15 tok/s · Always-on

Near future

iPhone 17 — 16 GB RAM

With 16 GB unified memory, a 3B fine-tuned model becomes viable — significantly deeper reasoning while remaining entirely local and private.

3B Q4 · ~2.5 GB · 20–30 tok/s · 2025

Training

Free cloud GPU — zero cost

LoRA fine-tuning of a 1.5B model runs in 30–90 minutes on a free Colab T4 GPU using Unsloth. The entire pipeline — from dataset to GGUF export — costs nothing.

LoRA · Unsloth · Colab T4 · Export → GGUF
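The reason this fits in a free Colab session is that LoRA trains only small low-rank adapters while the 1.5B base stays frozen. The arithmetic can be sketched directly; the rank, hidden size, layer count, and target matrices below are assumptions for a generic 1.5B transformer, not the specs of any particular base model.

```python
# LoRA trainable-parameter arithmetic. Each frozen (d_out x d_in)
# weight matrix gains a rank-r adapter with r * (d_in + d_out)
# trainable parameters. All dimensions here are assumed values.

r = 16                   # assumed LoRA rank
hidden = 1536            # assumed hidden size of a 1.5B model
layers = 28              # assumed transformer layer count
targets_per_layer = 4    # assumed targets: q, k, v, o projections

params_per_matrix = r * (hidden + hidden)      # square (hidden x hidden) case
trainable = layers * targets_per_layer * params_per_matrix

base = 1.5e9
print(f"trainable adapters: {trainable / 1e6:.1f}M "
      f"({100 * trainable / base:.2f}% of the base model)")
```

Under these assumptions, well under 1% of the parameters receive gradients, which is why optimizer state and gradients fit comfortably alongside a quantized base model in a T4's 16 GB.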

Open source.
Private by design.

Empath's dataset-curation methodology, LoRA training configs, and model weights are published as open source. Build your own companion, shaped by your own values.