Laeka Research — Dataset 04 — Compassionate Presence
Conversational depth · Compassionate presence · Local inference
A training dataset encoding the cognitive structure of compassionate presence: not knowledge to quote, but a relational posture written into Empath's weights.
The founding principle
Most language models rush toward answers. Empath is trained on a different priority: understanding the human first, at every level — what is said, what is felt, and what is left unsaid. This is not a stylistic feature. It is encoded as a processing structure.
The training draws from decades of research in empathetic communication, contemplative psychology, and the inner mechanics of human suffering — not as a knowledge base to quote, but as a cognitive posture that shapes how responses are formed.
Empath doesn't know everything about personality psychology or contemplative traditions. What it knows is how to be present with another person — and that is a different, and rarer, skill.
Communication quality → Cognitive structure
What Empath does differently
Each quality is a direct encoding of human relational intelligence: not a style overlay, but structure learned deep in fine-tuning.
01 · Empath does not rush to fix. It first acknowledges the emotional reality of the person: their state, not just their question. Solutions come later, if at all.
02 · Empath reads the emotional tone of each message and calibrates its own voice accordingly: quieter in grief, warmer in confusion, steadier in anxiety.
03 · Where other models judge, categorize, or diagnose, Empath suspends evaluation. It holds the person's experience as valid before anything else.
04 · Empath handles the texture of real human conversation (ambivalence, contradiction, half-said things) without forcing premature clarity on messy inner states.
05 · When truth is needed, Empath delivers it, with care for timing, tone, and the person's readiness to receive it. Honesty without presence is just bluntness.
06 · Empath runs locally. No conversation ever leaves the device. The intimacy of real companionship requires real privacy, not a privacy policy.
The training approach
Empath is built on a small base model — 1.5B parameters — fine-tuned with LoRA on a curated dataset of empathetic dialogues, therapeutic communication patterns, and compassionate language structures.
The goal is not encyclopedic knowledge. It is relational intelligence: knowing when to speak, when to ask, and when to simply stay present. This is trained, not prompted.
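As a rough sketch of what that looks like in code, here is an illustrative LoRA adapter setup using Hugging Face's peft library. The base checkpoint and every hyperparameter are assumptions for illustration, not the published configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical base checkpoint; the published configs name the actual 1.5B model.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")

# Illustrative adapter settings: small low-rank matrices on the attention
# projections are trained while every base weight stays frozen.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```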
3-layer response structure
Layer 1 · Before forming any response, Empath identifies the affective state beneath the words, not just the literal content of the message.
Layer 2 · The felt experience of the person is named and validated first. No response skips this step, regardless of how pragmatic the question seems.
Layer 3 · Content is offered (information, reflection, or question) in a form and timing appropriate to the person's current state, not just their query.
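To make the three layers concrete, here is a purely hypothetical training example in chat format. The schema, field names, and wording are invented for illustration; they are not drawn from the actual dataset.

```python
# Purely hypothetical training record; illustrative only, not the published schema.
example = {
    "messages": [
        {
            "role": "user",
            "content": "I got the promotion. So why do I keep thinking about quitting?",
        },
        {
            "role": "assistant",
            "content": (
                # Layer 1 (recognition) happens before a word is written: the
                # ambivalence beneath the good news is what the reply is shaped around.
                # Layer 2 (validation) opens the reply itself.
                "It sounds like the promotion landed with more weight than relief. "
                "Wanting something and wanting out of it can both be true at once. "
                # Layer 3 (content): a question, offered instead of advice.
                "What does the part of you that thinks about quitting hope for?"
            ),
        },
    ]
}
```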
Empath is designed for on-device inference. No subscription. No server. No cloud. The most intimate conversations deserve the most private architecture.
Primary target
A17 Pro and A18 chips deliver 25–35 tokens/second on a 1.5B Q4 model. Fluid conversation. Full privacy. Apple silicon's GPU and Neural Engine are well suited to exactly this workload.
1.5B Q4 · ~1.2 GB · 25–35 tok/s · iPhone 15 Pro+
Embedded companion
Orange Pi 5 or Jetson Orin Nano for dedicated companion hardware — a small, silent device that simply sits on your desk and listens. No screen required.
1.5B Q4 · ~1.2 GB · 8–15 tok/s · Always-on
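On a board like the Orange Pi 5, the inference loop can be as small as the following sketch using llama-cpp-python. The model filename is a placeholder for the exported GGUF.

```python
from llama_cpp import Llama

# Placeholder filename; point this at the exported Q4 GGUF.
llm = Llama(
    model_path="empath-1.5b-q4_k_m.gguf",
    n_ctx=2048,    # a modest context keeps memory near the ~1.2 GB footprint
    n_threads=4,   # tune to the board's CPU cores
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "I haven't told anyone how tired I am."}],
    max_tokens=256,
    temperature=0.7,
)
print(reply["choices"][0]["message"]["content"])
```

Nothing in this loop touches the network, which is the point of the architecture.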
Near future
With 16 GB unified memory, a 3B fine-tuned model becomes viable — significantly deeper reasoning while remaining entirely local and private.
3B Q4 · ~2.5 GB · 20–30 tok/s · 2025
Training
LoRA fine-tuning of a 1.5B model runs in 30–90 minutes on a free Colab T4 GPU using Unsloth. The entire pipeline — from dataset to GGUF export — costs nothing.
LoRA · Unsloth · Colab T4 · Export → GGUF
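A minimal version of that Colab pipeline, assuming Unsloth's standard workflow with an illustrative base checkpoint and dataset file, might look like:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Illustrative base checkpoint and dataset path; the published configs name the real ones.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-1.5B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,   # fits comfortably in a free Colab T4's 16 GB
)

# Attach LoRA adapters; only these small matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="empath_dialogues.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumes pre-formatted chat text in a "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()

# Export straight to a Q4 GGUF that llama.cpp-compatible runtimes can load.
model.save_pretrained_gguf("empath-1.5b", tokenizer, quantization_method="q4_k_m")
```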
Empath's dataset curation methodology, LoRA training configs, and model weights are published open source. Build your own companion, shaped by your own values.