I built an inference-time method that reduces LLM hallucination by applying
toroidal geometric constraints to logit outputs. No retraining, no fine-tuning —
it works as a plug-in on existing models.
Results on Qwen2.5-0.5B-Instruct:
• 40% relative error reduction on factual tasks
• +6.8% absolute improvement on TruthfulQA
• Random/non-toroidal baselines show no effect
The math comes from the Tonnetz — the same toroidal manifold that describes
harmonic relationships in music theory. Periodic boundary conditions on a 2D
torus create structured long-range coherence and a spectral gap that filters
noise from the logit distribution.
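For intuition, here is a minimal sketch (Python/NumPy) of what a periodic smoothing constraint on logits could look like. This is my illustrative reconstruction, not the released implementation: the grid shape, the 4-neighbour averaging kernel, and the mixing weight alpha are assumptions made for the example.

    import numpy as np

    def toroidal_smooth_logits(logits, grid, alpha=0.3):
        """Tile logits onto a 2D grid with periodic boundaries, take one
        nearest-neighbour averaging step (a discrete heat step on the torus),
        and mix the smoothed field back into the original logits."""
        h, w = grid
        padded = np.full(h * w, logits.min(), dtype=logits.dtype)
        padded[:logits.size] = logits
        field = padded.reshape(h, w)
        # np.roll wraps around both axes, i.e. periodic boundary conditions.
        neighbours = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                      np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 4.0
        smoothed = (1 - alpha) * field + alpha * neighbours
        return smoothed.ravel()[:logits.size]

    logits = np.random.default_rng(0).normal(size=151_936)  # roughly Qwen2.5's vocab size
    constrained = toroidal_smooth_logits(logits, grid=(390, 390))

The wrap-around averaging damps high-frequency noise in the logit field while preserving long-range structure, which is the spectral-gap intuition in a few lines of NumPy.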
Try it: https://huggingface.co/spaces/paraxiom-research/topological-coherence
Paper: https://doi.org/10.5281/zenodo.18516477
Code: https://github.com/Paraxiom/topological-coherence
Rust crate: https://crates.io/crates/topological-coherence
Total compute cost to validate: $40.