augchan42/king-wen-agi-framework

King Wen AGI Framework

DOI · License: MIT (code) · License: CC BY 4.0 (documentation)

Paper

Statistical Properties of the King Wen Sequence: An Anti-Habituation Structure That Does Not Improve Neural Network Training

The King Wen sequence has genuine statistical structure (confirmed by Monte Carlo analysis against 100,000 random baselines), but that structure does not improve neural network training. This negative-result paper reports experiments across two platforms (NVIDIA RTX 2060 with PyTorch, Apple Silicon with MLX).
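The README does not list the paper's exact test statistics, so the following is only a hypothetical sketch of the Monte Carlo approach it describes: compute a statistic on a fixed 64-element sequence and compare it against the same statistic on random permutations to get an empirical p-value. The `adjacent_diff_stat` statistic and the stand-in sequence are illustrative assumptions, not the paper's actual measures.

```python
import random

def adjacent_diff_stat(seq):
    """Illustrative statistic: sum of absolute differences between
    consecutive elements (a crude proxy for 'anti-habituation')."""
    return sum(abs(a - b) for a, b in zip(seq, seq[1:]))

def monte_carlo_pvalue(seq, stat=adjacent_diff_stat, n_trials=10_000, seed=0):
    """Empirical p-value: the fraction of random permutations whose
    statistic is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = stat(seq)
    pool = list(seq)
    hits = 0
    for _ in range(n_trials):
        rng.shuffle(pool)
        if stat(pool) >= observed:
            hits += 1
    return (hits + 1) / (n_trials + 1)  # add-one smoothing avoids p = 0

if __name__ == "__main__":
    # Placeholder: the real King Wen ordering is a specific permutation
    # of 1..64; this shuffled stand-in only makes the sketch runnable.
    example_sequence = list(range(1, 65))
    random.Random(42).shuffle(example_sequence)
    print(f"empirical p-value: {monte_carlo_pvalue(example_sequence):.4f}")
```

The paper's analysis uses 100,000 baselines per test; `n_trials` here is reduced only to keep the sketch fast.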

If you use this work in your research, please cite:

@article{Chan2026kingwen,
  title={Statistical Properties of the King Wen Sequence: An Anti-Habituation Structure That Does Not Improve Neural Network Training},
  author={Augustin Chan},
  year={2026},
  publisher={Zenodo},
  doi={10.5281/zenodo.14679537}
}

Repository Structure

  • paper/ — Paper source (markdown, LaTeX, PDF), statistical results, figures
  • experiment/ — LR schedule implementations and experiment protocol
  • arxiv-submission/ — Flat files ready for arXiv upload
  • arxiv.sty — arXiv preprint style file
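The actual schedule implementations live in `experiment/` and are not reproduced in this README; purely as a hypothetical illustration, a sequence-derived learning-rate schedule of the kind the paper tests might look like the following. The function name, the [0.5, 1.5] scaling rule, and the cycling policy are assumptions, not the repository's API.

```python
def sequence_lr_schedule(base_lr, sequence, total_steps):
    """Hypothetical: modulate a base learning rate with a fixed
    64-element sequence, cycling through it over training.
    Sequence values are min-max normalized into [0.5, 1.5] x base_lr."""
    lo, hi = min(sequence), max(sequence)
    span = (hi - lo) or 1  # guard against a constant sequence
    lrs = []
    for step in range(total_steps):
        v = sequence[step % len(sequence)]
        scale = 0.5 + (v - lo) / span  # maps lo -> 0.5, hi -> 1.5
        lrs.append(base_lr * scale)
    return lrs
```

In a PyTorch experiment, a per-step list like this could be fed to `torch.optim.lr_scheduler.LambdaLR` via a lookup lambda, with a constant or cosine schedule as the control condition.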

Build Instructions

cd paper
pdflatex king-wen.tex
bibtex king-wen
pdflatex king-wen.tex
pdflatex king-wen.tex

Requires TeX Live 2025+ with libertine and newtxmath packages.

License

Code is released under the MIT License; documentation is released under CC BY 4.0 (see Citation and Reuse below).

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Citation and Reuse

If you use this template or code in your work:

  1. Cite our paper using the BibTeX entry above
  2. Link back to this repository
  3. Check the licenses for code (MIT) and documentation (CC-BY 4.0)

Archived Versions

This repository is archived on Zenodo for long-term preservation. Specific versions can be found through the DOI in the citation above.

Contact

For questions or issues, please:

  1. Open an issue in this repository
  2. Contact the corresponding author at the email address listed in the paper
