ArXivIQ


Latents

Unified Latents (UL): How to train your latents
Authors: Jonathan Heek, Emiel Hoogeboom, Thomas Mensink, Tim Salimans
Feb 23
Next Concept Prediction in Discrete Latent Space Leads to Stronger Language Models
Authors: Yuliang Liu, Yunchong Song, Yixuan Wang, Kewen Ge, Alex Lamb, Qipeng Guo, Kai Chen, Bowen Zhou, Zhouhan Lin
Feb 20
Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space
Authors: Xingwei Qu, Shaowen Wang, Zihao Huang, Ge Zhang, Kai Hua, Fan Yin, Rui-Jie Zhu, Jundong Zhou, Qiyang Min, Zihao Wang, Yizhi Li, Tianyu Zhang…
Jan 5
Next-Latent Prediction Transformers Learn Compact World Models
Authors: Jayden Teoh, Manan Tomar, Kwangjun Ahn, Edward S.
Nov 18, 2025
Continuous Autoregressive Language Models
Authors: Chenze Shao, Darren Li, Fandong Meng, Jie Zhou
Nov 13, 2025
The Free Transformer
A Latent-Variable Approach to Boost Reasoning
Oct 24, 2025
Thoughtbubbles: an Unsupervised Method for Parallel Thinking in Latent Space
Authors: Houjun Liu, Shikhar Murty, Christopher D.
Oct 7, 2025
Beyond Sparsity: Uncovering the Functional Roles of Dense Latents in LLMs
Dense SAE Latents Are Features, Not Bugs
Jun 26, 2025
© 2026 Grigory Sapunov