ArXivIQ
Latents
Unified Latents (UL): How to train your latents
Authors: Jonathan Heek, Emiel Hoogeboom, Thomas Mensink, Tim Salimans
Feb 23
Next Concept Prediction in Discrete Latent Space Leads to Stronger Language Models
Authors: Yuliang Liu, Yunchong Song, Yixuan Wang, Kewen Ge, Alex Lamb, Qipeng Guo, Kai Chen, Bowen Zhou, Zhouhan Lin
Feb 20
Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space
Authors: Xingwei Qu, Shaowen Wang, Zihao Huang, Ge Zhang, Kai Hua, Fan Yin, Rui-Jie Zhu, Jundong Zhou, Qiyang Min, Zihao Wang, Yizhi Li, Tianyu Zhang…
Jan 5
Next-Latent Prediction Transformers Learn Compact World Models
Authors: Jayden Teoh, Manan Tomar, Kwangjun Ahn, Edward S.
Nov 18, 2025
Continuous Autoregressive Language Models
Authors: Chenze Shao, Darren Li, Fandong Meng, Jie Zhou
Nov 13, 2025
The Free Transformer
A Latent-Variable Approach to Boost Reasoning
Oct 24, 2025
Thoughtbubbles: an Unsupervised Method for Parallel Thinking in Latent Space
Authors: Houjun Liu, Shikhar Murty, Christopher D.
Oct 7, 2025
Beyond Sparsity: Uncovering the Functional Roles of Dense Latents in LLMs
Dense SAE Latents Are Features, Not Bugs
Jun 26, 2025