Dataset Viewer
Auto-converted to Parquet
Column schema (name: type, observed range):
- title: string (length 1–214)
- abstract: string (length 1–4.31k)
- year: int64 (all values 2026)
- url: string (length 42)
- pdf: string (length 0–71)
- authors: list (length 0–84)
- venue: string (2 distinct values)
- venueid: string (1 distinct value)
- invitation: string (length 85–335)
- venue_type: string (1 distinct value)
- reviews: list (length 0–9)
- num_reviews: int64 (0–9)
- _bibtex: string (length 112–601)
- _bibkey: string (length 7–45)
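For readers who want to work with these records programmatically, here is a minimal sketch using the Hugging Face `datasets` library; the repository ID is a placeholder (the actual Hub ID is not shown on this page), while the column names come from the schema above.

```python
from datasets import load_dataset

# Hypothetical repo ID; substitute the real Hub dataset name.
ds = load_dataset("your-org/openreview-iclr2026", split="train")

for row in ds.select(range(2)):
    print(row["title"], "-", row["num_reviews"], "reviews")
    for review in row["reviews"]:
        print("  ", review["review_id"], "rating:", review["rating"])
```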
Your Language Model Secretly Contains Personality Subnetworks
Large Language Models (LLMs) demonstrate remarkable flexibility in adopting different personas and behaviors. Existing approaches typically adapt such behavior through external knowledge such as prompting, retrieval-augmented generation (RAG), or fine-tuning. We ask: do LLMs really need external context or parameters to adapt to different behaviors, or do they already have such knowledge embedded in their parameters? In this work, we show that LLMs already contain persona-specialized subnetworks in their parameter space. Using small calibration datasets, we identify distinct activation signatures associated with different personas. Guided by these statistics, we develop a masking strategy that isolates lightweight persona subnetworks. Building on these findings, we further ask: how can we discover opposing subnetworks in the model that lead to binary-opposing personas, such as introvert-extrovert? To further enhance separation in binary opposition scenarios, we introduce a contrastive pruning strategy that identifies parameters responsible for the statistical divergence between opposing personas. Our method is entirely training-free and relies solely on the language model's existing parameter space. Across diverse evaluation settings, the resulting subnetworks exhibit significantly stronger persona alignment than baselines that require external knowledge, while being more efficient. Our findings suggest that diverse human-like behaviors are not merely induced in LLMs, but are already embedded in their parameter space—pointing toward a new perspective on controllable and interpretable personalization in large language models. Our code is available at https://anonymous.4open.science/r/C694.
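The abstract describes activation-guided masking and contrastive pruning only at a high level. As a rough illustration of the general idea, not the authors' exact procedure, the sketch below scores each weight by the divergence of a Wanda-style `|weight| * |activation|` importance between two personas; all names and the scoring rule are assumptions.

```python
import torch

def contrastive_persona_mask(weight, act_a, act_b, keep_ratio=0.1):
    """Illustrative only: keep the weights whose importance differs most
    between two opposing personas (e.g., introvert vs. extrovert).
    weight: (out, in); act_a/act_b: (in,) mean |activation| per input feature,
    measured on each persona's small calibration set."""
    imp_a = weight.abs() * act_a          # broadcasts per input feature
    imp_b = weight.abs() * act_b
    contrast = (imp_a - imp_b).abs()      # statistical divergence per weight
    k = max(1, int(contrast.numel() * keep_ratio))
    thresh = contrast.flatten().topk(k).values.min()
    return (contrast >= thresh).to(weight.dtype)

# usage: weight * mask keeps only the contrastive, persona-linked weights
```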
2026
https://openreview.net/forum?id=zzo3Sy3NSX
https://openreview.net/pdf/fe6fc58735330235254f4523254d472b1e04288d.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission4956/-/Full_Submission']
poster
[ { "confidence": 2, "date": 0, "rating": 4, "review": "", "review_id": "f8eJZxPaAh", "reviewer": "ICLR.cc/2026/Conference/Submission4956/Reviewer_NkPg", "strengths": "Compared to past prompt-based methods, this paper's approach of calculating a mask via pruning allows for the low-cost creation and switching of multiple personas within a single model.\n\nThis method possesses stronger interpretability, and I appreciate the author's detailed experiments, which explain why some personas are more difficult to separate.", "summary": "The author introduces thousands of extra samples and then performs activation-guided pruning to obtain a sparse sub-network specialized for that persona. For opposing personas, a contrastive pruning method is designed to ensure the two sub-networks are mutually separate.", "weaknesses": "However, I do not see the practical benefits. For example, I do not wish to obtain a fully introverted LLM. In fact, every user's needs are diverse. The method proposed in this paper lacks sufficient flexibility and cannot achieve dynamic, fine-grained control. I suggest the author could perhaps try a set of special synthetic persona experiments, such as \"70% introversion + 30% thinking,\" to see if the current method would still be effective. Additionally, users often prefer to align personas on the newest, state-of-the-art models. However, this paper's method is not applicable to closed-source models, and I am unsure if it's possible to adjust the personality traits of these closed-source models via API-based control.\n\nThe author lacks discussion on whether this method affects the LLM's original performance on common tasks like AIME, HumanEval, MMLU, etc.\n\nThis paper's method is constrained by hyperparameters and calibration data. Different sparsity ratios exhibit varied performance, requiring additional costs to select the optimal sparsity. Furthermore, this method is not zero-shot; it requires thousands of samples, which is an extra cost. The reader is left unclear as to how sensitive this method is to the quality, quantity, and bias of the calibration data. Is it possible that the imperfect separation of different persona types is due to the data itself?" }, { "confidence": 4, "date": 0, "rating": 4, "review": "", "review_id": "Lnhvlyc5v6", "reviewer": "ICLR.cc/2026/Conference/Submission4956/Reviewer_6VkV", "strengths": "* The paper is well-motivated. It is intuitive that pretraining can embed personality subnetworks in LLMs, and the proposed training-free pruning provides a practical way to approximate the upper bound of persona knowledge already encoded in the parameters.\n* The proposed activation-guided and contrastive pruning framework is theoretically grounded in the lottery ticket hypothesis and activation-based interpretability, making it a principled way to isolate latent persona subnetworks already embedded in pretrained LLMs.\n* The paper is validated across diverse persona benchmarks such as MBTI, AI Persona, RoleAgentBench, demonstrating consistent improvements over prompt- and RAG-based methods, with interpretable analyses of mask separability and sparsity ratios.", "summary": "This paper investigates whether LLMs already contain latent persona-specific capabilities embedded in their parameter space without requiring external knowledge. Inspired by the lottery ticket hypothesis, the authors propose a training-free method to extract lightweight persona subnetworks via structured activation-guided pruning. 
They also introduce a contrastive pruning strategy to enhance separation between opposing personas. The resulting subnetworks demonstrate improved persona alignment across several benchmarks, outperforming prompt and retrieval-based baselines. They find that diverse human-like behaviors are not merely induced in LLMs, but are already embedded in their parameter space.", "weaknesses": "* The proposed framework essentially functions as an interpretability probe rather than a generative alignment method. Its real contribution lies in exploring the upper bound of persona encoding already latent in LLMs, not in improving persona expression. Therefore, directly comparing it with SFT is conceptually inconsistent. For an interpretability-oriented method, the most crucial evaluation should concern faithfulness—whether the discovered subnetworks truly correspond to the model's intrinsic persona representations.\n* The paper does not explore how instruction tuning or model size might influence the encoding and separability of personas in LLMs.\nIf pruning is meant to expose existing persona structures, then it is essential to understand how these structures vary before and after instruction tuning, or across different model scales of the same architecture." }, { "confidence": 4, "date": 0, "rating": 4, "review": "", "review_id": "g5Lcq9Ibpi", "reviewer": "ICLR.cc/2026/Conference/Submission4956/Reviewer_zm69", "strengths": "- The idea of identifying a subnetwork that represents a target persona is interesting.\n- The method does not require explicit gradient-based training, which makes the overall process simple and interpretable.\n- The provided analyses on the persona evaluation are extensive.\n- The manuscript is well written and easy to follow.", "summary": "This work discovers that an LLM subnetwork may represent a specific persona. By applying an extracted binary mask on the linear weights, the target persona can be emphasized in the outputs. A contrastive pruning algorithm is also proposed to disentangle the personas in the parameter space.", "weaknesses": "### 1. Effect of Pruning on General Performance\n While the approach for identifying sub-networks linked to specific personality traits is compelling, the work does not address how pruning affects overall model performance. Including an evaluation of whether important downstream capabilities are improved -- or at least preserved -- would significantly strengthen the contribution.\n\n### 2. Precise Mechanism of the Contrastive Pruning Algorithm\nI am skeptical about the contrastive pruning algorithm because even when two personas are seemingly opposite to each other, I do not think that means that the subnetwork neuron set should be orthogonal. That said, it would help to understand the effect of this algorithm when the pair of personas is similar. For instance, try Power-Seeking vs. Wealth-Seeking (or maybe \"desire-for-discreetly-acquiring-power\" in the dataset) and compare the Power-Seeking performance with the performance reported in the manuscript. \n\n### 3. Figure Clarification\nFigure 3 is a bit confusing when comparing the MBTIs with the base model. For example, for the \"N\" trait, why do the INFP, INFJ, ... traits have lower \"N\" dimension scores compared to the base model? Are the scores relative values?\n\n### 4. Minor Points\n- I suggest moving Figure 1 to page 3 or 4." 
}, { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "Z0VNpRRiMJ", "reviewer": "ICLR.cc/2026/Conference/Submission4956/Reviewer_T1BF", "strengths": "1. The idea that personas are embedded within the parameters of pretrained LLMs and can be extracted without additional training provides a fresh perspective on LLM personalization.\n\n2. The contrastive pruning technique proves to be particularly effective in distinguishing opposing personas, which is a challenging aspect in persona modeling.\n\n3. The method offers a training-free solution that is more computationally efficient than alternative techniques such as fine-tuning or RAG, requiring minimal additional resources.", "summary": "The paper proposes a novel framework for isolating persona-specialized subnetworks in LLMs via activation-guided pruning, without the need for additional training. The method demonstrates that distinct personas can naturally emerge as separate activation patterns within pretrained models. The authors employ a pruning strategy to extract persona-specific subnetworks, which leads to more efficient persona switching. Experiments show that the pruning method outperforms traditional techniques.", "weaknesses": "1. While the method works well for some personas, there are instances where certain personality dimensions, like N/S and J/P from the MBTI dataset, show weaker separation, leading to less distinct personas. This limitation could be addressed with more dimension-aware or layer-aware techniques.\n\n2. Results on Llama models show that the scalability of models to other architectures or domain-specific tasks is not fully explored. The authors should clarify how well this approach might generalize to other pretrained LLMs or tasks.\n\n3. The method relies heavily on small calibration datasets, and it is better to focue on some larger." } ]
4
@inproceedings{ anonymous2025your, title={Your Language Model Secretly Contains Personality Subnetworks}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zzo3Sy3NSX}, note={under review} }
anonymous2025your
Polychromic Objectives for Reinforcement Learning
Reinforcement learning fine-tuning (RLFT) is a dominant paradigm for improving pretrained policies for downstream tasks. These pretrained policies, trained on large datasets, produce generations with a broad range of promising but unrefined behaviors. Often, a critical failure mode of RLFT arises when policies lose this diversity and collapse into a handful of easily exploitable outputs. This convergence hinders exploration, which is essential for expanding the capabilities of the pretrained policy and for amplifying the benefits of test-time compute scaling. To address this, we introduce an objective for policy gradient methods that explicitly enforces the exploration and refinement of diverse generations, which we call a polychromic objective. We then show how proximal policy optimization (PPO) can be adapted to optimize this objective. Our method (1) employs vine sampling to collect on-policy rollouts and (2) modifies the advantage function to reflect the advantage under our new objective. Experiments on BabyAI, Minigrid, and Algorithmic Creativity show that our method improves success rates by reliably solving a larger set of environment configurations and generalizes better under large perturbations. Moreover, when given multiple attempts in pass@$n$ experiments, the policy achieves substantially higher coverage, demonstrating its ability to maintain and exploit a diverse repertoire of strategies.
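As a concrete reading of what a polychromic objective might look like, the sketch below scores a set of trajectories jointly by combining mean reward with a pairwise-distance diversity bonus; the combination rule and the feature-based diversity measure are assumptions for illustration, not the paper's exact objective.

```python
import torch

def polychromic_score(rewards, feats, lam=0.5):
    """Score a *set* of rollouts jointly rather than one trajectory at a time.
    rewards: (n,) per-trajectory returns; feats: (n, d) trajectory features.
    Assumes n >= 2. Illustrative only."""
    mean_reward = rewards.mean()
    pairwise = torch.cdist(feats, feats)        # (n, n) pairwise distances
    n = feats.shape[0]
    diversity = pairwise.sum() / (n * (n - 1))  # mean off-diagonal distance
    return mean_reward * (1.0 + lam * diversity)
```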
2026
https://openreview.net/forum?id=zzTQISAGUp
https://openreview.net/pdf/647c24c93d1ac3d8bfc1d3f206a448e32bd03f47.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission23782/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission23782/-/Rebuttal_Revision']
poster
[ { "confidence": 3, "date": 0, "rating": 2, "review": "", "review_id": "DiRMNEHQhO", "reviewer": "ICLR.cc/2026/Conference/Submission23782/Reviewer_Bmic", "strengths": "The notion of set RL seems appealing and could inspire novel learning approaches that are distinct from existing classical RL algorithms. Further, the polychromic objective seems like a good logical consequence of set RL.", "summary": "The work introduces the notion of set RL, in which agents are maximizing rewards not with respect to individual trajectories, but with respect to a set of trajectories. This formulation is appealing as it naturally allows to directly optimize for reward maximization as well as diversity. Based of the notion of set RL, the authors propose polychromic objectives, which give a practical way of opimizing both for success as well as diversity of the set of trajectories. The work then further discusses how to adapt PPO to work with this particular polychromic objective before evaluating it on three different environments. The Poly-PPO implementation seems ot generally improve over pretrained policies and outperforming baselines. Finally, the work provides a more theoretical analysis on the effect of polychromic objectives on the entropy of a policy.", "weaknesses": "The work discusses various aspects fairly shallowly. In the beginning, the work reads like it is written just to introduce the idea of set RL. The work raises the expectation in the reader that a more comprehensive discussion of set RL and how it might differentiate itself from multi-objective RL. However, before such a discussion starts, the work pivots to discuss a particular aspect of set RL in the form of polychromic objectives. Before going into depth on polychromic objectives, a particular instantiation of PPO is discussed that makes use of this objective. While the work states in line 166 that the \"generality of set RL\" has been discussed, the overall work only discusses a very particular instantiation that seems to require a lot of adaptation from classical RL to work.\nOverall, a lot of design decisions seem to just fall out of the blue without being adequately discussed. I fail to see why, e.g., no other RL algorithm is being considered for the extension to set RL. Similarly, why is there no ablation on the choice of diversity function for the Poly-PPO implementation? The choice of REINFORCE as a baseline is never justified, nor is the choice of environments.\n\nTaken altogether, the work does not seem fit for publication in its current state and there are too many loose ends that need to be taken care of before I would consider increasing my score. I am happy to adjust my score if the authors show that I have misunderstood crucial aspects but vote for rejection as is." }, { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "1kWY1tv0g0", "reviewer": "ICLR.cc/2026/Conference/Submission23782/Reviewer_tLb8", "strengths": "1. I found the description of set RL and the polychromic objective intuitive and\n easy to follow. Since I am not as familiar with this field, I do not know how\n novel this objective is. However, I personally have not seen anything like it\n before. The closest I am aware of are works that seek to find diverse\n policies, like Diversity is All You Need and quality diversity algorithms.\n These works seek to find a set of diverse policies, while the current paper\n trains a single policy that can generate diverse trajectories.\n1. 
I think the experimental evaluation is quite thorough, in that it seeks to\n answer a number of important questions about polychromic PPO, and it directly\n addresses the pass@n issue initially raised in the introduction.", "summary": "This paper studies how to induce diverse generations (trajectories) from a\npolicy trained with reinforcement learning (RL). The motivation is that current\npolicies trained with RL finetuning (RLFT) tend to collapse to a single mode of\nbehavior. One symptom of this is that increasing the number of trials afforded\nto an RL policy does not increase its performance (i.e., pass@n coverage is\nlow), because similar trajectories are generated.\n\nThus, this paper proposes to create RL policies that generate diverse,\nhigh-performing trajectories. This approach begins with defining set RL, an\nextension of RL that considers objectives over a set of trajectories created by\na policy rather than over a single trajectory. Within set RL, the paper then\nproposes a \"polychromic objective\" that motivates the trajectories generated by\na policy to be diverse and high-performing. Finally, the paper solves this\nobjective with an extension of PPO, referred to as \"polychromic PPO.\"\nExperiments are conducted on Minigrid, BabyAI, and Algorithmic Creativity,\nshowing that polychromic PPO can finetune policies to generate trajectories with\nhigher diversity, as evinced by higher pass@n scores.", "weaknesses": "1. The experiments are only run over 3 seeds, which is quite few. Furthermore,\n there do not seem to be error bars, and statistical testing was not performed\n to verify significant differences. Including these things would make the\n experiments more precise, e.g., statistical significance could be discussed\n instead of saying \"achieves slightly lower validity\" on lines 313-314. Below\n I list a couple of other minor issues with the presentation of the\n experiments:\n 1. Table 1: It is unclear what values are bolded and why.\n 1. Figure 2 could be cleaned up a bit. Namely:\n - The text is small and hard to read.\n - The y-axis bounds should bound the values in the graph, e.g., the graphs\n in the leftmost column should have bounds from 30% to 100% rather than\n 30% to 90%.\n - The rows and columns should be labeled, instead of having the names only\n be in the caption.\n 1. Figure 1 appears after Figure 2 in the paper.\n 1. Line 307 is a comma splice.\n 1. Figure 3 also has really small text.\n1. The Entropy Analysis in Section 5 seems to provide valuable insights into how\n polychromic objectives work, but I found it difficult to follow, in part\n because I do not have background on what \"entropy collapse\" means. Perhaps\n some background could be provided on why we care to analyze the entropy of\n the policy?" }, { "confidence": 3, "date": 0, "rating": 8, "review": "", "review_id": "MpCM5eADbV", "reviewer": "ICLR.cc/2026/Conference/Submission23782/Reviewer_5izw", "strengths": "The paper defines the \"Set Reinforcement Learning\" (set RL) framework, which generalizes the optimization of a single trajectory in traditional RL to the optimization of a set of trajectories. This framework is not only theoretically novel (for example, the authors derive a performance difference lemma applicable to set RL), but it also provides a clear mathematical foundation that allows algorithms to directly optimize more complex objectives beyond standard rewards (such as the \"polychromic objective\" in this paper, which combines reward and diversity). 
This holds significant inspiration and meaning for future work.", "summary": "This paper addresses the problem in RLFT (Reinforcement Learning Fine-Tuning) where policies lose diversity and \"collapse\" onto a handful of behaviors. It proposes a new problem, set RL, and improves policy gradient algorithms like PPO. For instance, in the advantage function calculation, it redefines a shared advantage term based on a \"polychromic objective\" (which simultaneously evaluates reward and diversity); it uses a \"vine sampling\" strategy to collect the on-policy data needed to evaluate the performance of trajectory sets. Experiments demonstrate that this method can effectively improve the policy's success rate, generalization ability to perturbations, and pass@n coverage.", "weaknesses": "1. The definition of trajectory diversity is not general but must be engineered for each specific environment. For instance, it is defined as visiting different \"sets of rooms\" in BabyAI/Minigrid but different \"sets of nodes\" in Algorithmic Creativity.\n\n2. To avoid exponential sampling complexity in long-horizon tasks, the method instead relies on 'vine sampling'. However, this sampling strategy itself imposes a major constraint, as it requires the ability to reset the environment to arbitrary states. This is unrealistic in most real-world robotics tasks, thus severely limiting its practical applicability.\n\n3. The algorithm is \"computationally demanding\". This high workload stems from the multiple layers of sampling required, including the complex \"vine sampling\" procedure to gather sets of trajectories and the Monte Carlo sampling used to estimate the value baseline for the polychromic advantage. This makes the algorithm's workload substantially larger than that of standard PPO." }, { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "ai5IZhB9Na", "reviewer": "ICLR.cc/2026/Conference/Submission23782/Reviewer_GrRC", "strengths": "* I appreciate the time the authors spent to differentiate standard RL from set RL. The authors do a good job of establishing the setting before discussing the proposed polychromic algorithm. \n* The results suggest that set scoring is indeed useful at maintaining diversity while keeping the main performance/validity equal to that of PPO. \n* This work carefully considers some of the flaws common in standard RL, such as entropy collapse, and thoroughly discusses the possible failure modes of Polychromic PPO. \n* I do think the contribution is an interesting novel extension of PPO.", "summary": "This paper investigates a key limitation of reinforcement learning fine-tuning (RLFT) for pretrained policies, namely the collapse of behavioral diversity during fine-tuning, which leads to reduced exploration and exploitable outputs. The authors propose a new polychromic objective for policy gradient methods that explicitly encourages both exploration and refinement of diverse generations. They adapt proximal policy optimization (PPO) to this setting by introducing vine sampling for on-policy data collection and a modified advantage function consistent with the new objective. Experiments on BabyAI, Minigrid, and Algorithmic Creativity benchmarks show that the method improves success rates, generalization under perturbations, and coverage in pass@n evaluations. 
Overall, the work provides a principled approach to preserving and leveraging diversity in RL fine-tuning, addressing a major failure mode of current RLFT pipelines.", "weaknesses": "* Perhaps the largest issue is that the authors did not discuss the time complexity of generating the vines the algorithm relies on, which might be the main trade-off readers would consider when determining if the added diversity is worth the complexity cost. It may be the case that the methods were not fairly compared with respect to this trade-off. \n* Going a bit further on the initial point, would it be a fairer comparison to compare against other full Monte Carlo based algorithms that perform roll-outs, such as MCTS? Maybe I am mistaken, but is it reasonable not to include any runtime analysis against the proposed baselines?" } ]
4
@inproceedings{ anonymous2025polychromic, title={Polychromic Objectives for Reinforcement Learning}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zzTQISAGUp}, note={under review} }
anonymous2025polychromic
vAttention: Verified Sparse Attention via Sampling
State-of-the-art sparse attention methods for reducing decoding latency fall into two main categories: approximate top-$k$ (and its extension, top-$p$) and recently introduced sampling-based estimation. However, these approaches are fundamentally limited in their ability to approximate full attention: they fail to provide consistent approximations across heads and query vectors and, most critically, lack guarantees on approximation quality, limiting their practical deployment. We observe that top-$k$ and random sampling are complementary: top-$k$ performs well when attention scores are dominated by a few tokens, whereas random sampling provides better estimates when attention scores are relatively uniform. Building on this insight and leveraging the statistical guarantees of sampling, we introduce vAttention, the first practical sparse attention mechanism with user-specified $(\epsilon, \delta)$ guarantees on approximation accuracy. These guarantees make vAttention a compelling step toward practical, reliable deployment of sparse attention at scale. By unifying top-$k$ and sampling, vAttention outperforms both individually, delivering a superior quality–efficiency trade-off. Our experiments show that vAttention significantly improves the quality of sparse attention (e.g., $\sim$4.5 percentage points for Llama-3.1-8B-Inst and Deepseek-R1-Distill-Llama-8B on RULER-HARD), and effectively bridges the gap between full and sparse attention (e.g., across datasets, it matches full model quality at 10x–20x sparsity). We also demonstrate that it can be deployed in long-generation scenarios to achieve fast decoding without compromising model quality (e.g., vAttention achieves full model quality on AIME2024 at 10\% sparsity with up to 32K token generations).
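The core insight, an exact top-$k$ head plus an unbiased random sample of the long tail, can be sketched for the softmax denominator as follows. The fixed budgets here are assumptions for illustration; the paper instead sizes the sample budget per head and query from the user-specified $(\epsilon, \delta)$ target.

```python
import torch

def approx_softmax_denominator(scores, k=32, n_samples=64):
    """Illustrative only: estimate sum(exp(scores)) for a 1-D score vector
    with an exact top-k head plus a uniform-sample estimate of the tail."""
    top = torch.topk(scores, k)
    head = top.values.exp().sum()
    tail_mask = torch.ones_like(scores, dtype=torch.bool)
    tail_mask[top.indices] = False
    tail = scores[tail_mask]
    idx = torch.randint(0, tail.numel(), (n_samples,))
    # mean(exp(sampled)) * |tail| is an unbiased estimate of the tail sum
    return head + tail[idx].exp().mean() * tail.numel()
```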
2026
https://openreview.net/forum?id=zzTDulLys0
https://openreview.net/pdf/11280b5e6be148a1db3b7d2eaf3fc47eedcb4980.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission9335/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission9335/-/Rebuttal_Revision']
poster
[ { "confidence": 5, "date": 0, "rating": 2, "review": "", "review_id": "yzZyhoNCDS", "reviewer": "ICLR.cc/2026/Conference/Submission9335/Reviewer_rduG", "strengths": "1. The paper is well-written, with the exception of some details. It is concise, to the point and effective at communicating its message. \n2. The paper tackles the important problem of achieving efficient attention in Transformers without sacrificing model quality. Improvements in this space undoubtedly have profound consequences on the landscape of AI. In this space, the paper contributes a method that offers a lot of practical promise by combining the existing approaches of top-k and sampling attention.\n3. The paper performs numerous experiments on a variety of benchmarks to support its claims. There are also lots of ablation studies with other attention methods. \n4. The paper quantifies the tradeoff between approximation quality and model decline, at least empirically. This is something very few works in this space do, so it is an important contribution by itself.", "summary": "This paper combines top-k attention with a token sampling approach, seeking an interpolation of the two. It seeks provable and tunable approximation guarantees of the attention function and a practical method that can be deployed to save time and memory without sacrificing model quality. Numerous experiments are performed against many benchmarks to validate the method's performance.", "weaknesses": "1. The idea of combining sampling and top-k attention is not novel to this paper. The work of [1], for instance, seems to precisely propose the vAttention estimator in eq. (5). Furthermore, [1] also analyzes the approximation guarantees of their estimator rigorously. Beyond [1], numerous other works rigorously analyze the approximation quality of subquadratic attention mechanisms [2,3,4], making me feel uneasy about this paper's claim to be the \"first\" algorithm to rigorously allow for quality-efficiency tradeoff control. I would say that the point of departure of this work from [1] and other such works seems to be mainly the fact that this work makes the sample size a tunable hyperparameter, and I worry that this is not a novel enough contribution. \n * That being said, the experimental and empirical study provided by this paper are another one of its contributions. Prior works have not analyzed top-k attention in such an extent, so these insights are definitely valuable to the community. However, the paper is suggesting that it is the first to propose these methods and rigorously analyze them, which, in my opinion, is not accurate. \n2. The mathematical rigor of the paper has some notable issues:\n * Lemma 4.1: The $\\Phi$ function in Lines 301-303 is applied to the entire term? It is a little hard to see what's happening here.\n * Lemma 4.1: Why are $r_i$ considered random variables with some covariance matrix $\\Sigma$? The distribution on the scores is highly unknown and $\\Sigma$ is impossible to estimate. Yet the lower bound on $b$ depends on the trace of $\\Sigma$. The authors mention this and ultimately set the threshold arbitrarily, but the formal algorithm cannot be stated in terms of this $\\Sigma$.\n * Lemma 4.1: The use of CLT here is a bit troublesome. The argument can only work in the limit $n_s \\to \\infty$, in which case I am also confused as to how the lower bound is ultimately proven. Hoeffding's inequality can salvage things and give a concrete bound, but the paper does not have this proof. 
Instead it is claimed that in practice the CLT-bound is superior, which again raises the question of how $\\Sigma$ is calculated. As a whole, Lemma 4.1 is a bit weak in my opinion.\n * Lemma 4.2: This is the approximation guarantee. I don't think it parses very well: The probability is multiplied with $||N/D||_2$? Also, the statement is shown for $||N||_2/D$ instead in the appendix.\n\nOverall, the paper makes a compelling case for the use of top-k and sampling attention (or interpolation of these) in practice. However, I feel like it currently suffers from issues of originality and rigor.\n\n[1] Haris, Themistoklis. \"kNN Attention Demystified: A Theoretical Exploration for Scalable Transformers.\" The Thirteenth International Conference on Learning Representations.\n[2] Han, I., Jayaram, R., Karbasi, A., Mirrokni, V., Woodruff, D. P., & Zandieh, A. (2023). Hyperattention: Long-context attention in near-linear time. arXiv preprint arXiv:2310.05869.\n[3] Choromanski, K., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J., Mohiuddin, A., Kaiser, L. and Belanger, D., 2020. Rethinking attention with performers. arXiv preprint arXiv:2009.14794.\n[4] Alman, Josh, and Zhao Song. \"Fast rope attention: Combining the polynomial method and fast fourier transform.\" arXiv preprint arXiv:2505.11892 (2025)." }, { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "WwVr7SlkBd", "reviewer": "ICLR.cc/2026/Conference/Submission9335/Reviewer_YGRV", "strengths": "1. The mathematical derivation is clear and connects attention approximation to classical sum-estimation theory.\n\n2. Shows consistent empirical gains on long-context benchmarks and stable long-generation quality.\n\n3. The framework is compatible with existing top-k implementations.\n\n4. Writing and organization are clear, with well-motivated theoretical and experimental sections.", "summary": "This paper introduces vAttention, a sparse attention mechanism that unifies deterministic top-k selection with sampling-based estimation and provides formal ($\\epsilon, \\delta$)\nguarantees on the approximation of full attention. Theoretical results show that the estimator achieves bounded error under Central Limit Theorem–based sampling, and experiments across Llama-3.1, DeepSeek, and Mistral models demonstrate strong quality-efficiency trade-offs, often outperforming existing top-k methods.", "weaknesses": "1. No ablation on the relaxation that only approximates the denominator.\n\n2. No GPU runtime results or CUDA implementation to verify efficiency gains.\n\n3. Missing quantitative reporting on selected token counts and achieved sparsity.\n\n4. Comparison to oracle top-p and newer top-p-based methods is incomplete.\n\n5. Parameter selection procedure for $\\delta$ and $\\epsilon$ is heuristic and underexplained." }, { "confidence": 2, "date": 0, "rating": 6, "review": "", "review_id": "EaRShDo3nB", "reviewer": "ICLR.cc/2026/Conference/Submission9335/Reviewer_5pmM", "strengths": "1. **Principled guarantees.** vAttention is presented as a practical algorithm that exposes user-controllable \\\\((\\epsilon,\\delta)\\\\) error targets. Empirically, the realized approximation error correlates strongly with the user tolerance \\\\(\\epsilon\\\\) (correlation \\\\(>0.99\\\\)).\n\n2. **Robust hybrid design.** The deterministic heavy-hitter stage plus stochastic long-tail sampling is well-motivated. 
A simple combination of oracle-top-k with random sampling consistently outperforms either component alone.\n\n3. **Strong empirical results.**\n - **Accuracy:** Improves over HashAttention on RULER-HARD by ≈4.5 points (e.g., +4.6 for Llama-3.1-8B and +4.3 for DeepSeek-R1-Distill-Llama-8B).\n - **Oracle comparison:** vAttention + oracle-top-k can outperform oracle-top-p on RULER-32K, suggesting limits of pure top-k/top-p schemes.\n - **Long generation:** Matches full-attention quality on AIME@32K at ≈12% average density; at ≈16K, densities around 10–15% are reported.", "summary": "This paper introduces **vAttention**, a sparse attention mechanism with verifiable accuracy control. The key observation is that top-k and random sampling are complementary: top-k captures sharp, heavy-tailed score distributions while sampling better covers near-uniform regions. vAttention unifies them: it deterministically selects heavy-hitters (sink tokens, a local window, and approximate top-k) and then uniformly samples from the residual “long tail.” Users set \\\\((\\epsilon,\\delta)\\\\) and the method computes a per-query, per-head sampling budget via CLT-based estimates. Experiments show clear gains over strong baselines and narrow the gap to full attention, matching full-attention quality under high sparsity (≈10–15% density; ≈12% at 32K) in long-generation settings.", "weaknesses": "1. **Theory–practice relaxation.** The paper proves joint \\\\((\\epsilon,\\delta)\\\\) guarantees for **both** SDPA numerator \\\\(N\\\\) and denominator \\\\(D\\\\) (Theorem 4.3), leveraging a composition of bounds (with Lemma 4.2 for separate approximations). Computing the full budget is reported as expensive. Consequently, **all experiments** adopt a relaxation that provides an \\\\((\\epsilon,\\delta)\\\\) guarantee **only for the denominator** \\\\(D\\\\). The “verified” claim in practice is therefore supported by the strong empirical correlation \\\\(>0.99\\\\) rather than the full \\\\(N\\\\)&\\\\(D\\\\) guarantee.\n\n2. **Budget/latency overhead under-measured on GPU.** Speedups are shown in memory-bound regimes with KV on CPU and with a naive PyTorch implementation. A CUDA kernel is left to future work. The end-to-end latency trade-off when KV fits in HBM is not fully benchmarked, so the actual GPU-resident benefit remains uncertain." 
}, { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "59jBt6aG3i", "reviewer": "ICLR.cc/2026/Conference/Submission9335/Reviewer_VSCh", "strengths": "* **Hybrid & Adaptive:** Intelligently combines deterministic \"heavy-hitter\" tokens (sinks, local, top-k) with stochastic sampling of the tail, adapting per head and query.\n* **State-of-the-Art Performance:** Outperforms strong baselines (e.g., HashAttention, oracle top-p), often matching full-model accuracy with high sparsity (10-15%).\n* **Efficient Long-Context Inference:** Effectively reduces the memory and computational bottleneck of large KV caches, enabling faster generation, especially when the cache is offloaded to CPU.\n* **Practical & Versatile:** Can be integrated with existing approximate top-k methods and works well on diverse, challenging long-context benchmarks.", "summary": "This paper introduces vAttention, a verified sparse attention method designed to address the memory and computational bottlenecks of standard scaled dot-product attention (SDPA) in large language models (LLMs) when processing long contexts.", "weaknesses": "* **Relaxation of Theoretical Guarantees:** The core theoretical contribution provides an \\((\\epsilon, \\delta)\\) guarantee for the *entire* attention output. However, the authors explicitly state that in practice, they use a **relaxation that only guarantees the denominator**. While they provide empirical justification (strong correlation with final error), this significantly weakens the formal \"verified\" claim. The method is no longer provably guaranteed for the final output, but rather for an intermediate component.\n* **Dependence on CLT and Large Sample Assumptions:** The theoretical bounds rely on the Central Limit Theorem (CLT), which holds for \"large enough\" sample sizes. The paper provides an empirical analysis (Appendix E) showing CLT is tighter than Hoeffding's bound, but the validity of the CLT approximation for *all* layers and heads, especially with small budgets, is not rigorously proven. This makes the \"verification\" approximate rather than exact.\n* **Circular Dependency in Budget Calculation:** To compute the required sample size \\(b\\), the algorithm needs to know population statistics like the covariance matrix \\(\\Sigma\\) and the norm \\(||N||_2\\). Since these are unknown, the method uses a **base sample** (governed by \\(f_b\\)) to estimate them. The accuracy of the initial estimate directly impacts the final guarantee, creating a potential circularity that is not fully addressed theoretically.\n* **Sensitivity to Hyperparameters:** The method introduces several new hyperparameters (\\(f_s, f_l, f_t, f_b, \\epsilon, \\delta\\)). While the paper shows a search can find good values, this adds complexity for practitioners compared to simpler methods like top-\\(k\\). The \"natural configuration\" used in AIME2024 is promising but requires validation across more diverse tasks.\n\n* **Computational Overhead of Budget Calculation:** The process of calculating the adaptive budget for each head and query, including drawing a base sample and estimating statistics, introduces **non-trivial overhead**. The paper admits this is done in \"naive PyTorch\" and that a CUDA kernel is future work. 
This overhead could offset the speed gains from sparse attention, especially for shorter sequences or GPU-hosted KV caches.\n* **Dependence on Approximate Top-\\(k\\) Methods:** vAttention's performance is not standalone; it's a framework that incorporates an approximate top-\\(k\\) method (e.g., HashAttention). The results show that **\"more accurate top-\\(k\\) methods are essential for the overall quality.\"** Therefore, vAttention's weaknesses are, in part, the weaknesses of its underlying top-\\(k\\) component. If the top-\\(k\\) method fails to identify crucial \"heavy hitters,\" vAttention's sampling-based tail approximation may not be sufficient to recover.\n* **No End-to-End Speed Evaluation:** The efficiency claims are primarily supported by a model showing near-linear speedup when the KV cache is on the CPU (Figure 5). There is **no comprehensive evaluation of end-to-end latency or throughput** (tokens/second) comparing vAttention to baselines under equal hardware and sparsity budgets. The gains for GPU-resident KV caches are stated but not demonstrated." } ]
4
@inproceedings{ anonymous2025vattention, title={vAttention: Verified Sparse Attention via Sampling}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zzTDulLys0}, note={under review} }
anonymous2025vattention
Phased DMD: Few-step Distribution Matching Distillation via Score Matching within Subintervals
Distribution Matching Distillation (DMD) distills score-based generative models into efficient one-step generators, without requiring a one-to-one correspondence with the sampling trajectories of their teachers. However, limited model capacity causes one-step distilled models to underperform on complex generative tasks, e.g., synthesizing intricate object motions in text-to-video generation. Directly extending DMD to multi-step distillation increases memory usage and computational depth, leading to instability and reduced efficiency. While prior works propose stochastic gradient truncation as a potential solution, we observe that it substantially reduces the generation diversity of multi-step distilled models, bringing it down to the level of their one-step counterparts. To address these limitations, we propose **Phased DMD**, a multi-step distillation framework that bridges the idea of phase-wise distillation with Mixture-of-Experts (MoE), reducing learning difficulty while enhancing model capacity. Phased DMD is built upon two key ideas: **progressive distribution matching** and **score matching within subintervals**. First, our method divides the SNR range into subintervals and progressively refines the model toward higher SNR levels, to better capture complex distributions. Next, to ensure that the training objective within each subinterval is accurate, we conduct rigorous mathematical derivations. We validate Phased DMD by distilling state-of-the-art (SOTA) image and video generation models, including Qwen-Image (20B parameters) and Wan2.2 (28B parameters). Experimental results demonstrate that Phased DMD preserves output diversity better than DMD while retaining key generative capabilities. We will release our code and models.
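A minimal sketch of the phase-scheduling idea, assuming a log-uniform split of the SNR range into one subinterval per phase/expert; the actual boundaries used by the paper are not given in the abstract.

```python
import numpy as np

def snr_subintervals(snr_min=0.01, snr_max=100.0, n_phases=3):
    """Split a log-SNR range into contiguous subintervals, one per expert.
    Log-uniform spacing and the endpoints are illustrative choices only."""
    edges = np.logspace(np.log10(snr_min), np.log10(snr_max), n_phases + 1)
    return list(zip(edges[:-1], edges[1:]))

# snr_subintervals() -> [(0.01, ~0.22), (~0.22, ~4.64), (~4.64, 100.0)]
```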
2026
https://openreview.net/forum?id=zzJTo7ujql
https://openreview.net/pdf/e71773613d64368792595f5adf47cf22041311cc.pdf
[ "Xiangyu Fan", "Zesong Qiu", "Zhuguanyu Wu", "Fanzhou Wang", "Zhiqian Lin", "Tianxiang Ren", "Dahua Lin", "Ruihao Gong", "Lei Yang" ]
ICLR 2026 Conference Withdrawn Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission10813/-/Full_Submission', 'ICLR.cc/2026/Conference/-/Withdrawn_Submission']
poster
[ { "confidence": 4, "date": 0, "rating": 4, "review": "", "review_id": "us3Mj7Oiym", "reviewer": "ICLR.cc/2026/Conference/Submission10813/Reviewer_PJDq", "strengths": "- While the idea of progressive diffusion distillation under various criteria has been explored in previous studies such as [1, 2], the specific idea, splitting the SNR range into sub‑intervals to perform progressive multi‑step DMD coupled with MoE, is simple, novel, and interesting.\n\n- In addition, Section 2.3.2 derives an objective that theoretically enables score distillation within sub‑intervals, which is another strength.\n\n- Although the experimental evaluation is not comprehensive, the main body and supplementary materials suggest that the method successfully distills extremely large-scale models (20B and 28B parameters) for image and video generation.\n\n[1] Tim Salimans, et al. \"Progressive distillation for fast sampling of diffusion models.\" ICLR 2022\n\n[2] Dongjun Kim, et al. \"PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher.\" NeurIPS 2024", "summary": "This work presents Phased DMD. Motivated by the task differences that arise across signal‑to‑noise ratio (SNR) ranges in score‑based models, Phased DMD splits the SNR range into subintervals and performs few‑step DMD progressively, phase by phase. From an architectural perspective, this work also proposes using a Mixture‑of‑Experts (MoE) model where each expert is responsible for a specific SNR subinterval. \n\nWith this design, the proposed method demonstrates better diversity than both the original DMD and the stochastic gradient truncation strategy introduced in Self-Forcing, in a multi‑step distillation setup for image generation.", "weaknesses": "Although the proposed method is new and interesting, the major weakness of this work as a scientific paper is the insufficient experimental evaluation. For instance, diversity is evaluated only for image generation (without video generation), and the metrics (DINOv3 and LPIPS) used do not seem to be standard for image generation. For video generation, the evaluation is limited to optical flow, dynamic degree, and screenshots of generated samples.\n\nTo comprehensively assess the effectiveness of the proposed method, the reviewer suggests conducting a thorough evaluation using the standard benchmarks and metrics from [3–6] including subjective evaluations.\n\n[3] Xun Huang, et al. \"Self Forcing: Bridging the Train-Test Gap in Autoregressive Video Diffusion.\" NeurIPS 2025\n\n[4] Shanchuan Lin, et al. \"Diffusion adversarial post-training for one-step video generation.\" arXiv preprint arXiv:2501.08316 (2025).\n\n[5] Wan, Team, et al. \"Wan: Open and advanced large-scale video generative models.\" arXiv preprint arXiv:2503.20314 (2025).\n\n[6] Seawead, Team, et al. \"Seaweed-7b: Cost-effective training of video generation foundation model.\" arXiv preprint arXiv:2504.08685 (2025)." }, { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "0PyD6fFuKs", "reviewer": "ICLR.cc/2026/Conference/Submission10813/Reviewer_sMYJ", "strengths": "- The phased, SNR-subinterval approach provides a theoretically grounded extension to DMD.\n- This research improves the diversity of image and video distillation based on the extensive experimental results. 
\n- The experiments are conducted with large models, which demonstrates the method's scalability.\n- From my perspective, the paper is easy to read and follow as the mathematical formulas are neat.", "summary": "This research proposes an incremental version of Distribution Matching Distillation (DMD) by dividing the signal-to-noise ratio (SNR) range into multiple subintervals (“phases”) and performing phase-wise score matching within each. The authors claim that this approach increases training stability and preserves generative diversity in both image and video generation tasks.", "weaknesses": "Although the research improves the diversity of distilled results, as claimed, the paper still contains several drawbacks. \n- The paper claims that the proposed approach can improve the diversity of generation. However, the theoretical connection between phase-based SNR learning and improved diversity is weak. The improved results might come from the larger capacity of a mixture of experts, which can store more information, rather than from phase-wise learning.\n- The paper claims that the huge computation cost and large memory footprint of a mixture of experts can be made manageable by using LoRAs; however, the actual training cost increase from multi-phase distillation is not clearly quantified.\n- The experiments need to be more extensive, and more analysis could be added; in particular, the number of phases used for obtaining the generators is an important factor for readers. \n- Some ablation tests should be included for a better understanding of the proposed method." }, { "confidence": 3, "date": 0, "rating": 4, "review": "", "review_id": "pMSbXMsdLO", "reviewer": "ICLR.cc/2026/Conference/Submission10813/Reviewer_B184", "strengths": "1) The proposed method demonstrates increased generation quality and sample diversity compared to the baselines;\n2) Applicability of the method is demonstrated in high-dimensional settings corresponding to the state-of-the-art image and video models.", "summary": "The paper proposes a novel approach, called Phased DMD, for distillation of diffusion models into the few-step generators. The authors propose to formulate the few-step generation process in a manner of MoE (Mixture of Experts): their model generates a sequence of progressively less noisy samples $x_{t_i}$, where the translation $x_{t_{i - 1}} \to x_{t_i}$ is performed by the i-th trainable expert $G_{\phi_i}$. The experts (parameterized by LoRAs) are trained in the curriculum from lower to higher SNR (signal-to-noise ratio) along with the (one, fully trainable) corresponding fake score model, which utilizes score matching on the subintervals $(t_{i}, 1)$. The proposed Phased DMD algorithm demonstrates superior generation quality compared to the \"vanilla DMD\" without losing diversity, as opposed to the SGTS baseline.", "weaknesses": "### Positioning and contributions\n\n1) First, the score matching within subintervals, proposed as one of the key contributions, is not novel. It is a straightforward consequence of the general score identity $\nabla_y \log p_{Y}(y) = \int \nabla_y \log p_{Y | X}(y | x) p_{X | Y}(x | y) d x$, deeply discussed in e.g. [1]. Similar formulation was applied for the subsequent discrete timesteps in DSB [2]. The exact same (as in Phased DMD) subinterval formulation was applied in e.g. [3];\n2) I think there is a significant misunderstanding of the DMD2 paper, referred to as vanilla DMD in the manuscript. The authors state that DMD2 performs backpropagation through the few-step generation process. 
On the other hand, to my knowledge, DMD2 does not propagate gradients through any of the generation steps except the last, thus treating the intermediate samples $x_{t_i}$ as the synthetic (but \"detached\") data, used to tackle the typical mismatch between the input distributions at training and inference. DMD2 thus seems to fit the SGTS scheme, shown in Figure 1(b).\n\n### Experiments\n3) The paper lacks baselines other than simulation-based DMD and DMD with SGTS;\n4) There is almost no quantitative evaluation of the method on image generation tasks (except for the sample diversity in Table 2);\n\n### Writing quality\n5) I think the writing has both overcomplicated notions (with such notations as $\\epsilon \\sim \\mathcal{N}, x_{t_k} = \\text{pipeline}(G_{\\phi_1}, \\ldots, G_{\\phi_k}, \\{t_1, \\ldots, t_k\\}, \\epsilon, \\mathcal{S}), t \\sim \\mathcal{T}(t; t_k, 1), x_t \\sim p(x_t | x_{t_k})$ under expectation) and underexplained notions, which significantly harms readability. For example, how are the scheduler $\\mathcal{S}$ and the pipeline implemented in practice? Does the $i$-th expert deterministically generate the next image with higher SNR, or predict the clean image and add independent noise, as was done in DMD2? What is the parameterization of the few-step generator: clean prediction, $v$-prediction, or something else? Division of the resulting pipeline into few-step (rather than one-step) sub-phases is also underexplained;\n6) The manuscript contains several notational inaccuracies such as $\\nabla x_t$ instead of $\\nabla_{x_t}$ or defining the diffusion model as $F_\theta(x_t)$ without conditioning on the corresponding time step.\n\n[1] Target Score Matching\n\n[2] Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling\n\n[3] A Flexible Diffusion Model" } ]
3
@misc{ fan2025phased, title={Phased {DMD}: Few-step Distribution Matching Distillation via Score Matching within Subintervals}, author={Xiangyu Fan and Zesong Qiu and Zhuguanyu Wu and Fanzhou Wang and Zhiqian Lin and Tianxiang Ren and Dahua Lin and Ruihao Gong and Lei Yang}, year={2025}, url={https://openreview.net/forum?id=zzJTo7ujql} }
fan2025phased
Learning activation functions with PCA on a set of diverse piecewise-linear self-trained mappings
This work explores a novel approach to learning activation functions, moving beyond the current reliance on human-engineered designs like the ReLU. Activation functions are crucial for the performance of deep neural networks, yet selecting an optimal one remains challenging. While recent efforts have focused on automatically searching for these functions using a parametric approach, our research does not assume any predefined functional form and lets the activation function be approximated by a subnetwork within a larger network, following the Network in Network (NIN) paradigm. We propose to train several networks on a range of problems to generate a diverse set of effective activation functions, and subsequently apply Principal Component Analysis (PCA) to this collection of functions to uncover their underlying structure. Our experiments show that only a few principal components are enough to explain most of the variance in the learned functions, and that these components generally have a simple, identifiable analytical form. Experiments using the analytical function form achieve state-of-the-art performance, highlighting the potential of this data-driven approach to activation function design.
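The reviews of this submission (below) report that the two dominant principal components are well approximated by x * tanh(beta * x) and gamma * x, combined into a two-parameter "twish" function. A minimal sketch of that PCA step and the resulting function, with the sampling grid and parameter defaults assumed:

```python
import numpy as np

def twish(x, beta=1.0, gamma=0.1):
    """The two-parameter family the reviews describe:
    twish(x) = x * tanh(beta * x) + gamma * x.
    beta/gamma defaults are placeholders; in the paper they are learnable."""
    return x * np.tanh(beta * x) + gamma * x

def top_components(learned_fns, n_components=2):
    """PCA over learned activation shapes sampled on a shared input grid.
    learned_fns: (n_functions, n_grid_points) array of sampled SLAF outputs."""
    centered = learned_fns - learned_fns.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components]  # rows are principal directions in function space
```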
2026
https://openreview.net/forum?id=zz3El6hqbs
https://openreview.net/pdf/5c2083093945b12142ac89448a624de1f7279d3e.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission19895/-/Full_Submission']
poster
[ { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "6hd51Ytryy", "reviewer": "ICLR.cc/2026/Conference/Submission19895/Reviewer_WARg", "strengths": "- The topic of the submission is very interesting: many aspects of deep learning architectures are iteratively designed to address specific shortcomings. The idea of learning activation functions and then condensing them into efficient and powerful versions is appealing and fits well into the general learning regime of DL. \n- The idea of representing a large class of activation functions with an MLP is neat. I’d be very curious to compare the performance and behavior of models with such learned activation functions to regular models and encourage the authors to add such an analysis. \n- The identification of eigenfunctions seems a powerful idea in this application. Given that the learned activations are piecewise-linear, the identification appears tractable. While this is a nice idea, the lack of details in this part makes it challenging to evaluate.", "summary": "This submission proposes a method to find new formulations for activation functions. Motivated by previous work, that proposes parameterized activation functions, they propose a two step approach. In the first step, they express activation functions as MLPs - based on the universal function approximation theorem. In a second step, they identify the eigenfunction within the learned activation functions, and find a compressed symbolic form with few learned parameters. Empirically, they demonstrate that their activation function performs well on small computer vision datasets.", "weaknesses": "While I appreciate the general idea of the submission, there are several weaknesses. \n- **W1 - experimental evaluation:** The experimental identification of activation functions is based on small and simple image classification problems with 2-layer MLPs. While the evaluation adds a small CNN, both the basis to activation function search as well as the evaluation are very limited. I understand that the method requires repeated training of SLAF models to identify eigenfunction with a corresponding computational burden. However, if the authors are convinced of the merit of their method, they need to evaluate on larger and different domains, different architectures and different tasks in order to claim any general benefit. \n- **W2 - eigenfunction identification:** The method of eigenfunction identification remains unclear. Since that’s at the core of the proposed method and there is ample space within the page limit, the gap is problematic. I strongly encourage the authors to include further details on how they identify eigenfunctions and what function space they consider. \n- **W3 - inductive biases of SLAF:** The expression of the learned activation functions as MLPs is elegant and based in the universal function approximation theorem. That said, related work has shown that NNs generally and MLPs specifically have a bias towards specific parts of the signal, e.g., https://arxiv.org/abs/2403.02241. By induction, the MLP that the authors used for their search inherits that bias, and so it’s not entirely surprising that the activation function that is found is not dissimilar to existing functions. I encourage the authors to discuss these biases, whether or not they are desirable, and how that affects the overall search space." 
}, { "confidence": 3, "date": 0, "rating": 8, "review": "", "review_id": "0zxBuClgvk", "reviewer": "ICLR.cc/2026/Conference/Submission19895/Reviewer_pzcB", "strengths": "Novelty of the Discovery Method: The core strength is the PCA-based methodology. Using PCA on a large ensemble of learned functions (the SLAFs) to distill their essential components is a highly original and intelligent approach to functional discovery.\n\nStrong Empirical Finding: The discovery that just two principal components explain >99.5% of the variance is a powerful and elegant result, strongly suggesting a simple, low-dimensional underlying structure for effective activation functions.", "summary": "This paper proposes a novel, data-driven methodology for discovering new activation functions. Instead of manual design or simple parametric search, the authors first define a \"Self-Learning Activation Function\" (SLAF) as a small, one-hidden-layer MLP, which is effectively a flexible piecewise-linear function. They then train thousands of networks (4608 in total) embedded with these SLAFs on a range of problems (MNIST, Fashion-MNIST, CIFAR-10) and SLAF network sizes to generate a diverse collection of effective activation functions.\n\nThe core contribution is the analysis of this collection. The authors apply Principal Component Analysis (PCA) and find that the first two principal components (PCs) explain over 99.5% of the variance in the learned functions. These two PCs are well-approximated by simple analytical functions. The first component is described as 'x * tanh(beta * x)' (a soft absolute value). The second component is a simple linear function, described as 'gamma * x'.\n\nBy combining these two components, the authors derive a new, two-parameter learnable activation function, which they term twish. It is defined by the expression 'x * tanh(beta * x) + gamma * x'. The authors note that twish is a generalization of the Swish activation function. In validation experiments on simple CNNs, twish is shown to consistently outperform ReLU, pReLU, and Swish in terms of both final test accuracy (particularly on the more complex CIFAR-10 dataset) and convergence speed. The work serves as a strong proof-of-concept for this new PCA-based discovery method.", "weaknesses": "Limited Scale of Validation: This is the primary weakness, which the authors rightly acknowledge. The validation experiments (Section 4) use small datasets (MNIST, CIFAR-10) and very simple CNNs. The true test of a new activation function is its performance and stability in deep, complex models (e.g., ResNets, ViTs) on large-scale tasks (e.g., ImageNet, large NLP corpora). Without this, it's hard to judge if \"twish\" will be broadly useful.\n\nLimited Diversity of SLAF Training: The initial 4608 SLAFs were all trained on simple FFNs for small-scale image classification. It's an open question whether this set is \"diverse\" enough. The discovered PCs might be biased towards this specific task and architecture family." }, { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "N50DGIsgiB", "reviewer": "ICLR.cc/2026/Conference/Submission19895/Reviewer_z1kh", "strengths": "The paper is fairly clearly written.", "summary": "The paper proposes a data-driven method to discover new neural network activation functions. 
It defines a small neural subnetwork (SLAF) that learns its own activation mapping during training, and then applies Principal Component Analysis (PCA) to thousands of these learned functions trained on small, classical datasets (MNIST, FashionMNIST, and CIFAR-10). The claim is that the first two principal components explain nearly all the variation and lead to a new function they call twish, defined as $f(x; \\beta, \\gamma) = x \\tanh(\\beta x) + \\gamma x$, which is a slight generalization of the Swish activation. Experiments on simple convolutional neural networks are presented, illustrating marginally better accuracy and faster convergence compared with ReLU, PReLU, and Swish.", "weaknesses": "This work presents a slight generalization of a popular activation function as the key contribution. Despite the work being experimental in nature, the conclusions as to the value of this new activation function are drawn based on toy datasets (MNIST, FashionMNIST and CIFAR10). In my view, the experiments are not adequate to support any substantive claims; for this paper to be interesting, I think you would need very strong evidence. I also don't see why the ideas behind the derivation of this activation function are particularly new; there are many techniques and approaches for deriving activation functions. In addition, despite the supposed benefit being that we derive activations from data, the key finding appears to be something close to what we already use." }, { "confidence": 3, "date": 0, "rating": 2, "review": "", "review_id": "7zIF5QmTvZ", "reviewer": "ICLR.cc/2026/Conference/Submission19895/Reviewer_QDUi", "strengths": "- Data-driven discovery: Uses PCA over a large pool of learned SLAFs to quantify shape diversity and reveal a compact, interpretable 2-D structure of activation functions. \n- Clear, reproducible pipeline: SLAF training → uniform sampling on a fixed grid → PCA → analytic fitting of PCs → definition of Twish. Simple and easy to replicate.\n- Strong quantitative support: The first two principal components explain ~99.5% of the variance, providing a solid justification for dimensionality reduction. \n- Empirical signal: On small CNNs (e.g., CIFAR-10), Twish shows faster convergence and consistently higher accuracy than ReLU, pReLU, and Swish.\n- Practical value: Offers a lightweight alternative to large parameter searches for activation design, turning observed functional modes into a compact parametric family.", "summary": "This paper proposes a data-driven method for discovering activation functions. \nA simple ReLU-based MLP is trained as a Self-Learned Activation Function (SLAF), and the shapes of the resulting functions — obtained from various datasets and network widths — are analyzed using PCA.\nThe top two principal components explain ~99.5% of the variance, corresponding respectively to $x\\tanh(\\beta x)$ and $\\gamma x$.
\nBuilding on this observation, the authors define a new activation function Twish, \n$f(x; \\beta, \\gamma) = x\\tanh(\\beta x) + \\gamma x$.\nExperimental results show that Twish achieves faster convergence and higher accuracy than ReLU, pReLU, and Swish across benchmark datasets such as MNIST, FashionMNIST, and CIFAR-10.", "weaknesses": "- Constrained search space: SLAFs are ReLU-based and thus piecewise-linear; the approach may bias discoveries toward piecewise-linear shapes and under-explore smooth families.\n- Preprocessing dependence: PCA is performed on BN-normalized pre-activations sampled only on [−4,4]; stability to the choice of range/resolution/normalization is not established.\n- Limited scale: Experiments focus on small datasets and shallow models; generalization to ResNet/ViT/ImageNet or transformer LMs remains untested.\n- Baseline coverage: Direct, controlled comparisons against modern smooth activations (e.g., GELU, Mish, ELU, SELU) under the same setup are missing.\n- Implementation specifics: The sharing/initialization/constraints of the learnable parameters $(\\beta, \\gamma)$ (layer, channel, or unit-level) are insufficiently detailed for full reproducibility." } ]
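The reviews pin the derived function down exactly: twish is $f(x; \beta, \gamma) = x\tanh(\beta x) + \gamma x$ with two learnable parameters. A minimal PyTorch sketch follows; treating $(\beta, \gamma)$ as layer-level scalars is an assumption here, since reviewer QDUi notes the paper leaves this granularity underspecified.

```python
import torch
import torch.nn as nn

class Twish(nn.Module):
    """twish(x) = x * tanh(beta * x) + gamma * x, as described in the reviews.

    Layer-level sharing of (beta, gamma) is an assumption; the paper's
    granularity (layer, channel, or unit) is reportedly unspecified.
    """
    def __init__(self, beta: float = 1.0, gamma: float = 0.0):
        super().__init__()
        self.beta = nn.Parameter(torch.tensor(beta))
        self.gamma = nn.Parameter(torch.tensor(gamma))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.tanh(self.beta * x) + self.gamma * x
```

Since $\tanh(z) = 2\sigma(2z) - 1$, twish rewrites as $2x\sigma(2\beta x) + (\gamma - 1)x$: a Swish term plus a linear term, which makes the "generalization of Swish" observation in the reviews concrete.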
4
@inproceedings{ anonymous2025learning, title={Learning activation functions with {PCA} on a set of diverse piecewise-linear self-trained mappings}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zz3El6hqbs}, note={under review} }
anonymous2025learning
Sobolev acceleration for neural networks
$\textit{Sobolev training}$, which integrates target derivatives into the loss functions, has been shown to accelerate convergence and improve generalization compared to conventional $L^2$ training. However, the underlying mechanisms of this training method remain incompletely understood. In this work, we show that Sobolev training provably accelerates the convergence of Rectified Linear Unit (ReLU) networks and quantify such `Sobolev acceleration' within the student--teacher framework. Our analysis builds on an analytical formula for the population gradients and Hessians of ReLU networks under centered spherical Gaussian input. Extensive numerical experiments validate our theoretical findings and show that the benefits of Sobolev training extend to modern deep learning tasks, including diffusion models.
2,026
https://openreview.net/forum?id=zz06hwkH37
https://openreview.net/pdf/c051d040c4fd039cab69daed99bece8b60144928.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission23675/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission23675/-/Rebuttal_Revision']
poster
[ { "confidence": 3, "date": 0, "rating": 4, "review": "", "review_id": "Z9CKDs5NgD", "reviewer": "ICLR.cc/2026/Conference/Submission23675/Reviewer_VVCF", "strengths": "This paper presents several key strengths, most notably its establishment of the first rigorous theoretical framework for Sobolev acceleration, a phenomenon previously supported only empirically. It offers considerable analytical depth by deriving exact formulas for population gradients and Hessians under a student-teacher setup. The work successfully identifies the improved conditioning of the Hessian as the core mechanism behind the acceleration, providing a clear and compelling explanation. Furthermore, the theoretical findings are substantiated by extensive and systematic experiments that demonstrate the phenomenon's persistence across various architectures, activation functions, and modern deep learning tasks like diffusion models.", "summary": "This paper analyze shallow ReLU networks under a student-teacher framework with Gaussian inputs, deriving exact expressions for population gradients and Hessians. They prove that Sobolev training improves the condition number of the Hessian and accelerates gradient flow dynamics. While this theoretical analysis offers valuable insights, its practical applicability is constrained by several strong assumptions.", "weaknesses": "The theoretical analysis relies on strong and idealized assumptions, including standard Gaussian inputs and shallow ReLU networks, which limits its direct applicability to real-world scenarios. The practical benefits are also contextualized with a limited baseline, as Sobolev training is compared only against standard L² loss, leaving its advantage over other advanced regularization or optimization techniques an open question. The argument for generalizability beyond the theoretical setting leans heavily on empirical results, as a formal extension to non-Gaussian data or deep architectures is not provided. Finally, while the reported computational overhead is low, a thorough theoretical analysis of the peak memory and computational cost associated with calculating the required derivatives is absent, which is crucial for assessing its practical efficiency." }, { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "ySRhTswViP", "reviewer": "ICLR.cc/2026/Conference/Submission23675/Reviewer_gVmd", "strengths": "The paper is well written.", "summary": "This work presents a theoretical framework proving that Sobolev training accelerates the convergence of Rectified Linear Unit (ReLU) networks. Under a student–teacher framework with Gaussian inputs and shallow architectures, they derive formulas for population gradients and Hessians, and quantify the improvements in conditioning of the loss landscape and gradient-flow convergence rates.", "weaknesses": "The assumptions (Assumption 2.2 (Two-layer ReLU network) and Assumption 2.3 (Gaussian population)) seem very restrictive." }, { "confidence": 3, "date": 0, "rating": 4, "review": "", "review_id": "i5ih499P5B", "reviewer": "ICLR.cc/2026/Conference/Submission23675/Reviewer_SpAo", "strengths": "These are the strengths of the paper:\n- The paper presentation of the new theoretical results is overall clear and easy to follow. 
There is enough description to understand what is going on in the paper.\n- The overall writing of the paper is good and easy to understand.\n- The paper grounds its assumptions in existing works from the literature.", "summary": "The paper’s main contribution is to provide theoretical guarantees on acceleration when using Sobolev training (Sobolev training adds first-order information to the loss function) for neural networks. The paper formally demonstrates this for the very particular case of ReLU networks with one neuron or multiple neurons in a one-hidden-layer network – in the case of multiple neurons, the results are on the gradient flow approximation. After the theoretical results, the paper shows multiple experimental results of Sobolev acceleration on settings well beyond the ones in the paper.", "weaknesses": "I have a series of comments, questions and concerns about the paper. I am willing to increase my score depending on the authors' response to them.\n\n\n**>> Comments/concerns/questions:**\n- The work (Cocola & Hand, 2020) is cited as previous work that has studied the convergence of Sobolev training. (Cocola & Hand, 2020) shows results that are probabilistic since they assume Gaussian random initialization of the weight parameters (**note that we are not talking about the data being Gaussian, but the weights being initialized as Gaussian**). This is the type of initialization that:\n 1. is very common in recent theoretical studies on training convergence, such as (Du et al, 2018; Allen-Zhu et al, 2019; Arora et al., 2019) – the three works mentioned in the introduction of the paper and which, in the case of the first two, consider networks that are not necessarily shallow.\n 2. is **widely used in practice** (indeed, I am not aware of practical deep learning solutions that do not use random initialization, or even Gaussian initialization).\nThen, what is the motivation for the authors to consider a different, deterministic, and arguably more restrictive type of initialization in their paper? Please note that (Cocola & Hand, 2020) already uses Gaussian initialization for Sobolev training, which is a more practical initialization.\n- Have the authors checked that none of the results by (Cocola & Hand, 2020) already indicate that Sobolev training could accelerate training dynamics? Can the authors discuss the possibility of using the results by (Cocola & Hand, 2020) in order to show Sobolev acceleration using Gaussian initialization of the weights? \n\n- ReLUs are not differentiable in their whole domain (they are not at the origin). How is it that one can simply write their gradient in equations like (1) without addressing this issue? How do the authors address the issue of the non-differentiability at the origin during their analysis?\n- It is inconsistent to mention that $w$ is a matrix of dimension $d\\times{K}$ in Assumption 2.2 while the statement of Theorem 2.5 refers to $w$ as a vector. Please, clarify.\n- In Theorem 2.10, the function $\\lambda(\\theta)$ can potentially be zero, in which case there will not be Sobolev acceleration. This seems to be for the specific value of $\\theta = 0$, which means that both $w$ and $w^\\*$ are parallel (zero angle, if I am not mistaken). It is possible that this could happen even when $w\\neq w^\\*$ since both vectors can have different magnitudes. Can the authors make a comment about this case, since this is the case where acceleration may not happen?
The condition $||w^0-w^\\*||<||w^\\*||$ may help during initialization, but not necessarily during training; can the authors comment on that? Likewise, I am suspicious that something like this also happens in Theorem 2.11, though no specific function of $\\theta$ is shown – is this true? Can the authors also make a comment on this?\n- What are $\\lambda_1$ and $\\lambda_2$ in (2) from Theorem 2.12? They have not been defined before. How do they indicate positive definiteness of $M_3$? \n- Theorem 2.13 is a linearized system, so **it is not** a general result for **global** convergence. Can the authors clarify this in the paper? If so, the dynamical system in Theorem 2.13 is only valid in a local neighborhood around $w^\\*$ (which could be a very small one). This needs to be made explicit and addressed in the paper.\n- The paper’s contribution and focus are theoretical, so I expect the experiments to point towards such results. I understand the authors have tried to do that to some extent in subsection 3.1., but I have an additional suggestion. If we look at the paper, all theoretical results for more than one neuron were done using gradient flows. Gradient flows, as I understood from the authors too, **do not** directly translate to gradient descent (GD) (and even less to stochastic gradient descent (SGD)). Thus, it would be interesting to simulate both the trajectories of the loss of the gradient flow, for which closed-form expressions are known as in (3) from Theorem 2.12., and compare them to the ones one would obtain by GD – both under the same initialization.\n- Again, regarding the experiments, I understand the idea of experimenting with settings beyond the ones from the theoretical results; however, the experiments on autoencoders and diffusion models seem topic-wise very remote from the rest of the paper – they feel out of place and do not seem to contribute to this theoretical paper as much. Moreover, I have two things to point out: \n - In the autoencoder experiments, it is mentioned that (Yu et al., 2023) has already studied them. How are the authors’ experiments different from (Yu et al., 2023)? Otherwise, what’s the contribution?\n - Has anyone ever used Sobolev training on diffusion models before the authors? If not, this is something new and a possible contribution. However, as I mentioned earlier, though being a contribution, it seems very far in scope from the theoretical part of the paper: diffusion models such as DDPM use U-Nets, which are vastly different in architecture from just feedforward ReLUs (like the ones in the theoretical part of the paper)! Indeed, a comprehensive empirical study of Sobolev training (and modifications of it) on different diffusion models could potentially result in its own paper that the authors could as well write and publish.\n\n\n\n**>> Other things:**\n- The first paragraph of the introduction should also mention the use of Neural Operators, which use neural networks for approximating a map between two functional spaces. They have been used in scientific computing applications – e.g., see Fourier Neural Operators and Deep Operator Networks (DeepONets).\n- Lines 037-038: why is capturing the derivatives “an essential feature of many modern applications”? Citations are needed.\n- Lines 066-067: it says that the condition number “governs the convergence rate of many optimization algorithms”. Which algorithms are those?
References are needed.\n- Lines 211-222: it is mentioned that the faster convergence for gradient flow “typically reflects a larger minimum eigenvalue of the Hessian”. A citation or a simple case illustrating this claim is needed.\n- In Theorem 2.12, the symbol “$x$” is used as a coefficient. The problem is that this same symbol has been used before to denote the input data to the model. I suggest using a different symbol.\n- The last part of Theorem 2.10 sounds a little bit informal and qualitative: it uses the expression “much more accelerated”. My suggestion is to remove the part after the last comma of the last sentence of Theorem 2.10, and simply say that “there is more acceleration as $\\theta$ increases.”" }, { "confidence": 3, "date": 0, "rating": 4, "review": "", "review_id": "wIX56OSpVX", "reviewer": "ICLR.cc/2026/Conference/Submission23675/Reviewer_nnz1", "strengths": "1. To the best of my knowledge, this is the first paper that provides a theoretical verification of the Sobolev Acceleration phenomenon for neural networks from an optimization perspective. While the analysis focuses on a simplified setting (Gaussian input and a single hidden-layer ReLU network), it represents a meaningful step forward. Previous work (Lu et al., 2022) studied Sobolev acceleration only in the RKHS regime, rather than directly on neural networks.\n\n2. Despite being primarily a theory paper, the authors include a comprehensive set of numerical experiments that convincingly support their theoretical claims. These experiments span multi-layer ReLU networks with different activation functions, CNN-based denoising autoencoders, and even diffusion models, which go well beyond the simple theoretical setup. Across all cases, the results consistently demonstrate that training with Sobolev loss leads to faster convergence than standard $L^2$-based training.", "summary": "The paper intends to establish theoretical guarantees for the phenomenon known as \"Sobolev Acceleration\", which is the empirical observation that training neural networks with loss functions defined in Sobolev norms (i.e., including derivatives of the target function) often leads to faster convergence compared to standard $L^2$-based training.\n\nThe authors focus on a one-hidden-layer ReLU neural network and assume that the input data follows a standard Gaussian distribution. The study begins with the simplest case of a ReLU network with a single ReLU neuron ($K=1$). The authors derive an exact analytical expression for the Hessian of the Sobolev loss function, then prove that the condition number of this Hessian is strictly smaller than that of the corresponding Hessian for the $L^2$ loss. This result implies that optimization algorithms such as gradient descent may converge more rapidly when trained with a Sobolev-type loss, due to the improved conditioning of the loss landscape.\n\nThe paper further studies the dynamics of weight convergence, specifically how the Sobolev loss influences the convergence rate of the squared parameter error $||w-w^\\ast||^2$, where $w^*$ is assumed to be the true weight parameter of the ReLU network. It is shown that in the general case of $K\\geq 1$ ReLU neurons, Sobolev training leads to a faster decay of the error $||w-w^\\ast||$ (i.e., faster convergence of the weight parameter).", "weaknesses": "1. The paper's technical depth appears insufficient for ICLR.
The theoretical analysis is restricted to a highly simplified setting, requiring the input distribution to be standard Gaussian and the neural network to have only one hidden layer. Moreover, the core result on Hessian conditioning is established only for network containing a single ReLU neuron, which limits its generality. Although the authors extend the weight convergence analysis to the multi-neuron case, this extension relies on the strong assumption that the true weight vectors $\\{w_j^\\ast\\}$ form an orthonormal basis, which is rarely realistic in practical networks. Overall, these simplifications make the theoretical contributions feel narrow and somewhat lacking in depth relative to the expectations of ICLR-level theoretical work.\n\n2. The paper clearly establishes that the Sobolev loss leads to a smaller Hessian condition number compared to the standard $L^2$ loss. However, the link between this improved conditioning and acceleration in convergence is not thoroughly explained. While the authors briefly mention that optimization algorithms such as gradient descent tend to converge faster when the loss landscape has a smaller condition number, no formal justification or citation is provided to support this claim. It would strengthen the paper to clarify whether this relationship is theoretically well established (and under what assumptions) or to provide references from the optimization literature. Moreover, it remains unclear whether the same argument extends to other optimization algorithms beyond vanilla gradient descent." } ]
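For context on the objective under review: Sobolev training augments the value-matching $L^2$ loss with a derivative-matching term, roughly $\mathcal{L} = \mathbb{E}\,(f_\theta(x)-f^*(x))^2 + \lambda\,\mathbb{E}\,\lVert \nabla_x f_\theta(x) - \nabla_x f^*(x)\rVert^2$. A minimal sketch, assuming target derivatives are available (e.g., from the teacher network in the student–teacher setting); the weight `lam` is illustrative, not taken from the paper.

```python
import torch

def sobolev_loss(model, x, y_target, dy_target, lam=1.0):
    """L2 loss on values plus L2 loss on input-gradients (an H^1-type loss).

    dy_target holds the target derivatives dy*/dx; lam weights the
    derivative term. Both names are illustrative, not from the paper.
    """
    x = x.requires_grad_(True)
    y = model(x)
    # create_graph=True so the derivative penalty is itself differentiable
    dy = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
    value_term = ((y - y_target) ** 2).mean()
    deriv_term = ((dy - dy_target) ** 2).mean()
    return value_term + lam * deriv_term
```

Note the `y.sum()` trick yields per-sample input gradients only because each sample's output depends on its own input alone; for vector-valued outputs one would match the full Jacobian instead.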
4
@inproceedings{ anonymous2025sobolev, title={Sobolev acceleration for neural networks}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zz06hwkH37}, note={under review} }
anonymous2025sobolev
MINT: Causally Tracing Information Fusion in Multimodal Large Language Models
Multimodal Large Language Models (MLLMs) have demonstrated impressive performance on tasks that involve understanding and integrating information across different modalities, particularly vision and language. Despite their effectiveness, the internal representations of these Vision Language Models (VLMs) remain poorly understood, making it difficult to interpret their predictions or identify the causes of common errors. A crucial step toward improved interpretability is understanding how visual and textual signals fuse within the language decoder of these models. This integration process is particularly important since failures to properly combine modalities frequently lead to errors such as object hallucinations and incorrect spatial descriptions. In this paper, we systematically investigate the internal mechanisms of multimodal fusion in three representative VLMs: LLaVA-1.5-7B, DeepSeek-VL2-Tiny, and Qwen2-VL-7B. We propose MINT (Multimodal INtervention Tracing), a method that builds on the principle of hidden state patching to create a causal map of multimodal processing by systematically intervening at each layer of the language decoder. From these maps, we identify a critical region we term the `fusion band'—the decisive window of layers where visual and linguistic signals are actively fused to guide the model's output. Our analysis reveals that the location and width of this band are not uniform across models; they highlight fundamental differences in their fusion mechanisms that directly correlate with a model's ability to resolve contradictions, ground language, and perform complex spatial reasoning. This causal mapping offers a diagnostic framework to explain common VLM failures and inform future architectural design.
2,026
https://openreview.net/forum?id=zyu1tXMcbh
https://openreview.net/pdf/b8b86038e600dd05d4b796221a461ee4c688e0a4.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission22929/-/Full_Submission']
poster
[ { "confidence": 4, "date": 0, "rating": 6, "review": "", "review_id": "mRxJcajRUA", "reviewer": "ICLR.cc/2026/Conference/Submission22929/Reviewer_qNLH", "strengths": "1. The introduced probing method MINT, is systematic and causal method to trace multimodal fusion within VLMs, advancing beyond correlational analyses and offering a concrete tool for understanding internal model mechanisms.\n\n2. By analyzing widely-used MLLMs, the study reveals model-specific fusion patterns, providing generalizable insights into how architectural differences affect grounding and reasoning capabilities.\n\n3. The discovery offers a clear, interpretable indicator of where visual–textual integration occurs, enabling targeted diagnosis of common VLM failures (e.g., hallucination, spatial errors) and guiding future multimodal model design.", "summary": "This paper investigates how visual and textual information are fused inside Multimodal Large Language Models (MLLMs) to better understand their internal reasoning and interpret their errors. The authors introduce MINT, a causal analysis method based on hidden-state patching that systematically intervenes in each decoder layer to trace where and how multimodal integration occurs. Applying MINT to LLaVA-1.5-7B, DeepSeek-VL2-Tiny, and Qwen2-VL-7B, the study identifies a “fusion band”—a specific range of layers where visual and textual signals interact most strongly. The paper finds that the position and width of this fusion band vary across models, reflecting distinct fusion mechanisms that correlate with capabilities in grounding, contradiction resolution, and spatial reasoning. Overall, the work provides a causal interpretability framework for diagnosing and comparing VLMs, offering insights for future model design and multimodal understanding.", "weaknesses": "1. Lines 326–328 state that “answering yes can only stem from the visual semantic.” It would be more convincing to empirically verify that the those prompts containing <category name> do not contain textual token biases toward “yes.” In other words, the models should be tested to ensure they do not produce “yes” responses when the visual input is patched with the placeholders.\n\n2. The description of “patching in a clean visual representation” in Section 5.4 is somewhat ambiguous. It would help to clarify whether this refers to the output of the multimodal projector or another specific representation. Clearer definition and justification are needed to support the validity of the intervention results.\n\n3. The conclusion of a “fundamental representational failure” for DeepSeek-VL2 (Line 397) seems overstated. As discussed in Sections 5.2 and 5.3, DeepSeek-VL2 performs visual grounding in the later decoder layers, a behavior distinct from the other two models. Thus, manually patching visual tokens might introduce an excessively strong signal rather than revealing an inherent failure.\n\n4. The analysis would be more convincing if extended to larger models. Observations drawn from smaller models may not generalize to deeper models, potentially limiting the robustness of the conclusions." }, { "confidence": 3, "date": 0, "rating": 4, "review": "", "review_id": "3vcOI7BZwi", "reviewer": "ICLR.cc/2026/Conference/Submission22929/Reviewer_K6zq", "strengths": "1. 
MINT adapts hidden-state patching, previously used in unimodal settings, to the language decoder of VLMs. This overcomes the limitation that traditional probe analyses (such as lightweight classifier probes) can only capture correlations: through causal intervention, MINT directly locates the layers where visual and textual information fuse, filling the research gap of a \"multimodal fusion causal map\". The \"fusion band\" provides a unified index for quantifying the fusion modes of different VLMs, which has strong theoretical significance.\n2. The model selection takes into account different architecture types (CLIP + Vicuna, SigLIP + MoE decoder, custom vision adapter + dedicated decoder), and the datasets cover spatial inference, NegBench, and MS COCO, comprehensively verifying the fusion mechanism across tasks. The evaluation metrics are clearly defined, and the appendix adds a bootstrap statistical-significance analysis to support the reliability of the results.", "summary": "This paper focuses on the interpretability of multimodal information fusion in vision-language models (VLMs). It proposes the MINT method, which constructs causal maps through hidden-state patching, identifies the key fusion region termed the \"fusion band\", and verifies the model specificity of the fusion mechanism on three representative VLMs (LLaVA-1.5-7B, DeepSeek-VL2-Tiny, Qwen2-VL-7B). It also enables the diagnosis of common failures such as spatial-reasoning errors and negation-understanding bias.", "weaknesses": "1. The paper finds significant differences in the position and width of the \"fusion band\" across models (e.g., early wide fusion in Qwen2-VL, late decentralized fusion in LLaVA-1.5), but does not analyze in depth the direct relationship between architectural design and fusion mode. For example, how do architectural differences such as Qwen2-VL's visual adapter, LLaVA-1.5's CLIP + Vicuna splicing architecture, and DeepSeek-VL2's MoE decoder specifically affect the timing and scope of fusion? The existing discussion only mentions \"architectural differences\" and lacks quantitative or qualitative supporting arguments.\n2. MINT patches hidden states one layer at a time, but does not explain how cross-layer fusion interactions are handled: if fusion at one layer relies on the modal representations of preceding layers, will single-layer patching underestimate or mislocate the key fusion layers?\n3. Text grounding experiments show that text embeddings carry visual information only weakly and only in early layers, and that the model relies mainly on direct visual attention. But the paper does not further explore whether enhancing text grounding can improve fusion: for example, if the text encoder were fine-tuned to carry visual information earlier, would that narrow the \"fusion band\" or improve fusion accuracy?" }, { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "LnzTe9Iqtr", "reviewer": "ICLR.cc/2026/Conference/Submission22929/Reviewer_os6o", "strengths": "1. The introduction of MINT moves beyond correlation-based probing toward causal tracing through hidden-state patching. The idea of swapping intermediate representations is simple and conceptually transparent. It makes sense that changes in output can be causally attributed to specific layers or modalities.\n2.
The study evaluates multiple major VLMs across benchmarks, helping with meaningful cross-model comparison. The identification of the proposed \"fusion bands\", and the detailed mapping of which layers contribute to visual grounding or fail, provide generalizable findings and valuable insights for targeted model improvement.", "summary": "Vision-Language Models (VLMs) integrate visual and textual information through complex internal mechanisms that remain poorly understood. This paper presents MINT (Multimodal INtervention Tracing), a causal framework for analyzing how and where information fusion occurs within multimodal decoders. Instead of relying on correlation-based probing, MINT performs representation patching, systematically swapping hidden states between different runs, to trace the causal influence of visual and textual inputs layer by layer. Through experiments on multiple VLMs, the authors identify characteristic fusion bands and model-specific fusion patterns, showing how visual information overrides language priors. The additional application shows that the framework also allows the diagnosis of failure cases.", "weaknesses": "1. While the framework is clearly presented, its core technique, hidden-state patching, is not novel. The contribution primarily adapts this existing approach to the multimodal setting rather than introducing a fundamentally new causal mechanism. The study could be made more compelling by extending the intervention beyond image features to also include text patching, enabling a fuller analysis of bidirectional information flow between modalities.\n2. The paper claims to present the first empirical map of the \"fusion band\", yet the term itself is newly introduced and lacks grounding in prior literature. Moreover, the evaluation framework relies entirely on in-house binary tasks and custom metrics (override accuracy, flip rate, and failure depth), which are defined within this paper. This makes it difficult to compare results or validate the claimed novelty against established evaluation standards.\n3. All main experiments adopt a classification-style prompt, focusing on binary outputs rather than analyzing finer-grained probability shifts that could reveal more nuanced causal effects. This, in fact, again places this work among output-level analyses rather than deeper investigations into the internal latent representations of VLMs. Consequently, the experimental setup feels shallow and does not fully leverage existing multimodal benchmarks. The experiments are also incomplete and inconsistent. For instance, LLaVA is not included in all analyses. Also, there is no direct comparison with existing interpretability or causal probing methods, despite their discussion in the related work section." }, { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "SJh3hG73Rv", "reviewer": "ICLR.cc/2026/Conference/Submission22929/Reviewer_d5Wo", "strengths": "- Clear and Well-Structured: The paper is well-organized, with detailed explanations of the preliminary, intuition, and methodology.\n\n- Extensive Evaluations: Three representative VLMs are investigated with the proposed framework, accompanied by comprehensive analysis and discussion.\n\n- Interesting Findings: The experiment results offer some findings on the multimodal processing of VLMs.", "summary": "This paper introduces MINT, a novel framework designed to construct a causal map of multimodal processing in vision-language models (VLMs) by leveraging hidden-state patching techniques. 
The authors conduct experiments to investigate the internal mechanisms of LLaVA-1.5-7B, DeepSeek-VL2-Tiny, and Qwen2-VL-7B. The results reveal some insights into the 'fusion band' of VLMs.", "weaknesses": "- My major concern is that the core techniques incorporated in MINT are based on the patching method introduced in the previous work, **Patchscopes**. While the derivation is clear and well-presented, it does not introduce fundamentally new concepts but rather applies existing methods in a different context.\n\n- The models used in the experiments are somewhat outdated — all evaluated models were released over 12 months ago; a more comprehensive evaluation using more recent and stronger VLMs would strengthen the manuscript.\n\n- While they find variation across models, it’s unclear why some models adopt early vs late fusion. Are these choices by design (architecture) or emergent? Are they correlated with performance tradeoffs?" } ]
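Because every review above hinges on hidden-state patching, a minimal sketch of the underlying intervention may help orient readers: cache a hidden state from a clean run, inject it into a corrupted run at one decoder layer, and measure how the output shifts. The `model.model.layers` path and the single-layer, all-positions patching granularity are assumptions (LLaMA-style decoders expose this path; MINT's exact procedure is not specified at this level of detail in the reviews), and the two inputs must share a sequence length for the swap to be well defined.

```python
import torch

@torch.no_grad()
def layer_patching_effect(model, clean_inputs, corrupt_inputs, layer_idx):
    """Cache the clean hidden state at one decoder layer, inject it into the
    corrupted forward pass, and return the shift in final-token logits."""
    layer = model.model.layers[layer_idx]  # LLaMA-style path (an assumption)
    cache = {}

    def save_hook(module, inputs, output):
        cache["h"] = output[0]             # decoder layers return a tuple

    def patch_hook(module, inputs, output):
        return (cache["h"],) + output[1:]  # overwrite with the clean state

    handle = layer.register_forward_hook(save_hook)
    model(**clean_inputs)                  # populates cache["h"]
    handle.remove()

    corrupt_logits = model(**corrupt_inputs).logits
    handle = layer.register_forward_hook(patch_hook)
    patched_logits = model(**corrupt_inputs).logits
    handle.remove()

    return (patched_logits - corrupt_logits)[:, -1].norm().item()
```

Sweeping `layer_idx` over all decoder layers yields a per-layer causal profile; in the paper's terminology, the layers where this effect peaks would be the "fusion band".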
4
@inproceedings{ anonymous2025mint, title={{MINT}: Causally Tracing Information Fusion in Multimodal Large Language Models}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyu1tXMcbh}, note={under review} }
anonymous2025mint
DoMiNO: Down-scaling Molecular Dynamics with Neural Graph Ordinary Differential Equations
DoMiNO: Down-scaling Molecular Dynamics with Neural Graph Ordinary Differential Equations
2,026
https://openreview.net/forum?id=zyq1JIuIhL
https://openreview.net/pdf/99983c740e057ab5240b1e4426d5c4a9fe111da6.pdf
[ "Fang Sun", "Zijie Huang", "Yadi Cao", "Xiao Luo", "Wei Wang", "Yizhou Sun" ]
ICLR 2026 Conference Withdrawn Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission13342/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission13342/-/Rebuttal_Revision', 'ICLR.cc/2026/Conference/-/Withdrawn_Submission']
poster
[ { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "vzanZOtJ1N", "reviewer": "ICLR.cc/2026/Conference/Submission13342/Reviewer_LGqt", "strengths": "The authors tackle an important problem with a creative and, in principle, intuitive idea. The reduced scaling from O(T) down to O(KT^1/K) is significant and the empirical results are encouraging", "summary": "The paper proposes a hierarchical neural graph ode to simulate molecular trajectories on K different time scales, resulting in a reduced time complexity scaling, from O(T) down to O(KT^1/K) with K=3 in practice. The model uses K distinct graph neural networks that are integrated in time, and the latent features of the previous coarsest time scales feed into the features of the finer timescale. The authors show reduced mean-squared errors in forward time step prediction compared to a handful of time series prediction models on some small-scale molecular benchmarks of single-molecule trajectories.", "weaknesses": "While the idea has potential and initial results are encouraging, i am currently not convinced that the results generalize to practical settings:\n\nOnly small-scale single-molecule trajectories are used, and the model only evaluates MSE. This leaves several open questions:\n1. How does the model transfer between molecules?\n2. How does the model generalize to larger more interesting systems?\n3. Can the model successfully predict transitions between different conformations not seen during training? Can the model, for example, simulate the folding of proteins or reactions of molecules?\n4. Currently, there is no temperature or initial velocity input. The velocity/temperature is implicitly inherited from the training data. This is a very limited setting, in particular for non-biological tasks where we dont want to retrain models for each new temperature setting.\n5. As there is no notion of an energy function, the learned ode is not enforced to be conservative. Therefore, there are no guardrails against energy drifts, and the model sampling energetically inaccessible states.\n6. No real-time comparisons or memory measurements are given. This makes it unclear how significant the time savings really are\n7. No thermodynamical observables are calculated. For example, the radius of gyration and time correlation functions would be easy out-of-the-box observables that could be reported\n8. The models are trained with 10% of MD17 frames, which amounts to tens to almost a hundred thousand training examples if the pytorch geometric datasets are being used. The dataset authors specify that not more than 1000 samples should be used to avoid data leakage: https://archive.materialscloud.org/records/pfffs-fff86\n9. The only baseline model that is explicitly designed for the prediction of large time steps in molecular systems is ITO. However, ITO is a distribution-level model; it tries to predict distributions at a time in the future, not individual samples. Comparing MSE of individual samples is therefore not very meaningful and inflates ITO's error. A fair comparison would need to sample some initial distribution, push the ensemble of states forward in time with DoMiNO and the distribution with ITO, and then compare the models on statistical divergence metrics, not MSE." 
}, { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "TYx95I3uei", "reviewer": "ICLR.cc/2026/Conference/Submission13342/Reviewer_STZw", "strengths": "The authors report a significant reduction in MSE (although it is unclear to me of what, see weaknesses below)\nComposing neural graph ODEs that specialize on different time step sizes is interesting and novel", "summary": "The authors introduce a framework for learning molecular dynamics trajectories. The core idea is to use multiple Neural ODEs that target different time step sizes. The initial state (atom positions) is encoded and solved by the coarsest Neural ODE (largest timesteps). The resulting latent states are then passed on to the Neural ODEs at finer levels (smaller timesteps). The latents at each level are combined using attention to output the final predictions. Each Neural ODE consists of EGNN layers. The authors evaluate on MD17 and MD22 molecules.", "weaknesses": "1. Clarity of the experiments: \n- In table 1, what is the MSE of? Average in the difference in atom positions over a rollout of a certain length?\n- The main motivation of the paper is that MD with MLIPs are slow and accumulate error. This claim is not properly verified or compared to their method. Inference time is not compared, and the error drift experiments lack a proper MLIP and do not mention important parameters like time step size\n2. I am unsure how technically sound the idea of multiple Neural ODEs at different time step sizes is. Shouldn’t Neural ODEs be implicit representations of the continuous dynamics?\n3. The approach requires training data from trajectories, while MLIPs can be trained with unordered samples from e.g. MC?" }, { "confidence": 4, "date": 0, "rating": 6, "review": "", "review_id": "GzQvz15a1A", "reviewer": "ICLR.cc/2026/Conference/Submission13342/Reviewer_JFsL", "strengths": "-The paper clearly identifies and tackles a central challenge in MD — the multi-scale nature of atomic dynamics — and provides a coherent neural ODE-based solution.\n- The hierarchical ODE formulation is conceptually elegant, with each level operating in its own local time scale. This is a nice balance between coarse- and fine-grained temporal modeling.\n- The model is physically grounded, maintaining SE(3) symmetry through equivariant GNNs and decoding steps.\n-The empirical results are strong and comprehensive, covering both small molecules and larger systems. The performance improvements (especially on benzene/toluene) are quite impressive.\n-The authors provide detailed ablations and implementation details, which improves credibility and reproducibility.", "summary": "This paper proposes DoMiNO, a hierarchical neural ODE framework for molecular dynamics (MD). The key idea is to decompose molecular motion into multiple temporal scales, each modeled by an E(n)-equivariant Graph ODE that captures dynamics from slow global motions down to fast local vibrations. These levels are fused via an attention mechanism to reconstruct molecular trajectories. The authors evaluate DoMiNO on standard MD datasets like MD17 and alanine dipeptide, showing large gains in both prediction accuracy and long-term stability compared to baselines like EGNN, EGNO, and ITO. Ablations highlight the importance of hierarchical decomposition and local time normalization.", "weaknesses": "-While the decomposition idea is solid, the connection to physical timescales (e.g., mapping levels to specific frequencies or normal modes) remains largely heuristic. 
There’s no explicit analysis linking learned scales to real physical processes.\n- The paper is heavy on architectural details but light on intuition for why attention fusion is the right way to combine scales. It could use some visualization of learned weights or interpretability results beyond benzene/toluene.\n- The evaluation, while broad, focuses mainly on predictive accuracy. There are no experiments showing practical utility for sampling, free-energy estimation, or integration with existing MD workflows.\n- Computational cost is mentioned, but no direct runtime comparison versus baselines is shown.\n- It’s unclear how well DoMiNO generalizes across molecular systems, or whether it needs retraining for every molecule type." }, { "confidence": 3, "date": 0, "rating": 4, "review": "", "review_id": "EfU6BEo5WF", "reviewer": "ICLR.cc/2026/Conference/Submission13342/Reviewer_MiTQ", "strengths": "* The paper is clearly written and easy to follow.\n\n* The paper proposes to use an SE(3)-equivariant encoder/decoder with graph neural ODEs at different spatio-temporal levels to capture different scales of physics and achieve good computational efficiency.\n\n* On MD17/ALA2 and larger systems, DoMiNO shows slower error growth than baselines across extended trajectories, indicating better stability beyond short-term fit.", "summary": "The paper proposes a hierarchical multi-scale Neural Graph ODE framework for simulating molecular dynamics: an equivariant encoder projects atomic states to latents, a neural ODE evolves dynamics in the coarse latent space, finer levels evolve at their own local time scales, and an attention fuser then merges all levels to reconstruct coordinates. The design aims to resolve the small-timestep vs large-timestep dilemma by letting each level specialize in a characteristic timescale. Benchmarks show competitive MSE and slower error growth on datasets like MD17 and alanine dipeptide.", "weaknesses": "As MD dynamics are chaotic, matching a single deterministic trajectory quickly becomes ill-posed; long-horizon MSE on coordinates is therefore not a meaningful objective beyond short transients. What typically matters are ensemble/statistical properties (e.g., RDFs, energy drift, diffusion constants, autocorrelation times, free-energy landscapes) and long-term stability. An example of more practical evaluation is Fu et al. [1]—running sustained simulations and assessing thermodynamic/kinetic statistics with appropriate confidence intervals—rather than emphasizing single-trajectory reconstruction.\n\n[1] Fu, Xiang, et al. \"Forces are not enough: Benchmark and critical evaluation for machine learning force fields with molecular simulations.\"
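The $O(KT^{1/K})$ claim the first review highlights is easy to sanity-check with arithmetic: if each of $K$ levels advances the trajectory with $T^{1/K}$ solver steps in its own local time scale, the total is $K \cdot T^{1/K}$ rather than $T$. A small sketch of that bookkeeping (my reading of the complexity claim, not the authors' implementation):

```python
def hierarchical_step_count(T: int, K: int) -> int:
    """Total solver steps if each of K levels takes T**(1/K) steps."""
    return K * round(T ** (1 / K))

# For a horizon of T = 1000 fine-grained steps:
#   flat integration:            1000 steps
#   K = 3 hierarchy:  3 * 10 =     30 steps
print(hierarchical_step_count(1000, 3))  # -> 30
```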
4
@misc{ sun2025domino, title={DoMi{NO}: Down-scaling Molecular Dynamics with Neural Graph Ordinary Differential Equations}, author={Fang Sun and Zijie Huang and Yadi Cao and Xiao Luo and Wei Wang and Yizhou Sun}, year={2025}, url={https://openreview.net/forum?id=zyq1JIuIhL} }
sun2025domino
Learning with Interaction: Agentic Distillation for Large Language Model Reasoning
Recent advancements in large language models (LLMs) have demonstrated remarkable reasoning abilities to solve complex tasks. However, these gains come with significant computational costs, limiting their practical deployment. A promising direction is to distill reasoning skills from larger teacher models into smaller, more efficient student models, yet existing data-centric distillation approaches suffer from passive learning, over-learning on simple tasks, and persistent knowledge gaps. To overcome these limitations, we introduce Agentic Distillation, a novel framework for adaptive and active distillation. In Agentic Distillation, student LLMs interact with teacher LLMs modeled as environments, receiving feedback tokens to guide their reasoning process and selectively updating their capabilities when necessary. To address the off-policy and gradient vanishing challenges introduced by feedback tokens, we devise a tailored importance sampling and clipping strategy within a unified objective that both incentivizes reasoning and injects knowledge into student LLMs. Extensive experiments show that Agentic Distillation significantly enhances reasoning performance while improving efficiency, offering a scalable path for equipping compact LLMs with advanced reasoning abilities.
2,026
https://openreview.net/forum?id=zyp9QT5Gf1
https://openreview.net/pdf/83e3c72f3b786cbec6676a0267401ad0cd12b8bd.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission17783/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission17783/-/Rebuttal_Revision']
poster
[ { "confidence": 4, "date": 0, "rating": 4, "review": "", "review_id": "GBwzyKXich", "reviewer": "ICLR.cc/2026/Conference/Submission17783/Reviewer_qKrD", "strengths": "1. The detailed discussion of several issues when trying to inject teacher-generated tokens into the student LM is insightful (e.g., being off-policy and gradient vanishing), and the author provides solutions to address these issues.\n2. The evaluation is conducted on extensive tasks, and the benchmarks are well chosen.\n3. The improvement is consistent across tasks, showing strong performance from learning from the teacher’s feedback.", "summary": "This paper introduces Agent Distillation to address existing data-centric distillation, which over learn on easy samples. They propose letting a student model interact with a stronger teacher model, and ask for the teacher's feedback when the student is not able to solve the problem on their own. This feedback will be used to guide the student, and it shows that learning from teacher-generated feedback effectively improves distillation performance.", "weaknesses": "1. Some qualitative analysis will help, for example, show what the student is actually generating after training, and how it improves the performance. Does the student also generate feedback-style reasoning during test time?\n2. Adding a baseline on SFT from the teacher’s full trajectory (including the feedback) and then doing RL for correctness would further strengthen the claim. How important is the interaction? Can we collect feedback in an offline manner?\n3. The method figure is not clear; for example, it's hard to see that the teacher is generating multi-turn feedback. Also, how to decide when the student model needs help is not clear either.\n4. \"When to use external feedback\" is controlled solely by prompting the student model. However, models can be over-confident or ill-calibrated, necessitating the need for an analysis on how often the student is over-confident (does not call the teacher model but cannot solve the problem on its own).\n5. Some variants on when to use external feedback are also necessary to justify this design choice." }, { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "bJPTyqC7zS", "reviewer": "ICLR.cc/2026/Conference/Submission17783/Reviewer_3uKu", "strengths": "- Framing distillation as a method where the agent asks for information from an oracle is an interesting idea\n- RL approach for learning from feedback appears novel\n- The authors tested a variety of models, including reasoning and non-reasoning models.", "summary": "This paper introduces AgenticDistillation, a knowledge distillation (KD) approach. The authors motivate the idea with two problems in standard KD: overlearning (overfitting to simple questions) and knowledge gaps between the teacher and student. Their method prompts the student to actively seek teacher feedback on particular steps and allows models to learn from the feedback the teacher provides. The paper then introduces a gradient clipping method for performing RL on and SFT on different parts of the response (question and response). The method is tested on datasets from math, code, and science domains, with improvements over the base models and ablations that they compare against", "weaknesses": "- Under-reported baselines: In Table 1, the baselines the authors report seem lower than what has been reported in published work. 
For example, the Qwen 2.5 tech report (https://arxiv.org/pdf/2409.12122v1, Table 5) has AIME 24 performance at 5/30 (16%) while this paper reports 9%. Past work (e.g. https://arxiv.org/abs/2506.11902, Table 1) has also reported higher baseline numbers for MATH-500 (76.5%, rather than 73.00 reported here). In several cases, the gain reported from distillation largely disappears when considering the stronger baseline numbers. Can the authors explain why their baselines are consistently lower than prior work?\n\n- No external baselines: All baselines compared against are internal models, but no other competing distillation methods were evaluated (e.g. https://arxiv.org/abs/2503.07067, https://aclanthology.org/2025.acl-industry.4/, https://arxiv.org/abs/2509.25837) although several are cited in related work. \n\n- Potential data leakage: how sure are the authors that none of the datasets tested on are included in the training data sourced from DAPO, OpenScienceReasoning, and Reasoning Gym?\n\n- It's not clear to me what happens at test time. During training, the model asks for information from the oracle -- is this prompt also followed at test time? If not, why is the model improving from training?" }, { "confidence": 4, "date": 0, "rating": 6, "review": "", "review_id": "UqbC6li3wu", "reviewer": "ICLR.cc/2026/Conference/Submission17783/Reviewer_fuv7", "strengths": "1. The core idea of shifting from passive data-centric distillation (e.g., logit-matching) to an active, interactive, agent-based learning framework is a major intellectual contribution. It offers a genuine new direction for solving the knowledge gap problem.\n\n2. The agentic distillation framework provides a flexible structure that could potentially integrate advanced components, such as tool-use or external memory, making the overall distillation process more comprehensive and future-proof.", "summary": "The paper proposes Agentic Distillation as a novel paradigm for knowledge transfer, aiming to distill complex reasoning capabilities from large, computationally expensive teacher models into smaller student models. Unlike traditional static knowledge distillation, this approach introduces an interactive and agent-based learning environment. The authors motivate this work by pointing out that current data-centric distillation methods suffer from passive learning, over-fitting on simple examples, and persistent knowledge gaps. While conceptually innovative, the method relies on dynamic interaction, which introduces non-trivial overhead and stability risks that must be comprehensively addressed.", "weaknesses": "1. The method contradicts its primary goal of efficiency by introducing significant training-phase complexity. The paper explicitly notes that training time \"may grow considerably\" with teacher complexity. This dramatically limits the ability of the deep learning community to reproduce, scale, or even test this method without substantial, often inaccessible, compute resources.\n\n2. The success of the distillation is likely critically dependent on the specific design of the \"interaction\" protocol, the reward signals, and the complexity of the agent architecture. If the results are highly sensitive to these hyper-parameters, the methodology is not broadly applicable or robust.\n\n3. Traditional distillation offers a clear, convex optimization target (e.g., KL divergence). 
Introducing complex, nested optimization loops and dynamic feedback makes the objective function non-trivial, harder to analyze, and obscures which components (the distillation loss, the agentic feedback, or the interaction environment) are providing the primary performance gains." }, { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "l2aFSlX7MY", "reviewer": "ICLR.cc/2026/Conference/Submission17783/Reviewer_8BPw", "strengths": "* In off-policy distillation, a student passively learns from teacher trajectories instead of learning from the feedback obtained on its own (student's) trajectories. Agentic Distillation presents a way of mitigating this issue.\n\n* Experiments are pretty thorough, with the main result being that actively learning from the teacher's feedback tokens can be more beneficial than imitating the teacher's trajectories.", "summary": "The authors propose Agentic Distillation, a distillation method wherein a student LLM optionally queries a teacher LLM, in the process obtaining feedback which is then jointly optimized with its own generated tokens using GRPO. To stably learn from the teacher's feedback tokens (sampled from the teacher's policy), the authors introduce an importance sampling coefficient and a clipping strategy. Experiments are conducted on different reasoning and coding benchmarks with multiple student+teacher combinations to show that Agentic Distillation outperforms SFT on teacher trajectories and RL with its own trajectories (w/o any teacher interaction).", "weaknesses": "* An important missing baseline is on-policy distillation. It is the most common and effective way of distillation and has been shown to outperform off-policy distillation. It is also compute-efficient because querying the teacher’s log probabilities requires just a single forward pass from the larger model, while the trajectories are generated by the smaller and cheaper student.\n\n* The paper lacks examples and analysis of the kind of queries that the student generates for the teacher and the teacher's subsequent feedback. Without such analysis, it's hard to tell whether the student asks only for hints or for full answers. Additionally, what stops the teacher from giving out complete answers, in which case agentic distillation turns into vanilla off-policy distillation? In summary, I'm unsure how the student learns to balance between always asking the teacher for complete solutions versus never interacting. Even though the authors write \"This trend suggests that early in training, the student LLM queries the teacher LLM frequently to learn new knowledge.\", this requires more analysis, examples, and explanation.\n\n[1] On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes. Agarwal et al., 2023
4
@inproceedings{ anonymous2025learning, title={Learning with Interaction: Agentic Distillation for Large Language Model Reasoning}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyp9QT5Gf1}, note={under review} }
anonymous2025learning
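The reviews of the entry above center on how teacher feedback tokens, which are off-policy for the student, can be folded into a GRPO-style objective without destabilizing training. As a rough illustration only (the per-token advantage broadcast, the specific clipping bounds, and all names here are assumptions rather than the paper's formulation), a clipped importance-weighted surrogate over a mixed student/teacher trajectory could look like this:

```python
import torch

def grpo_mixed_loss(logp_new, logp_behave, rewards, teacher_mask,
                    eps=0.2, eps_teacher=0.05):
    """Clipped surrogate for rollouts mixing student and teacher tokens.

    logp_new:     (G, T) current student log-probs per token, G rollouts
    logp_behave:  (G, T) log-probs under the policy that emitted each
                  token (old student, or teacher for feedback spans)
    rewards:      (G,)   scalar reward per rollout
    teacher_mask: (G, T) bool, True where a token came from the teacher
    """
    # Group-relative advantage: normalize rewards within the rollout group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    adv = adv[:, None].expand_as(logp_new)

    ratio = torch.exp(logp_new - logp_behave)
    # Tighter clipping on off-policy teacher tokens to limit variance.
    eps_tok = torch.where(teacher_mask,
                          torch.full_like(ratio, eps_teacher),
                          torch.full_like(ratio, eps))
    clipped = torch.max(torch.min(ratio, 1.0 + eps_tok), 1.0 - eps_tok)
    return -torch.min(ratio * adv, clipped * adv).mean()
```

The asymmetric clipping is one plausible reading of the "importance sampling coefficient and a clipping strategy" that reviewer 8BPw summarizes; the paper's exact mechanism may differ.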
LitePruner: A Lightweight Realtime Token Pruner before Large Language Models
Tokenization is one of the core steps of the language model pipeline. However, the tokenizer yields more tokens for the same context in non-English languages, especially in low-resource languages, due to shared multilingual settings, which results in unexpected fairness problems in terms of token fees, response latency, and long-context processing. In this paper, we study the real-time computing problem, aiming to reduce the total number of tokens per query while maintaining decent performance in multilingual settings. We present a simple, training-free, CPU-based pruner model that reuses pre-trained weights from the first attention layer of small models to rank token importance, delivering only the important tokens to the target larger models. This method is motivated by the fact that early layers in both small and large models latch onto similar shallow local signals, because similar tokenization algorithms (e.g., BPE) produce identical local signals. Extensive in-context learning experiments on MGSM, Global-MMLU-Lite, and ARC, and RAG-based experiments on PubMedQA and MEMERAG, show that our method preserves decent performance across languages while reducing the total number of tokens by up to $30\%$ in both in-family and across-family model settings, where the pruner model and the target large model do or do not belong to the same model family. Our method is compatible with commercial LLM APIs and runs on CPU, making it practical for real-life applications.
2,026
https://openreview.net/forum?id=zyTGgLUdCb
https://openreview.net/pdf/f1089989f30f9fb47778643e1c055836f291b1f3.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission16269/-/Full_Submission']
poster
[ { "confidence": 3, "date": 0, "rating": 2, "review": "", "review_id": "rIU4bPd3Xi", "reviewer": "ICLR.cc/2026/Conference/Submission16269/Reviewer_KCka", "strengths": "1. The paper addresses a real fairness issue where non-English users pay significantly more for LLM services due to tokenization disparities, and provides a practical deployment that is CPU-based, training-free, and compatible with commercial APIs.\n\n2. The paper provides extensive experiments across multiple benchmarks, languages (high/medium/low-resource), and model families.", "summary": "This paper proposes using the first attention layer of small pre-trained models to rank and prune input tokens before passing them to larger LLMs, motivated by tokenization disparities in multilingual settings. The method is training-free, CPU-based, and evaluated on multilingual ICL and RAG benchmarks with reported token reductions of up to 30%.", "weaknesses": "1. The paper claims that \"Early layers in both small and large models show similar attention patterns due to similar tokenization\", and provided evidence (Tables 6-7) of high cosine similarity between attention distributions.However, correlation does nto equate to causation. If would be great if the authors can show that tokens with low attention in the small model are actually redundant for the large model. High cosine similarity just means the OOD attention patterns correlate, and doesn't validate that these patterns predict what can be pruned.\n\n2. Llama3-70B improves from 4.0% to 31.2%. This isn't noise removal, as the performance even after the improvement is lackluster." }, { "confidence": 2, "date": 0, "rating": 4, "review": "", "review_id": "BYcwdLjnp4", "reviewer": "ICLR.cc/2026/Conference/Submission16269/Reviewer_mNvA", "strengths": "LitePruner is practical. It requires no training, runs on CPU, and can be deployed before commercial APIs like GPT to save token costs. Experiments span multiple languages, model families (Llama3, Gemma2, Aya), and benchmarks (MGSM, ARC, MMLU, PubMedQA). Results show minimal performance drop at top-90% pruning, and even improvements in some low-resource languages, suggesting potential denoising effects. The authors also provide empirical support via RAD and cosine similarity, showing early-layer attention alignment between small and large models.", "summary": "This paper proposes LitePruner, a training-free, CPU-based token pruning method that reduces input token counts in multilingual settings while preserving downstream performance. It leverages relative attention weights (RAW) from the first attention layer of a small pretrained model to rank token importance and forwards only the top-k% tokens to the target large model. The authors validate effectiveness on multilingual ICL (MGSM, Global-MMLU-Lite, ARC) and RAG (PubMedQA, MEMERAG) benchmarks, demonstrating generalization across both in-family and across-family model settings.", "weaknesses": "The paper provides insufficient theoretical justification for its core assumption that the first attention layer alone captures sufficient token importance, particularly for complex multilingual reasoning tasks such as MGSM with chain-of-thought prompting. While the authors present empirical correlations using RAD and cosine similarity, they do not explain why early-layer attention should generalize across tasks, languages, or model families. 
Methodologically, Algorithm 1 omits key implementation details; for example, it does not specify how positional encodings are adjusted after arbitrary token removal, which may affect the target model’s interpretation of sequence order. The method’s reliance on raw attention weights from the first layer also makes it incompatible with efficient operators such as Flash Attention, limiting its practical deployment efficiency despite the claimed CPU compatibility. In the RAG experiments, only documents are pruned while queries remain intact, but the paper offers no rationale for this asymmetric treatment, which could influence retrieval quality." }, { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "xp6LotlOvc", "reviewer": "ICLR.cc/2026/Conference/Submission16269/Reviewer_VbKi", "strengths": "1. The author targets an important question and proposes a conceptually simple yet effective framework for solving the problem.\n2. It has been tested on several benchmarks, making it very comprehensive. \n3. It leverages the clear motivation that the small and large models share similar attention patterns, which can be used as a proxy for token importance estimation.", "summary": "The paper presents a training free framework to rank token importance, which prunes the context of the input and sends only a subset of important tokens to the target model using a computationally cheap model. It shows that this method can achieve competitive results for low-resource language tasks and save token budgets due to less efficient tokenization for these texts.", "weaknesses": "1. Efficiently solving low-resource language tasks is a very interesting problem, which should be the focus of the paper as suggested in the abstract and intro. However, very little room has been left for discussing the specific characteristics of these tasks (e.g. tokenization fairness as mentioned by the author). A very straightforward baseline that can be potentially included is to see if a direct translation of the prompt from low to high resource language can improve efficiency and quality. \n2. There are many similar prior works which use the same method and has been well tested on many long context benchmarks (e.g. SpecPrefill https://arxiv.org/abs/2502.02789 being the most similar one). This should be cited and compared accordingly since many aspects of the paper shares the same insights and methodology as SpecPrefill (e.g. use a smaller draft model to prune important tokens, use attention as the surrogate, the handling of position ids, etc). Another similar line of work is called GemFilter, which uses the model’s own shallow layers as proxy. \n3. How does this method work in the multi-turn setting? This should be a potential pitfall for this method. If not, it should be discussed clearly. \n4. On line 258, the author mentions that “top-90% is still a common choice for all scenarios”. The reviewer thinks that keeping 90% would be a relatively high value for this method to break even the cost of pruning itself. This should be discussed more formally. Since the method should either 1) improve accuracy or 2) increase the efficiency." }, { "confidence": 3, "date": 0, "rating": 4, "review": "", "review_id": "r1OXudrwre", "reviewer": "ICLR.cc/2026/Conference/Submission16269/Reviewer_rRRh", "strengths": "- The paper’s idea of performing pre-inference token pruning using the first attention layer of a small model is moderately original. 
Although adopting a small model as a proxy for the attention score is not a novel idea, applying this method to permanent token pruning in LLMs is new.\n- Experiments are conducted under multiple settings, covering both in-family and across-family settings across multiple model families (Llama 1/8B, Gemma, GPT-4.1-nano) and two task types (in-context learning and retrieval-augmented generation).", "summary": "The paper addresses the problem of tokenization inefficiency and fairness in multilingual large language models, where non-English languages produce disproportionately more tokens, leading to higher costs, slower inference, and shorter usable context. It proposes LitePruner, a training-free, CPU-based token pruning method that reuses the embedding and first attention layer of a small pre-trained model to rank token importance and remove less important tokens before sending input to the target large model.\n\nThe authors conduct two main sets of experiments. First, in-context learning (ICL) tests are performed on multilingual benchmarks MGSM, Global-MMLU-Lite, and Multilingual ARC, under both in-family (e.g., Llama3-1B for Llama3-70B, Gemma2-2B for Gemma2-27B) and across-family (e.g., Llama3-1B for GPT-4.1-nano, Gemma2-2B for Aya-expanse-8B) settings, using 3-, 5-, and 8-shot prompting. Second, retrieval-augmented generation (RAG) experiments are conducted on PubMedQA and MEMERAG, where documents are pruned by LitePruner before retrieval and evaluated with metrics such as Mean Reciprocal Rank (MRR), Faithfulness (FA), and Semantic Answer Similarity (SAS).", "weaknesses": "- The method relies on shared tokenization between the small and large models (e.g., Llama3-1B for Llama3-70B). However, in cross-family settings (e.g., Gemma for GPT-4.1-nano), tokenizers and embedding spaces differ substantially, which may undermine the assumption of attention pattern similarity.\n- Some detailed hyperparameter choices (e.g., head averaging details, normalization methods, or handling of positional encodings after pruning) are not specified.\n- The evaluation results are not consistently stable across datasets, and the underlying reasons are not sufficiently analyzed. For instance, the performance drops observed on MGSM and Global-MMLU-Lite are notably large and remain unexplained.\n- In the in-family and cross-family experiments, different large language models are used as targets, making it difficult to directly assess how the quality of the small model influences the pruning results.\n- No explicit ablation study is conducted to test the sensitivity or necessity of specific design choices in LitePruner." } ]
4
@inproceedings{ anonymous2025litepruner, title={LitePruner: A Lightweight Realtime Token Pruner before Large Language Models}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyTGgLUdCb}, note={under review} }
anonymous2025litepruner
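The LitePruner reviews all hinge on one mechanic: score every input token with a small model's first attention layer and forward only the top-k% to the large target model. A minimal sketch of that idea follows; the attention-aggregation rule, the exposed `attn_weights_fn` hook, and all names are assumptions, since the paper's exact RAW definition is not reproduced here:

```python
import torch

@torch.no_grad()
def prune_by_first_layer_attention(input_ids, embed, attn_weights_fn,
                                   keep_ratio=0.9):
    """Keep the top keep_ratio fraction of tokens, ranked by the
    attention each token receives in a small model's first layer.

    embed:           token embedding module of the small pruner model
    attn_weights_fn: maps hidden states (1, T, d) to attention weights
                     (1, n_heads, T, T); assumed to be exposed by the
                     small model (incompatible with fused kernels such
                     as FlashAttention, as reviewer mNvA notes)
    """
    h = embed(input_ids)                       # (1, T, d)
    attn = attn_weights_fn(h)                  # (1, n_heads, T, T)
    received = attn.mean(dim=1).sum(dim=1)[0]  # attention received per token
    k = max(1, int(keep_ratio * input_ids.size(1)))
    keep = received.topk(k).indices.sort().values  # keep original order
    return input_ids[:, keep]
```

The surviving ids would then be detokenized and sent to the target model, which is what keeps the scheme compatible with commercial APIs.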
Diffusion Bridge Variational Inference for Deep Gaussian Processes
Deep Gaussian processes (DGPs) enable expressive hierarchical Bayesian modeling but pose substantial challenges for posterior inference, especially over inducing variables. Denoising diffusion variational inference (DDVI) addresses this by modeling the posterior as a time-reversed diffusion from a simple Gaussian prior. However, DDVI’s fixed unconditional starting distribution remains far from the complex true posterior, resulting in inefficient inference trajectories and slow convergence. In this work, we propose Diffusion Bridge Variational Inference (DBVI), a principled extension of DDVI that initiates the reverse diffusion from a learnable, data-dependent initial distribution. This initialization is parameterized via an amortized neural network and progressively adapted using gradients from the ELBO objective, reducing the posterior gap and improving sample efficiency. To enable scalable amortization, we design the network to operate on the inducing inputs $\mathbf{Z}^{(l)}$, which serve as structured, low-dimensional summaries of the dataset and naturally align with the inducing variables' shape. DBVI retains the mathematical elegance of DDVI—including Girsanov-based ELBOs and reverse-time SDEs—while reinterpreting the prior via a Doob-bridged diffusion process. We derive a tractable training objective under this formulation and implement DBVI for scalable inference in large-scale DGPs. Across regression, classification, and image reconstruction tasks, DBVI consistently outperforms DDVI and other variational baselines in predictive accuracy, convergence speed, and posterior quality.
2,026
https://openreview.net/forum?id=zyRmy0Ch9a
https://openreview.net/pdf/53c9c6bc86a1153ef4a88043c1f49e49ce4cfb91.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission6981/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission6981/-/Rebuttal_Revision']
poster
[ { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "8hAMzNMbA4", "reviewer": "ICLR.cc/2026/Conference/Submission6981/Reviewer_vk1c", "strengths": "Originality:\nThe paper proposes the novel idea of reinterpreting DDVI as a kind of diffusion bridge using Doob’s h-transform. This leads to a principled way of conditioning the diffusion process on input data. The use of an input-dependent initialization for diffusion-based inference is conceptually elegant and gives a new perspective on how diffusion models can be adapted for Bayesian inference.\n\n\nQuality:\nThe technical development is solid and well thought out. The authors clearly connect their bridge formulation to the underlying variational objective. The experiments are thorough, covering regression, classification, and unsupervised learning, and DBVI consistently outperforms DDVI and other inference methods like DSVI, IPVI, and SGHMC. \n\n\nClarity:\nOverall, the paper is well written and easy to follow. The authors do a good job of explaining the motivation behind their changes to DDVI. The main ideas are presented in a logical order, and the appendix includes helpful details for implementation. \n\n\nSignificance:\nThis work makes a meaningful contribution to improving inference in deep Gaussian process models. By addressing a key limitation of DDVI and showing consistent gains across several tasks, the paper offers a practical improvement that should be useful to researchers.", "summary": "This paper proposes Diffusion Bridge Variational Inference (DBVI), a new approach for performing inference in Deep Gaussian Processes (DGPs). DBVI builds on Denoising Diffusion Variational Inference (DDVI) but improves it by learning how to start the reverse diffusion process from a more informed, data-dependent initialization instead of a random prior. This helps the model start closer to the true posterior and improves both accuracy and efficiency. The authors interpret this modification through Doob’s h-transform, giving a bridge-based view of the diffusion process that keeps the method theoretically consistent while making it more flexible. They also describe a practical inference scheme based on inducing points to make training scalable.\nIn experiments on regression, classification, and unsupervised learning tasks, DBVI shows consistent improvements over DDVI and other standard inference methods for DGPs such as DSVI, IPVI, and SGHMC.", "weaknesses": "Primary Weakness: \n\nMagnitude of improvements: While DBVI consistently outperforms DDVI and other methods, the numerical improvements are sometimes modest (e.g., some overlapping error bars in Figure 3). A discussion of whether these gains translate to meaningful practical differences would strengthen the empirical section.\n\n\nMinor Weaknesses: \n\nFigure readability: The font size in Figures 1, 2, and 3 is quite small, making some labels difficult to read without zooming in a lot. This is a minor visual issue that can easily be fixed for the camera-ready version.\n\n\nFormatting issue: The arrow in Figure 2 partially obscures the word “likelihood.”" }, { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "JFwjcNaDPf", "reviewer": "ICLR.cc/2026/Conference/Submission6981/Reviewer_VWQi", "strengths": "The paper's core strength is its theoretical rigor. It correctly identifies a clear weakness in DDVI (the \"cold start\") and proposes a non-trivial, principled solution. 
Grounding the \"warm start\" in the mathematics of Doob's h-transform allows for a clean derivation of the ELBO and proves that DBVI is a strict generalization of DDVI.\n\nThe most innovative practical contribution is the structured amortization design. Naively conditioning $\\mu_\\theta$ on the raw data $x$ would be infeasible. The proposal to use the learnable inducing inputs $Z^{(l)}$ as a data-dependent, low-dimensional proxy is an elegant and effective solution that neatly sidesteps issues of dimensionality and data dependency.\n\nThe paper's central hypothesis a \"warm start\" improves inference efficiency, is directly and convincingly validated by the case study in Figure 4. This plot clearly shows DBVI converging significantly faster and to a better final RMSE than DDVI, confirming the mechanism works as intended.", "summary": "This paper proposes Diffusion Bridge Variational Inference (DBVI), a novel method for posterior inference in Deep Gaussian Processes (DGPs). It aims to solve a key limitation of its predecessor, Denoising Diffusion Variational Inference (DDVI), namely the inefficiency of starting the reverse diffusion from a fixed, unconditional Gaussian prior (a \"cold start\").The core idea of DBVI is to replace this fixed start with a learnable, data-dependent \"warm start\" distribution, $p_0^\\theta(U_0|x) = \\mathcal{N}(U_0; \\mu_\\theta(x), \\sigma^2 I)$. The paper introduces two key technical innovations to make this work: It formally re-interprets the diffusion process as a Doob-bridged diffusion, which is grounded in Doob's h-transform. This allows for the derivation of a new, tractable ELBO objective.To make the initial distribution $p_0^\\theta$ scalable and avoid conditioning on the full dataset, it proposes a structured amortization network $\\mu_\\theta$ that cleverly conditions on the layer-wise inducing inputs $Z^{(l)}$ as proxies for the data.", "weaknesses": "The experimental evaluation is incomplete for a paper claiming state-of-the-art posterior approximation. A major, competing line of work for expressive DGP posteriors, Normalizing Flows (NFs), is entirely absent from the related work and experimental comparisons. Without benchmarking against NF-based VI methods, the claims of superior posterior quality and accuracy are unsubstantiated.\n\nMissing Practical Baseline: The paper fails to establish a practical \"sanity-check\" baseline. For the image classification tasks, the DGP is applied to features from a ResNet-20. The performance of this feature extractor backbone alone must be reported. If the final, highly complex 4-layer DBVI model (which achieves 95.68% accuracy on CIFAR-10) does not substantially outperform the ResNet-20, it implies the entire DGP/DBVI machinery adds significant complexity for little to no practical gain.\n\nUnfavorable Complexity-Performance Trade-off. This is the most significant weakness. The paper advocates for a method that is substantially more complex than its predecessor. It requires an SDE solver for $s_\\phi$, a new NN for $\\mu_\\theta$, and an ODE solver for $(m_t, \\kappa_t)$. The justification for this complexity rests on predictive gains that are empirically marginal (e.g., 95.68% vs 95.56% on CIFAR-10; 0.859 vs 0.857 AUC on HIGGS). This trade-off makes the practical utility of DBVI highly questionable.\n\nWhile Table 1 provides per-iteration timings, the paper lacks a formal analysis of the additional computational overhead. 
It should provide a breakdown of the cost of the $\\mu_\\theta$ forward pass and the $(m_t, \\kappa_t)$ ODE solver, and discuss how these new costs scale with the number of inducing points ($M$) and layers ($L$)." }, { "confidence": 3, "date": 0, "rating": 4, "review": "", "review_id": "sE9C2zjFQu", "reviewer": "ICLR.cc/2026/Conference/Submission6981/Reviewer_uAKB", "strengths": "It elegantly extends DDVI with a Doob-bridge modification. The theory behind this extension is sound and neat. The derived loss clearly connects to that of DDVI, and it is straightforward to spot the innovation.", "summary": "This paper generalizes denoising diffusion variational inference (DDVI) for deep Gaussian processes (DGP) by replacing the unconditional starting distribution with a learnable, data-dependent initial distribution and reinterpreting the DDVI framework, via Doob's h-transform, as a diffusion bridge model (DBVI). The proposed method can reduce the gap between the prior and the posterior in the diffusion process and hence speed up training. A few benchmarks are included to demonstrate the improvements in accuracy, convergence speed, and posterior quality.", "weaknesses": "However, the numerical experiments could be improved. The benchmark results mainly focus on estimation/prediction accuracy, not much on the posterior analysis. See more comments in the questions below.\n\nI would rate 5 if allowed and will raise my score if the numerical evidence could be strengthened." }, { "confidence": 3, "date": 0, "rating": 8, "review": "", "review_id": "I5kajxseqB", "reviewer": "ICLR.cc/2026/Conference/Submission6981/Reviewer_9e3v", "strengths": "- The paper identifies the unconditional start distribution in DDVI as the source of slow convergence and inaccurate posteriors, and replaces it with a data-conditioned start via a Doob bridge.\n- The use of a linear reference drift ensures that even after introducing endpoint conditioning, the bridge process has closed-form Gaussian marginals at all intermediate times.\n- Like DDVI, the variational posterior is defined implicitly by a reverse diffusion, which is significantly more flexible than mean-field or low-rank Gaussian approximations typically used in DGP inference.\n- Improvements are observed not only on small UCI datasets but also on large datasets (like HIGGS/SUSY), image classification (more subtle), and the Frey Faces task.", "summary": "This paper proposes Diffusion Bridge Variational Inference (DBVI), an extension of the recently introduced Denoising Diffusion Variational Inference (DDVI) method for Deep Gaussian Processes. While DDVI models the variational posterior as the terminal distribution of a reverse-time diffusion SDE parameterized by a neural score network, it suffers from an unconditional Gaussian initialization that is typically far from the true posterior, resulting in long, inefficient inference trajectories. DBVI addresses this limitation by making the diffusion start data-dependent, using an amortized initialization. Using Doob’s h-transform, they reinterpret the reverse SDE as a bridge process that explicitly “bends” the diffusion between a start and an end distribution. 
Empirical results across regression, classification, and image-reconstruction benchmarks show that DBVI consistently improves predictive accuracy, posterior quality, and convergence speed over DDVI and other baselines.", "weaknesses": "- The paper explains the Doob correction, but the experimental section gives limited direct visualization of how the bridge shortens the reverse diffusion trajectory (e.g., path length, KL rate, or score norm decay). This makes it harder to see the effect that drives the performance gains.\n- The method (in my understanding) depends crucially on using a linear reference diffusion so that intermediate marginals remain Gaussian and h is tractable. If the model or dataset induces posteriors that are far from those implied by linear reference dynamics, the reference score may become a poor baseline and learning may slow or plateau." } ]
4
@inproceedings{ anonymous2025diffusion, title={Diffusion Bridge Variational Inference for Deep Gaussian Processes}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyRmy0Ch9a}, note={under review} }
anonymous2025diffusion
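Several of the DBVI reviews contrast DDVI's fixed N(0, I) start with the learnable, data-dependent start $p_0^\theta(U_0|x) = \mathcal{N}(U_0; \mu_\theta(x), \sigma^2 I)$. In sampler form, the difference is confined to the initialization of the reverse-time integration. A heavily simplified Euler-Maruyama sketch, where the networks, the unit time horizon, and the constant diffusion coefficient are all assumptions:

```python
import torch

def sample_inducing_variables(mu_net, score_net, Z, n_steps=50, sigma0=0.1):
    """Draw inducing variables U by integrating a reverse-time SDE.

    DDVI would start from a fixed N(0, I); DBVI instead starts from a
    learnable Gaussian N(mu_net(Z), sigma0^2 I), with the inducing
    inputs Z serving as a low-dimensional summary of the data.
    (mu_net, score_net, and this discretization are illustrative.)
    """
    dt = 1.0 / n_steps
    mu0 = mu_net(Z)
    U = mu0 + sigma0 * torch.randn_like(mu0)   # data-dependent warm start
    for i in range(n_steps):
        t = torch.full((U.shape[0],), i * dt)
        drift = score_net(U, t, Z)             # learned bridge drift/score
        U = U + drift * dt + dt ** 0.5 * torch.randn_like(U)
    return U
```

This also makes reviewer VWQi's overhead question concrete: relative to DDVI, the extra per-sample cost is one mu_net forward pass plus whatever the bridge terms add inside score_net.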
Preference-based Policy Optimization from Sparse-reward Offline Dataset
Offline reinforcement learning (RL) holds the promise of training effective policies from static datasets without the need for costly online interactions. However, offline RL faces key limitations, most notably the challenge of generalizing to unseen or infrequently encountered state-action pairs. When a value function is learned from limited data in sparse-reward environments, it can become overly optimistic about parts of the space that are poorly represented, leading to unreliable value estimates and degraded policy quality. To address these challenges, we introduce a novel approach based on contrastive preference learning that bypasses direct value function estimation. Our method trains policies by contrasting successful demonstrations with failure behaviors present in the dataset, as well as synthetic behaviors generated outside the support of the dataset distribution. This contrastive formulation mitigates overestimation bias and improves robustness in offline learning. Empirical results on challenging sparse-reward offline RL benchmarks show that our method substantially outperforms existing state-of-the-art baselines in both learning efficiency and final performance.
2,026
https://openreview.net/forum?id=zyLI9LEmry
https://openreview.net/pdf/4ef43b31950eff949a4099d4cb6f9c962b012a4a.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission10578/-/Full_Submission']
poster
[ { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "k0n2MAUUPo", "reviewer": "ICLR.cc/2026/Conference/Submission10578/Reviewer_uvFa", "strengths": "- This paper proposes a contrastive preference learning framework to bypass direct value function estimation.\n- This paper provides both empirical and theoretical analyses.\n- The proposed approach outperforms baselines in various benchmarks.", "summary": "This work presents a preference-based RL algorithm, which trains policies by contrasting successful demonstrations with failure behaviors present in the dataset. Experiments on offline benchmarks with sparse rewards validate the effectiveness of the proposed method.", "weaknesses": "- The motivation is not adequately supported by evidence. The authors claim that existing methods are sensitive to support mismatches and prone to high variance or instability, particularly when data are limited or rewards are sparse, but no empirical results or references are provided to support these statements.\n- The paper does not provide results with other competitive baselines on MetaWorld, such as PREFORL [1] and CPL [2].\n- There is no sensitivity analysis on the representative segment length $k$ and the contrastive bias $\\lambda$.\n\nReferences:\n\n[1] Tarasov et al. \"Revisiting the Minimalist Approach to Offline Reinforcement Learning\", NeurIPS, 2023.\n\n[2] Hejna et al. \"Contrastive Preference Learning: Learning from Human Feedback without RL\", ICLR, 2024." }, { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "yKVOrb3P6m", "reviewer": "ICLR.cc/2026/Conference/Submission10578/Reviewer_k6nG", "strengths": "1) The idea is novel, introducing the concept of preference learning into offline RL to mitigate overestimation.\n\n2) It designs a scheme for generating negative trajectories and validates the algorithm's effectiveness through extensive experiments.", "summary": "This paper proposes a novel method named PREFORL, which utilizes a contrastive learning framework to learn preferences from successful trajectories and synthetically degraded trajectories, aiming to address the value overestimation problem in sparse-reward offline reinforcement learning.", "weaknesses": "1) The underlying theory and mechanism explaining why introducing the preference learning framework alleviates overestimation are unclear.\n\n2) The method for generating negative trajectories and the selection of the contrastive bias parameter vary across different tasks, and their impact on performance remains unknown.\n\n3) The proof for Lemma 3.1 is not rigorous, as the approximation between $\\hat{A}$ and $A^*$ is unreasonable." }, { "confidence": 2, "date": 0, "rating": 6, "review": "", "review_id": "BcROOsLT6P", "reviewer": "ICLR.cc/2026/Conference/Submission10578/Reviewer_aGGo", "strengths": "1. The key idea is fundamentally original: avoiding direct, unstable value function estimation (the common bottleneck in sparse-reward offline RL) by transforming the task into a more robust contrastive learning problem.\n\n2. The introduction of two controlled degradation operators ($\\mathcal{D}^{\\perp a}$ and $\\mathcal{D}^{\\perp s}$) is a highly creative mechanism for generating meaningful synthetic negative examples.", "summary": "This paper presents PREFORL (PREFerence-based Optimization for Offline RL), a novel contrastive preference learning framework designed to train robust policies from sparse-reward offline datasets. 
The method aims to bypass the core challenges of conventional offline RL, specifically the extrapolation error and overestimation bias that plague value-based methods in data-limited, sparse-reward settings.", "weaknesses": "1. For state-based degradation ($\\mathcal{D}^{\\perp s}$), the computation overhead associated with Nearest Neighbor Search is acknowledged to be non-negligible and potentially time-consuming for large-scale datasets. A brute-force, exact search is computationally prohibitive.\n\n2. The performance of the action-based degradation method is highly sensitive to the choice of the noise variance ($\\sigma$), which limits its robustness. This requires manual tuning per environment (or environment domain) to find a reasonably small value (e.g., $1\\%$ to $2\\%$) that maximizes the success rate." }, { "confidence": 4, "date": 0, "rating": 4, "review": "", "review_id": "MdU5Lr0G6J", "reviewer": "ICLR.cc/2026/Conference/Submission10578/Reviewer_oreG", "strengths": "- Novel degradation framework at both the action and state levels.\n- Comprehensive experimental validation.\n- Complete theoretical analysis.\n- Clear writing that is easy to understand.", "summary": "This paper proposes PREFORL, a preference-based offline reinforcement learning method that addresses value overestimation in sparse-reward settings. The approach trains policies via contrastive learning between successful demonstrations and synthetic degraded trajectories generated through action perturbation or state-based substitution. Core contributions include: (1) a degradation framework augmenting sparse offline datasets, (2) a preference optimization loss bypassing explicit value estimation, and (3) a theoretical analysis linking the loss to policy imitation. Evaluations on Adroit, Sparse-MuJoCo, Maze2D, and MetaWorld benchmarks show PREFORL outperforms offline RL/imitation baselines in success rates and normalized scores.", "weaknesses": "- More navigation tasks (e.g., Antmaze-umaze/medium/large-diverse/replay), as well as more offline RL baselines, should be evaluated.\n- The ablation study of the degraded dataset size is lacking." } ]
4
@inproceedings{ anonymous2025preferencebased, title={Preference-based Policy Optimization from Sparse-reward Offline Dataset}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyLI9LEmry}, note={under review} }
anonymous2025preferencebased
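The PREFORL reviews discuss two pieces that each fit in a few lines: the degradation operators that manufacture negatives, and the contrastive objective that prefers successful segments over them. A sketch under stated assumptions (Gaussian action noise for $\mathcal{D}^{\perp a}$, and a Bradley-Terry style margin loss using the contrastive bias $\lambda$; the paper's exact forms may differ):

```python
import torch
import torch.nn.functional as F

def degrade_actions(actions, sigma=0.02):
    """Action-level degradation: perturb logged actions with small
    Gaussian noise (the 1-2% noise scale follows reviewer aGGo)."""
    return actions + sigma * torch.randn_like(actions)

def contrastive_preference_loss(logp_success, logp_degraded, lam=1.0):
    """Prefer successful segments over their degraded counterparts.

    logp_*: (B,) summed policy log-probs of the actions along each
    segment; lam biases the comparison so the preferred segment must
    win by a margin. Note that no value function is ever estimated.
    """
    return -F.logsigmoid(logp_success - logp_degraded - lam).mean()
```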
Teaching LLMs to Admit Uncertainty in OCR
Vision language models (VLMs) are increasingly replacing traditional OCR pipelines, but on visually degraded documents they often hallucinate, producing fluent yet incorrect text without signaling uncertainty. This occurs because current post-training emphasizes accuracy, which encourages models to guess even when uncertain. The problem persists in state-of-the-art systems and severely impacts OCR reliability. To improve the trustworthiness of OCR on degraded documents, we propose uncertainty-aware OCR. Rather than suppressing guesses, our model transcribes while explicitly bracketing spans it deems unreliable with uncertainty tags. To train our model, we use Group Relative Policy Optimization (GRPO). We define the usage rules for uncertainty tags and an evaluation protocol. We introduce a pseudo-labeled cold start and a multi-objective reward that balances transcription accuracy and uncertainty coverage while preventing reward hacking. We explore different combinations of cold start and reward granularity and verify the effect of reward parameters in preventing reward hacking and improving the corresponding metrics. We also introduce Blur-OCR, a benchmark for uncertainty-aware OCR on degraded documents. In detailed experiments, our model maintains strong transcription accuracy while achieving an uncertainty-tag F1 of 0.685, and it substantially outperforms both open- and closed-source baselines.
2,026
https://openreview.net/forum?id=zyCjizqOxB
https://openreview.net/pdf/e2a795c9abb1a38a8b9c19099e6e5c79caef476c.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission1052/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission1052/-/Rebuttal_Revision']
poster
[ { "confidence": 4, "date": 0, "rating": 4, "review": "", "review_id": "SJjGbrxrVZ", "reviewer": "ICLR.cc/2026/Conference/Submission1052/Reviewer_ixBu", "strengths": "**Clear problem formulation**: The paper addresses a real problem—VLM-based OCR systems hallucinate on degraded documents without signaling uncertainty, which is worse than classical OCR systems that produce obviously garbled output. The motivation is well-articulated.\n\n**Systematic methodology**: The two-stage training approach (pseudo-labeled cold start + GRPO) is reasonable and well-described. The multi-objective reward design with safeguards against reward hacking (especially the length-mismatch damping factor η) demonstrates careful engineering.", "summary": "This paper introduces uncertainty-aware OCR, where vision-language models transcribe degraded documents while bracketing uncertain spans with explicit uncertainty tags (`<C>...</C>`). The authors employ a pseudo-labeled cold start followed by Group Relative Policy Optimization (GRPO) with a multi-objective reward that balances transcription accuracy and uncertainty coverage. They introduce Blur-OCR, a benchmark of 2,048 synthetically degraded images from Project Gutenberg. The best model (Qwen2.5-VL-7B) achieves word-level F1 of 0.685 for uncertainty tagging and 0.839 accuracy, outperforming several baseline models including GPT-4o and Claude-Opus4.\n\nHowever, the limited evaluation scope, missing key baselines, and conceptual concerns about the cold start procedure prevent a stronger recommendation. With revisions addressing the generalization questions and including comparisons with MinerU systems and uncertainty quantification methods, this could become a solid contribution.", "weaknesses": "### 1. Limited Benchmark Coverage and Missing Baselines\n\nThe evaluation is restricted to a single synthetic benchmark (Blur-OCR). The paper does not evaluate on:\n- Established document understanding benchmarks like OmniDocBench [1], which provides diverse real-world PDF documents with comprehensive annotations\n- More general OCR benchmarks beyond the two mentioned in Related Work (OCRBench/OCRBench v2)\n- Recent document parsing systems like MinerU [2] or MinerU2.5 [3], which represent state-of-the-art in document content extraction\n\n### 2. Incomplete Related Work on Uncertainty Quantification\n\nThe paper misses critical recent work on OCR uncertainty estimation. Notably, it does not cite or compare with methods that provide quantitative uncertainty measures. For instance, recent work on consensus entropy for multi-VLM agreement [4] provides token-level uncertainty scores that can be directly compared with the proposed tagging approach. While the paper mentions entropy-based baselines briefly in Exp4 (Section 7.4), it lacks:\n- Proper contextualization within the broader uncertainty quantification or calibration literature\n- Discussion of how the proposed explicit tagging approach differs from or improves upon probabilistic uncertainty measures\n\n### 3. Conceptual Issues with the Cold Start Procedure\n\nThe pseudo-labeling strategy has a fundamental problem: it tags the **model's own errors** on degraded images, not necessarily the **visually unreadable regions**. 
This conflates two distinct phenomena:\n\na) Text that is visually degraded/unreadable in the image \nb) Text where the model happened to make a mistake\n\nThe paper claims (Section 5.1) that \"When the image is unreadable, models tend to guess and are often wrong,\" but this assumption is not validated. Two problematic cases arise:\n\n- **False positive tags**: The model may correctly transcribe text from a degraded-but-readable region, yet the cold start labels it as uncertain simply because it differs from GT due to actual degradation differences\n- **False negative tags**: The model may confidently hallucinate on clear, undegraded regions (a known VLM failure mode [5]), which would not receive uncertainty tags\n\n\n### 4. Limited Analysis of Generalization Beyond Synthetic Degradations\n\nVLMs are known to hallucinate even on clean, high-quality document images [5]. The paper does not:\n- Test whether the uncertainty-aware model can identify hallucinations on non-degraded documents\n- Evaluate on real-world degraded documents (e.g., historical documents, which are mentioned in the motivation but never tested)\n- Compare performance on different types of errors: character substitutions vs. hallucinated words/phrases\n\n### 5. Benchmark Construction Concerns\n\nThe Blur-OCR benchmark applies random combinations of degradations to clean text, but:\n- No analysis is provided on whether the degradations are realistic compared to actual historical documents or real-world low-quality scans\n- The paper does not discuss the distribution of degradation severity or provide statistics on what fraction of text becomes truly unreadable\n- Figure 2 shows sample pages, but there is no quantitative analysis of degradation characteristics\n\nThis makes it difficult to assess whether Blur-OCR represents realistic use cases or is primarily useful for evaluating this specific training paradigm." }, { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "Jk3dGxkOJZ", "reviewer": "ICLR.cc/2026/Conference/Submission1052/Reviewer_8tLK", "strengths": "1. The problem is clearly motivated — OCR hallucination is a realistic and underexplored setting for uncertainty estimation.\n2. The proposed explicit uncertainty tagging paradigm is conceptually simple yet effective.\n3. The GRPO objective is well designed, balancing transcription accuracy and tagging F1, and preventing pathological behaviors via the reward-damping term.\n4. The evaluation setup is thorough: they report both accuracy and uncertainty metrics and analyze different training stages.\n5. 
The paper is very clearly written and easy to follow, with transparent motivation and mathematical detail.", "summary": "The paper presents an uncertainty-aware fine-tuning framework for OCR-capable large language models (LLMs).\nInstead of producing overconfident transcriptions for visually degraded documents, the model is trained to explicitly mark uncertain spans with specific tags.\nThe approach combines two stages: (1) a cold-start supervised fine-tuning (SFT) phase using pseudo uncertainty labels automatically derived from model errors, and (2) Group Relative Policy Optimization (GRPO), a reinforcement learning algorithm that jointly optimizes transcription accuracy and uncertainty tagging quality using a reward function balancing edit distance and F-beta-based span precision–recall.\nExperiments on the new Blur-OCR benchmark demonstrate that GRPO improves both transcription correctness and uncertainty calibration compared to the cold-start baseline, while avoiding degenerate behaviors such as excessive tagging.", "weaknesses": "1. Experiments are limited to a single OCR model family. It remains unclear whether the proposed method generalizes to other setups (e.g., multi-modal vision-language models such as Donut or TrOCR).\n2. Comparison baselines are relatively narrow — no direct comparison with alternative uncertainty modeling approaches such as entropy-based rejection or calibration.\n3. There is limited qualitative analysis of false-positive tags (over-tagging). Some visual examples or error breakdowns could strengthen interpretability claims." }, { "confidence": 4, "date": 0, "rating": 8, "review": "", "review_id": "fEOgVROtRk", "reviewer": "ICLR.cc/2026/Conference/Submission1052/Reviewer_MjWd", "strengths": "- Uncertainty estimation is a long-studied problem, but this work studies it in the context of OCR, which is an important problem, and the technique of using RL is interesting. \n- The work discusses the importance of cold-start SFT, describes in detail their reward formulation, character- vs. word-level tagging, and different hyperparameters, and provides comprehensive experiments.", "summary": "This work tackles the problem of hallucinations in OCR with vision-language models. The models hallucinate when they are provided with blurry documents. This work proposes to use reinforcement learning (GRPO) to tackle this problem, where the model is made to output uncertainty tags along with the transcription. They construct a multi-objective reward that balances accuracy with uncertainty and also mitigates reward hacking. They also provide a benchmark to measure uncertainty-aware OCR performance on degraded documents and show that their method outperforms baselines.", "weaknesses": "- The benchmark that they introduce has synthetic degradations. It is unclear how much the results transfer to actual degradations found in practice.\n- The paper has a nice set of experiments but lacks some intuitive examples, as detailed below." }, { "confidence": 4, "date": 0, "rating": 6, "review": "", "review_id": "t8HKmO2kml", "reviewer": "ICLR.cc/2026/Conference/Submission1052/Reviewer_29Ew", "strengths": "1. The writing is clear, and the presentation of different settings is easy to follow.\n\n2. Introducing uncertainty-aware generation in OCR to mark unclear spans has practical value.\n\n3. 
The experiments include an extensive ablation study, which helps clarify the effectiveness of the method, and I appreciate that.", "summary": "This paper focuses on a task: when performing OCR, a VLM should mark uncertain and hard-to-recognize spans by surrounding them with a custom UNC tag. To enable this, the authors build a training set of about 100K samples and a 2K-sample benchmark. For training, they use two phases — SFT for warm-start, followed by RL.", "weaknesses": "1. Minor: Although admitting uncertainty in OCR has some practical value, the broader significance is limited since the work focuses on a specific application setting.\n\n2. The backbone choice is quite limited.\n\n3. The benchmark is constructed by the authors; while this is appreciated, I also want to know how well the method (and models) generalizes to more OOD scenarios.\n\n4. Minor: I think the paper should use “VLM” instead of “LLM.”\n\n5. While I appreciate the detailed metric section and the following ablation studies, it feels somewhat over-emphasized. The data component likely deserves more effort than defining the metrics." } ]
4
@inproceedings{ anonymous2025teaching, title={Teaching {LLM}s to Admit Uncertainty in {OCR}}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyCjizqOxB}, note={under review} }
anonymous2025teaching
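The reward the OCR reviews dissect has three ingredients: an edit-distance-based accuracy term, a span-level F-score over `<C>...</C>` tags, and a damping factor that punishes length mismatch to block reward hacking. A compact prototype, where the weights, the similarity stand-in, and the damping form are assumptions rather than the paper's exact reward:

```python
import re
from difflib import SequenceMatcher

def uncertain_words(text):
    """Words inside <C>...</C> uncertainty tags."""
    return {w for span in re.findall(r"<C>(.*?)</C>", text) for w in span.split()}

def ocr_reward(pred, gt, gt_uncertain, w_acc=0.7, w_unc=0.3):
    """Multi-objective reward: accuracy plus tag F1, damped by length mismatch."""
    plain = re.sub(r"</?C>", "", pred)
    acc = SequenceMatcher(None, plain, gt).ratio()  # stand-in for 1 - normalized edit distance
    tags = uncertain_words(pred)
    tp = len(tags & gt_uncertain)
    prec = tp / len(tags) if tags else 0.0
    rec = tp / len(gt_uncertain) if gt_uncertain else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    # Damping: shrink the reward as the output length diverges from the
    # ground truth, discouraging over- or under-generation.
    eta = min(len(plain), len(gt)) / max(len(plain), len(gt), 1)
    return eta * (w_acc * acc + w_unc * f1)
```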
Emergence of Machine Language in LLM-based Agent Communication
Language emergence is a hallmark of human intelligence, as well as a key indicator for assessing artificial intelligence. Unlike prior studies grounded in multi-agent reinforcement learning, this paper asks whether machine language, potentially not human-interpretable, can emerge between large language model (LLM) agents. We study this in the stylized paradigm of referential games, where a speaker encodes a target object into a message over a predefined alphabet, and a listener, given the message, must identify the target among distractors. We propose an agent design that enables the speaker to retrieve semantically similar words before composing a message, and the listener to decode the message based on structural proximity between words. We observe that even given a set of 541 objects, the two agents successfully develop a shared language: they acquire meanings for each object through only 4 rounds of communication, with at most 3 attempts per communication. Additionally, analyses reveal that the emergent language exhibits compositionality, generalizability, morphemes, and polysemy, which are defining features of human language. Our project can be accessed via the following link: https://anonymous.4open.science/r/ELofLLM-1746/
2,026
https://openreview.net/forum?id=zy06mHNoO2
https://openreview.net/pdf/dd385254607d317329de7f1ab96728b480363cb4.pdf
[]
ICLR 2026 Conference Submission
ICLR
['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission3748/-/Full_Submission']
poster
[ { "confidence": 4, "date": 0, "rating": 6, "review": "", "review_id": "0acJkXshT6", "reviewer": "ICLR.cc/2026/Conference/Submission3748/Reviewer_LQV4", "strengths": "## Strengths\n- The paper introduces an interesting and innovative approach for generating natural-like communication that is not easily interpretable by humans. \n- The authors demonstrate the ability of the emergent language to generalize to unseen objects. \n- The paper presents preliminary evidence for the emergence of machine-language traits that resemble characteristics of natural language. \n- The framing of machine-language emergence in LLM-based agents is novel relative to standard symbol-based emergent communication studies. \n- The writing and structure are clear and accessible.", "summary": "## Summary\nThis paper explores the emergence of artificial (machine) language in communication between large language model (LLM)-based agents. The authors propose a memory-based learning framework that allows agents to develop natural-like but non-human-interpretable communication protocols. The study analyzes the properties of the emergent language and shows that it generalizes to unseen objects and exhibits natural-language-like features such as compositionality.\n\nThe topic is original and well motivated, and the paper is clearly written and well structured. The use of pretrained LLMs as agents introduces an interesting angle to the emergent communication (EmComm) field. However, the methodological innovation and analysis depth remain limited, and several conceptual and empirical aspects need clarification or extension.", "weaknesses": "## Weaknesses\n- The key distinction between this work and the extensive literature on emergent communication (EmComm) is not clearly articulated. \n In prior works, learning is performed end-to-end through reinforcement learning (RL) or straight-through gradient estimation, whereas this paper relies on a memory-based learning mechanism that updates after successful interactions. \n The paper should explicitly clarify and discuss these differences, ideally through direct comparison with past methods.\n\n- The dependence on pretrained LLMs, which already encode extensive natural language knowledge, is insufficiently analyzed. \n The implications of relying on models trained on vast natural-language corpora, particularly when object descriptions are expressed in natural language, should be more deeply discussed. \n The paper does not convincingly address whether emergent communication is genuinely novel or merely a reorganization of existing linguistic priors.\n\n- The motivation for creating a machine language, as well as its potential benefits and risks, is not sufficiently explored. \n The discussion should contrast the advantages and limitations of the emergent language relative to natural language.\n\n- The evaluation of natural-language-like properties (e.g., compositionality, vocabulary size, word length) relies solely on topographic similarity (TopSim), which is known to have severe limitations. \n Adding additional compositionality metrics such as AMI (Mu & Goodman, 2021), CBM (Carmeli et al., 2024), and Context Independence (CI) (Bogin et al., 2018) would significantly strengthen the credibility of the analysis.\n\n---\n\n### Related Work\nA substantial body of EmComm research has examined agents’ ability to generate communication protocols from scratch, many without using reinforcement learning (e.g., Choi et al., 2018; Carmeli et al., 2025; Tucker et al., 2022). 
\nThese studies typically begin with **random symbol vocabularies** and learn mappings through differentiable or obverter-style updates. \nIt is unclear how the proposed **LLM-based communication** framework fundamentally differs from these settings. \nThe authors should explicitly cite and discuss these prior works, clarifying how their approach contributes beyond them.\n\n---\n\n### References\n\n**Referential Games (beyond RL):**\n- Choi, E., Lazaridou, A., & De Freitas, N. (2018). *Compositional obverter communication learning from raw visual input.* ICLR. \n- Carmeli, B., Meir, R., & Belinkov, Y. (2025). *Composition through decomposition in emergent communication (CtD).* ICLR. \n- Tucker, M., Levy, R., Shah, J., & Zaslavsky, N. (2022). *Trading off utility, informativeness, and complexity in emergent communication (VQ-ViB).* NeurIPS 35, 22214–22228. \n\n**Compositionality Metrics:**\n- Mu, J., & Goodman, N. (2021). *Emergent communication of generalizations.* NeurIPS 34, 17994–18007. \n- Carmeli, B., Belinkov, Y., & Meir, R. (2024). *Concept-best-matching: Evaluating compositionality in emergent communication (CBM).* arXiv:2403.14705. \n- Bogin, B., Geva, M., & Berant, J. (2018). *Emergence of communication in an interactive world with consistent speakers (Context Independence).* arXiv:1809.00549. \n\n---" }, { "confidence": 5, "date": 0, "rating": 2, "review": "", "review_id": "8WHUON8p8E", "reviewer": "ICLR.cc/2026/Conference/Submission3748/Reviewer_gPaF", "strengths": "This paper explores an interesting topic, which is the emergence of human-uninterpretable machine language from interactions. Overall, the methodology is clear and the experiments seem reproducible (prompts and code are provided; I did not test the code). The representation of objects is scaled up compared to prior work (Kouwenhoven et al., 2025). Still, this paper is very similar to Kouwenhoven et al., see Weaknesses.", "summary": "This paper demonstrates that pretrained, frozen LLMs can construct human-uninterpretable machine language while playing a Lewis game in an ICL setting. This language exhibits several traits present in human languages, such as weak compositionality, morphology, and polysemy. Longer words, a larger vocabulary size, and larger object sets promote higher compositionality.", "weaknesses": "The current submission is very similar to Kouwenhoven 2025. Further, there are several claims regarding machine language emergence which are encouraged in the experimental design (hence not emergence).\n\n### Injecting priors into experimental design, then claiming emergence of those priors\n1. Asking the listener to retrieve words based on structural proximity **encodes the prior** that similar words should have similar meanings. \n2. This also encourages the emergence of morphemes.\n3. Lines 140-141: \"In contrast to our work, which explores language emergence from scratch, these studies assume a pre-defined vocabulary for each agent\" This is also true for this paper, where the alphabet $\\mathcal A$ is pre-defined.\n\n### Too similar to Kouwenhoven 2025\nMy primary reason to reject is the **lack of novelty with respect to Kouwenhoven2025a** (not cited by the authors) and Kouwenhoven2025b (cited in l136 but not meaningfully engaged with). I do not think it is realistic to reshape this paper to significantly depart from theirs in the current review timeframe. 
With some tweaks, ACL could be a great choice of venue.\n\nKouwenhoven2025a (to the best of my knowledge) were the first to show that LLMs develop \"machine language\" in an ICL setting. The current submission is extremely similar to that paper. Here are several of the similarities:\n\n1. **Attribute-based representations of the objects**, e.g., color, shape, number. The present paper greatly scales up the representation using the brain taxonomy representation, which includes more attributes and values. Still, this experimental setting does not significantly depart from the attribute-value setting used for at least five years now [Chaabouni2021], and we still fall short of implementing machine language for unseen objects in the wild if we rely only on the brain taxonomy for features.\n\n2. **ICL setting** rather than MARL.\n\n3. Using **CV-style syllables** as the vocabulary (the present paper also uses CVC, but I'm not sure that this adds anything to the overall message).\n\n4. Very **similar prompts** to Kouwenhoven2025a.\n\n5. Very **similar conclusions**, including the emergence of morphology (Kouwenhoven2025a Section 5.4) and homonymy (Kouwenhoven2025b). The analysis done in Kouwenhoven2025a and Kouwenhoven2025b is more sophisticated, e.g., they quantify the amount of homonymy and show its evolution over time, in addition to the qualitative analysis. I would recommend doing the same.\n\n6. Kouwenhoven2025a does not explicitly instruct agents to use structural similarity for object retrieval, instead showing it to be an emergent property. I find this much more compelling than explicitly instructing the LLMs to do so. I would recommend adding an experiment ablating the explicit request to use structural similarity.\n\n### Other:\n\n- I'm missing a discussion of in-context learning, which is what the agents are based on.\n- Only gpt-4.1-mini was used for experiments. Given that the present paper is too similar to Kouwenhoven2025, I would experiment on different LLMs or populations of LLMs for a future iteration.\n- A TopSim between 0 and 0.15 (as reported in the paper) is not very high; I would not claim the language is compositional based on these values. Indeed, Chaabouni et al., 2020 claim that emergent languages are **not compositional** using a similar range of values (~0.11).\n\n### Missing related work:\n[Kouwenhoven2025a] Searching for Structure: Investigating Emergent Communication with Large Language Models, Kouwenhoven et al., [COLING 2025](https://openreview.net/forum?id=kst43TfV9b)\n\nEmergent communication at scale, Chaabouni et al., ICLR 2022.\n\n### Bibliography\n[Kouwenhoven2025b] Kouwenhoven et al., IJCAI 2025\n\n[Chaabouni2021] Compositionality and generalization in emergent languages, Chaabouni et al., 2021." }, { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "LfDHGKIB9p", "reviewer": "ICLR.cc/2026/Conference/Submission3748/Reviewer_f3Bi", "strengths": "- Originality: To my knowledge, this is the first paper to bootstrap pretrained LLMs to play Lewis reference games and evaluate the emergent communication protocols.\n- Clarity: The bootstrapping process is generally well-presented and the high-level idea is easy to understand. I believe it is approachable even to readers with no experience in emergent communication.
The claimed results are clearly stated, and the evaluation methods are transparent.\n- Quality: Basing the work on Lewis games, which are extremely well-studied, is a good starting point as it opens the door to relating the findings to the many related studies.", "summary": "This paper explores the ability of two LLM agents to learn effective strategies in a Lewis reference game. The setup is as follows:\n- There is a set of objects known to both agents. Each object has a list of features in natural language. E.g. \"Ant: Taxonomy: 'an insect', Colour: [\"black\", \"red\"],...\".\n- There is a fixed alphabet (e.g. consonant-vowel such as \"va\", \"ca\", \"bi\") and a maximal word length $L$ known to both agents.\n- Two agents are initialized with an empty memory. The memory maps objects to words, which are strings of length at most $L$ over the alphabet.\n- The experiment is carried out as follows. For each round in a set number of rounds, for each object $O$ in the list of objects:\n - One agent is chosen as Speaker and the other as Listener, at random. The Speaker generates a word $w$ based on its memory. The other agent (the \"Listener\") receives $w$ and a set of objects consisting of $O$ and distractors. The Listener then selects an object $O'$ based on its own memory. If $O = O'$, continue to the next object. Otherwise, repeat this (generating $w$ and $O'$) at most two more times. If $O \neq O'$ in all three attempts, the object is skipped.\n\nThe generation of $w$ and the selection of $O'$ are both done via prompting. The authors explore several design choices for the game, e.g. the effect of a consonant-vowel alphabet vs. a vowel-consonant-vowel alphabet. The authors find that agents are able to achieve agreement on all objects within four rounds, i.e., four passes through all objects. They also find that agents are able to generalize successful communication to unseen objects based on the new objects' features. Furthermore, objects with similar features (small Hamming distance of the feature vector) have similar words (small edit distance).", "weaknesses": "- The paper asks whether two LLM agents can be bootstrapped (prompted in a particular scheme) to converge on a winning strategy in Lewis reference games. The answer is yes, and it takes about four passes through the objects. There is, however, a significant gap between this simple setup and the central claim of the paper: the “Emergence of Machine Language.” A long-standing debate in and around the emergent communication community concerns the point at which a learned protocol in a simplified setup can genuinely be called a language. Per Hockett, language should display displacement and true productivity; Chomsky’s Faculty of Language requires recursive compositionality (infinite meanings generated from a finite base set). The standard Lewis reference game setup cannot generate protocols (communication systems) that meet any of these criteria, at least not without a very complicated evaluation suite. I would at least expect to see a serious engagement with these foundational questions if the authors wish to claim such a result.\n- Lewis reference games were a suitable choice for stylized experiments at the nascent stage of emergent communication because it was highly non-trivial to get deep reinforcement learning to learn an effective policy from scratch. This paper sidesteps that significant difficulty by initializing the agents with GPT-4.1-mini.
This pretrained model is already imbued with extensive linguistic structure and pragmatic competence. From such a starting point, successfully coordinating in a Lewis game is largely expected. This is in contrast to when randomly initialized deep RL agents learn an effective protocol (let alone one that exhibits signs of compositionality). But when agents that already exhibit understanding of human grammar and semantics do so, it is comparable to observing that humans (in fact, polyglots) can coordinate through language in a constrained setting. In other words, I view the claimed findings in this paper (emergence, compositionality, morphemes) as the reuse of existing linguistic priors rather than the emergence of language itself." }, { "confidence": 4, "date": 0, "rating": 4, "review": "", "review_id": "EReBjt5mEi", "reviewer": "ICLR.cc/2026/Conference/Submission3748/Reviewer_9z2x", "strengths": "## Quality:\n\nSQ1: I appreciate the ablation study on the capacity of the communication channel (Figure 5 and related text) and on the dataset size (Table 1 and related text).\n\n## Clarity:\n\nSC1: Overall, the paper is well-written and easy to read.\n\n## Originality:\n\nSO1: I appreciate the polysemy and morpheme studies, as they strike me as novel and valuable considerations. The quality of Figure 4.a is also high.\n\n## Significance:\n\nSS1: I think this kind of inquiry around emergent communication from systems that have already developed some natural language fluency (as opposed to from scratch) is a very interesting direction that might have a strong impact on the subfield of Emergent Communication.", "summary": "# Problem:\n\nCan Large Language Models (LLMs) make artificial languages emerge?\n\n# Contributions:\n\nIn the context of referential games, this paper (i) proposes LLM-based speaker and listener agent designs and (ii) provides experimental results on the Object dataset from [McRae2005].", "weaknesses": "## Quality:\n\nWQ1: Ambiguous usage of ‘generalization’ in the claims: is it in-distribution or out-of-distribution? Is the train-test split actually able to measure it (internal validity)?\n\nI would suggest the authors start with [1, 2, 3].\n\nI appreciate that ‘generalizability’ is defined around ln200, but it remains ambiguous as it relies on loosely defined phrases like ‘have learned to describe’ (for the speaker) and ‘understand’ (for the listener). It might help to rely on a metric, for instance zero-shot compositional tests, as used in [Chaabouni2021, 6], which requires specifically-constructed train-test splits around compositions of attributes. From my understanding of the Stage 1 vs Stage 2 data split (ln312-313), it consists of a random split; therefore, no zero-shot compositionality tests are performed in the current experiment.\n\nWQ2: “Machine language is considered to emerge if the two agents achieved successful communication on the majority of 400 objects…” (ln70): Missing discussion related to [4], which showed that accuracy in an emergent communication game is not necessarily an indication of successful communication; the claim in ln70 thus requires clarification.
It might be important to introduce some positive signaling and positive listening metrics to the current study.\n\nWQ3: Despite citing [Chaabouni2021], the paper does not discuss the choice of compositionality metric, and in particular does not explain why it measures only topographic similarity when [Chaabouni2021] has shown it to be limited compared to their posdis/bosdis proposals (refined in [6]). Moreover, why not consider the recent metric proposed by [8] as well?\n\nWQ4: I think it would be interesting to consider performance depending on the number of parameters of the LLM used. It would also increase the external validity of the experiments if they were performed with both closed and open-source/weights LLMs, as opposed to only the closed gpt-4.1-mini.\n\n## Clarity:\n\nWC1: Figures 3, 4, and 5 lack details about the statistics being reported (standard error of the mean?).\n\nWC2: As [3] showed that topographic similarity measured on the whole dataset vs. the train set vs. the test set yields different values, I think it is important that the current paper clarifies what the reported measure is computed on.\n\nWC3: It is unclear to me what semantic features $f_o$ define a generic object $o$ (ln189).\n\n## Originality:\n\nWO1: Missing discussion of [3] regarding the results presented in Figure 5. Indeed, [3] found that (i) increasing the maximum sentence/word length furthers both compositionality and generalisation abilities, but (ii) increasing the vocabulary size is detrimental.\n\nWO2: Missing citation to [7] around ln215 (4.1 Memory).\n\n## Significance:\n\nWS1: Claim 3 (ln92) is made in a vacuum: language emergence is efficient and robust in comparison to what? I would advise the authors to consider adding some common baselines (e.g. [3] or [5]), or an ablation study showing that a specific design choice yields greater efficiency and robustness than another.\n\nWS2: The same criticism applies to the second part of the claim regarding generalizability and compositionality: e.g. what is the threshold above which the measured Topographic Similarity can indicate compositionality? I appreciate the footnote information for the paragraph starting in ln376, but it makes for rather weak evidence at best. It would be better to measure compositionality on the relevant dataset (with the same train-test splits) with a common approach, for comparison." } ]
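To make the protocol under discussion concrete, here is a minimal, self-contained sketch of the referential-game loop summarized in review LfDHGKIB9p. The `Agent` class, its heuristic naming and retrieval rules, and the alphabet are hypothetical stand-ins for the paper's LLM-prompted agents and memory, not the authors' code; only the loop structure (random role assignment, up to three attempts per object, memory updates on success) follows the description in the reviews.

```python
import random
from difflib import SequenceMatcher

class Agent:
    """Hypothetical stand-in for an LLM-prompted agent with an object->word memory."""
    def __init__(self, alphabet=("va", "ka", "bi", "lo"), max_len=3):
        self.memory = {}  # object name -> last word that led to a success
        self.alphabet, self.max_len = alphabet, max_len

    def name(self, obj):
        # Speaker role: reuse the remembered word, else coin a random one.
        if obj in self.memory:
            return self.memory[obj]
        k = random.randint(1, self.max_len)
        return "".join(random.choices(self.alphabet, k=k))

    def pick(self, word, candidates):
        # Listener role: among candidates it has words for, pick the one whose
        # remembered word is structurally closest to the received word
        # (this heuristic mirrors the structural-proximity prior critiqued above).
        known = [o for o in candidates if o in self.memory]
        if known:
            return max(known, key=lambda o: SequenceMatcher(None, self.memory[o], word).ratio())
        return random.choice(candidates)

def play_round(objects, a, b, n_distractors=3, max_attempts=3):
    """One pass over all objects; returns the communication success rate."""
    successes = 0
    for target in objects:
        speaker, listener = random.sample([a, b], 2)  # roles assigned at random
        candidates = random.sample([o for o in objects if o != target], n_distractors) + [target]
        random.shuffle(candidates)
        for _ in range(max_attempts):
            word = speaker.name(target)
            if listener.pick(word, candidates) == target:
                speaker.memory[target] = word   # memory updates only on success
                listener.memory[target] = word
                successes += 1
                break
    return successes / len(objects)

# Toy run: agreement typically stabilizes within a few passes.
objects = ["ant", "apple", "axe", "bear", "boat"]
a, b = Agent(), Agent()
for r in range(1, 5):
    print(f"round {r}: success rate {play_round(objects, a, b):.2f}")
```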
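Several review points (the 0-0.15 TopSim range, WQ3, WS2) hinge on how topographic similarity is computed: the Spearman correlation between pairwise meaning distances and pairwise message distances. The sketch below shows the standard computation under common assumptions (Hamming distance over attribute vectors, Levenshtein distance over words); the toy objects and words are invented for illustration and are not taken from the submission.

```python
from itertools import combinations
from scipy.stats import spearmanr

def hamming(u, v):
    # Meaning-space distance: number of attribute slots where the two
    # equal-length feature vectors disagree.
    return sum(a != b for a, b in zip(u, v))

def levenshtein(s, t):
    # Message-space distance: standard edit distance between two words.
    dp = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        prev, dp[0] = dp[0], i
        for j, ct in enumerate(t, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (cs != ct))
    return dp[-1]

def topsim(meanings, messages):
    # Spearman correlation between all pairwise meaning distances and the
    # corresponding pairwise message distances.
    pairs = list(combinations(range(len(meanings)), 2))
    md = [hamming(meanings[i], meanings[j]) for i, j in pairs]
    sd = [levenshtein(messages[i], messages[j]) for i, j in pairs]
    return spearmanr(md, sd).correlation

# Toy usage: three objects with two attributes each and their words.
meanings = [("red", "round"), ("red", "square"), ("blue", "round")]
messages = ["vaka", "vaki", "buka"]
print(topsim(meanings, messages))
```

Note that, as WC2 observes, the value depends on which object set (whole dataset, train, or test) the pairs are drawn from.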
4
@inproceedings{
  anonymous2025emergence,
  title={Emergence of Machine Language in {LLM}-based Agent Communication},
  author={Anonymous},
  booktitle={Submitted to The Fourteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=zy06mHNoO2},
  note={under review}
}
anonymous2025emergence