LIMO: Less is More for Reasoning

5 Feb 2025 · Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, Pengfei Liu

We present a fundamental discovery that challenges our understanding of how complex reasoning emerges in large language models. While conventional wisdom suggests that sophisticated reasoning tasks demand extensive training data (>100,000 examples), we demonstrate that complex mathematical reasoning abilities can be effectively elicited with surprisingly few examples. Through comprehensive experiments, our proposed model LIMO demonstrates unprecedented performance in mathematical reasoning. With merely 817 curated training samples, LIMO achieves 57.1% accuracy on AIME and 94.8% on MATH, improving from previous SFT-based models' 6.5% and 59.2% respectively, while only using 1% of the training data required by previous approaches. LIMO demonstrates exceptional out-of-distribution generalization, achieving 40.5% absolute improvement across 10 diverse benchmarks, outperforming models trained on 100x more data, challenging the notion that SFT leads to memorization rather than generalization. Based on these results, we propose the Less-Is-More Reasoning Hypothesis (LIMO Hypothesis): in foundation models where domain knowledge has been comprehensively encoded during pre-training, sophisticated reasoning capabilities can emerge through minimal but precisely orchestrated demonstrations of cognitive processes. This hypothesis posits that the elicitation threshold for complex reasoning is determined by two key factors: (1) the completeness of the model's encoded knowledge foundation during pre-training, and (2) the effectiveness of post-training examples as "cognitive templates" that show the model how to utilize its knowledge base to solve complex reasoning tasks. To facilitate reproducibility and future research in data-efficient reasoning, we release LIMO as a comprehensive open-source suite at https://github.com/GAIR-NLP/LIMO.
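
In practice, the LIMO recipe is plain supervised fine-tuning on the small curated set; nothing exotic is required. Below is a minimal sketch using HuggingFace transformers/datasets. The base model name, the dataset id "GAIR/LIMO", and the "question"/"solution" field names are assumptions for illustration, not details confirmed in this post; hyperparameters are placeholders.

```python
# Minimal SFT sketch of the LIMO recipe: fine-tune a strong base model
# on ~817 curated question/solution pairs with full worked reasoning chains.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen2.5-32B-Instruct"  # assumed base model for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumed dataset id and field names; the released suite is linked below.
raw = load_dataset("GAIR/LIMO", split="train")

def tokenize(ex):
    # The "cognitive template" is the full long-form solution,
    # not just the final answer, so train on question + solution.
    text = ex["question"] + "\n" + ex["solution"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=4096)

train = raw.map(tokenize, remove_columns=raw.column_names)

# mlm=False pads each batch and builds causal-LM labels from input_ids.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="limo-sft", num_train_epochs=3,
                           per_device_train_batch_size=1,
                           learning_rate=1e-5, bf16=True),
    train_dataset=train,
    data_collator=collator,
)
trainer.train()
```

Per the paper's thesis, the leverage is in curation quality (difficult problems paired with detailed, well-structured reasoning chains), not in the training loop itself, which is standard SFT.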

Paper: https://arxiv.org/pdf/2502.03387v1.pdf

Code:
https://github.com/gair-nlp/limo


@Machine_learn