Mark Gritter (markgritter@mathstodon.xyz)'s status on Sunday, 13-Oct-2024 02:18:03 JST

    I think this is very interesting but really very _basic_ research on the reasoning capabilities of LLMs. Any random PhD should be able to move from a static benchmark of math problems to a distribution of similar problems (a sketch of that templating idea follows the list below). That's exactly what this paper does, and discovers:

    1. All current models do worse on GSM8K-like problems than they do on GSM8K itself, and there is wide variation in success across different samples.
    2. #LLM performance varies if you change the names. It changes even more if you change the numbers. Change both, and you get even more variation.
    3. Adding more clauses to the word problems makes the models perform worse.
    4. Adding irrelevant information to the word problems makes the models perform worse.
    5. Even the latest o1-mini and o1-preview models, while they score highly, show the same sort of variability.
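
    To make that concrete, here is a minimal sketch of the kind of symbolic templating involved (the template, names, and number ranges are hypothetical illustrations, not the paper's actual GSM-Symbolic templates):

      import random

      # Hypothetical GSM8K-style symbolic template: the name and the numbers
      # are slots that can be re-instantiated to generate many variants of
      # "the same" word problem.
      TEMPLATE = (
          "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
          "{name} gives away {z} apples. How many apples are left?"
      )
      NAMES = ["Sophie", "Liam", "Mei", "Omar"]

      def make_variant(rng):
          """Return (question, ground-truth answer) for one instantiation."""
          x, y = rng.randint(5, 50), rng.randint(5, 50)
          z = rng.randint(1, x + y)  # keep the answer non-negative
          question = TEMPLATE.format(name=rng.choice(NAMES), x=x, y=y, z=z)
          return question, x + y - z

      rng = random.Random(0)
      for _ in range(3):
          q, a = make_variant(rng)
          print(q, "->", a)

    Scoring a model on many such instantiations, rather than on one fixed set, is what turns a single benchmark number into a distribution.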

    I think this is the bare minimum we should be expecting of "AI is showing reasoning behavior" claims: demonstrate on a distribution of novel problems instead of a fixed benchmark, and show the distribution instead of the best results.
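
    Showing the distribution is cheap once each instantiation is scored separately; a sketch with made-up placeholder accuracies, not numbers from the paper:

      from statistics import mean, stdev

      # Placeholder per-instantiation accuracies for one model across
      # re-sampled versions of the benchmark (illustrative, not real data).
      accuracies = [0.78, 0.81, 0.74, 0.69, 0.83, 0.77, 0.72, 0.80]

      # Report the distribution, not just the best run.
      print(f"best:  {max(accuracies):.2f}")
      print(f"mean:  {mean(accuracies):.2f} +/- {stdev(accuracies):.2f}")
      print(f"range: [{min(accuracies):.2f}, {max(accuracies):.2f}]")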

    It's not that humans don't share similar biases -- plenty of middle-school students are tripped up by irrelevant data too -- but I think results like this show we are very far off from any sort of expert-level LLMs. If they show a wide distribution of behavior on tasks that are easy to measure, it's quite likely the same is true on tasks that are harder to measure.

    https://arxiv.org/abs/2410.05229

    In conversation about a year ago from mathstodon.xyz

    Attachments

    1. https://media.mathstodon.xyz/media_attachments/files/113/295/496/240/567/803/original/f46cf2d72f5014f7.png

    2. https://media.mathstodon.xyz/media_attachments/files/113/295/525/945/775/697/original/dee72d422645c0eb.png
    3. GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models (https://arxiv.org/abs/2410.05229)
      Recent advancements in Large Language Models (LLMs) have sparked interest in their formal reasoning capabilities, particularly in mathematics. The GSM8K benchmark is widely used to assess the mathematical reasoning of models on grade-school-level questions. While the performance of LLMs on GSM8K has significantly improved in recent years, it remains unclear whether their mathematical reasoning capabilities have genuinely advanced, raising questions about the reliability of the reported metrics. To address these concerns, we conduct a large-scale study on several SOTA open and closed models. To overcome the limitations of existing evaluations, we introduce GSM-Symbolic, an improved benchmark created from symbolic templates that allow for the generation of a diverse set of questions. GSM-Symbolic enables more controllable evaluations, providing key insights and more reliable metrics for measuring the reasoning capabilities of models. Our findings reveal that LLMs exhibit noticeable variance when responding to different instantiations of the same question. Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark. Furthermore, we investigate the fragility of mathematical reasoning in these models and show that their performance significantly deteriorates as the number of clauses in a question increases. We hypothesize that this decline is because current LLMs cannot perform genuine logical reasoning; they replicate reasoning steps from their training data. Adding a single clause that seems relevant to the question causes significant performance drops (up to 65%) across all state-of-the-art models, even though the clause doesn't contribute to the reasoning chain needed for the final answer. Overall, our work offers a more nuanced understanding of LLMs' capabilities and limitations in mathematical reasoning.