Emergent Analogical Reasoning in Large Language Models

Taylor Webb, Keith J. Holyoak, Hongjing Lu, arXiv, Jan 04, 2023

What’s really interesting to me about GPT-3 and other large language models (LLMs) is that they are not programmed with rules or categories, but instead create them out of the data they’re given. As this paper (27 page PDF) argues, "GPT-3 appears to display an emergent ability to reason by analogy, matching or surpassing human performance across a wide range of problem types." The authors continue, "The deep question that now arises is how GPT-3 achieves the analogical capacity that is often considered the core of human intelligence." Many of the criticisms of LLMs point to errors in these pattern-recognition capabilities: they sometimes get basic facts wrong, and don’t yet seem to understand what types of things some things are. But as the authors write, "regardless of the extent to which GPT-3 employs human-like mechanisms to perform analogical reasoning, we can be certain that it did not acquire these mechanisms in a human-like manner." We don’t actually teach an LLM the way we would, say, a child. But suppose we did…

School, English

via Stephen’s Web ~ OLDaily http://www.downes.ca/

January 4, 2023 at 06:45PM