New insights from Google DeepMind and Stanford University on the limitations of current LLMs (Gemini Pro, GPT-4 Turbo) in causal and logical reasoning.
Unfortunately, the human reasoning process and all its limitations are encoded in our LLMs, given the multitude of human conversations and reasoning traces across online platforms (including the logical richness of social media - smile).
No AGI in sight: just rule hallucinations on top of factual hallucinations, plus a purely linear, sequential understanding. Our LLMs really do learn from us, including all our mathematical and logical limitations.
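The paper's core finding is that reordering logically equivalent premises changes LLM accuracy. A minimal sketch of the idea (not the authors' code; problem text and function names are my own illustration) is to take one forward-chaining problem and generate every permutation of its premises, so each prompt variant carries identical logical content in a different order:

```python
# Minimal sketch: generate prompt variants that differ ONLY in
# premise order, to probe an LLM's order sensitivity.
from itertools import permutations

# A simple forward-chaining problem, premises listed in proof order.
premises = [
    "If A then B.",
    "If B then C.",
    "If C then D.",
]
question = "Given that A is true, is D true?"

def make_prompts(premises, question):
    """Return one prompt per ordering of the same premises."""
    prompts = []
    for order in permutations(premises):
        body = " ".join(order)
        prompts.append(f"{body} {question}")
    return prompts

prompts = make_prompts(premises, question)
print(len(prompts))  # 3 premises -> 3! = 6 orderings
```

A robust reasoner should answer all six variants identically; the paper reports that current models do not.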
All rights with authors:
2024-2-15
Premise Order Matters in Reasoning with Large Language Models
https://arxiv.org/pdf/2402.08939.pdf

00:00 Intro
01:00 Linear order of reasoning
04:14 Sensitive to premise order
06:31 Maths reasoning
09:52 Insights
12:07 Logical hallucinations
#airesearch
#reasoning
#logic