New AI developments: Apple's new contextual understanding benchmark (February 2024) provides new insight into AI reasoning, in particular into in-context learning (ICL) prompts built from retrieved, augmented data, and into their poor performance on LLMs pre-trained on low-complexity data. I build upon Apple's insights to discuss a new complexity class of synthetic pre-training datasets for better pre-trained LLMs.
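To make the ICL setup concrete, here is a minimal sketch of assembling an in-context learning prompt from retrieved passages. The toy retriever, corpus, and demonstrations are my own illustrative assumptions, not Apple's benchmark code.

```python
# Minimal sketch of an ICL prompt assembled from retrieved context.
# The corpus, scoring function, and demo examples are hypothetical
# placeholders for illustration only.

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_icl_prompt(query: str, passages: list[str],
                     demos: list[tuple[str, str]]) -> str:
    """Compose few-shot demonstrations + retrieved context + the question."""
    demo_block = "\n".join(f"Q: {q}\nA: {a}" for q, a in demos)
    context_block = "\n".join(f"- {p}" for p in passages)
    return f"{demo_block}\n\nContext:\n{context_block}\n\nQ: {query}\nA:"

corpus = ["Aspirin inhibits platelet aggregation.",
          "Warfarin is an anticoagulant.",
          "Statins lower LDL cholesterol."]
demos = [("What does metformin treat?", "Type 2 diabetes.")]
query = "What does aspirin inhibit?"
print(build_icl_prompt(query, retrieve(query, corpus), demos))
```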
Q: Is the core of AI reasoning built upon the pre-trained LLM, and what can a coherent-complexity-class dataset for supervised fine-tuning (SFT) achieve? There are strong indications that AI reasoning is baked into the reasoning-complexity class of the LLM's pre-training dataset, which points to a natural way to significantly improve AI reasoning for medicine and financial services.
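As one way to picture a "coherent-complexity-class" SFT dataset, here is a hedged sketch that filters synthetic examples into a target complexity band, using the number of reasoning steps as a crude complexity proxy. The field names, examples, and band are assumptions for illustration, not a published recipe.

```python
# Sketch: select SFT examples within a target reasoning-complexity band.
# Using "steps" as a complexity proxy and the band [4, 8] are illustrative
# assumptions on my part.

synthetic_pool = [
    {"prompt": "Diagnose from symptoms A, B.", "response": "...", "steps": 2},
    {"prompt": "Multi-hop financial risk case.", "response": "...", "steps": 6},
    {"prompt": "Chained drug-interaction case.", "response": "...", "steps": 7},
]

def in_complexity_class(example: dict, lo: int = 4, hi: int = 8) -> bool:
    """Keep examples whose reasoning-step count falls in the target band."""
    return lo <= example["steps"] <= hi

sft_dataset = [ex for ex in synthetic_pool if in_complexity_class(ex)]
print(f"{len(sft_dataset)} of {len(synthetic_pool)} examples kept")
```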
Four new methods to improve AI reasoning are explored in detail: from better pre-trained LLMs, to higher-complexity-class fine-tuning datasets for advanced reasoning, to policy optimization via DPO-aligned sub-trajectories and their synthetic reward functions. We also explore random walks to discover new argumentation paths in pre-trained language models, and dive into Self-Discover, where an LLM self-composes adaptive reasoning structures from 39 reasoning modules and their combinatorial configurations for advanced reasoning.
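For the DPO-aligned sub-trajectories, a minimal sketch of the standard DPO objective (Rafailov et al., 2023) is shown below. Treating each preference pair as a chosen/rejected reasoning sub-trajectory labeled by a synthetic reward function is an assumption of this sketch; in practice the summed token log-probabilities come from your policy and frozen reference models.

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen_logp: torch.Tensor, pi_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor, ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective (Rafailov et al., 2023):
    -log sigmoid(beta * [(log pi(y_w) - log pi_ref(y_w))
                       - (log pi(y_l) - log pi_ref(y_l))]).
    Here each y is assumed to be a reasoning sub-trajectory whose
    chosen/rejected label comes from a synthetic reward function."""
    chosen_margin = pi_chosen_logp - ref_chosen_logp
    rejected_margin = pi_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy usage: summed token log-probs for a batch of two preference pairs.
loss = dpo_loss(torch.tensor([-12.3, -9.8]), torch.tensor([-15.1, -11.0]),
                torch.tensor([-13.0, -10.2]), torch.tensor([-14.8, -10.9]))
print(loss.item())
```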
#airesearch
#aieducation