
šŸ…šŸ…šŸ… What is trending in AI research- A Memeā€™s Glimpse into the Pinnacle of Artificial Intelligence (AI) Progress in a Mamba Series: LLM Enlightenment + This AI Paper from CMU and Apple Unveils WRAP and many more.....

This newsletter brings you AI research news that is more technical than most resources, yet still digestible and applicable.


Hi there,

I hope you all are doing well!

Here are this week's top AI/ML research briefs.

This AI Paper from CMU and Apple Unveils WRAP: A Game-Changer for Pre-training Language Models with Synthetic Data
How can large language models (LLMs) efficiently learn from the vast, unstructured, and often low-quality data available on the web, given escalating compute and data requirements? This paper introduces Web Rephrase Augmented Pre-training (WRAP), a method that uses an instruction-tuned model to paraphrase web documents into more structured styles, such as Wikipedia-like prose or question-answer format, and then pre-trains on the mix of real and synthetic text. This approach significantly improves LLM training efficiency and effectiveness. Specifically, WRAP accelerates pre-training on the C4 dataset by roughly 3x and, with the same compute budget, reduces perplexity by over 10% and increases zero-shot question-answering accuracy by more than 2% across various tasks. The study also examines how different rephrasing styles affect LLM performance, especially in out-of-distribution (OOD) settings. WRAP's success is attributed to its ability to inject stylistic diversity and higher-quality synthetic data into the training process, mirroring the diversity and quality expected in downstream applications.
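To make the idea concrete, here is a minimal sketch of the rephrasing step, assuming a generic instruction-tuned model served through Hugging Face transformers. The model name, prompt wording, and helper functions are illustrative choices for this example, not the authors' implementation.

```python
# Illustrative sketch of WRAP-style rephrasing, not the authors' code.
# The rephraser model and the prompts below are placeholder assumptions.
from transformers import pipeline

rephraser = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

STYLE_PROMPTS = {
    # Turn noisy web text into a cleaner, encyclopedia-like register.
    "wiki": ("Rewrite the following web text in the style of a high-quality "
             "Wikipedia article, keeping all factual content:\n\n{doc}"),
    # Or convert it into question-answer pairs.
    "qa": ("Convert the following web text into question-and-answer pairs "
           "covering its factual content:\n\n{doc}"),
}

def rephrase(doc: str, style: str = "wiki") -> str:
    """Generate one synthetic rephrasing of a web document."""
    prompt = STYLE_PROMPTS[style].format(doc=doc)
    out = rephraser(prompt, max_new_tokens=512, return_full_text=False)
    return out[0]["generated_text"]

def build_training_mix(web_docs: list[str]) -> list[str]:
    """Interleave raw web text with its rephrasings; WRAP pre-trains on both."""
    mixed = []
    for doc in web_docs:
        mixed.append(doc)                    # keep the original (noisy) sample
        mixed.append(rephrase(doc, "wiki"))  # add its cleaner synthetic twin
    return mixed
```

Mixing the raw documents back in with their rephrasings is what preserves robustness to the noisy text the model will still see at inference time.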

A Meme's Glimpse into the Pinnacle of Artificial Intelligence (AI) Progress in a Mamba Series: LLM Enlightenment
In the dynamic field of Artificial Intelligence (AI), the trajectory from one foundational model to the next has marked a remarkable series of paradigm shifts. The escalating series of models, including Mamba, Mamba MOE, MambaByte, and the latest approaches such as Cascade, Layer-Selective Rank Reduction (LASER), and Additive Quantization for Language Models (AQLM), has revealed new levels of cognitive power. The famous 'Big Brain' meme succinctly captures this progression, humorously illustrating the rise from ordinary competence to extraordinary brilliance as one delves into the intricacies of each language model.

Self-Rewarding Language Models: https://arxiv.org/abs/2401.10020

Cascade Speculative Drafting: https://arxiv.org/abs/2312.11462

This AI Paper from UNC-Chapel Hill Proposes ReGAL: A Gradient-Free Method for Learning a Library of Reusable Functions via Code Refactorization
How can large language models (LLMs) overcome their limitations in program synthesis, particularly their lack of a global view and their tendency to generate redundant code? The proposed solution, Refactoring for Generalizable Abstraction Learning (ReGAL), is a gradient-free method for building a library of reusable functions through code refactorization, that is, restructuring code to preserve its behavior while making it more efficient and less error-prone. By learning from a small set of programs and iteratively refining its abstractions, ReGAL improves program prediction accuracy across a range of domains. On datasets from LOGO graphics, date reasoning, and TextCraft (a Minecraft-based text game), it yields significant accuracy gains for both open-source and proprietary LLMs, including an absolute accuracy boost of up to 26.1% on specific tasks, and it surpasses GPT-3.5 in two domains. ReGAL's success lies in its ability to encapsulate common subroutines and adapt to environmental dynamics, demonstrating a promising avenue for improving LLMs' efficiency and generality in program synthesis.
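As a toy illustration of the underlying idea (not the authors' code), the sketch below factors a recurring subroutine out of a couple of LOGO-like primitive programs, names it, rewrites the programs to call it, and checks that behavior is preserved. ReGAL itself prompts an LLM to propose and verify such refactorings; a simple frequency heuristic stands in for that step here, and the primitive names are made up for the example.

```python
# Toy stand-in for ReGAL-style abstraction learning: find a subroutine that
# recurs across programs, name it, and rewrite the programs to call it,
# verifying that the refactored programs expand back to the originals.
from collections import Counter

programs = [
    ["pen_down", "forward", "turn", "forward", "turn", "pen_up"],
    ["pen_down", "forward", "turn", "forward", "turn", "forward", "turn", "pen_up"],
]

def most_common_block(progs, size=2):
    """Return the most frequent contiguous block of `size` primitives."""
    counts = Counter()
    for p in progs:
        for i in range(len(p) - size + 1):
            counts[tuple(p[i:i + size])] += 1
    return max(counts, key=counts.get)

def refactor(prog, block, name):
    """Replace every occurrence of `block` in `prog` with a call to `name`."""
    out, i = [], 0
    while i < len(prog):
        if tuple(prog[i:i + len(block)]) == block:
            out.append(name)
            i += len(block)
        else:
            out.append(prog[i])
            i += 1
    return out

def expand(prog, library):
    """Inline library calls back into primitives, to verify equivalence."""
    flat = []
    for op in prog:
        flat.extend(library.get(op, [op]))
    return flat

block = most_common_block(programs)            # e.g. ("forward", "turn")
library = {"draw_edge": list(block)}           # the learned reusable function
refactored = [refactor(p, block, "draw_edge") for p in programs]
assert all(expand(r, library) == p for r, p in zip(refactored, programs))
```

The payoff is the same as in the paper: once "draw_edge" is in the library, future programs can be predicted in terms of a shorter, less redundant vocabulary.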

Researchers from Grammarly and the University of Minnesota Introduce CoEdIT: An AI-Based Text Editing System Designed to Provide Writing Assistance with a Natural Language Interface
How can writers efficiently adjust their text to meet specific stylistic or complexity requirements? Grammarly's CoEdIT offers a cutting-edge solution: a text editing system designed to assist with writing. Given a user instruction describing the desired text attributes, such as simplifying sentences or adopting a neutral style, CoEdIT outputs the refined text accordingly. The system is powered by a large language model fine-tuned on 82,000 task-specific text editing instructions. Remarkably, CoEdIT not only sets a new benchmark in text editing performance across various standards but also holds its ground against the largest publicly available instruction-tuned LLMs while being nearly 60 times smaller. It generalizes to new, unseen editing instructions and to composite commands that combine multiple editing actions. Extensive evaluations show a clear preference for CoEdIT's edits over those of other leading text editing models, highlighting its effectiveness and versatility in improving writing quality.
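Here is a minimal usage sketch, assuming the publicly released grammarly/coedit-large checkpoint on the Hugging Face Hub and the transformers library installed; the instruction phrasings below are illustrative rather than an official prompt format.

```python
# Minimal sketch of instruction-driven text editing in the CoEdIT style.
# Assumes the grammarly/coedit-large checkpoint (a Flan-T5-based editor).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "grammarly/coedit-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def edit(instruction: str, text: str) -> str:
    """Prepend a natural-language editing instruction and decode the edited text."""
    inputs = tokenizer(f"{instruction}: {text}", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(edit("Fix the grammar", "She no went to the market yesterday."))
print(edit("Rewrite this in a more neutral tone", "This paper is totally useless."))
```

Because the editing intent is carried entirely by the natural-language instruction, the same model handles grammar fixes, simplification, paraphrasing, and tone changes without task-specific heads.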

Other Trending Papers

  • StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback [Paper]

  • Specialized Language Models with Cheap Inference from Limited Domain Data [Paper]

  • TravelPlanner: A Benchmark for Real-World Planning with Language Agents [Paper]

  • Repeat After Me: Transformers are Better than State Space Models at Copying [Paper]

Recommended Newsletters

Stay up-to-date with AI.

AI won't replace you, but a person using AI might. That's why 500,000+ professionals read The Rundown, the free newsletter that keeps you updated on the latest AI news, tools, and tutorials in 5 minutes a day.