🔥 AI's Hottest Research Updates: Cross-Episodic Curriculum (CEC) + HyperHuman + Show-1 + Featured AI Tools + Featured AI Startups

This newsletter brings you AI research news that is more technical than most resources but still digestible and applicable.

Hey Folks!

This newsletter will discuss some cool AI research papers and AI tools. Happy learning!

👉 What is Trending in AI/ML Research?

How can the learning efficiency and generalization of Transformer agents be enhanced? This paper introduces the Cross-Episodic Curriculum (CEC) algorithm, which embeds cross-episodic experiences into a Transformer's context to form an evolving curriculum. By arranging online learning sessions and mixed-quality demonstrations sequentially, CEC crafts curricula that illustrate the progression and skill enhancement across episodes. Combined with the strong pattern-recognition abilities of Transformers, this yields a powerful cross-episodic attention mechanism. The algorithm proves effective in scenarios like multi-task reinforcement learning with discrete control in DeepMind Lab and imitation learning with varied data quality in RoboMimic. In both settings, CEC-trained policies deliver superior performance and generalization.
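
To make the curriculum construction concrete, here is a minimal, hypothetical sketch of the core idea, assuming episodes are already tokenized into (observation, action) sequences; the names `Episode` and `build_cec_context` are illustrative, not from the paper's code.

```python
# Hypothetical sketch of the Cross-Episodic Curriculum (CEC) idea:
# order whole episodes by a curriculum signal (e.g., return or demo
# quality) and concatenate them into a single Transformer context, so
# causal attention can read the improvement trend across episode
# boundaries.

from dataclasses import dataclass
from typing import List

@dataclass
class Episode:
    tokens: List[int]   # tokenized (observation, action) pairs
    score: float        # curriculum signal: return, or demo quality

def build_cec_context(episodes: List[Episode], max_len: int) -> List[int]:
    """Arrange episodes from worst to best and flatten into one context."""
    ordered = sorted(episodes, key=lambda e: e.score)  # curriculum order
    context: List[int] = []
    for ep in ordered:
        if len(context) + len(ep.tokens) > max_len:
            break  # truncate rather than split an episode mid-way
        context.extend(ep.tokens)
    return context  # fed to a causal Transformer; loss on action tokens only
```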

Given the limitations of existing models like Stable Diffusion and DALL-E 2, how can we improve the hyper-realism of generated human images? To address this, the paper introduces "HyperHuman", a novel framework designed to harness the inherent multi-level structure of human images, from the broad body skeleton to intricate spatial geometry. The approach involves creating a vast dataset, "HumanVerse", comprising 340M images annotated with pose, depth, and surface-normal details. Using this, the proposed "Latent Structural Diffusion Model" simultaneously denoises the RGB image together with its depth and surface-normal maps, tying appearance, spatial relationships, and geometry together. A "Structure-Guided Refiner" then enhances resolution and visual quality. The results reveal unparalleled hyper-realism in generated human images across varied contexts.
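
As a rough illustration of the joint-denoising idea, the hedged sketch below denoises RGB, depth, and surface-normal latents through one shared backbone so the three stay spatially aligned; the module, shapes, and conditioning here are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of joint structural denoising: RGB, depth, and normal
# latents are denoised together so appearance and geometry stay aligned.
# Shapes and the trivial backbone are placeholders, not HyperHuman's code.

import torch
import torch.nn as nn

class JointDenoiser(nn.Module):
    def __init__(self, channels_per_branch: int = 4):
        super().__init__()
        # One shared backbone sees the concatenated latents of all branches.
        self.backbone = nn.Conv2d(3 * channels_per_branch,
                                  3 * channels_per_branch,
                                  kernel_size=3, padding=1)

    def forward(self, rgb_z, depth_z, normal_z, t_emb):
        x = torch.cat([rgb_z, depth_z, normal_z], dim=1)
        eps = self.backbone(x + t_emb)        # predict noise jointly
        return torch.chunk(eps, 3, dim=1)     # per-branch noise estimates

# One denoising step: all three latents move together.
denoiser = JointDenoiser()
rgb, depth, normal = (torch.randn(1, 4, 32, 32) for _ in range(3))
t = torch.zeros(1, 12, 32, 32)  # placeholder timestep conditioning
eps_rgb, eps_depth, eps_normal = denoiser(rgb, depth, normal, t)
```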

How can we efficiently generate high-quality videos with precise text-video alignment from large-scale pre-trained Video Diffusion Models (VDMs)? Addressing the limitations of existing pixel-based and latent-based VDMs, this paper introduces "Show-1", a pioneering hybrid model. Show-1 combines the best of both: it first employs pixel-based VDMs to create a low-resolution video with strong text-video correlation, then leverages a novel expert-translation technique with latent-based VDMs to upscale the video to high resolution. The result is a system with the precise alignment of pixel VDMs at a fraction of the computational cost, with GPU memory usage during inference dropping from 72 GB to 15 GB. The model outperforms prior approaches on standard benchmarks.
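
A toy sketch of that two-stage flow is below, with placeholder samplers standing in for the real pixel-based and latent-based VDMs; all function names and resolutions are illustrative assumptions, not Show-1's API.

```python
# Illustrative sketch of Show-1's hybrid pipeline: a pixel-space VDM
# produces a small, well-aligned video cheaply, then a latent-space VDM
# does the expensive super-resolution. Placeholders only.

import numpy as np

def pixel_vdm_sample(prompt, height, width, frames):
    # Stand-in for a pixel-space video diffusion sampler: strong
    # text-video alignment, affordable only at low resolution.
    return np.zeros((frames, height, width, 3))

def latent_vdm_upscale(video, prompt, height, width):
    # Stand-in for latent-space "expert translation" upscaling, where
    # high resolution stays cheap because diffusion runs on latents.
    return np.zeros((video.shape[0], height, width, 3))

def generate_video(prompt: str):
    low_res = pixel_vdm_sample(prompt, height=64, width=40, frames=16)
    return latent_vdm_upscale(low_res, prompt, height=576, width=320)

clip = generate_video("a corgi surfing at sunset")  # (16, 576, 320, 3)
```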

Can Transformers efficiently handle long sequences given their memory constraints? Addressing this limitation, this new paper from UC Berkeley introduces "Ring Attention", a novel approach that employs blockwise computation of self-attention. This technique disperses lengthy sequences across multiple devices, synchronizing the communication of key-value blocks with blockwise attention computation. As a result, Ring Attention permits training and inference of sequences considerably longer than previous memory-efficient Transformers, essentially bypassing the individual device memory restrictions. Experimental outcomes from language modeling tasks validate Ring Attention's capacity to accommodate larger sequence input sizes while enhancing performance.
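
The blockwise computation can be simulated in a single process, as in the minimal sketch below: query blocks stand in for devices, key-value blocks "rotate" past each one, and a numerically stable online softmax folds in one block at a time. Real implementations overlap this loop with inter-device communication; everything here is an illustrative simplification.

```python
# Minimal single-process simulation of the Ring Attention idea: queries
# stay put, key/value blocks travel around a "ring", and each step folds
# one block into a running (online) softmax.

import numpy as np

def ring_attention(q_blocks, k_blocks, v_blocks):
    outputs = []
    for q in q_blocks:                        # one q block per "device"
        acc = np.zeros_like(q)                # weighted-value accumulator
        row_max = np.full(q.shape[0], -np.inf)
        denom = np.zeros(q.shape[0])
        for k, v in zip(k_blocks, v_blocks):  # kv blocks arrive in turn
            scores = q @ k.T / np.sqrt(q.shape[-1])
            new_max = np.maximum(row_max, scores.max(axis=-1))
            scale = np.exp(row_max - new_max)       # rescale old stats
            p = np.exp(scores - new_max[:, None])
            acc = acc * scale[:, None] + p @ v
            denom = denom * scale + p.sum(axis=-1)
            row_max = new_max
        outputs.append(acc / denom[:, None])
    return np.concatenate(outputs)

# Four "devices", each holding a 64-token block of a 256-token sequence.
blocks = [np.random.randn(64, 32) for _ in range(4)]
out = ring_attention(blocks, blocks, blocks)   # self-attention, no mask
```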

Can the effectiveness of instruction tuning and zero-shot generalization in auto-regressive LLMs be enhanced by further pretraining with retrieval? This paper from NVIDIA unveils "Retro 48B", an expansion of existing pretrained retrieval-augmented LLMs such as the 7.5B-parameter Retro. By continuing to pretrain a 43B GPT model with the Retro augmentation method on an additional 100 billion tokens, retrieving from a corpus of 1.2 trillion tokens, the resulting Retro 48B significantly surpasses the original 43B GPT in perplexity. After instruction tuning, the resulting InstructRetro model demonstrates marked improvements in zero-shot QA tasks. Notably, omitting the encoder from InstructRetro achieves similar outcomes, suggesting that retrieval pretraining bolsters the decoder's context processing for QA.
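
On the data side, the retrieval step can be mocked up as below: a word-overlap scorer stands in for a real dense retriever, and retrieved chunks are simply prepended to the decoder input, in the spirit of the paper's finding that the encoder can be omitted with similar results. All names here are hypothetical.

```python
# Toy sketch of retrieval-augmented training data: for each input chunk,
# fetch nearest-neighbor chunks from a corpus and expose them to the
# model as extra context. Word overlap replaces a real dense retriever.

from typing import List

def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    def overlap(doc: str) -> int:
        return len(set(query.split()) & set(doc.split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def build_decoder_input(chunk: str, corpus: List[str]) -> str:
    neighbors = retrieve(chunk, corpus)
    return "\n".join(neighbors) + "\n" + chunk  # retrieved text as prefix

corpus = ["transformers use attention", "retrieval augments language models"]
print(build_decoder_input("retrieval for language models", corpus))
```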

Featured AI Tools For You

  • AdCreative.ai: Boost your advertising and social media game with AdCreative.ai - the ultimate Artificial Intelligence solution. [Marketing and Sales]

  • Lucidchart: Lucidchart is an intelligent diagramming application that brings teams together to make better decisions and build the future. [Graphs]

  • Haly AI: Haly is an AI Slackbot that functions as your personal virtual assistant. [Slack]

  • Decktopus: Decktopus is an AI-powered presentation tool that helps you create visually stunning slides in record time. [Presentation]

  • Rask AI: A one-stop-shop localization tool that allows content creators and companies to translate their videos into 130+ languages quickly and efficiently. [Speech and Translation]

  • Aragon AI: Get stunning professional headshots effortlessly with Aragon. [Photo and LinkedIn]

  • Retouch4me: Retouch4me offers a suite of state-of-the-art plugins designed to elevate your photo editing game. [Photo Editing]

  • Motion: AI-powered daily scheduling for a more productive life. [Productivity and Automation]

Featured AI Startups For You