AI News: 🚀 Meta AI introduces SAM | Forget plugins. ChatGPT can solve general computer tasks using a keyboard and mouse! | UC Berkeley Researchers Introduce Koala | Meet Baize...

This newsletter brings AI research news that is much more technical than most resources but still digestible and applicable.

Meta AI introduces SAM (Segment Anything Model): a foundation model for image segmentation. The Meta AI team released both the general Segment Anything Model (SAM) and the Segment Anything 1-Billion mask dataset (SA-1B), the largest segmentation dataset to date, to enable a broad set of applications and foster further research into foundation models for computer vision. The SA-1B dataset is available for research purposes, and the Segment Anything Model is released under a permissive open license (Apache 2.0). Check out the demo to try SAM on your own images.
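To make the release concrete, here is a minimal sketch of point-prompted segmentation with SAM, assuming the `segment-anything` package from the official repository and a locally downloaded ViT-H checkpoint (the file and image paths below are illustrative):

```python
# Minimal sketch: point-prompted segmentation with SAM.
# Assumes: pip install git+https://github.com/facebookresearch/segment-anything
# plus a downloaded ViT-H checkpoint (path below is illustrative).
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # computes the image embedding once

# One foreground click at (x, y); label 1 = foreground, 0 = background.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # SAM proposes several candidate masks
)
best_mask = masks[np.argmax(scores)]  # boolean H x W array
```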

What is AI Hallucination? What Goes Wrong with AI Chatbots? How to Spot a Hallucinating Artificial Intelligence?: AI hallucination occurs when a model produces outputs that are not grounded in its input or in reality. Note that some generative models are deliberately trained to produce outputs unconnected to real-world input data. In large language models like ChatGPT and its equivalents, hallucinations can arise from errors in transformer decoding. A transformer is a deep learning model that uses an encoder-decoder (input-output) sequence architecture and self-attention (which weighs semantic connections between the words in a sentence) to generate text that resembles what a human would write.
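Since this item leans on self-attention, here is a minimal NumPy sketch of scaled dot-product self-attention; the shapes and weight matrices are illustrative, not taken from any model above:

```python
# Minimal scaled dot-product self-attention (illustrative, NumPy only).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # attention-weighted values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                          # 4 tokens, d_model = 8
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)               # -> shape (4, 8)
```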

Forget plugins. ChatGPT can solve general computer tasks using a keyboard and mouse! The trick? Recursively criticizing and improving the output (RCI): The RCI approach significantly outperforms existing LLM methods for automating computer tasks and surpasses supervised learning (SL) and reinforcement learning (RL) approaches on the MiniWoB++ benchmark. RCI is competitive with the state-of-the-art SL+RL method while using only a handful of demonstrations per task rather than tens of thousands, and without a task-specific reward function. The authors also demonstrate RCI prompting's effectiveness in enhancing LLMs' reasoning abilities on a suite of natural language reasoning tasks, outperforming chain-of-thought (CoT) prompting, and they find that RCI combined with CoT performs better than either alone.
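The core RCI loop is easy to sketch; the `llm` helper below is a hypothetical stand-in for any completion API, and the prompts are illustrative rather than the paper's exact wording:

```python
# Hypothetical sketch of Recursive Criticism and Improvement (RCI).
# `llm` is a placeholder for any text-completion client.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def rci(task: str, rounds: int = 2) -> str:
    answer = llm(f"Task: {task}\nPropose a step-by-step plan of actions.")
    for _ in range(rounds):
        critique = llm(
            f"Task: {task}\nProposed plan:\n{answer}\n"
            "Review the plan and point out any mistakes."
        )
        answer = llm(
            f"Task: {task}\nPlan:\n{answer}\nCritique:\n{critique}\n"
            "Rewrite the plan, fixing the problems the critique found."
        )
    return answer
```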

Meet Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data. Researchers from the University of California, San Diego, and Sun Yat-sen University, China, in collaboration with Microsoft Research, have developed a novel pipeline that uses ChatGPT to converse with itself, automatically generating a high-quality multi-turn chat corpus. The research also employs a parameter-efficient tuning strategy to optimize large language models under constrained computational resources. Using the generated chat corpus, the researchers fine-tuned Meta's open-source large language model, LLaMA, resulting in a new model called Baize. This open-source chat model has exceptional performance and can run on a single GPU, making it a practical choice for researchers with computational limitations.
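Baize's parameter-efficient tuning uses low-rank adapters; the sketch below shows how adapters of that kind attach to a causal LM with Hugging Face `peft` (the model name and hyperparameters are illustrative assumptions, not the paper's exact recipe):

```python
# Sketch: attaching LoRA adapters with Hugging Face peft + transformers.
# The model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # only the adapters are trainable
```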

UC Berkeley Researchers Introduce Koala: A Dialogue Model for Academic Research. Koala is a chatbot trained by fine-tuning Meta's LLaMA on dialogue data gathered from the web. The researchers describe the dataset curation and training process of their model and present the results of a user study comparing Koala to ChatGPT and Stanford's Alpaca. The results show that Koala can effectively respond to a variety of user queries, generating responses that are often preferred over Alpaca's and at least tied with ChatGPT's in over half of the cases.

Whose Opinions Do LLMs Reflect? This AI Paper From Stanford Examines the Opinions Reflected by Language Models (LMs) Through the Lens of Public Opinion Polls: The team assessed nine LMs from AI21 Labs and OpenAI, with parameters ranging from 350M to 178B, on the resulting OpinionQA dataset by contrasting each model's opinions with those of the overall US population and 60 demographic groups (e.g., Democrats, people over 65, widowed individuals). The researchers primarily examined three aspects: representativeness, steerability, and consistency. "Representativeness" refers to how closely the default LM beliefs match those of the US populace as a whole or of a particular segment. They found a significant divergence between contemporary LMs' views and those of US demographic groups on various topics, such as climate change.
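As one way to picture the "representativeness" comparison, here is an illustrative sketch that scores the gap between a model's answer distribution and a human reference distribution over ordered survey options, using one minus a normalized Wasserstein distance; the exact metric is defined in the paper, so treat this as an approximation of the idea:

```python
# Illustrative sketch: compare a model's answer distribution to a human
# reference over an ordered survey scale. The metric here (1 - normalized
# Wasserstein distance) approximates the paper's idea; see the paper for
# the exact definition.
import numpy as np
from scipy.stats import wasserstein_distance

options = np.arange(4)                        # e.g., 4-point opinion scale
human = np.array([0.10, 0.20, 0.30, 0.40])    # reference poll distribution
model = np.array([0.25, 0.25, 0.25, 0.25])    # LM's answer probabilities

wd = wasserstein_distance(options, options, human, model)
alignment = 1 - wd / (options.max() - options.min())  # 1.0 = identical
print(f"alignment score: {alignment:.3f}")
```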

Meet ChatArena: a Python library of multi-agent language game environments that facilitates communication and collaboration between multiple large language models (LLMs). ChatArena comes with a built-in environment for simulating conversations between multiple agents. Players can be defined with different role descriptions, and the environment facilitates their interactions. Moderated conversation takes it a step further! It allows you to control the game dynamics using an LLM, with the LLM deciding the game state transitions and when the game ends.
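A minimal usage sketch, following the pattern in the project's README (class names and arguments are assumptions if the API has changed since this was written):

```python
# Sketch of a two-player ChatArena conversation, following the project
# README's pattern; exact class names/arguments may differ by version.
from chatarena.agent import Player
from chatarena.backends import OpenAIChat
from chatarena.environments.conversation import Conversation
from chatarena.arena import Arena

alice = Player(name="Alice", backend=OpenAIChat(),
               role_desc="You are a curious physicist.")
bob = Player(name="Bob", backend=OpenAIChat(),
             role_desc="You are a skeptical philosopher.")

env = Conversation(player_names=[alice.name, bob.name])
arena = Arena(players=[alice, bob], environment=env)
arena.run(num_steps=10)  # alternate turns for ten steps
```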

A Framework For Applying Psychotherapy to LLMs: IBM and Columbia researchers have developed the SafeguardGPT framework, which employs psychotherapy to correct harmful behaviors in AI chatbots and create healthy AI systems. The framework consists of four AI agents: a "Chatbot," a "User," a "Therapist," and a "Critic." In a simulated social conversation, the researchers demonstrated the effectiveness of SafeguardGPT in improving the quality of conversations between AI chatbots and humans. Although challenges remain, SafeguardGPT is a promising approach to aligning AI chatbots with human values. By combining psychotherapy with reinforcement learning techniques, the framework enables AI chatbots to safely and ethically learn and adapt to human preferences and values, leading to more human-centric and responsible AI.
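A hypothetical sketch of the four-role simulation loop; the role prompts and the `llm` helper are placeholders, not the paper's implementation:

```python
# Hypothetical sketch of a SafeguardGPT-style four-role simulation.
# `llm` is a placeholder for any chat-completion call.
def llm(system: str, transcript: str) -> str:
    raise NotImplementedError("plug in your chat API client here")

ROLES = {
    "user": "Simulate a human user chatting with an assistant.",
    "chatbot": "You are the assistant under evaluation.",
    "therapist": "Diagnose harmful patterns in the chatbot's replies "
                 "and suggest healthier behavior.",
    "critic": "Judge whether the therapist's feedback improved the chat.",
}

transcript = ""
for _ in range(3):                                # a short simulated chat
    user_msg = llm(ROLES["user"], transcript)
    transcript += f"\nUser: {user_msg}"
    bot_msg = llm(ROLES["chatbot"], transcript)
    transcript += f"\nChatbot: {bot_msg}"
    advice = llm(ROLES["therapist"], transcript)  # corrective feedback
    transcript += f"\n[Therapist note: {advice}]"

verdict = llm(ROLES["critic"], transcript)        # evaluate the outcome
```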

Did you know Marktechpost has a community of 1.5 Million+ AI Professionals and Engineers? For partnership, please feel free to contact us through this form.