    Raising the Bar: Unveiling RAG for More Accurate and Reliable Large Language Models

By ayaksız | January 24, 2024 | 3 min read

    The problems: LLM hallucinations and limited context
    LLMs take a long time and expensive resources to train, sometimes months of run time on dozens of state-of-the-art server GPUs such as NVIDIA H100s. Keeping an LLM completely up to date by retraining it from scratch is a non-starter, although the less expensive process of fine-tuning the base model on newer data can help.

    Fine-tuning has its drawbacks, however: adding new functionality (such as the code generation added to Code Llama) can degrade functionality already present in the base model (such as the general-purpose queries that Llama handles well).

    What happens if you ask an LLM that was trained on data that ended in 2022 about something that occurred in 2023? Two possibilities: It will either realize it doesn’t know, or it won’t. If the former, it will typically tell you about its training data, e.g. “As of my last update in January 2022, I had information on….” If the latter, it will try to give you an answer based on older, similar but irrelevant data, or it might outright make stuff up (hallucinate).

    To avoid triggering LLM hallucinations, it sometimes helps to mention the date of an event or a relevant web URL in your prompt. You can also supply a relevant document, but providing long documents (whether as pasted text or a URL) works only until the LLM’s context limit is reached, after which it stops reading. Context limits differ among models: two Claude models offer a 100K-token context window, which works out to about 75,000 words, far more than most other LLMs offer.
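    To make the context-limit point concrete, here is a minimal sketch in Python using OpenAI’s tiktoken tokenizer (its token counts are only approximate for other vendors’ models); the 100K limit is an assumption standing in for whatever model you target:

```python
# Minimal sketch: check whether a prompt plus a supplied document
# fits under an assumed context window. tiktoken's counts approximate,
# but do not exactly match, other vendors' tokenizers.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
CONTEXT_LIMIT = 100_000  # assumed limit; actual limits vary by model


def fits_in_context(prompt: str, document: str) -> bool:
    """Return True if prompt + document stay under the assumed context limit."""
    total_tokens = len(enc.encode(prompt)) + len(enc.encode(document))
    return total_tokens <= CONTEXT_LIMIT
```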

    The solution: Ground the LLM with facts
    As you can guess from the title and beginning of this article, one answer to both of these problems is retrieval-augmented generation (RAG). At a high level, RAG works by combining an internet or document search with a language model, in ways that get around the issues you would encounter doing the two steps manually, for example the search output exceeding the language model’s context limit.

    The first step in RAG is to use the query for an internet, document, or database search, and to vectorize the source information into a dense, high-dimensional form, typically by generating an embedding vector and storing it in a vector database. This is the retrieval phase.
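    As a rough illustration of the retrieval phase, the Python sketch below embeds a few placeholder passages with a sentence-transformers model and stores them in a FAISS index. The model name and the passages are illustrative assumptions, not anything prescribed here:

```python
# Retrieval phase sketch: embed source passages and store them in a
# FAISS vector index. Passages and model choice are placeholders.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

passages = [
    "The 2023 product launch was moved to October.",
    "Earlier releases shipped every spring.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode(passages, normalize_embeddings=True)  # unit-length vectors

# With normalized vectors, inner product equals cosine similarity.
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(np.asarray(vectors, dtype="float32"))
```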

    Then you vectorize the query itself and run FAISS or another similarity search (typically using a cosine metric) against the vector database, extract the most relevant portions (the top K items) of the source information, and present them to the LLM along with the query text. This is the augmentation phase.
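    Continuing that sketch, the augmentation phase embeds the query, pulls the top K most similar passages from the index, and prepends them to the prompt. The value of K and the prompt template are arbitrary choices made here for illustration:

```python
# Augmentation phase sketch: embed the query, retrieve the top-K
# passages, and build an augmented prompt for the LLM.
query = "When did the 2023 product launch happen?"
query_vec = embedder.encode([query], normalize_embeddings=True)

k = 2
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k)
context = "\n".join(passages[i] for i in ids[0])

augmented_prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}"
)
```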

    Finally, the LLM, referred to in the original Facebook AI paper as a seq2seq model, generates an answer. This is the generation phase.
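    To round out the sketch, the generation phase hands the augmented prompt to a seq2seq model; google/flan-t5-base is used here purely as a small, convenient stand-in for whichever LLM you actually deploy:

```python
# Generation phase sketch: a seq2seq model answers from the
# augmented prompt. The model choice is a stand-in, not a recommendation.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")
answer = generator(augmented_prompt, max_new_tokens=64)[0]["generated_text"]
print(answer)
```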
