Speaker: Konrad Staniszewski
Large language models store their knowledge in parameters and require costly fine-tuning to update it. An interesting alternative is to provide new knowledge in the model's context. However, typical models support only relatively short contexts.
In this presentation, I will discuss one potential solution to this problem: retrieval-augmented transformer models. These models use a large external database to store information about already processed parts of the text and retrieve the best-matching entries to improve performance during inference.
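
As a rough illustration of the retrieval step described above (a minimal sketch, not the specific architecture covered in the talk), the snippet below stores embeddings of previously processed text chunks and looks up the nearest entries for a new query by cosine similarity. The class name `ChunkDatabase` and the random vectors standing in for real model embeddings are assumptions made purely for illustration.

```python
import numpy as np


class ChunkDatabase:
    """Toy external database of already processed text chunks."""

    def __init__(self):
        self.keys = []    # unit-normalized chunk embeddings
        self.values = []  # the corresponding text chunks

    def add(self, embedding: np.ndarray, chunk: str) -> None:
        # Normalize once at insertion so retrieval is a dot product.
        self.keys.append(embedding / np.linalg.norm(embedding))
        self.values.append(chunk)

    def retrieve(self, query: np.ndarray, k: int = 2) -> list[str]:
        # Cosine similarity between the query and every stored key.
        query = query / np.linalg.norm(query)
        scores = np.stack(self.keys) @ query
        top = np.argsort(scores)[::-1][:k]  # indices of best matches
        return [self.values[i] for i in top]


rng = np.random.default_rng(0)
db = ChunkDatabase()
for chunk in ["chunk A", "chunk B", "chunk C"]:
    db.add(rng.normal(size=64), chunk)  # stand-in for real embeddings

neighbours = db.retrieve(rng.normal(size=64), k=2)
# The retrieved chunks would then be made available to the model,
# e.g. prepended to its context window, before generation continues.
```

Real retrieval-augmented systems replace the random vectors with embeddings produced by the model itself and use approximate nearest-neighbour indexes to scale the database far beyond what exact search allows.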