Perhaps surprisingly, we find that an LLM with a shorter context window, augmented with simple retrieval at inference time, can perform close to a longer-context LLM finetuned via positional interpolation for ...
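The positional interpolation mentioned above rescales position indices so a longer sequence maps back into the position range the model saw during training. A minimal sketch, assuming RoPE-style rotary embeddings (the function names and the 2048-token training length are illustrative, not from the source):

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    # Standard RoPE rotation angles: one frequency per pair of dimensions,
    # one row of angles per position.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions, inv_freq)

def interpolated_positions(seq_len, train_len):
    # Positional interpolation: linearly shrink position indices by
    # train_len / seq_len so all positions fall inside [0, train_len).
    return np.arange(seq_len) * (train_len / seq_len)

# Example: an 8192-token sequence squeezed into a 2048-position range.
pos = interpolated_positions(8192, 2048)
angles = rope_angles(pos, dim=64)
```

Because every rescaled index stays within the training range, the attention layers never see rotation angles beyond those encountered during pretraining, which is what makes finetuning at the longer length stable.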