GitHub - microsoft/LLMLingua: To speed up LLM inference and enhance LLMs' perception of key information, LLMLingua compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
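A minimal usage sketch based on the repo's quick-start: a PromptCompressor scores tokens with a small language model and drops low-information spans until the prompt fits a token budget. The prompt text and the target of 200 tokens below are illustrative, not from this page.

from llmlingua import PromptCompressor

# Loads the default small model used to score token importance
# (illustrative setup; see the repo for model options such as LLMLingua-2).
llm_lingua = PromptCompressor()

long_prompt = "..."  # the long context you want to compress

# Compress toward roughly 200 tokens; the returned dict reports the
# compressed prompt plus original/compressed token counts and the ratio.
result = llm_lingua.compress_prompt(long_prompt, target_token=200)
print(result["compressed_prompt"])
print(result["origin_tokens"], "->", result["compressed_tokens"])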