Cappy: Outperforming and boosting large multi-task language models with a small scorer
Posted by Yun Zhu and Lijuan Liu, Software Engineers, Google Research

Large language model (LLM) advancements have led to a new paradigm that unifies various natural language processing (NLP) tasks within an instruction-following framework. This paradigm is exemplified by recent multi-task LLMs, such as T0, FLAN, and OPT-IML.
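To make the instruction-following paradigm concrete, here is a minimal illustrative sketch (not from the post) of how different NLP tasks can be rendered as natural-language instructions, so that a single multi-task model handles all of them. The template wordings and the `to_instruction` helper are hypothetical examples, not the actual prompts used by T0, FLAN, or OPT-IML.

```python
def to_instruction(task: str, **fields) -> str:
    """Render one task instance as an instruction-following prompt.

    Hypothetical templates for illustration only; real multi-task LLMs
    are trained on many such templates per task.
    """
    templates = {
        "sentiment": 'Is the sentiment of "{text}" positive or negative?',
        "nli": ('Premise: "{premise}" Hypothesis: "{hypothesis}" '
                "Does the premise entail the hypothesis? Answer yes or no."),
        "summarization": "Summarize the following article:\n{article}",
    }
    return templates[task].format(**fields)

# The same model receives every task in the same textual format:
print(to_instruction("sentiment", text="I loved this movie!"))
# -> Is the sentiment of "I loved this movie!" positive or negative?
```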