About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Yijun Tian
- sl:arxiv_num : 2309.15427
- sl:arxiv_published : 2023-09-27T06:33:29Z
- sl:arxiv_summary : Large Language Models (LLMs) have shown remarkable generalization capability
with exceptional performance in various language modeling tasks. However, they
still exhibit inherent limitations in precisely capturing and returning
grounded knowledge. While existing work has explored utilizing knowledge graphs
(KGs) to enhance language modeling via joint training and customized model
architectures, applying this to LLMs is problematic owing to their large number
of parameters and high computational cost. In addition, how to leverage
pre-trained LLMs and avoid training a customized model from scratch remains an
open question. In this work, we propose Graph Neural Prompting (GNP), a novel
plug-and-play method to assist pre-trained LLMs in learning beneficial
knowledge from KGs. GNP encompasses various designs, including a standard graph
neural network encoder, a cross-modality pooling module, a domain projector,
and a self-supervised link prediction objective. Extensive experiments on
multiple datasets demonstrate the superiority of GNP on both commonsense and
biomedical reasoning tasks across different LLM sizes and settings.
- sl:arxiv_title : Graph Neural Prompting with Large Language Models
- sl:arxiv_updated : 2023-09-27T06:33:29Z
- sl:bookmarkOf : https://arxiv.org/abs/2309.15427
- sl:creationDate : 2023-09-28
- sl:creationTime : 2023-09-28T08:52:07Z
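
The abstract above names GNP's four components: a GNN encoder, a cross-modality pooling module, a domain projector, and a self-supervised link prediction objective. Below is a minimal, hypothetical PyTorch sketch of how the first three could fit together to turn a retrieved KG subgraph into a soft-prompt embedding; the mean-aggregation GNN, the attention-based pooling, and all module names and sizes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class GraphNeuralPrompt(nn.Module):
    """Illustrative sketch of GNP's encoding pipeline (arXiv:2309.15427).

    Assumptions: a plain mean-aggregation GNN stands in for the paper's GNN
    encoder, and node-to-token attention stands in for cross-modality pooling.
    All names and sizes here are hypothetical, not the paper's exact design.
    """

    def __init__(self, node_dim: int, llm_dim: int, hidden: int = 256):
        super().__init__()
        # GNN encoder: two rounds of neighbor aggregation + linear update.
        self.gnn1 = nn.Linear(node_dim, hidden)
        self.gnn2 = nn.Linear(hidden, hidden)
        # Cross-modality pooling: nodes (queries) attend to question tokens.
        self.text_proj = nn.Linear(llm_dim, hidden)
        self.pool = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # Domain projector: bridge graph space and the LLM's embedding space.
        self.projector = nn.Sequential(
            nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, llm_dim)
        )

    def forward(self, x, adj, text_emb):
        # x:        (n_nodes, node_dim)  entity features of a retrieved subgraph
        # adj:      (n_nodes, n_nodes)   row-normalized adjacency (self-loops ok)
        # text_emb: (n_tokens, llm_dim)  frozen-LLM embeddings of the question
        h = torch.relu(self.gnn1(adj @ x))
        h = torch.relu(self.gnn2(adj @ h))
        t = self.text_proj(text_emb)
        h_attn, _ = self.pool(h.unsqueeze(0), t.unsqueeze(0), t.unsqueeze(0))
        # Mean-pool the text-aware node states into one graph summary vector.
        g = h_attn.squeeze(0).mean(dim=0, keepdim=True)
        # Project into LLM space: one soft-prompt embedding, shape (1, llm_dim).
        return self.projector(g)


if __name__ == "__main__":
    gnp = GraphNeuralPrompt(node_dim=128, llm_dim=4096)
    # 12 dummy entities, identity adjacency, 20 question-token embeddings.
    soft_prompt = gnp(torch.randn(12, 128), torch.eye(12), torch.randn(20, 4096))
    print(soft_prompt.shape)  # torch.Size([1, 4096])
```

In the paper, the resulting embedding is prepended to the LLM's input while the LLM itself stays frozen, which is what makes the method plug-and-play; the self-supervised link prediction objective (omitted in this sketch) is an auxiliary training loss over KG edges.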