Call for Papers | LKM@IJCAI2024 Deadline Extended to June 1

LKM2024: 1st International OpenKG Workshop on Large Knowledge-enhanced Models

In conjunction with IJCAI 2024, the 33rd International Joint Conference on Artificial Intelligence

Jeju Island, South Korea, August 3, 2024

(http://lkm2024.openkg.org/)


OpenKG is organizing a workshop on "Large Knowledge-enhanced Models (LKM)" at this year's IJCAI (a CCF Class A conference). The submission deadline has been extended to June 1 (AOE), and submissions are warmly welcomed! Selected accepted papers will be recommended to related SCI- and EI-indexed journals. We look forward to meeting you in Jeju Island in August. The workshop also supports non-archival submissions: authors of papers submitted to IJCAI, AAAI, WWW, NeurIPS, ICML, ACL, EMNLP, SIGIR, KDD, ICLR, and similar venues (no formatting requirements) are welcome to present and join the discussion.

Overview

We are excited to announce the first International OpenKG Workshop on Large Knowledge-enhanced Models (LKM2024), held in conjunction with IJCAI 2024. The workshop aims to bring together researchers from academia and industry to discuss the latest advances and challenges on a variety of topics concerning knowledge-enhanced large language models (LLMs) in AI, and the integration of large models with symbolic knowledge representation (KR) such as knowledge graphs (KGs).

Humankind accumulates knowledge about the world in the process of perceiving it, with natural language as the primary carrier of world knowledge. Representing and processing this world knowledge has been central to AI since its early days. Indeed, both LLMs and KGs were developed to handle world knowledge, but they exhibit distinct advantages and limitations. LLMs excel in language comprehension and offer expansive coverage of knowledge, but incur significant training costs and struggle with factual accuracy and logical reasoning. KGs provide highly accurate and explicit knowledge representation, enabling more controlled reasoning and immunity to hallucination, but face scalability challenges and struggle with reasoning transferability. A deeper integration of these two technologies promises a more holistic, reliable, and controllable approach to knowledge processing in AI.

Natural language encodes world knowledge merely as sequences of words, while human cognitive processes extend far beyond simple word sequences. Considering the intricate nature of human knowledge, we advocate research on Large Knowledge-enhanced Models (LKMs), specifically engineered to manage a diversified spectrum of knowledge structures. In this workshop, we focus on exploring large models through the lens of "knowledge". We expect to investigate the role of symbolic knowledge such as KGs in enhancing LLMs, and we are also interested in how LLMs can amplify traditional symbolic knowledge bases. We welcome all submissions related to, but not limited to, the following topics:

  • Large model knowledge enhancement

  • Integration of LLM and symbolic KR

  • Knowledge-injecting LLM pretraining

  • Structure-inducing LLM pre-training

  • Knowledge-augmented prompt learning

  • Knowledge-enhanced instruction learning

  • Graph RAG and KG RAG

  • LLM-enhanced symbolic query and reasoning

  • Large model knowledge extraction

  • Large model knowledge editing

  • Large model knowledge reasoning

  • Knowledge-augmented multi-modal large models

  • Multimodal learning for KGs and LLMs

  • Knowledge-enhanced Hallucination Detection and Mitigation

  • Semantic tools for LLMs

  • Knowledgeable AI agents

  • Integration of LLM and KG for world models

  • Domain-specific LLMs training leveraging KGs

  • Applications of combining KGs and LLMs

  • Open resources combining KGs and LLMs

Important Dates

  • Submission deadline: June 1, 2024 AOE

  • Notification to authors: June 7, 2024 AOE

  • Camera-ready deadline: July 15, 2024 AOE

Submission Details

Submission URL: https://cmt3.research.microsoft.com/LKM2024

Format: Submissions are invited in the form of 7-page papers (with an additional 2 pages for references) for inclusion in the proceedings, or a 2-page abstract for poster and demonstration proposals. All submissions must adhere to the formatting requirements specified in the conference's author guidelines, available at https://www.ijcai.org/authors_kit. Accepted papers will be featured in the workshop program and incorporated into the workshop proceedings, although authors may choose to opt out of this inclusion.

Dual-submission policy: We welcome ongoing and unpublished work. We also welcome papers that are under review at the time of submission, or that have been recently accepted.

Archival/Non-archival: Accepted papers may choose between two publication options: archival or non-archival. Selected papers will be invited to submit extended versions to the Data Intelligence Journal (https://direct.mit.edu/dint) or the Elsevier Big Data Research journal (SCI indexed). To qualify for archival publication, submissions must be notably original and not previously published in other venues or journals.

Non-archival papers, on the other hand, are permitted to be works that have been presented or published in another venue or journal.

In person presentation: Accepted papers are expected to be presented in person and at least one author of each accepted paper is required to register.

The workshop is organized by OpenKG, an open research community committed to innovation in open technologies for KGs and their integration with modern large language models.

Steering Committee

Huajun Chen, Zhejiang University, China

Guilin Qi, Southeast University, China

Haofen Wang, Tongji University, China

Program Chairs

Ningyu Zhang, Zhejiang University, China

Tianxing Wu, Southeast University, China

Meng Wang, Tongji University, China


OpenKG

OpenKG (the Chinese Open Knowledge Graph community) aims to promote the openness, interlinking, and crowdsourcing of knowledge graph data centered on the Chinese language, and to foster the open-sourcing of knowledge graph algorithms, tools, and platforms.


