English notes for KG

  1. For example, if we want the model to acquire the knowledge that
    “Paracetamol can treat cold”, a large number of co-occurrences of
    “Paracetamol” and “cold” are required in the pre-training corpus.
    Instead of this strategy, what else can we do to make the model a
    domain expert? The knowledge graph (KG), which was called an
    ontology in early research, serves as a good solution.

  2. As shown in Figure 1, the model architecture of K-BERT consists of
    four modules, i.e., the knowledge layer, embedding layer, seeing
    layer, and mask-transformer (a toy sketch of the first three is
    given after this list).

  3. To some degree

  4. However, two challenges lie in the way of this knowledge
    integration: (1) Heterogeneous Embedding Space (HES): in general,
    the embedding vectors of words in text and of entities in the KG
    are obtained in separate ways, making their vector spaces
    inconsistent; (2) Knowledge Noise (KN): incorporating too much
    knowledge may divert a sentence from its correct meaning. To
    overcome these challenges, the paper proposes a Knowledge-enabled
    Bidirectional Encoder Representation from Transformers (K-BERT);
    the mask-self-attention that counters KN is sketched below as well.
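
Below is a minimal, self-contained sketch of the knowledge layer and
seeing layer ideas. It is my own toy code, not the paper's
implementation: the `KG` dict format, the `build_inputs` name, and the
simplified visibility rule (trunk tokens see each other; a branch token
sees only its own branch and the entity it hangs on) are assumptions
made for illustration.

```python
import numpy as np

# Toy KG holding the running example triple (hypothetical format).
KG = {"Paracetamol": [("can_treat", "cold")]}

def build_inputs(tokens, kg):
    """Splice triples in after their head entity (knowledge layer),
    assign soft positions, and derive the visible matrix (seeing layer)."""
    flat, soft_pos, branch, anchor = [], [], [], {}
    next_branch = 1
    for pos, tok in enumerate(tokens):
        trunk_idx = len(flat)
        flat.append(tok); soft_pos.append(pos); branch.append(0)  # 0 = trunk
        for rel, tail in kg.get(tok, []):
            anchor[next_branch] = trunk_idx      # entity this branch hangs on
            for offset, btok in enumerate((rel, tail), start=1):
                flat.append(btok)
                soft_pos.append(pos + offset)    # branch reuses the positions
                branch.append(next_branch)       # right after its entity
            next_branch += 1
    n = len(flat)
    visible = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            both_trunk = branch[i] == 0 and branch[j] == 0
            same_branch = branch[i] == branch[j]
            hangs_on = (branch[i] != 0 and j == anchor[branch[i]]) or \
                       (branch[j] != 0 and i == anchor[branch[j]])
            visible[i, j] = both_trunk or same_branch or hangs_on
    return flat, soft_pos, visible

flat, soft_pos, visible = build_inputs(
    ["Paracetamol", "can", "treat", "my", "cold"], KG)
print(flat)      # ['Paracetamol', 'can_treat', 'cold', 'can', 'treat', 'my', 'cold']
print(soft_pos)  # [0, 1, 2, 1, 2, 3, 4]
```

Note that the spliced-in triple tokens are embedded with the same
word-embedding table as ordinary tokens, which is how K-BERT sidesteps
HES: no separately trained KG entity embeddings are mixed in.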
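
The mask-transformer can then be sketched as ordinary self-attention
with an additive mask built from the visible matrix: per the paper,
softmax((Q K^T + M) / sqrt(d_k)) V, where M[i][j] = 0 if token j is
visible to token i and -inf otherwise. The single-head numpy version
below, with its toy 4-token visible matrix, is my own simplification
for illustration.

```python
import numpy as np

def mask_self_attention(Q, K, V, visible):
    """Single-head attention with K-BERT's additive visibility mask."""
    d_k = Q.shape[-1]
    M = np.where(visible, 0.0, -1e9)              # -1e9 stands in for -inf
    scores = (Q @ K.T + M) / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: 4 tokens, where token 3 is an injected triple token visible
# only to itself and to token 0 (the entity it hangs on).
visible = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 1],
], dtype=bool)
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = mask_self_attention(Q, K, V, visible)
print(out.shape)  # (4, 8); rows 1 and 2 place zero attention weight on token 3
```

With an all-True visible matrix this reduces to vanilla self-attention,
which is why K-BERT can be initialized directly from pre-trained BERT
weights.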
