1 Introduction
At present, there are two kinds of methods: the distance-based translation (embedding transformation) model and the semantic-matching-based bilinear model. The idea of both is to embed a knowledge graph containing entities and relations into a continuous, low-dimensional real vector space.
There are two kinds of reasoning over temporal knowledge graphs. The first is interpolation: for a temporal knowledge graph covering times from $t_0$ to $t_T$, the interpolation task is to complete facts missing at a time $t$ with $t_0 \le t \le t_T$. The second is extrapolation, whose main task is to predict new facts at times $t > t_T$. We focus on the extrapolation task, because predicting new facts at future times from the facts in the known knowledge graph helps humans understand the hidden factors behind events and respond to possible future events, with applications such as disaster relief and financial analysis.
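To make the two settings concrete, here is a minimal Python sketch that splits temporal facts into the two query types by timestamp; the quadruples and the cutoff time `t_T` are purely illustrative and not taken from any dataset.

```python
# Minimal sketch: splitting temporal-KG queries by timestamp.
# Each fact is a quadruple (subject, relation, object, time).
facts = [
    ("Obama", "visit", "France", 2014),
    ("Obama", "visit", "Germany", 2015),
    ("Trump", "visit", "China", 2017),
]

t_0, t_T = 2014, 2016  # observed time span of the training graph

# Interpolation: complete facts missing at some t with t_0 <= t <= t_T.
interpolation_queries = [q for q in facts if t_0 <= q[3] <= t_T]

# Extrapolation: predict new facts at a future time t > t_T.
extrapolation_queries = [q for q in facts if q[3] > t_T]

print(interpolation_queries)  # facts inside the observed window
print(extrapolation_queries)  # facts beyond the observed window
```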
2 Related Works
2.1 Know-Evolve
Know-Evolve uses a temporal point process to model entities and describe the impact of time on them, but it has a drawback: it cannot handle reasoning about concurrent facts occurring at the same time point.
2.2 TA-TransE
The Temporal-Aware version of TransE (TA-TransE)[8] models embedded time information: time expressed in text form is combined with the relation through a Recurrent Neural Network (RNN), and entity prediction is performed with the scoring function of TransE.
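As a rough illustration of this idea, the sketch below (PyTorch, toy dimensions) encodes a relation token together with the digits of a year through an LSTM and plugs the resulting time-aware relation vector into a TransE-style score. The tokenization and vocabulary here are simplified assumptions, not the exact setup of the original paper.

```python
import torch
import torch.nn as nn

dim = 32
# Toy vocabulary (assumption): indices 0-9 are year digits, index 10 is the relation token.
token_emb = nn.Embedding(11, dim)
lstm = nn.LSTM(dim, dim, batch_first=True)   # encodes the [relation, time-token] sequence

def time_aware_relation(relation_idx, year):
    """Encode the token sequence [relation, y3, y2, y1, y0] into one relation vector."""
    digits = [int(c) for c in f"{year:04d}"]
    seq = torch.tensor([[relation_idx] + digits])     # shape (1, 5)
    _, (h_n, _) = lstm(token_emb(seq))                # h_n: (1, 1, dim)
    return h_n[0, 0]                                  # last hidden state = temporal relation

def score(h, relation_idx, t, year):
    """TransE-style score using the time-aware relation embedding."""
    r_t = time_aware_relation(relation_idx, year)
    return -torch.norm(h + r_t - t, p=1)

h, t = torch.randn(dim), torch.randn(dim)
print(score(h, 10, t, 2015).item())
```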
2.3 Temporal TransE
Temporal TransE[9] represents entities and relations in the same vector space and uses a scoring function similar to that of TransE for knowledge reasoning.
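The exact scoring function is not given here; one common way such models are described as extending TransE is to add a learned time embedding to the translation. A minimal numpy sketch under that assumption:

```python
import numpy as np

dim = 16
rng = np.random.default_rng(0)

h = rng.normal(size=dim)      # head entity embedding
r = rng.normal(size=dim)      # relation embedding
tau = rng.normal(size=dim)    # time embedding for the timestamp of the fact
t = rng.normal(size=dim)      # tail entity embedding

# TransE-style score extended with a time translation (assumed form):
# plausible facts should make h + r + tau land close to t.
score = -np.linalg.norm(h + r + tau - t)
print(score)
```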
2.4 ChronoR
ChronoR[10] also projects entities and relations into another space through a transformation, rotates them based on time, and then reasons with a new scoring function. However, these dynamic reasoning models only deal with a single time point; they can neither capture temporal correlations nor generalize graph structure information to future times.
2.5 RE-Net
The Recurrent Event Network (RE-Net) can reason about concurrent facts at multiple time points in a temporal knowledge graph and model temporal correlations.
However, although the RGCN[12] aggregator used in RE-Net can obtain the neighborhood information under all relations at a given time, the neighborhood information of non-target entities is also added during aggregation, which reduces the reasoning ability of the model; in addition, the graph-convolution-based RGCN aggregation takes a long time to run.
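For reference, the standard RGCN layer that this aggregator builds on sums, for each node, the transformed and normalized features of its neighbors under every relation, plus a self-loop term. The following numpy sketch on a toy graph illustrates this, including the fact that all neighbors contribute regardless of whether they matter for the target query.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 4, 8

# Toy multi-relational graph: relation id -> list of (source, target) edges.
edges = {
    0: [(1, 0), (2, 0)],   # two neighbors of node 0 under relation 0
    1: [(3, 0), (1, 2)],
}

H = rng.normal(size=(num_nodes, dim))                  # input node features
W = {r: rng.normal(size=(dim, dim)) for r in edges}    # per-relation weights W_r
W0 = rng.normal(size=(dim, dim))                       # self-loop weight

def rgcn_layer(H):
    out = H @ W0                                       # self-loop term
    for r, pairs in edges.items():
        # normalization constant c_{i,r}: number of incoming edges per node under r
        counts = np.zeros(num_nodes)
        for src, dst in pairs:
            counts[dst] += 1
        for src, dst in pairs:
            out[dst] += (H[src] @ W[r]) / counts[dst]
        # note: every neighbor contributes here, including entities that are
        # irrelevant to the target query, which is the issue raised above.
    return np.maximum(out, 0)                          # ReLU activation

print(rgcn_layer(H).shape)
```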
2.6 RE-GCN
RE-GCN optimizes the RGCN aggregation, shortens the training time of the model, and at the same time adds static constraints on entities to the model for reasoning.
3 Static Models
Preliminaries
$G = \{E, R, S\}$ denotes a knowledge base.
$E = \{e_1, e_2, e_3, \ldots, e_{|E|}\}$ denotes the entity set, where each $e_i$ is a distinct entity.
$R = \{r_1, r_2, r_3, \ldots, r_{|R|}\}$ denotes the relation set, where each $r_i$ is a distinct relation.
$S \subseteq E \times R \times E$ denotes the knowledge graph, i.e., the set of facts.
In a triple $(h, r, t)$, $h$ and $t$ denote the head and tail entities, and $r$ denotes the relation.
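A tiny Python sketch of these definitions (the entity and relation names are illustrative only):

```python
# Entity set E, relation set R, and the knowledge graph S as a set of
# (head, relation, tail) triples, following the definitions above.
E = {"Paris", "France", "Berlin", "Germany"}
R = {"capital_of", "borders"}

S = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("France", "borders", "Germany"),
}

# Every triple (h, r, t) uses entities from E and a relation from R.
assert all(h in E and r in R and t in E for h, r, t in S)
```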
3.1 Translation Distance Models
3.1.1 TransE
3.1.2 TransR
3.1.3 TransH
Summary: these three models are relatively simple. The original papers provide diagrams, and their meaning is easy to grasp from the figures; their scoring functions are also sketched below.
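Since the diagrams are not reproduced here, the following numpy sketch of the three well-known scoring functions may serve as a substitute; all dimensions and embeddings are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 6                                   # entity dim d, relation-space dim k (TransR)

h, r, t = rng.normal(size=(3, d))             # entity and relation embeddings
w_r = rng.normal(size=d); w_r /= np.linalg.norm(w_r)   # TransH hyperplane normal
M_r = rng.normal(size=(d, k))                 # TransR relation-specific projection
r_k = rng.normal(size=k)                      # TransR relation vector

# TransE: the relation is a translation in the same space,  f = ||h + r - t||
f_transe = np.linalg.norm(h + r - t)

# TransH: project entities onto the relation-specific hyperplane, then translate
h_p = h - (w_r @ h) * w_r
t_p = t - (w_r @ t) * w_r
f_transh = np.linalg.norm(h_p + r - t_p)

# TransR: map entities into the relation space with M_r, then translate
f_transr = np.linalg.norm(h @ M_r + r_k - t @ M_r)

print(f_transe, f_transh, f_transr)
```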
3.2 Semantic Models
3.2.1 RESCAL
3.2.2 DistMult
3.2.3 HolE
Summary: the three models above share the same modeling pattern; in fact, they are all built with neural networks. By comparing the differences between their formulas together with the diagrams, one can see where each improves on the previous one; their scoring functions are sketched below.
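For comparison, a numpy sketch of the three standard scoring functions: RESCAL's full relation matrix, DistMult's diagonal restriction of it, and HolE's circular correlation (computed here via the FFT identity).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

h, t, r_vec = rng.normal(size=(3, d))     # entity vectors and DistMult/HolE relation vector
M_r = rng.normal(size=(d, d))             # RESCAL full relation matrix

# RESCAL: bilinear form with a full matrix per relation,  f = h^T M_r t
f_rescal = h @ M_r @ t

# DistMult: restricts M_r to a diagonal matrix,  f = sum_i h_i * r_i * t_i
f_distmult = np.sum(h * r_vec * t)

# HolE: circular correlation of h and t, then a dot product with r
corr = np.fft.ifft(np.conj(np.fft.fft(h)) * np.fft.fft(t)).real
f_hole = r_vec @ corr

print(f_rescal, f_distmult, f_hole)
```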
3.3 Rotation-based Models
3.3.1 RotatE
- $h, r, t \in \mathbb{C}^k$: in this model, the entities $h$ and $t$ and the relation $r$ are all embedded into the complex space $\mathbb{C}^k$; each embedding is a complex vector, and $k$ denotes the dimension of the vector.
- The relation $r$ acts as a rotation operation, i.e., the model treats $r$ as a rotation angle between the head entity $h$ and the tail entity $t$.
- $\circ$: this symbol denotes the element-wise product, multiplying each corresponding element of the vectors $h$ and $r$. Since $h$ and $r$ are complex vectors, this product is in fact a rotation in the complex space; in other words, the relation $r$ transforms the head entity $h$ into the tail entity $t$ through a complex rotation.
- $h \circ r - t$: this expression means rotating the vector $h$ by the relation $r$ and then subtracting the vector of the tail entity $t$; that is, the model measures the difference between $h$, after being rotated by $r$, and $t$. The smaller this difference, the closer $h$ is to $t$ under the relation $r$.
- $\| h \circ r - t \|$: this denotes a norm, usually the Euclidean norm, which computes the distance between the two complex vectors and measures their difference in the complex space; see the sketch below.
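Putting these pieces together, a minimal numpy sketch of the RotatE score; the relation phases are sampled randomly here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 8                                        # embedding dimension

# Complex entity embeddings h, t in C^k.
h = rng.normal(size=k) + 1j * rng.normal(size=k)
t = rng.normal(size=k) + 1j * rng.normal(size=k)

# Each element of the relation has unit modulus, so it is a pure
# rotation e^{i*theta} in the complex plane.
theta = rng.uniform(0, 2 * np.pi, size=k)
r = np.exp(1j * theta)

# Element-wise product h ∘ r rotates h; the score is the distance to t:
# the smaller ||h ∘ r - t||, the more plausible the triple (h, r, t).
score = -np.linalg.norm(h * r - t)
print(score)
```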
4 Dynamic Models
A temporal knowledge graph represents each fact as a quadruple $(s, r, o, t)$, where $s$, $r$, $o$, and $t$ denote the head entity, the relation, the tail entity, and the time at which the fact occurred, respectively.
4.1 Embedded Temporal Information Models
4.1.1 TA-TransE
4.2 Time Rotation Models
4.2.1 ChronoR