2021-8-2
Definition of an undirected network:
Definition: An undirected network is a tuple $N = (V, E, W)$, where $V$ is the set of nodes; $E \subseteq V \times V$, with $(v_i, v_j) \in E \Leftrightarrow (v_j, v_i) \in E$, is the set of edges; and $w_{ij} \in W$ is the weight of edge $(v_i, v_j)$.
Essentially, a network is a special kind of graph: attaching weights to the edges gives it more practical uses.
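As a minimal sketch of this idea (the class and member names below are my own, not from the post), an undirected network can be stored as a symmetric weight matrix, so that adding edge $(v_i, v_j)$ automatically adds $(v_j, v_i)$:

```java
// A minimal sketch: an undirected network as a symmetric weight matrix.
// 0 means "no edge"; all names here are illustrative only.
public class UndirectedNetwork {
	double[][] weights;

	public UndirectedNetwork(int paraNumNodes) {
		weights = new double[paraNumNodes][paraNumNodes];
	}// Of the constructor

	// Adding (v_i, v_j) also adds (v_j, v_i), keeping the matrix symmetric.
	public void addEdge(int paraI, int paraJ, double paraWeight) {
		weights[paraI][paraJ] = paraWeight;
		weights[paraJ][paraI] = paraWeight;
	}// Of addEdge

	public static void main(String[] args) {
		UndirectedNetwork tempNet = new UndirectedNetwork(3);
		tempNet.addEdge(0, 1, 2.5);
		// The symmetry condition from the definition holds by construction.
		System.out.println(tempNet.weights[1][0]);
	}// Of main
}// Of class UndirectedNetwork
```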
Draw a tree yourself and write out each part of its tuple (especially the function $p$).
The code defining a tree is as follows:
public class Tree {
	/**
	 * The number of nodes, representing nodes v_0 through v_{n-1}.
	 */
	int n;

	/**
	 * The root node, in the range 0 to n-1.
	 */
	int root;

	/**
	 * The parent of each node.
	 */
	int[] parent;

	/**
	 * Construct a tree where the first node is the root and all other nodes
	 * are its direct children (hence also leaves).
	 */
	public Tree(int paraN) {
		n = paraN;
		root = 0;
		parent = new int[n]; // Entries default to 0, i.e., the root.
		parent[0] = -1; // -1 stands for \phi (no parent).
	}// Of the constructor
}// Of class Tree
For this tree, write out the values of the variables in the code (especially the parent array).
For the variables: n = 6, root = 0, p[0] = -1, p[1] = 0, p[2] = 0, p[3] = 1, p[4] = 1.
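A parent array like the one above fully determines the tree; for instance, a node's depth is just the number of parent hops to the root. The following self-contained sketch uses a small tree of my own (not the exercise's tree) to show this:

```java
// A self-contained sketch: a small tree as a parent array (-1 marks the
// root), with each node's depth computed by climbing the parent chain.
// The example tree here is my own, not the one from the exercise.
public class ParentArrayDemo {
	public static void main(String[] args) {
		int[] tempParent = {-1, 0, 0, 1, 1};

		for (int i = 0; i < tempParent.length; i++) {
			int tempDepth = 0;
			for (int j = i; tempParent[j] != -1; j = tempParent[j]) {
				tempDepth++;
			}// Of for j
			System.out.println("Depth of node " + i + ": " + tempDepth);
		}// Of for i
	}// Of main
}// Of class ParentArrayDemo
```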
Draw a ternary tree and write out its child array.
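One common representation for this exercise (a sketch with a made-up tree, not a prescribed answer) stores the children in an n-by-m matrix, where entry [i][k] is the k-th child of node i and -1 plays the role of the empty node:

```java
// A sketch of an m-ary tree (here m = 3) as a child matrix:
// tempChild[i][k] is the k-th child of node i, or -1 for "no child".
// The tree itself is made up for illustration.
public class TernaryTreeDemo {
	public static void main(String[] args) {
		// Node 0 is the root with children 1, 2, 3; nodes 1..3 are leaves.
		int[][] tempChild = {
			{1, 2, 3},
			{-1, -1, -1},
			{-1, -1, -1},
			{-1, -1, -1}
		};
		System.out.println("The second child of the root: " + tempChild[0][1]);
	}// Of main
}// Of class TernaryTreeDemo
```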
Redefine the tree in the style of this post. Hint: you should still define the parent function; the alphabet contains only one element.
Based on the study of graphs, trees, and m-ary trees, discuss your understanding of tuples.
Let $\phi$ be the empty node. A tree is a quadruple $T = (\mathbf{A}, r, \Sigma, c)$ where
∙ $\mathbf{A} \neq \emptyset$ is the set of nodes;
∙ $r \in \mathbf{A}$ is the root node;
∙ $\Sigma = \{0\}$ is the alphabet;
∙ $c: (\mathbf{A} \cup \{\phi\}) \times \Sigma^{*} \rightarrow \mathbf{A} \cup \{\phi\}$ is the parent function, satisfying
∙ $\forall a \in \mathbf{A}$, $\exists 1\, s \in \Sigma^{*}$ s.t. $c(a, s) = r$.
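Since $\Sigma = \{0\}$, a string $s \in \Sigma^{*}$ is just $0^k$ for some length $k$, so $c(a, 0^k)$ means "apply the parent function $k$ times". A sketch of this reading (array values and method names are my own):

```java
// A sketch of the parent function c with alphabet {0}: the string 0^k
// means "climb k levels", so c(a, 0^k) is the k-th ancestor of a.
// -1 plays the role of the empty node \phi; the tree is made up.
public class ParentFunctionDemo {
	static int[] parent = {-1, 0, 0, 1, 1};

	// c(a, 0^k): apply the parent function k times; \phi absorbs everything.
	static int c(int paraNode, int paraK) {
		int tempNode = paraNode;
		for (int i = 0; i < paraK && tempNode != -1; i++) {
			tempNode = parent[tempNode];
		}// Of for i
		return tempNode;
	}// Of c

	public static void main(String[] args) {
		// For node 3 the unique string reaching the root is 0^2.
		System.out.println(c(3, 2));
	}// Of main
}// Of class ParentFunctionDemo
```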
2021-8-3
A multi-label distribution decision system is a tuple $S = (\mathbf{X}, \mathbf{Y})$ where $\mathbf{X} = [x_{ij}]_{n \times m} \in \mathbb{R}^{n \times m}$ is the data matrix, $\mathbf{Y} = [y_{ik}]_{n \times l} \in [0, 1]^{n \times l}$ is the label matrix, s.t. $\sum_{k = 1}^{l} y_{ik} = 1$, $n$ is the number of instances, $m$ is the number of features, and $l$ is the number of labels.
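The row-sum constraint says each row of $\mathbf{Y}$ is a probability distribution over the labels. A quick sanity check of that constraint (the matrix values below are made up for illustration):

```java
// Check that each row of a label distribution matrix Y sums to 1,
// as the definition requires. The example matrix is made up; a small
// tolerance absorbs floating-point rounding.
public class LabelMatrixCheck {
	public static void main(String[] args) {
		double[][] tempY = {
			{0.7, 0.2, 0.1},
			{0.0, 0.5, 0.5}
		};

		for (int i = 0; i < tempY.length; i++) {
			double tempSum = 0;
			for (int k = 0; k < tempY[i].length; k++) {
				tempSum += tempY[i][k];
			}// Of for k
			System.out.println("Row " + i + " valid: "
					+ (Math.abs(tempSum - 1.0) < 1e-9));
		}// Of for i
	}// Of main
}// Of class LabelMatrixCheck
```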
$$\min_{\mathbf{v}^{(j)}} E\left(\mathbf{v}^{(j)}\right) = \sum_{i = 1}^{l} L\left(y_{i}, g^{(j)}(\mathbf{x}_{i})\right) + \sum_{i = l + 1}^{l + u} v_{i}^{(j)} L\left(\overline{g}^{(j-1)}(\mathbf{x}_{i}), g^{(j)}(\mathbf{x}_{i})\right)$$
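The first sum is the usual supervised loss on the $l$ labeled instances; the second weights, by $v_i^{(j)}$, the disagreement between the current model $g^{(j)}$ and the previous round's prediction $\overline{g}^{(j-1)}$ on the $u$ unlabeled instances. A sketch of evaluating this objective, assuming $L$ is the squared loss and all predictions are precomputed (every array value here is made up):

```java
// Evaluate E(v^(j)) from the objective above, assuming L is the squared
// loss. Predictions are precomputed made-up arrays; this is only a sketch.
public class SelfTrainingObjective {
	static double squaredLoss(double paraTarget, double paraPrediction) {
		double tempDiff = paraTarget - paraPrediction;
		return tempDiff * tempDiff;
	}// Of squaredLoss

	public static void main(String[] args) {
		int l = 2; // Number of labeled instances; the rest are unlabeled.
		double[] y = {1.0, 0.0}; // Labels of the l labeled instances.
		double[] g = {0.9, 0.1, 0.6, 0.4}; // g^(j) on all l + u instances.
		double[] gBar = {0.0, 0.0, 0.5, 0.5}; // g^(j-1); first l unused.
		double[] v = {0.0, 0.0, 1.0, 0.5}; // Weights v^(j); first l unused.

		double tempE = 0;
		for (int i = 0; i < l; i++) {
			tempE += squaredLoss(y[i], g[i]); // Supervised term.
		}// Of for i
		for (int i = l; i < g.length; i++) {
			tempE += v[i] * squaredLoss(gBar[i], g[i]); // Weighted disagreement.
		}// Of for i
		System.out.println("E = " + tempE);
	}// Of main
}// Of class SelfTrainingObjective
```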