摘要
人工智能术语翻译第三部分,包括 I、J、K、L 开头的词汇!

I
英文术语 | 中文翻译 | 常用缩写 | 备注 |
---|---|---|---|
I.I.D. Assumption | 独立同分布假设 | | |
Identically Distributed | 同分布的 | | |
Identifiable | 可辨认的 | | |
Identity Function | 恒等函数 | | |
Identity Mapping | 恒等映射 | | |
Identity Matrix | 单位矩阵 | | |
Ill Conditioning | 病态 | | |
Ill-Formed Problem | 病态问题 | | |
Image | 图像 | | |
Image Restoration | 图像还原 | | |
Imitation Learning | 模仿学习 | | |
Immorality | 不道德 | | |
Imperfect Information | 不完美信息 | | |
Implicit Density Model | 隐式密度模型 | | |
Import | 导入 | | |
Importance Sampling | 重要性采样 | | |
Improved Iterative Scaling | 改进的迭代尺度法 | IIS | |
Incomplete-Data | 不完全数据 | | |
Incremental Learning | 增量学习 | | |
Indefinite Integral | 不定积分 | | |
Independence | 独立 | | |
Independent | 相互独立的 | | |
Independent and Identically Distributed | 独立同分布 | I.I.D. | |
Independent Component Analysis | 独立成分分析 | ICA | |
Independent Subspace Analysis | 独立子空间分析 | | |
Index of Matrix | 矩阵索引 | | |
Indicator Function | 指示函数 | | |
Individual Learner | 个体学习器 | | |
Induction | 归纳 | | |
Inductive Bias | 归纳偏好 | | |
Inductive Learning | 归纳学习 | | |
Inductive Logic Programming | 归纳逻辑程序设计 | ILP | |
Inductive Transfer Learning | 归纳迁移学习 | | |
Inequality Constraint | 不等式约束 | | |
Inference | 推断 | | |
Infinite | 无限 | | |
Infinitely Exchangeable | 无限可交换 | | |
Information Divergence | 信息散度 | | |
Information Entropy | 信息熵 | | |
Information Gain | 信息增益 | | 统计;示例见本节末尾 |
Information Gain Ratio | 信息增益比 | | 统计 |
Information Retrieval | 信息检索 | | |
Information Theory | 信息论 | | |
Inner Product | 内积 | | |
Input | 输入 | | |
Input Distribution | 输入分布 | | |
Input Gate | 输入门 | | |
Input Layer | 输入层 | | |
Input Space | 输入空间 | | |
Insensitive Loss | 不敏感损失 | | |
Instance | 示例 | | |
Instance Segmentation | 实例分割 | | |
Integer Linear Programming | 整数线性规划 | ILP | |
Integer Programming | 整数规划 | | |
Integration | 积分 | | |
Inter-Cluster Similarity | 簇间相似度 | | |
Internal Covariate Shift | 内部协变量偏移 | | |
Internal Node | 内部结点 | | |
International Conference on Machine Learning | 国际机器学习大会 | ICML | |
Intervention Query | 干预查询 | | |
Intra-Attention | 内部注意力 | | |
Intra-Cluster Similarity | 簇内相似度 | | |
Intrinsic Value | 固有值 | | |
Invariance | 不变性 | | |
Invariant | 不变 | | |
Inverse Matrix | 逆矩阵 | | |
Inverse Reinforcement Learning | 逆强化学习 | IRL | |
Inverse Resolution | 逆归结 | | |
Inverse Time Decay | 逆时衰减 | | |
Invert | 求逆 | | |
Irreducible | 不可约的 | | |
Irrelevant Feature | 无关特征 | | |
Isometric Mapping | 等度量映射 | Isomap | |
Isotonic Regression | 保序回归 | | |
Isotropic | 各向同性 | | |
Isotropic Gaussian Distribution | 各向同性高斯分布 | | |
Iteration | 迭代 | | 数学、机器学习 |
Iterative Dichotomiser | 迭代二分器 | | |
ID3 Algorithm | ID3 算法 | | |
Image And Speech Recognition | 图像和语音识别 | | |
Image Classification | 图像分类 | | |
Image Classifier | 图像分类器 | | |
Image Recognition | 图像识别 | | 机器学习 |
Informative Priors | 信息先验 | | |
Input-Output Pairs | 输入输出对 | | |
Instance-Based | 基于实例的 | | |
Intelligent Machine | 智能机器 | | |
Intermediate Neurons | 中间神经元 | | 机器学习 |
Internet Of Things | 物联网 | IoT | |
Interpolation Coordinate | 插值坐标 | | |
Interpretability | 可解释性 | | |
Inverse Neural Modeling | 逆神经建模 | INN | |
Inverse Neural Network Modeling | 逆神经网络建模 | | |
Iterative Learning | 迭代学习 | | |
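
下面用一段简短的 Python 代码示意本表中"信息熵(Information Entropy)"与"信息增益(Information Gain)"这两个术语的含义。这只是帮助理解术语的最小草图,其中的数据与函数名均为举例假设,并非出自任何特定教材或库的实现。

```python
import math
from collections import Counter

def entropy(labels):
    """信息熵:H(D) = -sum(p_k * log2(p_k)),p_k 为各类别标记所占比例。"""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(labels, groups):
    """信息增益 = 划分前的熵 - 划分后各子集熵的加权平均。"""
    total = len(labels)
    weighted = sum(len(g) / total * entropy(g) for g in groups)
    return entropy(labels) - weighted

# 假设的示例数据:按某个二值特征把标记划分为两个子集
labels = ["好瓜", "好瓜", "坏瓜", "好瓜", "坏瓜", "坏瓜"]
groups = [["好瓜", "好瓜", "坏瓜"], ["好瓜", "坏瓜", "坏瓜"]]
print(entropy(labels))               # 类别各占一半,熵为 1.0
print(information_gain(labels, groups))
```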
J
英文术语 | 中文翻译 | 常用缩写 | 备注 |
---|---|---|---|
Jacobian | 雅可比 | | |
Jacobian Matrix | 雅可比矩阵 | | |
Jensen Inequality | Jensen不等式 | | |
Jensen-Shannon Divergence | JS散度 | JSD | 示例见本节末尾 |
Joint Probability Density Function | 联合概率密度函数 | | |
Joint Probability Distribution | 联合概率分布 | | |
Junction Tree Algorithm | 联合树算法 | | |
Joint Distribution | 联合分布 | | |
Jordan-Elman Neural Networks | Jordan-Elman 神经网络 | | |
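
作为补充,下面示意 Jensen-Shannon 散度(JSD)与 KL 散度的关系:JSD(P‖Q) = ½·KL(P‖M) + ½·KL(Q‖M),其中 M = ½(P+Q)。代码仅针对离散分布,分布取值为随意假设的示例,非权威实现。

```python
import math

def kl_divergence(p, q):
    """离散分布的 KL 散度:KL(P‖Q) = sum(p_i * log(p_i / q_i)),约定 0·log0 = 0。"""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon 散度:KL 散度的对称化版本,以混合分布 M 为参照。"""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

p = [0.1, 0.4, 0.5]
q = [0.3, 0.4, 0.3]
print(js_divergence(p, q))   # 与 js_divergence(q, p) 结果相同,体现其对称性
```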
K
英文术语 | 中文翻译 | 常用缩写 | 备注 |
---|---|---|---|
K-Armed Bandit Problem | k-摇臂老虎机 | | |
K-Fold Cross Validation | k 折交叉验证 | K-FOLD CV | 统计 |
K-Means Clustering | k-均值聚类 | | 示例见本节末尾 |
K-Nearest Neighbor Classifier | k-近邻分类器 | | |
K-Nearest Neighbor Method | k-近邻 | K-NN | 统计 |
Karush-Kuhn-Tucker Condition | KKT 条件(卡鲁什-库恩-塔克条件) | KKT | |
Kd Tree | Kd 树 | | |
Kernel Density Estimation | 核密度估计 | | |
Kernel Function | 核函数 | | |
Kernel Machine | 核机器 | | |
Kernel Matrix | 核矩阵 | | |
Kernel Method | 核方法 | | 机器学习 |
Kernel Regression | 核回归 | | |
Kernel Trick | 核技巧 | | |
Kernelized | 核化 | | |
Kernelized Linear Discriminant Analysis | 核线性判别分析 | KLDA | |
Kernelized PCA | 核主成分分析 | KPCA | |
Key-Value Store | 键-值数据库 | | |
KL Divergence | KL散度 | | |
Knowledge | 知识 | | |
Knowledge Base | 知识库 | | |
Knowledge Distillation | 知识蒸馏 | | |
Knowledge Engineering | 知识工程 | | |
Knowledge Graph | 知识图谱 | | |
Knowledge Representation | 知识表征 | | |
Kronecker Product | Kronecker积 | | |
Krylov Method | Krylov方法 | | |
K Clusters | K 个簇 | | |
K Nearest Points | K 最近点 | | 统计 |
K-1 Folds | K-1 折 | | |
K-Edge (O-K Edge) | K-边缘(O-K 边缘) | | |
K-Means | K-均值 | | 统计 |
Kendall’s Tau | 肯德尔等级相关系数 | | |
Kernel Ridge Regression | 核岭回归 | KRR | |
Kernels | 核/核函数 | | |
Kinetic Curve | 动力学曲线 | | |
KNN Model | K 近邻模型 | | |
Knowledge Extraction | 知识提取 | | |
Knowledge Gradient | 知识梯度 | KG | |
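
下面给出 k-均值聚类(K-Means Clustering)的一个极简 numpy 草图,直观展示"把样本划分为 K 个簇、交替执行样本分配与簇中心更新"的迭代过程;初始化方式与收敛判据均做了最简化假设,空簇等边界情况未作处理,仅供理解术语之用。

```python
import numpy as np

def k_means(X, k, n_iters=100, seed=0):
    """极简 k-均值:随机选 k 个样本作为初始簇中心,交替执行
    (1) 按最近簇中心分配样本,(2) 用簇内均值更新中心。"""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # 每个样本到各中心的欧氏距离,取最近者作为簇标记
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # 简化假设:每个簇都非空
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# 假设的二维示例数据:两团相互分离的点
X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5])
labels, centers = k_means(X, k=2)
print(centers)
```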
L
英文术语 | 中文翻译 | 常用缩写 | 备注 |
---|---|---|---|
L-BFGS | L-BFGS | | |
Label | 标签/标记 | | |
Label Propagation | 标记传播 | | |
Label Smoothing | 标签平滑 | | |
Label Space | 标记空间 | | |
Labeled | 标注 | | |
Lagrange Dual Problem | 拉格朗日对偶问题 | | |
Lagrange Duality | 拉格朗日对偶性 | | |
Lagrange Function | 拉格朗日函数 | | |
Lagrange Multiplier | 拉格朗日乘子 | | |
Language Model | 语言模型 | | |
Language Modeling | 语言建模 | | |
Laplace Distribution | Laplace分布 | | |
Laplace Smoothing | 拉普拉斯平滑 | | |
Laplacian Correction | 拉普拉斯修正 | | |
Large Learning Step | 大学习步长 | | |
Las Vegas Method | 拉斯维加斯方法 | | |
Latent | 潜在 | | |
Latent Dirichlet Allocation | 潜在狄利克雷分配 | LDA | |
Latent Layer | 潜层 | | |
Latent Semantic Analysis | 潜在语义分析 | LSA | |
Latent Semantic Indexing | 潜在语义索引 | LSI | |
Latent Variable | 潜变量/隐变量 | | |
Law of Large Numbers | 大数定律 | | |
Layer | 层 | | |
Layer Normalization | 层规范化 | | |
Layer-Wise | 逐层的 | | |
Layer-Wise Adaptive Rate Scaling | 逐层适应率缩放 | LARS | |
Layer-Wise Normalization | 逐层规范化 | | |
Layer-Wise Pretraining | 逐层预训练 | | |
Layer-Wise Training | 逐层训练 | | |
Lazy Learning | 懒惰学习 | | |
Leaf Node | 叶结点 | | |
Leaky ReLU Function | 泄漏线性整流函数 | | |
Leaky ReLU | 泄漏修正线性单元/泄漏整流线性单元 | | |
Leaky Unit | 渗漏单元 | | |
Learned | 学成 | | |
Learned Approximate Inference | 学习近似推断 | | |
Learner | 学习器 | | |
Learning | 学习 | | |
Learning Algorithm | 学习算法 | | |
Learning By Analogy | 类比学习 | | |
Learning Rate | 学习率 | | |
Learning Rate Annealing | 学习率退火 | | |
Learning Rate Decay | 学习率衰减 | | |
Learning Rate Warmup | 学习率预热 | | |
Learning To Learn | 学习的学习 | | |
Learning Vector Quantization | 学习向量量化 | LVQ | |
Least General Generalization | 最小一般泛化 | | |
Least Mean Squares | 最小均方 | LMS | |
Least Square Method | 最小二乘法 | LSM | |
Least Squares Regression Tree | 最小二乘回归树 | | |
Leave-One-Out Cross Validation | 留一交叉验证 | | |
Leave-One-Out | 留一法 | LOO | |
Lebesgue-Integrable | 勒贝格可积 | | |
Left Eigenvector | 左特征向量 | | |
Left Singular Vector | 左奇异向量 | | |
Leibniz’s Rule | 莱布尼兹法则 | | |
Lifelong Learning | 终身学习 | | |
Likelihood | 似然 | | |
Line Search | 线搜索 | | |
Linear Auto-Regressive Network | 线性自回归网络 | | |
Linear Chain | 线性链 | | |
Linear Chain Conditional Random Field | 线性链条件随机场 | | |
Linear Classification Model | 线性分类模型 | | |
Linear Classifier | 线性分类器 | | |
Linear Combination | 线性组合 | | 数学 |
Linear Dependence | 线性相关 | | |
Linear Discriminant Analysis | 线性判别分析 | LDA | 统计、机器学习 |
Linear Factor Model | 线性因子模型 | | |
Linear Mapping | 线性映射 | | |
Linear Model | 线性模型 | | 统计、机器学习 |
Linear Programming | 线性规划 | | |
Linear Regression | 线性回归 | | 统计、数学 |
Linear Scaling Rule | 线性缩放规则 | | |
Linear Scan | 线性扫描 | | |
Linear Space | 线性空间 | | |
Linear Support Vector Machine | 线性支持向量机 | | |
Linear Support Vector Machine In Linearly Separable Case | 线性可分支持向量机 | | |
Linear Threshold Units | 线性阈值单元 | | |
Linear Transformation | 线性变换 | | |
Linearly Independent | 线性无关 | | |
Linearly Separable | 线性可分 | | |
Linearly Separable Data Set | 线性可分数据集 | | |
Link Analysis | 链接分析 | | |
Link Function | 联系函数 | | |
Link Prediction | 链接预测 | | |
Link Table | 连接表 | | |
Linkage | 连接 | | |
Linked Importance Sampling | 链接重要采样 | | |
Lipschitz | Lipschitz | | |
Lipschitz Constant | Lipschitz常数 | | |
Lipschitz Continuous | Lipschitz连续 | | |
Liquid State Machine | 流体状态机 | | |
Local Conditional Probability Distribution | 局部条件概率分布 | | |
Local Constancy Prior | 局部不变性先验 | | |
Local Contrast Normalization | 局部对比度规范化 | | |
Local Curvature | 局部曲率 | | |
Local Descent | 局部下降 | | |
Local Invariances | 局部不变性 | | |
Local Kernel | 局部核 | | |
Local Markov Property | 局部马尔可夫性 | | |
Local Maxima | 局部极大值 | | |
Local Maximum | 局部极大点 | | |
Local Minima | 局部极小 | | |
Local Minimizer | 局部最小解 | | |
Local Minimum | 局部极小 | | |
Local Representation | 局部式表示/局部式表征 | | |
Local Response Normalization | 局部响应规范化 | LRN | |
Locally Linear Embedding | 局部线性嵌入 | LLE | |
Log Likelihood | 对数似然函数 | | |
Log Linear Model | 对数线性模型 | | |
Log-Likelihood | 对数似然 | | |
Log-Likelihood Loss Function | 对数似然损失函数 | | |
Log-Linear Regression | 对数线性回归 | | |
Logarithmic Loss Function | 对数损失函数 | | |
Logarithmic Scale | 对数尺度 | | |
Logistic Distribution | 对数几率分布 | | |
Logistic Function | 对数几率函数 | | |
Logistic Loss | 对率损失 | | |
Logistic Regression | 对数几率回归 | LR | 统计、机器学习;示例见本节末尾 |
Logistic Sigmoid | 对数几率Sigmoid | | |
Logit | 对数几率 | | |
Long Short Term Memory | 长短期记忆 | LSTM | |
Long Short-Term Memory Network | 长短期记忆网络 | LSTM | |
Long-Term Dependencies Problem | 长程依赖问题 | | |
Long-Term Dependency | 长期依赖 | | |
Long-Term Memory | 长期记忆 | | |
Loop | 环 | | |
Loopy Belief Propagation | 环状信念传播 | LBP | |
Loss | 损失 | | |
Loss Function | 损失函数 | | 机器学习 |
Low Rank Matrix Approximation | 低秩矩阵近似 | | |
Lp Distance | Lp距离 | | |
L1 And L2 Regularization | L1与L2正则化 | | |
Laboratory Level | 实验室级别 | | |
Language Processing | 语言处理 | | |
Laplacian Prior | 拉普拉斯先验 | | |
Large-Scale Data Storage | 大规模数据存储 | | |
Lasers | 激光器 | | |
Lasso Regression | 拉索回归 | | |
Local Binary Pattern | 局部二值模式 | LBP | |
Least Absolute Shrinkage And Selection Operator | Lasso回归 | LASSO | |
Least Square Support Vector Machine | 最小二乘支持向量机 | LSSVM | |
Ligand-Field | 配位场 | | |
Linear | 线性的 | | 数学 |
Linear Dimension Reduction Methods | 线性降维方法 | | |
Linear Vibronic Coupling Model | 线性电子-振动耦合模型 | | |
Local Recurrent | 局部循环 | | |
Logic And Heuristics Applied To Synthetic Analysis | LHASA 程序 | LHASA | |
Long-Range Prediction | 长期预测 | | |
Long-Range Prediction Models | 长期预测模型 | | |
Long-Term Planning | 长期规划 | | |
Long-Term Reward | 长期回报 | | |
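
最后以对数几率回归(Logistic Regression)为例,示意表中"对数几率函数""对数似然""损失函数""学习率"等术语如何出现在同一个模型中。这是一个基于梯度下降的示意实现,数据与超参数均为随意假设,不代表任何库的标准接口。

```python
import numpy as np

def sigmoid(z):
    """对数几率函数(logistic function):把实数映射到 (0, 1) 区间。"""
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, learning_rate=0.1, n_iters=1000):
    """用梯度下降最大化对数似然(等价于最小化对数损失/交叉熵损失)。"""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iters):
        p = sigmoid(X @ w + b)           # 预测为正类的概率
        grad_w = X.T @ (p - y) / n       # 负对数似然对 w 的梯度
        grad_b = np.mean(p - y)
        w -= learning_rate * grad_w      # 学习率控制每步更新的幅度
        b -= learning_rate * grad_b
    return w, b

# 假设的线性可分示例数据:两类样本分别分布在 (-2,-2) 与 (+2,+2) 附近
X = np.vstack([np.random.randn(50, 2) - 2, np.random.randn(50, 2) + 2])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = train_logistic_regression(X, y)
print((sigmoid(X @ w + b) > 0.5).astype(int))   # 输出各样本的预测类别
```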