Letter A
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Absolute value rectification | 絕對(duì)值整流 | [1] |
Activation Function | 激活函數(shù) | [1] / [2] |
Accumulated error backpropagation | 累積誤差反向傳播 | [1] |
Acoustic modeling | 聲學(xué)建模 | [1] |
Acquisition function | 采集函數(shù) | [1] |
Actor-critic method | 行為-評(píng)判方法 | [1] |
Adaptive bitrate (ABR) algorithm | 自適應(yīng)比特率算法 | [1] |
Adaptive Resonance Theory/ART | 自適應(yīng)諧振理論 | [1] |
Additive model | 加性模型 | [1] |
Adversarial example | 對(duì)抗樣本 | [1] |
Adversarial Networks | 對(duì)抗網(wǎng)絡(luò) | [1] |
Affine Layer | 仿射層 | [1] |
Affinity matrix | 親和矩陣 | [1] |
Agent | 智能體 | [1] / [2] / [3] / [4] |
Algorithm | 算法 | [1] / [2] / [3] |
Alpha-beta pruning | α-β剪枝 | [1] |
Alternative splicing dataset | 選擇性剪接數(shù)據(jù)集 | [1] |
Analytic gradient | 解析梯度 | [1] |
Ancestral Sampling | 原始采樣 | [1] |
Annealed importance sampling | 退火重要采樣 | [1] |
Anomaly detection | 異常檢測(cè) | [1] |
Application-specific integrated circuit | 專用集成電路 | [1] |
Approximate Bayesian computation | 近似貝葉斯計(jì)算 | [1] |
Approximate inference | 近似推斷 | [1] |
Approximation | 近似 | [1] |
Architecture | 架構(gòu) | [1] |
Area Under ROC Curve/AUC | ROC 曲線下面積 | [1] |
Artificial General Intelligence/AGI | 通用人工智能 | [1] |
Artificial Intelligence/AI | 人工智能 | [1] / [2] / [3] |
Association analysis | 關(guān)聯(lián)分析 | [1] |
Asymptotically unbiased | 漸近無偏 | [1] |
Asynchronous Stochastic Gradient Descent | 異步隨機(jī)梯度下降 | [1] |
Attention mechanism | 注意力機(jī)制 | [1] / [2] / [3] |
Attribute conditional independence assumption | 屬性條件獨(dú)立性假設(shè) | [1] |
Attribute space | 屬性空間 | [1] |
Attribute value | 屬性值 | [1] |
Augmented Lagrangian | 增廣拉格朗日法 | [1] |
Autoencoder | 自編碼器 | [1] |
Automatic differentiation | 自動(dòng)微分 | [1] |
Automatic speech recognition/ASR | 自動(dòng)語音識(shí)別 | [1] |
Automatic summarization | 自動(dòng)摘要 | [1] |
Auto-regressive network | 自回歸網(wǎng)絡(luò) | [1] |
Average gradient | 平均梯度 | [1] |
Average-Pooling | 平均池化 | [1] |
Letter B
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Backpropagation/BP | 反向傳播 | [1] |
Backpropagation Through Time | 通過時(shí)間的反向傳播 | [1] |
Backward induction | 逆向歸納 | [1] |
Bag of words/BoW | 詞袋 | [1] |
Base learner | 基學(xué)習(xí)器 | [1] |
Base learning algorithm | 基學(xué)習(xí)算法 | [1] |
Batch | 批次 | [1][5] |
Batch Normalization/BN | 批次歸一化 | [1] |
Bayes decision rule | 貝葉斯判定準(zhǔn)則 | [1] |
Bayes error | 貝葉斯誤差 | [1] |
Bayes Model Averaging/BMA | 貝葉斯模型平均 | [1] |
Bayes optimal classifier | 貝葉斯最優(yōu)分類器 | [1] |
Bayesian decision theory | 貝葉斯決策論 | [1] |
Bayesian network | 貝葉斯網(wǎng)絡(luò) | [1] |
Bayesian optimization | 貝葉斯優(yōu)化 | [1] |
Beam search | 束搜索 | [1] |
Benchmark | 基準(zhǔn) | [1] |
Belief network | 信念網(wǎng)絡(luò) | [1] |
Bellman equation | 貝爾曼方程 | [1] |
Between-class scatter matrix | 類間散度矩陣 | [1] |
Bias | 偏置 / 偏差 | [1] |
Biased | 有偏 | [1] |
Biased importance sampling | 有偏重要采樣 | [1] |
Bias-variance decomposition | 偏差-方差分解 | [1] |
Bias-Variance Dilemma | 偏差-方差困境 | [1] |
Bi-directional Long-Short Term Memory/Bi-LSTM | 雙向長(zhǎng)短期記憶 | [1] |
Binary classification | 二元分類 | [1] |
Binary relation | 二元關(guān)系 | [1] |
Binary sparse coding | 二值稀疏編碼 | [1] |
Binomial distribution | 二項(xiàng)分布 | [1] |
Binomial test | 二項(xiàng)檢驗(yàn) | [1] |
Bi-partition | 二分法 | [1] |
Block coordinate descent | 塊坐標(biāo)下降 | [1] |
Block Gibbs Sampling | 塊吉布斯采樣 | [1] |
Boilerplate code | 樣板代碼 | [1] |
Boltzmann distribution | 玻爾茲曼分布 | [1] |
Boltzmann machine | 玻爾茲曼機(jī) | [1] |
Bootstrap sampling | 自助采樣法/可重復(fù)采樣/有放回采樣 | [1] |
Bootstrapping | 自助法 | [1] |
Bottleneck layer | 瓶頸層 | [1] |
Bounding Boxes | 邊界框 | [1] |
Break-Even Point/BEP | 平衡點(diǎn) | [1] |
Bridge sampling | 橋式采樣 | [1] |
Broadcasting | 廣播 | [1] |
Burning-in | 磨合 | [1] |
Letter C
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Calculus of variations | 變分法 | [1] |
Calibration | 校準(zhǔn) | [1] |
Canonical | 正則的 | |
Cascade/Coalesced | 級(jí)聯(lián) | [1] |
Cascade-Correlation | 級(jí)聯(lián)相關(guān) | [1] |
Categorical attribute | 分類屬性 | [1][5] |
Categorical distribution | 范疇分布/分類分布 | [1] |
Causal factor | 因果因子 | [1] |
Causal modeling | 因果模型 | [1] |
Centered difference | 中心差分 | [1] |
Central limit theorem | 中心極限定理 | [1] |
Chain rule | 鏈?zhǔn)椒▌t | [1] |
Chordal graph | 弦圖 | [1] |
Class-conditional probability | 類條件概率 | [1] |
Classification and regression tree/CART | 分類與回歸樹 | [1] |
Classifier | 分類器 | [1] |
Class-imbalance | 分類不平衡 | [1] |
Clip gradient | 梯度截?cái)?/td> | [1] |
Clique potential | 團(tuán)勢(shì)能 | [1] |
Closed-form | 閉式 | [1] |
Cluster | 簇/類/集群 | [1] |
Cluster analysis | 聚類分析 | [1] |
Clustering | 聚類 | [1] |
Clustering ensemble | 聚類集成 | [1] |
Co-adapting | 共適應(yīng) | [1] |
Coding matrix | 編碼矩陣 | [1] |
Collaborative filtering | 協(xié)同過濾 | [1] |
COLT | 國(guó)際學(xué)習(xí)理論會(huì)議 | [1] |
Committee-based learning | 基于委員會(huì)的學(xué)習(xí) | [1] |
Competitive learning | 競(jìng)爭(zhēng)型學(xué)習(xí) | [1] |
Complete graph | 完全圖 | [1] |
Component learner | 組件學(xué)習(xí)器 | [1] |
Comprehensibility | 可解釋性 | [1] |
Computation Cost | 計(jì)算成本 | [1] |
Computational Linguistics | 計(jì)算語言學(xué) | [1] |
Computer vision | 計(jì)算機(jī)視覺 | [1] |
Concept drift | 概念漂移 | [1] |
Concept Learning System/CLS | 概念學(xué)習(xí)系統(tǒng) | [1] |
Conditional entropy | 條件熵 | [1] |
Conditional mutual information | 條件互信息 | [1] |
Conditional Probability Table/CPT | 條件概率表 | [1] |
Conditional random field/CRF | 條件隨機(jī)場(chǎng) | [1] |
Conditional risk | 條件風(fēng)險(xiǎn) | [1] |
Confidence | 置信度 | [1] |
Confusion matrix | 混淆矩陣 | [1] |
Conjugate directions | 共軛方向 | [1] |
Conjugate distribution | 共軛分布 | [1] |
Conjugate gradient | 共軛梯度 | [1] |
Connection weight | 連接權(quán) | [1] |
Connectionism | 連結(jié)主義 | [1] |
Consistency | 一致性/相合性 | [1] |
Consistency convergence | 一致性收斂 | [1] |
Contingency table | 列聯(lián)表 | [1] |
Continuation method | 延拓法 | [1] |
Continuous attribute | 連續(xù)屬性 | [1] |
Contractive autoencoder | 收縮自編碼器 | [1] |
Contractive neural network | 收縮神經(jīng)網(wǎng)絡(luò) | [1] |
Convex optimization | 凸優(yōu)化 | [1] |
Convergence | 收斂 | [1] |
Conversational agent | 會(huì)話智能體 | [1] |
Convex quadratic programming | 凸二次規(guī)劃 | [1] |
Convexity | 凸性 | [1] |
Convolutional Boltzmann Machine | 卷積玻爾茲曼機(jī) | [1] |
Convolutional neural network/CNN | 卷積神經(jīng)網(wǎng)絡(luò) | [1]/[2]/[3] |
Co-occurrence | 同現(xiàn) | [1] |
Coordinate descent | 坐標(biāo)下降 | [1] |
Correlation coefficient | 相關(guān)系數(shù) | [1] |
Cosine similarity | 余弦相似度 | [1] |
Cost curve | 成本曲線 | [1] |
Cost Function | 成本函數(shù) | [1] |
Cost matrix | 成本矩陣 | [1] |
Cost-sensitive | 成本敏感 | [1] |
Covariance | 協(xié)方差 | [1] |
Covariance matrix | 協(xié)方差矩陣 | [1] |
Cross entropy | 交叉熵 | [1] |
Cross validation | 交叉驗(yàn)證 | [1] |
Cross-correlation | 互相關(guān)函數(shù) | [1] |
Crowdsourcing | 眾包 | [1] |
Cumulative function | 累積函數(shù) | [1] |
Curse of dimensionality | 維度災(zāi)難 | [1] |
Curve-fitting | 曲線擬合 | [1] |
Cut point | 截?cái)帱c(diǎn) | [1] |
Cutting plane algorithm | 割平面法 | [1] |
Letter D
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Data generating distribution | 數(shù)據(jù)生成分布 | [1] |
Data mining | 數(shù)據(jù)挖掘 | [1] |
Data parallelism | 數(shù)據(jù)并行 | [1] |
Data set | 數(shù)據(jù)集 | [1] |
Data Wrangling | 數(shù)據(jù)整理 | [1] |
Dataset augmentation | 數(shù)據(jù)集增強(qiáng) | [1] |
Debugging strategy | 調(diào)試策略 | [1] |
Decision Boundary | 決策邊界 | [1] |
Decision stump | 決策樹樁 | [1] |
Decision tree | 決策樹/判定樹 | [1]/[2] |
Deconvolutional Network | 解卷積網(wǎng)絡(luò) | [1] |
Deduction | 演繹 | [1] |
Deep Belief Network | 深度信念網(wǎng)絡(luò) | [1] |
Deep Boltzmann Machine | 深度玻爾茲曼機(jī) | [1] |
Deep circuit | 深度回路 | [1] |
Deep Convolutional Generative Adversarial Network/DCGAN | 深度卷積生成對(duì)抗網(wǎng)絡(luò) | [1] |
Deep generative model | 深度生成模型 | [1] |
Deep learning | 深度學(xué)習(xí) | [1]/[2]/[3] |
Deep neural network/DNN | 深度神經(jīng)網(wǎng)絡(luò) | [1]/[2]/[3] |
Deep Q-Learning | 深度 Q 學(xué)習(xí) | [1]/[2] |
Deep Q-Network | 深度 Q 網(wǎng)絡(luò) | [1] |
Denoising autoencoder | 去噪自編碼器 | [1] |
Denoising score matching | 去噪得分匹配 | [1] |
Density estimation | 密度估計(jì) | [1] |
Density-based clustering | 密度聚類 | [1] |
Detailed balance | 細(xì)致平衡 | [1] |
Determinant | 行列式 | [1] |
Deterministic | 確定性 | |
Diagonal matrix | 對(duì)角矩陣 | [1] |
Differentiable neural computer | 可微分神經(jīng)計(jì)算機(jī) | [1] |
Differential entropy | 微分熵 | [1] |
Differential equation | 微分方程 | [1] |
Dimensionality reduction algorithm | 降維算法 | [1] |
Directed edge | 有向邊 | [1] |
Directed graphical model | 有向圖模型 | [1] |
Directional derivative | 方向?qū)?shù) | [1] |
Dirichlet distribution | 狄利克雷分布 | [1] |
Disagreement measure | 不合度量 | [1] |
Discriminative model | 判別模型 | [1] |
Discriminator | 判別器 | [1] |
Discriminator network | 判別器網(wǎng)絡(luò) | [1] |
Distance measure | 距離度量 | [1] |
Distance metric learning | 距離度量學(xué)習(xí) | [1] |
Distribution | 分布 | [1] |
Divergence | 散度 | [1] |
Diversity measure | 多樣性度量/差異性度量 | [1] |
Domain adaption | 領(lǐng)域自適應(yīng) | [1] |
Dominant strategy | 占優(yōu)策略 | [1] |
Double backprop | 雙反向傳播 | [1] |
Doubly block circulant matrix | 雙重分塊循環(huán)矩陣 | [1] |
Downsampling | 下采樣 | [1] |
D-separation/Directed separation | 有向分離 | [1] |
Dual problem | 對(duì)偶問題 | [1] |
Dummy node | 啞結(jié)點(diǎn) | [1] |
Dynamic Fusion | 動(dòng)態(tài)融合 | [1] |
Dynamic programming | 動(dòng)態(tài)規(guī)劃 | [1] |
Letter E
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Echo state network | 回聲狀態(tài)網(wǎng)絡(luò) | [1] |
Edge device | 邊緣設(shè)備 | [1] |
Eigendecomposition | 特征分解 | [1] |
Eigenvalue | 特征值 | [1] |
Eigenvalue decomposition | 特征值分解 | [1] |
Eigenvector | 特征向量 | [1] |
Element-wise product | 元素對(duì)應(yīng)乘積 | [1] |
Ellipsoid method | 橢球法 | [1] |
Embedding | 嵌套 | [1][5] |
Emotional analysis | 情緒分析 | [1] |
Empirical conditional entropy | 經(jīng)驗(yàn)條件熵 | [1] |
Empirical entropy | 經(jīng)驗(yàn)熵 | [1] |
Empirical error | 經(jīng)驗(yàn)誤差 | [1] |
Empirical risk | 經(jīng)驗(yàn)風(fēng)險(xiǎn) | [1] |
End-to-End | 端到端 | [1] |
Energy-based model | 基于能量的模型 | [1] |
Ensemble learning | 集成學(xué)習(xí) | [1] |
Ensemble pruning | 集成修剪 | [1] |
Epochs | 輪數(shù)/周期 | [1][5] |
Error Correcting Output Codes/ECOC | 糾錯(cuò)輸出碼 | [1] |
Error rate | 錯(cuò)誤率 | [1] |
Error-ambiguity decomposition | 誤差-分歧分解 | [1] |
Euclidean distance | 歐氏距離 | [1] |
Euclidean norm | 歐幾里得范數(shù) | [1] |
Evolutionary computation | 演化計(jì)算 | [1] |
Exact | 確切的 | |
Expectation-Maximization/EM | 期望最大化 | [1] |
Expected loss | 期望損失 | [1] |
Expert network | 專家網(wǎng)絡(luò) | [1] |
Explaining away effect | 相消解釋作用 | [1] |
Exploding Gradient Problem | 梯度爆炸問題 | [1] |
Exploitation | 利用 | [1] |
Exploration | 探索 | [1] |
Exponential loss function | 指數(shù)損失函數(shù) | [1] |
Extreme Learning Machine/ELM | 超限學(xué)習(xí)機(jī) | [1] |
Letter F
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Factor analysis | 因子分析 | [1] |
Factorization | 因子分解 | [1] |
Factors of variation | 變差因素 | [1] |
False negative | 假負(fù)例 | [1] |
False positive | 假正例 | [1] |
False Positive Rate/FPR | 假正例率 | [1] |
Fault-tolerant asynchronous training | 容錯(cuò)異步訓(xùn)練 | [1] |
Feature engineering | 特征工程 | [1] |
Feature extractor | 特征提取器 | [1] |
Feature map | 特征圖 | [1] |
Feature selection | 特征選擇 | [1] |
Feature vector | 特征向量 | [1] |
Feature Learning | 特征學(xué)習(xí) | [1] |
Feedforward Neural Networks/FNN | 前饋神經(jīng)網(wǎng)絡(luò) | [1] |
Field Programmable Gate Array | 現(xiàn)場(chǎng)可編程門陣列 | [1] |
Fine-tuning | 精調(diào) | [1] |
Finite difference | 有限差分 | [1] |
Fixed point equation | 不動(dòng)點(diǎn)方程 | [1] |
Flipping output | 翻轉(zhuǎn)法 | [1] |
Fluctuation | 震蕩 | [1] |
Folk Theorem | 無名氏定理 | [1] |
Forget gate | 遺忘門 | [1] |
Forward stagewise algorithm | 前向分步算法 | [1] |
Fourier transform | 傅立葉變換 | [1] |
Frequentist | 頻率主義學(xué)派 | [1] |
Frequentist probability | 頻率派概率 | [1] |
Full-rank matrix | 滿秩矩陣 | [1] |
Functional derivative | 泛函導(dǎo)數(shù) | [1] |
Functional neuron | 功能神經(jīng)元 | [1] |
Letter G
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Gain ratio | 增益率 | [1] |
Game payoff | 博弈效用 | [1] |
Game theory | 博弈論 | [1] |
Gated recurrent net/GRN | 門控循環(huán)網(wǎng)絡(luò) | [1] |
Gaussian kernel function | 高斯核函數(shù) | [1] |
Gaussian Mixture Model | 高斯混合模型 | [1] |
Gaussian Process | 高斯過程 | [1] |
General Problem Solving | 通用問題求解 | [1] |
Generalization | 泛化 | [1] |
Generalization error | 泛化誤差 | [1] |
Generalization error bound | 泛化誤差上界 | [1] |
Generalized Lagrange function | 廣義拉格朗日函數(shù) | [1] |
Generalized linear model | 廣義線性模型 | [1] |
Generalized pseudolikelihood | 廣義偽似然 | [1] |
Generalized Rayleigh quotient | 廣義瑞利商 | [1] |
Generalized score matching | 廣義得分匹配 | [1] |
Generative Adversarial Networks/GAN | 生成對(duì)抗網(wǎng)絡(luò) | [1]/[2]/[3] |
Generative Model | 生成模型 | [1]/[2]/[3] |
Generative moment matching network | 生成矩匹配網(wǎng)絡(luò) | [1] |
Generator | 生成器 | [1] |
Genetic Algorithm/GA | 遺傳算法 | [1]/[2]/[3] |
Giant magnetoresistance | 巨磁阻 | [1] |
Gibbs sampling | 吉布斯采樣 | [1] |
Gini index | 基尼指數(shù) | [1] |
Global contrast normalization | 全局對(duì)比度歸一化 | [1] |
Global minimum | 全局最小 | [1] |
Global Optimization | 全局優(yōu)化 | [1] |
Gradient boosting tree | 梯度提升樹 | [1] |
Gradient Descent | 梯度下降 | [1] |
Gradient energy distribution | 梯度能量分布 | [1] |
Graph theory | 圖論 | [1] |
Grid search | 網(wǎng)格搜索 | [1] |
Ground-truth | 真相/真實(shí) | [1] |
Letter H
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Hard margin | 硬間隔 | [1] |
Hard voting | 硬投票 | [1] |
Harmonic mean | 調(diào)和平均 | [1] |
Hessian matrix | 海賽矩陣 | [1] |
Heterogeneous Information Network/HIN | 異質(zhì)信息網(wǎng)絡(luò) | [1] |
Hidden dynamic model | 隱動(dòng)態(tài)模型 | [1] |
Hidden layer | 隱藏層 | [1] |
Hidden Markov Model/HMM | 隱馬爾可夫模型 | [1] |
Hierarchical clustering | 層次聚類 | [1] |
Hilbert space | 希爾伯特空間 | [1] |
Hinge loss function | 合頁損失函數(shù) | [1] |
Hold-out | 留出法 | [1] |
Homogeneous | 同質(zhì) | [1] |
Hybrid computing | 混合計(jì)算 | [1] |
Hyperparameter | 超參數(shù) | [1]/[2] |
Hypothesis | 假設(shè) | [1] |
Hypothesis test | 假設(shè)檢驗(yàn) | [1] |
Letter I
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
ICML | 國(guó)際機(jī)器學(xué)習(xí)會(huì)議 | [1] |
Identity matrix | 單位矩陣 | [1] |
Image restoration | 圖像復(fù)原 | [1] |
Imperfect Information | 不完美信息 | [1] |
Improved iterative scaling/IIS | 改進(jìn)的迭代尺度法 | [1] |
Incremental learning | 增量學(xué)習(xí) | [1] |
Independent and identically distributed/i.i.d. | 獨(dú)立同分布 | [1] |
Independent Component Analysis/ICA | 獨(dú)立成分分析 | [1] |
Independent subspace analysis | 獨(dú)立子空間分析 | [1] |
Indicator function | 指示函數(shù) | [1] |
Individual learner | 個(gè)體學(xué)習(xí)器 | [1] |
Induction | 歸納 | [1] |
Inductive bias | 歸納偏好 | [1] |
Inductive learning | 歸納學(xué)習(xí) | [1] |
Inductive Logic Programming/ILP | 歸納邏輯程序設(shè)計(jì) | [1] |
Inequality constraint | 不等式約束 | [1] |
Inference | 推斷 | [1] |
Information entropy | 信息熵 | [1] |
Information gain | 信息增益 | [1] |
Input layer | 輸入層 | [1] |
Insensitive loss | 不敏感損失 | [1] |
Instance segmentation | 實(shí)例分割 | [1] |
Inter-cluster similarity | 簇間相似度 | [1] |
International Conference on Machine Learning/ICML | 國(guó)際機(jī)器學(xué)習(xí)大會(huì) | [1] |
Intra-cluster similarity | 簇內(nèi)相似度 | [1] |
Intrinsic value | 固有值 | [1] |
Invariance | 不變性 | [1] |
Invert | 求逆 | [1] |
Isometric Mapping/Isomap | 等度量映射 | [1] |
Isotonic regression | 等分回歸 | [1] |
Iterative Dichotomiser | 迭代二分器 | [1] |
Letter J
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Jensen-Shannon Divergence/JSD | JS 散度 | [1] |
Letter K
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Kernel method | 核方法 | [1] |
Kernel trick | 核技巧 | [1] |
Kernelized Linear Discriminant Analysis/KLDA | 核線性判別分析 | [1] |
K-fold cross validation | k 折交叉驗(yàn)證/k 倍交叉驗(yàn)證 | [1] |
K-Means Clustering | K-均值聚類 | [1] |
K-Nearest Neighbours Algorithm/KNN | K近鄰算法 | [1] |
Knowledge base | 知識(shí)庫 | [1] |
Knowledge Engineering | 知識(shí)工程 | [1] |
Knowledge graph | 知識(shí)圖譜 | [1]/[2]/[3] |
Knowledge Representation | 知識(shí)表征 | [1] |
Letter L
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Label space | 標(biāo)記空間 | [1] |
Lagrange duality | 拉格朗日對(duì)偶性 | [1] |
Lagrange multiplier | 拉格朗日乘子 | [1] |
Laplace smoothing | 拉普拉斯平滑 | [1] |
Laplacian correction | 拉普拉斯修正 | [1] |
Latent Dirichlet Allocation/LDA | 隱狄利克雷分布 | [1] |
Latent semantic analysis | 潛在語義分析 | [1] |
Latent variable | 隱變量 | [1] |
Law of large numbers | 大數(shù)定律 | [1] |
Layer-wise Adaptive Rate Scaling/LARS | 逐層自適應(yīng)學(xué)習(xí)率縮放 | [1] |
Lazy learning | 懶惰學(xué)習(xí) | [1] |
Leaky ReLU | 滲漏整流線性單元 | [1] |
Learner | 學(xué)習(xí)器 | [1] |
Learning by analogy | 類比學(xué)習(xí) | [1] |
Learning rate | 學(xué)習(xí)速率 | [1] |
Learning Vector Quantization/LVQ | 學(xué)習(xí)向量量化 | [1] |
Least squares regression tree | 最小二乘回歸樹 | [1] |
Leave-One-Out/LOO | 留一法 | [1] |
Lebesgue-integrable | 勒貝格可積 | [1] |
Left eigenvector | 左特征向量 | [1] |
Leibniz’s rule | 萊布尼茲法則 | [1] |
Linear Discriminant Analysis/LDA | 線性判別分析 | [1] |
Linear model | 線性模型 | [1] |
Linear Regression | 線性回歸 | [1]/[2] |
Linear threshold units | 線性閾值單元 | [1] |
Link function | 聯(lián)系函數(shù) | [1] |
Local conditional probability distribution | 局部條件概率分布 | [1] |
Local contrast normalization | 局部對(duì)比度歸一化 | [1] |
Local curvature | 局部曲率 | [1] |
Local Invariances | 局部不變性 | [1] |
Local Markov property | 局部馬爾可夫性 | [1] |
Local minimum | 局部最小 | [1] |
Log likelihood | 對(duì)數(shù)似然 | [1] |
Log odds/logit | 對(duì)數(shù)幾率 | [1] |
Logistic Regression | Logistic 回歸 | [1] |
Log-likelihood | 對(duì)數(shù)似然 | [1] |
Log-linear regression | 對(duì)數(shù)線性回歸 | [1] |
Long-Short Term Memory/LSTM | 長(zhǎng)短期記憶 | [1]/[2]/[3] |
Long-term dependency | 長(zhǎng)期依賴 | [1] |
Loopy belief propagation | 環(huán)狀信念傳播 | [1] |
Loss function | 損失函數(shù) | [1] |
Low rank matrix approximation | 低秩矩陣近似 | [1] |
Letter M
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Machine translation/MT | 機(jī)器翻譯 | [1] |
Macro-P | 宏查準(zhǔn)率 | [1] |
Macro-R | 宏查全率 | [1] |
Main diagonal | 主對(duì)角線 | [1] |
Majority voting | 絕對(duì)多數(shù)投票法 | [1] |
Manifold assumption | 流形假設(shè) | [1] |
Manifold learning | 流形學(xué)習(xí) | [1] |
Manifold tangent classifier | 流形正切分類器 | [1] |
Margin theory | 間隔理論 | [1] |
Marginal distribution | 邊緣分布 | [1] |
Marginal independence | 邊緣獨(dú)立性 | [1] |
Marginal probability distribution | 邊緣概率分布 | [1] |
Marginalization | 邊際化 | [1] |
Markov Chain | 馬爾可夫鏈 | [1] |
Markov Chain Monte Carlo/MCMC | 馬爾可夫鏈蒙特卡羅方法 | [1] |
Markov Random Field | 馬爾可夫隨機(jī)場(chǎng) | [1] |
Matrix inversion | 逆矩陣 | [1] |
Maximal clique | 最大團(tuán) | [1] |
Maximum A Posteriori | 最大后驗(yàn) | [1] |
Maximum Likelihood Estimation/MLE | 極大似然估計(jì)/極大似然法 | [1] |
Maximum margin | 最大間隔 | [1] |
Maximum weighted spanning tree | 最大帶權(quán)生成樹 | [1] |
Max-Pooling | 最大池化 | [1] |
Mean product of Student t-distribution | 學(xué)生 t 分布均值乘積 | [1] |
Mean squared error | 均方誤差 | [1] |
Mean-covariance restricted Boltzmann machine | 均值-協(xié)方差受限玻爾茲曼機(jī) | [1] |
Measure theory | 測(cè)度論 | [1] |
Meta-learner | 元學(xué)習(xí)器 | [1] |
Metric learning | 度量學(xué)習(xí) | [1] |
Micro-P | 微查準(zhǔn)率 | [1] |
Micro-R | 微查全率 | [1] |
Mini-Batch SGD | 小批次隨機(jī)梯度下降 | [1] |
Minimal Description Length/MDL | 最小描述長(zhǎng)度 | [1] |
Minimax game | 極小極大博弈 | [1] |
Misclassification cost | 誤分類成本 | [1] |
Mixture density network | 混合密度網(wǎng)絡(luò) | [1] |
Mixture of experts | 混合專家 | [1] |
Model predictive control (MPC) | 模型預(yù)測(cè)控制 | [1] |
Moment matching | 矩匹配 | [1] |
Momentum | 動(dòng)量 | [1] |
Monte Carlo Estimate | 蒙特卡洛估計(jì) | [1] |
Moore's Law | 摩爾定律 | [1] |
Moral graph | 道德圖/端正圖 | [1] |
Multi-class classification | 多類別分類 | [1] |
Multi-document summarization | 多文檔摘要 | [1] |
Multi-kernel learning | 多核學(xué)習(xí) | [1] |
Multi-layer feedforward neural networks | 多層前饋神經(jīng)網(wǎng)絡(luò) | [1] |
Multilayer Perceptron/MLP | 多層感知器 | [1] |
Multimodal learning | 多模態(tài)學(xué)習(xí) | [1] |
Multinomial distribution | 多項(xiàng)分布 | [1] |
Multidimensional Scaling/MDS | 多維縮放 | [1] |
Multiple linear regression | 多元線性回歸 | [1] |
Multi-response Linear Regression/MLR | 多響應(yīng)線性回歸 | [1] |
Multivariate normal distribution | 多維正態(tài)分布 | [1] |
Mutual information | 互信息 | [1] |
Letter N
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Naive bayes | 樸素貝葉斯 | [1] |
Naive Bayes Classifier | 樸素貝葉斯分類器 | [1] |
Named entity recognition | 命名實(shí)體識(shí)別 | [1] |
Nash equilibrium | 納什均衡 | [1] |
Nash reversion | 納什回歸 | [1] |
Natural language generation/NLG | 自然語言生成 | [1] |
Natural language processing | 自然語言處理 | [1]/[2]/[3] |
Nearest-neighbor search | 最近鄰搜索 | [1] |
Negative class | 負(fù)類 | [1] |
Negative correlation | 負(fù)相關(guān)法 | [1] |
Negative definite | 負(fù)定 | [1] |
Negative Log Likelihood | 負(fù)對(duì)數(shù)似然 | [1] |
Negative semidefinite | 半負(fù)定 | [1] |
Neighbourhood Component Analysis/NCA | 近鄰成分分析 | [1] |
Neural Machine Translation | 神經(jīng)機(jī)器翻譯 | [1] |
Neural Turing Machine | 神經(jīng)圖靈機(jī) | [1] |
Neuromorphic Computing | 神經(jīng)形態(tài)計(jì)算 | [1]/[2]/[3] |
Newton method | 牛頓法 | [1] |
Conference on Neural Information Processing Systems/NIPS | 國(guó)際神經(jīng)信息處理系統(tǒng)會(huì)議 | [1] |
No Free Lunch Theorem/NFL | 沒有免費(fèi)的午餐定理 | [1] |
Noise-contrastive estimation | 噪音對(duì)比估計(jì) | [1] |
Nominal attribute | 標(biāo)稱屬性 | [1] |
Non-convex optimization | 非凸優(yōu)化 | [1] |
Nonlinear model | 非線性模型 | [1] |
Non-linear oscillation | 非線性振蕩 | [1] |
Non-metric distance | 非度量距離 | [1] |
Non-negative matrix factorization | 非負(fù)矩陣分解 | [1] |
Non-ordinal attribute | 無序?qū)傩?/td> | [1] |
Non-Saturating Game | 非飽和博弈 | [1] |
Norm | 范數(shù) | [1] |
Normalization | 歸一化 | [1] |
Nuclear norm | 核范數(shù) | [1] |
Numerical attribute | 數(shù)值屬性 | [1] |
Numerical optimization | 數(shù)值優(yōu)化 | [1] |
Letter O
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Objective function | 目標(biāo)函數(shù) | [1] |
Oblique decision tree | 斜決策樹 | [1] |
Occam's razor | 奧卡姆剃刀 | [1] |
Odds | 幾率 | [1] |
Offline inference | 離線推斷 | [1] |
Off-Policy | 離策略 | [1] |
Offset vector | 偏移向量 | [1] |
One shot learning | 一次性學(xué)習(xí) | [1] |
One-Dependent Estimator/ODE | 獨(dú)依賴估計(jì) | [1] |
Online inference | 在線推斷 | [1] |
On-Policy | 在策略 | [1] |
Ordinal attribute | 有序?qū)傩?/td> | [1] |
Orthogonal matrix | 正交矩陣 | [1] |
Orthonormal | 標(biāo)準(zhǔn)正交 | [1] |
Outlier | 異常值/離群值 | [1][5] |
Out-of-bag estimate | 包外估計(jì) | [1] |
Output layer | 輸出層 | [1] |
Output smearing | 輸出調(diào)制法 | [1] |
Overcomplete | 過完備 | [1] |
Overfitting | 過擬合/過配 | [1] |
Oversampling | 過采樣 | [1] |
Letter P
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Paired t-test | 成對(duì) t 檢驗(yàn) | [1] |
Pairwise | 成對(duì)型 | [1] |
Pairwise Markov property | 成對(duì)馬爾可夫性 | [1] |
Parallel tempering | 并行回火 | [1] |
Parameter | 參數(shù) | [1] |
Parameter estimation | 參數(shù)估計(jì) | [1] |
Parameter Server | 參數(shù)服務(wù)器 | [1] |
Parameter tuning | 調(diào)參 | [1] |
Parse tree | 解析樹 | [1] |
Partial derivative | 偏導(dǎo)數(shù) | [1] |
Particle Swarm Optimization/PSO | 粒子群優(yōu)化算法 | [1] |
Part-of-speech tagging | 詞性標(biāo)注 | [1] |
Perceptron | 感知機(jī) | [1] |
Performance measure | 性能度量 | [1] |
Permutation invariant | 置換不變性 | [1] |
Perplexity | 困惑度 | [1] |
Pictorial structure | 圖形結(jié)構(gòu) | [1] |
Plug and Play Generative Network | 即插即用生成網(wǎng)絡(luò) | [1] |
Plurality voting | 相對(duì)多數(shù)投票法 | [1] |
Polarity detection | 極性檢測(cè) | [1] |
Polynomial Basis Function | 多項(xiàng)式基函數(shù) | [1] |
Polynomial kernel function | 多項(xiàng)式核函數(shù) | [1] |
Pooling | 池化 | [1] |
Positive class | 正類 | [1] |
Positive definite matrix | 正定矩陣 | [1] |
Posterior inference | 后驗(yàn)推斷 | [1] |
Posterior probability | 后驗(yàn)概率 | [1] |
Post-hoc test | 后續(xù)檢驗(yàn) | [1] |
Post-pruning | 后剪枝 | [1] |
Potential function | 勢(shì)函數(shù) | [1] |
Power method | 冪方法 | [1] |
Precision | 查準(zhǔn)率/精確率 | [1][5] |
Prepruning | 預(yù)剪枝 | [1] |
Principal component analysis/PCA | 主成分分析 | [1] |
Principle of multiple explanations | 多釋原則 | [1] |
Prior knowledge | 先驗(yàn)知識(shí) | [1] |
Probabilistic Graphical Model | 概率圖模型 | [1] |
Proximal Gradient Descent/PGD | 近端梯度下降 | [1] |
Pruning | 剪枝 | [1] |
Pseudo-label | 偽標(biāo)記 | [1] |
Letter Q
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Quadratic Programming | 二次規(guī)劃 | [1] |
Quantized Neural Network/QNN | 量化神經(jīng)網(wǎng)絡(luò) | [1] |
Quantum computer | 量子計(jì)算機(jī) | [1]/[2]/[3] |
Quantum Computing | 量子計(jì)算 | [1]/[2]/[3] |
Quantum machine learning | 量子機(jī)器學(xué)習(xí) | [1] |
Quasi-Newton method | 擬牛頓法 | [1] |
Quasi-concave | 擬凹 | [1] |
Letter R
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Radial Basis Function/RBF | 徑向基函數(shù) | [1] |
Random Forest Algorithm | 隨機(jī)森林算法 | [1] |
Random walk | 隨機(jī)漫步 | [1] |
Recall | 召回率/查全率 | [1] |
Receiver Operating Characteristic/ROC | 受試者工作特征 | [1] |
Rectified Linear Unit/ReLU | 修正線性單元 | [1] |
Recurrent Neural Network | 循環(huán)神經(jīng)網(wǎng)絡(luò) | [1]/[2]/[3] |
Recursive neural network | 遞歸神經(jīng)網(wǎng)絡(luò) | [1] |
Reference model | 參考模型 | [1] |
Regression | 回歸 | [1] |
Regularization | 正則化 | [1] |
Regularizer | 正則化項(xiàng) | [1] |
Reinforcement learning/RL | 強(qiáng)化學(xué)習(xí) | [1]/[2]/[3] |
Relative entropy | 相對(duì)熵 | [1] |
Reparametrization | 重參數(shù)化 | [1] |
Representation learning | 表征學(xué)習(xí) | [1] |
Representer theorem | 表示定理 | [1] |
Reproducing Kernel Hilbert Space/RKHS | 再生核希爾伯特空間 | [1] |
Re-sampling | 重采樣法 | [1] |
Rescaling | 再縮放 | [1] |
Reservoir computing | 儲(chǔ)層計(jì)算 | [1] |
Residual Blocks | 殘差塊 | [1] |
Residual Mapping | 殘差映射 | [1] |
Residual Network | 殘差網(wǎng)絡(luò) | [1] |
Restricted Boltzmann Machine/RBM | 受限玻爾茲曼機(jī) | [1] |
Restricted Isometry Property/RIP | 限定等距性 | [1] |
Reverse mode accumulation | 反向模式累加 | [1] |
Re-weighting | 重賦權(quán)法 | [1] |
Ridge regression | 嶺回歸 | [1] |
Robustness | 穩(wěn)健性/魯棒性 | [1] |
Root node | 根結(jié)點(diǎn) | [1] |
Rule Engine | 規(guī)則引擎 | [1] |
Rule learning | 規(guī)則學(xué)習(xí) | [1] |
Letter S
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Saddle point | 鞍點(diǎn) | [1] |
Saddle-free Newton method | 無鞍牛頓法 | [1] |
Saliency map | 顯著圖 | [1] |
Sample space | 樣本空間 | [1] |
Sampling | 采樣 | [1] |
Score function | 評(píng)分函數(shù) | [1] |
Second derivative | 二階導(dǎo)數(shù) | [1] |
Second-order method | 二階方法 | [1] |
Self-contrastive estimation | 自對(duì)比估計(jì) | [1] |
Self-Driving | 自動(dòng)駕駛 | [1]/[2]/[3] |
Self-Organizing Map/SOM | 自組織映射 | [1] |
Semantic hashing | 語義哈希 | [1] |
Semantic segmentation | 語義分割 | [1] |
Semantic similarity | 語義相似度 | [1] |
Semi-Definite Programming | 半正定規(guī)劃 | [1] |
Semi-naive Bayes classifiers | 半樸素貝葉斯分類器 | [1] |
Semi-restricted Boltzmann Machine | 半受限玻爾茲曼機(jī) | [1] |
Semi-Supervised Learning | 半監(jiān)督學(xué)習(xí) | [1]/[2]/[3] |
Semi-Supervised Support Vector Machine | 半監(jiān)督支持向量機(jī) | [1] |
Sentiment analysis | 情感分析 | [1] |
Separating hyperplane | 分離超平面 | [1] |
Shannon entropy | 香農(nóng)熵 | [1] |
Shift invariance | 平移不變性 | [1] |
Siamese Network | 孿生網(wǎng)絡(luò) | [1] |
Sigmoid function | Sigmoid 函數(shù)/S 型函數(shù) | [1]/[5] |
Similarity measure | 相似度度量 | [1] |
Simulated annealing | 模擬退火 | [1] |
Simultaneous localization and mapping/SLAM | 同步定位與地圖構(gòu)建 | [1] |
Singular value | 奇異值 | [1] |
Singular Value Decomposition | 奇異值分解 | [1] |
Slack variables | 松弛變量 | [1] |
Slowness principle | 慢性原則 | [1] |
Smoothing | 平滑 | [1] |
Smoothness prior | 平滑先驗(yàn) | [1] |
Soft margin | 軟間隔 | [1] |
Soft margin maximization | 軟間隔最大化 | [1] |
Soft voting | 軟投票 | [1] |
Sparse activation | 稀疏激活 | [1] |
Sparse coding | 稀疏編碼 | [1] |
Sparse connectivity | 稀疏連接 | [1] |
Sparse initialization | 稀疏初始化 | [1] |
Sparse representation | 稀疏表征 | [1] |
Sparsity | 稀疏性 | [1] |
Specialization | 特化 | [1] |
Spectral Clustering | 譜聚類 | [1] |
Spectral radius | 譜半徑 | [1] |
Speech Recognition | 語音識(shí)別 | [1]/[2]/[3] |
Spiking Neural Nets | 脈沖神經(jīng)網(wǎng)絡(luò) | [1] |
Splitting variable | 切分變量 | [1] |
Squashing function | 擠壓函數(shù) | [1] |
Stability-plasticity dilemma | 可塑性-穩(wěn)定性困境 | [1] |
Stacked Deconvolutional Network/SDN | 堆疊解卷積網(wǎng)絡(luò) | [1] |
Standard deviation | 標(biāo)準(zhǔn)差 | [1] |
Static game | 靜態(tài)博弈 | [1] |
Stationary distribution | 穩(wěn)態(tài)分布 | [1] |
Stationary point | 駐點(diǎn) | [1] |
Statistical learning | 統(tǒng)計(jì)學(xué)習(xí) | [1] |
Status feature function | 狀態(tài)特征函數(shù) | [1] |
Stochastic gradient descent | 隨機(jī)梯度下降 | [1] |
Stochastic Matrix | 隨機(jī)矩陣 | [1] |
Stochastic maximum likelihood | 隨機(jī)最大似然 | [1] |
Stochastic Neighbor Embedding | 隨機(jī)近鄰嵌入 | [1] |
Stratified sampling | 分層采樣 | [1] |
Structural risk | 結(jié)構(gòu)風(fēng)險(xiǎn) | [1] |
Structural risk minimization/SRM | 結(jié)構(gòu)風(fēng)險(xiǎn)最小化 | [1] |
Structured variational inference | 結(jié)構(gòu)化變分推斷 | [1] |
Subsampling | 下采樣 | [1] |
Subspace | 子空間 | [1] |
Supervised learning | 監(jiān)督學(xué)習(xí)/有導(dǎo)師學(xué)習(xí) | [1] |
Support vector expansion | 支持向量展式 | [1] |
Support Vector Machine/SVM | 支持向量機(jī) | [1] |
Surrogate loss | 替代損失 | [1] |
Surrogate function | 替代函數(shù) | [1] |
Symbolic learning | 符號(hào)學(xué)習(xí) | [1] |
Symbolism | 符號(hào)主義 | [1] |
Synset | 同義詞集 | [1] |
Synthetic feature | 合成特征 | [1] |
Letter T
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Tangent plane | 切平面 | [1] |
Tangent prop | 正切傳播 | [1] |
t-Distributed Stochastic Neighbor Embedding/t-SNE | t 分布隨機(jī)近鄰嵌入 | [1] |
Tempered transition | 回火轉(zhuǎn)移 | [1] |
Tensor | 張量 | [1] |
Tensor Processing Units/TPU | 張量處理單元 | [1] |
Least squares method | 最小二乘法 | [1] |
Threshold | 閾值 | [1] |
Threshold logic unit | 閾值邏輯單元 | [1] |
Threshold-moving | 閾值移動(dòng) | [1] |
Tiled convolution | 平鋪卷積 | [1] |
Time delay neural network | 時(shí)延神經(jīng)網(wǎng)絡(luò) | [1] |
Time Step | 時(shí)間步 | [1] |
Tractable | 易處理的 | |
Tokenization | 標(biāo)記化/分詞 | [1] |
Training error | 訓(xùn)練誤差 | [1] |
Training instance | 訓(xùn)練實(shí)例 | [1] |
Transductive learning | 直推學(xué)習(xí) | [1] |
Transfer learning | 遷移學(xué)習(xí)/轉(zhuǎn)移學(xué)習(xí) | [1]/[5] |
Treebank | 樹庫 | [1] |
Trial-and-error | 試錯(cuò)法 | [1] |
Triangulate | 三角形化 | [1] |
Trigram | 三元語法 | [1] |
True negative | 真負(fù)例 | [1]/[5] |
True positive | 真正例 | [1]/[5] |
True Positive Rate/TPR | 真正例率 | [1] |
Turing Machine | 圖靈機(jī) | [1] |
Twice-learning | 二次學(xué)習(xí) | [1] |
Two-dimensional array | 二維數(shù)組 | [1] |
Letter U
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Underestimation | 欠估計(jì) | [1] |
Underfitting | 欠擬合/欠配 | [1] |
Undersampling | 欠采樣 | [1] |
Understandability | 可理解性 | [1] |
Undirected graphical model | 無向圖模型 | [1] |
Unequal cost | 非均等代價(jià) | [1] |
Unit norm | 單位范數(shù) | [1] |
Unit test | 單元測(cè)試 | [1] |
Unit variance | 單位方差 | [1] |
Unitary matrix | 酉矩陣 | [1] |
Unit-step function | 單位階躍函數(shù) | [1] |
Univariate decision tree | 單變量決策樹 | [1] |
Unprojection | 反投影 | [1] |
Unshared convolution | 非共享卷積 | [1] |
Unsupervised learning | 無監(jiān)督學(xué)習(xí)/無導(dǎo)師學(xué)習(xí) | [1] |
Unsupervised layer-wise training | 無監(jiān)督逐層訓(xùn)練 | [1] |
Upper Confidence Bounds | 上置信界限 | [1] |
Upsampling | 上采樣 | [1] |
Letter V
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Vanishing Gradient Problem | 梯度消失問題 | [1] |
Variational derivative | 變分導(dǎo)數(shù) | [1] |
Variational free energy | 變分自由能 | [1] |
Variational inference | 變分推斷 | [1] |
VC Theory | VC維理論 | [1] |
Version space | 版本空間 | [1] |
Virtual adversarial example | 虛擬對(duì)抗樣本 | [1] |
Viterbi algorithm | 維特比算法 | [1] |
Von Neumann architecture | 馮·諾伊曼架構(gòu) | [1] |
Letter W
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Wasserstein GAN/WGAN | Wasserstein 生成對(duì)抗網(wǎng)絡(luò) | [1] |
Weak learner | 弱學(xué)習(xí)器 | [1] |
Weight | 權(quán)重 | [1] |
Weight decay | 權(quán)值衰減 | [1] |
Weight sharing | 權(quán)共享 | [1] |
Weighted voting | 加權(quán)投票法 | [1] |
Within-class scatter matrix | 類內(nèi)散度矩陣 | [1] |
Word embedding | 詞嵌入 | [1] |
Word sense disambiguation | 詞義消歧 | [1] |
Letter Z
英文/縮寫 | 漢語 | 來源&擴(kuò)展 |
---|---|---|
Zero mean | 零均值 | [1] |
Zero-data learning | 零數(shù)據(jù)學(xué)習(xí) | [1] |
Zero-shot learning | 零次學(xué)習(xí) | [1] |