Statistics for Data Analysis

1. Conditional probability

The conditional probability of A given B is P(A | B) = P(A ∩ B) / P(B), defined whenever P(B) > 0: the probability of A occurring once B is known to have occurred.


2. Bayes' theorem

P(A | B) = P(B | A) * P(A) / P(B)

P(A | B) is a conditional probability: the likelihood of event A occurring given that B is true.

P(B | A) is also a conditional probability: the likelihood of event B occurring given that A is true.

P(A) and P(B) are the probabilities of observing A and B independently of each other; these are known as the marginal probabilities.

Interpretations of Bayes' theorem:

Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a "likelihood function" derived from a statistical model for the observed data. Bayesian inference computes the posterior probability according to Bayes' theorem:

P(H | E) = P(E | H) * P(H) / P(E)

H: stands for any hypothesis whose probability may be affected by data (called evidence below). Often there are competing hypotheses, and the task is to determine which is the most probable.

P(H): the prior probability, is the estimate of the probability of the hypothesis H before the data E, the current evidence, is observed.

P(H | E): the posterior probability, is the probability of H given E, i.e., after E is observed. This is what we want to know: the probability of a hypothesis given the observed evidence.

P(E | H): is the probability of observing E given H, and is called the likelihood. As a function of E with H fixed, it indicates the compatibility of the evidence with the given hypothesis. The likelihood function is a function of the evidence, E, while the posterior probability is a function of the hypothesis, H.

P(E): is sometimes termed the marginal likelihood or "model evidence". This factor is the same for all possible hypotheses being considered (as is evident from the fact that the hypothesis H does not appear anywhere in the symbol, unlike for all the other factors), so this factor does not enter into determining the relative probabilities of different hypotheses.

For different values of H, only the factors P(H) and P(E | H), both in the numerator, affect the value of P(H | E) – the posterior probability of a hypothesis is proportional to its prior probability (its inherent likeliness) and the newly acquired likelihood (its compatibility with the new observed evidence).

Sometimes, Bayes' theorem can be written as:

P(H | E) = P(H) * [P(E | H) / P(E)]

where the factor P(E | H) / P(E) can be interpreted as the impact of E on the probability of H.
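To make the update concrete, here is a minimal numeric sketch in Python; the prevalence, sensitivity, and false-positive numbers are made up for illustration.

```python
# Hypothetical screening test: 1% prevalence (prior), 99% sensitivity
# P(E | H), and a 5% false-positive rate P(E | not H).
p_h = 0.01            # prior P(H)
p_e_given_h = 0.99    # likelihood P(E | H)
p_e_given_not_h = 0.05

# Marginal likelihood P(E) by the law of total probability.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior P(H | E) = P(E | H) * P(H) / P(E).
p_h_given_e = p_e_given_h * p_h / p_e
print(f"P(H | E) = {p_h_given_e:.4f}")  # ~0.1667: still unlikely despite a positive test
```

Even with a sensitive test, a small prior keeps the posterior modest, which is exactly the prior-times-likelihood behavior described above.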


3. Binomial distribution

The binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: a random variable containing a single bit of information: success/yes/true/one (with probability p) or failure/no/false/zero (with probability q = 1 - p).

In general, if the random variable X follows the binomial distribution with parameters n ∈ ? and p ∈ [0,1], we write X ~ B(n, p). The probability of getting exactly k successes in n trials is given by the probability mass function:

P(X = k) = C(n, k) p^k (1 - p)^(n - k)

for k = 0, 1, 2, …, n, where C(n, k) = n! / (k!(n - k)!) is the binomial coefficient.

The cumulative distribution function can be expressed as:

F(k; n, p) = P(X ≤ k) = Σ_(i=0)^(?k?) C(n, i) p^i (1 - p)^(n - i)

where "|k|"* is the "floor" under?k, i.e. the?greatest integer?less than or equal to?k.

Mean: E(X) = np; Variance: Var(X) = npq = np(1 - p); Mode: ?(n + 1)p? (and ?(n + 1)p? - 1 as well when (n + 1)p is an integer).
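A minimal pure-Python sketch of the pmf, cdf, mean, and variance; it assumes Python 3.8+ for math.comb, and n = 10, p = 0.3 are arbitrary illustration values.

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ B(n, p): C(n, k) p^k (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    """P(X <= k): sum of the pmf up to floor(k)."""
    return sum(binom_pmf(i, n, p) for i in range(int(k) + 1))

n, p = 10, 0.3
print(binom_pmf(3, n, p))        # ~0.2668
print(binom_cdf(3, n, p))        # ~0.6496
print(n * p, n * p * (1 - p))    # mean 3.0, variance 2.1
```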

Covariance between two binomials:

If two binomially distributed random variables X and Y are observed together, estimating their covariance can be useful. The covariance is Cov(X, Y) = E(XY) - μX * μY.

In the case n = 1 (the case of Bernoulli trials), XY is non-zero only when both X and Y are one, and μX and μY are equal to the two probabilities. Defining pB as the probability of both happening at the same time, this gives

Cov(X, Y) = pB - pX * pY

and for n independent pairwise trials,

Cov(X, Y) = n (pB - pX * pY)

In a bivariate setting involving random variables X and Y, there is a particular expectation that is often of interest. It is called covariance and is given by Cov(X, Y) = E((X - E(X))(Y - E(Y))), where the expectation is taken over the bivariate distribution of X and Y. Alternatively, Cov(X, Y) = E(XY) - E(X)E(Y).

Moreover, a scaled version of covariance is the correlation ρ, which is given by

ρ = Corr(X, Y) = Cov(X, Y) / (sqrt(Var(X)) * sqrt(Var(Y))), where Var(X) = σx^2

The correlation ρ is the population analogue of the sample correlation coefficient r that is used to describe the degree of linear relationship involving paired data.
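A small simulation can check the Bernoulli-case identity Cov(X, Y) = pB - pX * pY. This sketch assumes numpy is available; the marginals pX, pY and the joint probability pB are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
p_x, p_y, p_b = 0.5, 0.4, 0.3   # made-up marginals and joint P(X=1, Y=1)

# Joint distribution over the four (x, y) outcomes of one Bernoulli pair.
outcomes = np.array([(1, 1), (1, 0), (0, 1), (0, 0)])
probs = [p_b, p_x - p_b, p_y - p_b, 1 - p_x - p_y + p_b]

idx = rng.choice(4, size=100_000, p=probs)
x, y = outcomes[idx, 0], outcomes[idx, 1]

# The sample covariance should be close to p_b - p_x * p_y = 0.1.
print(np.cov(x, y)[0, 1])
```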


Confidence Interval

Assume that the total number of successes X ~ B(n, p), with np ≥ 5 and n(1 - p) ≥ 5, so that the normal approximation to the binomial is reasonable.

In practice, p is unknown. Under the normal approximation we have X ~ N(np, np(1 - p)), and we define p? = X/n as the proportion of successes. Since p? is a linear function of an approximately normal random variable, it follows that p? ~ N(p, p(1 - p)/n), and the probability statement is

P(-z_(α/2) ≤ (p? - p) / sqrt(p(1 - p)/n) ≤ z_(α/2)) = 1 - α

Let z_(α/2) denote the (1 - α/2)100-th percentile of the standard normal distribution. A (1 - α)100% approximate confidence interval for p (approximate because we use the normal approximation to the binomial and substitute p? for p) is given by

p? ± z_(α/2) sqrt(p?(1 - p?)/n)
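A minimal sketch of this interval; the counts (60 successes in 100 trials) are made up, and z = 1.96 gives the usual 95% level.

```python
from math import sqrt

def prop_ci(successes, n, z=1.96):
    """Approximate (1 - alpha)100% CI for p; z = 1.96 gives 95%."""
    p_hat = successes / n
    half = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

# e.g. 60 successes in 100 trials (np >= 5 and n(1 - p) >= 5 both hold)
print(prop_ci(60, 100))  # roughly (0.504, 0.696)
```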


4. Normal distribution

The normal (or Gaussian) distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.

If X ~ N(μ, σ^2), then E(X) = μ and Var(X) = σ^2.

Note that σ^2 is the variance, not the standard deviation.

A random variable Z ~ N(0, 1) is referred to as standard normal, and it has the simplified pdf:

φ(z) = (1/sqrt(2π)) e^(-z^2/2)

The relationship between an arbitrary normal random variable X ~ N(μ, σ^2) and the standard normal distribution is expressed via (X - μ)/σ ~ N(0, 1).

Confidence Interval

In this case, we assume X1, X2, ..., Xn iid N(μ, σ^2), where μ is unknown and σ is taken as known for ease of development. (In the real world, a known σ with an unknown μ is rarely encountered.)

X-bar ~ N(μ, σ^2/n)

Converting to the standard normal distribution and using z = 1.96, for which Φ(1.96) = 0.975:

P(-1.96 ≤ (X? - μ)/(σ/√n) ≤ 1.96) = 0.95

Rearranging terms:

P(X? - 1.96σ/√n ≤ μ ≤ X? + 1.96σ/√n) = 0.95

Finally, we obtain a 95% confidence interval for μ:

x? ± 1.96σ/√n

More generally, let z_(α/2) denote the (1 - α/2)100-th percentile of the standard normal distribution; a (1 - α)100% confidence interval for μ is given by

x? ± z_(α/2) σ/√n

Here we use the observed value x?; it is understood that confidence intervals are functions of observed statistics.

When n is large, it turns out that the sample standard deviation s provides a good estimate of σ. Therefore, we are able to provide a confidence interval for μ in the more realistic case where σ is unknown: we simply replace σ in the formulas above with s.
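A minimal sketch of both intervals (known σ, and σ replaced by s), assuming numpy; the simulated data and the parameters μ = 10, σ = 2 are arbitrary.

```python
import numpy as np

def z_interval(sample, sigma=None, z=1.96):
    """95% CI for the mean; falls back to the sample sd s when sigma is None."""
    x = np.asarray(sample, dtype=float)
    scale = sigma if sigma is not None else x.std(ddof=1)
    half = z * scale / np.sqrt(len(x))
    return x.mean() - half, x.mean() + half

rng = np.random.default_rng(1)
data = rng.normal(loc=10, scale=2, size=200)  # simulated sample, true mu = 10

print(z_interval(data, sigma=2))  # interval with sigma known
print(z_interval(data))           # sigma replaced by s (fine for large n)
```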


5. Descriptive statistics

It concerns the presentation of data (numerical or graphical) in a way that makes the data easier to digest.

Dotplot: for univariate data

    outliers: values that are unusually large or small

    centrality: values in the middle portion of the dotplot

    dispersion: spread or variation in the data

Histograms: for univariate data, when the size of the dataset n is fairly large

    modality: a histogram with two distinct humps is referred to as bimodal

    skewness: the asymmetry of the distribution about its peak

    symmetry: whether the left and right halves mirror each other

    how to choose the intervals on the x-axis: choose the number of intervals roughly equal to sqrt(n), where n is the number of observations.

    For intervals of unequal length, plot relative frequency divided by interval length on the vertical axis, instead of frequency.

Boxplot: for univariate data; most appropriately used when the data are divided into groups.

    the line inside the box is the sample median (Q2); the top edge is the 3/4 quantile (Q3); the bottom edge is the 1/4 quantile (Q1)

    interquartile range (IQR): Q3 - Q1, denoted ΔQ

    upper limit: Q3 + 1.5ΔQ (or the 90th percentile)

    lower limit: Q1 - 1.5ΔQ (or the 10th percentile)

    values outside these limits are outliers

    whiskers (vertical dashed lines) extend to the outer limits of the data, and circles correspond to outliers.

Scatterplot: it is appropriate for paired data

    extrapolation: when predicting, be cautious about predictions based on extrapolated data. A scatterplot of two variables X and Y may show a positive trend within the observed range, but that does not guarantee the same relationship holds beyond it. (Interpret data in their real-world context.)

Correlation coefficient: measures the degree of linear association between x and y

A numerical descriptive statistic for investigating paired data is the sample correlation coefficient r, defined by

r = Σ(xi - x?)(yi - ?) / sqrt(Σ(xi - x?)^2 * Σ(yi - ?)^2)

Properties:

    -1 ≤ r ≤ 1

    when r is close to 1, the points are clustered about a line with positive slope

    when r is close to -1, the points are clustered about a line with negative slope

    when r is close to 0, the points lack a linear relationship; however, there may still be, say, a quadratic relationship

    when x and y are correlated (r not close to 0), this merely denotes the presence of a linear association. For example, weight and height are positively correlated, and it is obviously wrong to state that one causes the other.

    in order to establish a cause-and-effect relationship, we should do a controlled study.

    r is also easy to calculate, as the sketch below shows.
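A minimal sketch computing r straight from the definition and checking it against numpy's corrcoef; the paired data are made up.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # toy paired data
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# r from the definition: cross-deviations over the product of deviation norms.
r = ((x - x.mean()) * (y - y.mean())).sum() / np.sqrt(
    ((x - x.mean())**2).sum() * ((y - y.mean())**2).sum()
)
print(r, np.corrcoef(x, y)[0, 1])  # the two values agree
```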


6. Law of large numbers

The average of the results obtained from a large number of trials should be close to the expected value, and it tends to become closer as more trials are performed.
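A quick simulation of the law, assuming numpy: the running mean of fair die rolls drifts toward E(X) = 3.5 as the number of trials grows.

```python
import numpy as np

rng = np.random.default_rng(42)
rolls = rng.integers(1, 7, size=100_000)   # fair die, E(X) = 3.5

# Running mean after each trial; it settles toward 3.5 as n grows.
running_mean = np.cumsum(rolls) / np.arange(1, len(rolls) + 1)
for n in (10, 100, 10_000, 100_000):
    print(n, running_mean[n - 1])
```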

7. Central limit theorem

For the CLT, we assume that the random variables X1, X2, ..., Xn are iid* from a population with mean μ and variance σ^2. The CLT states that as n → ∞, the distribution of (X? - μ)/(σ/√n) converges to the distribution of a standard normal random variable.

*iid: independent and identically distributed, which means the X's are independent of one another and arise from the same probability distribution.

Take a random sample of n observations from a population with mean μ and standard deviation σ. When n is sufficiently large, the sampling distribution of x? is approximately normal with mean μ_x? = μ and standard deviation σ_x? = σ/√n. The larger the sample size, the better the normal approximation to the sampling distribution of x?.

In probability theory, the central limit theorem (CLT) establishes that, in some situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed.

Requirements:

1. The population itself is not required to be normally distributed.

2. Each sample must be sufficiently large, though not excessively so; n ≥ 30 is the usual rule of thumb.

In theory, the central limit theorem guarantees that we can infer a population's statistical parameters by sampling only a portion of it, as the sketch below illustrates.
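A minimal sketch, assuming numpy: standardized sample means drawn from an exponential population (decidedly non-normal) behave like N(0, 1) draws once n = 30.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n = 1.0, 1.0, 30        # exponential(1): mean 1, sd 1

# 10,000 standardized sample means from a skewed population.
samples = rng.exponential(scale=1.0, size=(10_000, n))
z = (samples.mean(axis=1) - mu) / (sigma / np.sqrt(n))

# Close to the standard normal benchmarks 0 and 1.
print(z.mean(), z.std(ddof=1))
# About 95% should fall within +/-1.96, as for N(0, 1).
print(np.mean(np.abs(z) < 1.96))
```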


8. Linear regression (gradient descent and cost functions from the machine-learning perspective are not covered here)

Linear regression predicts the value of a variable Y (the dependent variable) from a variable X (the independent variable), provided there is a linear relationship between X and Y.

Y = b0 + b1*X + e

(Recall that the regression equation without the error term, ? = b0 + b1*x, is called the least squares line.)

(Figure: a shallow-sloped estimated regression line, ?.)

SSTO, a.k.a. SST, total sum of squares: Σ(yi - ?)^2, the sum of squared differences between each data point yi and the mean of y

SSE, error sum of squares: Σ(yi - ?i)^2, the sum of squared differences between each data point yi and the estimated regression line

SSR, regression sum of squares: Σ(?i - ?)^2, which quantifies how far the estimated sloped regression line ?i is from the horizontal "no relationship" line, the sample mean ?.

SST = SSR + SSE

In the example these numbers come from, most of the variation in the response y (SSTO = 1827.6) is due simply to random variation (SSE = 1708.5), not to the regression of y on x (SSR = 119.1).

Coefficient of determination, r^2:

r^2 = SSR/SSTO = 1 - SSE/SSTO, which lies between 0 and 1

    If r^2 = 1, all data points fall perfectly on the regression line. The predictor x accounts for all of the variation in y!

    If r^2 = 0, the estimated regression line is perfectly horizontal. The predictor x accounts for none of the variation in y!

    r^2 × 100 percent of the variation in y is 'explained by' the variation in predictor x.

    SSE is the amount of variation that is left unexplained by the model.

    R-squared cautions:

        1. The coefficient of determination r^2 and the correlation coefficient r quantify the strength of a linear relationship. It is possible that r^2 = 0% and r = 0, suggesting there is no linear relation between x and y, and yet a perfect curved (or "curvilinear") relationship exists.

        [The most commonly misinterpreted point] 2. A large r^2 value should not be interpreted as meaning that the estimated regression line fits the data well. For example, in a regression of US population on year, the R-squared value is 92%, so only 8% of the variation in US population is left to explain after taking year into account in a linear way; yet the plot suggests that a curve would describe the relationship even better. (The large value does suggest that taking year into account is better than not doing so; it just doesn't tell us whether we could still do better.)

        3. The coefficient of determination r^2 and the correlation coefficient r can both be greatly affected by just one data point (or a few data points).

        4. Correlation (or association) does not imply causation.

VIF, variance inflation factor: 1/(1 - Rj^2), where Rj^2 is the R-squared from regressing the j-th explanatory variable on all the others.

    VIF checks for collinearity among the explanatory variables; a value over 5 is generally considered problematic.

A numeric check of the SSTO = SSR + SSE decomposition appears below.
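A minimal sketch, assuming numpy, that fits a least squares line to simulated data and verifies the decomposition and r^2 = SSR/SSTO; the intercept, slope, and noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.8 * x + rng.normal(scale=2.0, size=x.size)  # simulated line + noise

# Closed-form least squares slope and intercept.
b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean())**2).sum()
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

ssto = ((y - y.mean())**2).sum()   # total sum of squares
sse = ((y - y_hat)**2).sum()       # error sum of squares
ssr = ((y_hat - y.mean())**2).sum()  # regression sum of squares

print(ssto, ssr + sse)   # SSTO = SSR + SSE (up to rounding)
print(ssr / ssto)        # coefficient of determination r^2
```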


9. Hypothesis test

H0: null hypothesis; H1: alternative hypothesis.

Testing begins by assuming that H0 is true, and data is collected in an attempt to establish the truth of H1.

H0 is usually what you would typically expect (i.e., H0 represents the status quo).

In the inference step, we calculate a p-value, defined as the probability of observing data as extreme as or more extreme than (in the direction of H1) what we observed, given that H0 is true.

Significance level: α, usually 0.01 or 0.05.

    If the p-value is less than α, reject H0;

    if the p-value is larger than α, fail to reject H0 (a worked sketch follows).
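A minimal sketch of a two-sided one-sample z-test using only the standard library; the sample mean, σ, and n are made up.

```python
from math import erf, sqrt

def z_test_p_value(x_bar, mu0, sigma, n):
    """Two-sided p-value for H0: mu = mu0 under the normal model."""
    z = (x_bar - mu0) / (sigma / sqrt(n))
    phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))   # standard normal CDF
    return 2 * (1 - phi(abs(z)))

# e.g. observed mean 10.5 from n = 36 with sigma = 1.5, testing H0: mu = 10
p = z_test_p_value(10.5, 10.0, 1.5, 36)
print(p, "reject H0" if p < 0.05 else "fail to reject H0")  # p ~ 0.0455
```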

......


10. Model Selection: AIC, BIC, Normality, Homoscedasticity, Outlier Detection

When fitting models, it is possible to increase the likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC attempt to resolve this problem by introducing a penalty term for the number of parameters in the model.

AIC, Akaike information criterion: 2k - 2ln(L), where k is the number of parameters in the model (or the number of degrees of freedom being used up) and ln(L) is the log-likelihood, a measure of how well the model fits the data. Lower AIC is better; 2k is the 'penalty' term.

AIC measures both goodness of fit and complexity (number of terms).

Compared with the proportion of variance explained, R^2, AIC also accounts for complexity; R^2 only measures goodness of fit.

However, because of collinearity, one variable may sometimes be 'stealing' the significance from some other term. AIC doesn't care which terms are significant; it just looks at how well the model fits as a whole.

BIC, Bayesian information criterion: ln(n)*k - 2ln(L), where n is the number of observations (the sample size) and k stands for the number of parameters (df).

BIC is similar to AIC but imposes a larger penalty for complexity; lower BIC is better, and BIC favors simpler models among a set of candidates. Moreover, because its penalty grows with n, BIC is less likely than AIC to retain unimportant variables when n is large.
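A minimal sketch of both criteria; the parameter counts and log-likelihoods are made-up stand-ins for two fitted models on the same data.

```python
from math import log

def aic(k, log_l):
    """Akaike information criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_l

def bic(k, log_l, n):
    """Bayesian information criterion: ln(n) k - 2 ln(L)."""
    return log(n) * k - 2 * log_l

# Two hypothetical models fitted to the same n = 100 observations.
print(aic(3, -120.0), bic(3, -120.0, 100))   # simpler model
print(aic(6, -117.5), bic(6, -117.5, 100))   # more parameters, higher likelihood
# The extra parameters must buy enough likelihood to pay their penalty;
# here BIC penalizes the larger model much more heavily than AIC does.
```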


When selecting models, one criterion (AIC/BIC) is not sufficient to cover all the aspects of the model.

We also need to check for influential outliers, homoscedasticity (equal variance), and normality.

Residual analysis is used to check the properties mentioned above.


To check normality: use the Shapiro–Wilk test

It is a hypothesis test whose null hypothesis is 'the data are normally distributed'.

A large p-value means we fail to reject H0, so there is no evidence against normality; a small p-value means we reject H0, so there is evidence of non-normality.
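A minimal sketch, assuming scipy is available, applying the test to a simulated normal sample and a simulated skewed one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
normal_data = rng.normal(size=200)
skewed_data = rng.exponential(size=200)

# H0: the data are normally distributed.
for name, data in [("normal", normal_data), ("skewed", skewed_data)]:
    stat, p = stats.shapiro(data)
    print(name, round(p, 4), "reject H0" if p < 0.05 else "fail to reject H0")
```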


To check homoscedasticity: use the Levene test

This is again a hypothesis test, with null hypothesis: all input samples are from populations with equal variances.
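A minimal sketch, assuming scipy, with one simulated group given a deliberately larger spread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
a = rng.normal(0, 1.0, size=100)
b = rng.normal(0, 1.1, size=100)
c = rng.normal(0, 3.0, size=100)   # clearly larger spread

# H0: all input samples come from populations with equal variances.
stat, p = stats.levene(a, b, c)
print(round(p, 4), "reject H0" if p < 0.05 else "fail to reject H0")
```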


Outlier detection: statistical methods only; approaches from the data-mining side are not covered here.

noise: it is random error or variance in a measured variable

    noise should be removed before outlier detection.

outlier: a data object that deviates significantly from the normal objects, as if it were generated by a different mechanism; it violates the mechanism that generates the normal data.

Parametric Methods I: detecting univariate outliers based on the normal distribution

    the μ ± 3σ region contains 99.7% of the data; outliers lie outside this region.
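A minimal sketch, assuming numpy, that plants two outliers in a simulated sample and flags everything outside μ ± 3σ:

```python
import numpy as np

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(0, 1, 1000), [8.0, -9.5]])  # two planted outliers

mu, sigma = data.mean(), data.std(ddof=1)
outliers = data[np.abs(data - mu) > 3 * sigma]
print(outliers)   # the planted points (and any extreme draws) are flagged
```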

Parametric Methods II: detecting multivariate outliers.

    bottom line: transform the multivariate outlier detection task into a univariate outlier detection problem

        use the χ^2-statistic (chi-square statistic):

        χ^2 = Σ_i (Oi - Ei)^2 / Ei

        where Oi is the observed value, Ei is the expected value, and i indexes the i-th position in the contingency table.

        If the χ^2-statistic is large, then the object Oi is an outlier.

        A low value for chi-square means there is a high correlation between your two sets of data. In theory, if your observed and expected values were equal ("no difference"), then chi-square would be zero, an event that is unlikely to happen in real life. You can compare your calculated chi-square value to a critical value from a chi-square table; if the chi-square value is greater than the critical value, then there is a significant difference.

        A chi-square statistic is one way to show a relationship between two categorical variables. In statistics, there are two types of variables: numerical (countable) variables and non-numerical (categorical) variables. The chi-square statistic is a single number that tells you how much difference exists between your observed counts and the counts you would expect if there were no relationship at all in the population; the sketch below runs this comparison.
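A minimal sketch of the statistic and the critical-value comparison, assuming scipy for the chi-square quantile; the observed and expected counts are made up.

```python
import numpy as np
from scipy import stats

observed = np.array([48, 35, 15, 2])   # made-up category counts
expected = np.array([40, 40, 15, 5])   # counts under "no difference"

# Chi-square statistic: sum of (O_i - E_i)^2 / E_i over the cells.
chi2 = ((observed - expected)**2 / expected).sum()

# 95th-percentile critical value with df = number of cells - 1.
critical = stats.chi2.ppf(0.95, df=len(observed) - 1)
print(chi2, critical,
      "significant difference" if chi2 > critical else "not significant")
```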

[Omitted] Parametric Methods III: using a mixture of parametric distributions

Outlier detection is a big topic that could be expanded into an article of its own; let me stop here within the statistics topic.


Statistics notation:

Note that statistics are quantities that we can calculate, and therefore they do not depend on unknown parameters. Moreover, statistics have associated probability distributions, and we are sometimes interested in the distributions of statistics.

Capital letters, in statistics, usually denote random variables.

MLE: maximum likelihood estimate

MSE: mean squared error

RMSE: root mean squared error

r^2: coefficient of determination

SE: standard error

SEM: standard error of the mean

SS: sum of squares

SSE: error sum of squares of the prediction

SSR: regression sum of squares (note: some texts instead use SSR for the sum of squared residuals, i.e. what is called SSE above)

SST: total sum of squares

