M1
two types of features: continuous and categorical
LAB
ASS 1. r'' (raw strings)
Pandas raises 'CSV does not exist' when loading the file into a DataFrame:
df = pd.read_csv("E:\\Inbox\Python\\DAT210x-master\\Module2\Datasets\\tutorial.csv")  # broken: mixes \\ and \
ASS2
df.columns = ['motor', 'screw', 'pgain', 'vgain', 'class']
Re ASS 1: the file exists, but Python says it does not. The unescaped backslashes break the path: Python can read sequences like \t as escape characters (a tab), so the path string no longer matches the real file.
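A minimal sketch of the fix: escape every backslash, use the raw-string prefix noted above, or switch to forward slashes; all three name the same file.
import pandas as pd

# every backslash escaped
df = pd.read_csv("E:\\Inbox\\Python\\DAT210x-master\\Module2\\Datasets\\tutorial.csv")
# raw string: backslashes are taken literally
df = pd.read_csv(r"E:\Inbox\Python\DAT210x-master\Module2\Datasets\tutorial.csv")
# forward slashes also work on Windows
df = pd.read_csv("E:/Inbox/Python/DAT210x-master/Module2/Datasets/tutorial.csv")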
M3. Exploring Data
Lecture: Visualizations
Lecture: Basic Plots
1. Histograms
- Histograms help you understand the distribution of a feature in your dataset
- They accomplish this by simultaneously answering two questions: where in your feature's domain your records are located, and *how many* records exist there
- Histograms are only really meaningful with categorical data
Coincidentally, these two questions are also answered by the .unique() and .value_counts() methods discussed in the feature wrangling section, but in a graphical way. Be sure to take note of this in the exploring section of your course map!
If you have a continuous feature, it must first be *binned* or discretized: transform the continuous feature into a categorical one by grouping similar values together.
If your interest lies in probabilities per bin rather than frequency counts, set the named parameter normed=True (renamed density=True in newer Matplotlib versions), which will normalize your results. MatPlotLib's online API documentation exposes many other features and optional parameters that can be used with histograms, such as cumulative and histtype.
= .plot.hist()
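A minimal sketch, assuming the tutorial DataFrame loaded in the lab above (the sketch uses density=True, since normed was removed from newer Matplotlib releases):
import matplotlib.pyplot as plt

# frequency counts per bin for a single feature
df.pgain.plot.hist(title='pgain')
plt.show()

# probabilities per bin instead of raw counts
df.pgain.plot.hist(density=True, title='pgain (normalized)')
plt.show()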
2. 2D Scatter Plots
2D scatter plots are used to visually inspect whether a correlation exists between the charted features.
- They don't have to be continuous, but they must at least be ordinal.
Without ordering, the position of the plotted points would have no meaning.
This is your basic 2D scatter plot. Notice you have to call .scatter on a dataframe rather than a series, since two features are needed rather than just one. You also have to specify which features within the dataset you want graphed. You'll be using scatter plots so frequently in your data analysis you should also know how to create them directly from MatPlotLib, in addition to knowing how to graph them from Pandas. This is because many Pandas methods actually return regular NumPy NDArrays, rather than fully qualified Pandas dataframes.
= .plot.scatter(x = '', y = '')
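A minimal sketch of both routes, assuming the student_dataset used in the 3D example below:
import matplotlib.pyplot as plt

# Pandas route: call .scatter on the DataFrame, naming both features
student_dataset.plot.scatter(x='G1', y='G3')
plt.show()

# MatPlotLib route: handy when a method hands you back plain NumPy arrays
plt.scatter(student_dataset.G1, student_dataset.G3, marker='.')
plt.xlabel('First Grade')
plt.ylabel('Final Grade')
plt.show()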
3. 3D Scatter Plots
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel('First Grade')
ax.set_ylabel('Final Grade')
ax.set_zlabel('Daily Alcohol')
# x=G1 (first grade), y=G3 (final grade), z=Dalc (daily alcohol)
ax.scatter(student_dataset.G1, student_dataset.G3, student_dataset['Dalc'], c='r', marker='.')
plt.show()
Higher Dimensionality Visualizations
Lab: Visualizations
Assignment 1
- Load the data into a DataFrame
- Slice it
- Generate histograms
- Which feature looks like a Gaussian / normal distribution? Which has more variance?
Assignment 2
- 2d scatter plot
- 3D scatter plot
- Be sure to label your axes, and use the optional display parameter c='red'
- Parallel coordinates chart (this and the Andrews curve are sketched after this list)
- Andrews curve plot
- Drop the id column
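A minimal sketch of both charts, assuming the assignment data sits in df with the id column already dropped and its class labels in a column named 'class' (hypothetical name):
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates, andrews_curves

# one polyline per sample, one vertical axis per feature
parallel_coordinates(df, 'class')
plt.show()

# each sample smoothed into a curve; similar samples produce similar curves
andrews_curves(df, 'class')
plt.show()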
I assumed it was simpler than it is: from M3 onward, labs are no longer all placed at the end; instead, each quiz is paired with a lab.
Transforming Data
- To remove redundant or even poor features from your dataset, and to give your machine learning algorithm a chance to succeed, the features you keep need to be discerning, discriminating, and independent.
A transformer is any algorithm you apply to your dataset that changes either the feature count or feature values, but does not alter the number of observations
Another popular transformer use is that of dimensionality reduction, where the number of features in your dataset is intelligently reduced to a subset of the original.
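A minimal sketch of the transformer pattern in scikit-learn, assuming a numeric feature matrix X (hypothetical): StandardScaler changes feature values but leaves the number of observations untouched.
from sklearn.preprocessing import StandardScaler

# rescale every feature to zero mean and unit variance;
# shape stays (n_samples, n_features)
X_scaled = StandardScaler().fit_transform(X)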
Principal Component Analysis
- Unsupervised learning aims to discover some type of hidden structure within your data.
Without a label or correct answer to test against, there is no metric for evaluating unsupervised learning algorithms. Principal Component Analysis (PCA), a transformation that attempts to convert your possibly correlated features into a set of linearly uncorrelated ones, is the first unsupervised learning algorithm you'll study.
PCA belongs to the group of dimensionality reduction algorithms.
PCA's approach to dimensionality reduction is to derive a set of degrees of freedom that can then be used to reproduce most of the variability of your data.
Imagine looking at a telephone pole from directly above, a bird's-eye view: maybe you could still tell what it was, but you probably couldn't, since that view doesn't contain enough variance, or information, to be easily discernible as a telephone pole.
Stated differently, it accesses your dataset's covariance structure directly using matrix calculations and eigenvectors to compute the best unique features that describe your samples.
An iterative approach to this would first find the *center* of your data, based off its numeric features. Next, it would search for the direction that has the most variance, or widest spread of values. That direction is the principal component vector, so it is added to a list. By searching for more directions of maximal variance that are orthogonal to all previously computed vectors, more principal components can be added to the list. This set of vectors forms a new feature space in which you can represent your samples.
In our telephone pole example, the frontal view had more variance than the bird's-eye view and so it was preferred by PCA.
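A minimal sketch of this with scikit-learn's PCA, again assuming a numeric feature matrix X (hypothetical):
from sklearn.decomposition import PCA

pca = PCA(n_components=2)    # keep the 2 directions of maximal variance
T = pca.fit_transform(X)     # (n_samples, n_features) -> (n_samples, 2)

print(pca.components_)                # the principal component vectors
print(pca.explained_variance_ratio_)  # variance captured by each component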
When Should I Use PCA?
PCA, and in fact all dimensionality reduction methods, have three main uses:
- To handle the clear goal of reducing the dimensionality, and thus the complexity, of your dataset.
- To pre-process your data in preparation for other supervised learning tasks, such as regression and classification.
- To make visualizing your data easier, since we can only perceive three dimensions simultaneously (see the sketch below).
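For the third use, a minimal sketch, again assuming a feature matrix X: project onto two components so the samples fit on a flat chart.
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

T = PCA(n_components=2).fit_transform(X)
plt.scatter(T[:, 0], T[:, 1], marker='.')
plt.xlabel('Component 1')
plt.ylabel('Component 2')
plt.show()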
By using PCA rather than creating categories manually, you let it *discover* the natural categories that exist in your data.
One warning, again: being unsupervised, PCA can't tell you exactly what the newly created components or features mean.