Hands-On Machine Learning with Scikit-Learn and TensorFlow, Appendix B: Machine Learning Project Checklist

Machine Learning Project Checklist

  1. Frame the problem and look at the big picture.
  2. Get the data.
  3. Explore the data to gain insights.
  4. Prepare the data to better expose the underlying data patterns to Machine Learning algorithms.
  5. Explore many different models and short-list the best ones.
  6. Fine-tune your models and combine them into a great solution.
  7. Present your solution.
  8. Launch, monitor, and maintain your system.

Frame the Problem and Look at the Big Picture

  1. Define the objective in business terms.
  2. How will your solution be used?
  3. What are the current solutions/workarounds (if any)?
  4. How should you frame this problem (supervised/unsupervised, online/offline, etc.)?
  5. How should performance be measured?
  6. Is the performance measure aligned with the business objective?
  7. What would be the minimum performance needed to reach the business objective?
  8. What are comparable problems? Can you reuse experience or tools?
  9. Is human expertise available?
  10. How would you solve the problem manually?
  11. List the assumptions you (or others) have made so far.
  12. Verify assumptions if possible.

Get the Data

Note: automate as much as possible so you can easily get fresh data.

  1. List the data you need and how much you need.
  2. Find and document where you can get that data.
  3. Check how much space it will take.
  4. Check legal obligations, and get authorization if necessary.
  5. Get access authorizations.
  6. Create a workspace (with enough storage space).
  7. Get the data.
  8. Convert the data to a format you can easily manipulate (without changing the
    data itself).
  9. Ensure sensitive information is deleted or protected (e.g., anonymized).
  10. Check the size and type of data (time series, sample, geographical, etc.).
  11. Sample a test set, put it aside, and never look at it (no data snooping!).
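Step 11 can be sketched with scikit-learn's `train_test_split`; a minimal sketch on a hypothetical dataset (the feature matrix and target here are made up for illustration):

```python
# Sample a test set once, set it aside, and never look at it again.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(42)
X = rng.rand(1000, 3)                # hypothetical feature matrix
y = (X[:, 0] > 0.5).astype(int)      # hypothetical binary target

# A fixed random_state keeps the same rows in the test set on every
# run, and stratify=y preserves the class ratio in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```

Saving the test indices (or the seed) to disk is another way to guarantee the held-out set never changes between runs.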

Explore the Data

Note: try to get insights from a field expert for these steps.

  1. Create a copy of the data for exploration (sampling it down to a manageable size if
    necessary).
  2. Create a Jupyter notebook to keep a record of your data exploration.
  3. Study each attribute and its characteristics:
    - Name
    - Type (categorical, int/float, bounded/unbounded, text, structured, etc.)
    - % of missing values
    - Noisiness and type of noise (stochastic, outliers, rounding errors, etc.)
    - Possibly useful for the task?
    - Type of distribution (Gaussian, uniform, logarithmic, etc.)
  4. For supervised learning tasks, identify the target attribute(s).
  5. Visualize the data.
  6. Study the correlations between attributes.
  7. Study how you would solve the problem manually.
  8. Identify the promising transformations you may want to apply.
  9. Identify extra data that would be useful.
  10. Document what you have learned.
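The per-attribute study in step 3 can be automated with pandas; a minimal sketch, assuming a small hypothetical DataFrame (the column names are invented for illustration):

```python
# Study each attribute on a copy of the data: type, % missing,
# cardinality, and numeric correlations.
import numpy as np
import pandas as pd

df = pd.DataFrame({                       # hypothetical dataset
    "rooms":  [3.0, 4.0, np.nan, 5.0, 4.0],
    "income": [2.5, 3.1, 4.0, np.nan, 5.2],
    "ocean":  ["NEAR", "INLAND", "NEAR", "NEAR", "INLAND"],
})
explore = df.copy()                       # keep the original intact

# One row per attribute: dtype, percentage missing, distinct values.
summary = pd.DataFrame({
    "dtype": explore.dtypes.astype(str),
    "pct_missing": explore.isna().mean() * 100,
    "n_unique": explore.nunique(),
})

# Pairwise correlations between the numeric attributes (step 6).
corr = explore[["rooms", "income"]].corr()
```

Running `summary` and `corr` in a Jupyter notebook cell keeps this record alongside the exploration notes, as step 2 suggests.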

Prepare the Data

Notes:
- Work on copies of the data (keep the original dataset intact).
- Write functions for all data transformations you apply, for five reasons:
  - So you can easily prepare the data the next time you get a fresh dataset
  - So you can apply these transformations in future projects
  - To clean and prepare the test set
  - To clean and prepare new data instances once your solution is live
  - To make it easy to treat your preparation choices as hyperparameters

  1. Data cleaning:
    - Fix or remove outliers (optional).
    - Fill in missing values (e.g., with zero, mean, median…) or drop their rows
    (or columns).
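One way to fill missing values with scikit-learn is `SimpleImputer`; a minimal sketch with median imputation on a toy array:

```python
# Replace each NaN with the median of its column.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

imputer = SimpleImputer(strategy="median")
X_clean = imputer.fit_transform(X)
# Column medians (ignoring NaNs): 4.0 and 2.5, so both NaNs
# are replaced by those values.
```

Fitting the imputer on the training set and reusing it on the test set and on live data is exactly the "write functions for all transformations" note above, packaged as a reusable object.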

  2. Feature selection (optional):
    - Drop the attributes that provide no useful information for the task.

  3. Feature engineering, where appropriate:
    - Discretize continuous features.
    - Decompose features (e.g., categorical, date/time, etc.).
    - Add promising transformations of features (e.g., log(x), sqrt(x), x^2, etc.).
    - Aggregate features into promising new features.

  4. Feature scaling: standardize or normalize features.
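The preparation steps above can be chained in a scikit-learn `Pipeline`, so the exact same transformations run on the training set, the test set, and live instances; a minimal sketch combining imputation and standardization:

```python
# Chain data preparation steps: median imputation, then standardization.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

prep = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),      # feature scaling (step 4)
])

X_train = np.array([[1.0, 200.0],
                    [2.0, np.nan],
                    [3.0, 600.0]])

# fit_transform on the training set only; later call prep.transform(...)
# on the test set or on new instances with the fitted statistics.
X_prepared = prep.fit_transform(X_train)
```

Because each step is a named pipeline stage, its parameters (e.g., the imputation strategy) can later be tuned as hyperparameters, as the notes above recommend.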

Short-List Promising Models

Notes:
- If the data is huge, you may want to sample smaller training sets so you can
  train many different models in a reasonable time (be aware that this penalizes
  complex models such as large neural nets or Random Forests).
- Once again, try to automate these steps as much as possible.

  1. Train many quick and dirty models from different categories (e.g., linear,
    naive Bayes, SVM, Random Forests, neural net, etc.) using standard parameters.

  2. Measure and compare their performance.
    - For each model, use N-fold cross-validation and compute the mean
    and standard deviation of the performance measure on the N folds.
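In scikit-learn this is `cross_val_score`; a minimal sketch on a synthetic dataset (the model and data here are placeholders for whatever quick-and-dirty candidates you trained in step 1):

```python
# N-fold cross-validation: mean and standard deviation of the scores.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=42)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
mean, std = scores.mean(), scores.std()   # compare models on these
```

Reporting mean ± std (rather than a single score) makes it visible when two models differ by less than the fold-to-fold noise.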

  3. Analyze the most significant variables for each algorithm.

  4. Analyze the types of errors the models make.
    - What data would a human have used to avoid these errors?

  5. Have a quick round of feature selection and engineering.

  6. Have one or two more quick iterations of the five previous steps.

  7. Short-list the top three to five most promising models, preferring models
    that make different types of errors.

Fine-Tune the System

Notes:
- You will want to use as much data as possible for this step, especially
  as you move toward the end of fine-tuning.
- As always, automate what you can.

  1. Fine-tune the hyperparameters using cross-validation.
    - Treat your data transformation choices as hyperparameters, especially
    when you are not sure about them (e.g., should I replace missing values
    with zero or with the median value? Or just drop the rows?).
    - Unless there are very few hyperparameter values to explore, prefer
    random search over grid search. If training is very long, you may prefer
    a Bayesian optimization approach (e.g., using Gaussian process priors).
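Random search as described above maps to scikit-learn's `RandomizedSearchCV`; a minimal sketch tuning one hypothetical hyperparameter (`C` of a logistic regression) over a log-uniform range:

```python
# Random search over a continuous hyperparameter distribution,
# scored by cross-validation.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=200, random_state=0)

search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    param_distributions={"C": loguniform(1e-3, 1e3)},
    n_iter=10,            # 10 random draws from the distribution
    cv=3,
    random_state=0)
search.fit(X, y)
best_C = search.best_params_["C"]
```

Sampling from a distribution (rather than enumerating a grid) is what lets random search cover a wide range with a fixed training budget.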

  2. Try Ensemble methods. Combining your best models will often perform
    better than running them individually.
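One common way to combine models in scikit-learn is a `VotingClassifier`; a minimal sketch on synthetic data, assuming the short-listed models were a logistic regression and a random forest:

```python
# Soft-voting ensemble: average the predicted class probabilities
# of the short-listed models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=1)),
    ],
    voting="soft")        # average probabilities instead of hard votes
ensemble.fit(X, y)
acc = ensemble.score(X, y)   # sanity check on the training set only
```

This is also why the previous section prefers short-listing models that make different types of errors: the averaging only helps when the mistakes are not identical.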

  3. Once you are confident about your final model, measure its performance
    on the test set to estimate the generalization error.

Present Your Solution

  1. Document what you have done.

  2. Create a nice presentation.
    - Make sure you highlight the big picture first.

  3. Explain why your solution achieves the business objective.

  4. Don’t forget to present interesting points you noticed along the way.
    - Describe what worked and what did not.
    - List your assumptions and your system’s limitations.

  5. Ensure your key findings are communicated through beautiful visualizations
    or easy-to-remember statements (e.g., “the median income is the number-one
    predictor of housing prices”).

Launch!

  1. Get your solution ready for production (plug into production data inputs,
    write unit tests, etc.).

  2. Write monitoring code to check your system’s live performance at regular
    intervals and trigger alerts when it drops.
    - Beware of slow degradation too: models tend to “rot” as data evolves.
    - Measuring performance may require a human pipeline (e.g., via a
    crowdsourcing service).
    - Also monitor your inputs’ quality (e.g., a malfunctioning sensor sending
    random values, or another team’s output becoming stale). This is particularly
    important for online learning systems.
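As one possible shape for such monitoring code, here is a minimal sketch (the baseline, window size, and tolerance are hypothetical values, not prescriptions): track accuracy over a sliding window of recent predictions and signal an alert when it falls below a baseline:

```python
# Sliding-window performance monitor: returns True (alert) once the
# window is full and live accuracy drops below baseline - tolerance.
from collections import deque

def make_monitor(baseline, window=100, tolerance=0.05):
    recent = deque(maxlen=window)
    def record(correct):
        recent.append(1.0 if correct else 0.0)
        live_accuracy = sum(recent) / len(recent)
        return (len(recent) == window
                and live_accuracy < baseline - tolerance)
    return record

# Hypothetical usage: 8 correct then 2 wrong predictions against a
# 0.90 baseline with a 10-prediction window trips the alert.
record = make_monitor(baseline=0.90, window=10, tolerance=0.05)
alerts = [record(ok) for ok in [True] * 8 + [False] * 2]
```

In practice the alert would page a human or open a ticket; the point is only that the threshold logic is a few lines once the labeled feedback (the `correct` flags) exists, which is exactly the human-pipeline caveat above.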

  3. Retrain your models on a regular basis on fresh data (automate as
    much as possible).

最后編輯于
?著作權歸作者所有,轉載或內容合作請聯(lián)系作者
  • 序言:七十年代末擦酌,一起剝皮案震驚了整個濱河市霸妹,隨后出現的幾起案子错妖,更是在濱河造成了極大的恐慌,老刑警劉巖谆棺,帶你破解...
    沈念sama閱讀 219,589評論 6 508
  • 序言:濱河連續(xù)發(fā)生了三起死亡事件栽燕,死亡現場離奇詭異,居然都是意外死亡,警方通過查閱死者的電腦和手機纫谅,發(fā)現死者居然都...
    沈念sama閱讀 93,615評論 3 396
  • 文/潘曉璐 我一進店門,熙熙樓的掌柜王于貴愁眉苦臉地迎上來溅固,“玉大人唾戚,你說我怎么就攤上這事惫谤。” “怎么了?”我有些...
    開封第一講書人閱讀 165,933評論 0 356
  • 文/不壞的土叔 我叫張陵负拟,是天一觀的道長。 經常有香客問我袖外,道長循头,這世上最難降的妖魔是什么? 我笑而不...
    開封第一講書人閱讀 58,976評論 1 295
  • 正文 為了忘掉前任爆捞,我火速辦了婚禮奉瘤,結果婚禮上,老公的妹妹穿的比我還像新娘煮甥。我一直安慰自己盗温,他們只是感情好,可當我...
    茶點故事閱讀 67,999評論 6 393
  • 文/花漫 我一把揭開白布成肘。 她就那樣靜靜地躺著卖局,像睡著了一般。 火紅的嫁衣襯著肌膚如雪双霍。 梳的紋絲不亂的頭發(fā)上砚偶,一...
    開封第一講書人閱讀 51,775評論 1 307
  • 那天,我揣著相機與錄音洒闸,去河邊找鬼染坯。 笑死,一個胖子當著我的面吹牛顷蟀,可吹牛的內容都是我干的酒请。 我是一名探鬼主播,決...
    沈念sama閱讀 40,474評論 3 420
  • 文/蒼蘭香墨 我猛地睜開眼鸣个,長吁一口氣:“原來是場噩夢啊……” “哼羞反!你這毒婦竟也來了?” 一聲冷哼從身側響起囤萤,我...
    開封第一講書人閱讀 39,359評論 0 276
  • 序言:老撾萬榮一對情侶失蹤昼窗,失蹤者是張志新(化名)和其女友劉穎,沒想到半個月后涛舍,有當地人在樹林里發(fā)現了一具尸體澄惊,經...
    沈念sama閱讀 45,854評論 1 317
  • 正文 獨居荒郊野嶺守林人離奇死亡,尸身上長有42處帶血的膿包…… 初始之章·張勛 以下內容為張勛視角 年9月15日...
    茶點故事閱讀 38,007評論 3 338
  • 正文 我和宋清朗相戀三年,在試婚紗的時候發(fā)現自己被綠了掸驱。 大學時的朋友給我發(fā)了我未婚夫和他白月光在一起吃飯的照片肛搬。...
    茶點故事閱讀 40,146評論 1 351
  • 序言:一個原本活蹦亂跳的男人離奇死亡,死狀恐怖毕贼,靈堂內的尸體忽然破棺而出温赔,到底是詐尸還是另有隱情,我是刑警寧澤鬼癣,帶...
    沈念sama閱讀 35,826評論 5 346
  • 正文 年R本政府宣布陶贼,位于F島的核電站,受9級特大地震影響待秃,放射性物質發(fā)生泄漏拜秧。R本人自食惡果不足惜,卻給世界環(huán)境...
    茶點故事閱讀 41,484評論 3 331
  • 文/蒙蒙 一章郁、第九天 我趴在偏房一處隱蔽的房頂上張望枉氮。 院中可真熱鬧,春花似錦暖庄、人聲如沸嘲恍。這莊子的主人今日做“春日...
    開封第一講書人閱讀 32,029評論 0 22
  • 文/蒼蘭香墨 我抬頭看了看天上的太陽佃牛。三九已至,卻和暖如春医舆,著一層夾襖步出監(jiān)牢的瞬間俘侠,已是汗流浹背。 一陣腳步聲響...
    開封第一講書人閱讀 33,153評論 1 272
  • 我被黑心中介騙來泰國打工蔬将, 沒想到剛下飛機就差點兒被人妖公主榨干…… 1. 我叫王不留爷速,地道東北人。 一個月前我還...
    沈念sama閱讀 48,420評論 3 373
  • 正文 我出身青樓霞怀,卻偏偏與公主長得像惫东,于是被迫代替她去往敵國和親。 傳聞我的和親對象是個殘疾皇子毙石,可洞房花燭夜當晚...
    茶點故事閱讀 45,107評論 2 356

推薦閱讀更多精彩內容