Multi-class AdaBoost learning experience

These are my notes from reading the paper Multi-class AdaBoost. To understand the new algorithm, I first reviewed AdaBoost and its relationship with Forward Stagewise Additive Modeling, and then saw that multi-class AdaBoost (SAMME) is a natural extension of AdaBoost.

AdaBoost

Let T(x) denote a weak multi-class classifier that assigns a class label to x. The AdaBoost algorithm then proceeds as follows:


  1. Initialize the observation weights $w_i = 1/n$, $i = 1,\ldots,n$.
  2. For m = 1 to M:
    a) Fit a classifier $T_m(x)$ to the training data using weights $w_i$.
    b) Compute the weighted error $err_m = \sum_i w_i I[c_i \neq T_m(x_i)] \,/\, \sum_i w_i$.
    c) Compute the classifier weight $\alpha_m = \log\frac{1-err_m}{err_m}$.
    d) Update $w_i \leftarrow w_i \cdot \exp\big(\alpha_m I[c_i \neq T_m(x_i)]\big)$ and re-normalize.
  3. Output $C(x) = \arg\max_k \sum_{m=1}^{M} \alpha_m I[T_m(x)=k]$.

Pay attention to the weak classifier weight α: for α > 0 we need err < 0.5. This is easy to satisfy in a binary classification problem, because random guessing already has an error rate of 0.5. In the multi-class case, however, random guessing errs with probability (K − 1)/K, so requiring err < 0.5 is much harder and AdaBoost can easily fail.
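
A minimal sketch (my own illustration, not from the paper) of how this weight behaves around err = 0.5:

```python
import numpy as np

def adaboost_alpha(err):
    """Weak-classifier weight used by AdaBoost: log((1 - err) / err)."""
    return np.log((1.0 - err) / err)

# alpha is positive only when err < 0.5
for err in (0.30, 0.49, 0.51, 0.70):
    print(f"err = {err:.2f} -> alpha = {adaboost_alpha(err):+.3f}")
```

With three or more classes a weak learner often cannot reach err < 0.5, so α becomes negative and the boosting weights stop making sense.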

Forward Stagewise Additive Modeling

First let's take a look at the additive model:

$$f(x) = \sum_{m=1}^{M} \beta_m b(x;\gamma_m)$$

where $b(x;\gamma)$ is a base function (weak learner) characterized by parameters $\gamma$, and $\beta_m$ are the expansion coefficients.

Typically, $\{\beta_m, \gamma_m\}$ are estimated by minimizing some loss function $L$ over the training data:

$$\min_{\{\beta_m,\gamma_m\}_1^M} \sum_{i=1}^{N} L\Big[y_i, \sum_{m=1}^{M}\beta_m b(x_i;\gamma_m)\Big]$$

Directly optimizing this loss is often difficult. However, since the model is additive, a simple greedy method can be used: we sequentially optimize the loss for one new base function at a time and add it to the expansion $f(x)$, without changing the parameters that have already been added. The loss for each step is

$$\min_{\beta,\gamma} \sum_{i=1}^{N} L[y_i, f_{m-1}(x_i) + \beta b(x_i;\gamma)]$$

Thus the Forward Stagewise Additive Modeling algorithm (denoted FSAM) is:

  1. Initialize $f_0(x)=0$
  2. For m = 1 to M:
    a) Compute
    $$(\beta_m,\gamma_m) = \mathop{\arg\min}_{\beta,\gamma} \sum_{i=1}^{N} L[y_i, f_{m-1}(x_i) + \beta b(x_i;\gamma)] \qquad (2.1)$$
    b) Set
    $$f_m(x) = f_{m-1}(x) + \beta_m b(x;\gamma_m) \qquad (2.2)$$
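
To make the greedy procedure concrete, here is a minimal FSAM sketch under my own assumptions (squared-error loss and depth-1 regression trees as the base functions b(x; γ); none of these choices come from the paper):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fsam_fit(X, y, M=50):
    """Forward stagewise additive modeling with squared-error loss.

    At step m we fit a new base function to the current residual
    y - f_{m-1}(x), which performs the greedy minimization in (2.1)
    for L(y, f) = (y - f)^2; the coefficient beta_m is absorbed into
    the tree's leaf values.
    """
    f_m = np.zeros(len(y))          # f_0(x) = 0
    ensemble = []
    for m in range(M):
        stump = DecisionTreeRegressor(max_depth=1)
        stump.fit(X, y - f_m)       # beta_m * b(x; gamma_m) approximates the residual
        ensemble.append(stump)
        f_m += stump.predict(X)     # f_m(x) = f_{m-1}(x) + beta_m b(x; gamma_m)
    return ensemble

def fsam_predict(ensemble, X):
    return sum(stump.predict(X) for stump in ensemble)
```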

AdaBoost and Forward Stagewise Additive Modeling

In fact, AdaBoost is equivalent to Forward Stagewise Additive Modeling using the exponential loss function

$$L[y,f(x)] = \exp[-y f(x)].$$

Again let T(x) denote a classifier that assigns a class label to x; in the binary case, T(x) ∈ {1, −1}.

Then (2.1) becomes:

$$(\beta_m,T_m) = \mathop{\arg\min}_{\beta,T} \sum_{i=1}^{N} \exp\big[-y_i\big(f_{m-1}(x_i) + \beta T(x_i)\big)\big]$$

To simplify the expression, let

$$w^{(m)}_i = \exp[-y_i f_{m-1}(x_i)]$$

Notice that $w^{(m)}_i$ depends neither on β nor on T(x).

Then the expression is

$$(\beta_m,T_m) = \mathop{\arg\min}_{\beta,T} \sum_{i=1}^{N} w^{(m)}_i \exp[-\beta\, y_i T(x_i)]$$

It can be rewritten as

$$(\beta_m,T_m) = \mathop{\arg\min}_{\beta,T}\; e^{-\beta}\sum_{y_i = T(x_i)} w^{(m)}_i + e^{\beta}\sum_{y_i \neq T(x_i)} w^{(m)}_i = \mathop{\arg\min}_{\beta,T}\; \big(e^{\beta} - e^{-\beta}\big) \sum_{i=1}^N w_i^{(m)} I[y_i \neq T(x_i)] + e^{-\beta}\sum_{i=1}^N w_i^{(m)}$$

Solving for β gives

$$\beta_m=\frac{1}{2}\log\frac{\sum_{i=1}^N w_i^{(m)}\big(1- I[y_i \neq T_m(x_i)]\big)}{\sum_{i=1}^N w_i^{(m)}\, I[y_i \neq T_m(x_i)]}$$

Denote err_m as

$$err_m = \frac{\sum_{i=1}^N w_i^{(m)} I[y_i \neq T_m(x_i)]}{\sum_{i=1}^N w_i^{(m)}}$$

Then

$$\beta_m=\frac{1}{2}\log\frac{1-err_m}{err_m}$$

Looks familiar, right? This is exactly AdaBoost's weak classifier weight α_m, up to the constant factor 1/2, which can be absorbed into a learning rate.

Also recall

$$w^{(m)}_i = \exp[-y_i f_{m-1}(x_i)]$$

Then

$$w_i^{(m+1)} = w_i^{(m)} \exp[-\beta_m y_i T_m(x_i)] = w_i^{(m)} \cdot \exp\big[2\beta_m I[y_i \neq T_m(x_i)]\big] \cdot \exp(-\beta_m)$$

using the fact that

$$-y_i T_m(x_i) = 2\, I[y_i \neq T_m(x_i)] - 1$$
Compare this to the weight update in AdaBoost,

$$w_i \leftarrow w_i \cdot \exp\big(\alpha_m I[y_i \neq T_m(x_i)]\big)$$

The factor $\exp(-\beta_m)$ multiplies all weights equally and is cancelled by the normalization step, so the two updates are equivalent.
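
A quick numeric sanity check of this identity and of the two weight-update forms (my own illustration; the labels and predictions below are made up):

```python
import numpy as np

y = np.array([1, 1, -1, -1, 1])          # true labels in {-1, +1}
T = np.array([1, -1, -1, 1, 1])          # predictions of a weak classifier
beta = 0.4
w = np.full(5, 0.2)                       # current (normalized) weights

incorrect = (y != T).astype(float)
assert np.allclose(-y * T, 2 * incorrect - 1)   # -yT = 2*I(y != T) - 1

w1 = w * np.exp(-beta * y * T)                          # FSAM form
w2 = w * np.exp(2 * beta * incorrect) * np.exp(-beta)   # AdaBoost-like form
assert np.allclose(w1, w2)
print(w1 / w1.sum())   # after normalization both give the same distribution
```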

As mentioned before, AdaBoost can easily fail in the multi-class case because of the err < 0.5 requirement. A natural way to avoid this is to change how α is computed, and that is exactly what SAMME does. SAMME is short for Stagewise Additive Modeling using a Multi-class Exponential loss function.

SAMME

The SAMME algorithm is:

  1. Initialize the observation weights $w_i = 1/n$, $i = 1,\ldots,n$.
  2. For m = 1 to M:
    a) Fit a classifier $T_m(x)$ to the training data using weights $w_i$.
    b) Compute the weighted error $err_m = \sum_i w_i I[c_i \neq T_m(x_i)] \,/\, \sum_i w_i$.
    c) Compute the classifier weight $\alpha_m = \log\frac{1-err_m}{err_m} + \log(K-1)$.
    d) Update $w_i \leftarrow w_i \cdot \exp\big(\alpha_m I[c_i \neq T_m(x_i)]\big)$ and re-normalize.
  3. Output $C(x) = \arg\max_k \sum_{m=1}^{M} \alpha_m I[T_m(x)=k]$.


Note that SAMME differs from AdaBoost only by the extra term log(K − 1). Obviously, when K = 2, SAMME reduces to AdaBoost. In SAMME, for α to be positive we only need err < (K − 1)/K, i.e. the weak classifier only has to be slightly better than multi-class random guessing.
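
A small sketch (my own illustration) of how the extra log(K − 1) term keeps α positive whenever the weak learner beats multi-class random guessing:

```python
import numpy as np

def samme_alpha(err, K):
    """SAMME weak-classifier weight: log((1 - err)/err) + log(K - 1)."""
    return np.log((1.0 - err) / err) + np.log(K - 1.0)

K = 5
random_guess_err = (K - 1) / K          # 0.8 for K = 5
for err in (0.60, 0.75, 0.79, 0.81):
    print(f"err = {err:.2f} -> alpha = {samme_alpha(err, K):+.3f}")
# alpha stays positive as long as err < (K - 1)/K
```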

SAMME and Forward Stagewise Additive Modeling

We can mimic the steps of the AdaBoost derivation above to show that SAMME is equivalent to Forward Stagewise Additive Modeling using a multi-class exponential loss function.

In the multi-class classification setting, we recode the output c as a K-dimensional vector y with components

$$y_k = \begin{cases} 1 & \text{if } c = k \\ -\frac{1}{K-1} & \text{if } c \neq k \end{cases}$$

For example, with K = 3 and c = 2, $y = \big(-\tfrac{1}{2},\, 1,\, -\tfrac{1}{2}\big)^T$.
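
A minimal sketch of this encoding in NumPy (my own illustration; it mirrors the y_coding construction used in the implementation below):

```python
import numpy as np

def encode_labels(y, classes):
    """Recode class labels into K-dimensional vectors with entries
    1 for the true class and -1/(K-1) elsewhere."""
    K = len(classes)
    codes = np.array([-1.0 / (K - 1), 1.0])
    # boolean matrix (n_samples, K); True picks 1.0, False picks -1/(K-1)
    return codes.take(classes == np.asarray(y)[:, np.newaxis])

classes = np.array([0, 1, 2])
print(encode_labels([2, 0, 1], classes))
# [[-0.5 -0.5  1. ]
#  [ 1.  -0.5 -0.5]
#  [-0.5  1.  -0.5]]
```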

Given the training data, we wish to find $f(x) = (f_1(x),\ldots,f_K(x))^T$ such that

$$\min_{f(x)} \sum_{i=1}^{n} L[y_i, f(x_i)] \qquad \text{subject to} \quad f_1(x)+f_2(x)+\cdots+f_K(x)=0$$

We consider f(x) of the following form:

$$f(x) = \sum_{m=1}^{M}\beta_m g_m(x)$$

where $\beta_m$ are weights and $g_m(x)$ are base classifiers, each also satisfying the symmetric constraint $g_1(x)+\cdots+g_K(x)=0$.

Let's start from the loss function. The multi-class exponential loss function is

$$L(y,f)=\exp\Big[-\frac{1}{K}(y_1 f_1+\cdots+y_K f_K)\Big]=\exp\Big(-\frac{1}{K} y^T f\Big)$$
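
As a quick illustration (my own, not from the paper), the loss for one encoded sample can be computed directly from this definition:

```python
import numpy as np

def multiclass_exp_loss(y, f):
    """Multi-class exponential loss exp(-(1/K) * y^T f)."""
    K = len(y)
    return np.exp(-np.dot(y, f) / K)

y = np.array([1.0, -0.5, -0.5])          # encoded label for class 1 when K = 3
f_good = np.array([2.0, -1.0, -1.0])     # f agrees with the true class
f_bad = np.array([-1.0, 2.0, -1.0])      # f favors a wrong class
print(multiclass_exp_loss(y, f_good), multiclass_exp_loss(y, f_bad))
# the loss is small when f agrees with y and large otherwise
```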

Then (2.1) in the multi-class case is:

$$(\beta_m,g_m) = \mathop{\arg\min}_{\beta,g} \sum_{i=1}^{N} \exp\Big[-\frac{1}{K} y_i^T\big(f_{m-1}(x_i) + \beta g(x_i)\big)\Big] = \mathop{\arg\min}_{\beta,g} \sum_{i=1}^{N} w_i^{(m)} \exp\Big[-\frac{1}{K}\beta\, y_i^T g(x_i)\Big] \qquad (3.1)$$

where

$$w_i^{(m)}=\exp\Big[-\frac{1}{K} y_i^T f_{m-1}(x_i)\Big]$$
Notice that every g(x) has a one-to-one correspondence with a multi-class classifier T(x) in the following way:

$$T(x) = k \qquad \text{if} \quad g_k(x)=1$$

Hence, solving for g_m(x) is equivalent to finding the multi-class classifier T_m(x) that can generate g_m(x).

Recall that in the AdaBoost derivation we rewrote the expression as

$$(\beta_m,T_m) = \mathop{\arg\min}_{\beta,T}\; e^{-\beta}\sum_{y_i = T(x_i)} w^{(m)}_i + e^{\beta}\sum_{y_i \neq T(x_i)} w^{(m)}_i = \mathop{\arg\min}_{\beta,T}\; \big(e^{\beta} - e^{-\beta}\big) \sum_{i=1}^N w_i^{(m)} I[y_i \neq T(x_i)] + e^{-\beta}\sum_{i=1}^N w_i^{(m)}$$

Similarly, we rewrite (3.1) as

$$(\beta_m,T_m) = \mathop{\arg\min}_{\beta,T}\; e^{-\frac{\beta}{K-1}}\sum_{c_i = T(x_i)} w^{(m)}_i + e^{\frac{\beta}{(K-1)^2}}\sum_{c_i \neq T(x_i)} w^{(m)}_i = \mathop{\arg\min}_{\beta,T}\; \Big[e^{\frac{\beta}{(K-1)^2}} - e^{-\frac{\beta}{K-1}}\Big]\sum_{i=1}^N w_i^{(m)} I[c_i \neq T(x_i)] + e^{-\frac{\beta}{K-1}}\sum_{i=1}^N w_i^{(m)}$$

So the solution is

$$\beta_m=\frac{(K-1)^2}{K} \Big[\log\frac{1-err_m}{err_m} + \log(K-1)\Big]$$

where the error is

$$err_m = \frac{\sum_{i=1}^N w_i^{(m)} I[c_i \neq T_m(x_i)]}{\sum_{i=1}^N w_i^{(m)}}$$

Looking at β_m, we can ignore the constant factor (K−1)²/K; what remains is exactly the SAMME weight, including the extra log(K−1). This proves that the extra term log(K−1) in SAMME is not artificial; it comes directly from the multi-class exponential loss.

SAMME.R

The SAMME algorithm expects the weak learner to deliver a classifier T(x) ∈ {1, ..., K}. An alternative is to use real-valued, confidence-rated predictions, such as weighted class probability estimates, to update the additive model, rather than the classifications themselves. The resulting algorithm is SAMME.R (R for *real*). In each boosting iteration, SAMME.R fits the weak learner to the weighted data, obtains class probability estimates $p_k^{(m)}(x)$, and adds to the additive model the contribution

$$h_k^{(m)}(x) = (K-1)\Big(\log p_k^{(m)}(x) - \frac{1}{K}\sum_{j=1}^{K}\log p_j^{(m)}(x)\Big)$$

which is what the `_samme_proba` function computes in the implementation below. In my opinion, SAMME.R uses more information from the weak learner than SAMME does and tends to be more robust.

Python3 Implementation of Multi-class AdaBoost using SAMME and SAMME.R

__author__ = 'Xin'

import numpy as np
from copy import deepcopy


# Row-wise inner product; stands in for numpy.core.umath_tests.inner1d,
# which is a private module removed from recent NumPy releases.
def inner1d(a, b):
    return np.einsum('ij,ij->i', a, b)


class AdaBoostClassifier(object):
    '''
    Parameters
    -----------
    base_estimator: object
        The base model from which the boosted ensemble is built.

    n_estimators: integer, optional(default=50)
        The maximum number of estimators

    learning_rate: float, optional(default=1)

    algorithm: {'SAMME','SAMME.R'}, optional(default='SAMME.R')
        SAMME.R uses predicted class probabilities to update the weights, while SAMME uses the classification error rate

    random_state: int or None, optional(default=None)


    Attributes
    -------------
    estimators_: list of base estimators

    estimator_weights_: array of floats
        Weights for each base_estimator

    estimator_errors_: array of floats
        Classification error for each estimator in the boosted ensemble.
    
    Reference:
    1. [multi-adaboost](https://web.stanford.edu/~hastie/Papers/samme.pdf)

    2. [scikit-learn:weight_boosting](https://github.com/scikit-learn/
    scikit-learn/blob/51a765a/sklearn/ensemble/weight_boosting.py#L289)
    
    '''

    def __init__(self, *args, **kwargs):
        if args:
            raise ValueError(
                '''AdaBoostClassifier can only be called with keyword
                   arguments for the following keywords: base_estimator, n_estimators,
                   learning_rate, algorithm, random_state''')
        allowed_keys = ['base_estimator', 'n_estimators', 'learning_rate', 'algorithm', 'random_state']
        keywords_used = kwargs.keys()
        for keyword in keywords_used:
            if keyword not in allowed_keys:
                raise ValueError(keyword + ":  Wrong keyword used --- check spelling")

        n_estimators = 50
        learning_rate = 1
        algorithm = 'SAMME.R'
        random_state = None

        if kwargs and not args:
            if 'base_estimator' in kwargs:
                base_estimator = kwargs.pop('base_estimator')
            else:
                raise ValueError('''base_estimator can not be None''')
            if 'n_estimators' in kwargs: n_estimators = kwargs.pop('n_estimators')
            if 'learning_rate' in kwargs: learning_rate = kwargs.pop('learning_rate')
            if 'algorithm' in kwargs: algorithm = kwargs.pop('algorithm')
            if 'random_state' in kwargs: random_state = kwargs.pop('random_state')

        self.base_estimator_ = base_estimator
        self.n_estimators_ = n_estimators
        self.learning_rate_ = learning_rate
        self.algorithm_ = algorithm
        self.random_state_ = random_state
        self.estimators_ = list()
        self.estimator_weights_ = np.zeros(self.n_estimators_)
        self.estimator_errors_ = np.ones(self.n_estimators_)


    def _samme_proba(self, estimator, n_classes, X):
        """Calculate algorithm 4, step 2, equation c) of Zhu et al [1].

        References
        ----------
        .. [1] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-class AdaBoost", 2009.

        """
        proba = estimator.predict_proba(X)

        # Displace zero probabilities so the log is defined.
        # Also fix negative elements which may occur with
        # negative sample weights.
        proba[proba < np.finfo(proba.dtype).eps] = np.finfo(proba.dtype).eps
        log_proba = np.log(proba)

        return (n_classes - 1) * (log_proba - (1. / n_classes)
                                  * log_proba.sum(axis=1)[:, np.newaxis])


    def fit(self, X, y):
        self.n_samples = X.shape[0]
        # Note: classes_ is sorted here, so boost() must produce predictions
        # ordered by the same sorted classes.
        self.classes_ = np.array(sorted(list(set(y))))
        self.n_classes_ = len(self.classes_)
        for iboost in range(self.n_estimators_):
            if iboost == 0:
                sample_weight = np.ones(self.n_samples) / self.n_samples

            sample_weight, estimator_weight, estimator_error = self.boost(X, y, sample_weight)

            # early stop
            if estimator_error is None:
                break

            # append error and weight
            self.estimator_errors_[iboost] = estimator_error
            self.estimator_weights_[iboost] = estimator_weight

            if estimator_error <= 0:
                break

        return self


    def boost(self, X, y, sample_weight):
        if self.algorithm_ == 'SAMME':
            return self.discrete_boost(X, y, sample_weight)
        elif self.algorithm_ == 'SAMME.R':
            return self.real_boost(X, y, sample_weight)

    def real_boost(self, X, y, sample_weight):
        estimator = deepcopy(self.base_estimator_)
        if self.random_state_:
            estimator.set_params(random_state=self.random_state_)

        estimator.fit(X, y, sample_weight=sample_weight)

        y_pred = estimator.predict(X)
        incorrect = y_pred != y
        estimator_error = np.dot(incorrect, sample_weight) / np.sum(sample_weight, axis=0)

        # if worse than random guess, stop boosting
        if estimator_error >= 1.0 - 1 / self.n_classes_:
            return None, None, None

        y_predict_proba = estimator.predict_proba(X)
        # replace zero probabilities so that the log below is defined
        y_predict_proba[y_predict_proba < np.finfo(y_predict_proba.dtype).eps] = np.finfo(y_predict_proba.dtype).eps

        y_codes = np.array([-1. / (self.n_classes_ - 1), 1.])
        y_coding = y_codes.take(self.classes_ == y[:, np.newaxis])

        # for sample weight update
        intermediate_variable = (-1. * self.learning_rate_ * (((self.n_classes_ - 1) / self.n_classes_) *
                                                              inner1d(y_coding, np.log(
                                                                  y_predict_proba))))  #dot iterate for each row

        # update sample weight
        sample_weight *= np.exp(intermediate_variable)

        sample_weight_sum = np.sum(sample_weight, axis=0)
        if sample_weight_sum <= 0:
            return None, None, None

        # normalize sample weight
        sample_weight /= sample_weight_sum

        # append the estimator
        self.estimators_.append(estimator)

        return sample_weight, 1, estimator_error


    def discrete_boost(self, X, y, sample_weight):
        estimator = deepcopy(self.base_estimator_)
        if self.random_state_:
            estimator.set_params(random_state=self.random_state_)

        estimator.fit(X, y, sample_weight=sample_weight)

        y_pred = estimator.predict(X)
        incorrect = y_pred != y
        estimator_error = np.dot(incorrect, sample_weight) / np.sum(sample_weight, axis=0)

        # if worse than random guess, stop boosting
        if estimator_error >= 1 - 1 / self.n_classes_:
            return None, None, None

        # update estimator_weight
        estimator_weight = self.learning_rate_ * np.log((1 - estimator_error) / estimator_error) + np.log(
            self.n_classes_ - 1)

        if estimator_weight <= 0:
            return None, None, None

        # update sample weight
        sample_weight *= np.exp(estimator_weight * incorrect)

        sample_weight_sum = np.sum(sample_weight, axis=0)
        if sample_weight_sum <= 0:
            return None, None, None

        # normalize sample weight
        sample_weight /= sample_weight_sum

        # append the estimator
        self.estimators_.append(estimator)

        return sample_weight, estimator_weight, estimator_error

    def predict(self, X):
        n_classes = self.n_classes_
        classes = self.classes_[:, np.newaxis]
        pred = None

        if self.algorithm_ == 'SAMME.R':
            # The weights are all 1. for SAMME.R
            pred = sum(self._samme_proba(estimator, n_classes, X) for estimator in self.estimators_)
        else:  # self.algorithm == "SAMME"
            pred = sum((estimator.predict(X) == classes).T * w
                       for estimator, w in zip(self.estimators_,
                                               self.estimator_weights_))

        pred /= self.estimator_weights_.sum()
        if n_classes == 2:
            pred[:, 0] *= -1
            pred = pred.sum(axis=1)
            return self.classes_.take(pred > 0, axis=0)

        return self.classes_.take(np.argmax(pred, axis=1), axis=0)


    def predict_proba(self, X):
        if self.algorithm_ == 'SAMME.R':
            # The weights are all 1. for SAMME.R
            proba = sum(self._samme_proba(estimator, self.n_classes_, X)
                        for estimator in self.estimators_)
        else:  # self.algorithm == "SAMME"
            proba = sum(estimator.predict_proba(X) * w
                        for estimator, w in zip(self.estimators_,
                                                self.estimator_weights_))

        proba /= self.estimator_weights_.sum()
        proba = np.exp((1. / (self.n_classes_ - 1)) * proba)
        normalizer = proba.sum(axis=1)[:, np.newaxis]
        normalizer[normalizer == 0.0] = 1.0
        proba /= normalizer

        return proba
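
A quick usage sketch (my own example, assuming scikit-learn's DecisionTreeClassifier as the base estimator and the iris dataset; neither choice is prescribed by the class above):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=100,
                         learning_rate=1,
                         algorithm='SAMME.R',
                         random_state=0)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```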