Face Occlusion Detection with Keras

Do not repost without permission.

Face occlusion detection helps build a high-quality face-recognition gallery. This article defines five facial regions for occlusion: left eye, right eye, nose, mouth, and chin. A simple CNN occlusion-detection model is trained with TensorFlow + Keras and combined with the Dlib face detector into a test demo.

Project repository: https://github.com/Oreobird/Face-Occlusion-Detect

I. Sample Preparation

1波闹、Cofw數(shù)據(jù)庫

The data is in .mat format and contains the image pixels, the 29-point landmark coordinates, and an occlusion flag for each point. It can be converted to JPEG images plus a corresponding ground-truth file with the following code:

import os

import cv2
import h5py
import numpy as np

# mat_file:   COFW_train.mat, COFW_test.mat
# img_token:  'IsTr', 'IsT'
# bbox_token: 'bboxesTr', 'bboxesT'
# phis_token: 'phisTr', 'phisT'
def mat_to_files(mat_file, img_token, bbox_token, phis_token, img_dir, gt_txt_file):
    train_mat = h5py.File(mat_file, 'r')
    tr_imgs_obj = train_mat[img_token][:]
    total_num = tr_imgs_obj.shape[1]

    with open(gt_txt_file, "w+") as trf:
        for i in range(total_num):
            img = train_mat[tr_imgs_obj[0][i]][:]
            bbox = train_mat[bbox_token][:]
            bbox = np.transpose(bbox)[i]

            img = np.transpose(img)
            if not os.path.exists(img_dir):
                os.mkdir(img_dir)

            cv2.imwrite(img_dir + "/{}.jpg".format(i), img)
            gt = train_mat[phis_token][:]
            gt = np.transpose(gt)[i]

            # One record per image: <path>,<bbox values> ,<phis values>
            content = img_dir + "/{}.jpg,".format(i)
            for k in range(bbox.shape[0]):
                content = content + bbox[k].astype(str) + ' '
            content += ','
            for k in range(gt.shape[0]):
                content = content + gt[k].astype(str) + ' '
            content += '\n'
            trf.write(content)

# data_root is assumed to point at the directory holding the COFW .mat
# files, e.g. data_root = './data/'
if not os.path.exists(data_root + "train_ground_true.txt"):
    mat_to_files(data_root + "COFW_train.mat",
                 'IsTr', 'bboxesTr', 'phisTr',
                 data_root + "train",
                 data_root + "train_ground_true.txt")

if not os.path.exists(data_root + "test_ground_true.txt"):
    mat_to_files(data_root + "COFW_test.mat",
                 'IsT', 'bboxesT', 'phisT',
                 data_root + "test",
                 data_root + "test_ground_true.txt")

Opening the generated train_ground_true.txt, each comma-separated record looks like:

(Figure: a sample record from the COFW ground-truth file)

The landmark portion of a record looks like:
(Figure: landmark data for one sample)

2. Classification labels

Occlusion is defined over five facial regions, and the label is a one-hot style bit vector: a region's bit is 1 when it is occluded and 0 otherwise. An extra normal bit marks a completely unoccluded face; whenever at least one of the five region bits is 1, the normal bit is 0. The format is:

[normal, left-eye, right-eye, nose, mouth, chin]

The corresponding landmark occlusion-flag indices for each region are:

left-eye:8,10,12,13,16
right-eye:9,11,14,15,17
nose:18,19,20,21
mouth:22,23,24,25,26,27
chin:28

A region is considered occluded when any of its points is flagged as occluded. The following code generates the annotation file for the images:

# gt_txt: ground-truth file produced by mat_to_files
# face_img_dir: image directory
# face_txt: annotation file to generate
# show: draw the 29 landmark points on each face image

def face_label(gt_txt, face_img_dir, face_txt, show=False):
    img_num = 0  # files written by mat_to_files are 0-based: 0.jpg, 1.jpg, ...
    with open(face_txt, "w+") as face_txt_fp:
        with open(gt_txt, 'r') as gt_fp:
            line = gt_fp.readline()
            while line:
                img_path, bbox, phis = line.split(',')

                phis = phis.strip('\n').strip(' ').split(' ')
                phis = [int(float(x)) for x in phis]

                if show:
                    img = cv2.imread(img_path)
                    for i in range(29):
                        cv2.circle(img, (phis[i], phis[i + 29]), 2, (0, 255, 255))
                        cv2.putText(img, str(i), (phis[i], phis[i + 29]),
                                    cv2.FONT_HERSHEY_COMPLEX, 0.3, (0, 0, 255), 1)
                    cv2.imshow("img", img)
                    cv2.waitKey(0)

                # phis layout: 29 x coords, 29 y coords, then 29 occlusion flags
                slot = phis[58:]
                label = [1, 0, 0, 0, 0, 0]  # [normal, l-eye, r-eye, nose, mouth, chin]
                if slot[16]:  # slot[10] or slot[12] or slot[13] or slot[16] or slot[8]:
                    label[1] = 1  # left eye
                    label[0] = 0
                if slot[17]:  # slot[11] or slot[14] or slot[15] or slot[17] or slot[9]:
                    label[2] = 1  # right eye
                    label[0] = 0
                if slot[20]:  # slot[18] or slot[19] or slot[20] or slot[21]:
                    label[3] = 1  # nose
                    label[0] = 0
                if slot[22] or slot[23] or slot[25] or slot[26] or slot[27]:  # or slot[24]
                    label[4] = 1  # mouth
                    label[0] = 0
                if slot[28]:
                    label[5] = 1  # chin
                    label[0] = 0

                lab_str = ' '.join(str(x) for x in label)
                content = face_img_dir + "{}.jpg".format(img_num) + ',' + lab_str + '\n'
                face_txt_fp.write(content)

                line = gt_fp.readline()
                img_num += 1

With the show parameter set to True, the 29 landmark positions are displayed as follows:


(Figure: the 29 COFW facial landmark points)

3熊泵、數(shù)據(jù)預(yù)處理
由于Cofw的樣本數(shù)據(jù)只有1000多個仰迁,并且有遮擋的人臉很少,為了避免訓(xùn)練時過擬合顽分,需要對原始樣本做一個處理徐许。本文通過對5個區(qū)域隨機疊加隨機灰度值的遮擋塊來擴展遮擋樣本:

(Figure: occlusion block construction)

Testing revealed that this method raises the misclassification rate on faces with ordinary glasses or beards: the glasses and beard themselves get recognized as occlusions. So when constructing the training set, 200 positive samples with normal glasses and 200 with sparse beards were added, and every image was additionally cropped at the left, right, top, bottom, and center. This expands roughly 2,000 original images into more than 100,000 training samples. Finally, training images are resized to 96x96, with 85% used for training and 15% for validation.
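
The augmentation code itself is not reproduced here; below is a minimal sketch of the occlusion-block idea, assuming region boxes have already been derived from the 29 landmarks (the function name and box format are illustrative, not from the repo):

import numpy as np

# Sketch: paste a block of random gray level over one randomly chosen
# face region. region_boxes holds one (x, y, w, h) box per region,
# assumed to be precomputed from the COFW landmarks.
def random_occlude(face_img, region_boxes, rng=np.random):
    img = face_img.copy()
    region_idx = rng.randint(len(region_boxes))  # pick one of the 5 regions
    x, y, w, h = region_boxes[region_idx]
    gray = rng.randint(0, 256)         # random block intensity
    scale = rng.uniform(0.8, 1.2)      # jitter the block size a little
    y_end = min(y + int(h * scale), img.shape[0])
    x_end = min(x + int(w * scale), img.shape[1])
    img[y:y_end, x:x_end] = gray
    return img, region_idx  # caller sets label bit region_idx+1 and clears normal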

二梯皿、模型構(gòu)建

1、模型結(jié)構(gòu)

由于可以是人臉多區(qū)域的遮擋县恕,所以可以建模為多標(biāo)簽的分類問題东羹。模型結(jié)構(gòu)如下圖所示:


模型結(jié)構(gòu)

多標(biāo)簽分類的輸出層激活函數(shù)使用Sigmoid,loss類型為binary_crossentropy忠烛,即對每個標(biāo)簽做二分類属提。模型類的代碼如下,其中實現(xiàn)了自定義的簡單模型和基于VGG16來finetune的兩種構(gòu)建方式美尸。

import heapq

import numpy as np
import tensorflow as tf

EPOCHS = 25

class FodNet:
    def __init__(self, dataset, class_num, batch_size, input_size,
                 fine_tune=True, fine_tune_model_file='imagenet'):
        self.class_num = class_num
        self.batch_size = batch_size
        self.input_size = input_size
        self.dataset = dataset
        self.fine_tune_model_file = fine_tune_model_file
        if fine_tune:
            self.model = self.fine_tune_model()
        else:
            self.model = self.__create_model()

    def __base_model(self, inputs):
        # Two Conv-BN-ReLU x2 + max-pool blocks: 5x5 kernels, then 3x3.
        feature = tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), strides=(1, 1), padding='same')(inputs)
        feature = tf.keras.layers.BatchNormalization()(feature)
        feature = tf.keras.layers.Activation(activation=tf.nn.relu)(feature)
        feature = tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), strides=(1, 1), padding='same')(feature)
        feature = tf.keras.layers.BatchNormalization()(feature)
        feature = tf.keras.layers.Activation(activation=tf.nn.relu)(feature)
        feature = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2))(feature)

        feature = tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), padding='same')(feature)
        feature = tf.keras.layers.BatchNormalization()(feature)
        feature = tf.keras.layers.Activation(activation=tf.nn.relu)(feature)
        feature = tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), padding='same')(feature)
        feature = tf.keras.layers.BatchNormalization()(feature)
        feature = tf.keras.layers.Activation(activation=tf.nn.relu)(feature)
        feature = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2))(feature)
        return feature

    def __dense(self, feature):
        feature = tf.keras.layers.Flatten()(feature)
        feature = tf.keras.layers.Dense(units=128)(feature)
        feature = tf.keras.layers.BatchNormalization()(feature)
        feature = tf.keras.layers.Activation(activation=tf.nn.relu)(feature)
        feature = tf.keras.layers.Dropout(0.5)(feature)
        feature = tf.keras.layers.Dense(units=256)(feature)
        feature = tf.keras.layers.BatchNormalization()(feature)
        feature = tf.keras.layers.Activation(activation=tf.nn.relu)(feature)
        feature = tf.keras.layers.Dropout(0.5)(feature)
        return feature

    def __create_model(self):
        # Simple custom CNN on single-channel input.
        input_fod = tf.keras.layers.Input(name='fod_input', shape=(self.input_size, self.input_size, 1))
        feature_fod = self.__base_model(input_fod)
        feature_fod = self.__dense(feature_fod)
        # Sigmoid output: one independent binary decision per label.
        output_fod = tf.keras.layers.Dense(name='fod_output', units=self.class_num,
                                           activation=tf.nn.sigmoid)(feature_fod)
        model = tf.keras.Model(inputs=[input_fod], outputs=[output_fod])
        losses = {
            'fod_output': 'binary_crossentropy',
        }
        model.compile(optimizer=tf.train.AdamOptimizer(),
                      loss=losses,
                      metrics=['accuracy'])
        return model

    def __extract_output(self, model, name, input):
        # Rename the backbone to avoid name clashes and unfreeze all layers.
        model._name = name
        for layer in model.layers:
            layer.trainable = True
        return model(input)

    def fine_tune_model(self):
        # VGG16 backbone (ImageNet weights) on 3-channel input.
        input_fod = tf.keras.layers.Input(name='fod_input', shape=(self.input_size, self.input_size, 3))
        # Alternative backbone:
        # resnet_fod = tf.keras.applications.ResNet50(weights='imagenet', include_top=False)
        # feature_fod = self.__extract_output(resnet_fod, 'resnet_fod', input_fod)

        vgg16_fod = tf.keras.applications.VGG16(weights=self.fine_tune_model_file, include_top=False)
        feature_fod = self.__extract_output(vgg16_fod, 'vgg16_fod', input_fod)

        feature_fod = self.__dense(feature_fod)
        output_fod = tf.keras.layers.Dense(name='fod_output', units=self.class_num,
                                           activation=tf.nn.sigmoid)(feature_fod)

        model = tf.keras.Model(inputs=[input_fod], outputs=[output_fod])
        losses = {
            'fod_output': 'binary_crossentropy',
        }
        model.compile(optimizer=tf.train.AdamOptimizer(),
                      loss=losses,
                      metrics=['accuracy'])
        return model

    def fit(self, model_file, checkpoint_dir, log_dir, max_epoches=EPOCHS, train=True):
        self.model.summary()
        if not train:
            self.model.load_weights(model_file)
        else:
            cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_dir,
                                                             save_weights_only=True,
                                                             save_best_only=True,
                                                             period=2,
                                                             verbose=1)
            earlystop_cb = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                            mode='min',
                                                            min_delta=0.001,
                                                            patience=3,
                                                            verbose=1)
            tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
            input_name_list = ['fod_input']
            output_name_list = ['fod_output']
            self.model.fit_generator(generator=self.dataset.data_generator(input_name_list, output_name_list, 'train.txt'),
                                     epochs=max_epoches,
                                     steps_per_epoch=self.dataset.train_num() // self.batch_size,
                                     validation_data=self.dataset.data_generator(input_name_list, output_name_list, 'val.txt'),
                                     validation_steps=self.dataset.val_num() // self.batch_size,
                                     callbacks=[cp_callback, earlystop_cb, tb_callback],
                                     max_queue_size=10,
                                     workers=1,
                                     verbose=1)
            self.model.save(model_file)

    def predict(self):
        input_name_list = ['fod_input']
        output_name_list = ['fod_output']
        predictions = self.model.predict_generator(
            generator=self.dataset.data_generator(input_name_list, output_name_list, 'test.txt', shuffle=False),
            steps=self.dataset.test_num() // self.batch_size,
            verbose=1)
        if len(predictions) > 0:
            fod_preds = predictions
            test_data = self.dataset.data_generator(input_name_list, output_name_list, 'test.txt', shuffle=False)
            correct = 0
            steps = self.dataset.test_num() // self.batch_size
            total = steps * self.batch_size

            for step in range(steps):
                _, test_batch_y = next(test_data)
                fod_real_batch = test_batch_y['fod_output']
                for i, fod_real in enumerate(fod_real_batch):
                    fod_real = fod_real.tolist()
                    # Take the top-k predicted labels, k = number of true positive
                    # bits, and count the sample correct only on an exact match.
                    one_num = fod_real.count(1)
                    pred = fod_preds[self.batch_size * step + i].tolist()
                    fod_pred_idxs = sorted(map(pred.index, heapq.nlargest(one_num, pred)))
                    fod_real_idxs = [j for j, x in enumerate(fod_real) if x == 1]
                    if fod_real_idxs == fod_pred_idxs:
                        correct += 1
            print("fod==> correct:{}, total:{}, correct_rate:{}".format(correct, total, 1.0 * correct / total))
        return predictions

    def test_online(self, face_imgs):
        # Predict on a single face crop coming from the live demo.
        batch_x = np.array(face_imgs[0]['fod_input'], dtype=np.float32)
        batch_x = np.expand_dims(batch_x, 0)
        predictions = self.model.predict({'fod_input': batch_x}, batch_size=1)
        return predictions
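
For reference, training could be driven roughly as follows. The Dataset class (which must provide data_generator, train_num, val_num, and test_num) lives in the repo and is not shown here, so this snippet is only a hypothetical usage sketch:

# Hypothetical usage; constructor arguments and paths are assumptions.
dataset = Dataset(data_dir='./data', batch_size=64, input_size=96)
net = FodNet(dataset, class_num=6, batch_size=64, input_size=96, fine_tune=True)
net.fit(model_file='./model/fod_model.h5',
        checkpoint_dir='./model/fod.ckpt',
        log_dir='./logs',
        max_epoches=25,
        train=True)
net.predict()  # prints the exact-match rate on the test split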

2冤议、訓(xùn)練曲線

acc曲線

loss曲線

III. Testing

The demo detects faces in real time with Dlib, which requires downloading Dlib's landmark model file first:

shape_predictor_68_face_landmarks.dat
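
The file is distributed compressed; a small helper to fetch and unpack it (the URL is assumed to be dlib's standard download location):

import bz2
import os
import urllib.request

# Assumed official download location for the 68-point model.
URL = 'http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2'
DST = './model/shape_predictor_68_face_landmarks.dat'

if not os.path.exists(DST):
    os.makedirs(os.path.dirname(DST), exist_ok=True)
    tmp_file, _ = urllib.request.urlretrieve(URL)
    with bz2.open(tmp_file, 'rb') as src, open(DST, 'wb') as dst:
        dst.write(src.read())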

Dlib detects 68 facial landmark points. These points are used here to crop out the face region, and when a region is predicted as occluded, a circle is drawn over it. The code is as follows:

import cv2
import dlib
import numpy as np
from imutils import face_utils

class CameraTester():
    def __init__(self, net=None, input_size=96, fine_tune=False,
                 face_landmark_path='./model/shape_predictor_68_face_landmarks.dat'):
        self.cap = cv2.VideoCapture(0)
        if not self.cap.isOpened():
            raise Exception("Unable to connect to camera.")
        self.detector = dlib.get_frontal_face_detector()
        self.predictor = dlib.shape_predictor(face_landmark_path)
        self.net = net
        self.input_size = input_size
        self.fine_tune = fine_tune

    def crop_face(self, shape, img, input_size):
        # Square crop around the 68 landmark points, clamped to the frame.
        x = []
        y = []
        for (_x, _y) in shape:
            x.append(_x)
            y.append(_y)

        max_x = min(max(x), img.shape[1])
        min_x = max(min(x), 0)
        max_y = min(max(y), img.shape[0])
        min_y = max(min(y), 0)

        Lx = max_x - min_x
        Ly = max_y - min_y
        Lmax = int(max(Lx, Ly))
        delta = Lmax // 2

        center_x = (max(x) + min(x)) // 2
        center_y = (max(y) + min(y)) // 2
        start_x = int(center_x - delta)
        start_y = int(center_y - 0.99 * delta)
        end_x = int(center_x + delta)
        end_y = int(center_y + 1.01 * delta)
        start_y = 0 if start_y < 0 else start_y
        start_x = 0 if start_x < 0 else start_x
        end_x = img.shape[1] if end_x > img.shape[1] else end_x
        end_y = img.shape[0] if end_y > img.shape[0] else end_y

        crop_face = img[start_y:end_y, start_x:end_x]
        crop_face = cv2.cvtColor(crop_face, cv2.COLOR_BGR2GRAY)
        crop_face = cv2.resize(crop_face, (input_size, input_size)) / 255
        if self.fine_tune:
            # The VGG16 branch expects 3 channels: replicate the gray plane.
            crop_face = np.stack([crop_face] * 3, axis=-1)
        else:
            crop_face = crop_face[:, :, np.newaxis]
        return crop_face, start_y, end_y, start_x, end_x

    def get_area(self, shape, idx):
        # Each entry: [[center_x, center_y], radius]
        left_eye = [(shape[42] + shape[45]) // 2, abs(shape[45][0] - shape[42][0])]
        right_eye = [(shape[36] + shape[39]) // 2, abs(shape[39][0] - shape[36][0])]
        nose = [shape[30], int(abs(shape[31][0] - shape[35][0]) / 1.5)]
        mouth = [(shape[48] + shape[54]) // 2, abs(shape[48][0] - shape[54][0]) // 2]
        chin = [shape[8], nose[1]]
        area = [None, left_eye, right_eye, nose, mouth, chin]
        block_area = [x for i, x in enumerate(area) if i in idx]
        return block_area

    def draw_occlusion_area(self, img, shape, idx):
        area = self.get_area(shape, idx)
        for k, v in enumerate(area):
            if v:
                cv2.circle(img, tuple(v[0]), v[1], (0, 255, 0))

    def run(self):
        frames = []
        while self.cap.isOpened():
            ret, frame = self.cap.read()
            if ret:
                frame = cv2.resize(frame, (640, 480))
                face_rects = self.detector(frame, 0)
                if len(face_rects) > 0:
                    shape = self.predictor(frame, face_rects[0])
                    shape = face_utils.shape_to_np(shape)
                    input_img, start_y, end_y, start_x, end_x = self.crop_face(shape, frame, self.input_size)
                    cv2.rectangle(frame, (start_x, start_y), (end_x, end_y), (0, 255, 0), thickness=2)
                    frames.append({'fod_input': input_img})
                    if len(frames) == 1:
                        pred = self.net.test_online(frames)
                        # A label counts as occluded when its sigmoid score > 0.5.
                        idx = [i for i, x in enumerate(pred[0]) if x > 0.5]
                        frames = []
                        if len(idx):
                            self.draw_occlusion_area(frame, shape, idx)
            cv2.imshow("frame", frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        self.cap.release()
        cv2.destroyAllWindows()
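
Putting it together, the demo can be launched with something like the following (the weight-file path is an assumption):

# Hypothetical entry point: build the network, load trained weights,
# then start the camera loop.
net = FodNet(dataset=None, class_num=6, batch_size=1, input_size=96, fine_tune=True)
net.fit(model_file='./model/fod_model.h5', checkpoint_dir='', log_dir='',
        train=False)  # train=False only loads the weights
tester = CameraTester(net=net, input_size=96, fine_tune=True)
tester.run()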

四蕊温、總結(jié)

本文基于Tensorflow+keras+Dlib實現(xiàn)了一個人臉遮擋實時檢測的Demo。由于訓(xùn)練樣本的比較單一遏乔,模型簡單义矛,實現(xiàn)的效果準(zhǔn)確率還有待提高。
