Kubernetes in production: redis-cluster

Option 1: Install Redis Cluster with custom YAML manifests

Background

Deploying a Redis cluster on Kubernetes is challenging because every Redis instance relies on a configuration file (nodes.conf) that keeps track of the other cluster instances and their roles. To make this work, we combine a Kubernetes StatefulSet with PersistentVolumes.
Redis Cluster architecture diagram:



Create the StatefulSet manifest

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  namespace: op
data:
  update.sh: |
    #!/bin/sh
    REDIS_NODES="/data/nodes.conf"
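    # Rewrite the IP on this node's own ("myself") line in nodes.conf with the current Pod IP,
    # so the node keeps its cluster identity after a Pod restart changes its IP.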
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES}
    exec "$@"
  redis.conf: |+
    bind 0.0.0.0
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 30000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
---
apiVersion: apps.kruise.io/v1beta1
# apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  namespace: op
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:6.2.1-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/conf/update.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
      storageClassName: rbd

Notes:

update.sh: before redis-server starts, the script rewrites the IP on this node's own ("myself") line in /data/nodes.conf with the Pod's current IP, so the node keeps its cluster identity even though a restarted Pod comes up with a new IP.
cluster-migration-barrier: a replica only migrates to an orphaned master if its original master is still left with at least this many replicas after the migration (not merely if it had that many before). The default is 1; keep the default in production.
protected-mode no: protected mode is meant to block remote access; when it is enabled, Redis is only reachable through the loopback address (127.0.0.1) and connections from other Pods are rejected, so it must be disabled here.
apiVersion: apps.kruise.io/v1beta1: this uses the Advanced StatefulSet controller provided by OpenKruise; if OpenKruise is not installed in the cluster, use apps/v1 instead (a quick check is shown below).
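
If you are not sure whether OpenKruise is installed, a minimal check for its Advanced StatefulSet CRD is:

kubectl get crd statefulsets.apps.kruise.io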

Install redis-cluster

[root@qd01-stop-k8s-master001 redis]# kubectl apply -f install-redis.yaml
configmap/redis-cluster created
statefulset.apps.kruise.io/redis-cluster created

[root@qd01-stop-k8s-master001 redis]# kubectl get po -n op
NAME              READY   STATUS    RESTARTS   AGE
redis-cluster-0   1/1     Running   0          3m26s
redis-cluster-1   1/1     Running   0          3m14s
redis-cluster-2   1/1     Running   0          2m54s
redis-cluster-3   1/1     Running   0          2m23s
redis-cluster-4   1/1     Running   0          2m14s
redis-cluster-5   1/1     Running   0          114s

Create the redis-cluster Service

---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
  namespace: op
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster
[root@qd01-stop-k8s-master001 redis]# kubectl apply -f redis-svc.yml
service/redis-cluster created
[root@qd01-stop-k8s-master001 redis]# kubectl get svc -n op
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
redis-cluster   ClusterIP   10.97.197.224   <none>        6379/TCP,16379/TCP   9s

Test connectivity:
[root@qd01-stop-k8s-master001 redis]# telnet  10.97.197.224 6379
Trying 10.97.197.224...
Connected to 10.97.197.224.
Escape character is '^]'.
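
telnet only shows that the port is open; a PING through the Service DNS name (resolvable from any Pod in the op namespace) is a slightly stronger check, for example:

kubectl -n op exec -it redis-cluster-0 -- redis-cli -h redis-cluster ping

A PONG reply confirms the Service routes to a responsive instance.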

Initialize the Redis cluster

Run the command below: it collects the Pod IPs and passes them to redis-cli --cluster create to bootstrap the cluster with one replica per master.

[root@qd01-stop-k8s-master001 redis]# kubectl -n op exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl -n op get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}')
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 100.88.43.67:6379 to 100.64.147.152:6379
Adding replica 100.113.170.5:6379 to 100.98.174.217:6379
Adding replica 100.64.147.153:6379 to 100.80.158.227:6379
M: b47b27a3dbddf3fc1370cbe14ae753f4fce20b04 100.64.147.152:6379
   slots:[0-5460] (5461 slots) master
M: 09543217c903350e963fc4fdf4acb73f8a1b7f8b 100.98.174.217:6379
   slots:[5461-10922] (5462 slots) master
M: 5389ace495b68eeac85370d6783648dff68f2fb6 100.80.158.227:6379
   slots:[10923-16383] (5461 slots) master
S: b1f39714c006ae55b12b18e6537303d7a00e1704 100.64.147.153:6379
   replicates 5389ace495b68eeac85370d6783648dff68f2fb6
S: 0113f4668ec2f3ca2e9470c44bd5faab532b0936 100.88.43.67:6379
   replicates b47b27a3dbddf3fc1370cbe14ae753f4fce20b04
S: e1e2f18ae66c79f1943390beabb59613abbad38a 100.113.170.5:6379
   replicates 09543217c903350e963fc4fdf4acb73f8a1b7f8b
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 100.64.147.152:6379)
M: b47b27a3dbddf3fc1370cbe14ae753f4fce20b04 100.64.147.152:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 0113f4668ec2f3ca2e9470c44bd5faab532b0936 100.88.43.67:6379
   slots: (0 slots) slave
   replicates b47b27a3dbddf3fc1370cbe14ae753f4fce20b04
M: 09543217c903350e963fc4fdf4acb73f8a1b7f8b 100.98.174.217:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 5389ace495b68eeac85370d6783648dff68f2fb6 100.80.158.227:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: e1e2f18ae66c79f1943390beabb59613abbad38a 100.113.170.5:6379
   slots: (0 slots) slave
   replicates 09543217c903350e963fc4fdf4acb73f8a1b7f8b
S: b1f39714c006ae55b12b18e6537303d7a00e1704 100.64.147.153:6379
   slots: (0 slots) slave
   replicates 5389ace495b68eeac85370d6783648dff68f2fb6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Verify the cluster state

[root@qd01-stop-k8s-master001 redis]# kubectl -n op  exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:178
cluster_stats_messages_pong_sent:181
cluster_stats_messages_sent:359
cluster_stats_messages_ping_received:176
cluster_stats_messages_pong_received:178
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:359

[root@qd01-stop-k8s-master001 redis]# kubectl -n op  exec -it redis-cluster-0 -- redis-cli cluster nodes
0113f4668ec2f3ca2e9470c44bd5faab532b0936 100.88.43.67:6379@16379 slave b47b27a3dbddf3fc1370cbe14ae753f4fce20b04 0 1615348311156 1 connected
09543217c903350e963fc4fdf4acb73f8a1b7f8b 100.98.174.217:6379@16379 master - 0 1615348314162 2 connected 5461-10922
b47b27a3dbddf3fc1370cbe14ae753f4fce20b04 100.64.147.152:6379@16379 myself,master - 0 1615348312000 1 connected 0-5460
5389ace495b68eeac85370d6783648dff68f2fb6 100.80.158.227:6379@16379 master - 0 1615348312000 3 connected 10923-16383
e1e2f18ae66c79f1943390beabb59613abbad38a 100.113.170.5:6379@16379 slave 09543217c903350e963fc4fdf4acb73f8a1b7f8b 0 1615348313160 2 connected
b1f39714c006ae55b12b18e6537303d7a00e1704 100.64.147.153:6379@16379 slave 5389ace495b68eeac85370d6783648dff68f2fb6 0 1615348312158 3 connected

The output shows a cluster of six nodes: three masters and three replicas.
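
As a final smoke test, write and read a key with redis-cli in cluster mode (-c makes the client follow MOVED redirects between masters); the key name here is just an example:

kubectl -n op exec -it redis-cluster-0 -- redis-cli -c set hello world
kubectl -n op exec -it redis-cluster-0 -- redis-cli -c get hello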

Option 2: Install Redis with KubeDB

Install KubeDB

1. Install KubeDB
Obtain an AppsCode license from https://license-issuer.appscode.com/
Download the kubedb-community chart from https://github.com/appscode/charts/tree/master/stable/kubedb-community (an alternative using the Helm repository is sketched below)
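
If you would rather not download the archive by hand, the same charts can usually be pulled from the AppsCode Helm repository (repository URL and chart version assumed from the AppsCode docs; adjust as needed):

helm repo add appscode https://charts.appscode.com/stable/
helm repo update
helm pull appscode/kubedb-community --version v0.16.2 --untar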

[root@qd01-stop-k8s-master001 kubedb-community]# unzip kubedb-community-v0.16.2.tgz
[root@qd01-stop-k8s-master001 kubedb-community]# cd kubedb-community
[root@qd01-stop-k8s-master001 kubedb-community]# ls -al
total 96
drwxr-xr-x 4 root root   158 Mar 10 15:26 .
drwxr-xr-x 3 root root    66 Mar 10 15:24 ..
-rw-r--r-- 1 root root   351 Feb 16 09:55 Chart.yaml
drwxr-xr-x 2 root root    28 Mar 10 15:24 ci
-rw-r--r-- 1 root root   493 Feb 16 09:55 doc.yaml
-rw-r--r-- 1 root root   353 Feb 16 09:55 .helmignore
-rw-r--r-- 1 root root 24422 Feb 16 09:55 README.md
drwxr-xr-x 2 root root  4096 Mar 10 15:24 templates
-rw-r--r-- 1 root root 47437 Feb 16 09:55 values.openapiv3_schema.yaml
-rw-r--r-- 1 root root  5230 Feb 16 09:55 values.yaml

Edit values.yaml as needed and place the license file in the kubedb-community directory.
2. Install with Helm

[root@qd01-stop-k8s-master001 kubedb-community]# helm install kubedb-community --namespace kube-system --set-file license=./kubedb-community-license.txt -f values.yaml  .
NAME: kubedb-community
LAST DEPLOYED: Wed Mar 10 15:38:59 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that KubeDB has started, run:
  kubectl get deployment --namespace kube-system -l "app.kubernetes.io/name=kubedb-community,app.kubernetes.io/instance=kubedb-community"
Now install/upgrade appscode/kubedb-catalog chart.
To install, run:
  helm install kubedb-catalog appscode/kubedb-catalog --version v0.16.2 --namespace kube-system
To upgrade, run:
  helm upgrade kubedb-catalog appscode/kubedb-catalog --version v0.16.2 --namespace kube-system

Run the following command to check whether the installation has completed:
[root@qd01-stop-k8s-master001 kubedb-community]# kubectl get deployment --namespace kube-system -l "app.kubernetes.io/name=kubedb-community,app.kubernetes.io/instance=kubedb-community"
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
kubedb-community   1/1     1            1           38s

Wait for the CRDs to be registered:
[root@qd01-stop-k8s-master001 kubedb-community]# kubectl get crds -l app.kubernetes.io/name=kubedb -w
NAME                                       CREATED AT
elasticsearches.kubedb.com                 2021-03-10T07:39:42Z
elasticsearchversions.catalog.kubedb.com   2021-03-10T07:39:45Z
etcds.kubedb.com                           2021-03-10T07:39:42Z
etcdversions.catalog.kubedb.com            2021-03-10T07:39:45Z
memcacheds.kubedb.com                      2021-03-10T07:39:43Z
memcachedversions.catalog.kubedb.com       2021-03-10T07:39:45Z
mongodbs.kubedb.com                        2021-03-10T07:39:43Z
mongodbversions.catalog.kubedb.com         2021-03-10T07:39:45Z
mysqls.kubedb.com                          2021-03-10T07:39:43Z
mysqlversions.catalog.kubedb.com           2021-03-10T07:39:46Z
perconaxtradbs.kubedb.com                  2021-03-10T07:39:43Z
perconaxtradbversions.catalog.kubedb.com   2021-03-10T07:39:46Z
pgbouncers.kubedb.com                      2021-03-10T07:39:44Z
pgbouncerversions.catalog.kubedb.com       2021-03-10T07:39:46Z
postgreses.kubedb.com                      2021-03-10T07:39:44Z
postgresversions.catalog.kubedb.com        2021-03-10T07:39:46Z
proxysqls.kubedb.com                       2021-03-10T07:39:44Z
proxysqlversions.catalog.kubedb.com        2021-03-10T07:39:46Z
redises.kubedb.com                         2021-03-10T07:39:45Z
redisversions.catalog.kubedb.com           2021-03-10T07:39:46Z

3. Install the KubeDB Catalog
Likewise, download the chart from https://github.com/appscode/charts/tree/master/stable/kubedb-catalog first.

[root@qd01-stop-k8s-master001 kubedb-catalog]# tar -zxf kubedb-catalog-v0.16.2.tgz
[root@qd01-stop-k8s-master001 kubedb-catalog]# cd kubedb-catalog
[root@qd01-stop-k8s-master001 kubedb-catalog]# ls -al
total 24
drwxr-xr-x  3 root root  148 Mar 10 15:48 .
drwxr-xr-x  3 root root   28 Mar 10 15:48 ..
-rw-r--r--  1 root root  321 Jan 26 20:08 Chart.yaml
-rw-r--r--  1 root root  467 Jan 26 20:08 doc.yaml
-rw-r--r--  1 root root  353 Jan 26 20:08 .helmignore
-rw-r--r--  1 root root 3195 Jan 26 20:08 README.md
drwxr-xr-x 12 root root  188 Mar 10 15:48 templates
-rw-r--r--  1 root root  744 Jan 26 20:08 values.openapiv3_schema.yaml
-rw-r--r--  1 root root 1070 Jan 26 20:08 values.yaml

[root@qd01-stop-k8s-master001 kubedb-catalog]# helm install kubedb-catalog --namespace kube-system -f values.yaml  .
NAME: kubedb-catalog
LAST DEPLOYED: Wed Mar 10 15:50:50 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

Install Redis with KubeDB

1. First, the Redis lifecycle diagram from the official documentation:



Redis installed through KubeDB supports the following features:

Features    Availability
Clustering  ?
Instant Backup  ?
Scheduled Backup    ?
Persistent Volume   ?
Initialize using Snapshot   ?
Initialize using Script ?
Custom Configuration    ?
Using Custom docker image   ?
Builtin Prometheus Discovery    ?
Using Prometheus operator   ?

2. Check the supported versions:

[root@qd01-stop-k8s-master001 kubedb-catalog]# kubectl get redisversions
NAME       VERSION   DB_IMAGE                DEPRECATED   AGE
4.0.11     4.0.11    kubedb/redis:4.0.11                  15m
4.0.6-v2   4.0.6     kubedb/redis:4.0.6-v2                15m
5.0.3-v1   5.0.3     kubedb/redis:5.0.3-v1                15m
6.0.6      6.0.6     kubedb/redis:6.0.6                   15m

3. Write the installation manifest
You can start from https://github.com/kubedb/docs/blob/v2021.01.26/docs/examples/redis/clustering/demo-1.yaml
Version 6.0.6 is installed here; my cluster uses storageClassName: "rbd", so adjust it to your environment.
To customize redis.conf, refer to https://github.com/kubedb/docs/blob/v2021.01.26/docs/examples/redis/custom-config/redis-custom.yaml (a way to browse the CRD's spec fields is sketched below).
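
Before writing the manifest you can browse the available spec fields with kubectl explain, assuming the installed KubeDB CRDs publish an OpenAPI schema:

kubectl explain redis.spec --api-version=kubedb.com/v1alpha2
kubectl explain redis.spec.cluster --api-version=kubedb.com/v1alpha2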

apiVersion: kubedb.com/v1alpha2
kind: Redis
metadata:
  name: redis-cluster
  namespace: op
spec:
  version: 6.0.6
  mode: Cluster
  cluster:
    master: 3
    replicas: 1
  storageType: Durable
  storage:
    resources:
      requests:
        storage: 1Gi
    storageClassName: "rbd"
    accessModes:
      - ReadWriteOnce

Apply the manifest:

[root@qd01-stop-k8s-master001 kubedb-community]# kubectl apply -f redis-cluster.yaml
redis.kubedb.com/redis-cluster created

Once applied, check progress as follows:
[root@qd01-stop-k8s-master001 kubedb-community]# kubectl get rd,po -n op
NAME                             VERSION   STATUS         AGE
redis.kubedb.com/redis-cluster   6.0.6     Provisioning   6m55s

NAME                         READY   STATUS    RESTARTS   AGE
pod/redis-cluster-shard0-0   1/1     Running   0          6m54s
pod/redis-cluster-shard0-1   1/1     Running   0          6m18s
pod/redis-cluster-shard1-0   1/1     Running   0          5m38s
pod/redis-cluster-shard1-1   1/1     Running   0          5m1s
pod/redis-cluster-shard2-0   1/1     Running   0          4m30s
pod/redis-cluster-shard2-1   1/1     Running   0          4m8s
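
The Redis object stays in Provisioning until all shard Pods are up and the cluster has been formed; you can watch it until the status changes (rd is the short name used above):

kubectl -n op get rd redis-cluster -w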

[root@qd01-stop-k8s-master001 redis]# kubectl get svc -n op
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
redis-cluster        ClusterIP   10.97.197.224   <none>        6379/TCP   5h16m
redis-cluster-pods   ClusterIP   None            <none>        6379/TCP   17m
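
Applications normally connect through the redis-cluster ClusterIP Service; redis-cluster-pods is the headless governing Service for the shard Pods. A throwaway client Pod gives a quick connectivity check — a sketch, assuming no authentication is configured:

kubectl -n op run redis-client --rm -it --image=redis:6.0.6 -- redis-cli -c -h redis-cluster.op.svc.cluster.local ping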

4. Verify the cluster


[root@qd01-stop-k8s-master001 kubedb-community]# kubectl get pods -n  op  -o jsonpath='{range.items[*]}{.metadata.name} ---------- {.status.podIP}:6379{"\t\n"}{end}' | grep redis
redis-cluster-shard0-0 ---------- 100.64.147.156:6379
redis-cluster-shard0-1 ---------- 100.98.174.218:6379
redis-cluster-shard1-0 ---------- 100.126.252.204:6379
redis-cluster-shard1-1 ---------- 100.113.170.6:6379
redis-cluster-shard2-0 ---------- 100.107.55.69:6379
redis-cluster-shard2-1 ---------- 100.78.230.4:6379

[root@qd01-stop-k8s-master001 redis]# kubectl -n op  exec -it redis-cluster-shard0-0  -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:1
cluster_stats_messages_ping_sent:864
cluster_stats_messages_pong_sent:882
cluster_stats_messages_sent:1746
cluster_stats_messages_ping_received:879
cluster_stats_messages_pong_received:864
cluster_stats_messages_meet_received:3
cluster_stats_messages_received:1746

[root@qd01-stop-k8s-master001 redis]# kubectl -n op  exec -it redis-cluster-shard0-0  -- redis-cli cluster nodes
1895cb4b9c31b848666c61000e502f55a29a8255 100.64.147.155:6379@16379 master - 0 1615365162008 2 connected 5461-10922
30bdbf2ca37001774498a9b935afbc1cd2ce389c 100.126.252.203:6379@16379 slave 2c06092fafa99e0158e39e6237a04fed25be3550 0 1615365163000 1 connected
9b2cfbd5c1b417121d410141b6da9512ad29ce3c 100.78.230.3:6379@16379 slave e83446c368839c5fdccf5f70e3b1004eb67cb651 0 1615365163512 3 connected
2c06092fafa99e0158e39e6237a04fed25be3550 100.82.197.130:6379@16379 myself,master - 0 1615365162000 1 connected 0-5460
1379d2b20f26ab13d53068d276ec5d988b7a0273 100.64.122.197:6379@16379 slave 1895cb4b9c31b848666c61000e502f55a29a8255 0 1615365163000 2 connected
e83446c368839c5fdccf5f70e3b1004eb67cb651 100.107.55.68:6379@16379 master - 0 1615365164014 3 connected 10923-16383
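
Finally, redis-cli's cluster manager can cross-check slot coverage and replica assignment from inside any node:

kubectl -n op exec -it redis-cluster-shard0-0 -- redis-cli --cluster check 127.0.0.1:6379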