ClickHouse + chproxy Cluster Setup

Cluster Planning

Cluster Architecture

Here, Distribute refers to the machines on which the distributed tables are created. In this article the Distribute role is deployed on its own machines, but it can also be assigned to every Replica machine, i.e. the same distributed table is created on all replica machines, which can be done with CREATE TABLE tbl ON CLUSTER 'cluster_name', as sketched below.
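
A minimal sketch of that variant, assuming a local table default.tbl already exists on every replica and using the cluster name cluster-1 configured later in metrika.xml (ON CLUSTER DDL also requires ZooKeeper and the distributed_ddl section of config.xml, which the default package config provides):

CREATE TABLE default.tbl_all ON CLUSTER 'cluster-1' AS default.tbl
ENGINE = Distributed('cluster-1', 'default', 'tbl', rand())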

Role Assignment

This installation uses a 5-node ZooKeeper ensemble; an ensemble of this size is not required for installing ClickHouse.


Installation Steps

Environment Preparation

1. Install clustershell

$ yum install -y clustershell 
$ vi /etc/clustershell/groups

all: clickhouse-node-[01-14]
replica1: clickhouse-node-[07,10,13]
replica2: clickhouse-node-[08,11,14]
distributed: clickhouse-node-[06,09,12]
chproxy: clickhouse-node-[06,09,12]

$ clush -a 'uptime'

2. Passwordless SSH login

$ chmod 755 ~/.ssh 
$ ssh-keygen -t rsa 
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
$ chmod 600 ~/.ssh/authorized_keys 
$ vi /etc/ssh/sshd_config
PubkeyAuthentication yes   # enable public/private key pair authentication; ssh-copy-id needs this
$ service sshd restart
$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@xxxx
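
To push the key to every node in one pass, clustershell's nodeset utility can expand the groups defined above (a sketch; each node still prompts for the root password unless key-based access already exists):

$ for h in $(nodeset -e @all); do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$h; done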

Installing ClickHouse

1. Install the RPM packages

在所有機器上下載并安裝 curl下載安裝腳本/包

clush -g all -b 'yum install -y curl'
On the replica, distributed, and chproxy machines, download and run the ClickHouse repository setup script provided by packagecloud.io:

$ clush -g replica1,replica2,distributed,chproxy -b 'curl -s https://packagecloud.io/install/repositories/altinity/clickhouse/script.rpm.sh | sudo bash'

Install clickhouse-server and clickhouse-client on the replica and distributed machines:

# check available packages
$ clush -g replica1,replica2,distributed -b 'sudo yum list "clickhouse*"'

# install
$ clush -g replica1,replica2,distributed -b 'sudo yum install -y clickhouse-server clickhouse-client clickhouse-compressor'
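
A quick sanity check that the packages landed on every node:

$ clush -g replica1,replica2,distributed -b 'rpm -qa | grep clickhouse'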

2. Adjust the ulimit configuration

$ vi /etc/security/limits.d/clickhouse.conf

# core file size: the maximum size of the dump file written when the process crashes
clickhouse       soft    core    1073741824
clickhouse       hard    core    1073741824
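
limits.d entries only take effect on a new PAM session; depending on the PAM setup, one way to verify them is the following (this assumes the clickhouse user already has a shell, see step 5):

$ su -s /bin/bash clickhouse -c 'ulimit -c'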

3. Modify the init script

$ vi /etc/init.d/clickhouse-server

CLICKHOUSE_LOGDIR=/data/clickhouse/logs

4. Update the cluster configuration

Edit /etc/clickhouse-server/config.xml according to the settings below; only the parameters that need to be changed or replaced are shown:

<?xml version="1.0"?>
<yandex>
    <logger>
        <level>trace</level>
        <log>/data/clickhouse/logs/server.log</log>
        <errorlog>/data/clickhouse/logs/error.log</errorlog>
        <size>1000M</size>
        <count>10</count>
    </logger>
    
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <interserver_http_port>9009</interserver_http_port>
    <listen_host>0.0.0.0</listen_host>

    <path>/data/clickhouse/</path>
    <tmp_path>/data/clickhouse/tmp/</tmp_path>
    <users_config>users.xml</users_config>

    <default_profile>default</default_profile>
    <default_database>default</default_database>
    <remote_servers incl="clickhouse_remote_servers" />

    <zookeeper incl="zookeeper-servers" optional="true" />
    <macros incl="macros" optional="true" />
    <include_from>/etc/clickhouse-server/metrika.xml</include_from>
</yandex>

Create /etc/clickhouse-server/metrika.xml:

<yandex>
<clickhouse_remote_servers>
    <cluster-1>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>clickhouse-node-07</host>
                <port>9000</port>
                <user>default</user>
                <password>6lYaUiFi</password>
            </replica>
            <replica>
                <host>clickhouse-node-08</host>
                <port>9000</port>
                <user>default</user>
                <password>6lYaUiFi</password>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>clickhouse-node-10</host>
                <port>9000</port>
                <user>default</user>
                <password>6lYaUiFi</password>
            </replica>
            <replica>
                <host>clickhouse-node-11</host>
                <port>9000</port>
                <user>default</user>
                <password>6lYaUiFi</password>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
               <host>clickhouse-node-13</host>
               <port>9000</port>
               <user>default</user>
               <password>6lYaUiFi</password>
            </replica>
            <replica>
               <host>clickhouse-node-14</host>
               <port>9000</port>
               <user>default</user>
               <password>6lYaUiFi</password>
            </replica>
        </shard>
    </cluster-1>
</clickhouse_remote_servers>

<zookeeper-servers>
    <node index="1">
        <host>clickhouse-node-01</host>
        <port>2181</port>
    </node>
    <node index="2">
        <host>clickhouse-node-02</host>
        <port>2181</port>
    </node>
    <node index="3">
        <host>clickhouse-node-03</host>
        <port>2181</port>
    </node>
    <node index="4">
        <host>clickhouse-node-04</host>
        <port>2181</port>
    </node>
    <node index="5">
        <host>clickhouse-node-05</host>
        <port>2181</port>
    </node>
</zookeeper-servers>

<macros>
    <cluster>cluster-1</cluster>
    <replica>host_name</replica>
    <shard>shard_number</shard>
</macros>

<clickhouse_compression>
    <case>
        <min_part_size>10000000000</min_part_size>
        <min_part_size_ratio>0.01</min_part_size_ratio>
        <method>lz4</method> 
   </case>
</clickhouse_compression>
</yandex>
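
Before starting the servers it is worth sanity-checking ZooKeeper connectivity from a ClickHouse node; ruok is a standard ZooKeeper four-letter command (nc is provided by the nc/nmap-ncat package):

$ echo ruok | nc clickhouse-node-01 2181
imok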

Replace the contents of /etc/clickhouse-server/users.xml with the following:

<?xml version="1.0"?>
<yandex>
    <profiles>
        <!-- read-write user profile -->
        <default>
            <max_memory_usage>10000000000</max_memory_usage>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>random</load_balancing>
        </default>
        <!-- read-only user profile -->
        <readonly>
            <max_memory_usage>10000000000</max_memory_usage>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>random</load_balancing>
            <readonly>1</readonly>
        </readonly>
    </profiles>
    <!-- quotas -->
    <quotas>
        <!-- Name of quota. -->
        <default>
            <interval>
                <duration>3600</duration>
                <queries>0</queries>
                <errors>0</errors>
                <result_rows>0</result_rows>
                <read_rows>0</read_rows>
                <execution_time>0</execution_time>
            </interval>
        </default>
    </quotas>
    <users>
        <!-- read-write user -->
        <default>
            <password_sha256_hex>967f3bf355dddfabfca1c9f5cab39352b2ec1cd0b05f9e1e6b8f629705fe7d6e</password_sha256_hex>
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>
        <!-- read-only user -->
        <readonly>
            <password_sha256_hex>967f3bf355dddfabfca1c9f5cab39352b2ec1cd0b05f9e1e6b8f629705fe7d6e</password_sha256_hex>
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
            <profile>readonly</profile>
            <quota>default</quota>
        </readonly>
    </users>
</yandex>
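
The password_sha256_hex values are the SHA-256 digest of the plaintext password and can be generated with, for example:

$ echo -n '6lYaUiFi' | sha256sum | awk '{print $1}'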

5. Sync the configuration

Give the clickhouse user a login shell:

$ clush -g replica1,replica2,distributed -b 'usermod -s /bin/bash clickhouse'

Create the ClickHouse data and log directories under /data/clickhouse/:

$ clush -g replica1,replica2,distributed -b 'mkdir /data/clickhouse/logs -p'
$ clush -g replica1,replica2,distributed -b 'chown clickhouse.clickhouse /data/clickhouse/ -R'

將配置文件復(fù)制到所有的 clickhouse 機器

$ clush -g replica1,replica2,distributed -b --copy /etc/security/limits.d/clickhouse.conf --dest /etc/security/limits.d/
$ clush -g replica1,replica2,distributed -b --copy /etc/init.d/clickhouse-server --dest /etc/init.d
$ clush -g replica1,replica2,distributed -b --copy /etc/clickhouse-server/config.xml --dest /etc/clickhouse-server/
$ clush -g replica1,replica2,distributed -b --copy /etc/clickhouse-server/users.xml --dest /etc/clickhouse-server/
$ clush -g replica1,replica2,distributed -b --copy /etc/clickhouse-server/metrika.xml --dest /etc/clickhouse-server/

Substitute the per-host variables (the metrika.xml copied above lives in /etc/clickhouse-server/):

# replace hostname
$ clush -g replica1,replica2,distributed -b 'sed -i "s/host_name/"$HOSTNAME"/" /etc/clickhouse-server/metrika.xml'

# replace shard_number
$ clush -w clickhouse-node-[06-08] -b 'sed -i "s/shard_number/1/" /etc/clickhouse-server/metrika.xml'
$ clush -w clickhouse-node-[09-11] -b 'sed -i "s/shard_number/2/" /etc/clickhouse-server/metrika.xml'
$ clush -w clickhouse-node-[12-14] -b 'sed -i "s/shard_number/3/" /etc/clickhouse-server/metrika.xml'
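
A quick grep confirms the substitutions; each host should report its own hostname and shard number:

$ clush -g replica1,replica2,distributed -b 'grep -A3 "<macros>" /etc/clickhouse-server/metrika.xml'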

6. Restart the service

# restart server
$ clush -g replica1,replica2,distributed -b 'service clickhouse-server restart'

# login with password
$ clickhouse-client -h 127.0.0.1 -d default -m -u default --password 6lYaUiFi 
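
Once all nodes are back up, the cluster layout can be verified from any node against the system.clusters table:

$ clickhouse-client -u default --password 6lYaUiFi --query 'SELECT cluster, shard_num, replica_num, host_name FROM system.clusters'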

Log in to each replica machine and create the database and the local table:

$ clickhouse-client -m -u default --password 6lYaUiFi

CREATE DATABASE IF NOT EXISTS monchickey;

CREATE TABLE monchickey.image_label (
    label_id UInt32, 
    label_name String, 
    insert_time Date
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/01-01/image_label','cluster01-01-1',insert_time, (label_id, insert_time), 8192)
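
The ZooKeeper path and replica name must be unique per shard and per replica, so the literal values above suit only one host; the macros defined in metrika.xml let the same statement run everywhere. A sketch, followed by the distributed table for the distributed machines (image_label_all is an illustrative name; create the monchickey database on those machines first):

CREATE TABLE monchickey.image_label (
    label_id UInt32, 
    label_name String, 
    insert_time Date
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/image_label', '{replica}', insert_time, (label_id, insert_time), 8192);

CREATE TABLE monchickey.image_label_all (
    label_id UInt32, 
    label_name String, 
    insert_time Date
) ENGINE = Distributed('cluster-1', 'monchickey', 'image_label', rand());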

Installing chproxy

1. Download chproxy

https://github.com/Vertamedia/chproxy/releases

$ mkdir -p /data/chproxy
$ cd /data/chproxy
$ wget https://github.com/Vertamedia/chproxy/releases/download/1.13.0/chproxy-linux-amd64-1.13.0.tar.gz
$ tar -xzvf chproxy-*.gz

This extracts the chproxy-linux-amd64 binary.

2. Configuration file

Create /data/chproxy/config.yml as follows. (Setting hack_me_please: true at the top level would disable chproxy's configuration security checks; it is not used here.)

server:
  http:
      listen_addr: ":9090"
      allowed_networks: ["172.0.0.0/8"]

users:
  - name: "distributed-write"
    to_cluster: "distributed-write"
    to_user: "default"

  - name: "replica-write"
    to_cluster: "replica-write"
    to_user: "default"

  - name: "distributed-read"
    to_cluster: "distributed-read"
    to_user: "readonly"
    max_concurrent_queries: 6
    max_execution_time: 1m

clusters:
  - name: "replica-write"
    replicas:
      - name: "replica1"
        nodes: ["clickhouse-node-07:8123", "clickhouse-node-10:8123", "clickhouse-node-13:8123"]
      - name: "replica2"
        nodes: ["clickhouse-node-08:8123", "clickhouse-node-11:8123", "clickhouse-node-14:8123"]
    users:
      - name: "default"
        password: "6lYaUiFi"

  - name: "distributed-write"
    nodes: [
      "clickhouse-node-06:8123",
      "clickhouse-node-09:8123",
      "clickhouse-node-12:8123"
    ]
    users:
      - name: "default"
        password: "6lYaUiFi"

  - name: "distributed-read"
    nodes: [
      "clickhouse-node-06:8123",
      "clickhouse-node-09:8123",
      "clickhouse-node-12:8123"
    ]
    users:
    - name: "readonly"
      password: "6lYaUiFi"

caches:
  - name: "shortterm"
    dir: "/data/chproxy/cache/shortterm"
    max_size: 150Mb
    expire: 130s
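
Note that the shortterm cache is declared but never referenced; chproxy only uses a cache when a user entry points at it via a cache field, e.g. (a sketch of the read user with caching enabled):

users:
  - name: "distributed-read"
    to_cluster: "distributed-read"
    to_user: "readonly"
    max_concurrent_queries: 6
    max_execution_time: 1m
    cache: "shortterm"
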
Create /data/chproxy/restart.sh:

$ vim /data/chproxy/restart.sh

#!/bin/bash
cd "$(dirname "$0")"
# kill the running chproxy instance, if any; [c] keeps grep from matching itself
ps -ef | grep '[c]hproxy-linux-amd64' | awk '{print $2}' | xargs -r kill -9
sudo -u chproxy nohup ./chproxy-linux-amd64 -config=./config/config.yml >> ./logs/chproxy.out 2>&1 &

3. Distributed installation

Create the user:

$ clush -g distributed -b 'useradd chproxy'

Create the directories:

$ clush -g distributed -b 'mkdir -p /data/chproxy/logs'
$ clush -g distributed -b 'mkdir -p /data/chproxy/config'
$ clush -g distributed -b 'mkdir -p /data/chproxy/cache/shortterm'
$ clush -g distributed -b 'mkdir -p /data/chproxy/cache/longterm'

Distribute the files:

$ clush -g distributed -b --copy /data/chproxy/chproxy-linux-amd64 --dest /data/chproxy/
$ clush -g distributed -b --copy /data/chproxy/config.yml --dest /data/chproxy/config/
$ clush -g distributed -b --copy /data/chproxy/restart.sh --dest /data/chproxy/

Fix the ownership:

$ clush -g distributed -b 'chown -R chproxy.chproxy /data/chproxy'

4. Start chproxy

$ clush -g distributed -b 'bash /data/chproxy/restart.sh'
$ clush -g distributed -b 'ps -ef | grep chproxy' # check

Check the HTTP interfaces

# clickhouse
$ echo 'SELECT 1' | curl 'http://localhost:8123/?user=default&password=6lYaUiFi' --data-binary @-
1

$ echo 'SELECT 1' | curl 'http://default:6lYaUiFi@localhost:8123/' --data-binary @-
1

$ echo 'SELECT 1' | curl 'http://readonly:6lYaUiFi@localhost:8123/' --data-binary @-
1

# chproxy
$ echo 'SELECT 1' | curl 'http://clickhouse-node-06:9090/?user=distributed-read&password=' --data-binary @-
1

$ echo 'SELECT 1' | curl 'http://clickhouse-node-06:9090/?user=distributed-write&password=' --data-binary @-
1
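
As a further smoke test, a write can be sent through the distributed-write user; this assumes the distributed table monchickey.image_label_all sketched earlier exists:

$ echo "INSERT INTO monchickey.image_label_all (label_id, label_name, insert_time) VALUES (1, 'test', today())" | curl 'http://clickhouse-node-06:9090/?user=distributed-write&password=' --data-binary @-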

Monitoring

ClickHouse monitoring dashboard template for Grafana:

https://github.com/Vertamedia/clickhouse-grafana
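
chproxy itself exposes Prometheus metrics on its HTTP port under /metrics, which Prometheus or Grafana can scrape:

$ curl -s 'http://clickhouse-node-06:9090/metrics' | head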

Storage space used by each database and table:

SELECT
    database,
    table,
    formatReadableSize(sum(bytes)) AS size
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes) DESC
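
A related health check for the replica nodes flags replicas that are read-only or lagging (system.replicas is populated only for Replicated* tables):

SELECT
    database,
    table,
    is_readonly,
    absolute_delay,
    active_replicas,
    total_replicas
FROM system.replicas
WHERE is_readonly OR absolute_delay > 0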

Problems Encountered

Problem #1

Action:

Starting clickhouse-server directly reports a ulimit-related error.

$ clickhouse-server --config-file=/etc/clickhouse-server/config.xml

Error:
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = Exception: Cannot set max size of core file to 1073741824, e.what() = Exception
Fix:

$ vi /etc/security/limits.d/clickhouse.conf

# core file size: the maximum size of the dump file written when the process crashes
clickhouse       soft    core    1073741824
clickhouse       hard    core    1073741824

Problem #2

Action:
$ clickhouse-server --config-file=/etc/clickhouse-server/config.xml
Error:
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Logging trace to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Logging trace to console
2019.02.13 15:15:36.539294 [ 1 ] {} <Information> : Starting ClickHouse 19.1.6 with revision 54413
2019.02.13 15:15:36.543324 [ 1 ] {} <Information> Application: starting up
2019.02.13 15:15:36.547676 [ 1 ] {} <Error> Application: DB::Exception: Effective user of the process (root) does not match the owner of the data (clickhouse). Run under 'sudo -u clickhouse'.
2019.02.13 15:15:36.547714 [ 1 ] {} <Information> Application: shutting down
2019.02.13 15:15:36.547729 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2019.02.13 15:15:36.547809 [ 2 ] {} <Information> BaseDaemon: Stop SignalListener thread

Fix:

$ sudo -u clickhouse clickhouse-server --config-file=/etc/clickhouse-server/config.xml

Problem #3

Action:
$ sudo -u clickhouse clickhouse-server --config-file=/etc/clickhouse-server/config.xml
Error:
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Couldn't save preprocessed config to /var/lib/clickhouse//preprocessed_configs/config.xml: Access to file denied: /var/lib/clickhouse//preprocessed_configs/config.xml
Logging trace to /var/log/clickhouse-server/clickhouse-server.log
Poco::Exception. Code: 1000, e.code() = 13, e.displayText() = Access to file denied: /var/log/clickhouse-server/clickhouse-server.log, e.what() = Access to file denied

Fix:

$ chown -R clickhouse /var/log/clickhouse-server/

Problem #4

Action:
$ sudo -u clickhouse clickhouse-server --config-file=/etc/clickhouse-server/config.xml
Error:
Logging trace to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Logging trace to console
2019.02.13 16:39:48.812708 [ 1 ] {} <Information> : Starting ClickHouse 19.1.6 with revision 54413
2019.02.13 16:39:48.815644 [ 1 ] {} <Information> Application: starting up
2019.02.13 16:39:48.819798 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 262144
2019.02.13 16:39:48.819827 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2019.02.13 16:39:48.819850 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone `Asia/Shanghai'.
2019.02.13 16:39:48.820256 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'clickhouse-node-13' as replica host.
2019.02.13 16:39:48.822770 [ 1 ] {} <Debug> ConfigReloader: Loading config `/data/clickhouse/users.xml'
Include not found: networks
Include not found: networks
2019.02.13 16:39:48.823764 [ 1 ] {} <Information> Application: Loading metadata.
2019.02.13 16:39:48.829479 [ 1 ] {} <Debug> Application: Loaded metadata.
2019.02.13 16:39:48.829592 [ 1 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2019.02.13 16:39:48.829762 [ 3 ] {} <Debug> DDLWorker: Started DDLWorker thread
2019.02.13 16:39:48.834746 [ 3 ] {} <Trace> ZooKeeper: initialized, hosts: clickhouse-node-03:2181,clickhouse-node-02:2181,clickhouse-node-05:2181,clickhouse-node-01:2181,clickhouse-node-04:2181
2019.02.13 16:39:48.834875 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2019.02.13 16:39:48.835531 [ 1 ] {} <Error> Application: Listen [::1]: 99: Net Exception: Cannot assign requested address: [::1]:8123  If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2019.02.13 16:39:48.835636 [ 1 ] {} <Information> Application: Listening http://127.0.0.1:8123
2019.02.13 16:39:48.835684 [ 1 ] {} <Information> Application: Listening tcp: 127.0.0.1:9000
2019.02.13 16:39:48.835734 [ 1 ] {} <Information> Application: Listening interserver http: 127.0.0.1:9009
2019.02.13 16:39:48.836105 [ 1 ] {} <Information> Application: Available RAM = 31.26 GiB; physical cores = 8; threads = 16.
2019.02.13 16:39:48.836120 [ 1 ] {} <Information> Application: Ready for connections.
2019.02.13 16:39:48.838717 [ 3 ] {} <Debug> DDLWorker: Processing tasks
2019.02.13 16:39:48.838977 [ 3 ] {} <Debug> DDLWorker: Waiting a watch
2019.02.13 16:39:50.838820 [ 23 ] {} <Debug> ConfigReloader: Loading config `/data/clickhouse/config.xml'
Fix:
$ vim /etc/clickhouse-server/config.xml

<listen_host>0.0.0.0</listen_host>

Problem #5

Action:
$ clush -g replica1,replica2 -b 'service clickhouse-server stop'
Issue:

The process listening on port 8123 cannot be stopped.

Fix:
$ lsof -i :8123 | grep clickhouse | awk '{print $2}' | xargs kill -9
# or
$ service clickhouse-server forcestop
