Deploying a Virtual Ceph Cluster on CentOS 7

I. Resources and Versions

cpu: Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz  // 4 cores
mem: 7686 MB total, 7935 MB swap
os: Linux promote.cache-dns.local 3.10.0-957.el7.x86_64
ceph: rh-luminous

II. Ceph Overview

  1. Distributed storage
  2. The Ceph layered architecture
  3. Minimal deployment: one admin node, one mon node, and two osd nodes (the node layout used throughout this guide is summarized below)
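
For reference, the node roles and addresses assigned in the later sections (summarized here from Sections IV-VI) are:

node    192.168.122.122   admin node (runs ceph-deploy / ceph-ansible)
node1   192.168.122.123   mon node (later also mgr, mds and rgw)
node2   192.168.122.124   osd node (data disk /dev/vdb)
node3   192.168.122.125   osd node (data disk /dev/vdb)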

III. Environment Preparation

  1. Deploy a KVM virtualization environment; see the companion article 基于KVM的虛擬機環(huán)境搭建 (setting up a KVM-based virtual machine environment).
  • Image

CentOS-7-x86_64-Minimal-1810.iso

  • Resources

mem: 1G
disk: 50G
cpu: 1 core

  • VM name

ceph

  • Virtual network

NAT:default

IV. Installing Tools on the Ceph Node

The operations in this section are performed inside the VM created in the previous section.

  1. Install common network tools
yum install net-tools -y
  2. Switch the network to a static IP. First check the current configuration:
ifconfig
netstat -rn

The result looks like the following figure:


CEPH_Node_01.png

Persist this information in the configuration files.
Configure DNS:

echo "NETWORKING=yes" >> /etc/sysconfig/network
echo "DNS1=114.114.114.114" >> /etc/sysconfig/network
echo "DNS2=8.8.8.8" >> /etc/sysconfig/network

Configure the static IP:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Change the following settings and leave the rest unchanged:

#BOOTPROTO="dhcp"        # comment this line out
BOOTPROTO="static"
NM_CONTROLLED=no
IPADDR=192.168.122.122   # keeping the originally assigned IP is also fine
NETMASK=255.255.255.0
GATEWAY=192.168.122.1

Add the host names:

echo "192.168.122.122 node" >> /etc/hosts 
echo "192.168.122.123 node1" >> /etc/hosts 
echo "192.168.122.124 node2" >> /etc/hosts 
echo "192.168.122.125 node3" >> /etc/hosts

Restart the network service:

service network restart

NOTE: if you are connected to this VM over SSH, you will lose the connection; close the terminal and reconnect. A quick check of the new settings is sketched below.
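
To confirm the static configuration took effect after the restart, a minimal check (a sketch; the gateway address assumes the default libvirt NAT network chosen earlier):

ip addr show eth0              # should show 192.168.122.122/24
ip route                       # the default route should point at 192.168.122.1
ping -c 3 192.168.122.1        # gateway reachable
ping -c 3 114.114.114.114      # outbound connectivity / DNS server reachable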

  3. Yum setup
  • Install the EPEL repository and yum plugins
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y yum-plugin-priorities
yum install -y yum-utils 
  • Repository configuration
    Create the ceph repo file and open it for editing
touch /etc/yum.repos.d/ceph.repo
vi /etc/yum.repos.d/ceph.repo

Write the following content into the file:

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

Update via yum:

yum update -y
  4. Clock synchronization (enabling the service is sketched below)
yum install -y ntp ntpdate ntp-doc
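
The guide only installs the NTP packages; to actually keep the node clocks in sync you would normally also enable and start the daemon. A minimal sketch (not part of the original steps):

systemctl enable ntpd
systemctl start ntpd
ntpq -p        # verify that ntpd has picked up time sources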
  5. Firewall and SELinux configuration (persisting the SELinux change is sketched below)
firewall-cmd --zone=public --add-service=ceph-mon --permanent
firewall-cmd --zone=public --add-service=ceph --permanent
firewall-cmd --reload
iptables -A INPUT -i eth0 -p tcp -s 192.168.122.0/24 -d 192.168.122.254 -j ACCEPT
iptables-save 
sudo setenforce 0
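
setenforce 0 only switches SELinux to permissive until the next reboot. To make the change persistent (a sketch, not part of the original steps), edit /etc/selinux/config:

sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
getenforce        # should report Permissive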
  6. Ceph deployment user
    Create the user
useradd -g root -m cephD -d /home/cephD
passwd cephD

Grant passwordless sudo (a quick verification is sketched below):

echo "cephD ALL=(ALL)NOPASSWD: ALL" | sudo tee /etc/sudoers.d/cephD
chmod 0440 /etc/sudoers.d/cephD
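
To verify that the sudoers entry works, a quick check run as root on this VM:

su - cephD -c 'sudo whoami'        # should print "root" without asking for a password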

V. Cloning the Ceph Nodes

The operations in this section are performed on the host machine.

  1. Gain root privileges
sudo su
  2. Shut down the ceph VM
virsh shutdown ceph
  3. Clone the ceph-1, ceph-2 and ceph-3 nodes
virt-clone -o ceph -n ceph-1 -f /home/data/ceph-1.qcow2
virt-clone -o ceph -n ceph-2 -f /home/data/ceph-2.qcow2
virt-clone -o ceph -n ceph-3 -f /home/data/ceph-3.qcow2

Note: ceph is the admin node, ceph-1 the mon node, and ceph-2/ceph-3 the osd nodes.

  4. Attach the OSD disks
  • Create the disk images
qemu-img create -f qcow2 /home/data/osd1.qcow2 50g
qemu-img create -f qcow2 /home/data/osd2.qcow2 50g
qemu-img create -f qcow2 /home/data/osd3.qcow2 50g
  • Edit the domain definition to attach a disk to the VM (using ceph-2 as an example; an equivalent one-line alternative is sketched after the XML snippet)
virsh edit ceph-2

Add the following under the <devices> element of the domain:

 <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/home/data/osd2.qcow2'/>
      <target dev='vdb' bus='virtio'/>
 </disk>
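
Editing the XML by hand works; an equivalent alternative (a sketch, assuming the same image path and target device) is a single virsh attach-disk call, after which lsblk inside the guest should show the new disk:

virsh attach-disk ceph-2 /home/data/osd2.qcow2 vdb --driver qemu --subdriver qcow2 --targetbus virtio --persistent
lsblk        # run inside the guest: vdb should appear as a 50G disk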
  5. Start the VMs
virsh start ceph
virsh start ceph-1
virsh start ceph-2
virsh start ceph-3

Check the VM states:

virsh list --all
CEPH_Node_02.png
  6. Fix the VM IP addresses
    After cloning, the IP address is cloned along with the VM, which causes conflicts on the same subnet, so it has to be changed manually (using ceph-1 as an example)
virt-viewer -c qemu:///system ceph-1

Open the console, log in as root, and edit the IP of eth0:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Change IPADDR so that it does not conflict with the other VMs:

IPADDR=192.168.122.123

Restart the network service:

service network restart

Similarly, change the IPs of ceph-2 and ceph-3 to 192.168.122.124 and 192.168.122.125.

VI. Deploying the Ceph Cluster with ceph-deploy

The operations in this section are performed on the ceph VM (the admin node).

  1. Install ceph-deploy
yum install -y ceph-deploy
  2. Set up passwordless SSH from the deployment user to the other nodes
su - cephD

Generate the SSH key pair; do not enter a passphrase, just press [enter] at every prompt:

ssh-keygen
CEPH_Node_03.png

Distribute the key to the other nodes and add an SSH client config (a quick verification is sketched after the config):

ssh-copy-id cephD@node1
ssh-copy-id cephD@node2
ssh-copy-id cephD@node3
cd ~;
touch ~/.ssh/config;
vi ~/.ssh/config

Enter the following content:

Host node1
    Hostname node1
    User cephD
Host node2
    Hostname node2
    User cephD
Host node3
    Hostname node3
    User cephD
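
With the keys copied and ~/.ssh/config in place, passwordless login can be verified before creating the cluster (a minimal check):

ssh node1 hostname
ssh node2 hostname
ssh node3 hostname        # each should print the remote host name without prompting for a password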
  3. Create the cluster
cd ~;
mkdir my-cluster;cd my-cluster;
ceph-deploy new node1

The result:


CEPH_Node_04.png

Set the default pool replica count to 2 and declare the public network:

echo "osd pool default size = 2" >> ceph.conf
echo "public_network = 192.168.122.0/24" >> ceph.conf
  4. Install ceph on the cluster nodes
ceph-deploy install --release luminous node node1 node2 node3
CEPH_Node_05.png
  5. Initialize the ceph-mon service
ceph-deploy mon create-initial
CEPH_Node_06.png
  6. Copy the admin keyring and config file to each node
ceph-deploy admin node node1 node2 node3
  7. Deploy a manager daemon (mgr)
ceph-deploy mgr create node1 

NOTE: what is the relationship between mgr and mon? (See item 7 in Section X.)

  8. Add the OSD nodes
ceph-deploy osd create --data /dev/vdb node2
ceph-deploy osd create --data /dev/vdb node3
CEPH_Node_07.png
  9. Check the ceph cluster status
ssh node1 sudo ceph health
ssh node2 sudo ceph health
ssh node3 sudo ceph health
CEPH_Node_08.png
ssh node1 sudo ceph -s
CEPH_Node_09.png
  10. Expand the cluster
  • Add metadata servers (MDS)
ceph-deploy mds create node1
ceph-deploy mds create node2
  • Add additional ceph-mon daemons
ceph-deploy mon add node2 
ceph-deploy mon add node3

NOTE: all three cluster nodes are now running ceph-mon?

  • Add additional manager daemons
ceph-deploy mgr create node2 node3
  • Add RGW instances
ceph-deploy rgw create node1
ceph-deploy rgw create node2
  11. Pool operations
ceph osd pool create mytest 8                                    # create a pool with 8 placement groups
ceph osd pool rm mytest mytest --yes-i-really-really-mean-it     # delete (the pool name must be given twice; mon_allow_pool_delete must be enabled)
  12. Object operations (reading an object back is sketched after this session)
[cephD@node my-cluster]$ rados put test-object-1 ceph.log --pool=mytest
[cephD@node my-cluster]$ rados -p mytest ls
test-object-1
[cephD@node my-cluster]$ ceph osd map mytest test-object-1
osdmap e26 pool &apos;mytest&apos; (5) object &apos;test-object-1&apos; -&gt; pg 5.74dc35e2 (5.2) -&gt; up ([1,0], p1) acting ([1,0], p1)
[cephD@node my-cluster]$ rados rm test-object-1 --pool=mytest
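
For completeness, a stored object can also be read back and compared with the original before it is removed (a sketch; the output file name is arbitrary):

rados get test-object-1 test-object-1.out --pool=mytest
diff ceph.log test-object-1.out        # no output means the object round-tripped intact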

VII. Deploying the Ceph Cluster with ansible

This section is executed on the ceph host as the cephD user.

  1. Preparation
  • Tear down the cluster deployed with ceph-deploy
cd ~/my-cluster;
ceph-deploy purge node node1 node2 node3
ceph-deploy purgedata node node1 node2 node3
ceph-deploy forgetkeys
rm ceph.*
  • Install the python-pip tool
cd ~;
sudo yum update -y;
sudo yum install -y python-pip;
  2. Install ceph-ansible
  • Install ansible-2.6.4
sudo yum install -y PyYAML
sudo yum install -y python-jinja2
sudo yum install -y python-paramiko
sudo yum install -y python-six
sudo yum install -y python2-cryptography
sudo yum install -y sshpass
wget https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.6.4-1.el7.ans.noarch.rpm
sudo rpm -ivh ansible-2.6.4-1.el7.ans.noarch.rpm
ansible --version
CEPH_ANSIBLE_02.png
  • Download ceph-ansible
cd ~;
sudo yum install -y git;
git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible;
git branch -a|grep stable

The result:


CEPH_ANSIBLE_01.png
  • Release compatibility

ceph-ansible branch    ceph release          ansible version
stable-3.0             jewel and luminous    2.4
stable-3.1             luminous and mimic    2.4
stable-3.2             luminous and mimic    2.6
master                 luminous and mimic    2.7
  • Check out stable-3.2 and resolve the Python dependencies
git checkout stable-3.2
sudo pip install -r requirements.txt
sudo pip install --upgrade pip
  3. Configure the Inventory (cluster hosts); an equivalent form using sudo tee is sketched after these commands
sudo chmod 0660 /etc/ansible/hosts 
sudo echo "[mons]">>/etc/ansible/hosts
sudo echo "node1">>/etc/ansible/hosts
sudo echo "node2">>/etc/ansible/hosts
sudo echo "[osds]">>/etc/ansible/hosts
sudo echo "node2">>/etc/ansible/hosts
sudo echo "node3">>/etc/ansible/hosts
sudo echo "[mgrs]">>/etc/ansible/hosts
sudo echo "node1">>/etc/ansible/hosts
sudo echo "node2">>/etc/ansible/hosts
sudo echo "node3">>/etc/ansible/hosts
  4. Configure the Playbook
cp site.yml.sample site.yml
  5. Configure the ceph deployment variables
cp group_vars/all.yml.sample group_vars/all.yml
vi group_vars/all.yml
------
###########
# INSTALL #
###########
ceph_origin: repository
ceph_repository: community
ceph_stable_release: luminous
ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}/el7/x86_64"
......
monitor_interface: eth0
......
public_network: 192.168.122.0/24
osd_objectstore: filestore
devices:
  - '/dev/vdb'
osd_scenario: collocated
------
  6. Run the installation
ansible-playbook site.yml -vv
ceph -s

NOTE: -vv makes ansible print more detailed error information

PLAY RECAP ********************************************************************************************************************************************************************************************************
node1                      : ok=165  changed=26   unreachable=0    failed=0   
node2                      : ok=248  changed=35   unreachable=0    failed=0   
node3                      : ok=176  changed=26   unreachable=0    failed=0   


INSTALLER STATUS **************************************************************************************************************************************************************************************************
Install Ceph Monitor        : Complete (0:07:34)
Install Ceph Manager        : Complete (0:07:58)
Install Ceph OSD            : Complete (0:01:09)

Wednesday 27 March 2019  02:50:32 -0400 (0:00:00.065)       0:17:19.385 ******* 
=============================================================================== 
ceph-common : install redhat ceph packages --------------------------------------------------------------------------------------------------------------------------------------------------------------- 274.13s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:16 -------------------------------------------------------------------------------------------------------------------------
ceph-common : install redhat ceph packages --------------------------------------------------------------------------------------------------------------------------------------------------------------- 230.22s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:16 -------------------------------------------------------------------------------------------------------------------------
ceph-common : install centos dependencies ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 104.34s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:9 --------------------------------------------------------------------------------------------------------------------------
ceph-common : install centos dependencies ----------------------------------------------------------------------------------------------------------------------------------------------------------------- 93.92s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:9 --------------------------------------------------------------------------------------------------------------------------
ceph-mgr : install ceph-mgr package on RedHat or SUSE ----------------------------------------------------------------------------------------------------------------------------------------------------- 78.47s
/home/cephD/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2 ------------------------------------------------------------------------------------------------------------------------------------------------
ceph-mon : create ceph mgr keyring(s) when mon is not containerized --------------------------------------------------------------------------------------------------------------------------------------- 18.35s
/home/cephD/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:61 ---------------------------------------------------------------------------------------------------------------------------------------------------
ceph-osd : manually prepare ceph "filestore" non-containerized osd disk(s) with collocated osd data and journal ------------------------------------------------------------------------------------------- 12.11s
/home/cephD/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53 ----------------------------------------------------------------------------------------------------------------------------------------
ceph-osd : activate osd(s) when device is a disk ----------------------------------------------------------------------------------------------------------------------------------------------------------- 9.93s
/home/cephD/ceph-ansible/roles/ceph-osd/tasks/activate_osds.yml:5 ------------------------------------------------------------------------------------------------------------------------------------------------
ceph-config : generate ceph configuration file: ceph.conf -------------------------------------------------------------------------------------------------------------------------------------------------- 7.68s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/main.yml:77 -----------------------------------------------------------------------------------------------------------------------------------------------------
ceph-mon : collect admin and bootstrap keys ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.42s
/home/cephD/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:2 ----------------------------------------------------------------------------------------------------------------------------------------------------
ceph-mon : create monitor initial keyring ------------------------------------------------------------------------------------------------------------------------------------------------------------------ 5.64s
/home/cephD/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22 ---------------------------------------------------------------------------------------------------------------------------------------------
ceph-mgr : disable ceph mgr enabled modules ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 5.45s
/home/cephD/ceph-ansible/roles/ceph-mgr/tasks/main.yml:32 --------------------------------------------------------------------------------------------------------------------------------------------------------
ceph-config : generate ceph configuration file: ceph.conf -------------------------------------------------------------------------------------------------------------------------------------------------- 4.88s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/main.yml:77 -----------------------------------------------------------------------------------------------------------------------------------------------------
ceph-common : configure red hat ceph community repository stable key --------------------------------------------------------------------------------------------------------------------------------------- 4.35s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/redhat_community_repository.yml:2 ----------------------------------------------------------------------------------------------------------------------
ceph-common : configure red hat ceph community repository stable key --------------------------------------------------------------------------------------------------------------------------------------- 4.07s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/redhat_community_repository.yml:2 ----------------------------------------------------------------------------------------------------------------------
ceph-config : create ceph initial directories -------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.06s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml:18 ---------------------------------------------------------------------------------------------------------------------------------
ceph-common : purge yum cache ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 3.59s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/configure_redhat_repository_installation.yml:23 --------------------------------------------------------------------------------------------------------
ceph-common : configure red hat ceph community repository stable key --------------------------------------------------------------------------------------------------------------------------------------- 3.27s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/redhat_community_repository.yml:2 ----------------------------------------------------------------------------------------------------------------------
ceph-config : create ceph initial directories -------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.27s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml:18 ---------------------------------------------------------------------------------------------------------------------------------
ceph-config : create ceph initial directories -------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.12s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml:18 ---------------------------------------------------------------------------------------------------------------------------------

Check the cluster status:

cephD@node ceph-ansible (stable-3.2) $ ssh node1 sudo ceph -s
  cluster:
    id:     bb653ada-5753-4672-9d3b-b5e92846b897
    health: HEALTH_OK
 
  services:
    mon: 2 daemons, quorum node1,node2
    mgr: node2(active), standbys: node3, node1
    osd: 2 osds: 2 up, 2 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   214MiB used, 89.7GiB / 90.0GiB avail
    pgs:     

For other operations, refer to the steps after step 7 in Section VI (deploying with ceph-deploy).

NOTE: this installation has no dedicated ceph-admin node, so ceph is not installed on node itself; all ceph commands must be run on node1:
ssh node1

VIII. Offline Deployment

This chapter is executed on the ceph host as the cephD user.

  1. Set up a local repository
    See CentOS7搭建本地倉庫--CEPH (setting up a local CentOS 7 repository for Ceph)
  2. Deploy with ceph-ansible
    Refer to Section VII (deploying the ceph cluster with ansible)
  3. Differences from Section VII
  • Note on installing the Python dependencies with pip
sudo pip install -r /home/cephD/ceph-ansible/requirements.txt --find-links=http://192.168.232.129/repo/python/deps/ --trusted-host 192.168.232.129
  • Note on the ceph deployment variables
cp group_vars/all.yml.sample group_vars/all.yml
vi group_vars/all.yml
------
###########
# INSTALL #
###########
ceph_origin: repository
ceph_repository: custom
ceph_stable_release: luminous
ceph_stable_repo: "http://192.168.232.129/repo/ceph/luminous/"
......
monitor_interface: eth0
......
public_network: 192.168.122.0/24
osd_objectstore: filestore
devices:
  - '/dev/sdb'
osd_scenario: collocated
------
  4. Reminder
    The cephD user setup steps are still required when deploying the cluster with ceph-ansible.

IX. Operating the Cluster

  1. Start all daemons
sudo systemctl start ceph.target
  2. Stop all daemons (managing individual daemons is sketched below)
sudo systemctl stop ceph\*.service ceph\*.target
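
Individual daemons can also be managed through their systemd instance units (a sketch; the instance names depend on the node, e.g. the mon on node1 or osd id 0):

sudo systemctl restart ceph-mon@node1
sudo systemctl status ceph-osd@0
sudo systemctl list-units 'ceph*'        # list all ceph units present on a node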

X. Problems & Solutions

  1. [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
    Solution: wait about 20 minutes and run the command again (sometimes, due to network issues, yum install -y ceph ceph-radosgw takes longer than 300s and times out).
  2. [node1][WARNIN] Another app is currently holding the yum lock; waiting for it to exit...
    Solution: wait, or find the process holding the lock with ps -ef | grep yum, cancel it, and then run the yum commands one at a time.
  3. Installation is very slow
    Solution: the nodes do not have to be installed in a single command; in testing, parallel installation works, for example:
ceph-deploy install --release luminous node &
ceph-deploy install --release luminous node1 &
ceph-deploy install --release luminous node2 &
ceph-deploy install --release luminous node3 &
  4. auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring ---- ceph quorum_status --format json-pretty
sudo cp * /etc/ceph/
sudo chown cephD:root /etc/ceph/*
  5. [ceph_deploy.rgw][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; ...
ceph-deploy  --overwrite-conf rgw create node1
  6. [ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
echo "public_network = 192.168.122.0/24" >> ceph.conf
ceph-deploy --overwrite-conf config push node node1 node2 node3
  7. What is the difference between mgr and mon?
    Before the Luminous release the mgr functionality was part of the mon daemon; starting with the L release it was split out into a separate daemon.

XI. References

http://docs.ceph.com/ceph-ansible/master/
http://docs.ceph.com/docs/master/start/
