

Piggy Metrics

A simple solution for personal finances

This is a proof-of-concept application, which demonstrates the Microservice Architecture Pattern using Spring Boot, Spring Cloud and Docker.
With a pretty neat user interface, by the way.


Functional services

PiggyMetrics is decomposed into three core microservices. All of them are independently deployable applications, organized around their own business domains.

<img width="880" alt="Functional services" src="https://cloud.githubusercontent.com/assets/6069066/13900465/730f2922-ee20-11e5-8df0-e7b51c668847.png">

Account service

Contains general user input logic and validation: incomes/expenses items, savings and account settings.

| Method | Path | Description | User authenticated | Available from UI |
|--------|------|-------------|:------------------:|:-----------------:|
| GET | /accounts/{account} | Get specified account data | | |
| GET | /accounts/current | Get current account data | × | × |
| GET | /accounts/demo | Get demo account data (pre-filled incomes/expenses, etc.) | | × |
| PUT | /accounts/current | Save current account data | × | × |
| POST | /accounts/ | Register new account | | × |

Statistics service

Performs calculations on major statistics parameters for each account and captures time series. Each data point contains values normalized to the base currency and time period. This data is used to track cash flow dynamics over the account lifetime.

| Method | Path | Description | User authenticated | Available from UI |
|--------|------|-------------|:------------------:|:-----------------:|
| GET | /statistics/{account} | Get specified account statistics | | |
| GET | /statistics/current | Get current account statistics | × | × |
| GET | /statistics/demo | Get demo account statistics | | × |
| PUT | /statistics/{account} | Create or update time series data point for specified account | | |

Notification service

Stores users' contact information and notification settings (such as remind and backup frequency).
A scheduled worker collects the required information from other services and sends e-mail messages to subscribed customers.

| Method | Path | Description | User authenticated | Available from UI |
|--------|------|-------------|:------------------:|:-----------------:|
| GET | /notifications/settings/current | Get current account notification settings | × | × |
| PUT | /notifications/settings/current | Save current account notification settings | × | × |

Notes

  • Each microservice has its own database, so there is no way to bypass the API and access persistence data directly.
  • In this project, I use MongoDB as the primary database for each service.
    It might also make sense to have a polyglot persistence architecture (choosing the type of database that is best suited to each service).
  • Service-to-service communication is quite simplified: microservices talk using only a synchronous REST API.
    A common practice in real-world systems is to use a combination of interaction styles.
    For example, a synchronous GET request to retrieve data, and an asynchronous approach via a message broker for create/update operations, in order to decouple services and buffer messages. However, this brings us into the eventual consistency world.

基礎(chǔ)服務(wù)

There are a number of common patterns in distributed systems which can help us make the described core services work. [Spring Cloud](http://projects.spring.io/spring-cloud/) provides powerful tools that enhance the behaviour of Spring Boot applications to implement those patterns. I'll cover them briefly.
<img width="880" alt="Infrastructure services" src="https://cloud.githubusercontent.com/assets/6069066/13906840/365c0d94-eefa-11e5-90ad-9d74804ca412.png">

Config service

Spring Cloud Config is a horizontally scalable centralized configuration service for distributed systems. It uses a pluggable repository layer that currently supports local storage, Git and Subversion.

In this project, I use the native profile, which simply loads config files from the local classpath. You can see the shared directory in the Config service resources.
Now, when the Notification service requests its configuration, the Config service responds with shared/notification-service.yml and shared/application.yml (which is shared between all client applications).

Client side usage

Just use the spring-cloud-starter-config dependency, and autoconfiguration will do the rest.

現(xiàn)在讥此,您不需要在應(yīng)用程序中嵌入任何屬性示绊。 只需要提供 bootstrap.yml 應(yīng)用名和配置中心地址:

spring:
  application:
    name: notification-service
  cloud:
    config:
      uri: http://config:8888
      fail-fast: true
With Spring Cloud Config, you can change application configuration dynamically.

For example, the [EmailService bean](https://github.com/jinweibin/PiggyMetrics/blob/master/notification-service/src/main/java/com/piggymetrics/notification/service/EmailServiceImpl.java) is annotated with @RefreshScope.
That means you can change e-mail text and subject without rebuilding and restarting the Notification service application.

First, change the required properties in the Config server. Then send a refresh request to the Notification service:
curl -H "Authorization: Bearer #token#" -XPOST http://127.0.0.1:8000/notifications/refresh

Also, you can use Git webhooks to automate this process.

Notes
  • Dynamic refresh has some limitations. @RefreshScope doesn't work with @Configuration classes and doesn't affect @Scheduled methods.
  • The fail-fast property means that a Spring Boot application will fail startup immediately if it cannot connect to the Config Service.
  • There are significant security notes.

Auth service

鑒權(quán)任務(wù)被分?jǐn)偟礁鱾€微服務(wù)上,那些被 OAuth2 tokens 授權(quán)的后臺服務(wù)資源展哭。
Auth Server is used for user authorization as well as for secure machine-to-machine communication inside a perimeter。
鑒權(quán)服務(wù)器用于用戶鑒權(quán),也用于在外圍環(huán)境中進(jìn)行安全的機(jī)器到機(jī)器通信闻蛀。匪傍。

In this project, user authorization uses the Password credentials grant type (since it is used only by the native PiggyMetrics UI), while microservices are authorized with the Client Credentials grant.

Spring Cloud Security provides convenient annotations and autoconfiguration that make this really easy to implement on both the server and the client side.
You can learn more in the documentation and check the configuration details in the Auth Server code.

On the client side, everything works exactly the same as with traditional session-based authorization. You can retrieve the Principal object from the request, and check user roles and other details with expression-based access control and the @PreAuthorize annotation.
Each backend client in PiggyMetrics (Account service, Statistics service and Notification service) has the server scope, while the browser has the ui scope.
So we can also protect controllers from external access, for example:

@PreAuthorize("#oauth2.hasScope('server')")
@RequestMapping(value = "accounts/{name}", method = RequestMethod.GET)
public List<DataPoint> getStatisticsByAccountName(@PathVariable String name) {
    return statisticsService.findByAccountName(name);
}

API Gateway

As you can see, there are three core services, which expose an external API to the client.
In a real-world system, this number can grow very quickly, as can the complexity of the whole system.
In fact, hundreds of services might be involved in rendering just one complex webpage.

In theory, a client could make requests to each of the microservices directly.
But obviously there are challenges and limitations with this option: the client has to know all endpoint addresses, perform an HTTP request for every piece of information, and then merge the results on the client side.
Another problem is that the backend might use protocols that are not web-friendly.

Usually a much better approach is to use an API Gateway.
It is a single entry point into the system, used to handle requests by routing them to the appropriate backend service or by invoking multiple backend services and aggregating the results.
In addition, it can be used for authentication, insights, stress and canary testing, service migration, static response handling and active traffic management.

Netflix open-sourced such an edge service, and now with Spring Cloud we can enable it with a single @EnableZuulProxy annotation.
In this project, I use Zuul to store static content (the ui application) and to route requests to the appropriate microservices.
Here's a simple prefix-based routing configuration for the Notification service:

zuul:
  routes:
    notification-service:
        path: /notifications/**
        serviceId: notification-service
        stripPrefix: false

The configuration above means that all requests starting with /notifications will be routed to the Notification service.
There is no hardcoded address, as you can see: Zuul uses the Service discovery mechanism to locate Notification service instances, as well as [Load balancing](https://github.com/jinweibin/PiggyMetrics/blob/master/README.md#http-client-load-balancer-and-circuit-breaker).
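The prefix matching idea behind this routing can be sketched in plain Java. This is an illustrative sketch of the concept only; Zuul itself is driven by the configuration shown above, and the PrefixRouter name is hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of prefix-based routing: map a request path to a backend service id. */
class PrefixRouter {
    // LinkedHashMap keeps insertion order, so earlier routes are checked first
    private final Map<String, String> routes = new LinkedHashMap<>();

    /** Maps a path prefix such as "/notifications" to a service id. */
    void addRoute(String prefix, String serviceId) {
        routes.put(prefix, serviceId);
    }

    /** Returns the service id for the first matching prefix, or null if none matches. */
    String route(String path) {
        for (Map.Entry<String, String> e : routes.entrySet()) {
            if (path.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return null;
    }
}
```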

Service discovery

Another commonly known architecture pattern is Service discovery.
It allows automatic detection of the network locations of service instances,
which can change dynamically because of failures, upgrades or auto-scaling.

The key part of Service discovery is the Registry.
In this project, I use Netflix Eureka as the service registry.
Eureka is a good example of the client-side discovery pattern,
where the client is responsible for determining the locations of available service instances (using a Registry server) and load-balancing requests across them.

With Spring Boot, you can easily build a Eureka Registry with the spring-cloud-starter-eureka-server dependency, the @EnableEurekaServer annotation and simple configuration properties.

Client support is enabled with the @EnableDiscoveryClient annotation and a bootstrap.yml with the application name:

spring:
  application:
    name: notification-service

Now, on application startup, it will register with the Eureka Server and provide meta-data, such as host and port, health indicator URL, home page etc. Eureka receives heartbeat messages from each instance belonging to a service. If the heartbeat fails over a configurable timetable, the instance will be removed from the registry.

Also, Eureka provides a simple interface where you can track running services and the number of available instances: http://localhost:8761

Load balancer, Circuit breaker and Http client

Netflix OSS provides another great set of tools.

Ribbon

Ribbon is a client-side load balancer which gives you a lot of control over the behaviour of HTTP and TCP clients. Compared to a traditional load balancer, there is no need for an additional hop for every over-the-wire invocation: you can contact the desired service directly.

Out of the box, it natively integrates with Spring Cloud and Service Discovery. The Eureka Client provides a dynamic list of available servers so Ribbon can balance between them.
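Ribbon's core idea, rotating requests over the instance list supplied by the registry, can be sketched in plain Java. This is a minimal illustrative sketch with hypothetical names, not Ribbon's actual API:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of a client-side round-robin load balancer over discovered instances. */
class RoundRobinBalancer {
    private final List<String> servers; // e.g. refreshed from a service registry
    private final AtomicInteger position = new AtomicInteger(0);

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    /** Picks the next server in rotation, spreading requests evenly across instances. */
    String next() {
        int index = Math.floorMod(position.getAndIncrement(), servers.size());
        return servers.get(index);
    }
}
```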

Hystrix

Hystrix is the implementation of the Circuit Breaker pattern, which gives control over latency and failures from dependencies accessed over the network. The main idea is to stop cascading failures in a distributed environment with a large number of microservices. That helps to fail fast and recover as soon as possible - important aspects of fault-tolerant systems that self-heal.

Besides circuit breaker control, with Hystrix you can add a fallback method that will be called to obtain a default value in case the main command fails.
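The pattern Hystrix implements, failing fast while the circuit is open and probing with a single trial request after the sleep window, can be sketched in plain Java. This is an illustrative sketch with hypothetical names, not the Hystrix API:

```java
import java.util.function.Supplier;

/** Minimal circuit breaker sketch with a fallback, tracking consecutive failures. */
class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long sleepWindowMillis;
    private int consecutiveFailures = 0;
    private long openedAt = 0;
    private State state = State.CLOSED;

    CircuitBreaker(int failureThreshold, long sleepWindowMillis) {
        this.failureThreshold = failureThreshold;
        this.sleepWindowMillis = sleepWindowMillis;
    }

    <T> T execute(Supplier<T> command, Supplier<T> fallback) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt >= sleepWindowMillis) {
                state = State.HALF_OPEN;   // sleep window elapsed: let one trial through
            } else {
                return fallback.get();     // fail fast, no network call
            }
        }
        try {
            T result = command.get();
            consecutiveFailures = 0;       // success closes the circuit again
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
                state = State.OPEN;        // trip: subsequent calls fail fast
                openedAt = System.currentTimeMillis();
            }
            return fallback.get();
        }
    }

    State state() { return state; }
}
```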

Moreover, Hystrix generates metrics on execution outcomes and latency for each command, which we can use to monitor system behavior.

Feign

Feign is a declarative Http client, which seamlessly integrates with Ribbon and Hystrix. Actually, with one spring-cloud-starter-feign dependency and the @EnableFeignClients annotation you have a full set of Load balancer, Circuit breaker and Http client with a sensible ready-to-go default configuration.

Here is an example from Account Service:

@FeignClient(name = "statistics-service")
public interface StatisticsServiceClient {

    @RequestMapping(method = RequestMethod.PUT, value = "/statistics/{accountName}", consumes = MediaType.APPLICATION_JSON_UTF8_VALUE)
    void updateStatistics(@PathVariable("accountName") String accountName, Account account);

}
  • Everything you need is just an interface
  • You can share the @RequestMapping part between a Spring MVC controller and Feign methods
  • The above example specifies just the desired service id - statistics-service, thanks to autodiscovery through Eureka (but obviously you can access any resource with a specific url)

Monitor dashboard

In this project configuration, each microservice with Hystrix on board pushes metrics to Turbine via Spring Cloud Bus (with an AMQP broker). The Monitoring project is just a small Spring Boot application with Turbine and Hystrix Dashboard.

See below how to get it up and running.

Let's see our system behavior under load: the Account service calls the Statistics service, which responds with a variable imitation delay. The response timeout threshold is set to 1 second.

<img width="880" src="https://cloud.githubusercontent.com/assets/6069066/14194375/d9a2dd80-f7be-11e5-8bcc-9a2fce753cfe.png">

<img width="212" src="https://cloud.githubusercontent.com/assets/6069066/14127349/21e90026-f628-11e5-83f1-60108cb33490.gif"> <img width="212" src="https://cloud.githubusercontent.com/assets/6069066/14127348/21e6ed40-f628-11e5-9fa4-ed527bf35129.gif"> <img width="212" src="https://cloud.githubusercontent.com/assets/6069066/14127346/21b9aaa6-f628-11e5-9bba-aaccab60fd69.gif"> <img width="212" src="https://cloud.githubusercontent.com/assets/6069066/14127350/21eafe1c-f628-11e5-8ccd-a6b6873c046a.gif">

| 0 ms delay | 500 ms delay | 800 ms delay | 1100 ms delay |
|---|---|---|---|
| Well-behaving system. The throughput is about 22 requests/second. Small number of active threads in Statistics service. The median service time is about 50 ms. | The number of active threads is growing. We can see the purple number of thread-pool rejections and therefore about 30-40% of errors, but the circuit is still closed. | Half-open state: the ratio of failed commands is more than 50%, the circuit breaker kicks in. After the sleep window amount of time, the next request is let through. | 100 percent of the requests fail. The circuit is now permanently open. Retry after sleep time won't close the circuit again, because the single request is too slow. |

Log analysis

Centralized logging can be very useful when attempting to identify problems in a distributed environment. The Elasticsearch, Logstash and Kibana stack lets you search and analyze your logs, utilization and network activity data with ease.
A ready-to-go Docker configuration is described in my other project.

Distributed tracing

Analyzing problems in distributed systems can be difficult, for example, tracing requests that propagate from one microservice to another. It can be quite a challenge to try to find out how a request travels through the system, especially if you don't have any insight into the implementation of a microservice. Even when there is logging, it is hard to tell which action correlates to a single request.

Spring Cloud Sleuth solves this problem by providing support for distributed tracing. It adds two types of IDs to the logging: traceId and spanId. The spanId represents a basic unit of work, for example sending an HTTP request. The traceId contains a set of spans forming a tree-like structure. For example, with a distributed big-data store, a trace might be formed by a PUT request. Using traceId and spanId for each operation we know when and where our application is as it processes a request, making reading our logs much easier.

The logs are as follows, notice the [appname,traceId,spanId,exportable] entries from the Slf4J MDC:

2018-07-26 23:13:49.381  WARN [gateway,3216d0de1384bb4f,3216d0de1384bb4f,false] 2999 --- [nio-4000-exec-1] o.s.c.n.z.f.r.s.AbstractRibbonCommand    : The Hystrix timeout of 20000ms for the command account-service is set lower than the combination of the Ribbon read and connect timeout, 80000ms.
2018-07-26 23:13:49.562  INFO [account-service,3216d0de1384bb4f,404ff09c5cf91d2e,false] 3079 --- [nio-6000-exec-1] c.p.account.service.AccountServiceImpl   : new account has been created: test
  • appname: The name of the application that logged the span, from the property spring.application.name
  • traceId: This is an ID that is assigned to a single request, job, or action
  • spanId: The ID of a specific operation that took place
  • exportable: Whether the log should be exported to Zipkin
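The way these IDs relate, one traceId shared across spans while each unit of work gets its own spanId, can be sketched as follows. This is an illustrative sketch; TraceContext is a hypothetical name, not Sleuth's API:

```java
/** Sketch of trace/span id propagation and the [appname,traceId,spanId,exportable] log prefix. */
class TraceContext {
    final String traceId;  // shared by every span of one request
    final String spanId;   // unique per unit of work

    TraceContext(String traceId, String spanId) {
        this.traceId = traceId;
        this.spanId = spanId;
    }

    /** A new child span keeps the traceId but gets a fresh spanId. */
    TraceContext childSpan(String newSpanId) {
        return new TraceContext(traceId, newSpanId);
    }

    /** Formats the prefix as it appears in the log lines above. */
    String logPrefix(String appName, boolean exportable) {
        return "[" + appName + "," + traceId + "," + spanId + "," + exportable + "]";
    }
}
```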

Security

An advanced security configuration is beyond the scope of this proof-of-concept project. For a more realistic simulation of a real system, consider using https and a JCE keystore to encrypt microservice passwords and Config server properties content (see the documentation for details).

Infrastructure automation

Deploying microservices, with their interdependence, is a much more complex process than deploying a monolithic application. It is important to have a fully automated infrastructure. We can achieve the following benefits with a Continuous Delivery approach:

  • The ability to release software anytime
  • Any build could end up being a release
  • Build artifacts once - deploy as needed

Here is a simple Continuous Delivery workflow, implemented in this project:

<img width="880" src="https://cloud拟枚。githubusercontent。com/assets/6069066/14159789/0dd7a7ce-f6e9-11e5-9fbb-a7fe0f4431e3众弓。png">

In this configuration, Travis CI builds tagged images for each successful git push. So, there is always the latest image for each microservice on Docker Hub, and older images are tagged with the git commit hash. It's easy to deploy any of them and quickly roll back, if needed.

How to run all the things?

Keep in mind that you are going to start 8 Spring Boot applications, 4 MongoDB instances and RabbitMQ. Make sure you have 4 GB of RAM available on your machine. You can always run just the vital services, though: Gateway, Registry, Config, Auth Service and Account Service.

Before you start

  • Install Docker and Docker Compose.
  • Export environment variables: CONFIG_SERVICE_PASSWORD, NOTIFICATION_SERVICE_PASSWORD, STATISTICS_SERVICE_PASSWORD, ACCOUNT_SERVICE_PASSWORD, MONGODB_PASSWORD (make sure they were exported: printenv)
  • Make sure to build the project: mvn package [-DskipTests]

Production mode

In this mode, all latest images will be pulled from Docker Hub.
Just copy docker-compose.yml and hit docker-compose up

Development mode

If you'd like to build images yourself (with some changes in the code, for example), you have to clone all repositories and build the artifacts with Maven. Then, run docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

docker-compose.dev.yml inherits docker-compose.yml with the additional possibility to build images locally and expose all container ports for convenient development.

Important endpoints

Notes

All Spring Boot applications require an already running Config Server for startup. But we can start all containers simultaneously because of the depends_on docker-compose option.

Also, the Service Discovery mechanism needs some time after all applications start up. A service is not available for discovery by clients until the instance, the Eureka server and the client all have the same metadata in their local cache, so it could take 3 heartbeats. The default heartbeat period is 30 seconds.

Contributions are welcome!

PiggyMetrics is open source, and would greatly appreciate your help. Feel free to suggest and implement improvements.
