Investigating local queuing: Redis, NSQ and LMDB

Systems designed for cloud services assume instances can die at any time, so they’re written to defend against this. Networks in cloud services are also incredibly unreliable, often much less reliable than the instances themselves. When considering a design, remember that a node can be partitioned from other services, possibly for long periods of time.

One easy example here is logs (including stats and analytics events): we want to ensure logs are delivered, but we also don’t want delivery to affect service operation.

There are lots of ways to handle this. Our original solution was to write logs to files, then forward them along with logstash. We were doing this for both bulk logs and analytics events. However, logstash was using considerable resources per node, so we switched to a local Redis daemon plus a local Python daemon (using gevent) to forward analytics events.

For short partitions a local Redis daemon with a worker is quite effective: delivery is quick and the queue stays empty. For long partitions (or a long failure in a remote service) we’d continue serving requests, but at some point Redis would run out of memory and we could start dropping events.
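A minimal sketch of the pattern, not our actual daemon: the service LPUSHes events onto a local Redis list, and a gevent worker BRPOPs and forwards them. The list name and the `forward()` target here are hypothetical.

```python
from gevent import monkey; monkey.patch_all()  # make redis-py's sockets cooperative

import json
import gevent
import redis

r = redis.Redis(host='127.0.0.1', port=6379)

def enqueue(event):
    # Local write: fast, and unaffected by the health of the remote sink.
    r.lpush('analytics-events', json.dumps(event))

def forward(event):
    # Stand-in for delivery to the central collector.
    print('delivering', event)

def worker():
    while True:
        # BRPOP blocks until an event arrives, so the queue stays empty
        # whenever the remote endpoint is keeping up.
        _, raw = r.brpop('analytics-events')
        forward(json.loads(raw))

gevent.spawn(worker)
enqueue({'type': 'pageview', 'path': '/'})
gevent.sleep(1)  # give the worker a moment to drain the queue
```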

We’ve been really happy with the Redis-based solution. To date we haven’t had a partition event (either a network failure or a service failure) long enough to worry about, but we also had a mishmash of solutions for handling analytics events (and partitions) across our services and wanted a standard solution to the problem. We could either roll the local Redis solution out to everything, or go with something a bit more robust.

We decided to investigate options that are in-memory but can spill to disk when the data set grows past memory limits. I won’t go too much into the details of the investigation (sorry), but we eventually narrowed the choice to NSQ.

During this same time period I had been solving another issue using LMDB, a memory-mapped database that’s thread-safe and multiprocess-safe. I wondered if we could avoid running a daemon for the queue at all and simply have the processes push and pop from LMDB. Fewer daemons can mean less work and fewer possible failure points.
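A minimal sketch of the idea (paths and sizes are illustrative): use LMDB itself as the queue, with producers and consumers in separate processes and no broker daemon at all, via the py-lmdb binding.

```python
import struct
import lmdb

env = lmdb.open('/var/tmp/event-queue', map_size=2**30)  # 1 GiB map, hypothetical

def push(value):
    # LMDB serializes writers, so reading the last key and appending
    # after it is race-free inside a single write transaction.
    with env.begin(write=True) as txn:
        with txn.cursor() as cur:
            seq = struct.unpack('>Q', cur.key())[0] + 1 if cur.last() else 0
        txn.put(struct.pack('>Q', seq), value)

def pop():
    # Remove and return the oldest entry, or None if the queue is empty.
    with env.begin(write=True) as txn:
        with txn.cursor() as cur:
            if not cur.first():
                return None
            value = bytes(cur.value())
            cur.delete()
            return value

push(b'{"type": "pageview"}')
print(pop())
```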

Before going too far into LMDB I also considered some other memory-mapped databases, but most explicitly state that they’re only thread-safe and shouldn’t be used across processes (LevelDB and its variants, for example). BDB could be a consideration, but its license change to AGPLv3 makes it a bit toxic.

Initial testing of LMDB was promising. Write speeds were more than adequate, even with writes being serialized across processes. Library support was generally adequate, even across languages. The major concern, however, was deadlocks across processes: LMDB claims to support multi-process concurrency, which is true assuming perfect conditions.

With LMDB, reads are never blocked, but writes are serialized using a mutually exclusive lock at the database level. The lock is taken when a write transaction starts; all other threads or processes waiting for the write lock block until it’s released. If a process starts a write transaction and exits uncleanly, any other process waiting on the lock will block indefinitely.
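A sketch of that failure mode (hypothetical path): a writer that dies while holding LMDB’s write lock leaves every other writer blocked.

```python
import os
import lmdb

env = lmdb.open('/var/tmp/event-queue')
txn = env.begin(write=True)  # takes the database-wide write lock
os._exit(1)                  # unclean exit: the lock is never released

# Any other process now calling env.begin(write=True) blocks indefinitely,
# which is exactly the cross-process deadlock described above.
```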

In LMDB’s development branch, support has been added for robust mutexes, which solves this problem; however, it’s not available in a stable release, and I can’t find any information about robust mutex support across containers, which would be necessary for this solution to work for us down the road.

LMDB was a fun diversion and mental exercise, but it wasn’t an ideal solution here. After spending a couple of days exploring LMDB, I moved on to a product we’d been wanting to explore for a while: NSQ.

NSQ is a realtime distributed messaging platform. In our use case we’re only using it for local queuing, but it’s really well suited for that. Performance for our use case was more than adequate, and library support is reasonable. Even where there are no libraries, the write protocol is simple and can be either TCP- or HTTP-based. The Python support is good, assuming you’re using tornado; the gevent support isn’t wonderful. There’s a fork of the bitly Python library that adds gevent support, but it hasn’t been updated in a while, and it looks like it was meant as a temporary project to make the bitly library more generic, an effort that hasn’t been fully followed through.
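To illustrate how simple writes are, here’s a minimal sketch of publishing over nsqd’s HTTP interface (the /pub endpoint on the default HTTP port, 4151); the topic name is illustrative.

```python
import requests

def publish(event):
    resp = requests.post('http://127.0.0.1:4151/pub',
                         params={'topic': 'analytics'},
                         data=event, timeout=1.0)
    resp.raise_for_status()

publish(b'{"type": "pageview", "path": "/"}')
```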

On the consumer side we’re using the same in-house Python daemon, adapted for NSQ via the forked nsq-py project. The fork met the needs of our use case, though it had issues with stale connections (which we’ve fixed).
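For illustration only (we run the nsq-py fork, not this): a consumer built on bitly’s tornado-based pynsq looks roughly like the following. The topic and channel names are hypothetical.

```python
import nsq

def handler(message):
    # Returning True marks the message as finished; returning False
    # (or raising) requeues it.
    print(message.body)
    return True

reader = nsq.Reader(message_handler=handler,
                    nsqd_tcp_addresses=['127.0.0.1:4150'],
                    topic='analytics', channel='forwarder',
                    max_in_flight=8)
nsq.run()
```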

The biggest benefit we’ve gained from the switch is that we can overflow to disk in the case of long partitions. We also have a lot of new options: NSQ has a healthy suite of utilities. We could replace our custom Python daemon with nsq_to_http; we could listen to a topic on multiple channels for a fast path (off to HTTP) and a slow path (off to S3) for events; and we could forward from the local NSQ to centralized NSQs using nsq_to_nsq. Monitoring is also quite good: there’s a really helpful CLI utility, nsq_stat, for quick checks, and NSQ ships stats off to statsd by default. Some of these options are sketched below.
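Hypothetical invocations of the stock NSQ utilities mentioned above; the addresses and topic/channel names are illustrative.

```sh
# Fast path: POST each event on the 'analytics' topic to a collector.
nsq_to_http -topic=analytics -channel=fast \
    -nsqd-tcp-address=127.0.0.1:4150 \
    -post=http://collector.internal/events

# Forward from the local nsqd to a centralized one.
nsq_to_nsq -topic=analytics -channel=relay \
    -nsqd-tcp-address=127.0.0.1:4150 \
    -destination-nsqd-tcp-address=nsq-central.internal:4150

# Quick terminal monitoring of a topic/channel.
nsq_stat -topic=analytics -channel=fast -nsqd-http-address=127.0.0.1:4151
```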

NSQ doesn’t seem to have a robust method of restarting or reloading for configuration changes, but we rarely need to restart the daemon. NSQ process restarts generally take less than a second, so services pushing into NSQ retry with backoff (sketched below) and take the associated latency hit.
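A minimal sketch of that retry-with-backoff pattern (the endpoint and limits are illustrative): a sub-second nsqd restart shows up as a little extra latency rather than a lost event.

```python
import time
import requests

def publish_with_backoff(event, retries=5):
    for attempt in range(retries):
        try:
            resp = requests.post('http://127.0.0.1:4151/pub',
                                 params={'topic': 'analytics'},
                                 data=event, timeout=1.0)
            resp.raise_for_status()
            return
        except requests.RequestException:
            # Exponential backoff, capped so we never stall a request for long.
            time.sleep(min(0.1 * 2 ** attempt, 2.0))
    raise RuntimeError('publish failed after %d attempts' % retries)
```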
