What Powers Instagram: Hundreds of Instances, Dozens of Technologies

One of the questions we always get asked at meet-ups and conversations with other engineers is, “what’s your stack?” We thought it would be fun to give a sense of all the systems that power Instagram, at a high level; you can look forward to more in-depth descriptions of some of these systems in the future. This is how our system has evolved in the just-over-1-year that we’ve been live, and while there are parts we’re always re-working, this is a glimpse of how a startup with a small engineering team can scale to our 14 million+ users in a little over a year. Our core principles when choosing a system are:

Keep it very simple

Don’t re-invent the wheel

Go with proven and solid technologies when you can

We’ll go from top to bottom:

OS / Hosting

We run Ubuntu Linux 11.04 (“Natty Narwhal”) on Amazon EC2. We’ve found previous versions of Ubuntu had all sorts of unpredictable freezing episodes on EC2 under high traffic, but Natty has been solid. We’ve only got 3 engineers, and our needs are still evolving, so self-hosting isn’t an option we’ve explored too deeply yet, though it’s something we may revisit in the future given the unparalleled growth in usage.

Load Balancing

Every request to Instagram servers goes through load balancing machines; we used to run 2 NGINX machines and DNS round-robin between them. The downside of this approach is the time it takes for DNS to update in case one of the machines needs to be decommissioned. Recently, we moved to using Amazon’s Elastic Load Balancer, with 3 NGINX instances behind it that can be swapped in and out (and are automatically taken out of rotation if they fail a health check). We also terminate our SSL at the ELB level, which lessens the CPU load on NGINX. We use Amazon’s Route53 for DNS, for which they’ve recently added a pretty good GUI tool in the AWS console.
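
To make the trade-off concrete, here is a tiny illustrative sketch (not Instagram’s code; the IPs are made up) of why DNS round-robin decommissions slowly:

```python
# The old setup resolved one DNS name to several load-balancer IPs and
# relied on round-robin answers to spread load.
from itertools import cycle

# Hypothetical IPs of the two NGINX machines behind one DNS name.
NGINX_IPS = ["10.0.0.1", "10.0.0.2"]
_rotation = cycle(NGINX_IPS)

def resolve(hostname):
    """Pick the next IP in round-robin order, as a DNS resolver might."""
    return next(_rotation)

# Removing a dead machine from NGINX_IPS only takes effect once cached
# DNS answers expire (the record's TTL) -- the slow-decommission problem
# described above. An ELB health check pulls it from rotation in seconds.
```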

Application Servers

Next up comes the application servers that handle our requests. We run Django on Amazon High-CPU Extra-Large machines, and as our usage grows we’ve gone from just a few of these machines to over 25 of them (luckily, this is one area that’s easy to horizontally scale as they are stateless). We’ve found that our particular work-load is very CPU-bound rather than memory-bound, so the High-CPU Extra-Large instance type provides the right balance of memory and CPU.

We use Gunicorn (http://gunicorn.org/) as our WSGI server; we used to use mod_wsgi and Apache, but found Gunicorn was much easier to configure, and less CPU-intensive. To run commands on many instances at once (like deploying code), we use Fabric, which recently added a useful parallel mode so that deploys take a matter of seconds.
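
As a hedged sketch of what a Gunicorn setup like this might look like (the values below are illustrative assumptions, not Instagram’s actual settings), a `gunicorn.conf.py` for a Django app behind NGINX could be as small as:

```python
# gunicorn.conf.py -- illustrative Gunicorn config for a Django app.
import multiprocessing

bind = "127.0.0.1:8000"   # NGINX proxies requests to this address
workers = multiprocessing.cpu_count() * 2 + 1  # common rule of thumb
worker_class = "sync"     # simple synchronous workers for CPU-bound work
timeout = 30              # kill workers stuck longer than this (seconds)
```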

Data storage

Most of our data (users, photo metadata, tags, etc) lives in PostgreSQL; we’ve previously written about how we shard across our different Postgres instances. Our main shard cluster involves 12 Quadruple Extra-Large memory instances (and twelve replicas in a different zone).
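
In the spirit of that sharding post, a minimal sketch of ID-based sharding looks like this; the shard counts and the mapping below are illustrative assumptions, not the exact production layout:

```python
# Map a user ID to one of many logical shards, and logical shards onto a
# smaller set of physical servers, so shards can later be moved to new
# servers without rehashing every key.
LOGICAL_SHARDS = 4096      # many small logical shards...
PHYSICAL_SERVERS = 12      # ...spread over a 12-instance cluster

def logical_shard(user_id):
    return user_id % LOGICAL_SHARDS

def physical_server(user_id):
    # Each physical server hosts a contiguous block of logical shards.
    return logical_shard(user_id) * PHYSICAL_SERVERS // LOGICAL_SHARDS
```

Growing the cluster then means reassigning blocks of logical shards to new machines, rather than re-sharding the data itself.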

We’ve found that Amazon’s network disk system (EBS) doesn’t support enough disk seeks per second, so having all of our working set in memory is extremely important. To get reasonable IO performance, we set up our EBS drives in a software RAID using mdadm.

As a quick tip, we’ve found that vmtouch is a fantastic tool for managing what data is in memory, especially when failing over from one machine to another where there is no active memory profile already. Here is the script we use to parse the output of a vmtouch run on one machine and print out the corresponding vmtouch command to run on another system to match its current memory status.
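
The idea behind that helper can be sketched as follows (their actual script is linked above; the `vmtouch -v` output format assumed here is a path line followed by a `[...] resident/total` chart line):

```python
# Read `vmtouch -v` output captured on one machine and emit a vmtouch
# command that warms the same mostly-resident files on another machine.
import re

def warm_command(vmtouch_verbose_output, threshold=0.5):
    files, current = [], None
    for line in vmtouch_verbose_output.splitlines():
        line = line.strip()
        m = re.match(r"\[.*\]\s+(\d+)/(\d+)$", line)
        if m and current:
            resident, total = int(m.group(1)), int(m.group(2))
            if total and resident / total >= threshold:
                files.append(current)   # mostly in memory: worth warming
            current = None
        elif line.startswith("/"):
            current = line              # a file path; chart line follows
    return "vmtouch -vt " + " ".join(files) if files else None
```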

All of our PostgreSQL instances run in a master-replica setup using Streaming Replication, and we use EBS snapshotting to take frequent backups of our systems. We use XFS as our file system, which lets us freeze & unfreeze the RAID arrays when snapshotting, in order to guarantee a consistent snapshot (our original inspiration came from ec2-consistent-snapshot). To get streaming replication started, our favorite tool is repmgr by the folks at 2ndQuadrant.

One decision we made early on that had a huge impact on performance was using PgBouncer to pool the connections from our app servers to PostgreSQL. We found Christophe Pettus’s blog to be a great resource for Django, PostgreSQL and PgBouncer tips.
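
The app-side change for pooling is small; here is an illustrative Django settings fragment (names and ports are assumptions, not the actual config), where the app connects to PgBouncer’s local port and PgBouncer maintains the real server connections:

```python
# Django DATABASES fragment: point the app at PgBouncer, not Postgres.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "instagram",   # hypothetical database name
        "HOST": "127.0.0.1",   # PgBouncer listens here...
        "PORT": "6432",        # ...on its conventional port
    }
}
```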

The photos themselves go straight to Amazon S3, which currently stores several terabytes of photo data for us. We use Amazon CloudFront as our CDN, which helps with image load times from users around the world (like in Japan, our second most-popular country).

We also use Redis extensively; it powers our main feed, our activity feed, our sessions system (here’s our Django session backend), and other related systems. All of Redis’ data needs to fit in memory, so we end up running several Quadruple Extra-Large Memory instances for Redis, too, and occasionally shard across a few Redis instances for any given subsystem. We run Redis in a master-replica setup, and have the replicas constantly saving the DB out to disk, and finally use EBS snapshots to backup those DB dumps (we found that dumping the DB on the master was too taxing). Since Redis allows writes to its replicas, it makes for very easy online failover to a new Redis machine, without requiring any downtime.
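
A Redis list makes the fan-out-on-write feed pattern cheap; here is a minimal in-memory sketch of that pattern (in Redis this would be an LPUSH plus LTRIM per follower; the names and the cap are illustrative, not the actual schema):

```python
# Fan-out-on-write: when a photo is posted, prepend its ID to every
# follower's feed list, and cap each list to bound memory.
from collections import defaultdict

FEED_CAP = 100                       # keep only the newest N entries
feeds = defaultdict(list)

def post_photo(author, photo_id, followers):
    for follower in followers:
        feed = feeds[follower]
        feed.insert(0, photo_id)     # Redis: LPUSH feed:<follower> id
        del feed[FEED_CAP:]          # Redis: LTRIM feed:<follower> 0 99
```

Reading a feed is then a single list fetch, which is what keeps the main feed fast regardless of how many people someone follows.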

For our geo-search API, we used PostgreSQL for many months, but once our Media entries were sharded, we moved over to using Apache Solr. It has a simple JSON interface, so as far as our application is concerned, it’s just another API to consume.
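
To show what “just another API” means in practice, here is a hedged sketch of building a Solr geo query over its HTTP/JSON interface; the host, core name, field names, and spatial syntax are assumptions about a typical Solr setup, not the actual schema:

```python
# Build a Solr radial geo-search URL using the {!geofilt} filter.
from urllib.parse import urlencode

def solr_geo_query(lat, lng, radius_km, rows=20):
    params = {
        "q": "*:*",
        "fq": "{!geofilt}",    # Solr's radial distance filter
        "sfield": "location",  # assumed spatial field name
        "pt": f"{lat},{lng}",  # center point
        "d": radius_km,        # radius in km
        "rows": rows,
        "wt": "json",          # ask for the JSON response format
    }
    return "http://solr:8983/solr/media/select?" + urlencode(params)
```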

Finally, like any modern Web service, we use Memcached for caching, and currently have 6 Memcached instances, which we connect to using pylibmc & libmemcached. Amazon has an ElastiCache service they’ve recently launched, but it’s not any cheaper than running our instances, so we haven’t pushed ourselves to switch quite yet.

Task Queue & Push Notifications

When a user decides to share out an Instagram photo to Twitter or Facebook, or when we need to notify one of our Real-time subscribers of a new photo posted, we push that task into Gearman, a task queue system originally written at Danga. Doing it asynchronously through the task queue means that media uploads can finish quickly, while the ‘heavy lifting’ can run in the background. We have about 200 workers (all written in Python) consuming the task queue at any given time, split between the services we share to. We also do our feed fan-out in Gearman, so posting is as responsive for a new user as it is for a user with many followers.
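
The asynchronous pattern described above can be sketched in pure Python (Gearman plays this role in production; here a thread and an in-process queue stand in for the Gearman server and workers):

```python
# The upload path only enqueues a job and returns immediately; a
# background worker does the slow sharing / fan-out work.
import queue
import threading

tasks = queue.Queue()
shared = []

def worker():
    while True:
        job = tasks.get()
        if job is None:            # sentinel for shutdown
            break
        shared.append(job)         # stand-in for posting to Twitter/Facebook
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

def upload_photo(photo_id):
    # Returns right away; the heavy lifting happens on the worker.
    tasks.put(("share", photo_id))
```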

For doing push notifications, the most cost-effective solution we found was pyapns (https://github.com/samuraisam/pyapns), an open-source Twisted service that has handled over a billion push notifications for us, and has been rock-solid.

Monitoring

With 100+ instances, it’s important to keep on top of what’s going on across the board. We use Munin to graph metrics across all of our systems, and also alert us if anything is outside of its normal range. We write a lot of custom Munin plugins, building on top of Python-Munin, to graph metrics that aren’t system-level (for example, signups per minute, photos posted per second, etc). We use Pingdom for external monitoring of the service, and PagerDuty for handling notifications and incidents.
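
A custom app-level plugin of this kind is tiny; here is a sketch following Munin’s plugin protocol (run with `config` it prints graph metadata, run plain it prints values), with the metric lookup itself stubbed out:

```python
# Munin plugin sketch: graph signups per minute.
import sys

def current_signups_per_minute():
    return 7    # stand-in for a real query against the app's counters

def main(argv):
    if len(argv) > 1 and argv[1] == "config":
        # Munin calls the plugin with "config" to learn how to graph it.
        print("graph_title Signups per minute")
        print("graph_vlabel signups")
        print("signups.label signups")
    else:
        # A plain invocation reports the current value.
        print(f"signups.value {current_signups_per_minute()}")

if __name__ == "__main__":
    main(sys.argv)
```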

For Python error reporting, we use Sentry, an awesome open-source Django app written by the folks at Disqus. At any given time, we can sign on and see what errors are happening across our system, in real time.
