TBBD Throwback: The Big Billion Day 2015!

On October 6, 2014, a seven-year-old startup placed a big bet. Flipkart's first-ever Big Billion Day broke news, the internet, and the country's supply chain infrastructure. The single-day festive sale event that Flipkart pioneered in India, and which media commentators compared to Black Friday in the US, made e-commerce history and set the bar high for online sale events in India. The next year, in 2015, Flipkart geared up for a rinse-and-repeat, this time spreading the sale over five days, from October 13 to 17. Vaidyanathan S, YD to his mates, was there. In this feverishly penned insider's memoir, he describes the excitement, anxiety and exhilaration that gripped a sleep-deprived, adrenaline-addled startup that was all set to repeat, and rewrite, history. Revisit this well-thumbed page from Flipkart's archives.

TBBD Throwback

For the first time in my life, instead of cribbing about how slow and painful the IRCTC ticket booking system was, I had a new-found respect for it. I could finally understand the true power of India when it came to internet traffic.


The preparations had started about four months earlier, with all sorts of defensive armor being planned in detail by both the product and the infrastructure teams. The storm was very much real and was going to hit every team with a force we'd never seen or experienced before — or so we were promised. TBBD, or 'The Big Billion Day', was right around the corner, and what was more worrying was that this time the mega sale would run for a whole week.

By now, we'd done a lot of app-only sale days and were pretty confident about how the whole thing would play out. Systems had been engineered to handle all sorts of happy and not-so-happy flows, and we'd perfected every system to a considerable degree of confidence. But, unfortunately, that was just the base requirement here — TBBD was a different beast altogether. Infrastructure programs had dominated most of the two months prior to the code-freeze date, with mandatory attendance from every team in Flipkart.

In parallel, multiple new product constructs were being engineered in the user-path and in the order-path to handle the three dreaded problems in the supply-chain world — over-booking, under-booking, and matching the promise with the fulfillment — all while incurring the least possible cost. After all, the success of a supply-chain product construct can only be measured by how accurately we promise to the customer and, more importantly, how we stick to that promise.

While the user-path systems were mainly being engineered to handle very high QPS (queries per second), concurrency and consistency, the order-path systems, consisting mainly of the promise system, order-management system, fulfillment system, and the logistics system, were a different story altogether. Each and every order was P0 here — the internal async messaging and queueing system was key to making sure absolutely no message was lost anywhere; it was practically the backbone of the whole supply-chain.
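The post doesn't describe that queueing system's internals, but the "no message lost" guarantee it alludes to usually comes down to one discipline: acknowledge a message only after it has been fully processed, and redeliver anything that wasn't. A toy Python sketch of that pattern (the queue, handler, and retry limit here are illustrative, not Flipkart's actual components):

```python
import queue

def process(order_event):
    """Stand-in for real work, e.g. pushing the order into the fulfilment pipeline."""
    # Simulate a transient downstream failure on the first attempt for some orders.
    if order_event["id"] % 5 == 0 and order_event.get("attempts", 0) == 0:
        raise RuntimeError("transient downstream failure")
    print("processed order", order_event["id"])

def consume(q):
    while not q.empty():
        event = q.get()
        try:
            process(event)
            # "Ack": only now is the message considered done.
        except Exception:
            event["attempts"] = event.get("attempts", 0) + 1
            if event["attempts"] < 3:
                q.put(event)   # redeliver instead of dropping: no message is silently lost
            else:
                # Dead-letter for manual follow-up rather than silent loss.
                print("parking order", event["id"], "for manual follow-up")

q = queue.Queue()
for i in range(1, 11):
    q.put({"id": i})
consume(q)
```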

Inside a TBBD 2015 war-room, sleep is a stranger

Lessons learned from TBBD 2014

There were multiple lessons learnt from last year’s TBBD – solving very high QPS problems in the supply-chain world is a different story altogether. High QPS at the website would mean nothing if we did not have the manual capacity to pack or deliver the products.

With multiple systems talking to each other, even one very small system could potentially choke the whole supply-chain if it failed under high load. The topmost system had to understand the capacity of every system underneath it and pipe orders accordingly — capacity breaking in any of the underlying systems could mean disaster…
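As a rough illustration of that idea (the system names and capacity numbers below are made up, not Flipkart's real figures), the topmost system can only safely release orders at the rate of the slowest system beneath it — a simple sliding-window gate pinned to the minimum declared downstream capacity captures the principle:

```python
import time
from collections import deque

# Hypothetical downstream capacities in orders per second -- illustrative numbers only.
DOWNSTREAM_CAPACITY = {
    "order_management": 800,
    "fulfillment": 500,
    "logistics": 300,   # the slowest link dictates the safe release rate
}

class OrderGate:
    """Release orders no faster than the weakest downstream system can absorb."""

    def __init__(self, capacities):
        self.rate = min(capacities.values())   # effective ceiling for the whole chain
        self.window = deque()                  # release timestamps from the last second

    def try_release(self, order_id):
        now = time.monotonic()
        # Slide the one-second window forward.
        while self.window and now - self.window[0] > 1.0:
            self.window.popleft()
        if len(self.window) < self.rate:
            self.window.append(now)
            return True    # safe to pipe this order downstream
        return False       # hold it back; downstream would choke

gate = OrderGate(DOWNSTREAM_CAPACITY)
released = sum(gate.try_release(i) for i in range(1000))
print(f"released {released} of 1000 orders in this burst")
```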


There were so many things beyond our control, but every system had multiple constructs to constrain these problems as much as possible, using algorithms backed by intelligence from big-data — and even then, anything could still go wrong. “Almost Unrealistic” NFR (Non-Functional Requirement) runs at 5X the projected numbers were conducted across all systems for almost a month, just to find out how each system would behave if absolutely unprecedented traffic hit it.
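The actual NFR runs were done with internal tooling the post doesn't describe; purely as an illustration of the idea, a 5X run boils down to firing a multiple of the projected peak at an endpoint and seeing what survives. A minimal sketch, with a placeholder URL and made-up numbers:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "http://localhost:8080/health"  # placeholder endpoint, not a real Flipkart URL
PROJECTED_QPS = 200                      # made-up projected peak
MULTIPLIER = 5                           # the "Almost Unrealistic" 5X factor
REQUESTS = PROJECTED_QPS * MULTIPLIER    # roughly one second's worth of 5X traffic

def hit(_):
    """Fire one request and return its latency, or None on failure."""
    start = time.monotonic()
    try:
        urlopen(TARGET, timeout=2).read()
        return time.monotonic() - start
    except Exception:
        return None

with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

ok = [r for r in results if r is not None]
if len(ok) >= 2:
    p95 = statistics.quantiles(ok, n=100)[94]
    print(f"{len(ok)}/{REQUESTS} succeeded, p95 latency = {p95:.3f}s")
else:
    print(f"only {len(ok)}/{REQUESTS} requests succeeded")
```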

Being in small and nimble teams meant every team member was an independent worker — while this is generally very good, TBBD made things a bit complicated. In order to ensure continued 24X7 support, every engineer had to know, in detail, what every other engineer had worked on, so that the team could work in shifts. Thus began a series of knowledge-transfer sessions that probed every feature in detail, every remote possibility of it breaking under high load, and the alternatives for each. Multiple alerts were installed on every possible metric — both infra and product — so that the team would be alerted in time if something was breaking or showed signs of breaking. Any small thing left unattended could compound to at least 100X in a very short time, screwing up everything underneath it.
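The post doesn't name the alerting stack, but the principle is simple enough to sketch: watch a metric, compare it against escalating thresholds, and page someone before a small backlog compounds. The metric and limits below are invented for illustration:

```python
# Illustrative thresholds; the real metrics and limits were per-system and far more granular.
QUEUE_LAG_WARN = 5_000    # messages waiting in an order queue
QUEUE_LAG_PAGE = 20_000

def check_queue_lag(current_lag, notify):
    """Escalate early: a small unattended backlog can compound very quickly."""
    if current_lag >= QUEUE_LAG_PAGE:
        notify("PAGE", f"order queue lag at {current_lag}, immediate attention needed")
    elif current_lag >= QUEUE_LAG_WARN:
        notify("WARN", f"order queue lag creeping up: {current_lag}")

# Wire it to any notifier; here we just print.
check_queue_lag(7_200, lambda level, msg: print(level, msg))
```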

Night-outs were becoming more common — 16-hour days, on weekdays and weekends alike, were the norm till TBBD. Facilities were asked to beef up their support through the night — infrastructure teams were under a lot of pressure to deliver everything they had planned, because every system would bank on them being super-solid in every planned or unplanned situation. Employees who stayed with their families often did not get to see their loved ones for days. And their families were eagerly waiting for TBBD to be successful, for they knew exactly how much effort was going into it. There was very high energy in every team to get things out of the way as soon as possible — and, at the same time, a very high sense of anticipation and concern.

Work-life balance, with a TBBD flavor!

The veterans had a good idea of what was out there, but the newer joinees could not fathom why TBBD demanded so much from every team. An organizational plan was put in place that had details of all the ‘tiger-teams’ to ensure absolute unconditional support at every team level at all times through the week.

The Night of October 12

The uneasy calm before the storm was evident in the team — we checked and re-calibrated all the alerts, struck off all the last-minute tasks, and updated the playbooks to handle every outage that could potentially happen. Some folks were told to come in late for their 12-hour night shift, while the others had the option to leave early and get a good night's sleep to handle the daytime load. But obviously nobody left early that day — everybody was eager to see what would happen at midnight when the deals went live.

All of the user-path teams were already camping on the lower floors with huge screens and projectors to monitor live data, while the order-path teams camped mainly on the upper floors with their share of metrics being projected live to the whole floor. Every screen had a different graph — from orders being taken, to lags in queues, to breaches, to server stats, to live tweets on #TBBD. The calm was evident — the music that had been playing a while earlier had stopped, and every engineer was making sure there were no warnings on any metric.

Traffic started piling up by 10:30 PM as customers downloaded the app just before the sale kicked off at midnight, and had already started buying things. All metrics were being closely monitored and looked good — we were ready for TBBD. Just some more time to go. The wait was frustrating. A quick surprise dance by some of the team members on Floor 2 eased the pressure a bit before teams went back to their frontiers. As the clock struck 00:00, there was an announcement on the emergency loudspeaker that this was it!

Wild cheers greeted TBBD 2015 and we were in business!

While the user-path had some initial hiccups with the app, all of the order-path graphs were already soaring – the majority of people who had updated the app to the latest version had already started placing orders, and we could see inventory being depleted in real time at a super-fast rate – good that we had ample inventory for all products!

We were witnessing the power of a billion Indians at a scale none of us had seen before.


As we'd been thoroughly prepared for this, systems were running just a little below the NFR numbers we'd tested against — so we were doing good! TBBD is practically a race to the finish — for us, the finish was five more days away, while for the customers, the finish was getting hold of a deal before someone else did.

Out-of-stocks are bound to happen when the inventory is always going to be less than the number of people contending for it. What's more important is that we show an item as OOS as soon as it happens; the customer shouldn't be able to order a product that in reality is OOS, unless it's a neck-and-neck race condition.
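The post doesn't say how the inventory system achieved this, but the standard way to stop two near-simultaneous buyers from both winning the last unit is a conditional, atomic check-and-decrement done in one step. A toy in-process sketch (a real system would do this inside the inventory datastore, not with a local lock):

```python
import threading

class InventoryCounter:
    """Toy stand-in for an atomic check-and-decrement done in the inventory datastore."""

    def __init__(self, units):
        self._units = units
        self._lock = threading.Lock()

    def try_reserve(self):
        # Check and decrement under one lock: the last unit goes to exactly one buyer,
        # and everyone after that sees OOS immediately.
        with self._lock:
            if self._units > 0:
                self._units -= 1
                return True
            return False

stock = InventoryCounter(units=3)
results = []
threads = [threading.Thread(target=lambda: results.append(stock.try_reserve()))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{results.count(True)} orders accepted, {results.count(False)} buyers saw OOS")
```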

The next one hour was very intense with all the metrics being very closely monitored. Nobody dared to even visit the restroom, almost like the television audience of a cricket match where one is expected to continue what one is doing if things are going well. Touch wood, as they say.

Post 1:00 AM, we started relaxing a bit, playing some music and reading what people had to say on Twitter. Now, the problem with Twitter is that people who place orders successfully don't tweet about it immediately, but the moment anyone has a bad experience with the app, they take a screenshot and tweet about it right away. So, while there was some negative sentiment initially, we obviously knew that the positive sentiment was far larger — it would only show up once the orders were delivered. Then the #thankyouflipkart tweets would come up!

By 1:30 or 2:00 AM, the day team finally left for home, only to come back by 9 AM. The night team, meanwhile, tried closing in on the anticipated issues. Most teams had one or two very minor bugs, which probably affected a minuscule number of orders but kept the remaining 99.997% of orders safe and healthy.

Warehouses from various parts of India, meanwhile, started dispatching items by 2:00 AM, and the whole juggernaut was in motion. The night teams handed over to the day teams by 11:00 AM. Every engineer had a clear purpose in the 'tiger-team' — once in a while an alarm would go off, and someone would practically go 'attack' the issue, resolve it immediately, or escalate it to the right stakeholders.

As luck would have it, the rest of the days went smoothly, with one or two alarms every night. Even systems that failed under high load came back up as if nothing had really happened. Anticipated issues were resolved according to plan, with minimal impact on other systems (fallbacks, hot-standby nodes, replication strategies and the like).

All in all, every team demonstrated a very high sense of ownership and commitment to making TBBD a grand success at an organization level. The next two weeks would still be pretty hectic for the logistics teams while they attempted to deliver all the goods to the customers before the promised date. As they say, all’s well that ends well.

TBBD 2016, here we come!


Vaidyanathan S (YD) is an Engineering Manager in the Central Platforms Group (DBaaS). A Flipster of 8+ years’ vintage, he is currently at work on his 9th edition of The Big Billion Days. This is the edited version of an article he had posted on his LinkedIn profile on October 20, 2015. Read the original here
