For the past few months, I’ve been presenting on planning for the “perfect storm”: realistic but worst-case situations, and on how to properly scope application performance testing projects based on best practices I have learned over the years. I’m always looking for new material, and last week I got some. On Friday (5/18/12), the whole world watched as Facebook finally had its IPO, and LOTS of people wanted to buy shares – but couldn’t. A “software glitch” was the culprit. Did you ever notice that it’s always called a “glitch”? Let’s call it by its real, dirty name: DEFECT. It wasn’t a functional defect in this case, but a performance defect.
NASDAQ held numerous tests of its IPO processes over the previous week and assured traders it could handle the load. NASDAQ CEO Robert Greifeld said in a conference call with reporters on Sunday that there had been a malfunction in the trading system’s design for processing order cancellations, and the exchange is now altering its IPO processes as a result. According to NASDAQ, “…matching up the buy and sell orders to arrive at the price of the first trade took five milliseconds instead of the normal three milliseconds. Amid this delay, the exchange’s systems were flooded with messages to adjust orders or cancel trades. The cancellation messages interfered with the matching up process, and made it reset.” Think about what a two-millisecond delay cost NASDAQ in credibility, not to mention the actual money it cost companies and individual traders! What happened? The perfect storm. My theory is that performance test planning more than likely focused on the order volumes, but not on the cancellation volumes.
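To see why two extra milliseconds could matter so much, here is a toy simulation of the failure mode NASDAQ described: a match that restarts whenever a cancellation arrives while it is in progress. All of the numbers and the restart policy here are assumptions for illustration, not NASDAQ’s actual design.

```python
def run_auction(match_time_ms, cancel_interval_ms, deadline_ms=1000.0):
    """Simulate an opening cross that resets whenever a cancellation
    arrives mid-match. Returns the completion time in ms, or None if
    the match never finishes before the deadline (livelock)."""
    t = 0.0
    next_cancel = cancel_interval_ms
    match_done = t + match_time_ms
    while t < deadline_ms:
        if match_done <= next_cancel:
            return match_done  # match finished before the next cancel landed
        # A cancellation landed mid-match: reset and start over from that point.
        t = next_cancel
        next_cancel += cancel_interval_ms
        match_done = t + match_time_ms
    return None

# With (hypothetical) cancels arriving every 4 ms, a 3 ms match completes
# on the first try, but a 5 ms match is interrupted every single attempt.
print(run_auction(3, 4))  # 3.0
print(run_auction(5, 4))  # None -- the match never completes
```

The point of the sketch: the system doesn’t degrade gradually. As soon as the match time crosses the cancellation inter-arrival time, completion time jumps from milliseconds to never, which is exactly the kind of cliff a performance test focused only on order volume would miss.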
The Wall Street Journal took note, writing, “The IPO was the second technical snafu for a major stock exchange touting sophisticated electronic-trading systems in two months. In March, BATS Global Markets botched its IPO on its own exchange, blaming a software glitch in its own systems.” Ahhh, those glitches are everywhere these days. In all fairness, Greifeld stated that they had run thousands of hours of testing across “a hundred scenarios” aimed at anticipating problems. It’s easy for those of us who weren’t involved to be critical. There comes a point where you’ll never finish testing if you don’t limit the scope and prioritize what could go wrong; you have to balance the risks that can be mitigated against the amount of time you have. This was one of the largest IPOs ever, so there is a lot that can be learned from it. From a performance engineering perspective, it means the planning phase needs to be revisited and updated. For this kind of system, planning HAS to be airtight and done exceptionally well. The takeaway from the Facebook IPO is the importance of having more than just “scripters” or “testers” involved in planning performance testing scenarios: you need critical-thinking engineers, and that takes more than experience with the tools of the trade. Not to say NASDAQ doesn’t have that – but it’s a lesson for all of us to ensure that, for our own companies, we HAVE done this.
What are your thoughts on the NASDAQ glitch for the Facebook IPO?
About Scott Moore (153 articles)
With over 20 years of IT experience with various platforms and technologies, Scott has tested some of the largest applications and infrastructures in the world. He is a Certified Instructor and Certified Product Consultant in HP’s LoadRunner and Performance Center products, and currently holds HP certifications for ASE, ASC, and CI. A thought leader in the APM space, he speaks regularly at IT conferences and events.