Flash Crash Revisited: How Computers Changed Trading

Next Tuesday, CNBC will begin a 3-day series, Man versus Machine. We'll take a deeper look at how computers have influenced trading on Wall Street and changed the landscape of the markets.

We'll also discuss the SEC's upcoming report on the Flash Crash, due sometime this month. Let's hope the SEC comes to some conclusions and addresses the most obvious issues: structural flaws in the regulatory system, and what role, if any, high-frequency trading strategies played.

The SEC should take a Trading Hippocratic Oath: First, do no harm. The odds that tons of new rules make things worse, rather than better, have to be considered. If you doubt me, look at the 2,000-plus pages that make up the Dodd-Frank bill, and the literally tens of thousands of pages of regulations it will spawn. No one has a clue what those regulations will do to our financial infrastructure.

However, some tweaking of the regulatory structure is clearly in order.

First, what happened in the Flash Crash on May 6?

Market participants seem to have an idea of what happened.

First and foremost, the market was down early due to genuine worries about the stability of Europe.

Second, at midday, there was a rush of sell orders on the S&P E-mini futures contracts that dropped markets further.

There are legitimate questions about whether abuses by high-frequency traders were a factor in this rush. It is no surprise that a large number of orders were cancelled: that's what high-frequency traders do. Most are statistical arbitrage traders operating at hyper-speed; as volatility increased, orders increased, but most were cancelled because the momentary arbitrage did not pan out.

However, if a few traders employed a strategy of deliberate "quote-stuffing" to slow down the tape and take advantage of that latency, as has been alleged, that is a different story. That is NOT statistical arbitrage, it's market manipulation.

This is the major piece of the puzzle that still needs to be worked out. Work done by Nanex, a firm that examines trading patterns, suggests there have been instances of unusual spikes in bids and offers that are anomalous even after accounting for normal high-frequency trading.
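For readers who want a feel for what "anomalous spikes" means in practice, here is a toy sketch in Python. The numbers and the detection rule are made up for illustration; Nanex's actual methodology is far more sophisticated. The idea is simply that an interval gets flagged when its quote count dwarfs the typical rate.

```python
from statistics import median

def flag_quote_spikes(quote_counts, multiple=10):
    """Flag intervals whose quote count exceeds `multiple` times the
    median rate -- a crude stand-in for real spike detection."""
    baseline = median(quote_counts)
    return [i for i, n in enumerate(quote_counts) if n > multiple * baseline]

# Normal traffic of roughly 100 quotes per interval, with one huge burst.
counts = [100, 104, 98, 101, 5000, 99, 102, 97]
print(flag_quote_spikes(counts))  # → [4], the burst stands out
```

A burst like the one at index 4 is not suspicious by itself; the question regulators face is whether such bursts were deliberate attempts to slow the tape.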

Hopefully, the SEC, which has been looking at this data, will make a statement on how strong the evidence is.

Some have already proposed a "transaction tax" that would charge traders who put in excessive bids and offers. Almost certainly, the SEC will recommend a higher level of surveillance.

Regardless: there were other factors at work on May 6.

1) The most obvious exacerbating factor was the regulatory structure. Under Reg NMS, orders must be routed to the best immediately available price. No one can "trade through" anyone else and buy or sell stock at inferior prices. Fair enough, but here's the loophole: if any venue slows down their trading, other venues can go around them and continue trading, even if prices are inferior.
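The routing logic described above can be sketched in a few lines of Python. This is a simplified illustration, not how any real router is built: venue names and prices are invented, and real Reg NMS routing involves protected quotes across many price levels. The point is the exception: a venue that has gone "slow" can be bypassed even when it is showing the best price.

```python
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    bid: float   # best price this venue will pay for the stock
    slow: bool   # venue has gone to "slow" (manual) quoting

def route_sell_order(venues):
    """Route a sell order to the best bid among *automated* quotes.
    Under the Reg NMS exception, a "slow" venue can be traded around
    even when its displayed price is better."""
    eligible = [v for v in venues if not v.slow]
    if not eligible:
        return None
    return max(eligible, key=lambda v: v.bid)

venues = [
    Venue("NYSE",   bid=40.10, slow=True),   # best price, but slowed by an LRP
    Venue("VenueB", bid=39.50, slow=False),
    Venue("VenueC", bid=38.75, slow=False),
]
print(route_sell_order(venues).name)  # → VenueB: the order goes around the NYSE
```

Note that the seller gets 39.50 instead of 40.10: an inferior price, executed legally, because the best-priced venue was quoting slow.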

That's what happened: the NYSE went to a "slow" market because its internal circuit breakers (Liquidity Replenishment Points, as they're called) were tripped. These delays amounted to roughly 90 seconds in some instances, but that was enough. No one else had these circuit breakers. Orders were sent around the NYSE to venues that had less liquidity at just the moment when the system needed the most liquidity.

You know what happened: prices dropped outside the NYSE floor because the NYSE wasn't in the market for those crucial seconds.

2) That rush of orders created the second structural problem: some data processing centers were overwhelmed, causing small but significant delays in the tape. Many high-frequency trading programs stopped because they could not be sure about the accuracy of the trading data (we are trading in milliseconds, remember).
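Why would a trading program simply stop? The decision can be as simple as a staleness check on the tape. Here is a hypothetical sketch (the 50-millisecond tolerance is invented for illustration): if the last tick is older than the firm's tolerance, the program stands down rather than quote against data it cannot trust.

```python
def should_keep_trading(last_tick_ms, now_ms, max_lag_ms=50):
    """Keep quoting only while the consolidated tape looks fresh.
    If the most recent tick is older than max_lag_ms, the data may be
    stale, so the program pulls its quotes and stands down."""
    return (now_ms - last_tick_ms) <= max_lag_ms

print(should_keep_trading(1_000, 1_030))  # 30 ms lag → True, keep quoting
print(should_keep_trading(1_000, 1_200))  # 200 ms lag → False, stand down
```

Multiply that decision across hundreds of firms reacting to the same delayed tape, and liquidity evaporates all at once.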

Lack of Liquidity at the Worst Moment

Again, there was a lack of liquidity at just the moment the system needed the most liquidity.

Perversely, some high-frequency traders may have done the opposite: INCREASED their trading as volatility increased.

These issues suggest that the problems lie primarily with market structure and technology.

Point 1 has already been acknowledged as a major contributor to the Crash; that's why the SEC instituted UNIFORM circuit breakers across all trading venues.

Point 2, the overloading of the data processing centers, is more troublesome. It suggests that in periods of extreme stress, some data processing systems did not perform up to snuff. If this is true, we need more robust data processing to deal with the millisecond world we are in (by the way, this is EXACTLY the same problem that happened in the 1987 and 1929 stock market crashes: pricing data slowed).

Or do we?

We also need to have an honest debate on "how fast is fast enough." We have already decided that under some circumstances we want to slow trading down: we have circuit breakers in place. Do we want to go further? Do we really want an arms race to see if we can make trades in under a microsecond (a millionth of a second)? Should we adopt some limit on how fast trades can be made, a minimum quote duration of, say, 1 second?
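A minimum quote duration is easy to state in code. This sketch assumes the 1-second figure floated above; it is one possible rule, not anything the SEC has adopted. An exchange enforcing it would simply reject any cancel that arrives before the quote has rested for the minimum life.

```python
MIN_QUOTE_LIFE_S = 1.0  # hypothetical minimum resting time, per the 1-second idea

def cancel_allowed(quote_time_s, cancel_time_s):
    """Under a minimum-quote-duration rule, a cancel arriving before the
    quote has rested for MIN_QUOTE_LIFE_S would be rejected by the venue."""
    return (cancel_time_s - quote_time_s) >= MIN_QUOTE_LIFE_S

print(cancel_allowed(10.0, 10.2))  # → False: quote rested only 0.2 s
print(cancel_allowed(10.0, 11.5))  # → True: quote rested 1.5 s
```

A rule like this would make quote-stuffing pointless, since every quote would carry real execution risk for a full second; the trade-off is that market makers would quote wider to compensate.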

The SEC will certainly be addressing this as well.

Questions? Comments? tradertalk@cnbc.com