https://archive.ph/2vQm6

> The immediate allure of hollow-core fibers is that light travels through the air inside them at 300,000 km-per-second, 50 percent faster than the 200,000 km-per-second in solid glass, cutting latency in communications. Last year, euNetworks installed the world’s first commercial hollow-core cable from Lumenisity, a commercial spinoff of Southampton, to carry traffic to the London Stock Exchange. This year, Comcast installed a 40-km hybrid cable, including both hollow-core and solid-core fiber, in Philadelphia, the first in North America. Hollow-core fiber also looks good for delivering high laser power over longer distances for precision machining and other applications.
Other than seemingly perverse incentives, is there a good reason not to quantize trading time?
Taiwan Stock Exchange used to have quantized trading times (read "frequent batch auctions"), but it led to worse price discovery and a wider bid-ask spread: https://focus.world-exchanges.org/articles/citadel-trading-a...
> Our analysis of the TWSE’s transition clearly demonstrates that continuous trading results in better liquidity provision, lower bid-ask spreads, more stable prices and enhanced price discovery, as well as higher trading volumes.
Is that better liquidity, etc., actually needed?
If we consider the function of a market to be to arrive at prices that lead to the optimal allocation of the goods sold on that market, then intuitively there should be a limit on how fast trades need to propagate to achieve that, and that limit would be tied to how fast new information relevant to the producers and consumers of those goods comes out.
I don't think I'm expressing this well but the idea is that prices of goods should be tied to things that actually affect those goods. That's generally going to be real world news.
If you turn up trading speed much past the speed necessary to deal with that, I'd expect you could end up with the market reacting to itself. Kind of like when you turn an amplifier up too much and start getting distortion and even feedback.
> Is that better liquidity, etc., actually needed
Broadly speaking, yes. Turning down liquidity increases spreads which affects which sorts of companies are able to raise what sorts of capital in those markets.
The paradox of HFT is that it's much smaller and more efficient than the slower, manpower-heavy Wall Street industry it replaced. It's just weird, which makes it easy to demonise in popular politics.
Thank you, it is nice to see an empirical observation of before and after the transition to continuous trading.
Note that American exchanges open and close with a batched cross. This hybrid approach is why most objections to intraday continuous trading are misplaced.
If you're talking about something like having an auction (per security) every N seconds, I don't see how that addresses the underlying issue, which is how to determine order priority.
If you have a bunch of orders at the same price on the same side, and an order comes in from the other side that crosses those orders (or there is an auction and there are orders on the other side which cross), how do you decide which of the resting orders at the same price should be filled first?
The most common way is that the first order to arrive at the exchange at that price gets filled first, and for that reason being fast is inherently advantageous.
If you're doing batches to reduce the advantage of being fast, you'd have to treat all orders that come in during a batch tick as simultaneous.
Resting orders from previous batches could have priority, if you want. You'd probably end up doing something for assigning equal-priority orders that looks like option assignment: basically, random selection of shares among the pool of orders.
Personally, I'd fill unconditional market orders first, then market all-or-nothing (if fillable), then sort limit orders by price; within limit orders of the same price, unconditional first, then all-or-nothing, then all-or-nothing + fill-or-kill.
I don't know if I would assign shares proportional to orders or to shares in orders. Probably shares in orders. Might be gamed, but putting in a really big order because you want to capture a couple shares is risky.
You could partially fulfil both resting orders, weighted by their (remaining) order size.
You might get "games" around people oversizing orders to try to get more "weight" to their orders, but that would be inefficient behaviour that could in turn be exploited, so people would still be incentivised to keep their orders honest.
You fill the orders proportionally to order quantity, for everyone.
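As a rough sketch of what that pro-rata allocation could look like at a single price level (the function and order representation here are invented for illustration):

```python
def pro_rata_fill(resting_orders, incoming_qty):
    """Allocate an incoming quantity across resting orders at one price
    level, weighted by each order's remaining size. Illustrative sketch."""
    total = sum(qty for _, qty in resting_orders)
    fills = {}
    allocated = 0
    for order_id, qty in resting_orders:
        # Floor division: never over-allocate, so each order keeps <= qty
        share = min(incoming_qty * qty // total, qty)
        fills[order_id] = share
        allocated += share
    # Hand out rounding leftovers one share at a time, largest order first
    leftovers = min(incoming_qty, total) - allocated
    for order_id, qty in sorted(resting_orders, key=lambda o: -o[1]):
        if leftovers <= 0:
            break
        if fills[order_id] < qty:
            fills[order_id] += 1
            leftovers -= 1
    return fills

# 100 incoming shares split across 150 resting shares:
print(pro_rata_fill([("a", 100), ("b", 50)], 100))
```

Oversizing still buys weight here, which is the gaming concern raised above, but a capped or randomized leftover rule is a small tweak on the same skeleton.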
How about adding a randomized delay (0–T) to each order? For T = 30s it would largely nullify millisecond latency advantages.
> How about adding a randomized delay (0–T) to each order?
This is the sort of good idea that just entrenches the algos. (Former algorithmic derivatives trader.)
For small orders, these delays make no difference. For a big order, however, it could be disastrously embarrassing. So now, instead of that fund's trader feeling comfortable directly submitting their trade using off-the-shelf execution algos, they'll route it to an HFT who can chunk it into itty-bitty orders diarrhea'd through to average out the randomness of those delays.
Randomize orders using a cryptographic hash of the order, client info, and all other fields plus a random salt added when the order is submitted.
Sort by hash. Impossible to game unless you can break the hash function.
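A minimal sketch of that idea, assuming the exchange draws one salt per batch and reveals it after the batch closes so the ordering is unpredictable beforehand but auditable afterwards (all field names are invented):

```python
import hashlib
import secrets

def batch_order(orders, salt):
    """Rank a batch of orders by SHA-256 of (salt + order fields).
    No participant can predict their rank without knowing the salt."""
    def key(order):
        payload = salt + "|".join(str(order[f]) for f in sorted(order))
        return hashlib.sha256(payload.encode()).hexdigest()
    return sorted(orders, key=key)

salt = secrets.token_hex(16)  # drawn fresh for each batch
orders = [
    {"id": 1, "client": "A", "side": "buy", "qty": 100, "px": 10.0},
    {"id": 2, "client": "B", "side": "buy", "qty": 200, "px": 10.0},
]
print([o["id"] for o in batch_order(orders, salt)])
```

Note this only randomizes priority among orders; it doesn't address the spam incentive discussed below, since submitting more orders still buys more lottery tickets.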
So now I probabilistically spam a ton of different orders to, on average, get my desired fill. This just turns it into a "whoever is best at DoS'ing the exchange" game. As the order book fills with competitor orders, it makes sense for you to also spam orders, so that each of your orders maintains the same probability of being filled.
Exchanges have requirements imposed on HFTs to prevent this kind of abuse. This one would be no different.
Impose a small order fee.
That will tend to discriminate against smaller traders, like 'retail' traders.
> will tend to discriminate against smaller traders, like 'retail' traders
Retail rarely hits an exchange.
I've argued in the past that we should have batch settlements every 30 seconds, instead of in real time. We don't really need microsecond based skimming/front running.
I've read the arguments that the microsecond trading serves a purpose that benefits all of us, but I fail to see how, even with the explanations.
I'm with you. Every 30 seconds. Cap the power of connection speed in trading. Trading should be based on the value of the item being traded, not on how short the fiber run is.
> read the arguments that the microsecond trading serves a purpose that benefits all of us, but I fail to see how, even with the explanations
What about an empirical argument? Microsecond trading reduces spreads and decreases volatility. It looks useless, so people try to regulate it away, and every time they do, spreads widen and trading firms' and banks' profits fatten.
> Every 30 seconds. Cap the power of connection speed in trading
I'd go back to Wall Street if this happened: it would make market making profitable again.
CLOBs force market participants to compete on pricing (which is only indirectly related to latency, since you can quote tighter if you know your orders won't get picked off by other, faster traders).

Taiwan used to have a batch-style auction and it ultimately led to worse prices: https://focus.world-exchanges.org/articles/citadel-trading-a...

> Our analysis of the TWSE’s transition clearly demonstrates that continuous trading results in better liquidity provision, lower bid-ask spreads, more stable prices and enhanced price discovery, as well as higher trading volumes.
If there are multiple orders at the same price on the same side, how should we determine which ones are filled first?
Or put another way, how should we determine which orders are least likely to get filled?
Well either volume weighted or randomised then
so now the race is to get the order in (or out) at 29.999999985 seconds, i.e. 15 ns before the batch deadline. Interesting twist on the game. Unlikely to change who wins it; could it be worse for retail punters?
We need to kill "front running" as a criticism of low-latency algo trading with fire. It's garbage.
Front running is highly illegal: it's where a broker knows a client is going to do a big trade, i.e. has inside information, and trades on other accounts (typically their own) to exploit that inside information. It's a straight-up cheat.
Inferring from market data alone which way a price will move is legal, honest, has been attempted since forever, and is absolutely fine. It's also very, very difficult. Anyone who can do it makes the market more efficient, reduces the money available from doing it (which goes into investors' pockets through tighter spreads) and really earns their money. You don't have to like them if you don't want to, but it's worlds apart from front running using inside information.
Where did algo trading profit come from? It was won, by being more competitive, out of brokers' profits, with a good chunk of that broker profit going back to investors. Spreads are tighter.
Where are the clients' yachts? Well, tech did something about some of the broker rip-offs that earned those yachts, which puts money in your pocket.
Batching can greatly lower the returns to speed, which would be sufficient to get participants to invest less in speed. It doesn't need to reduce the returns to speed to 0, and indeed reducing the returns to speed to 0 is sort of an incoherent idea to begin with.
You could randomize the batching deadline.
and it won't help retail investors either.
30 seconds seems reasonable. Don't the markets themselves make a fair amount of money off of providing fast access to the HFTs? Is that the primary perverse incentive?
Why not 1 minute then?
You have ignored the whole issue of how you then order those contracts within 30-second batches.
The non-terrible version of this proposal is called Frequent Batch Auctions. I've read the paper and it seems like a decent idea to me.
I have heard that some real-life venues have implemented the terrible version of this proposal instead though.
There are cases to be made that you get tighter spreads.
The larger the time interval, the larger the pricing risk. If I am selling and there is a long time until the trade, I am probably going to want to sell at a higher price. The same goes for the bid.
Skywave has a point: they went through regulatory oversight to get their microwave links working, whereas these other firms went behind the FCC’s back and profited by not doing so. The fine is likely a lot lower than the profits they made, so what incentive would future companies have to go through the proper channels?
some fantastic older reading (2014):
HFT in My Backyard
https://news.ycombinator.com/item?id=8354278
https://news.ycombinator.com/item?id=8371852
Also, from the same blog:
Shortwave Trading | Part I | The West Chicago Tower Mystery
https://sniperinmahwah.wordpress.com/2018/05/07/shortwave-tr...
SHORTWAVE TRADING | PART II | FAQ AND OTHER CHICAGO AREA SITES
https://sniperinmahwah.wordpress.com/2018/06/07/shortwave-tr...
If you want even more fun reading, check out: http://www.nanex.net/aqck/aqckIndex.html
It’s the only site I know of that has posts like it. Sadly, he hasn’t posted in awhile.
He went off the rails when Trump was running for office. I remember being really disappointed to see the trading observations (and his own product advertisements) replaced by political rants.
Remember when HN was always HFT comp dick swinging contests? Would any top grad nowadays go to Jane Street over OpenAI or Anthropic?
why would a radio wave that is reflected off the atmosphere (and therefore takes the longer route) be faster than a direct fibre cable?
Radio waves travel at nearly the speed of light, whereas light in a fiber-optic cable travels at ~67% of the speed of light due to the refractive index of glass.
The Ericsson blog wrote:

> In a vacuum, electro-magnetic waves travel at a speed of 3.336 microseconds (μs) per kilometer (km). Through the air, that speed is a tiny fraction slower, clocking in at 3.337 μs per km, while through a fiber-optic cable it takes 4.937 μs to travel one kilometer – this means that microwave transport is actually 48% faster than fiber-optic, all other things being equal.
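Those per-kilometre figures fall straight out of the refractive index of each medium; a quick sanity check, using typical index values (not taken from the quote):

```python
C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s

def us_per_km(refractive_index):
    """One-way propagation delay in microseconds per kilometre."""
    return 1e6 * refractive_index / C_VACUUM_KM_S

# Typical refractive indices: air ~1.0003, silica fiber core ~1.48
for medium, n in [("vacuum", 1.0), ("air", 1.0003), ("fiber", 1.48)]:
    print(f"{medium}: {us_per_km(n):.3f} us/km")
```

This reproduces the quoted 3.336 / 3.337 / 4.937 μs per km, and the ratio of the last two is the quoted ~48% speed advantage of microwave over fibre.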
I worked for three years designing custom low-latency point-to-point microwave radios for HFT for this very reason. They didn't need very high bandwidths (their long-haul network was less than 200 Mbit, whereas in New York/New Jersey we had about 5 Gbps because the hops were much shorter and they had licenses for more RF bandwidth at a higher frequency).
At those time scales, the difference is so large, it was incredible what they were willing to pay to build these networks!
I somewhat regret not specialising in RF/comms in my EE degree - this side of HFT sounds like a fascinating line of work (Trading at the Speed of Light was a great read).
More bluntly: light in a fibre is still bouncing around a lot.
Not in single-mode fibers, which still exhibit the effect. It's just about the refractive index of glass.
In addition to the radio signal being faster, as noted by the other commenters, for long distances the radio wave actually takes the shorter route.
If you take one of the routes in the article, Chicago to Sao Paulo.
The distance is about 8,400km in a straight line.
According to https://en.wikipedia.org/wiki/Skywave a single shortwave hop can reach 3,500km, so 3 hops are required, or about 30ms.
The shortest commercially available submarine cable between the US and Sao Paulo alone is significantly higher than that (almost double), and it comes out of the east coast, so you'd still have to factor in the latency between Chicago and New York.
Even specialized low latency networks that mix wireless and fiber will still have much higher latency than the radio.
The tradeoff is that shortwave radio has very little bandwidth so you're restricted to simple signals.
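A quick back-of-envelope on the route above, using the straight-line distance and ignoring the extra path length added by the ionospheric hops:

```python
C_KM_S = 299_792.458  # radio propagates at roughly the speed of light, km/s

def one_way_ms(distance_km):
    """One-way propagation time in milliseconds at the speed of light."""
    return distance_km / C_KM_S * 1e3

# Chicago to Sao Paulo, ~8,400 km great-circle distance
print(f"{one_way_ms(8_400):.1f} ms")
```

That comes out just under 28 ms, consistent with the "about 30 ms" figure once hop overhead and the non-straight skywave path are added.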
Speed of light in an optical fibre is about 2/3 of the speed in air.
Light doesn’t go at light speed through optical fiber.
Sure it does. It's just that the speed of light in non-hollow optical fiber is slower than light in a vacuum.
Microsoft bought a hollow optical fiber company for a reason.
Huh, 50% faster. https://spie.org/news/photonics-focus/julyaug-2022/speeding-...
Yes, funnily enough Microsoft's reason was not HFT but AI. Essentially, inter-datacentre training is limited by latency between the datacentres.
Generally they want to build the datacentres close to metro areas; by using hollow-core fibre, the radius of where to place the datacentres has essentially increased by a factor of 3/2. This significantly reduces land acquisition costs, and supposedly MS has already made back the acquisition cost of Lumenisity through those savings.
That feels somewhat implausible. I assume a Microsoft-sized data center starts at over $100 million. Moving the footprint X miles away might be cheaper, but is probably a drop in the bucket given everything else required for a build-out. I would further assume that they were already some distance away from top-tier expensive real estate to accommodate the size of the facility.
By definition, it does, because the maximum speed is qualified by "the speed of light in a vacuum", so the speed of light [in other media] is simply a function of how much the medium slows it down, yet it is still the speed of light. Funny how that works!
MEV, but for TradFi.
FT Alphaville: High frequency trading
Skywave Networks accuses Wall Street titans of ‘continuous racketeering and conspiracy’
FT Alphaville is a blog attached to the Financial Times newspaper. It's free to sign up for an account.