The Risk Of Speed

When it comes to automation, people tend to assume the robots will perform the same tasks as the humans they replace, just with fewer mistakes and fewer days off. While that is true, automation almost always means changing how the work gets done, breaking it into discrete operations. Instead of a man at a workstation doing a series of tasks, each task is done as a single event by a single robot. This simplifies the automation and reduces its cost by eliminating variables.

This atomization of the work not only makes the work process more efficient, it changes how the humans have to analyze it. Instead of focusing on the people, they must focus on the process. That’s always part of process improvement, but because the process and the variables change, new phenomena turn up in the process. In statistics, they say quantity has a quality all its own. In automated systems, speed has a quality all its own. Those super fast, super accurate robots change the nature of the process.

Think of the game of table tennis. It is a pretty simple game, in terms of strategy. The players try to trick one another with various tactics, like setting up a shot or putting spin on the ball so it is hard to return. Player A will use topspin to force Player B to change how he strikes the ball. At some point Player A will change the spin, fooling Player B, who then hits the ball beyond the far edge of the table. Alternatively, one player will make the other move side to side, increasing the chances of a physical error.

If you are coaching table tennis, it is all about training the human to play against the other human. Now, replace the players with robots. The first thing that changes is the players will not make physical errors. So, the side to side business no longer makes sense. The same is true of using ball spin to induce a physical error. The robots will strike the ball correctly each time. In other words, when you remove human error and human emotion from the game, the strategy of the game has to change as well.

It also means the game changes. For example, the team that makes the first robot player will build it to capitalize on human error. Soon, other teams will replace their humans with robots. At that point, everyone stops trying to exploit human error. Instead, they are trying to make faster robots. If their robots can exceed the physical limits of the other robots, then they win. Soon, there is an arms race between the robot builders to make the fastest robot, in terms of physical response, along with faster processors.

If you stop and think about what this would look like, it sounds kind of cool at first. The first robots would be slow and stupid, but eventually they would be pretty amazing. They would go from amusing to terrifying as the speed of the game became incomprehensible to humans. The speed, agility and processing power of the machines would have the ball flying through the air near its maximum velocity of 900 miles per hour. The paddles would be made of special material to keep them from flying apart.

Automating the game of table tennis would first result in removing the strategy of the game that exploits human failure. This would be true of any system that is being automated. System analysis would also change as the speed of the machines would create new points of failure and new challenges, in terms of finding efficiency and a competitive edge. In other words, as the problem solving shifts from the human variable to the engineering issues, system analysis has to change accordingly.

Now, instead of robots playing table tennis, let’s think of something else. Currently, close to 90% of trades in the equities markets are done by robots, which are just computer programs attached to the financial system. These programs take in financial data from across the system and output buy and sell decisions. Teams of smart people called “quants” spend endless hours fine-tuning their programs to make them faster and more efficient at trading equities.
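
As an illustration only, here is a minimal sketch of the basic shape of such a program: market data in, a rule evaluated, orders out. Everything in it, the quote feed, the thresholds, the mean-reversion rule, is a hypothetical stand-in, not any real firm's system.

```python
# Illustrative sketch of a rule-based trading "robot." The data feed,
# thresholds and order handling are hypothetical stand-ins, not a real API.
from dataclasses import dataclass

@dataclass
class Quote:
    symbol: str
    price: float
    moving_avg: float  # e.g. a 50-period moving average supplied by the feed

def decide(quote: Quote, band: float = 0.02) -> str:
    """Return 'BUY', 'SELL' or 'HOLD' from a simple mean-reversion rule."""
    if quote.price < quote.moving_avg * (1 - band):
        return "BUY"   # price well below trend: bet it snaps back
    if quote.price > quote.moving_avg * (1 + band):
        return "SELL"  # price well above trend: bet it falls back
    return "HOLD"

# The whole "robot" is a rule like this evaluated continuously against
# live data, far faster than any human could react.
for q in [Quote("XYZ", 98.0, 101.0), Quote("XYZ", 103.5, 101.0)]:
    print(q.symbol, q.price, decide(q))
```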

The book The Money Game, written in the 1960s, presciently predicted the rise of the machines in the financial markets. What was clear to smart people at the dawn of the robot age, but not clear to most people, is that the old systems regulating and controlling markets would not hold up to automation. It took the Black Monday crash of 1987 for everyone to realize that the controls had to change in order to accommodate the new robot players in the financial system.

In the 2000s, the rise of high-speed trading algorithms and large-scale trading models eventually broke the system again. The emergence of the so-called “flash crash” was entirely due to speed. While the first phase of automation removed the normal human checks on trading, resulting in runaway selling, the next phase allowed bad human decisions, like errors in trading algorithms, to be implemented so quickly that the systems could not respond. The result was erroneous sell-offs.

That brings us to the current market volatility. The decline itself is getting all of the attention, mostly for marketing and political reasons. The dullards in the media know how to sell gloom and they like blaming bad news on Trump. Historically, this bear market is not important. Whether it is called a correction or a bear market, the numbers are not all that significant. We’ve seen much worse. No one is jumping from their office windows and the public is not banging the sell button on their investment accounts.

What’s unique about this market is the weirdness. There is sustained volatility, along with a sustained decline that does not appear to correlate with factors in the economy or in the financial system. The tiniest bit of news can cause wild swings. Apple announced what everyone should have known by now, that their toys are not selling as well as in the past, and the market took a big tumble. Apple shares dropped 10% in minutes. Of course, this rippled through the rest of the market in seconds as well.

What could be happening is the next phase of automation. The speed and complexity of the algorithms are no longer comprehensible to the humans involved in the system. Like our table tennis playing robots, a level of speed and complexity passes beyond the event horizon of human comprehension. Watching the robots play table tennis would be like watching a whirl of stars, beautiful, but impossible for the mind to fathom. Similarly, the new market dynamics may be reaching the limits of what human regulators can fathom.

This is not to imply that the robot traders have become aware and are now taking control of the system from humans. That would be interesting, but the robots are still relatively dumb. Instead, they have reached levels of efficiency and speed that exceed our ability to model properly. The result is the wild volatility and the seemingly irrational behavior of the markets. Put another way, this is the age of basic ideas implemented so fast and with such efficiency that they become irrational to their human creators.

68 thoughts on “The Risk Of Speed”

  1. Karl over at Market-Ticker.org has talked about HFT. He noted as well that the speed of the huge trading houses could lead to a human investor losing everything in the blink of an eye due to how fast trades can be done by the algorithms. HFTs have been playing games for a long time, and need to be hemmed up.

    He has been saying for a while that there needs to be a 2-second minimum dwell time from when an order is posted to when it is removed, just to give a human trader a chance to react to it.

    If you are in the stock market, you can be hammered into bankruptcy in the blink of an eye. I understand that lots of people are invested there who shouldn’t be because nothing else offers the rate of return, but that does not mean your savings should be there.

  2. If extreme volatility is once again rearing its head, the antidote for most average-joe investors is Harry Browne’s Permanent Portfolio. He advocated a mix of 25% total stock market index fund; 25% long U.S. treasury bonds; 25% gold; and 25% cash or cash equivalents. With ETFs, it is easy to assemble such a portfolio and rebalance it when any one of the four asset classes is + or – 10%. Each asset is held to perform in one of four prevailing economic conditions: stocks for growth/prosperity; gold for inflation; bonds for recession; and cash for deflation. It rarely hits home runs, instead delivering solid 8-9% returns year over year while seriously dampening volatility and protecting from outsized downside losses of capital. I find it to be an ideal portfolio for retirement, as it beats inflation, provides modest growth and preserves wealth. If you’re more adventurous, you can carve out 10-20% of your net worth for investments that seek the Black Swan type of upside without risking the capital you’ll need to carry you to your late 90s. This strategy allowed me to retire at 55 with a small pension. YMMV. Buona fortuna!
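
    For what it's worth, a rough sketch of that band check, reading the "+ or – 10%" as percentage points of the total (i.e. rebalance when any sleeve drifts outside 15-35%); the dollar amounts are placeholders.

    ```python
    # Sketch of the Permanent Portfolio band check. Assumes the +/-10% trigger
    # means percentage points of the total, so each 25% sleeve may drift
    # between 15% and 35% before a rebalance back to equal weights.
    TARGET = 0.25
    BAND = 0.10

    def needs_rebalance(holdings):
        total = sum(holdings.values())
        return any(abs(v / total - TARGET) > BAND for v in holdings.values())

    portfolio = {"stocks": 36_000, "long_bonds": 24_000, "gold": 22_000, "cash": 18_000}
    if needs_rebalance(portfolio):
        total = sum(portfolio.values())
        print("Rebalance to:", {asset: total * TARGET for asset in portfolio})
    ```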

  3. This pattern has held for a long time. Trading technology advances beyond the ability of the regulatory structure to manage it. Everyone knows this is happening. That is always the effective result (and sometimes the explicitly stated intention) of the technical advance. Due to regulatory capture the problem generally goes unaddressed until the inevitable crash. Then a new regulatory structure is instituted addressing the causes of the latest crash, and the pattern repeats.

    If anyone is around to pick up the pieces this time, we’ll get a transaction tax and a capital gains tax seriously penalizing short term holdings. Maybe other speed limits to put the brakes on the velocity and volatility of robot trading. Any attempt to prevent the crash before it happens will face the united opposition of all The Money and all Its Politicians. The old idea of Right and Left is generally obsolete these days, especially when The Money is involved.

    • This is one of the biggest weaknesses of the Right. They don’t have a clue as to how dangerous our tech can be.

      Given any chance, corporations will enslave you faster than the Government, and the only check is the State. Right wing views? You don’t get to buy anything. Deplatform you from life, or just chip you and use you as a machine tool a la Continuum.

      The only way to prevent this is a regime that controls how tech is used.

      I think elements of the Right think the collapse will be safer and will manage to keep tech in check. Well, maybe in a century or two, as post-industrial society keeps running down, but we don’t have that long.

  4. OT:

    There is a transcript up of Tucker Carlson’s monologue on Breitbart about families, markets and our leaders. It’s so alt-right it’s scary and is one of the best I have ever come across in the last 30 years.

    https://www.breitbart.com/video/2019/01/03/carlson-romneys-trump-attack-indicative-of-a-ruling-class-who-feel-no-long-term-obligation-to-the-people-they-rule/

    This is the sort of article that one can pass around to people who are leaning alt-right.

  5. Imagine a society where 90% of everything is done by robots

    Who buys all the production?

    The answer is the State

    Fundamentally, though, societies will have to be run either for the benefit of the elite and their machines, assuming they don’t exterminate the rest of us, or for the benefit of humanity, which suggests we’ll need a Butlerian Jihad sooner rather than later.

  6. Consider the possibility that this apparent irrationality and volatility has been happening all along, since the beginning of the stock market, but the ability to report the second-by-second actions of the market was lacking.

    Perhaps all the crazy price action is simply the result of faster methods of reporting the action.

  7. Two things:

    1) Re AI and the humanly incomprehensible: the new chess program AlphaZero recently dethroned Stockfish, formerly the strongest engine, which the world champion was only able to draw against in a minority of games. AlphaZero was developed from a program invented to play the Chinese game of Go, which is in fact more intuitive than chess; AI had been unable to defeat the strongest Go masters until its predecessor AlphaGo defeated the top Go player, a historically strong player. It broke his spirit and he cried! The technology was adapted to chess, and AlphaZero learned by playing games against itself before it confronted Stockfish, which it soundly demolished. It considers fewer moves in less depth than previous programs, imitating human intuition. The point: its games feature bizarre, passive piece sacrifices whose compensation no human can readily perceive! Interestingly, it spontaneously chose the Berlin Defense against the king pawn opening, which humans only recently determined to be best.

    2) I have a question about the S&P 500 I would like to pose to you wise people. Its graph is fairly level until 1985, when it abruptly increases to unprecedented heights and continues to rise. Why is this?

    • Use a semi-log chart and you will have your answer. You are looking at compounding over a massive length of time in arithmetic form. You need to look at returns over time on dollars invested, and held, in a dynamic arrangement, and that is logarithmic, not arithmetic.
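
      To make the arithmetic-versus-logarithmic point concrete, with r an assumed constant annual return and V_0 the starting value:

      ```latex
      % Constant compounding: a curve that steepens forever on an arithmetic
      % axis, a straight line on a semi-log axis.
      V(t) = V_0\,(1+r)^t
      \qquad\Longrightarrow\qquad
      \log V(t) = \log V_0 + t\,\log(1+r)
      ```

      On an arithmetic chart the first form looks like a hockey stick even though r never changes; on a semi-log chart the second form is just a straight line, which is why the apparent post-1985 takeoff largely flattens out.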

  8. The real underlying problem in all this — not that this is gonna go away, or that the world will obey my whims — is that the underlying premise for value in an over-financialized universe is just ever more elaborate forms of arbitrage, or what we used to call skimming. Usury is slavery, and at this advanced level, arbitrage is just more slavery. These people don’t create value, they just get rich skimming a wee bit off of every transaction, for zillions of transactions. It’s just highly-organized mass theft, and it enslaves millions as a result.

    Value needs to be predicated on stewardship, production and distribution, not clever forms of masking slavery. I don’t care if the slavers are the robots or (((fellow white people))), it’s still slavery, and it has to end.

    • Going back to Nunya’s entry at the very top, only buy what you want to own. If you don’t want to own it, or not pay the price being asked, simply pass. It has become all about trading ‘em, not owning ‘em. Wall Street gets paid when you trade ‘em, but not when you buy and keep. Don’t fall for their game, they want to make it a horse race and have you switch horses. Instead, treat it like real estate; Buy low, collect the rents, build equity, and wait. Boring as all hell, but very profitable in the long run, if you make reasonable choices and not pay too much. Pick your spots; I’m waiting it out for now.

    • Most modern corporate executives are skimmers, not stewards. Ownership should come with responsibility. Limited liability and governmental blessing of incorporation have helped lead to this mess.

      • The typical executive contract gives big bonuses (in stock or stock options) for a substantially higher stock price over time. They then “short-time” the decision making in the stewardship of the company, and base it all on what it will do to the stock price in the short run. And there you go.

  9. Keep in mind the financial “news” is about coming up with reasons to explain today’s market fluctuations. There is a psychological overlay to market pricing, and taking advantage of the “madness of the crowds” is how George Soros got very rich.

    Investing is about taking specific positions and then sizing your bet properly. You can do this strategically (long term) and tactically (short term). Long term investing weighs the changing value of the investment choice over time, and a successful choice involves owning something in which the intrinsic value of ownership rises over time. Short term investing involves getting the changes in investor psychology correct. Positioning and sizing. That’s all it is. The rest is noise, and while AI can work through the noise to tactically trade optimally, it will have much more difficulty in strategically picking choices with long term intrinsic value, and a lot of trouble sizing the long term bets properly.

  10. True, but the algorithms are written by humans to reflect a certain strategy so they’re just executing what a person would have done more quickly. There’s not a ton of money in self-thinking AI algorithms.

    The biggest amount of money in a single strategy is index funds, particularly the S&P 500. It’s an algorithm. Buy or sell stocks to track as closely as possible the S&P 500 index. And, yes, index investing is a “quant” strategy. Any strategy based on a set of rules – usually developed in academia or certainly by practitioners who have academic training – and implemented religiously is a quant strategy.

    The same is true for the other big quant strategies (also stupidly known as Smart Beta), such as value, momentum, quality, low vol and trend.

    But AI and algorithms will always have a difficult time taking over the investing world because no matter how smart or fast their strategies, firms aren’t using their own money, they’re using other people’s money. And those people don’t know shit about investing. Boards of directors at pension funds, as well as everyday people, have an extremely limited time frame for evaluating strategies, usually between one and three years. What’s more, they use incorrect benchmarks to evaluate those strategies.

    That’s a disastrous combination.

    Every decent investment strategy goes through periods of underperformance lasting well over three years, often for a decade or more – and that’s when compared to the correct benchmark. They’re sure as hell going to underperform an incorrect benchmark for long periods of time.

    Investing is way more about psychology than math because humans control the money, not computers. No computer will ever figure out a fool-proof way of making money that never underperforms for more than a year or two because everyone will quickly figure out what it’s doing and copy it.

    Nope. You don’t “outperform” the market (which is really outperforming other investors) by being smarter and faster than everyone else. You outperform by being more disciplined than everyone else.

    Most people would be best off just owning index funds and taking what the market will give them. If you want to do a bit better, I’d suggest a dual momentum system such as Gary Antonacci’s GEM, Vigilant Asset Allocation or ReSolve’s Adaptive Asset Allocation, but you have to learn why they work or you’ll bail out at the worst time.
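
    For anyone curious, here is a loose sketch of the dual-momentum decision rule as I understand it, not Antonacci's exact GEM specification; the 12-month lookback and asset sleeves are illustrative.

    ```python
    # Loose sketch of a dual-momentum (GEM-style) decision. Inputs are
    # trailing 12-month total returns; not Antonacci's exact specification.
    def dual_momentum(us_equity_ret, intl_equity_ret, tbill_ret):
        # Absolute momentum: hold equities only if they beat cash over the lookback.
        if max(us_equity_ret, intl_equity_ret) <= tbill_ret:
            return "aggregate bonds"
        # Relative momentum: pick the stronger equity sleeve.
        return "US equities" if us_equity_ret >= intl_equity_ret else "international equities"

    print(dual_momentum(us_equity_ret=0.08, intl_equity_ret=0.05, tbill_ret=0.02))
    ```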

  11. While the basic argument is still good, I’d bet the maximum velocity of a ping pong ball is more like 90 mph, not 900, as that would exceed the speed of sound and result in the ball being crushed with every blow.

      • Interesting. 916 mph. Never doubt the Zman. When they take my guns I’ll carry my fully automatic pong pistol instead.

      • Sure, when fired by some cannon, and it says that would blast right through a paddle. Are they going to start using titanium? A top slam speed of 70 mph is more realistic.

        • Yes, yes they would. The physics is still conservation of momentum/energy. Also, the robot can be programmed to increase contact time (follow-through), which increases impulse and therefore momentum (rough numbers below).

          Don’t believe the technology would advance to meet the need? Just look at modern football helmet tech.

          The only limiting factor is monetary. What is the financial benefit of hyper fast ping pong?

          Or, missile defense, or tank armor, since it’s the same set of physics principles in play. Tanks use ablative armor to deflect projectiles (protect the paddle, so to speak). Missile defense is basically trying to hit the ball (missile) with the paddle (interceptor).

          Technology can and would evolve.
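
          For the sake of the momentum argument, a rough back-of-the-envelope with illustrative numbers: a 2.7 g ball, a 40 m/s (roughly 90 mph) exit speed, and a 1 ms contact time.

          ```latex
          % Impulse-momentum estimate, illustrative numbers only.
          J = \Delta p = m\,v = 0.0027\,\text{kg} \times 40\,\text{m/s} \approx 0.11\ \text{kg·m/s}
          \qquad
          \bar{F} = \frac{J}{\Delta t} = \frac{0.11}{0.001\,\text{s}} \approx 110\ \text{N}
          ```

          A longer contact time (the follow-through above) delivers more impulse for the same force, so the ball leaves faster; equivalently, for a given exit speed it spreads the force out and is easier on the paddle. Pushing exit speeds toward hundreds of miles per hour scales those forces up accordingly, which is where the exotic paddle materials would come in.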

  12. No, the issue is pretty simple. Somebody wrote some code that says “if the Fed raises rates, sell”. Could be as simple as a yes/no switch somebody throws that is Fed/good, Fed/bad. The robots start buying or selling accordingly.

    I’ve said for years that in the absence of ZIRP the Dow would be 2/3 or 1/2 its present value. Low rates were maintained to prop up Obama. They would have been maintained to prop up the First Woman President.

    No reason for that now. Plus the Fed needs someplace to go when the next recession inevitably arrives.

    Like the last couple of tech-led corrections, the current one is the result of overpriced stocks unable to deliver on their promises. Tesla at $299? Facebook at $132? (It was $209 in July!)

    It’s not robots, it’s economics.

    Btw, you’re not wrong about speed. Most military advancements are designed to go faster and eliminate existing defenses, leading to faster defenses, and so it goes. There’s a great Star Trek episode from the ’60s where two societies agreed to skip the bombs altogether and just wipe out their own cities to save time.

  13. I don’t know what impact this has on your main thesis, but it definitely isn’t true that “The robots will strike the ball correctly each time.” Robots, just like humans, have physical limits, and if you push them to their limits they will start making errors. And humans can in fact play without error, provided they are playing well below their own limits. (Imagine a slow friendly game between an amateur and a world champion. The champion will probably not make any mistakes). So strategy will always have a place in the game, whether the players are biological or mechanical.

    • Sure. As I said, initially the robot players will be clumsy and stupid. The goal of the engineers will be to make them more nimble and increase their processing speed. This sets off a dynamic where at some point the number of “errors” falls to such a small number that it is perceived by the humans as perfect. In the Six Sigma world, a defect rate on the order of one in a million is the threshold of acceptability. If one in ten million operations is defective, you have effectively eliminated defects, from the human perspective.

      It’s a good example of how automation changes the people working the problems. Once you cross a certain threshold of defects, for example, you shift from remediation and exception handling to exception analysis. The people end up operating within subsystems of a larger system that is unknown to them. Instead, they focus on discrete operations within a set of interfaces.

      • This is exactly what I am disagreeing with! As long as robots have limits (which means always), if pushed to those limits they will start making a significant number of errors. What do you think would happen in one of those Six Sigma factories if you overclocked the robots and tried to double their output?

        The goals of a factory robot (minimize errors) and a table tennis robot (beat your opponent) are different, which means their strategies must be different. The competitive robot, just like the human champion, must be willing to skate to the edge and attempt difficult shots that it won’t always be able to pull off; otherwise it will be overwhelmed by a more aggressive and less risk-averse opponent. The factory robot, OTOH, always remains carefully within its limits, and as a result makes very few mistakes. But the basic dynamic — push hard to win, slow down to minimize errors — is the same for both humans and robots.

        • The goals of a factory robot (minimize errors) and a table tennis robot (beat your opponent) are different, which means their strategies must be different.

          Nope. Anyone who has played team sports knows the key to victory is minimizing your errors, while exploiting the other side’s errors. Two football teams that execute flawlessly for the entire game will end in a draw, controlling for externalities.

          • I used to circle track race. Success came from approaching the Saturday night like a machine. Optimize lap times, identify who you can trust to drive around and who to avoid, expect occasional bad nights, and have more good nights than your competitors. Worked very well.

            Optimizing robots for production and for market trading are two different things. Production is about defining tasks and optimizing motions and functions to complete the task. Robots in the markets are trying to identify patterns and act on them. Outcomes will vary, because the markets are not exact and not entirely predictable. One can argue that AI can master so many inputs that it will approximate an optimum outcome. The difficulty in markets is that each input also affects the other inputs, and the matrix of combinations of inputs is orders of magnitude higher than the inputs themselves. This also edges into problems of small sample sizes for any one combination of inputs and repeatability of outcomes from a given combination of inputs. Finally, the ability for one small transaction, at the right moment, to meaningfully set the price of an asset wildly higher or lower in “mark-to-market” value, trips everything up at times.

      • I would highly recommend a book by John Gall – “Systemantics”, or in its third edition “The Systems Bible”. It’s very tongue in cheek but it describes the problems with systems and processes in one of the best ways I’ve ever seen. A lot of what you covered comes up under several aphorisms: “The real world is what is reported to the system”, “Systems tend to oppose their own proper function”, and “In complex systems, malfunction and even total non-function may not be detectable for long periods, if ever.”

        It’s probably the best book out there with lots of examples of complexity gone terribly wrong. Sadly many of our so-called process and system experts have never read it, and are far too confident in the results that they think they’ll get if they just implement a new silver bullet. “Lean”, “Six Sigma”, “Agile”, etc etc etc.

        Amazon has a good preview up with the excellent preface for free:
        https://www.amazon.com/SYSTEMANTICS-SYSTEMS-BIBLE-John-Gall-ebook/dp/B00AK1BIDM

        • “In complex systems, malfunction and even total non-function may not be detectable for long periods, if ever.”

          Ask anyone who repairs cars. Or better yet, ask any car-manufacturer warranty manager. There are plenty of things that happen in ‘perfectly good’ car electronics systems which don’t show up for quite some time, or happen but go unnoticed until a subsequent/consequential failure…..

  14. I treat the general financial news just like the drivel that is the normal news – noise to fill the airwaves between commercials.

    I invest in 3 ways.

    1. My company 401k has fairly limited choices of funds. I spread my savings around there as best I can.
    2. Most of my after-tax savings goes into other mutual funds in sectors I think are fairly stable.
    3. Once in a while I buy actual stock in a company I think will grow – and hang on to it for a while (usually years).

    The day-to-day stock news seems like complete nonsense.

  15. What you are describing is hypersensitivity, which is detrimental to robustness. In natural evolution, this characteristic is frequently eliminated from the population because it does not enhance reproduction. Similarly, if in our modern age of accelerating technology (and its side effects of hardship extinction and dependence addiction) people choose to live in the moment rather than make babies, then organic robustness will decline until the species becomes vulnerable to a mass extinction event. Oddly, a financial crash that triggers a severe depression is actually a cure for hardship extinction.

  16. “That would be interesting, but the robots are still relatively dumb. Instead, they have reached levels of efficiency and speed that exceeds our ability to model properly. The result is the wild volatility and the seemingly irrational behavior of the markets.”

    This is an example of a problem identified by a philosopher named Hubert Dreyfus (who died a few years ago). He published a book titled “What Computers Can’t Do”, followed up some years later by “What Computers Still Can’t Do”. The AI crowd called him every name under the sun for a decade or two until they came to realize he was correct. The problem is now known as the “Dreyfus Problem”.

    In a nutshell, he identified a critical difference between human intelligence and machine intelligence: humans have an innate ability to apply their analytical abilities to the external world. Dreyfus argued that all information is in the world and not in our heads. So as the world changes, or as our understanding changes, we learn to incorporate those changes into our thinking and modeling. A lot of our understanding of the natural world runs on instinct and intuition, which operate at a level much deeper than reason.

    For a machine, all information resides not in nature but in the programming. The machine cannot learn from nature the way we learn. It can expand data sets, etc. But it cannot change its ‘paradigm’. It assumes at some rational level that nature is fundamentally understood and all new information takes the form of changes in frequency and noise.

    There’s been a breakthrough in AI trying to learn from nature. It revolves around the game Go. It’s operating at a primitive level but shows promise. There are probably layers (a few? many? infinite?) to Dreyfus Problems. Something worth knowing about and keeping an eye on.

    • There’s also a good chance, in my opinion, that humans operate from a primitive model of their surroundings. This makes the model fast and highly adaptable. Read my post on cats and you’ll see what I mean. This suggests a layer to intelligence that prioritizes information in the model.

      • Years ago, I met Jeff Hawkins at a dinner where he gave me a copy of “On Intelligence” – now a bit dated, but elements of his theory of how the brain models its surroundings and runs its own predictive models are spot on to your point. He was working on visual recognition in his post-Palm Computing days, trying to understand how humans recognize objects from a tiny fraction of the data points required by a computer.

        In a larger sense, what I’ve observed in my industry is that modeling speeds decision making, but exponentializes error. One source of immense puzzlement over the last several weeks was how badly all the widely accepted models screwed the pooch on Hurricane Michael, despite its rather ordinary characteristics and the wealth of geography-specific historical data. We’re talking factors of 2 and 3x from otherwise very, very reliable damage models.

      • I believe I’ve quoted this before on the blog but this immediately recalls something Francis Cianfrocca said about human intelligence. (He’s a tech guy/Wall Street guy with a lot of experience in crypto, networks, military use of computer systems, etc.) He notes that there’s something fundamentally different about human cognition compared to how we make computers “think.” He said human intelligence relies heavily upon cutting corners, upon knowing what to ignore or discard from thought.

        • That is, in large part, what Hawkins describes in theories of how memory is laid down in the neo-cortex. The brain is exceptionally efficient in accessing only the minimum amount of data necessary to make a decision–and appears to have all sorts of heuristics wired in to help that happen.

          • Getting back to the sports thing, pros in any sport are great at paying attention to the only things that matter in their performance, and ignoring the rest. That is partly why when a play gets screwed up, it often royally goes into the toilet, as the players need to act outside the envelope of “knowns” they are used to paying attention to. It is also why, in football for example, “fumble drills” are done in practice, to teach the players how to respond reasonably well to a predictable and important element of a broken play.

          • We’re running very ancient brains with very old programming in a very new world. I also think a lot of our brain computing power is used in analyzing and threat avoidance. Computers are probably what our brains would be like if we weren’t worried about a possible snake in a shaky bush nearby.

        • People don’t think in words.
          Words are merely the way we describe our thoughts to ourselves and others.

        • That reminds me of a comparison between humans mastering a task and designing a program to do so. As a human’s mastery increases, the amount of energy spent performing the task decreases. The mind subconsciously determines the minimum number of neurons and muscles needed to perform it, and then it gets done autonomously. The classic example is driving a stick shift: mentally taxing and haphazard at first, it quickly becomes effortless and almost unconscious. With computer programs the opposite happens; they grow inexorably with each iteration.

  17. Zman, this was a great column, illuminating a difficult concept. Was it something in the works for a while, or the result of a “spontaneous” insight?

    The ping pong analogy cleverly clarified the larger issues of automation. I’m going to share this with my son who will be starting an engineering program in the fall. Thanks.

    • Honestly, I just thought of it this morning, but I like puzzles, so system analysis comes naturally.

  18. What you call “robots” in trading are actually artificial intelligence algorithms, called “machine learning,” that train on past data in order to predict future data. Thus, past price data on oil stocks is trained on in order to predict what a future oil stock price would be, given a trend.

    A new technology called “deep learning” allows even more types of data to be used alongside the stock data. So now, along with past stock prices, you can train on the text from news sites. Deep learning has gotten sophisticated enough to recognize things like sarcasm, irony, etc. in text. It of course means that even a hint of bad news on the news websites will trigger some sort of automated correction, hence probably the wild swings you see as the news oscillates between gloom and doom.

    This is great in terms of understanding trends and predictions. These algorithms are, however, not 100% perfect. They are, after all, based on whatever data is available and what data is used according to the bias of the trainer. These algorithms at times need to be retrained, as they will have predicted incorrectly on a batch of data they had never seen or were never able to extrapolate from. So another hiccup in the robot revolution is that these wild trends could be due to robots predicting on something they never accounted for before.
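
    As a purely illustrative sketch of the "train on past data to predict future data" workflow, here is a toy model fit on lagged returns; the data is synthetic noise, and the point is the shape of the pipeline, not an edge.

    ```python
    # Toy version of "train on past prices to predict the next move."
    # Synthetic noise; illustrates the workflow, not a profitable strategy.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    returns = rng.normal(0, 0.01, size=1000)   # fake daily returns

    n_lags = 5
    X = np.column_stack([returns[i:len(returns) - n_lags + i] for i in range(n_lags)])
    y = (returns[n_lags:] > 0).astype(int)     # did the next day go up?

    model = LogisticRegression().fit(X[:800], y[:800])
    print("out-of-sample accuracy:", model.score(X[800:], y[800:]))  # ~0.5 on noise
    ```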

    • Not for nothin, but I was reading about trading models consuming the text of financial news twenty years ago. It’s not new. In fact, I think it was Bloomberg terminals that were first exploited for this purpose, which was back in the Bush years.

      Again, read that book I linked. Exploiting information asymmetry has always been the core of the equities markets. As data becomes more available to be consumed by increasingly complex models, asymmetry between models declines. Asymmetry between humans and these models increases.

      • True, but deep learning is a new step in getting a deeper context out of news articles. I’ll definitely read the book, but look up Word2Vec and GloVe. If you can get past the math, it explains how words are now relatable to each other. One of the trickiest things in natural language processing was getting the context and meaning of sentences. When computers using strictly rule-based approaches were unable to do it in the 1950s, a long AI winter ensued. We now know linguistic grammar is more statistical, hence why text is handled more accurately when trained on data than with rules (which is how it would have been done 20 years ago, as big data didn’t exist back then). You’ll understand why, with big data and these new algorithms, we are now in the midst of the 4th industrial revolution.
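
        If it helps, the "words relatable to each other" idea boils down to cosine similarity between learned vectors. Here is a tiny sketch with made-up 3-dimensional vectors; real Word2Vec/GloVe embeddings are learned from large corpora and run to hundreds of dimensions.

        ```python
        # Illustrative only: made-up 3-d "word vectors." Real embeddings are
        # learned from text and typically have 100-300 dimensions.
        import numpy as np

        vectors = {
            "stock":  np.array([0.9, 0.1, 0.0]),
            "equity": np.array([0.8, 0.2, 0.1]),
            "banana": np.array([0.0, 0.1, 0.9]),
        }

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        print(cosine(vectors["stock"], vectors["equity"]))  # near 1: similar contexts
        print(cosine(vectors["stock"], vectors["banana"]))  # near 0: unrelated
        ```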

        • Please. I do a lot of work with data, and “Big Data” is mostly just another silver bullet sold by con men to idiot corporate leadership. “The Math” you rave about is mostly more of the same piss-poor frequentist and Bayesian statistical techniques that end up being far too certain of their results. There are a few use cases that are somewhat valuable, but on the whole most Big Data initiatives are ways for Hindus from Deloitte, Accenture, etc. to fill space and provide kickbacks to the project sponsors.

          Computers do not think, cannot think, and will not think as humans do. There have been some fun results from NLP, such as the wonderful Microsoft HitlerBot Tay. Language requires nuance, and in-person communication is far more than just the words spoken by someone. I don’t doubt we will have robots that can reasonably simulate the written word, but show me an AI that can write even a consistent five-paragraph very short story and I’ll be impressed. The MIT Media Lab did Shelley, and all it did was the beginnings of stories.

          • This is very true. ML/”Big Data”/”Deep Learning” will cool off (or maybe even crash) in the same way that AI will, within the next few years. As in most cases with complex technologies, the most enthusiastic are the least well informed.

    • As Zman points out, though, the computers are fast but dumb. In my master’s-level statistics course, I tried to develop a correlation between past data and future performance to predict stock movements. No matter how many variables or data points I added, I could never get a correlation coefficient above .5.

      Technical models are worthless because stocks move on anticipation, not past data. The only benefit computers provide is the speed of trades in the split second after news is released. That is why Wall Street firms lease co-location space at the NYSE. It gives them a fraction of a second advantage during the time between the news release and the stock trade.

      Despite all their complex models, approximately 70% of managed funds underperform the index averages, usually by the amount of the transaction fees, taxes and bid/ask spreads the funds pay for their trades.

      “Number one rule of Wall Street. Nobody, I don’t care if you’re Warren Buffett or if you’re Jimmy Buffett, nobody knows if a stock is gonna go up, down, sideways or in fucking circles, least of all stock brokers.” – The Wolf of Wall Street
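
      In the same spirit as the statistics-course exercise above, a quick synthetic check shows why that correlation comes out so low when returns behave like a random walk; illustrative only.

      ```python
      # On a random walk, yesterday's return says almost nothing about today's,
      # so the lag-1 autocorrelation hovers near zero. Synthetic data.
      import numpy as np

      rng = np.random.default_rng(42)
      returns = rng.normal(0, 0.01, size=5000)

      corr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
      print(f"lag-1 autocorrelation: {corr:.3f}")   # roughly 0.0
      ```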

    • All of these modeling tools are far too certain in their predictions. Machine and Deep Learning is very useful for modeling processes where the rules are well defined but the search spaces are wide, and there are very few variables (Chess, Go, etc). The robots trading the market are just doing what human traders have done for years. They make crazy predictions and they get lucky once in awhile. And they take advantage of timing arbitrage to get in ahead of another robot. At the bottom, it’s all gambling.

      I, for one, would like to see a few laws passed eliminating the limited liability of shareholders and the legal personhood of corporations. There’s far too much play money on Wall St floating around, screwing over the working folks. Company ownership used to mean the actual owner was responsible for his company, took care of his workforce, and provided social capital to the place of his business. He had a sense of noblesse oblige, even if not to aristocratic standards. Now our ‘executives’ are thieves in suits and the “owners” of these corporations get screwed too.

      • “Company ownership used to mean the actual owner was responsible for his company, took care of his workforce, and provided social capital to the place of his business.”

        That’s the money quote, right there. This ain’t yer granpappy’s capitalism.

  19. The automation you describe reminds me of what happened to the geeky battle bots originally birthed at MIT, which went on to become a series of televised games. The first bots fought with weapons modeled after what humans use in hand-to-hand combat. They evolved into non-human weapons and strategies that test the limits of electric motors, gears, chains, belts, metallurgy, etc.

  20. Or maybe the Austrians were/are right? Before we start writing the next Terminator sequel, think about that for a second.
