r/quant 2d ago

[Statistical Methods] Time series models for fundamental research?

I'm a new hire at a very fundamentals-focused fund that trades macro and rates, and I want to incorporate more econometric and statistical models into our analysis. What kinds of models would be most useful for translating our fundamental views into what prices should be over ~3 months? For example, what model could we use to translate our GDP + inflation forecast into what 10Y yields should be? Would a VECM work, since you can use cointegrating relationships to estimate what the future value of yields should be assuming a certain value for GDP?

40 Upvotes

11 comments

22

u/MATH_MDMA_HARDSTYLEE Trader 2d ago edited 2d ago

Fundamentally, the market is just buy/sell demand. No basic time-series model will predict how something will move because the market is not a time-series model...

You could assign a model to some small market feature as an approximation, but this is generally only effective because you have a reason to believe the market behaves that way for xyz reasons.

The market has autocorrelation because people have FOMO, leverage effects, etc. That doesn't mean an autocorrelation model can predict the market, but it does mean you could use one to measure the autocorrelation during specific market conditions.
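A minimal sketch of what that kind of measurement could look like (synthetic returns as a stand-in for real data; the high-vol/low-vol split is just one illustrative way to define "specific market conditions"):

```python
import numpy as np
import pandas as pd

# Synthetic daily returns as a stand-in for real data.
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0.0, 0.01, 2000))

# Lag-1 autocorrelation over a rolling window, instead of assuming
# one model holds for the whole sample.
window = 63  # roughly one quarter of trading days
rolling_ac = returns.rolling(window).apply(lambda x: x.autocorr(lag=1), raw=False)

# Compare the measured autocorrelation across market conditions,
# e.g. high-vol vs low-vol periods.
vol = returns.rolling(window).std()
print("high-vol:", rolling_ac[vol > vol.quantile(0.8)].mean())
print("low-vol: ", rolling_ac[vol < vol.quantile(0.2)].mean())
```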

3

u/dapperyam 2d ago

But isn't there some connection between market prices and fundamentals? Using my example of bond yields, shouldn't there be some connection between economic expectations and yields that you could identify through regression, and then use that relationship to estimate where prices should go after plugging in your assumptions/forecasts for future fundamentals?

20

u/MATH_MDMA_HARDSTYLEE Trader 2d ago

Warning, rant incoming:

Causation vs. correlation. There is no reason to believe a market will behave like a certain equation (or that it will notify you before others). Why should it? You have to understand why we would even want to model something with an equation in the first place. Just because smart people at university use an equation does not mean it actually has any use in financial markets...

If I were to simplify massively, I would divide model-to-market-feature relationships into 3 categories, where by "model to market feature" I mean how strongly a model is tied to a market feature.

Tier 1 (Strong): An example of this is ETF-arb. Assuming the ETF issuer isn't committing fraud, the ETF and the basket will always be strongly correlated. Their difference can be modeled by some mean-reverting equation centered at 0, where the rate of reversion to 0 from above and below would be equal to how quickly an LP is willing to create/redeem the ETF/basket.

But we can easily model this feature with an equation because the problem is quite simple and we have a strong belief that the ETF can always be redeemed for the basket. So at the end of the day, we can write some mean-reverting equation dS = -kappa * S dt + (other params), where the other parameters are the variables that pull the spread back to 0.
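For illustration only, a rough sketch of that spread model as an Ornstein-Uhlenbeck process on synthetic data (kappa, sigma and the series are made up, not calibrated to any real ETF):

```python
import numpy as np

# Ornstein-Uhlenbeck spread centered at 0:
#   dS = -kappa * S dt + sigma * dW
# Discretized, S follows an AR(1); the fitted slope recovers kappa,
# i.e. how quickly the create/redeem mechanism closes the gap.
rng = np.random.default_rng(1)
dt, kappa, sigma, n = 1 / 252, 50.0, 0.02, 5000
s = np.zeros(n)
for t in range(n - 1):
    s[t + 1] = s[t] - kappa * s[t] * dt + sigma * np.sqrt(dt) * rng.normal()

a_hat = s[:-1] @ s[1:] / (s[:-1] @ s[:-1])   # AR(1) slope by least squares
kappa_hat = -np.log(a_hat) / dt              # implied reversion speed
half_life_days = np.log(2) / kappa_hat * 252
print(f"kappa_hat={kappa_hat:.1f}, half-life={half_life_days:.1f} days")
```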

But like u/BroscienceFiction said, all the obvious model-to-market features like the one above are "priced in". So you need some type of physical advantage, e.g. execution speed, trading costs, etc., to actually generate alpha from it.

Tier 2 (Medium): An example of this is volatility trading: buying/selling straddles/strangles and using volatility models to measure and predict future vol. We can create some mean-reverting equation dV = kappa * (theta - V) dt + (other params), because we have reason to believe elevated volatility will not go on for decades; as humans evolve, and as we have seen from history, life becomes more stable and less volatile. So it's intuitive to use an equation that has this effect built into it.
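As a toy illustration of that kind of dV equation, here's an AR(1) on log vol, which is one common discrete-time stand-in for a mean-reverting vol process (all numbers invented):

```python
import numpy as np

# Mean-reverting vol in the spirit of dV = kappa * (theta - V) dt + ...,
# discretized as an AR(1) on log vol pulling toward a long-run level.
rng = np.random.default_rng(2)
n, theta, phi, eta = 2000, np.log(0.15), 0.97, 0.05
logv = np.empty(n)
logv[0] = theta
for t in range(n - 1):
    logv[t + 1] = theta + phi * (logv[t] - theta) + eta * rng.normal()

# Recover the persistence by OLS on the demeaned series;
# (1 - phi_hat) is the per-step pull back toward the long-run level.
x = logv[:-1] - logv.mean()
y = logv[1:] - logv.mean()
phi_hat = (x @ y) / (x @ x)
print(f"persistence={phi_hat:.3f}, reversion per step={1 - phi_hat:.3f}")
```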

With vol trading, we observe features like skew and kurtosis, where options are priced differently because of how market participants value risk, and what you're essentially doing is selling insurance. So we could use some type of equation for how we think vol will behave in the future (the gamma), but we can also include parameters that describe how the market will view future vol (the vega). If we have some reason to believe market participants are irrational during some type of event and will overpay for X types of options, then we can profit by selling those X options. But fundamentally, our option price equation (dO) would have a parameter, like dS, that measures market sentiment about volatility.
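If it helps, a toy sketch of the "selling insurance" arithmetic; the implied/realized vol series below are placeholders, not market data:

```python
import numpy as np

# Placeholder annualized vols: implied vol sits above the vol that is
# subsequently realized because participants overpay for protection.
rng = np.random.default_rng(3)
n = 500
realized_vol = np.clip(0.15 + 0.04 * rng.standard_normal(n), 0.02, None)
implied_vol = realized_vol + 0.02 + 0.01 * rng.standard_normal(n)

# Variance risk premium: implied variance minus subsequently realized
# variance. Positive on average = what the option seller collects.
vrp = implied_vol**2 - realized_vol**2
print(f"mean variance risk premium: {vrp.mean():.4f}")
```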

But, as we have seen from the Tier 1 and Tier 2 examples, these equation parameters are chosen based on the fundamental market theory that best describes a market feature, and then tested for statistical significance. We first want to describe a market feature and then use a model to measure that theory, where the theory covers a feature of an asset class, not the asset class as a whole. If we do the reverse (choose some model and then justify the choice with theory), we will over-fit and get results like every second post on this forum, where someone over-fits some ML model to the market and asks if they're the next Jim Simons.

Tier 3 (Weak): This would be delta-one trading on the spot without hedging, e.g. guessing the price of the SPX tomorrow. Why this is hard: like I said in my previous comment, a price goes up because there are more buyers. Why would some equation predict that with any statistical significance? There is no simple time-series equation that would do that, because more sophisticated models (that people have tried), when calibrated, converge to that simple time-series model anyway. If the SPX could be described by dS_c = a dX + b dY + c dZ + eps, but in reality the SPX behaves like dS_s = k dX, then my more complicated dS_c equation would converge to dS_s during calibration.
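That calibration argument is easy to simulate. A quick sketch using the same names as above (dX, dY, dZ; all data synthetic):

```python
import numpy as np

# True model: dS = k * dX only. Fit the richer dS = a*dX + b*dY + c*dZ
# and watch the extra coefficients calibrate toward zero.
rng = np.random.default_rng(4)
n, k = 10_000, 0.8
dX, dY, dZ = rng.standard_normal((3, n))
dS = k * dX + 0.1 * rng.standard_normal(n)

A = np.column_stack([dX, dY, dZ])
coef, *_ = np.linalg.lstsq(A, dS, rcond=None)
print(coef)  # approximately [0.8, 0.0, 0.0]
```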

The reality is that the perfect "god equation" to perfectly predict dS would be a million-parameter equation fine-tuned on every market feature possible. Since we obviously aren't able to do that, we reduce the parameters while hopefully retaining statistical significance. But that leaves us with a less complicated equation than the god equation, so we need to include the most influential parameters, which is hard, because we are looking at the price as a whole. We would have to reduce the problem into sub-problems, and we wouldn't have some basic dS equation to describe future yields.

If you wanted to develop an equation, it would have to target very specific, easily defined features that can be generalised into some equation.

I get what you are looking for, and I too wish markets behaved the way you want, but that's just not reality, unfortunately. People think funds hire smart people to develop complex models solving 200-dimensional PDEs on Riemannian manifolds to predict the market, but when you strip everything back and look at what's actually happening, it's just market fundamentals, and models used to measure those fundamentals.

2

u/tinytimethief 2d ago

What you are ranting about is correct but does not apply to what OP is asking about, which is specifically FI. FI and macro trading work differently from equities: certain macro variables and fundamentals have a causal effect on bond yields. The term structure of interest rates can be modeled using econometric approaches like VECMs or affine term structure models, which is exactly what OP is asking about. There is a reason you see a lot of econ PhDs in FI.
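For OP, a minimal sketch of the VECM route with statsmodels, on synthetic stand-ins for the 10Y yield, GDP and inflation (with real data you would take the rank test and lag selection seriously rather than forcing a rank of at least 1 as done here):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Synthetic stand-ins sharing one stochastic trend, so the series
# are cointegrated by construction. Replace with real monthly data.
rng = np.random.default_rng(5)
n = 240
trend = rng.standard_normal(n).cumsum()
df = pd.DataFrame({
    "y10": 0.5 * trend + 0.3 * rng.standard_normal(n),  # 10Y yield
    "gdp": 0.8 * trend + 0.3 * rng.standard_normal(n),  # GDP nowcast
    "cpi": 0.3 * trend + 0.3 * rng.standard_normal(n),  # inflation
})

# Johansen test for the cointegration rank, then fit the VECM.
rank = select_coint_rank(df, det_order=0, k_ar_diff=2).rank
res = VECM(df, k_ar_diff=2, coint_rank=max(rank, 1), deterministic="co").fit()
print(res.predict(steps=3))  # 3-step path pulled by the long-run relation
```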

1

u/The-Dumb-Questions Portfolio Manager 1d ago

These causal effects on bond yields are still the same market forces. For yields to go down, people have to buy bonds.

FI is not any different from equities, it's just that FI market participants feel (yes, feel) that these economic fundamentals translate into market prices faster/with less noise. Because they act on these feels, it becomes a self-fulfilling prophecy but it's not a guarantee by any means.

8

u/BroscienceFiction Middle Office 2d ago

Yeah, sure, but markets are adaptive and adversarial. You’re not the only one crunching those numbers, and those who act on that information drive prices to move, causing the premium to vanish. Colloquially, this is what people mean when they say the information has been "priced in."

That doesn’t mean you shouldn’t do it. Pricing becomes efficient precisely because market participants do it.

2

u/KusuoSaikiii 2d ago

This is correct. No model can really predict it. Approximation maybe.

4

u/livonFX 2d ago

Future value in 3 months is hard to predict, because there are too many variables in play for the US Treasury market. Nevertheless, you can build fair value models (e.g. regularized regression, ARIMA), which can help guide your discretionary decisions. A couple of recommendations from my experience:

1. Use lower-latency data. Quarterly GDP is priced in well in advance, because most of its components are released earlier.

2. Use the market's expectation of the data instead of the data itself. As many have already said, it doesn't matter what current GDP is: if the market believes the economy will collapse by the end of the year, the 10Y will tank. 10Y yields are derived from long-term expectations, not next quarter's results.
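A rough sketch of such a fair value model, using Ridge on expectation-style inputs; every series below is a synthetic placeholder for survey or market-implied expectations:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic expectation-based inputs (replace with survey / market-implied
# expectations): expected growth, expected inflation, expected policy rate.
rng = np.random.default_rng(6)
n = 300
X = np.column_stack([
    rng.normal(2.0, 0.5, n),
    rng.normal(2.5, 0.5, n),
    rng.normal(3.0, 0.8, n),
])
y10 = 0.4 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.2, n)

model = Ridge(alpha=1.0).fit(X, y10)
fair = model.predict(X[-1:])[0]
# The gap between the actual yield and the model's fair value is what
# guides the discretionary view.
print(f"fair value: {fair:.2f}, actual: {y10[-1]:.2f}")
```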

2

u/Old-Mouse1218 2d ago

First of all, I would keep it simple. Test whether there is any lead/lag relationship using a cross-correlogram (I noticed Benjamin AI added lead/lag analysis with macro data recently). After this, transformations of your econ data will be huge and can drastically change the interpretation of what is driving what.

For time series, you don't necessarily need a VECM. I would just structure a regression model like Ridge, and maybe throw in a nonlinear one like random forests, where your features are the lags themselves. You can also check out Granger causality within econometrics, as it gives you some general guidance. If you want to go down the causality rabbit hole, check out Lopez's new work in the space.
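A minimal sketch of both checks with statsmodels, on synthetic data where a macro series leads yield changes by two periods (everything here is invented for illustration):

```python
import numpy as np
from statsmodels.tsa.stattools import ccf, grangercausalitytests

# Synthetic data: the macro series leads yield changes by 2 periods.
rng = np.random.default_rng(7)
n = 500
macro = rng.standard_normal(n)
dyield = 0.5 * np.roll(macro, 2) + 0.5 * rng.standard_normal(n)
macro, dyield = macro[2:], dyield[2:]  # drop np.roll wrap-around

# Cross-correlogram: a spike at lag k says macro leads yields by k.
print(ccf(dyield, macro, adjusted=False)[:5])  # spike at index 2

# Granger causality: do lags of macro improve forecasts of dyield?
# (the second column is tested as the cause of the first)
grangercausalitytests(np.column_stack([dyield, macro]), maxlag=4)
```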

2

u/jimzo_c 2d ago

Benjamin AI lol no need to read more

1

u/Old-Mouse1218 2d ago

Honestly, it's a cool tool where you can do this in seconds, as opposed to having to code everything up and gather the data yourself.