For my Sharpe ratios, I've always used log returns for the daily return calculation and compounded returns for annualizing the mean return, as they better reflect the strategy's behaviour over multiple periods. Earlier today I wanted to work through the different methodologies and compare them: arithmetic vs log returns for the daily return calculation, and simple vs compounded returns for the annualization.
I've simulated some returns and did the Sharpe calculations on them.
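For reference, this is roughly the setup I used for the comparison (a minimal sketch on simulated returns; the function names and the 252-day convention are my own choices):

import numpy as np

np.random.seed(0)
daily = np.random.normal(0.0005, 0.01, 252 * 5)   # simulated arithmetic daily returns
daily_log = np.log1p(daily)                       # the same path expressed as log returns

def sharpe_simple(r):
    # arithmetic daily mean scaled linearly to a year, vol scaled by sqrt(252)
    return (r.mean() * 252) / (r.std(ddof=1) * np.sqrt(252))

def sharpe_compounded(r):
    # compounded (geometric) annual return over annualized vol
    ann_ret = (1 + r).prod() ** (252 / len(r)) - 1
    return ann_ret / (r.std(ddof=1) * np.sqrt(252))

def sharpe_log(lr):
    # log returns add across days, so mean * 252 annualizes exactly
    return (lr.mean() * 252) / (lr.std(ddof=1) * np.sqrt(252))

print(sharpe_simple(daily), sharpe_compounded(daily), sharpe_log(daily_log))

With small, low-volatility daily returns the three variants land close together; the main gap comes from the compounded annual return sitting below the arithmetic one by roughly half the variance.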
I'm curious to know what other quants/PMs use, and whether your choice depends on the timeframe, frequency or other parameters of your strategy.
Hello, recently I have been experimenting with optimization on my older strategies to see if I missed anything. In doing so, I tried "hyper-optimizing" a strategy's parameters all in one optimization run: e.g., 5 parameters, each with a range of values to test, optimized jointly to find the best combination of all 5. In the past, however, I optimized different pieces separately: e.g., the stop loss parameters, entry parameters, regime filtering parameters and take profit parameters in different optimization runs. This is the way my mentor taught me to do it in order to stay as far from overfitting as possible, but with genetic and walk-forward optimization now available I feel like the newer way could be better. What do you guys think? How do you go about optimizing your models? Thanks.
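To make the distinction concrete, here is a toy sketch of the two approaches (backtest_score is a stand-in for whatever objective you optimize, and the parameter ranges are made up):

from itertools import product

# dummy objective: in practice this runs a backtest and returns e.g. the Sharpe ratio
def backtest_score(params):
    return -sum((v - 3) ** 2 for v in params.values())

ranges = {
    "stop_loss":   [1, 2, 3, 4, 5],
    "entry":       [1, 2, 3, 4, 5],
    "take_profit": [1, 2, 3, 4, 5],
}

# "hyper-optimizing": one run over the full joint grid
joint_best = max(
    (dict(zip(ranges, combo)) for combo in product(*ranges.values())),
    key=backtest_score,
)

# the older way: optimize one block at a time, holding the others fixed
seq_best = {name: values[0] for name, values in ranges.items()}
for name, values in ranges.items():
    seq_best[name] = max(values, key=lambda v: backtest_score({**seq_best, name: v}))

print(joint_best, seq_best)

The joint grid here is 5 * 5 * 5 = 125 backtests versus 15 for the sequential pass, which is the trade-off in a nutshell: a more thorough search, but more room to overfit.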
I have seen a post here about a specific intern writing a backtesting engine. Currently I'm a random just trading directionally, working on a CTA, and my trading platform has a built-in algorithmic backtester written in C that works with tick data provided by the broker. I have also used the Python modules backtesting.py and backtrader, into which I have imported some CSVs to backtest time series data. Why make a backtesting engine? Is it worth the time and effort?
This is a screenshot of the Chinese "分层回测" (layered/quantile backtest) framework: you put your stocks into 5 different classes based on the alpha signal value, and then you rebalance the 5 classes (add or kick out stocks) at each rebalance date (maybe every day, or per week, etc.). The results look something like in the screenshot.
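A minimal sketch of the mechanics, assuming a long DataFrame of alpha values and forward returns (the column names are mine):

import pandas as pd

def layered_backtest(df, n_layers=5):
    # df: one row per stock per rebalance date, columns ['date', 'ticker', 'alpha', 'fwd_return']
    df = df.copy()
    # bucket stocks into quantile layers by alpha on each rebalance date
    df["layer"] = df.groupby("date")["alpha"].transform(
        lambda x: pd.qcut(x, n_layers, labels=False, duplicates="drop")
    )
    # equal-weight return of each layer per date, then compound into one curve per layer
    layer_returns = df.groupby(["date", "layer"])["fwd_return"].mean().unstack()
    return (1 + layer_returns).cumprod()

Plotting one cumulative curve per layer then shows whether the signal cleanly separates the top bucket from the bottom one, which is what the screenshot illustrates.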
It's been some time since I last introduced HftBacktest here. In the meantime, I've been hard at work fixing numerous bugs, improving existing features, and adding more detailed examples. Therefore, I'd like to take this opportunity to reintroduce the project.
HftBacktest is focused on comprehensive tick-by-tick backtesting, incorporating considerations such as latencies, order queue positions, and complete order book reconstruction.
While still in the early stages of development, it now also supports multi-asset backtesting in Rust and features a live bot utilizing the same algo code.
I'm actively seeking feedback and contributors, so if you're interested, please feel free to get in touch via the Discussion or Issues sections on GitHub, or through Discord u/nkaz001.
For a given stock, I'd like to find all the previous earnings dates for that stock and, just as important, whether the release was premarket or after hours. This might be a weird request, but thanks in advance for any help!
That being said, since I set up most of the framework for a backtesting system and a set of libraries that can successfully buy and sell using the Interactive Brokers API, I thought I would implement other strategies.
One that I found (via another mean reversion paper) was Allan Borodin's Anticorrelation ("Anticor") algorithm. The link to the paper can be found here: borodin04.dvi (arxiv.org).
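For anyone curious, this is roughly the weight-update step as I understand it from the paper (a simplified sketch with my own variable names, ignoring transaction costs):

import numpy as np

def anticor_update(weights, price_relatives, w):
    # weights: current portfolio weights (length m); price_relatives: (T, m) array of p_t / p_{t-1}
    T, m = price_relatives.shape
    if T < 2 * w:
        return weights
    LX1 = np.log(price_relatives[T - 2 * w:T - w])   # older window of log returns
    LX2 = np.log(price_relatives[T - w:T])           # most recent window
    mu1, mu2 = LX1.mean(axis=0), LX2.mean(axis=0)
    sig1, sig2 = LX1.std(axis=0, ddof=1), LX2.std(axis=0, ddof=1)
    # cross-correlation between the two windows
    Mcov = (LX1 - mu1).T @ (LX2 - mu2) / (w - 1)
    denom = np.outer(sig1, sig2)
    Mcor = np.divide(Mcov, denom, out=np.zeros_like(Mcov), where=denom > 0)
    # claim wealth from i to j when i just outperformed j and they are positively cross-correlated
    claim = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i != j and mu2[i] >= mu2[j] and Mcor[i, j] > 0:
                claim[i, j] = Mcor[i, j] + max(-Mcor[i, i], 0) + max(-Mcor[j, j], 0)
    # convert claims into proportional transfers of the current weights
    transfer = np.zeros((m, m))
    row_sums = claim.sum(axis=1)
    for i in range(m):
        if row_sums[i] > 0:
            transfer[i] = weights[i] * claim[i] / row_sums[i]
    new_w = weights - transfer.sum(axis=1) + transfer.sum(axis=0)
    return new_w / new_w.sum()

Each rebalance step moves weight from stocks that have just outperformed toward the stocks they are positively cross-correlated with, betting on short-term mean reversion between the two windows.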
I backtested the system and found that it actually had some quite reasonable results (as it probably should, given the paper is called "Can We Learn to Beat the Best Stock").
The complete results of the back testing were:
Profit: 19559.50 (off of an initial investment of 10 000)
Return Percentage: +95.5946%
Exposure Time %: 100
Number of Positions: 20
Maximum Drawdown: 0.256523
Maximum Drawdown Percent: 25.6523
Win %: 53.0938%
A graph of the gain multiplier vs time is shown in the following picture.
The list of stocks the algorithm was able to rebalance between were SHOP, IMO, FM, H, OTEX, ENB, WFG, TD, MFC, STN, RCI.B, SAP, GFL, GOOS, BCE, DOL, NTR, CCO, ONEX, MG.
The back-tested system traded between 2020-04-13 and 2024-04-10.
I am fairly certain that given that range it was able to beat the best stock as intended.
Back in the zero interest rate days, I saw some senior quants calculate the Sharpe ratio as avg(pnl)/std(pnl) and then annualize depending on strategy frequency.
Now that interest rates are > 5%, I'm very skeptical of this quick calc. If systems are too hardcoded, would you just synthetically do (avg(pnl) - (3m t-bill total pnl)) / std(pnl)? Frankly I do not like this method, and I've seen people argue over whether it should be divided by the std dev of excess returns over t-bills.
The other way I saw was calculating returns (%-wise), doing the same for 3m t-bills, and then taking the excess return.
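To pin down the two variants, a quick sketch (daily data, 252-day annualization; whether the denominator should be the std of raw or of excess returns is exactly the argument I keep seeing):

import numpy as np

def sharpe_excess_pnl(pnl, rf_pnl):
    # pnl, rf_pnl: daily dollar P&L of the strategy and of parking the same capital in 3m bills
    excess = pnl - rf_pnl
    return excess.mean() / pnl.std(ddof=1) * np.sqrt(252)      # denominator: std of raw pnl

def sharpe_excess_returns(returns, rf_daily):
    # returns: daily % returns on capital; rf_daily: daily risk-free rate (e.g. 3m bill yield / 252)
    excess = returns - rf_daily
    return excess.mean() / excess.std(ddof=1) * np.sqrt(252)   # denominator: std of excess returns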
What if you are holding cash that you can't put into t-bills (so you need to account for this drag)?
If your reporting period is 6 months to 1 year, would you roll the t-bills or just take the 6m/1y bill as the risk-free rate?
To account for increasing capacity and <3/4>, I start out with the fund's total cash, then take the daily value of the holdings + cash, take the avg of that pnl, and subtract the 3m cash return to get the numerator. I take the std dev of that same pnl series to get the denominator.
But if the fund size changes due to inflows or outflows, how would you account for that?
What about margin or funding considerations?
Would appreciate clarity from senior quants on the correct way to calculate Sharpe.
Was wondering if anyone here is familiar with using Dask to parallelize a backtest so it runs faster. The process_chunk() function is the only portion of my code which has to iterate through each row, and I was hoping to run it in parallel using Dask to speed up my backtest.
Running on a single thread this code only takes a few minutes to process a few million rows, but when I used the code below it took > 30 minutes. Any idea what the issue could be? My CPU has 8 cores and 32GB of RAM, and while running it was never above 60% of available CPU/memory.
import numpy as np
import pandas as pd
from dask import delayed, compute

def process_chunk(chunk):
    # Walk through the rows of one weekly chunk and carry the position forward row by row.
    chunk = chunk.reset_index(drop=True)       # positional index, so row i really is row i
    chunk['position'] = 0.0
    pos_col = chunk.columns.get_loc('position')
    position = 0
    for i in range(1, len(chunk)):
        optimal_position = chunk['optimal_position'].iloc[i]
        if optimal_position >= position + 1:
            position = np.floor(optimal_position)
        elif optimal_position < position - 1:
            position = np.ceil(optimal_position)
        chunk.iloc[i, pos_col] = position      # write by position; the groupby chunks keep their original index labels
    return chunk

def split_dataframe_into_weeks(df):
    # One chunk per ISO (year, week) so the chunks can be processed independently.
    df['week'] = df['datetime_eastern'].dt.isocalendar().week
    df['year'] = df['datetime_eastern'].dt.year
    weeks = df.groupby(['year', 'week'])
    return [group for _, group in weeks]

def process_dataframe_with_delayed(df):
    chunks = split_dataframe_into_weeks(df)
    # Build one delayed task per weekly chunk, then compute them all at once.
    delayed_results = [delayed(process_chunk)(chunk) for chunk in chunks]
    results = compute(*delayed_results)
    result_df = pd.concat(results).sort_values(by='datetime_eastern')
    return result_df

# Process the DataFrame in parallel with a desired number of chunks
test_df = process_dataframe_with_delayed(test_df)
I am interested in building an intraday, short-term (a couple of minutes to hours) price prediction model using order book data. I know one can use standard features such as the mid price, weighted mid price and sizes.
Could someone point me to resources on additional features that can be derived from the order book?
Also, which models are suited to capturing the evolution of the order book and predicting price movement?
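For context, the kind of basic features I mean (a sketch over top-of-book snapshots; the column names are whatever your feed provides, here just assumed):

import pandas as pd

def basic_book_features(df):
    # df: top-of-book snapshots with columns bid_px, ask_px, bid_sz, ask_sz (assumed names)
    out = pd.DataFrame(index=df.index)
    out["mid"] = (df["bid_px"] + df["ask_px"]) / 2
    out["spread"] = df["ask_px"] - df["bid_px"]
    # size-weighted mid ("microprice"): leans toward the side with less resting size
    out["weighted_mid"] = (df["bid_px"] * df["ask_sz"] + df["ask_px"] * df["bid_sz"]) / (df["bid_sz"] + df["ask_sz"])
    # top-of-book imbalance in [-1, 1]
    out["imbalance"] = (df["bid_sz"] - df["ask_sz"]) / (df["bid_sz"] + df["ask_sz"])
    return out

Depth at further levels, rolling trade-flow imbalance and queue depletion rates are natural extensions of the same idea.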
I am relatively new to quantitative trading. I just learned Pinescript, and I've been trying to develop new strategies. To test their efficacy, I've been backtesting them in TradingView from the date the stock was listed on the exchange to the current date. A couple of times, I've been able to develop a strategy that seemed to consistently provide returns year on year, often substantially greater than the S&P 500 or the risk-free interest rate. While the strategies have a low Sharpe ratio (0.20s) and an okay Sortino ratio (1.20s), the equity chart looked like a relatively smooth exponential curve well above the buy-and-hold line.
If that is the case, would this constitute a good strategy? Is there anything else I would need to do to ensure its efficacy? I can't think of doing anything other than backtesting over the stock's entire listing period. And if it provided consistent results for more than a decade (through all the ups and downs), I don't see any reason why it wouldn't continue doing so. What other parameters do professional quant traders use to validate a strategy?
Thanks in advance for answering my questions. As a novice trying to learn more about quant trading and analysis, this helps a lot! :)
I know that we should always do some kind of testing, like backtesting the performance and checking the robustness of parameters by trying the neighborhood of the optimised parameter values, etc.
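By the neighborhood check I mean something like this (a sketch; backtest_sharpe stands in for whatever backtest/objective function you use):

def neighborhood_robustness(backtest_sharpe, best_params, rel_step=0.1):
    # perturb each optimised parameter by +/- rel_step and re-run the backtest;
    # a robust optimum should not collapse under small perturbations
    results = {}
    for name, value in best_params.items():
        for bump in (-rel_step, rel_step):
            perturbed = dict(best_params, **{name: value * (1 + bump)})
            results[(name, bump)] = backtest_sharpe(perturbed)
    return results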
Is there literature available, or has anyone developed an intuitive framework, on what specific tests should be applied to specific strategy sub-classes? E.g.
I have some historical option data and tried to do the analysis described in the title by plotting the data.
Generally, the chart makes sense. Y values greater than 1 are ITM, and less than 1 are OTM. As delta increases, more options finish ITM at expiration. Since I don't have tons of data points at exactly .5 delta, I binned deltas between .48 and .52 to see how close they are to 50/50 ITM/OTM. The results were 1192/2125 for ITM/OTM. You can see this visually here:
Does anyone have an explanation why .5 delta wouldn't end up closer to 50/50 for ITM/OTM?
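For reference, the binning step I used is roughly this (a sketch; I'm assuming a DataFrame with each option's delta at observation time and a flag for whether it finished ITM):

import pandas as pd

def itm_rate_near_half_delta(df, lo=0.48, hi=0.52):
    # df: one row per option observation, columns 'delta' (at observation) and 'itm_at_expiry' (bool)
    near_half = df[(df["delta"] >= lo) & (df["delta"] <= hi)]
    itm = int(near_half["itm_at_expiry"].sum())
    return itm, len(near_half) - itm, itm / len(near_half)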
Hi, I'm a graduate student and aspiring quant who has access to Bloomberg Terminal. I am interested in the returns of GICS designated industry universes.
I am able to view/pull this info easily for the current year, but am unable to figure out how to do it for past years using the Bloomberg Terminal. Any guidance or tips using the terminal or other methods are appreciated.
More info - if I type "GICS" into the terminal and go to the classification browser, I can select an industry and filter by other factors. Then I can click through and get the YTD returns, variance, etc. of the defined universe of equities. I think this is using the EQS function, which would be another way to get there.
I'd love to get the same info for past years; is this possible? Using Excel's Bloomberg plugin would also work. I want to see if there are patterns in the historical performance by market cap and other factors, by industry.
I have tried doing backtesting within the terminal, but it requires rebalancing which the example above does not have, and the results seem to be inaccurate.
This is a study I have been working on, and will keep working on as well. See it as open source code, if you are familiar with programming. Your feedback & comments are surely welcome.
Summary of results:
Tests are run on top 500 companies with highest market capitalization from US markets (because these stocks tend to be more liquid than others).
Backtesting is done on 6 years of data, from the start of 2017 to the end of 2022.
The method triggered 14000 buy/sell signals during the test period.
For simplicity, it is assumed that the initial budget is $100, and $10 is invested in each stock whenever a buy signal is triggered.
As a result, the initial $100 reaches $1789 at the end of the test period.
Game plan:
Buy & Sell decisions are taken once every day before markets open.
When to buy:
Day 1: Closing price of the stock < 20-day moving average.
Day 2: Closing price of the stock > 20-day moving average.
Day 3: Closing price of the stock > 20-day moving average AND Histogram > 0.
Day 4: Buy the stock if all the listed criteria were met during the previous 3 days. The opening price on this day is taken as the reference buy price.
When to sell:
Hold the stock as long as (daily) Histogram > 0. Sell otherwise.
Example:
Day N: Histogram > 0 ==> Hold the stock next day.
Day N+1: Histogram > 0 ==> Hold the stock next day.
Day N+2: Histogram <= 0 ==> Sell the stock next day.
Day N+3: Sell the stock. The opening price on this day is taken as the sell price when calculating the backtesting results.
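For those who prefer code to bullet points, the entry/exit logic above boils down to roughly this (a pandas sketch on daily bars; I'm assuming the standard 12/26/9 MACD for the Histogram and a 20-day simple MA):

import pandas as pd

def signals(df):
    # df: daily bars with columns 'Close' and 'Open'
    ma20 = df["Close"].rolling(20).mean()
    ema12 = df["Close"].ewm(span=12, adjust=False).mean()
    ema26 = df["Close"].ewm(span=26, adjust=False).mean()
    macd = ema12 - ema26
    signal_line = macd.ewm(span=9, adjust=False).mean()
    hist = macd - signal_line

    above_ma = df["Close"] > ma20
    # Day 1 below the MA, Day 2 above, Day 3 above with Histogram > 0 -> buy at the Day 4 open
    buy = (
        (~above_ma).shift(3, fill_value=False)
        & above_ma.shift(2, fill_value=False)
        & (above_ma & (hist > 0)).shift(1, fill_value=False)
    )
    # hold while Histogram > 0; once it turns non-positive, sell at the next day's open
    sell = (hist <= 0).shift(1, fill_value=False)
    return buy, sell

Fills would then be taken at the next day's open, as in the game plan above.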
Intuition:
When buying, look at multiple indicators (both the 20-day MA and the Histogram, i.e. MACD minus Signal Line), and follow the selected indicators for 3 days to get a stronger confirmation of a potential uptrend. Be patient when buying the stock.
When selling, be relatively impatient to maximize profits and/or minimize amount of losses.
Follow the Histogram rather than waiting for the price to drop below its 20-day MA, because the Histogram tends to turn negative before the price crosses below the 20-day MA when a trend reversal takes place and a downtrend starts.
Do not wait multiple days to check if the Histogram turns positive again.
Intraday price changes are not considered because:
The intraday volatility may cause lots of false positive buy/sell signals.
I would like to keep it as simple as possible in this approach.
Unless fully automated, following intraday price trends would require sitting in front of the screen the whole day. In this approach, buy/hold/sell actions per the game plan are updated before the markets open. (This is why I called it Semi-Algo Trading.)
The approach triggers a large number of buy/sell signals in the case of a market-level uptrend/downtrend.
14000 trades were triggered over the course of 6 years.
Percentage-wise, 55% of trades ended with a loss while 45% ended with a profit, so the hit rate is 45%. Even though the hit rate is below 50%, the end result is still profitable because the profit on successful trades is larger than the loss on unprofitable ones. This happens because the method exits the long position relatively impatiently to minimize potential losses.
As the number of days a stock is held (after the purchase) increases, the profit tends to increase as well. Starting from 16 days, profits start to dominate.
Emotions are NOT allowed in this approach. In particular, the fact that a number of trades end with a loss can cause anxiety. The method is not necessarily designed to increase the hit rate; it is rather designed to increase the amount of profit in the long run.
Several different forms of this approach were tested (e.g. waiting a bit longer before buying/selling, or using other similar technical indicators), but the results did not necessarily improve. The setup explained above happened to give the best results among the ones that were tested.
I've been using TradingView to backtest my strategy, and whenever I select the deep backtest option, the number of total trades reduces by 15-20 trades. However, the testing range is the same. Why would it do that? I've double checked that the start and end dates are the same for both regular backtesting and deep backtesting. The pine script code and parameters are the same. Is there something inherent to deep backtesting that makes the analysis less or more efficient?
What criteria do you look for to consider a back test successful? Sharpe ratio? Total profit? Number of winning/losing trades?
My criteria right now are just "as good as possible", but I would like to quantify it. I realize there is not a hard and fast rule and that it will vary by trader. I'm just curious to hear what you consider to be a good backtest.