Let's talk about insider trading. In an imaginary world where you know with certainty that the price of a stock will change on a given date, you can place a huge investment for enormous gains.

Why wouldn't you? There is no risk of the trade going bad because you have complete certainty in your buy (or sell) signal.

In the real world, an obvious downside is that this is illegal (unless you're in Congress), but the point remains - the size of your investment should scale with your confidence.

What do you do when you're in a similar spot while trading?

If you have a good system in place, you can increase (or decrease) your position size based on your level of confidence in the position, and do so without taking on a stupid amount of risk (like 99% of retail traders out there).

In fact, scaling your position size based on your level of conviction is a logical way to increase your returns - assuming you've done your backtesting homework, of course.

If the odds are in your favor, then you ought to increase your bet size accordingly. Of course, you shouldn't over bet - you could still be wrong no matter how high your conviction is - which would lead to a devastating loss.

As a systematic trader, you need to be sure your trading bot has well-defined rules.

So what kind of rules can we use?

How Strong is your Signal?

In a previous article, we explored adding multiple signals to our Starter System. By weighting them appropriately, we were able to combine them into a single buy/sell signal.

Now, we're going to look at each of these components and determine how strong they are before combining them.

We'll call the signal strength our forecast for simplicity.

Including a forecast allows us to adjust our position based on how confident we are in the signal we're getting. Moreover, we'll also use it to adjust our position as the signal changes over time.

If risk is increasing, we should take some money off the table. Or if a trend is flattening out and signals are weakening, we can start getting out of our position before it completely reverses. Either of these can help cut down on your downside.

On the other hand, you might start with a small position due to a weak signal. But as more of your rules get on board and start screaming "Buy!" then your system will build a larger position.

This setup gives your system flexibility to adjust as the markets move through regimes.

Adding a Forecast

Let's say we have a variety of Moving Average Crossover (MAC), Mean Breakout (MBO) and carry signals in our system. How do we determine the forecast for these?

The MAC's strength is determined by taking the difference between the fast moving average and the slow moving average and dividing this by our instrument risk. The larger the gap and lower the risk, the stronger your forecast.

In pseudo-code, we have:

F_mac[t] = (SMA(fast, t) - SMA(slow, t)) / STD(t)
Or, with mathematical notation:

$$SMA_t^N = \frac{1}{N}\sum_{i=0}^{N-1} P_{t-i}$$

$$F_{MAC}^{f/s} = \frac{SMA_t^{fast} - SMA_t^{slow}}{\sigma}$$

Scaling the MAC by the risk is important for a few reasons.

For one, scaling the value helps standardize the signal when we're comparing it to the other rules in our system. Another reason is the wide range of prices we can see. If you have two instruments, both with a 1% difference between your moving averages, but the first trades at $1,000 and the second at $10, then the unadjusted forecast signal on the first is $10 while it's $0.10 on the second. Scaling by risk keeps things in balance so you aren't heavily biased towards the more expensive instruments just because they're expensive!
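To make that concrete, here's a quick sketch with made-up numbers (the prices, gaps, and 16% volatility are all hypothetical):

```python
# Two hypothetical instruments, each with a 1% gap between the fast
# and slow moving averages, but very different price levels
price_a, price_b = 1000.0, 10.0
gap_a, gap_b = 0.01 * price_a, 0.01 * price_b  # $10 vs $0.10

# Unadjusted, the expensive instrument dominates
print(gap_a, gap_b)  # 10.0 0.1

# Instrument risk in price units: annualized volatility times price
# (assume 16% annualized volatility for both)
risk_a, risk_b = 0.16 * price_a, 0.16 * price_b

# Risk-adjusted, the forecasts are identical
print(round(gap_a / risk_a, 4), round(gap_b / risk_b, 4))  # 0.0625 0.0625
```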

For the MBO and carry rules, things are even easier because these don't require any additional scaling. We can just use the values as they are (and you can find the details on them here so I don't have to paste chunks of old articles below).
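For reference, here's what the raw MBO signal looks like on its own. This mirrors the `_calcMBO` method shown later in the full class; the price series is synthetic:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic random-walk prices standing in for real data
close = pd.Series(50 + rng.normal(0, 1, 500).cumsum())

# Mean Breakout: where does the price sit relative to its recent range?
periods = 20
ul = close.rolling(periods).max()     # upper limit of the range
ll = close.rolling(periods).min()     # lower limit of the range
mean = close.rolling(periods).mean()
mbo = (close - mean) / (ul - ll)

# Dividing by the range already normalizes the signal, so it stays in (-1, 1)
print(float(mbo.abs().max()) < 1.0)  # True
```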

Forecast Scaling Factors

There's one more step (actually, I lied, two more steps) before we're ready to work with these forecast values.

  1. Scale appropriately
  2. Set min and max values

Scale appropriately? Didn't we already do that when we adjusted the risk for the MAC?

Yes and no.

That was one very important type of scaling. There's a second adjustment that needs to be made for each of these values: they need to be scaled to keep their average value around 1.

This ensures that your different forecasts are all comparable to one another. If you have one forecast value that runs up to 40 and another at 2, then the second won't have much of an impact on your system.

Even within the same rule you can run into issues because the variations operate at different time scales.

Take a look at the forecasts for Tesla below:


These are all risk-adjusted, but don't have that second scaling factor. The long-term rules can really dominate simply because the slower the SMA, the larger the lag and the larger the gaps can grow. You can see that the magnitude of changes is highly influenced by the time scale.

To get around this, Carver introduces scaling factors for each forecast variation.* Below, we show the scaling factors used in our system, taken from chapter 10 of Leveraged Trading:


*If you get the book, you'll see Carver's values are all 10X higher. He multiplies everything by an arbitrary factor of 10, then divides by 10 at the end because he likes 10. Well, I like 1. It's just as arbitrary and makes no difference, except now I can drop some zeros and don't have to divide by 10 in my code.

I'll be honest. I don't know exactly where Carver got these numbers from. He says it's from backtesting 37 different instruments and a whole bunch of synthetic data and should work for any market using these rules. That's about all the info I've got. I could try to reverse engineer it myself, but I'd rather spend my time elsewhere. So let's accept these values as Gospel and move on (for now).
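That said, here's a quick stab at the idea just to show what such a procedure might look like. This is my guess, not Carver's published method: pick the factor that makes the average absolute value of the raw forecast equal to 1. The price series below is synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
# Synthetic random-walk prices as a stand-in for a real backtest
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000))))

# Risk-adjusted MAC(16, 64) forecast, as in the MAC rule above
std = prices.pct_change().rolling(252).std() * np.sqrt(252)
risk_units = std * prices
raw = (prices.rolling(16).mean() - prices.rolling(64).mean()) / risk_units

# Choose the factor that sets the average absolute forecast to 1
scale = 1 / raw.abs().mean()
print(abs((raw * scale).abs().mean() - 1) < 1e-9)  # True
```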

Our code changes slightly for each of these rules. Here's the update for the MAC:

def _calcMAC(self, fast, slow, scale):
    name = f'MAC{self.n_sigs}'
    if f'SMA{fast}' not in self.data.columns:
        self.data[f'SMA{fast}'] = self.data['Close'].rolling(fast).mean()
    if f'SMA{slow}' not in self.data.columns:
        self.data[f'SMA{slow}'] = self.data['Close'].rolling(slow).mean()
    sig = self.data[f'SMA{fast}'] - self.data[f'SMA{slow}']
    sig = sig.ffill().fillna(0) / self.data['risk_units'] * scale
    self._clipForecast(sig, name)

Now, our final step is to constrain our forecast by our min/max values (last line in the code above). We're going to use +/- 2 for this.

We'll set up a new method to handle this:

def _clipForecast(self, signal, name):
    sig = np.where(signal > self.max_forecast, self.max_forecast, signal)
    sig = np.where(sig < self.min_forecast, self.min_forecast, sig)
    self.data[name] = sig

MBO and carry follow the exact same pattern: just multiply by the scaling factor and clip it if it's outside of +/- 2.
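As an aside, the two np.where calls in _clipForecast are equivalent to a single np.clip. A quick sanity check with a few made-up forecast values:

```python
import numpy as np

min_forecast, max_forecast = -2, 2

# A few scaled forecast values, some beyond the caps (made-up numbers)
raw = np.array([-1.17, -2.5, 0.4, 3.1])

# The pair of np.where calls in _clipForecast amounts to a single np.clip
clipped = np.clip(raw, min_forecast, max_forecast)
print(clipped.tolist())  # [-1.17, -2.0, 0.4, 2.0]
```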

Finally, we can calculate our forecast. For this we take the forecast of each variant and multiply it by the weights we used in our previous article.


Forecast Example

Let's say we have all the weights and signals in the image above, and we've got these values to work with today:


Let's calculate the signal from the forecast!

First, we'll do an example MAC value.

$$F_{MAC}^{8/32} = \bigg(\frac{9.89 - 9.97}{0.44}\bigg)(8.384) = -1.52$$

This is within +/- 2, so we leave it as is. The other MAC values are outside of the range, so they get clipped to -2:

For the MBO values, we just multiply them by their appropriate scaling factor.

$$F_{MBO}^{20} = -0.37 \times 3.16 = -1.17$$

This, and all the MBO values, are greater than -2, so they don't get clipped.


The carry value just gets multiplied by its scaling factor of 3.

$$F_{CAR} = 0.64 \times 3 = 1.92$$
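Putting the three example calculations above into code (the clip bounds are our +/- 2 limits):

```python
import numpy as np

def forecast(raw, scale, lo=-2, hi=2):
    # Scale the raw signal, then clip it to the min/max forecast
    return float(np.clip(raw * scale, lo, hi))

f_mac = forecast((9.89 - 9.97) / 0.44, 8.384)
f_mbo = forecast(-0.37, 3.16)
f_car = forecast(0.64, 3)
print(round(f_mac, 2), round(f_mbo, 2), round(f_car, 2))  # -1.52 -1.17 1.92
```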

Finally, we get our signal by multiplying and summing the values of our rules and the weights.


Our weighted signal comes out to be 0.116, which means we're mildly long on this one because we're just over 0.

Bet on Your Signal

We want to take a position based on the strength of our signal, so we scale our target exposure accordingly.

$$s^T = \frac{r^T C \gamma}{\sigma}$$

where \(s^T\) is our target exposure in dollars, \(r^T\) is our target risk (12% for our default system), \(C\) is our total capital, \(\sigma\) is our instrument risk, and \(\gamma\) is our signal.

To make this concrete, if we have $1,000 to invest with a target risk of 12%, our signal at 0.116, and instrument risk of 16%, then we'd be looking to have $87 invested at this time. Compare that with the non-scaled value of $750 (just exclude the signal) and you see how we reduce our bet size to reflect our lack of conviction in the trade.

Conversely, a high-signal trade (max of 2, remember?) would have us doubling our average exposure from $750 to $1,500! In other words, our system would be so confident in it that it would lever up in order to hit its target.
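Plugging the example numbers into the exposure formula:

```python
def target_exposure(target_risk, capital, signal, instrument_risk):
    # s_T = r_T * C * gamma / sigma
    return target_risk * capital * signal / instrument_risk

# $1,000 of capital, 12% target risk, 16% instrument risk
print(round(target_exposure(0.12, 1000, 0.116, 0.16), 2))  # 87.0
print(round(target_exposure(0.12, 1000, 1.0, 0.16), 2))    # 750.0, average signal
print(round(target_exposure(0.12, 1000, 2.0, 0.16), 2))    # 1500.0, max conviction
```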

What to do when the Winds Change?

What do you do when your forecast changes or your risk decreases?

In the previous systems, we did nothing - we were long or short with a static position. That makes trading simple, but not necessarily effective.

Now, we'll use our forecast to dynamically re-adjust our position size by comparing our current exposure with the target exposure.

Let's take the example from above: we have an initial target exposure of $87, and our stock is trading at $7 per share, so we buy 12 shares for an actual exposure of $84. Because of rounding, we're a little below our target, which is fine.

Some time passes and our new price is $8.50 and our forecast increases from a paltry 0.116 to 1.5 as more signals jump on board and go long. With the quick increase in price, we also saw a rise in the risk to 20%. We can recalculate our target and current exposure:

$$s^T = \frac{0.12 \times (\$8.50 \times 12 + \$916) \times 1.5}{0.2} = \$916.20$$

$$s^C = \$8.50 \times 12 = \$102$$

We're way off our target! Our current exposure is only $102, roughly one-ninth of what we should have, so we'll set our system to buy the 95 shares we need to get up to our new target, reflecting the higher confidence we have in our position.

Position Adjustments

We do need to be careful to avoid over-trading. If we adjust our position too frequently, then we'll incur trading costs via transactions or the spread and erode our profits. Our exposure and target exposure change every day, so to prevent over-trading, we're going to look at our exposure drift. This is calculated as the difference between our target and current exposure, divided by our average exposure. If this is greater than a threshold (e.g. 10%), then we trade to bring our exposure back in line with the target.

What's our average exposure then?

Recall that our scaling factor was devised to set our signal to 1 on average. With that in mind, our average exposure is the exact same as our target exposure, except we set our forecast signal to 1. Easy.

$$s^{avg} = \frac{r^T C \gamma^{avg}}{\sigma}$$

Using the values in our example above, our average exposure comes out to $610.80.

The exposure drift is then:

$$\Delta s = \frac{s^T - s^C}{s^{avg}}$$

In our example case, our drift blew out to a huge 133%! In our actual system, we'd be rebalancing more frequently so that we shouldn't encounter such a drastic change - although it can happen in response to large price moves.
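Here are those numbers recomputed in a few lines (the $916 cash balance is what remains after the original 12-share purchase at $7):

```python
# We bought 12 shares at $7 out of $1,000, leaving $916 in cash;
# the price is now $8.50, the forecast is 1.5, and instrument risk is 20%
cash, shares, price = 916.0, 12, 8.50
target_risk, signal, risk = 0.12, 1.5, 0.20

capital = cash + shares * price                 # total capital
target = target_risk * capital * signal / risk  # target exposure
current = shares * price                        # current exposure
average = target_risk * capital / risk          # average exposure (signal = 1)
drift = (target - current) / average

print(round(target, 2), round(average, 2), round(drift, 2))  # 916.2 610.8 1.33
```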

Implementing the Forecast Algo

There aren't too many changes with the code versus the system developed in the last post in this series. We have a few new parameters, a slight modification to our signal calculation, and our rebalancing strategy via the exposure drift. The full code is available here:

class ForecastStarterSystem(ContinuousStarterSystem):
    '''
    Carver's Starter System without stop losses, multiple entry rules, and
    a forecast for position sizing and rebalancing.
    Adapted from Rob Carver's Leveraged Trading: https://amzn.to/3C1owYn
    ContinuousStarterSystem class defined here:
    '''
    def __init__(self, ticker: str, signals: dict, target_risk: float = 0.12,
                 starting_capital: float = 1000, margin_cost: float = 0.04,
                 short_cost: float = 0.001, interest_on_balance: float = 0.0,
                 start: str = '2000-01-01', end: str = '2020-12-31',
                 shorts: bool = True, weights: list = [],
                 max_forecast: float = 2, min_forecast: float = -2,
                 exposure_drift: float = 0.1,
                 *args, **kwargs):
        self.max_forecast = max_forecast
        self.min_forecast = min_forecast
        self.exposure_drift = exposure_drift
        super().__init__(
            ticker=ticker, signals=signals, target_risk=target_risk,
            margin_cost=margin_cost, short_cost=short_cost, start=start,
            end=end, interest_on_balance=interest_on_balance,
            starting_capital=starting_capital, shorts=shorts,
            weights=weights, *args, **kwargs)
    def _clipForecast(self, signal, name):
        sig = np.where(signal > self.max_forecast, self.max_forecast, signal)
        sig = np.where(sig < self.min_forecast, self.min_forecast, sig)
        self.data[name] = sig
    def _calcMAC(self, fast, slow, scale):
        name = f'MAC{self.n_sigs}'
        if f'SMA{fast}' not in self.data.columns:
            self.data[f'SMA{fast}'] = self.data['Close'].rolling(fast).mean()
        if f'SMA{slow}' not in self.data.columns:
            self.data[f'SMA{slow}'] = self.data['Close'].rolling(slow).mean()
        sig = self.data[f'SMA{fast}'] - self.data[f'SMA{slow}']
        sig = sig.ffill().fillna(0) / self.data['risk_units'] * scale
        self._clipForecast(sig, name)

    def _calcMBO(self, periods, scale):
        name = f'MBO{self.n_sigs}'
        ul = self.data['Close'].rolling(periods).max()
        ll = self.data['Close'].rolling(periods).min()
        mean = self.data['Close'].rolling(periods).mean()
        self.data[f'SPrice{periods}'] = (self.data['Close'] - mean) / (ul - ll) 
        sig = self.data[f'SPrice{periods}'].ffill().fillna(0) * scale
        self._clipForecast(sig, name)

    def _calcCarry(self, scale):
        name = f'Carry{self.n_sigs}'
        ttm_div = self.data['Dividends'].rolling(252).sum()
        div_yield = ttm_div / self.data['Close']
        net_long = div_yield - self.margin_cost
        # The short side uses the dividend yield (a rate), not dollar dividends
        net_short = self.interest_on_balance - self.short_cost - div_yield
        self.data['net_return'] = (net_long - net_short) / 2
        sig = self.data['net_return'] / self.data['STD'] * scale
        self._clipForecast(sig, name)

    def _calcSignals(self):
        self.data['STD'] = self.data['Close'].pct_change().rolling(252).std() \
            * np.sqrt(252)
        self.data['risk_units'] = self.data['STD'] * self.data['Close']
        self.n_sigs = 0
        for k, v in self.signals.items():
            if k == 'MAC':
                for v1 in v.values():
                    self._calcMAC(v1['fast'], v1['slow'], v1['scale'])
                    self.n_sigs += 1
            elif k == 'MBO':
                for v1 in v.values():
                    self._calcMBO(v1['N'], v1['scale'])
                    self.n_sigs += 1

            elif k == 'CAR':
                for v1 in v.values():
                    if v1['status']:
                        self._calcCarry(v1['scale'])
                        self.n_sigs += 1
    def _calcTotalSignal(self):
        self.data['signal'] = self.data.apply(lambda x:
            np.dot(x[self.signal_names].values, self.signal_weights),
            axis=1)

    def _sizePosition(self, capital, price, instrument_risk, signal):
        exposure = self.target_risk * capital * \
            np.abs(signal) / instrument_risk
        shares = np.floor(exposure / price)
        if shares * price > capital:
            return np.floor(capital / price)
        return shares
    def _getExposureDrift(self, cash, position, price, signal,
                          instrument_risk):
        if position == 0:
            return 0, 0
        capital = cash + price * position
        exposure = self.target_risk * capital * signal / instrument_risk
        cur_exposure = price * position
        avg_exposure = self.target_risk * capital / instrument_risk
        return (exposure - cur_exposure) / avg_exposure, avg_exposure

    def run(self):
        position = np.zeros(self.data.shape[0])
        rebalance = position.copy()
        exp_delta = position.copy()
        cash = position.copy()
        for i, (ts, row) in enumerate(self.data.iterrows()):
            if any(np.isnan(row.values)):
                cash[i] = self._calcCash(cash[i-1], position[i],
                    row['Close'], row['Dividends']) if i > 0 \
                        else self.starting_capital
                continue

            # Propagate values forward
            position[i] = position[i-1]
            cash[i] = self._calcCash(cash[i-1], position[i],
                row['Close'], row['Dividends'])

            if row['signal'] > 0:
                if position[i] <= 0:
                    cash[i] += position[i] * row['Close']
                    position[i] = self._sizePosition(
                        cash[i], row['Close'], row['STD'], row['signal'])
                    cash[i] -= position[i] * row['Close']
            elif row['signal'] < 0:
                if position[i] >= 0:
                    cash[i] += position[i] * row['Close']
                    if self.shorts:
                        position[i] = -self._sizePosition(
                            cash[i], row['Close'], row['STD'], row['signal'])
                        cash[i] -= position[i] * row['Close']
                    else:
                        position[i] = 0
            else:
                # Remain neutral if signal == 0
                cash[i] += position[i] * row['Close']
                position[i] = 0

            # Check for rebalancing
            delta_exposure, avg_exposure = self._getExposureDrift(
                cash[i], position[i], row['Close'], row['signal'],
                row['STD'])
            exp_delta[i] += delta_exposure
            if np.abs(delta_exposure) >= self.exposure_drift:
                shares = np.round(delta_exposure * avg_exposure /
                    row['Close'])
                cash[i] -= shares * row['Close']
                position[i] += shares
                rebalance[i] += shares

        self.data['position'] = position
        self.data['cash'] = cash
        self.data['portfolio'] = self.data['position'] * \
            self.data['Close'] + self.data['cash']
        self.data['rebalance'] = rebalance
        self.data['exposure_drift'] = exp_delta
        self.data = calcReturns(self.data)

We inherit from our old model, add a few new parameters such as our min/max forecast and our exposure drift trigger, then initialize it and we're set. There are a few changes to some of the methods to take the scale factor into account, and the signal dictionary structure changed slightly as well to include your custom scales (in case you want to change them).

sig_dict = {
  'MAC': {
    0: {'fast': 8, 'slow': 32, 'scale': 8.384},
    1: {'fast': 16, 'slow': 64, 'scale': 5.712},
    2: {'fast': 32, 'slow': 128, 'scale': 3.824},
    3: {'fast': 64, 'slow': 256, 'scale': 2.528}
  },
  'MBO': {
    0: {'N': 20, 'scale': 3.16},
    1: {'N': 40, 'scale': 3.27},
    2: {'N': 80, 'scale': 3.35},
    3: {'N': 160, 'scale': 3.35},
    4: {'N': 320, 'scale': 3.35}
  },
  'CAR': {
    0: {'status': True, 'scale': 3}
  }
}

We initialize and run this just like the other systems:

sys_fcast = ForecastStarterSystem(ticker='HAL', signals=sig_dict)

Viewing the stats:

fcast_stats = getStratStats(sys_fcast.data['strat_log_returns'])
bh_stats = getStratStats(sys_fcast.data['log_returns'])
stats = pd.DataFrame([fcast_stats, bh_stats], 
    index=['Forecast', 'Buy and Hold'])

Looking at the equity curves, we see that the system is much more conservative, but outperforms the underlying security over the long run.


We can also take a look at how the signal evolves over time and compare this to our continuous system without the forecast.


You can see that the forecast allows the system to take on larger positions both to the upside and downside. Although the continuous system's signal can range freely between -1 and 1, in practice, we round it up or down, which is why you see these large jumps in position in the second plot.

The forecast system really stands out in the second case during the COVID crash of 2020 where it went from a long position to being short, and then piled into that short position very quickly.

Let's zoom in on this region to better understand what's happening.


We're short HAL as its price is collapsing (bottom right plot). So our total capital is growing very quickly which bumps up the numerator in our target exposure. At the same time, the forecast is becoming stronger and stronger which is telling us we're more and more confident in this short. Finally, our actual exposure is decreasing because the price is falling.

All of this combines to lead to a rapid change in our exposure drift dictating that our system go short very quickly (risk is increasing too, but because that's on a rolling 12-month horizon, it moves much more slowly than these other values).

There is a big snap back in late March which results in our exposure drift blowing out to over 100% to the downside, meaning our position is now too large and we need to move the other way. You can see this in the top right plot on the day where the system closed out 80 shares of its position.

This was caused by a 26% jump in the price of the stock in one day.

The strategy wound up losing 9% on that move alone, but performed well through a very turbulent period, boosting our portfolio by 22%.


Dynamic Trend Following

Adding our forecast lets our trend following system become more dynamic. We take bigger bets when it makes sense and adjust our position along the way.

This approach helps us avoid a lot of the big drawdowns that occur during trading so we can continue to grow and compound our position.

In the next post, we'll cover what Carver says provides the biggest boost to this system: diversification.

To stay up to date with the latest, subscribe, and check out our free demo here!