Hello
I am just wondering how people interpret the results from a walk forward optimization. What do you look for? How do you compare the good results from the bad ones?
thanks
Danny z
Walk Forward Testing
Hi. Personally, I use it to test whether I am being realistic in how I would view the market as if I were watching it in real time. If I see something developing I would take a trade, record it along with my ideas, and then continue to walk forward.
Slow and painful, but still quicker and cheaper than live markets.
It also helps with keeping a trade journal to record your thinking, so you can actually see whether the thought process works, or is reasonably accurate, in the real world.
Makes a test far more realistic than just a back test.
So the answer to the question is: if it's profitable, and I can realistically follow it, then it's a success.
The walk forward optimizer is one of the most powerful features of MC. As far as I know, no other product offers this capability.
The main problem with optimization is that you are "curve fitting", basing the chosen parameters on past data. For example, suppose I test a million combinations of two moving average crossovers in any market. Chances are two things will happen. The first is that a profitable combination will almost certainly be found, given the million combinations tested.
More importantly, the second discovery is that the probability of that "winning" combination of moving average crossovers making money in the future is slim to none.
The only way to determine whether a trading system will work in the future is to separate your test data into "in sample" and "out of sample" data sets, and to test the optimized version on the "out of sample" data, not on the "in sample" data you used to optimize your variables.
Normally, using any other product, I need to extract a portion of the data and run my optimized parameters on the "out of sample" data myself.
Multicharts does this automatically with the walk forward optimizer.
So ideally what I look for is an "out of sample" result that is comparable to the in-sample result. If your average trade is +$100 on the in-sample data and, most importantly, +$100 on an out-of-sample set, then... post the system, because you have an excellent prospect. I'm joking, but the point is you have something "real" in terms of profitability if the out-of-sample results are good.
Bottom line: for profitability measurements, focus more on the out-of-sample result than on the in-sample one.
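The rolling in-sample/out-of-sample split the post describes can be sketched roughly like this (a generic illustration, not MultiCharts' actual implementation; the window sizes are made-up example values):

```python
def walk_forward_windows(n_bars, in_sample, out_sample):
    """Yield (is_start, is_end, os_start, os_end) index ranges.

    Each window optimizes parameters on `in_sample` bars, then tests the
    chosen parameters on the next `out_sample` bars; the window then rolls
    forward by `out_sample` bars so every bar is tested out-of-sample once.
    """
    start = 0
    while start + in_sample + out_sample <= n_bars:
        is_end = start + in_sample
        os_end = is_end + out_sample
        yield (start, is_end, is_end, os_end)
        start += out_sample

# Example: 1000 bars, optimize on 400, test on the next 100, roll by 100.
windows = list(walk_forward_windows(n_bars=1000, in_sample=400, out_sample=100))
# The first window optimizes on bars 0..399 and tests on bars 400..499.
```

The key property is that each out-of-sample segment is only ever traded with parameters chosen on data that came strictly before it.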
For me, the advantage of walk-forward backtesting (WFBT) is to check how well the optimization carries over to the real world.
Let's say you optimize for Net Profit (NP), your in-sample (IS) NP is $1000, and your out-of-sample (OS) NP (the optimized strategy applied to "real world" data) is -$500. Then you can define something like this:
strength_of_optimization (SOO) = (OS_NP / IS_NP) * 100
So in the case above, SOO = -50%. This is a bad result and means something like: you can't rely on the optimization, or the optimization is not stable against the market.
If your out-of-sample NP is $1000, you would get 100%, which would mean your optimization is realistic.
Now you need some more runs, say 100 in-samples and 100 out-of-samples.
Then build the overall (squared) average over SOO.
If it is positive, you can expect a final win; if not, drop the strategy or the optimization, in my opinion.
(The question is how to interpret SOO.)
If you export to Excel, building the average is pretty fast.
(Personally, I have so far found no strategy that could stand up to the SOO criterion. Perhaps there is another equation expressing a more realistic criterion for optimization quality?)