A Question for System Builders / Testers


6 replies to this topic

#1 Sentient Being

    Member

  • Traders-Talk User
  • 4,262 posts

Posted 08 July 2007 - 12:34 PM

I'm working with the simple system tester that comes with Metastock. One thing you quickly learn is that if you combine a few technical tools and optimize the various variables, you can end up with tens of thousands of optimization runs just testing one system against one stock. That is clearly a major opportunity to over-optimize and create a useless system.

Then there is the question of what all those results are worth. How do you tell whether a result is a valid optimization that might repeat or be profitable going forward, as opposed to an OVER-optimization? I understand the basic concept: I'm looking for "hills" or "mounds" of nearby settings that form a large grouping of profitable results, meaning that going forward, similar settings might be more likely to be of value. It's easier to hit a mountain than a profit spike. As for those fabulous spikes of great returns: charted, they often turn out to be wild gyrations of extreme settings that produced a one-time great return that would most likely never repeat.

With thousands of results available, how does one sort through them? I'd be interested in hearing from others who optimize: how do you avoid over-optimization? What do you look for, and what tricks have you come up with? Particularly if anyone is profitable at this and has a few ideas.

Using the tools I have and my non-existent programming skills, I tend to look for a large number of optimization runs that get nearly identical results. Not the high values you see on unique settings, but a large clump of varied settings that work out in similar fashion. Then I look at the variation in settings and try to select a setting centered in the group.

If anyone has some basic tips they think could help me out, I'd appreciate it. And if you are reluctant to use this board to publish any little tricks you might be willing to share, I can be reached at disposable12@comcast.net
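The "hills, not spikes" idea described above can be sketched in a few lines. This is a minimal illustration, not Metastock code: it assumes `results` is a dict mapping a pair of parameter values to profit, and scores each setting by the average profit of its grid neighbours rather than its own profit, so isolated spikes lose to broad plateaus.

```python
# Sketch of the "hills, not spikes" idea: score each setting by the average
# profit of its immediate neighbours, so a lone spike is penalised and a
# broad plateau wins. All names here are illustrative; 'results' would come
# from your own optimization runs.

def neighbourhood_score(results, point, radius=1):
    """Average profit over the grid neighbourhood of 'point' (inclusive)."""
    (a, b) = point
    neighbours = [
        results[(a + da, b + db)]
        for da in range(-radius, radius + 1)
        for db in range(-radius, radius + 1)
        if (a + da, b + db) in results
    ]
    return sum(neighbours) / len(neighbours)

def most_robust_setting(results, radius=1):
    """Pick the setting whose neighbourhood, not itself, performs best."""
    return max(results, key=lambda p: neighbourhood_score(results, p, radius))

# Toy grid: a lone spike at (2, 2) versus a broad mound around (7, 7).
results = {(x, y): 0.0 for x in range(10) for y in range(10)}
results[(2, 2)] = 100.0                      # one-off spike
for x in range(6, 9):
    for y in range(6, 9):
        results[(x, y)] = 40.0               # stable plateau

print(most_robust_setting(results))          # the plateau centre, not the spike
```

The spike at (2, 2) has the highest raw profit, but its neighbourhood averages far less than any point inside the plateau, so the centered plateau setting is chosen.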

Edited by Sentient Being, 08 July 2007 - 12:37 PM.

In the end we retain from our studies only that which we practically apply.

~ Johann Wolfgang Von Goethe ~

#2 colion

    Member

  • Traders-Talk User
  • 1,169 posts

Posted 08 July 2007 - 01:25 PM

(quoting Sentient Being's post above)





You have asked a "zillion" questions, but here are a few thoughts.



You can cut down on the number of optimization runs by minimizing the number of variables in your system. In general, I find that simpler often proves to be more robust, and robust is good.



Optimizing in a sequential manner can reduce the workload. In other words, say you have 3 variables. Instead of running all 3 together, you could run one and then fix its value. Repeat for the next variable, etc. When you have found values for all the variables, run them together in a single optimization with a limited range for each.
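The sequential approach can be sketched as a coordinate-wise search. This is an illustration under stated assumptions: `profit` stands in for a real backtest, and the variable names and ranges are made up.

```python
# Sketch of sequential optimization: sweep one variable at a time with the
# others held fixed at their current best values. 'profit' is an
# illustrative stand-in for a real backtest result.

def profit(fast, slow, stop):
    # Toy objective with a single smooth optimum at (10, 40, 5).
    return -((fast - 10) ** 2 + (slow - 40) ** 2 + (stop - 5) ** 2)

def sequential_optimize(ranges, start):
    """Fix all variables at 'start', sweep each in turn, keep the best."""
    best = dict(start)
    for name, candidates in ranges.items():
        best[name] = max(candidates, key=lambda v: profit(**{**best, name: v}))
    return best

ranges = {
    "fast": range(2, 30),
    "slow": range(20, 100, 5),
    "stop": range(1, 15),
}
start = {"fast": 2, "slow": 20, "stop": 1}
print(sequential_optimize(ranges, start))
```

Note the trade-off: three sweeps cost 28 + 16 + 14 runs here instead of 28 × 16 × 14 for the full grid, but if the variables interact strongly the sequential path can miss the joint optimum, which is why a final limited-range joint run is suggested.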



The number of optimization runs can also be reduced by first running through a range of values in large steps, then redoing the run with finer steps in the areas that look best.
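The coarse-to-fine step can be sketched as two passes over the same range. Again `profit` is a stand-in for a real backtest, and the ranges and step sizes are illustrative.

```python
# Sketch of coarse-to-fine search: sweep a wide range in big steps first,
# then re-sweep a narrow window around the coarse winner in fine steps.
# 'profit' is an illustrative stand-in for a real backtest result.

def profit(length):
    return -(length - 37) ** 2   # peak at 37, unknown to the search

def coarse_to_fine(lo, hi, coarse_step, fine_step):
    coarse_best = max(range(lo, hi + 1, coarse_step), key=profit)
    fine_lo = max(lo, coarse_best - coarse_step)
    fine_hi = min(hi, coarse_best + coarse_step)
    return max(range(fine_lo, fine_hi + 1, fine_step), key=profit)

# 10 coarse runs plus about 21 fine runs instead of 100 full-resolution runs.
print(coarse_to_fine(1, 100, 10, 1))   # → 37
```

The caveat is the same as with any coarse grid: if the profit surface has narrow features smaller than the coarse step, the first pass can land in the wrong neighbourhood — which is one more reason to prefer the broad "mounds" over spikes.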



Perhaps most important, it is essential to get a good feel for how things will work out in the future. You can do this by splitting the data into two or more parts. The first is used for the optimization runs. After optimization, the system is run on the data that was not used for optimization (out-of-sample) to see if it still behaves well on new data.
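The split-and-validate procedure can be sketched as follows. Everything here is illustrative: `backtest` is a toy long-only moving-average system, the price series is synthetic, and the 70/30 split is just one common choice.

```python
# Sketch of an in-sample / out-of-sample check: optimize on the first part
# of the data only, then measure the chosen setting on the held-out part.
# 'backtest' and the price series are illustrative stand-ins.

def backtest(prices, length):
    """Toy long-only system: hold while price is above its MA(length)."""
    pnl = 0.0
    for i in range(length, len(prices) - 1):
        ma = sum(prices[i - length:i]) / length
        if prices[i] > ma:
            pnl += prices[i + 1] - prices[i]
    return pnl

def split_and_validate(prices, lengths, split=0.7):
    cut = int(len(prices) * split)
    in_sample, out_sample = prices[:cut], prices[cut:]
    best = max(lengths, key=lambda n: backtest(in_sample, n))
    return best, backtest(in_sample, best), backtest(out_sample, best)

# Gently trending toy series; a real test would use actual price data.
prices = [100 + 0.3 * i + (3 if i % 7 == 0 else -1) for i in range(200)]
best, ins, outs = split_and_validate(prices, lengths=range(2, 20))
print(best, round(ins, 2), round(outs, 2))
```

The number to watch is the out-of-sample figure: if it collapses relative to the in-sample result, the optimization has likely fitted noise rather than an edge.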



Unless you have some reason to do otherwise, you also have to ensure that the data (both in-sample and out-of-sample) represents the type of market you are interested in (e.g., bull, bear, congestion, all of them). Don't assume that an optimization for a bullish period will hold up in another environment. Of course, if you use a long test period that includes a variety of market conditions, it is more likely that your system will be robust and produce acceptable (not necessarily maximum) results under all conditions. Some actually develop separate bull, bear, etc. systems, and others go for the one-size-fits-all approach.



Lastly, you might consider the equity curve's shape (e.g., rate of change, smooth uptrend, minimal dips) as a measure of system performance, and watch for degradation as an indication that a new optimization is needed.
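One simple way to make "watch for degradation" concrete is a running drawdown check on the equity curve. This is a sketch; the 15% threshold is an arbitrary illustrative choice, not a recommendation.

```python
# Sketch of equity-curve monitoring: track the running peak and flag when
# the drawdown from that peak exceeds a threshold, as a signal that
# re-optimization may be due. The threshold is illustrative.

def max_drawdown(equity):
    """Largest peak-to-trough drop, as a fraction of the peak."""
    peak, worst = equity[0], 0.0
    for value in equity:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

def needs_reoptimization(equity, threshold=0.15):
    return max_drawdown(equity) > threshold

smooth = [100, 102, 101, 105, 107, 110, 112]
degraded = [100, 110, 120, 118, 104, 98, 95]

print(round(max_drawdown(smooth), 4))    # small dip only
print(needs_reoptimization(degraded))    # True: more than 15% off its peak
```

Other smoothness measures (e.g., the R-squared of a linear fit to the curve) serve the same purpose; drawdown is just the easiest to compute and reason about.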

#3 Rich

    Member

  • Traders-Talk User
  • 761 posts

Posted 08 July 2007 - 01:40 PM

You have addressed very difficult issues. Blind and double blind testing is the only way to determine if your final model is worth much. Even then, you can never be absolutely sure. I use a genetic algorithm to search through the solution space. The fitness function is adjusted downward (parsimony) based on the number of degrees of freedom of the candidate model. This way you don't add unnecessary variables (over optimization) that can fool you when doing your validation testing. There's a lot more to it than this, but hope this helps. Regards, Rich
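The parsimony adjustment Rich describes can be sketched very simply. The penalty form and weight below are illustrative assumptions, not his actual fitness function, which he notes is more involved.

```python
# Sketch of a parsimony-adjusted fitness: penalise a candidate system's
# raw score by its number of free parameters, so extra variables must pay
# for themselves. The linear penalty and its weight are illustrative.

def adjusted_fitness(raw_profit, n_params, penalty_per_param=5.0):
    """Raw backtest profit minus a cost per degree of freedom."""
    return raw_profit - penalty_per_param * n_params

# Two hypothetical candidates: a 2-parameter system, and a 6-parameter
# system whose extra knobs squeezed out a little more in-sample profit.
simple = adjusted_fitness(raw_profit=50.0, n_params=2)
complex_ = adjusted_fitness(raw_profit=58.0, n_params=6)

print(simple, complex_)   # 40.0 28.0 -- the simpler system now ranks higher
```

Inside a genetic algorithm this adjusted value would be the selection criterion, so candidates that buy a small in-sample gain with many extra degrees of freedom get bred out of the population.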

#4 bigtrader

    Member

  • Traders-Talk User
  • 519 posts

Posted 08 July 2007 - 02:07 PM

:ninja:

No longer interested in debating with IGNORANT people.


#5 danzman

    Member

  • Traders-Talk User
  • 908 posts

Posted 08 July 2007 - 04:24 PM

Good questions.



If you decide to go this route, you're in for some eye-opening stuff. System trading
has a steep learning curve, but the profits can be phenomenal.



First of all, if you're trading stocks, it's best to test a system and optimize over
MANY stocks. That way, you can find a universal EDGE.



Metastock isn't going to cut it. You're going to spend about 3K on software that
truly emulates a portfolio.



And that's just the start. The next phase is to develop SEVERAL systems that
are as uncorrelated as possible. So you'll end up trading 20-30 stocks at once
in order to get a favorable reward/risk ratio.



Most of the stuff you read on system trading is applied to commodities. For stocks,
there's an extra step. You'll need to come up with a filter that buys during a bull
market and shorts during a bear. If you don't, your equity curve will look horrible.
Some people use a simple moving average on an index for this. I use COT data
on the S&P 500 (in a unique way, which has kept me bullish since June 2006).
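The simple moving-average filter mentioned above can be sketched as follows. The 200-day length is a common illustrative choice, not a recommendation, and the index series here is synthetic.

```python
# Sketch of a moving-average regime filter on an index: only take longs
# while the index closes above its long moving average, only shorts below
# it. The 200-bar length is a common but illustrative choice.

def regime(index_closes, length=200):
    """Return 'bull' or 'bear' for the latest close vs. its moving average."""
    window = index_closes[-length:]
    ma = sum(window) / len(window)
    return "bull" if index_closes[-1] > ma else "bear"

# Toy index history: a long rise, then a slide below the average.
rising = [1000 + i for i in range(250)]
falling = rising + [1249 - 3 * i for i in range(120)]

print(regime(rising))    # bull
print(regime(falling))   # bear
```

A stock system would then consult `regime()` on the index before each entry: long signals are taken only in a bull regime, short signals only in a bear regime.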





...and start reading this forum:



http://www.tradingblox.com/forum/



There's a wealth of stuff on there. Much more sophisticated discussions than on this
or pretty much any other site.



D
I don't make predictions, I just react.

#6 colion

    Member

  • Traders-Talk User
  • 1,169 posts

Posted 08 July 2007 - 06:47 PM

Metastock isn't going to cut it. You're going to spend about 3K on software that
truly emulates a portfolio.



I agree that Metastock is not the way to go. But it is not necessary to spend anywhere near $3K. AmiBroker, for example, does the job quite nicely.

#7 Sentient Being

    Member

  • Traders-Talk User
  • 4,262 posts

Posted 08 July 2007 - 09:13 PM

Thanks guys. Lots to digest. Most of it I'm at least familiar with. I'll be going over everything carefully.

Thanks for the link!


Good questions.

...and start reading this forum:

http://www.tradingblox.com/forum/

There's a wealth of stuff on there. Much more sophisticated discussions than on this
or pretty much any other site.


In the end we retain from our studies only that which we practically apply.

~ Johann Wolfgang Von Goethe ~