MultiCharts Optimization Speed

Questions about MultiCharts and user contributed studies.
User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

MultiCharts Optimization Speed

Postby Bruce DeVault » 07 Feb 2010

We've conducted extensive tests on the latest MultiCharts 6.0 Beta vs. the latest TS 8.7 release, and the results are summarized by the attached graphic.

The test in question is an unaltered benchmark of strategy optimization speed.

In short, we did "real-world" tests, disregarding all of the usual performance-enhancing tweaks: no overclocking, no offline mode, no disabling antivirus, no turning off multi-monitor rigs or disabling the usual software on the PC. What we found, in brief, was that as the processor generation increased, MultiCharts pulled further and further ahead in optimizer performance due to its extensive use of multithreading.
Attachments
MC Performance.gif (61.46 KiB)
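What makes strategy optimization such a good fit for multithreading is that every parameter combination is an independent backtest. A toy sketch of that structure in Python (nothing here is MultiCharts code; the `backtest` function and its fake profit surface are invented for illustration):

```python
# Toy model of multithreaded strategy optimization: every parameter
# combination is an independent backtest, so iterations can be farmed
# out to however many cores the machine has.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def backtest(params):
    """Stand-in for one optimizer iteration; returns (profit, params)."""
    length, factor = params
    # Fake, deterministic "net profit": peaks at length=20, factor=3.
    profit = -(length - 20) ** 2 - (factor - 3) ** 2
    return profit, params

# 21 lengths x 5 factors = 105 independent iterations.
grid = list(product(range(10, 31), range(1, 6)))

with ThreadPoolExecutor(max_workers=8) as pool:
    best = max(pool.map(backtest, grid))

print(best)  # (0, (20, 3)) -- the peak of the fake profit surface
```

Note that in CPython the GIL limits true parallelism for pure-Python work, so a real optimizer would use processes or native threads (as MultiCharts does); the point here is only the structure: independent iterations that parallelize trivially across cores.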

User avatar
TJ
Posts: 6586
Joined: 29 Aug 2006
Location: Global Citizen
Has thanked: 971 times
Been thanked: 1907 times

Postby TJ » 07 Feb 2010

WOW !

Spaceant
Posts: 252
Joined: 30 May 2009
Has thanked: 1 time
Been thanked: 3 times

Postby Spaceant » 08 Feb 2010

Hi Bruce,

Can you provide the specifications of your i7 3.2 GHz computer? I'm considering building one.

Sa

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 08 Feb 2010

You're welcome to email me about this. I'd rather not distract the thread with details of how we built a specific machine, since we're not in the hardware business and it is just for our own performance testing purposes.

wegi
Posts: 112
Joined: 02 Jun 2009
Has thanked: 3 times
Been thanked: 12 times

Postby wegi » 09 Feb 2010

I have done a performance test too, but I compared MC 5.5 with TradeSignal 5 Std. edition.
Both run on an Intel Q9550 @ 3.6 GHz with 4 GB RAM.
The virus scanner was running. I used the same historical data, 5-minute bars.

My test was: change an input in a strategy, recalculate, wait for the result.
The strategy is based on 5-minute bars, from Jan 2007 to Jan 2010.
In TradeSignal it takes about 30 seconds to calculate the strategy.
In MC I was not able to complete this test; it took too long (minutes).
So I used only 1 year of 5-minute data, and MC needed more than 2 minutes
to calculate the strategy.

User avatar
Andrew Kirillov
Posts: 1589
Joined: 28 Jul 2005
Has thanked: 2 times
Been thanked: 31 times
Contact:

Postby Andrew Kirillov » 09 Feb 2010

It is an interesting discussion.
Dear Wegi,
A possible reason for the MC lag could be the antivirus. The longest routine was data extraction. Since we transfer data from tsserver.exe to MultiCharts.exe, your antivirus may intercept each call and slow the overall process down by a factor of 100 or more. TradeSignal, at the same time, is a single process and doesn't pass the data back and forth, so the antivirus can't intercept it. So the fair test is to plot a chart and run an optimization with several thousand iterations, with the antivirus reconfigured or disabled.
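A "factor of 100 or more" is plausible whenever a fixed per-call interception cost dwarfs the useful work inside each call. A back-of-the-envelope model in Python, with all numbers invented purely for illustration:

```python
# Rough model of antivirus interception: a fixed cost added to every
# cross-process call dominates when each call carries little work.
work_per_call_us = 2        # useful work per data-transfer call (microseconds)
scan_overhead_us = 200      # hypothetical per-call interception cost
calls = 100_000             # e.g. one call per chunk of historical data

clean_us = calls * work_per_call_us
scanned_us = calls * (work_per_call_us + scan_overhead_us)
slowdown = scanned_us / clean_us
print(f"{slowdown:.0f}x slower")  # 101x -- the same order as "a factor of 100"
```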

It would be good if you published your code and a workspace. We benchmarked MultiCharts vs. TradeSignal a year ago, and MultiCharts outperformed it dramatically in strategy execution and optimization. Interestingly, they use multi-threaded optimization too, but it is slow anyway.

Thank you.

wegi
Posts: 112
Joined: 02 Jun 2009
Has thanked: 3 times
Been thanked: 12 times

Postby wegi » 09 Feb 2010

Thank you for this information.
I will do some tests in the next few days, without my virus scanner.

I have not tested optimization! I just opened the chart with the strategy, changed one input, and waited for the recalculation. I timed it, starting from pressing OK in the Format Signal dialog and ending when I see the equity curve.
The same in TradeSignal.
Maybe MC outperforms TradeSignal at optimization; the current TradeSignal version can use multithreaded optimization too. I will try to test this.

Because this is my real trading strategy, I cannot post it as it is.
Maybe I will rewrite it.

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 09 Feb 2010

Wegi, this sounds like an antivirus issue. Try disabling your antivirus temporarily to see if that's the case, and if it is, perhaps a different antivirus program won't have this trouble.

We run the latest Norton 360 (which is a fairly "heavy" application as such things go) and don't have any problems at all.

You can also try testing in offline mode, to take the TS-to-MC communication out of the loop. This will give you a clearer picture of evaluation speeds without getting bogged down in how long it takes to retrieve historical data, which is probably where your antivirus program is interfering with the process.

Kasper
Posts: 17
Joined: 30 Dec 2008

Postby Kasper » 09 Feb 2010

hmm -

A Core i7 at 3.2 GHz must be an Intel Core i7 965 Extreme.

But why the sudden performance increase - was it due to TURBO mode on the i7, or the quad cores?

Anyway - I'm bidding on an i7 975 3.3 GHz Extreme for development :-) at the moment..

But most people can happily go with an i7 920 and push the clock a bit - it can take it.

:-)

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 09 Feb 2010

We have a 975 on order. The 965 was used for testing here because it was a machine that was already up and running and was available.

We don't overclock here because we do professional business work. Overclocking is typically not recommended in situations where downtime has a monetary value, e.g. when trading real money live or running a business. It's more something you might do on your home PC for playing games, when you're a hobbyist/enthusiast curious how far things can be pushed even if reliability isn't statistically the same, and when having the CPU burn out early, or occasional reliability issues from pushing the RAM chips to the edge of where they're functional, is an acceptable risk.

We would rather achieve high performance through good design, and if a 965 or 975 costs more than a 920, it costs what it costs. The difference is "small stuff" compared with a losing trade or two because your computer crashed one day while you were in a position, leaving you trying to figure out which CPU voltage to tweak to get it back past POST, when it had been running this whole time without crashing.

It's important to understand that just because a combination of overclocking settings runs for a few days without crashing, that doesn't mean it's "just as good". RAM chips and other circuits often become stressed and fail over time when pushed to a higher frequency or voltage than rated, and when you're in the trading business, that's usually not the sort of thing you want in the loop. It's fine to overclock on your play machine, but it's important that people realize it's not "the same thing", nor suitable for typical business purposes.

The performance difference is principally because of the utilization of multiple cores together with hyperthreading.

User avatar
geizer
Posts: 375
Joined: 16 Jun 2008
Has thanked: 40 times
Been thanked: 38 times

Postby geizer » 10 Feb 2010

But why the sudden performance increase - was it due to TURBO mode on the i7, or the quad cores?

I think it's because the Core i7 965 is not just a quad-core CPU but also multi-threaded (aka Hyper-Threading, in Intel's classification). That effectively makes the OS see it (and use it) as an 8-core processor.

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 10 Feb 2010

In general, that's correct - Core i7 Extreme Edition processors utilize a combination of multiple cores and hyperthreading, along with architectural improvements, to achieve performance increases over a simpler processor design. There's a decent write-up summarizing this at http://reviews.cnet.com/processors/inte ... 66836.html. Bear in mind there's a newer 975 design as well, and 6-core chips of a similar design are due out by the end of 2010 Q2.
Last edited by Bruce DeVault on 14 Feb 2010, edited 1 time in total.

Kasper
Posts: 17
Joined: 30 Dec 2008

Postby Kasper » 12 Feb 2010

Yes, I know the i9 processors with 6 cores are coming - there are even a few ES (engineering samples) on eBay.

But an i7 920 is basically an i7 975/965 that could not pass complete testing. Every processor Intel makes for the LGA 1366 socket starts out as a 975, gets demoted to a 965 if it can't pass all the tests, then to a 940, and then to a 920. The lower in the chain, the more things they cut out.

That is why overclocking an i7 920 is quite safe - I would say an overclock of up to 20-25% will not create any problems - but to be certain you can run a burn-in test for 24-48 hours. If they pass, they will never fail in a trading setup.

The Extreme series is MADE for overclocking - that is why they are called Extreme. It means the multiplier is UNLOCKED, and Intel lets you overclock them without warranty problems.

Non-Extreme processors have LOCKED multipliers and require a different setup to overclock.

The locking and unlocking of multipliers is done post-production with laser-cut circuits, once testing is completed. That determines the type number and what model it will be when it comes out of the factory. But take them apart and they are identical, apart from the "laser" cuts...

So an i7 920 is fabulous for overclocking and can reliably reach very high speeds. You can even buy pre-tested i7 920s on eBay that can do 4 GHz with air cooling.

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 12 Feb 2010

This depends, I suppose, on what you mean by "reliable". Intel's usual testing process has been to test every chip until it fails, increasing the frequencies within a period of time, then back off and stamp it with a fairly generous safety margin from its only known failure point. So what you're saying is that, because there's a safety margin in the way it was tested at Intel, you're willing to go within that margin more than Intel is in their stamping determination. That's fine, and there's no doubt this can be done in almost all cases to some extent. The question is: how reliable will it be, and how far is a good idea? The answer depends on what period of time you're talking about. If by "reliable" you mean one otherwise-unexplained crash a week is acceptable, and burnout 1.5 years later, then sure, you can generally overclock quite far with those expectations.

The important thing is that just having it pass POST and run for 24 hours without crashing isn't nearly enough to say "it works, without qualification". It's a continuum: as you get closer to the chip's certain-failure point, failures become increasingly frequent (although not deterministically). Overclocking only a little may cause only one crash a month, while shortening the working life of the chip a little. Increase the frequency quite far and it won't pass POST anymore. Somewhere in the middle is where many people overclock, such that the working life of the chip is shortened and they also get crashes once in a while, say once a week or once every couple of weeks. And because they never really know what caused a crash (could it just be Windows being flaky?) and never did any controlled comparisons, they just assume it has nothing to do with the overclocking.

It is for this reason that in a business environment, where crashing has consequences that can be measured in dollars, overclocking isn't such a good idea. For your game machine, sure. When thousands of dollars are on the line with every minute of downtime, absolutely not.

Kasper
Posts: 17
Joined: 30 Dec 2008

Postby Kasper » 12 Feb 2010

But why the sudden performance increase - was it due to TURBO mode on the i7, or the quad cores?

I think it's because the Core i7 965 is not just a quad-core CPU but also multi-threaded (aka Hyper-Threading, in Intel's classification). That effectively makes the OS see it (and use it) as an 8-core processor.


Hyperthreading is usually NOT as efficient as multiple cores. Hyperthreading forces the processor to stop, push all content in progress to a stack handler, work on another flow, then retrieve the cached items again to work on the first flow. This is more advanced now, but it is mostly useful for running multiple single-threaded applications at the same time.

Truly multi-threaded applications like MultiCharts should actually see a performance DECREASE when doing optimization on a hyperthreading-enabled machine vs. a quad core with hyperthreading disabled - IF MultiCharts works efficiently with multiple processors.

Turbo mode EMULATES a single-processor machine by "disabling" some of the cores and running the other cores at a much higher FREQUENCY (again, simplified). The reason the processor shuts off some cores is so it will not get too hot while the active cores run at, say, 3.8 GHz when the normal processor speed is 3.06 GHz.

So the real question, and why I asked, was to figure out whether it was the TURBO part of the i7 965 or the quad-core part that did the trick in reaching the high scores.

That will be quite telling about MultiCharts' multi-processor performance.

So a test with hyperthreading disabled, a test with turbo mode disabled, and a combination of the two will show a lot about the MultiCharts coding style, and where you should put your money on future processors/systems to reach peak performance.

:-) damn this got technical.....

Kasper
Posts: 17
Joined: 30 Dec 2008

Postby Kasper » 12 Feb 2010

This depends, I suppose, on what you mean by "reliable". Intel's usual testing process has been to test every chip until it fails, increasing the frequencies within a period of time, then back off and stamp it with a fairly generous safety margin from its only known failure point.


Hmm - first of all, I said BURN-IN test, not a simple boot/POST. A burn-in test is maximum load on memory and calculations (100% CPU + 100% memory transfers for 24-48 hours). That is a normal burn-in test.

But processor production is all about statistics, and usually the production yield is bell-curve shaped (look up any statistics book), so the ones in the middle region are actually the MIDDLE-spec version of the processor. Anything with better specs is on the right side; anything with lower specs is on the left side.

Everything below a MIDDLE-spec processor is where the VOLUME of sales is. That means a processor manufacturer will usually have to dig into the quality part (the better, right side of the curve) to cover the demand for the lower-end specs. That is why, 99% of the time, you can reliably overclock the lowest-end processor in a series to quite high GHz before you see any problems. There simply isn't 50%+ demand (volume) for the higher-end (right side of the bell curve) processors, so the whole curve is shifted.

That means Intel is "forced" to give you a better-quality processor than you are actually paying for with the low-end processors. There is simply not enough yield in the curve to make "just the quality and quantity needed".

That is why Intel introduced multiplier locking in the first place, and why overclocking the LOWEST version in a series of processors is more or less ALWAYS safe. But the higher-spec processor you buy, the LESS you will be able to overclock it (the margins become smaller).

If you think anything else, you are a victim of the Intel overclocking marketing machine...... :wink: *G*


/Kasper

Kasper
Posts: 17
Joined: 30 Dec 2008

Postby Kasper » 12 Feb 2010

Maybe :-) TS could use this:

paralleluniverse.intel.com

Upload a binary that can run without interaction, and Intel will, free of charge, show multi-processor efficiency and give advice on how to improve it even more.

quite cool...

User avatar
geizer
Posts: 375
Joined: 16 Jun 2008
Has thanked: 40 times
Been thanked: 38 times

Postby geizer » 13 Feb 2010

Truly multi-threaded applications like MultiCharts should actually see a performance DECREASE when doing optimization on a hyperthreading-enabled machine vs. a quad core with hyperthreading disabled - IF MultiCharts works efficiently with multiple processors.


Kasper,

Good point,
Right now I am in the market for a computer dedicated to trading & development. What is your choice?
A pure "quad", or a "quad" w/Hyper-Threading enabled? Both have 4 real cores, but one also has hyper-threading.
Suppose everything else is equal (i.e. price, clock frequencies, cache sizes, power consumption - essentially same-generation CPUs).
What do you think? What's your choice?

Thank you,
--
Pavel

User avatar
TJ
Posts: 6586
Joined: 29 Aug 2006
Location: Global Citizen
Has thanked: 971 times
Been thanked: 1907 times

Postby TJ » 13 Feb 2010

my simple rule of thumb:
always buy the most expensive computer you can afford,
because:
-- software has a way of catching up to hardware
-- your demand grows over time
-- you'll pay for the higher-end upgrade now... or (inevitably) later
so you might as well enjoy the higher power now.

Kasper
Posts: 17
Joined: 30 Dec 2008

Postby Kasper » 14 Feb 2010

Suppose everything else is equal (i.e. price, clock frequencies, cache sizes, power consumption - essentially same-generation CPUs).
What do you think? What's your choice?



I would ALWAYS take the one that has hyper-threading, if the rest of the specs are the same. You can ALWAYS disable HT on a good motherboard - so then you can choose.

Right now I would buy a Core i7 920-based system and overclock it. Then you can always upgrade to a Core i9 (6 cores) when you want, or get a cheap Core i7 975 Extreme once the Core i9 ships.

Make sure you buy an X58-based motherboard that supports the Core i7, incl. Extreme editions. Then you should be quite well set for future upgrades.

I would NEVER buy the highest-spec DESKTOP machine you can get. You waste your money. It will be 30-100% more expensive than taking a slightly lower-spec processor. (An i7 975 is about $1000 right now; an i7 920 is about $280.) Everything else is the same, and your performance will only suffer a bit - not much. A 920 runs at 2.66 GHz and a 975 at 3.33 GHz, and for simulations that is what matters when processors are from the same series. Make sure you get good-quality memory, and get 6 GB (3 x 2 GB).

You do not need a high-end (3D) graphics card - buy the cheapest ATI card with 3 display ports. You should be able to find them right now for about $80-120, and they support ATI's "use all 3 screens as one big screen" feature (ATI Eyefinity), like the new ATI 5450 card - just make sure the OEM has put 3 digital outputs on it! That might differ from OEM to OEM.

The graphics card recommendation MIGHT change if TS starts to support offloading simulations onto the graphics card's processor.....

So in short - any processor for the FCLGA1366 socket and X58 chipset.

Then put in an SSD drive - or 2 for safety, run in RAID MIRROR. You don't need capacity to run TS - you just need speed. Or get SAS drives (the same drives many servers use) and a SAS RAID controller. But remember, SAS drives are noisy buggers.

Disable the SWAP file! It will actually slow you down and is usually not needed on a 4 GB+ system. And swap files are not kind to SSD drives.

Run Windows 7 64-bit (I do, without any problems), or else you won't be able to use the 6 GB of memory :-)


My own current config:

HP XW4600, 3.0 GHz Q9650, 8 GB ECC memory, 2x 15k RPM SAS drives in RAID, NVidia Quadro FX3700 graphics (only because I dabble in AutoCAD).

Notebook: Lenovo WS700ds (dual screen) with a QX9300 Quad-core Extreme processor (upgraded it myself...), 128 GB Intel SSD + 320 GB Seagate HD, NVidia Quadro FX3700, 8 GB memory.

3 x HP2475w 24" monitors, 1920x1200 (2 on HDMI, one using DisplayPort).

Right now I'm waiting for the Core i9 before I upgrade next time. The HP Workstation XW4600 was ONLY because I got it VERY VERY cheap.

But I am a gear head :-)

Kasper
Posts: 17
Joined: 30 Dec 2008

Postby Kasper » 14 Feb 2010

To compare processor specs, check here:

ark.intel.com/Default.aspx

There you can see all the processors, from the Core i7 920, 940, 950, and 960 to the 965 Extreme and 975 Extreme. They all have the same specs apart from their different frequencies, and:

the 965 and 975 have a faster QPI bandwidth - but that is ONLY because they run with a clock multiplier of 25 vs. 20 on the other processors.

:-)

wegi
Posts: 112
Joined: 02 Jun 2009
Has thanked: 3 times
Been thanked: 12 times

Postby wegi » 14 Feb 2010

Hi,

I have done some more performance tests.
I compared my strategy, based on two timeframes,
calculated on 5-minute bars (175,000 bars!), about 3 years of history,
with the same data in TradeSignal 5 (TS5) and MultiCharts 5.5 (MC 5.5).

I loaded the charts and waited for the first calculation.
Then I changed one input and timed the recalculation.

I turned off my virus scanner, and the data loaded much faster in MC.

My PC: Intel Q9550, overclocked to 3.6 GHz, 4 GB RAM


Times for recalculation:

0:31 TradeSignal 5 Std. Edition
=> 31 seconds.
By the way, without overclocking, at 2.8 GHz it needs 1 minute!

MultiCharts 5.5:
1:30 original strategy.

Then I removed some code to find the bottleneck:

1:30 without text_new
0:06 without pushpop.dll
=> only 6 seconds, amazing!

The external DLL, pushpop.dll, turned out to be the bottleneck.
Without it, 6 seconds is very good performance.

To make sure the improvement wasn't just because nothing is painted on the chart,
I removed the indicator script that does the drawing. There was no performance improvement after removing it.

But my problem is that I can't live without pushpop.dll.


What I do:

I have one signal script for the whole strategy.
At the entry bar, I calculate my target, stop loss, SMA ... everything.

Let's explain it for my target:

Signal Script:

Code: Select all

   target = Volatility(period) * faktor; // faktor is an input
   ptPrice = EntryPrice + target;
   PUSH(7, Date, Time, ptPrice); // store the target price in global slot 7 (pushpop.dll)
   Sell next bar at ptPrice limit;


Indicator Script:

Code: Select all

     pt = POP(7, Date, Time); // read the target price back from global slot 7
     if pt > 0 then Plot5(pt, "Target");



Because I cannot paint into a chart window from a signal script, I have to transfer the value with a global variable.

Is there a better way to do this?

I don't think I can calculate everything in a function called from both a signal and an indicator script,
because I need information about my strategy: entry price, market position, equity, commissions.

That's one big advantage of TradeSignal 5: I can draw from anywhere!
The second is that I can access higher timeframes very easily,
for example to calculate a 30-minute simple moving average on a 5-minute timeframe: average(close,10) of 'data1 30m'.
In MultiCharts I have to use a second chart.

How do you draw your information onto a chart?

thx

wegi

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 14 Feb 2010

Generally you would want to avoid using trend lines in anything being optimized - add an input parameter to turn this functionality on or off.

COM interop has some overhead, so generally you don't want to go out to a DLL for a trivial operation - you want to go out for something that will take some time, so that the benefit of faster compiled code over a tokenized language can overcome the overhead of the transition.
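The amortization argument can be made concrete with a back-of-the-envelope model: paying a fixed transition cost once per bar versus once per batch. Only the 175,000-bar figure comes from wegi's test above; the microsecond costs below are invented for illustration:

```python
# Why a trivial per-bar DLL call hurts: the fixed call overhead is paid
# once per bar instead of once per batch of values.
bars = 175_000          # size of wegi's data set
call_overhead_us = 50   # hypothetical cost of one transition into the DLL
work_us = 1             # trivial work actually done per value

per_bar_us = bars * (call_overhead_us + work_us)   # e.g. PUSH on every bar
batched_us = call_overhead_us + bars * work_us     # one call, array payload

print(round(per_bar_us / batched_us))  # ~51x: overhead dwarfs the useful work
```

The exact ratio depends entirely on the assumed costs; the qualitative point is that batching, or doing substantial work per call, is what makes a DLL call worthwhile.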

I agree the inability to output plots from a signal/strategy/system is a shortcoming of PowerLanguage and of EasyLanguage - it's something other platforms such as NeoTicker are able to do very well and that has significant benefits. It would make sense when time permits to simply enhance the architecture to permit this.

MultiCharts does have the ability to reference longer time frames on the same chart using data2, or other more complicated techniques including ADE and compression. If you're unsure how to do this or of the pros and cons, please create a new thread about this topic and post your example of what you have so far so that others can help you with this.

User avatar
TJ
Posts: 6586
Joined: 29 Aug 2006
Location: Global Citizen
Has thanked: 971 times
Been thanked: 1907 times

Postby TJ » 14 Feb 2010

@wegi:

1. Andrew posted somewhere that plotting from a signal will be implemented in the future.

In the meantime, you can use drawing objects (i.e. TL, ARW, or Text) if that helps.

2. You can calculate a 30-minute simple moving average on a 5-minute chart by adding the 30-minute data stream as data2.

value1 = average(close,10) of data2;


reference:
The essential EasyLanguage programming guide
Multi-Data Analysis... pg. 15
Multi-Data Indicators... pg.71
Multi-Data Reference... pg.72
Data(n)... pg.72
https://www.TS.com/support/bo ... ntials.pdf



HTH
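For readers unfamiliar with the data2 idea, the compression step can be sketched outside the platform. This is plain illustrative Python, not PowerLanguage; the helper names are invented:

```python
# Conceptual sketch of what "average(close, 10) of data2" does:
# compress 5-minute closes into 30-minute bars (6 five-minute bars each),
# then average the last 10 compressed closes.
def compress(closes_5m, ratio=6):
    """Close of each 30-min bar = last close of each group of 6 five-min bars."""
    return [closes_5m[i + ratio - 1]
            for i in range(0, len(closes_5m) - ratio + 1, ratio)]

def sma(series, length):
    """Simple moving average over the most recent `length` values."""
    return sum(series[-length:]) / length

closes_5m = list(range(1, 61))    # 60 five-minute closes (5 hours of data)
closes_30m = compress(closes_5m)  # [6, 12, 18, ..., 60] -- 10 bars
print(sma(closes_30m, 10))        # 33.0
```

On a chart, data2 delivers the compressed series ready-made, so the script only performs the final averaging step.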

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 14 Feb 2010

To answer your question above regarding turbo mode, no, disabling turbo mode on an I7 does not increase or decrease MultiCharts performance per se. You have to bear in mind, turbo mode is designed to temporarily bump up the speed on one or two cores on a four core chip without overheating (in other words, improve performance while keeping the same thermal profile), to improve performance specifically on applications that aren't heavily multithreaded and thus don't use all 4 cores. Because MultiCharts is multithreaded, turning this on or off doesn't affect performance significantly, since it's basically always using all available cores when it's optimizing.

Hyperthreading is of course not as effective as having multiple cores, but it does offer performance improvements over turning it off, even for MultiCharts which is heavily threaded, because it's able to do more things at the same time e.g. 8 vs. 4 with it off. Yes, it takes some time to switch context vs. not switching, but this is more than made up for by the fact that the two threads proceed at the same time when it isn't switching context, thus there's a performance edge to using Hyperthreading if it's available for MultiCharts optimization. If you had to choose between having a 2 core + hyperthreading or a 4 core without hyperthreading, of course, the 4 core without hyperthreading would be much faster because it doesn't have to switch context. But, you don't have to choose - you can have both and by default the I7 does have both - crippling it by disabling this feature decreases performance. Turning off hyperthreading on a 4 core I7 chip (thus, only 4 threads at a time) results in a decrease in speed for MultiCharts optimization of almost 10%.

In comparison, limiting a 4 core I7 chip to 2 cores (but leaving hyperthreading on) results in a decrease in performance of 50%, just as expected for a heavily threaded application. (And of course, limiting it to 1 core results in almost 75% reduction in speed.)
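Restating those percentages as speedup factors makes the near-linear core scaling explicit (the percentages are the measurements above; the conversion is simple arithmetic):

```python
# Bruce's figures, restated as throughput relative to the full configuration.
full      = 1.00               # 4 cores + hyperthreading, normalized
ht_off    = full * (1 - 0.10)  # "almost 10%" slower with HT disabled
two_cores = full * (1 - 0.50)  # half the cores -> half the speed
one_core  = full * (1 - 0.75)  # "almost 75% reduction in speed"

# Relative to one core, the full configuration is about 4x faster:
# near-linear scaling across cores, plus a hyperthreading bonus.
print(full / one_core)  # 4.0
```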

So, in summary, turbo mode is not material to these results because that's for non-multithreaded applications. Hyperthreading helps, but not as much as multiple cores of course. The best results for a given clockspeed will come when hyperthreading is used together with multiple cores, on a fast architecture, such as the I7 has.

My intention in posting this thread was to give more details than were previously publicly available regarding the benefits of multithreading on MultiCharts for optimization. I hope this has helped to clear up more specifically how it helps.

wegi
Posts: 112
Joined: 02 Jun 2009
Has thanked: 3 times
Been thanked: 12 times

Postby wegi » 14 Feb 2010

Thanks TJ and Bruce,

I will do some more tests to find out whether the DLL calls cause the overhead, or whether the logic inside pushpop.dll is too slow.

I will let you know - in a new thread - if I find something interesting.

Last question here: does MultiCharts provide global variables?
I only found that this is possible with globalVariable.dll.

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 14 Feb 2010

Please do a search on the forum for "global variables" - there are several threads specifically about this, with download links etc., and it's unrelated to optimization performance with multithreading.

Kasper
Posts: 17
Joined: 30 Dec 2008

Postby Kasper » 14 Feb 2010

To answer your question above regarding turbo mode, no, disabling turbo mode on an I7 does not increase or decrease MultiCharts performance per se. You have to bear in mind, turbo mode is designed to temporarily bump up the speed on one or two cores on a four core chip without overheating (in other words, improve performance while keeping the same thermal profile), to improve performance specifically on applications that aren't heavily multithreaded and thus don't use all 4 cores. Because MultiCharts is multithreaded, turning this on or off doesn't affect performance significantly, since it's basically always using all available cores when it's optimizing.



If hyperthreading actually increases speed, then TS should look at their code :-) That means the software is not utilizing the processor to its fullest extent. So running MC through the Intel simulation might provide some insight. In Photoshop CS4 and AutoCAD, enabling hyper-threading actually gives a 5-15% performance drop, which is within the "normal" calculated algorithmic loss from context switching.

But I would love to get CUDA (NVidia) or OpenCL (ATI and the rest) support in MultiCharts. Math simulations are so much faster when you run them on a graphics card.

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 15 Feb 2010

If hyperthreading actually increases speed, then TS should look at their code. That means the software is not utilizing the processor to its fullest extent. So running MC through the Intel simulation might provide some insight. In Photoshop CS4 and AutoCAD, enabling hyper-threading actually gives a 5-15% performance drop, which is within the "normal" calculated algorithmic loss from context switching.

First of all, it's possible to write code in such a way that hyperthreading decreases performance (depending on what each thread is doing), especially on Pentium 4 CPUs, which had a less sophisticated implementation. However, that's by no means the norm, and the short-version summary for modern CPUs is this: turning on hyperthreading will not help single-threaded applications, and will somewhat help heavily threaded applications. If you'd like to read more about this, I would suggest starting with Wikipedia at http://en.wikipedia.org/wiki/Hyper-threading which has a pretty good summary. (I know Wikipedia has its faults - but sending you to Intel's own documentation would be like asking you to read 6 months' worth of paperwork and come back and report.) So yes, there were (especially in the past) some applications whose performance decreased with hyperthreading turned on, but this is unusual these days, especially now that Pentium 4 processors are no longer the standard (we only keep one around in case we have to test something on a slower "comparison" system). In general these days, the more heavily threaded an application is, the more it benefits from hyperthreading, unless the threads themselves are resource-constrained, which is a simple design consideration: now that developers know hyperthreading exists, and has for a long time, they simply build threads that aren't resource-constrained, so this isn't an issue. Of course, multiple cores help more than hyperthreading, but this isn't either/or - it's a both situation, and turning on both helps, in general, more than turning on either individually.
If you read up on this, or simply test it yourself as we have, you'll see that this is the case: hyperthreading does not decrease performance in general, or in well-written multithreaded applications specifically, with the possible exception of older Pentium 4 CPUs and applications written before developers understood hyperthreading and how it handles resources (which for major applications generally means going back a number of versions now).

It would be false to suggest that because hyperthreading helps MultiCharts' performance, it isn't threaded correctly - hyperthreading helps almost any properly written multithreaded application these days, assuming a modern processor and an operating system that supports it, as all modern ones do. It would be possible to design a benchmark so that turning on hyperthreading hurt the results (for instance, by resource-starving a single thread while turning on several threads - some older benchmarks did this, either by accident or on purpose), but that isn't the way modern applications are built, and it would itself be "bad design" these days, now that developers build applications specifically for multithreaded operating systems, multiple cores, and hyperthreading, and understand how to get the most out of them (as both MultiCharts and Photoshop do - see below).

Regarding Photoshop CS4 - we have a copy of that here and don't see what you're referring to. In fact, published benchmarks back us up http://www.anandtech.com/cpuchipsets/sh ... =3634&p=11 showing that hyperthreading speeds up Photoshop, and that this is one of the reasons an I7 is faster than an I5 running the same number of cores at the same clock speed:

Hyper Threading does have a real benefit in Photoshop and thus we see the Core i5 750 suffering a bit. It's still faster than the Phenom II 965 BE but it is marginally slower than the i7 920.

If you look at their numbers, you'll see the results look just like ours above for MultiCharts - because that's the reality of it. I think you must be remembering some older version of Photoshop, or perhaps results on an older Pentium 4 processor, because hyperthreading slowing down modern multithreaded applications just isn't the reality these days. I want to make sure users aren't left believing that hyperthreading will slow down their systems, when in fact it speeds up modern, well-written applications like MultiCharts and Photoshop CS4.

While multiple cores speed up performance more than hyperthreading does, in their best and most expensive processors Intel re-introduced hyperthreading IN ADDITION to multiple cores for a simple reason - it gives another performance boost on top of the boost from multiple cores. That's part of what sets the I7 apart from the I5, and part of why the I7 processors cost more - with hyperthreading, they're faster.
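To make the "heavily threaded optimizer" point concrete, here is a minimal, hypothetical sketch (Python, not MultiCharts' actual code) of the pattern an optimizer follows: many independent parameter trials fanned out across every logical core the OS reports, hyperthreaded ones included. The `score` function is a made-up stand-in for evaluating one parameter set.

```python
import os
import time
from concurrent.futures import ProcessPoolExecutor

def score(params):
    """Stand-in for one optimization trial: a pure CPU-bound evaluation."""
    a, b = params
    total = 0.0
    for i in range(50_000):
        total += (a * i - b) % 7
    return total

def run_trials(trials, workers):
    """Fan the independent trials out over `workers` processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(score, trials))

if __name__ == "__main__":
    trials = [(a, b) for a in range(8) for b in range(8)]  # 64 parameter sets
    logical_cores = os.cpu_count()  # counts hyperthreaded (logical) cores too
    t0 = time.perf_counter()
    run_trials(trials, workers=1)
    serial = time.perf_counter() - t0
    t0 = time.perf_counter()
    run_trials(trials, workers=logical_cores)
    parallel = time.perf_counter() - t0
    print(f"{logical_cores} logical cores, speedup ~{serial / parallel:.1f}x")
```

Because the trials are independent and not resource-starved, adding logical (hyperthreaded) workers on top of the physical cores generally adds a further modest speedup; the exact gain depends on the machine.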

Regarding AutoCAD, it's worth noting that the older AutoCAD 14 didn't support multithreading. It appears in new versions including AutoCAD 2009 there may be a design defect / bug such that AutoDesk isn't handling hyperthreading correctly. See also http://discussion.autodesk.com/forums/m ... ID=6253023 for more details on this and users' own reports. Users report that with a 4 core processor and hyperthreading turned off, AutoCAD uses 4 virtual CPUs out of 4 available, while with hyperthreading turned on, AutoCAD uses 1 virtual CPU out of 8 available (which is what makes it much slower than with hyperthreading turned off):

The issue we currently have with this configuration is Autocad MEP 2009 will not recognize 2 or more cores when the hyperthread option is set in the BIOS. We only see one core working when hyperthread is enabled. We see four cores working when hyperthread is disabled. The x58 can toggle this hyperthread feature off or on. So when hyperthread is enabled, the machine has 8 CPU cores. As stated though, MEP and Autocad 2009 will utilize all four cores when the x58 hyperthread feature is turned off, but only a single core when hyperthread is enabled.

The description above is indicative of a design flaw in AutoCAD rather than a performance optimization - it's clearly in error and has nothing to do with multithreading in general. It's likely just an unintended bug that AutoDesk will fix in a coming release. AutoDesk themselves do not recommend disabling hyperthreading, and describe it as being handled just as multiple processors are, which makes it all the more likely this is a bug they'll be fixing if they haven't already.

But I would love to get CUDA (NVidia) or OpenCL (ATI and rest) support in MultiCharts. Math simulations are so much faster when you use a graphics card cpu to do it.

Regarding offloading tasks to the GPU, we've looked at this in some depth. In general, the issue is that technical analysis / back-testing functions aren't raw computation tasks; they generally involve a lot of historical data to get what they need. It is possible to write back-testing that is specialized for GPU processing, but it requires dividing the tasks in a special way so that any one task needs only a much smaller amount of memory than normal in order to work efficiently. This is not the way technical analysis software works in general, and because of the way EasyLanguage works, making it work with EasyLanguage would be especially challenging. It's an interesting thing to talk about, but its practicality is limited as of today. It is likely that in the coming years there will be video cards with a next generation of these capabilities that can handle more general tasks, and that will be able to be exploited, but doing so today would be a large distraction from the development path, because it would require things to be written in an extremely specialized way (and you would likely have to give up PowerLanguage as well as the current back-testing architecture to get there). Something like this will come to pass, but it's going to be a while, and current GPU architectures just aren't there yet for general tasks like technical analysis.
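As an illustration of why the access pattern matters (a hypothetical Python sketch, not MultiCharts code): an indicator like a moving average is "data parallel" - every output depends only on a fixed window of inputs, which is the shape CUDA/OpenCL accelerates well - while a back-test is path-dependent, with each bar's result depending on strategy state carried over from the previous bar.

```python
# Data parallel: each output depends only on a fixed window of inputs.
# Every window could be computed by an independent GPU thread.
def sma(prices, n):
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

# Path dependent: each step depends on the position and equity produced
# by the previous step, so the loop cannot simply be split across
# thousands of GPU threads.
def backtest(prices, n):
    avg = sma(prices, n)
    position, equity = 0, 0.0
    for i in range(1, len(avg)):
        price = prices[i + n - 1]
        if position:  # mark one held unit to market
            equity += price - prices[i + n - 2]
        position = 1 if price > avg[i] else 0  # toy rule: long above the SMA
    return equity
```

The `sma` list comprehension is the kind of independent, matrix-style work GPUs excel at; the `backtest` loop carries state from bar to bar, which is the serialization problem described above.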

Kasper
Posts: 17
Joined: 30 Dec 2008

Postby Kasper » 22 Feb 2010

Just wanted to point to a few very good articles / speed tests on the i7 platform.

the hyperthread/turbo mode is not black and white.. :-0

ixbtlabs.com/articles3/cpu/archspeed-2009-3-p1.html (all the articles in the series are actually quite good)

and is very application dependent in the Core i7 tests.

So we are both right - but Bruce a bit more right than me - so 4 points to Bruce and 1 to me :-)

In short - hyper-threading gives an average 6-10% boost - but SOME applications suffer from it, giving NEGATIVE values. I do think that if apps are heavily optimized for multithreading, they may benefit from having hyper-threading off. The more real cores you have, the less you should need hyper-threading.

I just got a 6 core (12 hyper-thread cores) and will report on that :-) soon. (I need a new motherboard and RAM before I'm up and running.) It will be my development station, since trading happily runs on my very stable HP xw4600.

I think I'll put in 2x ATI 5570 to drive my 5 screens as ATI driver 10.3 should support multiple cards and Eyefinity - and the cards are relatively cheap.

Regarding CUDA/OpenCL - well, graphics cards are math processors, and really great at matrix math, like multiplying a table of data with another table of data. But since I have not seen the architecture of the MC base, it is hard to tell whether CUDA/OpenCL can give any speedups. A few big boys are using it now for financial calculations, option pricing, neural networks - and many other places where you use floating point.

User avatar
Henrik
Posts: 140
Joined: 13 Apr 2010
Has thanked: 25 times
Been thanked: 11 times

Postby Henrik » 19 Apr 2010

Which configuration should be better:

1.) Desktop Intel i7 980X (6x 3.33 GHz)
2.) Server AMD DualCPU 2x Opteron 6186 (2x 12x 1.9 GHz)?

And: how much RAM does MultiCharts need? Is more RAM always better, or are 8 GB of good RAM enough for 1) and 2x 6 GB of good RAM for 2)?


edit:
and: isn't it possible to delegate optimization processes to many desktop PCs? :)

User avatar
TJ
Posts: 6586
Joined: 29 Aug 2006
Location: Global Citizen
Has thanked: 971 times
Been thanked: 1907 times

Postby TJ » 19 Apr 2010

Which configuration should be better:

1.) Desktop Intel i7 980X (6x 3.33 GHz)
2.) Server AMD DualCPU 2x Opteron 6186 (2x 12x 1.9 GHz)?


this is personal opinion only...

stay with Intel.


And: how much RAM does MultiCharts need? Is more RAM always better, or are 8 GB of good RAM enough for 1) and 2x 6 GB of good RAM for 2)?

edit:
and: isn't it possible to delegate optimization processes to many desktop PCs? :)


32 bit apps can address up to 2GB per service.
MC is a 32bit app.

User avatar
Henrik
Posts: 140
Joined: 13 Apr 2010
Has thanked: 25 times
Been thanked: 11 times

Postby Henrik » 19 Apr 2010


32 bit apps can address up to 2GB per service.
MC is a 32bit app.


Thank you, TJ.

"Per service" - does that mean "per core" or "per (hyper)thread"?
How much fast RAM is optimal for MC on an i7 980X?

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 19 Apr 2010

For many I7 mainboards (Intel X58 included) the fastest speeds are achieved by having at least three memory slots occupied, so that triple-channel access can be done. This usually means a minimum choice of 3GB (with 1 GB sticks) or 6GB (with 2 GB sticks). Since Windows 7 64 bit can handle >4GB of RAM, 6GB is often the optimal choice if you are running multiple applications, because each 32 bit application (like MC) can have its own 2GB to work with.

We're currently an all-Intel shop when it comes to processors. There's nothing here against AMD processors per se - it's simply a decision we made to standardize, in the interest of achieving the highest degree of quality control and thus reliability, having spent years tracking down hard-to-diagnose differences in floating point computations, exception handling, etc. While these kinds of problems can invariably be solved, we simply decided to focus on Intel so we could devote any time this might require in the future to other things.

Having more than 6GB of RAM is probably not going to increase performance on MC optimization alone - MC can only use 2GB of RAM in its current incarnation, so the need for more would be driven only by (a) the desire to achieve triple channel performance and still have at least 4GB, or (b) the desire to use OTHER applications at the same time, each of which may also consume memory as well. The main argument for >6GB increasing speed on a 32 bit application would be to run a RAM drive, which is often not the low-hanging fruit.

It's worth noting that 64 bit Windows 7 can allocate 2GB to EACH 32 bit application, while 32 bit Windows 7 is limited to roughly 4GB of memory for everything combined (typically about 2GB for applications and 2GB for OS/caching/other). Thus, if you have 4GB - or especially more, such as the 6GB that makes best use of triple channel on an I7 - we would typically recommend going to 64 bit Windows 7 to make the best use of it.

Regarding your question above, no, it's not 2GB per core/thread, and hyperthreading does not affect the computation. It's 2GB per application process, regardless of how many threads or cores are involved.
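One quick way to see which side of the 2GB-per-process cap an application falls on is to check its pointer size - a hypothetical sketch, here inspecting the Python interpreter itself rather than MC:

```python
import struct

# Pointer size: 4 bytes in a 32-bit process, 8 bytes in a 64-bit one.
bits = struct.calcsize("P") * 8
print(f"This process is {bits}-bit")

if bits == 32:
    # 4 GB of virtual address space in total; user code on 32-bit
    # Windows normally gets 2 GB of it, regardless of thread count.
    print("Per-process address space: 4 GB (2 GB usable by default)")
else:
    print("64-bit process: the 2 GB per-process cap does not apply")
```

The cap is per process, not per core or per thread, which is why adding cores or hyperthreading changes optimization speed but not how much RAM a 32-bit MC can use.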

User avatar
Henrik
Posts: 140
Joined: 13 Apr 2010
Has thanked: 25 times
Been thanked: 11 times

Postby Henrik » 20 Apr 2010

Ok thank you for clarification.

I am thinking about a new PC... with an i7 980X (6x 3.33 GHz) instead of my current PC, an AMD 2.6 GHz quad-core (4 years old...).

The i7 should halve the estimated optimization time, I think? For example, 24 hours instead of 48 hours?
Is there a way to go 4x faster than my AMD 2.6 GHz quad-core?


The best solution (suggestion):
Make it possible to share the optimization process across 2 or more PCs.
First the main PC copies tick data and strategy code to all the other PCs, then the main PC tells each PC which parameters to calculate, and finally the main PC collects the data from the other PCs and makes a summary.
That way you could use all the PCs at home for long optimization runs...

User avatar
Bruce DeVault
Posts: 438
Joined: 19 Jan 2010
Location: Washington DC
Been thanked: 2 times
Contact:

Postby Bruce DeVault » 20 Apr 2010

You may want to look at PassMark's CPU list at http://www.cpubenchmark.net/ for a ballpark speed comparison of one processor with another for 100% cpu intensive tasks.

There are other platforms that do grid optimization (using more than one OS+platform to conduct a large number of trials), but it isn't available in MC at this time. There was a big push to do grid optimization work just a few years ago, but since then multicore processors have gained substantial traction, and now the low-hanging fruit from a cost perspective is to use multicore/multi-CPU machines rather than multiple cheap low-end PCs. So the pendulum has swung somewhat back against grid work, at least at the low end of retail trading from a home office - mainly because multicore, multi-CPU machines have gotten so inexpensive, and because Windows 7's multithreading support is better than older OS support was.

User avatar
Henrik
Posts: 140
Joined: 13 Apr 2010
Has thanked: 25 times
Been thanked: 11 times

Postby Henrik » 02 Jun 2010

There are other platforms that do grid optimization (using more than one OS+platform to conduct a large number of trials), but it isn't available in MC at this time. There was a big push to do grid optimization work just a few years ago, but ...


MT 5 (MT5) supports multicore and supports more than one PC for one optimization. In MT5 you need only an IP, port and password to connect to the other PCs in the network. You install MT5 on every PC, and then MT5 shares the database and optimization progress with all the other PCs.
Maybe it's a good feature for MC, because I could easily double or triple (or more) the optimization speed (I have some PCs at home), and you (the TS team) would have another reason for me (a customer) to buy a second MC license 8)

User avatar
Henrik
Posts: 140
Joined: 13 Apr 2010
Has thanked: 25 times
Been thanked: 11 times

Postby Henrik » 14 Jun 2010

FYI:
Comparison:

1.) Core 2 Duo 2.5 GHz P8700, 6 GB RAM (laptop)
2.) AMD quad-core 4x 2.2 GHz, 8 GB RAM (desktop)
3.) AMD 6-core 6x 3.3 GHz with Crucial SSD C300 in RAID-0, 8 GB RAM PC3-12800 DDR3-1600 CL7 RH (latency: CL7-7-7-24), ASUS Crosshair IV Formula
4.) Same as 3., but overclocked to 3.9 GHz

Optimization of a stupid strategy:
1.) needs 1.52 minutes,
2.) needs 1.52 minutes,
3.) needs 0.29 minutes,
4.) needs 0.24 minutes.

=> the overclocked current CPU needs a quarter of the time of a typical CPU from 2-3 years ago.

User avatar
TJ
Posts: 6586
Joined: 29 Aug 2006
Location: Global Citizen
Has thanked: 971 times
Been thanked: 1907 times

Postby TJ » 14 Jun 2010

I am surprised the AMD quadcore performance was no better than an Intel dualcore notebook.

I would be interested to see the same test on an i7.

User avatar
Henrik
Posts: 140
Joined: 13 Apr 2010
Has thanked: 25 times
Been thanked: 11 times

Postby Henrik » 15 Jun 2010

I am surprised the AMD quadcore performance was no better than an Intel dualcore notebook.

I would be interested to see the same test on an i7.


The AMD is a little bit older and has a slower mainboard. The Intel dual-core is a brand-new, expensive laptop...

i7: yes, I would also like to see this test with a 980X (6x 3.33 GHz) - AMD flagship vs. Intel flagship.
But I think the SSD speed is also important. My RAID-0 SSDs have an average speed of 400 MB/s, peaking at 580 MB/s; a classic hard drive reaches about 80-120 MB/s (tested with HD Tune).

User avatar
4trading
Posts: 47
Joined: 29 Jun 2010
Location: Texas
Has thanked: 22 times
Been thanked: 3 times

Re: MultiCharts Optimization Speed

Postby 4trading » 06 Aug 2010

Hi Henrik:

When you ran your AMD x6 cores, was MC using all 6 cores then? Based upon the backtest time, it would appear so.
:D

User avatar
Henrik
Posts: 140
Joined: 13 Apr 2010
Has thanked: 25 times
Been thanked: 11 times

Re: MultiCharts Optimization Speed

Postby Henrik » 06 Aug 2010

Yes, all 6 cores :D

User avatar
4trading
Posts: 47
Joined: 29 Jun 2010
Location: Texas
Has thanked: 22 times
Been thanked: 3 times

Re: MultiCharts Optimization Speed

Postby 4trading » 06 Aug 2010

Wow, that's what I thought. Thanks. That's great news. :D
Guten Tag.

User avatar
TJ
Posts: 6586
Joined: 29 Aug 2006
Location: Global Citizen
Has thanked: 971 times
Been thanked: 1907 times

Re: MultiCharts Optimization Speed

Postby TJ » 06 Aug 2010

Someone (I've forgotten who) tested MultiCharts on a 2x quad-core computer.
He verified that all 8 cores were used.

User avatar
4trading
Posts: 47
Joined: 29 Jun 2010
Location: Texas
Has thanked: 22 times
Been thanked: 3 times

Re: MultiCharts Optimization Speed

Postby 4trading » 07 Aug 2010

Thanks TJ, that's good news. I'm picking components for my new PC build now, so this is important info. MC is newer to me and I'm going to dedicate more time to it. It's quite a bit faster than TS for testing.

P.S. Nice icon or Avatar... 8)

Tresor
Posts: 1104
Joined: 29 Mar 2008
Has thanked: 12 times
Been thanked: 51 times

Re:

Postby Tresor » 14 Oct 2010

Henrik wrote:Crucial SSD C300


Henrik,

There is a finite number of read/write cycles that SSD memory can handle in its life. When it comes to SSDs and MC: is each new incoming tick in QM treated as a separate write (cycle)? In that case the SSD wouldn't last long. Or is it handled differently?

Thanks

hilbert
Posts: 222
Joined: 17 Aug 2011
Has thanked: 76 times
Been thanked: 64 times

Re: Re:

Postby hilbert » 04 Dec 2014

Tresor wrote:
Henrik wrote:Crucial SSD C300


Henrik,

There is a finite number of read/write cycles that SSD memory can handle in its life. When it comes to SSDs and MC: is each new incoming tick in QM treated as a separate write (cycle)? In that case the SSD wouldn't last long. Or is it handled differently?

Thanks

Very old thread, but I would like to know the answer to this question, as I am considering buying an SSD.

orion
Posts: 250
Joined: 01 Oct 2014
Has thanked: 65 times
Been thanked: 104 times

Re: MultiCharts Optimization Speed

Postby orion » 04 Dec 2014

hilbert, you have nothing to worry about. A few things to note:

1) Each incoming tick is not a separate disk write cycle as posited above. Operating systems do block reads and block writes.

2) SSDs go even further than typical operating system file systems and implement specialized log structured file systems for enhanced durability.

3) The nature of our application (trading) is such that the bulk of our disk accesses are reads rather than writes, since we spend much more time accessing old data for backtesting. The ratio of reads to writes is 10-100X.
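The coalescing in point 1 can be sketched with Python's buffered I/O (illustrative only - the tick size, buffer size, and rates are made-up numbers, and this is not how QuoteManager actually persists data):

```python
import io

class CountingRaw(io.RawIOBase):
    """Raw byte sink that counts the OS-level-style writes it receives."""
    def __init__(self):
        super().__init__()
        self.write_calls = 0
        self.bytes_written = 0
    def writable(self):
        return True
    def write(self, b):
        self.write_calls += 1
        self.bytes_written += len(b)
        return len(b)

# Made-up numbers: 40 bytes per stored tick, 100 ticks/sec
# for 1000 seconds' worth of data.
TICK_SIZE = 40
N_TICKS = 100_000

raw = CountingRaw()
buffered = io.BufferedWriter(raw, buffer_size=64 * 1024)  # 64 KiB buffer
for _ in range(N_TICKS):
    buffered.write(b"x" * TICK_SIZE)  # one "incoming tick"
buffered.flush()

print(f"{N_TICKS} tick writes coalesced into {raw.write_calls} block writes")
```

The OS page cache performs the same kind of batching beneath the application, which is why millions of tick arrivals do not translate into millions of flash program/erase cycles.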

hilbert
Posts: 222
Joined: 17 Aug 2011
Has thanked: 76 times
Been thanked: 64 times

Re: MultiCharts Optimization Speed

Postby hilbert » 04 Dec 2014

orion wrote:hilbert, you have nothing to worry about. A few things to note:

1) Each incoming tick is not a separate disk write cycle as posited above. Operating systems do block reads and block writes.

2) SSDs go even further than typical operating system file systems and implement specialized log structured file systems for enhanced durability.

3) The nature of our application (trading) is such that the bulk of our disk accesses are reads rather than writes, since we spend much more time accessing old data for backtesting. The ratio of reads to writes is 10-100X.

Thanks orion for reassuring me that there is nothing to worry about!
I need a clarification regarding your point #1. What do you mean when you say operating systems do block reads and block writes? My understanding is that each incoming tick is written to the file cache (if RAM cache is off). So I thought that as soon as a tick arrives, it's written to the SSD (since the file resides somewhere on the SSD). But you seem to be implying that this is in fact not the case, and that the operating system batches reads and writes into blocks.

If I am collecting data for 50 charts, and each second a total of 100 ticks arrive (for all 50 charts taken together), that means 2.5 million ticks (= 100 * 3600 * 7) come in over a 7-hour period. So you seem to be implying that this is not equivalent to 2.5 million write operations. I wonder how many write operations it is equivalent to!

Btw, I understood your point#2 and 3. Thanks for help. :)

orion
Posts: 250
Joined: 01 Oct 2014
Has thanked: 65 times
Been thanked: 104 times

Re: MultiCharts Optimization Speed

Postby orion » 04 Dec 2014

Why do you have RAM cache off?

hilbert
Posts: 222
Joined: 17 Aug 2011
Has thanked: 76 times
Been thanked: 64 times

Re: MultiCharts Optimization Speed

Postby hilbert » 04 Dec 2014

orion wrote:Why do you have RAM cache off?

I am still running MC 32-bit (it doesn't support RAM cache). I have 4 GB of RAM. I figured putting an SSD into my system would be the best bang for the buck as far as upgrading my system is concerned. Even if I move to MC 64 (which supports RAM cache), with only 4 GB of RAM I am not sure it would make sense to have RAM cache on.

User avatar
TJ
Posts: 6586
Joined: 29 Aug 2006
Location: Global Citizen
Has thanked: 971 times
Been thanked: 1907 times

Re: MultiCharts Optimization Speed

Postby TJ » 04 Dec 2014

hilbert wrote:
orion wrote:Why do you have RAM cache off?

I am still running MC 32-bit (it doesn't support RAM cache). I have 4 GB of RAM. I figured putting an SSD into my system would be the best bang for the buck as far as upgrading my system is concerned. Even if I move to MC 64 (which supports RAM cache), with only 4 GB of RAM I am not sure it would make sense to have RAM cache on.

You only have 4 GB?
Your computer must be swapping like crazy.
I would get more RAM before I would put money into an SSD.

As it is, you are probably killing your HD. Installing an SSD will only move the swap to the SSD, and kill the SSD in no time.

hilbert
Posts: 222
Joined: 17 Aug 2011
Has thanked: 76 times
Been thanked: 64 times

Re: MultiCharts Optimization Speed

Postby hilbert » 04 Dec 2014

TJ wrote:
hilbert wrote:
orion wrote:Why do you have RAM cache off?

I am still running MC 32-bit (it doesn't support RAM cache). I have 4 GB of RAM. I figured putting an SSD into my system would be the best bang for the buck as far as upgrading my system is concerned. Even if I move to MC 64 (which supports RAM cache), with only 4 GB of RAM I am not sure it would make sense to have RAM cache on.

You only have 4 GB?
Your computer must be swapping like crazy.
I would get more RAM before I would put money into an SSD.

As it is, you are probably killing your HD. Installing an SSD will only move the swap to the SSD, and kill the SSD in no time.

TJ, thanks, but I doubt my computer is swapping anything from the pagefile (if that is what you mean). I just have 30 charts open in MC (with no indicators), one or two spreadsheets, 10 tabs in Chrome, and that's it. Task Manager/resmon rarely shows more than 3 GB of RAM being used. If I misunderstood you, I apologize.
Or do you mean something else when you say my computer must be swapping like crazy? Is there any way I can measure how much my computer is swapping? Thanks. Good advice is very much appreciated here.

orion
Posts: 250
Joined: 01 Oct 2014
Has thanked: 65 times
Been thanked: 104 times

Re: MultiCharts Optimization Speed

Postby orion » 05 Dec 2014

hilbert, it is possible you are staying within the physical limits of your memory, as you say. You can find out how much swapping is going on by looking at hard faults per second in the Windows Resource Monitor. However, I agree with TJ that a new system with 16 or 32 GB of RAM would be a better investment than an SSD.

User avatar
TJ
Posts: 6586
Joined: 29 Aug 2006
Location: Global Citizen
Has thanked: 971 times
Been thanked: 1907 times

Re: MultiCharts Optimization Speed

Postby TJ » 05 Dec 2014

hilbert wrote:TJ, Thanks but I doubt my computer is swapping anything from pagefile
::
Is there any way I can measure how much my computer is swapping? Thanks. Good advice is very much appreciated here.

Open your resmon...

Look at the Hard Faults/sec graph. It will tell you how many times an application failed to read from memory due to a shortage of RAM, forcing Windows to go to the pagefile instead.

Do let us know how your computer fared.

