
So I would like to suggest that TSSupport post some 'benchmark' or example scripts in this forum (perhaps whatever they already use for quality-assurance testing), covering several data vendors and at least two levels of chart/workspace complexity. Users could then try to duplicate each test-example and report the time it took to complete. Of course, different computer setups, different internet providers, etc. will add a lot of variability to the numbers, but it would at least give us a common reference for judging what might or might not be a bug/issue worth further investigation, by TSSupport or by the users themselves.
I would suggest that each 'benchmark' test-example specify:
(1) The data symbol, resolution(s), and number of days of back-fill history, supplied as a workspace file with pre-specified chart(s).
(2) A specific sequence of download tasks that demonstrates whether the cached data actually saves time when subsequent charts are created from the same data (see the stopwatch sketch after this list). [Plus the specific results, as determined in your own lab testing.]
(3) A demonstration of any performance hit to expect when using tick-data bars, and when combining N-tick bars with other bar types on the same chart. [Plus the specific results, as determined in your own lab testing.]
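Just to make the reported numbers easier to compare, even a dumb 'stopwatch' script would do. Here is a rough sketch in Python; the BenchmarkSpec fields, the task wording, and the example symbol are only my own placeholders (nothing published by TSSupport). The user performs each task by hand in the platform, and the script simply records the elapsed times so everyone reports results in the same format:

    import time
    from dataclasses import dataclass, field

    @dataclass
    class BenchmarkSpec:
        """One test-example; field names are illustrative placeholders only."""
        symbol: str            # data symbol, e.g. "ES"
        resolution: str        # bar resolution, e.g. "1 min" or "100 ticks"
        backfill_days: int     # days of back-fill history to request
        tasks: list = field(default_factory=list)   # ordered task descriptions

    def run_stopwatch(spec: BenchmarkSpec) -> dict:
        """Manual stopwatch: the user performs each task in the platform by hand,
        and the script records how long each one took."""
        timings = {}
        for task in spec.tasks:
            input(f"Press Enter and then start: {task} ")
            start = time.perf_counter()
            input("Press Enter again when the task has finished: ")
            timings[task] = time.perf_counter() - start
        return timings

    if __name__ == "__main__":
        spec = BenchmarkSpec(
            symbol="ES",             # placeholder symbol/resolution/depth only
            resolution="1 min",
            backfill_days=10,
            tasks=[
                "Open the workspace and let the chart back-fill (cold, empty cache)",
                "Close and re-open the same chart (data should now be cached)",
                "Add a second chart using the same symbol and resolution",
            ],
        )
        for task, seconds in run_stopwatch(spec).items():
            print(f"{seconds:7.1f} s  {task}")

The point is just a common reporting format; the actual test-examples and the reference numbers from lab testing would still have to come from TSSupport/QA.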
The logging facility should have an easy ON/OFF switch, so logging can be turned off during a 'benchmark' test, or whenever it is not really needed.
Perhaps other users have suggestions about what kinds of tasks or configurations/setups would be appropriate for comparison testing/benchmarking? I'm sure the QA department has more ideas too, right? Anything at this point would be better than having nothing objective as a performance and timing reference.

Then set up a special 'sticky' topic where users can add their own results for each of the same test-examples.
Should be a very interesting 'experiment', right?
