Suggestion: We Need Some BENCHMARK Testing!

Questions about MultiCharts and user contributed studies.
denizen2
Posts: 125
Joined: 17 Jul 2005
Has thanked: 8 times
Been thanked: 1 time

Suggestion: We Need Some BENCHMARK Testing!

Postby denizen2 » 20 Mar 2007

There are statements in several topics relating to some version of "it takes too long" for ...... [fill in the blanks] :wink: Everybody probably has a different idea about what is 'too long' for something to take.

So I would like to suggest that TSSupport provide some 'benchmark' or 'example' scripts, perhaps whatever they might already use for 'quality assurance' testing, covering several data vendors and at least two levels of chart/workspace complexity. If these test examples were posted in this forum, users could try to duplicate each one and report the time it takes to complete. Of course, different computer setups, different internet providers, etc., will add a lot of variability to the numbers, but it would at least provide some kind of common reference to support our understanding of what might or might not be a bug/issue worth further investigation by TSSupport, or by the users themselves.

I would suggest that each 'benchmark' test example specify the following (a rough sketch of the kind of timing harness I have in mind appears after this list):

(1) Data symbol, resolution(s), and number of days of back-fill history, delivered as a workspace file with pre-specified chart(s)

(2) A specific sequence of download 'tasks' that demonstrates that data held in the memory cache saves time on subsequent creation of charts using the same data [& specific results, as determined in your own lab testing]

(3) A demonstration of the performance hit to expect from tick-data bars, and from combining N-tick bars with other bar types on the same chart [& specific results, as determined in your own lab testing]
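
Just to illustrate the kind of comparable, structured result I have in mind, here is a rough sketch of a timing harness. To be clear, MultiCharts offers no such scripting interface that I know of; `run_benchmark`, `fake_load`, and the symbol/resolution values below are all made-up names, and the 'load' step only stands in for whatever actually triggers the backfill:

```python
import time
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    test_name: str
    symbol: str
    resolution: str
    days_of_history: int
    elapsed_seconds: float

def run_benchmark(test_name, symbol, resolution, days, load_fn):
    """Time one data-loading task and return a comparable record.

    load_fn is a hypothetical stand-in for whatever triggers the
    backfill (opening the workspace, requesting the chart, etc.).
    """
    start = time.perf_counter()
    load_fn(symbol, resolution, days)
    elapsed = time.perf_counter() - start
    return BenchmarkResult(test_name, symbol, resolution, days, elapsed)

def fake_load(symbol, resolution, days):
    time.sleep(0.1)  # pretend the datafeed took 100 ms

result = run_benchmark("workspace-1", "ES", "1 min", 10, fake_load)
print(f"{result.test_name}: {result.symbol} {result.resolution}, "
      f"{result.days_of_history} days -> {result.elapsed_seconds:.2f} s")
```

If every posted result carried the same five fields, comparing numbers across machines and data vendors would be much less ambiguous.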

The logging facility should be switchable ON or OFF, so the user can easily move between the two modes. That way, logging can be disabled during a 'benchmark' test, or whenever it is not really needed.
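
As a sketch of the kind of switch I mean (the `MC_LOGGING` environment variable and the names here are invented for illustration, not an actual MultiCharts setting):

```python
import logging
import os

# Hypothetical switch: one environment variable decides whether the
# (potentially slow) diagnostic logging is active during a benchmark run.
LOGGING_ENABLED = os.environ.get("MC_LOGGING", "off").lower() == "on"

logging.basicConfig(
    level=logging.DEBUG if LOGGING_ENABLED else logging.CRITICAL)
log = logging.getLogger("benchmark")

log.debug("this line costs time only when MC_LOGGING=on")
```

The point is simply that logging should cost nothing (or nearly nothing) when switched off, so the benchmark numbers are not polluted by it.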

Perhaps other users also have suggestions about what kinds of 'tasks' or 'configurations/setups' might be appropriate for comparison testing/benchmarking? I'm sure the QA department will have some more ideas too, right? Anything right now would be better than having nothing objective as a performance and timing reference. :roll:

Then set up a special 'sticky' topic that allows users to add their own results for each of the same test examples.

Should be a very interesting 'experiment', right? :idea:

Kate
Posts: 758
Joined: 08 Dec 2006

Postby Kate » 22 Mar 2007

Denizen2,

Thank you again for the thought-provoking idea. However, from a technical viewpoint it is difficult for us to upload the "testing" scripts we use for quality assurance and to provide users with the framework we use here in the office. It would also be time-consuming to analyze and compare all the user inputs, so it would be better if our users post their individual results here and upload their workspaces so that we can test them.

However, I can reveal some of our future plans concerning the data-loading issue. Soon MultiCharts will display data in portions, as soon as each portion is received. As you know, MultiCharts currently displays a chart only once all the data has been received, so unsophisticated users might think the program is working unacceptably slowly. Judging from our tests comparing MultiCharts with other programs (AB, for instance), MultiCharts works as fast as many of them, but this is not obvious because users have to wait until the whole bulk of data has loaded. In some cases the whole amount of data cannot be downloaded at once: perhaps the datafeed is not responding, or an IB pacing violation occurs, and so on. MultiCharts is then idle, but the user might think it is still waiting for data or loading it, and hence working too slowly. That is why we decided to change the data display order so that charts are rendered portion by portion.
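
To illustrate the difference schematically (this is only a sketch, not MultiCharts code; `fetch_portions` and `render` are invented names standing in for the datafeed and the chart):

```python
import time

def fetch_portions(symbol, total_bars, portion_size=500):
    """Hypothetical datafeed: yields bars in chunks as they arrive."""
    for start in range(0, total_bars, portion_size):
        time.sleep(0.05)  # simulate per-request network latency
        yield list(range(start, min(start + portion_size, total_bars)))

def render(bars):
    print(f"chart now shows {len(bars)} bars")

# Current behaviour: wait for the whole history, then draw once.
all_bars = [b for chunk in fetch_portions("ES", 2000) for b in chunk]
render(all_bars)

# Planned behaviour: draw after every portion, so the user sees progress
# even if a later request stalls (datafeed outage, pacing violation, etc.).
shown = []
for chunk in fetch_portions("ES", 2000):
    shown.extend(chunk)
    render(shown)
```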

denizen2
Posts: 125
Joined: 17 Jul 2005
Has thanked: 8 times
Been thanked: 1 time

Postby denizen2 » 22 Mar 2007

Thank you, Kate, for your quick reply.

I understand that using your existing QA test scripts is probably not the most appropriate or feasible approach. However, maybe something more like a few 'standard' test workspaces would suffice, i.e., no 'scripts' are needed, just some workspaces with 'standard' charts of different levels of complexity.

The concept of creating a 'benchmark suite of test scripts' is probably going too far, I admit. So, again, let me suggest just creating a few workspaces that can be used simply as a *common* reference for users to try on their own machine(s), and then report the results to everybody.

This could save everybody time and confusion during the process of trying to confirm whether there is a bug or issue.

Then you people on the front lines of customer support would know more quickly that everybody is looking at the 'same page', without the time lapse associated with (1) receiving a complaint, (2) asking the user to describe the problem in more detail, (3) asking that person to send his workspace, etc. Anything that reduces the number of communication cycles would be VERY helpful to both sides, right?

At the same time, if we had some 'typical' performance-test numbers under certain defined test scenarios, we would also know what to expect from the current version. As it stands, we have only our 'subjective' and 'individual' experiences with which to evaluate the meaning of certain data-downloading issues.

So, in summary, what I am suggesting might be nothing more than some example workspaces that TSSupport has 'defined' as a 'reference' for everybody to try first, before they come to you with a 'complaint' :wink: . I would imagine that as time goes by, this 'standard reference set' of workspaces might be 'expanded' (by users themselves, or by TSSupport). It could also be used in 'tutorials' as examples of some specific issue being communicated, etc. 8)

Cheers,

denizen2

