I have an indicator (code below) where, if I create a brand-new watchlist and a brand-new chart with all settings identical, the value shown in the watchlist/scanner differs from what I see on the chart.
It all seems to come down to the amount of data that is displayed or available for the calculation.
This is easy to monitor on the chart, since I can see how many bars back are loaded, but how do I know what is going on in the Scanner/Screener/Watchlist? And why does its value change so much and fail to match the chart values?
I can understand that if I tell the screener/watchlist to look only 5 bars back, things may be miscalculated when I am using averages and so on, but why would the value change dramatically between looking 400 bars back and 500 bars back, when the longest average I am using is a 60-day/bar average?
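My only guess so far is that XAverage is recursive, so the value on any bar depends on every bar loaded before it, right back to the first one, and not just on the last 60 bars. A rough sketch of the logic as I understand it (my own variable names, not the actual built-in source):

Code:
inputs:
	Len( 60 ) ;
variables:
	Alpha( 0 ),
	MyXAvg( 0 ) ;

Alpha = 2 / ( Len + 1 ) ;

if CurrentBar = 1 then
	{ seeded from whichever bar happens to be the first one loaded }
	MyXAvg = Close
else
	{ recursive: the seed, and every bar since, still influences today's value }
	MyXAvg = MyXAvg[1] + Alpha * ( Close - MyXAvg[1] ) ;

Plot1( MyXAvg, "MyXAvg" ) ;

If that is right, it would explain why 400 bars versus 500 bars still matters, but I would like to confirm it.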
Code:
inputs:
	LongTerm1( 30 ),
	LongTerm2( 35 ),
	LongTerm3( 40 ),
	LongTerm4( 45 ),
	LongTerm5( 50 ),
	LongTerm6( 60 ),
	Price( Close ),
	FirstPercentLevel( 0.15 ),
	SecondPercentLevel( 0.2 ),
	ThirdPercentLevel( 0.25 ),
	FourthPercentLevel( 0.3 ),
	SmoothingAvg( 8 ),
	OverBColor( Cyan ) ;

variables:
	PriceDifference( 0 ),
	AverageofLongGMMA( 0 ),
	OverboughtLevel1( 0 ),
	OverboughtLevel2( 0 ),
	OverboughtLevel3( 0 ),
	OverboughtLevel4( 0 ),
	SmoothedPriceDifference( 0 ) ;

arrays:
	arr1[6]( 0 ) ;

{ the six long-term GMMA exponential averages }
arr1[1] = XAverage( Price, LongTerm1 ) ;
arr1[2] = XAverage( Price, LongTerm2 ) ;
arr1[3] = XAverage( Price, LongTerm3 ) ;
arr1[4] = XAverage( Price, LongTerm4 ) ;
arr1[5] = XAverage( Price, LongTerm5 ) ;
arr1[6] = XAverage( Price, LongTerm6 ) ;

{ distance of price from the longest (60-bar) average; only arr1[6] is used below }
PriceDifference = Price - arr1[6] ;
AverageofLongGMMA = arr1[6] ;

{ each of these reduces to AverageofLongGMMA * PercentLevel }
OverboughtLevel1 = ( AverageofLongGMMA + ( AverageofLongGMMA * FirstPercentLevel ) ) - AverageofLongGMMA ;
OverboughtLevel2 = ( AverageofLongGMMA + ( AverageofLongGMMA * SecondPercentLevel ) ) - AverageofLongGMMA ;
OverboughtLevel3 = ( AverageofLongGMMA + ( AverageofLongGMMA * ThirdPercentLevel ) ) - AverageofLongGMMA ;
OverboughtLevel4 = ( AverageofLongGMMA + ( AverageofLongGMMA * FourthPercentLevel ) ) - AverageofLongGMMA ;

SmoothedPriceDifference = XAverage( PriceDifference, SmoothingAvg ) ;

Plot1( SmoothedPriceDifference, "Price Diff" ) ;
Plot3( "Normal" ) ; { text status, overwritten below when a level is exceeded }
Plot4( OverboughtLevel1, "Overbought Level 1" ) ;
Plot5( OverboughtLevel2, "Overbought Level 2" ) ;
Plot6( OverboughtLevel3, "Overbought Level 3" ) ;
Plot7( OverboughtLevel4, "Overbought Level 4" ) ;

if SmoothedPriceDifference >= OverboughtLevel1 then
begin
	SetPlotColor( 1, OverBColor ) ;
	Plot3( "Overbought 1" ) ;
end ;
if SmoothedPriceDifference >= OverboughtLevel2 then Plot3( "Overbought 2" ) ;
if SmoothedPriceDifference >= OverboughtLevel3 then Plot3( "Overbought 3" ) ;
if SmoothedPriceDifference >= OverboughtLevel4 then Plot3( "Overbought 4" ) ;
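To try to see how much data each environment is actually using, I was thinking of plotting the bar count next to the indicator. A rough sketch (assuming CurrentBar reports in the Scanner the same way it does on a chart):

Code:
variables:
	BarsLoaded( 0 ) ;

{ CurrentBar starts counting after the MaxBarsBack buffer,
  so this is a lower bound on the bars actually loaded }
BarsLoaded = CurrentBar ;
Plot1( BarsLoaded, "Bars Loaded" ) ;

If anyone knows a more reliable way to read this out of the Scanner, I would appreciate it.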
What is going on here, and how does this all work? How do I make sure that an indicator is ALWAYS looking at the correct amount of data needed to calculate CORRECT results? How do I ensure values are consistent across the board?
Thank you for your time and help.
Kindest Regards,
Kaj