Discussion: Validity issues in an otherwise well-fitting model
Sharad Gupta
2017-03-26 06:45:48 UTC
Dear All,

My model shows good overall fit (CMIN/df = 2.997, GFI = .95, TLI = .917, RMSEA < .05 with PClose > .5), but the validity values are not good, as follows.

The StatWiki tool gives the following output (the Soc/En/Ec columns are the inter-construct correlations, with the square root of AVE on the diagonal):
        CR      AVE     MSV     ASV     Soc     En      Ec
Soc     0.826   0.490   0.221   0.198   0.700
En      0.777   0.538   0.221   0.177   0.470   0.734
Ec      0.552   0.330   0.175   0.154   0.418   0.364   0.575
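For reference, CR and AVE are computed from the standardized factor loadings. The short Python sketch below uses made-up loadings, chosen to roughly match the Soc row, just to show the arithmetic; the real values come from the CFA output (e.g. the standardized regression weights in AMOS).

# Illustration only: CR and AVE from standardized factor loadings.
# The loadings below are invented for the example.

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each item's error variance is 1 - loading^2 for standardized loadings.
    s = sum(loadings)
    errors = sum(1.0 - l * l for l in loadings)
    return s * s / (s * s + errors)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    return sum(l * l for l in loadings) / len(loadings)

soc_loadings = [0.72, 0.70, 0.68, 0.71, 0.69]               # hypothetical
print(round(composite_reliability(soc_loadings), 3))        # ~0.83
print(round(average_variance_extracted(soc_loadings), 3))   # ~0.49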

Since the Ec construct has three sub-constructs (VS, DFC, CC), I also checked validity for all five constructs/sub-constructs and got the following results:
        Alpha   CR      AVE     MSV     ASV     Soc     En      CC      VS      DFC
Soc     0.74    0.826   0.490   0.221   0.098   0.700
En      0.795   0.777   0.538   0.221   0.089   0.470   0.734
CC      0.904   0.771   0.531   0.045   0.016   0.076   0.110   0.729
VS      0.672   0.671   0.408   0.211   0.086   0.270   0.243   0.045   0.639
DFC     0.725   0.568   0.253   0.211   0.102   0.304   0.249   0.211   0.459   0.503

Alpha values were taken from SPSS.
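As a quick check, the usual cutoffs can be applied directly to the reported numbers: CR >= .70 and AVE >= .50 for convergent validity, and MSV < AVE plus the Fornell-Larcker criterion (square root of AVE greater than the inter-construct correlations) for discriminant validity. A minimal Python sketch using the values from the first table:

import math

# Reported values for the three main constructs: (CR, AVE, MSV).
constructs = {
    "Soc": (0.826, 0.490, 0.221),
    "En":  (0.777, 0.538, 0.221),
    "Ec":  (0.552, 0.330, 0.175),
}
# Reported inter-construct correlations.
correlations = {("Soc", "En"): 0.470, ("Soc", "Ec"): 0.418, ("En", "Ec"): 0.364}

for name, (cr, ave, msv) in constructs.items():
    print(name,
          "CR>=.70:", cr >= 0.70,      # construct reliability
          "AVE>=.50:", ave >= 0.50,    # convergent validity
          "MSV<AVE:", msv < ave)       # discriminant validity

for (a, b), r in correlations.items():
    # Fornell-Larcker: sqrt(AVE) of each construct should exceed their correlation.
    ok = math.sqrt(constructs[a][1]) > r and math.sqrt(constructs[b][1]) > r
    print(a, b, "sqrt(AVE) > r:", ok)

On these numbers the discriminant checks pass, but Ec fails both the CR and AVE cutoffs and Soc sits just under the AVE cutoff, which is the convergent-validity concern.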

Even after collecting 788 usable data points and obtaining a well-fitting model, the model is not showing good validity. How can we address this issue?

Can you suggest a way out here?

Regards,
Sharad
Sharad Gupta
2017-03-26 13:02:17 UTC
Edit: The problem is the poor validity of the model. Please suggest ways to improve it.
Post by Sharad Gupta
Dear All,
My model shows good overall fit (CMIN/df = 2.997, GFI = .95, TLI = .917, RMSEA < .05 with PClose > .5), but the validity values are not good, as follows.
The StatWiki tool gives the following output:
        CR      AVE     MSV     ASV     Soc     En      Ec
Soc     0.826   0.490   0.221   0.198   0.700
En      0.777   0.538   0.221   0.177   0.470   0.734
Ec      0.552   0.330   0.175   0.154   0.418   0.364   0.575
Since the Ec construct has three sub-constructs (VS, DFC, CC), I also checked validity for all five constructs/sub-constructs and got the following results:
        Alpha   CR      AVE     MSV     ASV     Soc     En      CC      VS      DFC
Soc     0.74    0.826   0.490   0.221   0.098   0.700
En      0.795   0.777   0.538   0.221   0.089   0.470   0.734
CC      0.904   0.771   0.531   0.045   0.016   0.076   0.110   0.729
VS      0.672   0.671   0.408   0.211   0.086   0.270   0.243   0.045   0.639
DFC     0.725   0.568   0.253   0.211   0.102   0.304   0.249   0.211   0.459   0.503
Alpha values were taken from SPSS.
Even after collecting 788 usable data points and obtaining a well-fitting model, the model is not showing good validity. How can we address this issue?
Can you suggest a way out here?
Regards,
Sharad
Rich Ulrich
2017-03-28 21:01:29 UTC
On Sun, 26 Mar 2017 06:02:17 -0700 (PDT), Sharad Gupta
Post by Sharad Gupta
Edit: The problem is poor validity of the model. Please suggest your ways to improve it.
I don't think you give enough information about your model
for anyone to offer suggestions.

For my own part, I don't even understand the results that you
list -- starting with the abbreviations for your "good overall fit"
and including the headings on the two matrices.

I /guess/ that the last columns of each matrix represent an
intercorrelation matrix for some version of replication -- which
includes values for the diagonal (self-correlations) that are all
less than 0.80. Those are a bit low for the scaled total-scores
that I have most of my experience with, but for other data those
could be either "far too low" or "unusually good." There are
so many differences in what should be expected for different
data.
Post by Sharad Gupta
Post by Sharad Gupta
Dear All,
My model shows good overall fit (CMIN/df = 2.997, GFI = .95, TLI = .917, RMSEA < .05 with PClose > .5), but the validity values are not good, as follows.
The StatWiki tool gives the following output:
        CR      AVE     MSV     ASV     Soc     En      Ec
Soc     0.826   0.490   0.221   0.198   0.700
En      0.777   0.538   0.221   0.177   0.470   0.734
Ec      0.552   0.330   0.175   0.154   0.418   0.364   0.575
Since the Ec construct has three sub-constructs (VS, DFC, CC), I also checked validity for all five constructs/sub-constructs and got the following results:
        Alpha   CR      AVE     MSV     ASV     Soc     En      CC      VS      DFC
Soc     0.74    0.826   0.490   0.221   0.098   0.700
En      0.795   0.777   0.538   0.221   0.089   0.470   0.734
CC      0.904   0.771   0.531   0.045   0.016   0.076   0.110   0.729
VS      0.672   0.671   0.408   0.211   0.086   0.270   0.243   0.045   0.639
DFC     0.725   0.568   0.253   0.211   0.102   0.304   0.249   0.211   0.459   0.503
Alpha values were taken from SPSS.
Even after collecting 788 usable data points and obtaining a well-fitting model, the model is not showing good validity. How can we address this issue?
Can you suggest a way out here?
Here are a couple of comments about reliability and validity, in
general. I don't know whether these will help, or not.
(We get few enough questions now that I can go overboard
on the chance to rehearse answers.)

Like correlations, any computed reliability is an index /for a
particular sample/. - Are your 788 data points independent
and random representations of the universe that you want
to generalize to?

From the usual theoretical perspective, "reliability" puts a
limit on what can be hoped for from "validity". That is to say,
if your predictor is not measured accurately, it cannot be a
basis for a good prediction; the inaccuracy or non-reliability
must show up in the prediction.
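Here is a small numeric sketch of that bound, with invented reliabilities:
in classical test theory the observed correlation between two measures
cannot exceed the square root of the product of their reliabilities,
which is also the basis of the classical correction for attenuation.

import math

rel_x = 0.74   # hypothetical reliability (e.g. alpha) of the predictor scale
rel_y = 0.80   # hypothetical reliability of the criterion

# Upper bound on the observable validity coefficient.
max_observable_r = math.sqrt(rel_x * rel_y)
print(round(max_observable_r, 3))    # ~0.769

# Correction for attenuation: estimated correlation between the
# true scores, given an observed correlation of 0.45 (invented).
observed_r = 0.45
disattenuated_r = observed_r / math.sqrt(rel_x * rel_y)
print(round(disattenuated_r, 3))     # ~0.585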

Guilford pointed out how this perspective does not capture
everything that we want to talk about when we rely on
"internal consistency" for reliability.

Does your "alpha" refer to Cronbach's alpha for internal reliability,
which is a effectively a transformation of the average correlation?
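A short Python sketch, with invented item scores, showing both routes:
alpha computed from the item and total-score variances, and the
"standardized" alpha obtained from the average inter-item correlation
via the Spearman-Brown step-up formula.

import numpy as np

# Invented responses: rows = respondents, columns = items of one scale.
items = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [3, 3, 2, 3],
    [4, 4, 5, 4],
], dtype=float)

k = items.shape[1]

# Cronbach's alpha from variances.
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# "Standardized" alpha from the average inter-item correlation.
r_bar = np.corrcoef(items, rowvar=False)[np.triu_indices(k, 1)].mean()
alpha_std = (k * r_bar) / (1 + (k - 1) * r_bar)

print(round(alpha, 3), round(alpha_std, 3))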

The best you get internally is when you ask the same question
over and over, maybe in different words; that usually does not
encompass a /broad/ latent concept, which may be needed
for prediction.

An example: psychological Depression is better measured
by including (say) physiological variables like disturbed sleep
and diet, and also anxiety, than by taking items that only
rely on "sadness" -- though the latter would have a higher
internal reliability.
--
Rich Ulrich