1 Introduction

This document accompanies the book Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R by Joseph Hair, Tomas Hult, Christian M. Ringle, Marko Sarstedt, Nicholas Danks, and Soumya Ray.

It provides a concise version of the book's R code and outputs for the example corporate reputation model.

2 Introduction to SEMinR (Chapter 3)

2.1 Installing and loading the package

To download and install the SEMinR package, call install.packages("seminr"). (You only need to do this once to equip RStudio on your computer with SEMinR.)
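
For reference, the installation call looks as follows (run it once in the R console):

install.packages("seminr")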

To load the SEMinR library, use library(seminr). (You must do this every time you restart RStudio and wish to use SEMinR.)

library(seminr)

2.2 Load and inspect the data

The data set accompanying the book (“Corporate Reputation Data.csv”) is included in the SEMinR package.

corp_rep_data <- seminr::corp_rep_data


Alternatively, we can load the data by importing it from another file such as "Corporate Reputation Data.csv".

corp_rep_data <- read.csv(file = "Corporate Reputation Data.csv", 
  header = TRUE, sep = ";")


Take a quick look at the data with head().

head(corp_rep_data)
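
Optionally, and beyond the book's code, base R functions such as dim() and str() give a quick overview of the number of observations and the variable types:

dim(corp_rep_data)   # number of rows (observations) and columns (variables)
str(corp_rep_data)   # variable names, types, and example values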

2.3 Model set up

2.3.1 Model and measurement details

Here, we work with a simple corporate reputation model as displayed below in Fig. 1. Tab. 1 shows the model’s measurement details, i.e. the constructs, variable names and items.

Fig. 1 Simple corporate reputation model.


Tab. 1 Measurement details for the simple corporate reputation model.
Construct | Variable name | Item
Competence (COMP) | comp_1 | [The company] is a top competitor in its market.
Competence (COMP) | comp_2 | As far as I know, [the company] is recognized worldwide.
Competence (COMP) | comp_3 | I believe that [the company] performs at a premium level.
Likeability (LIKE) | like_1 | [The company] is a company that I can better identify with than other companies.
Likeability (LIKE) | like_2 | [The company] is a company that I would regret more not having if it no longer existed than I would other companies.
Likeability (LIKE) | like_3 | I regard [the company] as a likeable company.
Customer Satisfaction (CUSA) | cusa | I am satisfied with [the company].
Customer Loyalty (CUSL) | cusl_1 | I would recommend [company] to friends and relatives.
Customer Loyalty (CUSL) | cusl_2 | If I had to choose again, I would choose [company] as my mobile phone services provider.
Customer Loyalty (CUSL) | cusl_3 | I will remain a customer of [company] in the future.

2.3.2 Create a measurement model

The constructs() function specifies the list of all construct measurement models. Within this list you can define various constructs:

  • composite() specifies the measurement of individual constructs.
  • interaction_term() specifies interaction terms.
  • higher_composite() specifies hierarchical component models, i.e. higher-order constructs (Sarstedt et al., 2019).

Within composite(), the measurement items are specified using:

  • multi_items() creates a vector of multiple measurement items with similar names.
  • single_item() describes a single measurement item.

For example, the composite COMP incorporates the items comp_1 to comp_3.

corp_rep_mm <- constructs(
  composite("COMP", multi_items("comp_", 1:3)),
  composite("LIKE", multi_items("like_", 1:3)),
  composite("CUSA", single_item("cusa")),
  composite("CUSL", multi_items("cusl_", 1:3)))

2.3.3 Create a structural model

The structural model indicates the sequence of the constructs and the relationships between them.

  • relationships() specifies all the structural relationships between all constructs.
  • paths() specifies relationships between a specific set of antecedents and outcomes.

For example, to specify the relationships from COMP and LIKE to CUSA and CUSL, we use the from = and to = arguments in the paths() function: paths(from = c("COMP", "LIKE"), to = c("CUSA", "CUSL")).

corp_rep_sm <- relationships(
  paths(from = c("COMP", "LIKE"), to = c("CUSA", "CUSL")),
  paths(from = c("CUSA"), to = c("CUSL")))

2.3.4 Estimating the model

To estimate a PLS path model, algorithmic options and argument settings must be selected. These can be reviewed by calling the function's documentation with ?estimate_pls.

Here, we specify the data (data = corp_rep_data), the measurement model (measurement_model = corp_rep_mm), the structural model (structural_model = corp_rep_sm), and the weighting scheme (inner_weights = path_weighting). We also handle missing data: missing values are indicated by -99 (missing_value = "-99") and replaced by the mean (missing = mean_replacement).

corp_rep_pls_model <- estimate_pls(data = corp_rep_data,
  measurement_model = corp_rep_mm,
  structural_model  = corp_rep_sm,
  inner_weights = path_weighting,
  missing = mean_replacement,
  missing_value = "-99")
## Generating the seminr model
## All 344 observations are valid.

2.3.5 Summarizing the model

Once the model has been estimated, a summarized report of the results can be generated by using the summary() function.

summary_corp_rep <- summary(corp_rep_pls_model)


The summary() function applied to a SEMinR model object produces an object of class summary.seminr_model. Its sub-objects (see Tab. 2) serve as the basis for assessing the measurement and structural model (Hair et al., 2019).

Tab. 2 Elements of the summary.seminr_model object.
Sub-object | Content
meta | The estimation function and version information.
iterations | The number of iterations for the PLS algorithm to converge.
paths | The model's path coefficients and (adjusted) R2 values.
total_effects | The model's total effects.
total_indirect_effects | The model's total indirect effects.
loadings | The outer loadings for all constructs.
weights | The outer weights for all constructs.
validity | The metrics necessary to evaluate the construct measures' validity.
reliability | The metrics necessary to evaluate the construct measures' reliability.
composite_scores | The estimated scores for constructs.
vif_antecedents | The metrics used to evaluate structural model collinearity.
fSquare | The f2 metric for all structural model relationships.
descriptives | The descriptive statistics of the indicator data.
it_criteria | The Information Theoretic model selection criteria for the estimated model.


For example, calling summary_corp_rep$paths returns the model's path coefficients and the (adjusted) R2 values, while summary_corp_rep$reliability returns the construct reliability metrics, which we can plot with plot(summary_corp_rep$reliability).

summary_corp_rep$paths
##         CUSA  CUSL
## R^2    0.295 0.562
## AdjR^2 0.290 0.558
## COMP   0.162 0.009
## LIKE   0.424 0.342
## CUSA       . 0.504
summary_corp_rep$reliability
##      alpha  rhoC   AVE  rhoA
## COMP 0.776 0.865 0.681 0.832
## LIKE 0.831 0.899 0.747 0.836
## CUSA 1.000 1.000 1.000 1.000
## CUSL 0.831 0.899 0.748 0.839
## 
## Alpha, rhoC, and rhoA should exceed 0.7 while AVE should exceed 0.5
plot(summary_corp_rep$reliability)


To check if and when the algorithm converged, we can inspect the number of iterations in summary_corp_rep$iterations.

summary_corp_rep$iterations
## [1] 4


We can access summary statistics such as mean, standard deviation and number of missing values for the model’s items and constructs by inspecting the summary_corp_rep$descriptives$statistics object.

Specifically, we call summary_corp_rep$descriptives$statistics$items to get the item statistics and summary_corp_rep$descriptives$statistics$constructs to get the construct statistics.

summary_corp_rep$descriptives$statistics$items
##                    No. Missing  Mean Median   Min   Max Std.Dev. Kurtosis Skewness
## serviceprovider  1.000   0.000 2.000  2.000 1.000 4.000    1.004    2.477    0.744
## servicetype      2.000   0.000 1.637  2.000 1.000 2.000    0.482    1.323   -0.568
## comp_1           3.000   0.000 4.648  5.000 1.000 7.000    1.435    2.664   -0.263
## comp_2           4.000   0.000 5.424  6.000 1.000 7.000    1.377    2.375   -0.564
## comp_3           5.000   0.000 5.221  5.500 1.000 7.000    1.460    2.797   -0.674
## like_1           6.000   0.000 4.584  5.000 1.000 7.000    1.550    2.589   -0.403
## like_2           7.000   0.000 4.250  4.000 1.000 7.000    1.850    2.095   -0.311
## like_3           8.000   0.000 4.480  5.000 1.000 7.000    1.873    2.055   -0.324
## cusl_1           9.000   3.000 5.129  5.000 1.000 7.000    1.515    3.246   -0.789
## cusl_2          10.000   4.000 5.276  6.000 1.000 7.000    1.746    3.022   -0.947
## cusl_3          11.000   3.000 5.651  6.000 1.000 7.000    1.657    3.899   -1.296
## cusa            12.000   1.000 5.440  6.000 1.000 7.000    1.175    3.748   -0.765
## csor_1          13.000   0.000 4.235  4.000 1.000 7.000    1.471    2.605   -0.042
## csor_2          14.000   0.000 3.076  3.000 1.000 7.000    1.654    2.427    0.495
## csor_3          15.000   0.000 3.988  4.000 1.000 7.000    1.481    2.521   -0.061
## csor_4          16.000   0.000 3.125  3.000 1.000 7.000    1.464    2.527    0.196
## csor_5          17.000   0.000 3.983  4.000 1.000 7.000    1.585    2.297   -0.042
## csor_global     18.000   0.000 4.988  5.000 1.000 7.000    1.291    2.363   -0.141
## attr_1          19.000   0.000 4.991  5.000 1.000 7.000    1.460    2.906   -0.565
## attr_2          20.000   0.000 2.945  2.000 1.000 7.000    2.101    1.868    0.572
## attr_3          21.000   0.000 4.811  5.000 1.000 7.000    1.454    2.423   -0.274
## attr_global     22.000   0.000 5.587  6.000 2.000 7.000    1.216    2.884   -0.652
## perf_1          23.000   0.000 4.619  5.000 1.000 7.000    1.393    2.633   -0.201
## perf_2          24.000   0.000 5.070  5.000 1.000 7.000    1.334    2.790   -0.438
## perf_3          25.000   0.000 4.721  5.000 1.000 7.000    1.507    2.644   -0.420
## perf_4          26.000   0.000 4.919  5.000 1.000 7.000    1.436    2.803   -0.460
## perf_5          27.000   0.000 4.971  5.000 1.000 7.000    1.442    2.688   -0.457
## perf_global     28.000   0.000 5.977  6.000 3.000 7.000    0.981    2.904   -0.752
## qual_1          29.000   0.000 5.052  5.000 1.000 7.000    1.399    3.223   -0.644
## qual_2          30.000   0.000 4.372  5.000 1.000 7.000    1.497    2.451   -0.290
## qual_3          31.000   0.000 5.081  5.000 1.000 7.000    1.473    2.969   -0.678
## qual_4          32.000   0.000 4.413  4.000 1.000 7.000    1.490    2.600   -0.215
## qual_5          33.000   0.000 5.012  5.000 1.000 7.000    1.424    2.642   -0.530
## qual_6          34.000   0.000 4.924  5.000 1.000 7.000    1.537    2.445   -0.438
## qual_7          35.000   0.000 4.398  4.000 1.000 7.000    1.556    2.511   -0.224
## qual_8          36.000   0.000 4.837  5.000 1.000 7.000    1.417    2.415   -0.197
## qual_global     37.000   0.000 6.026  6.000 2.000 7.000    1.020    3.222   -0.862
## switch_1        38.000   0.000 3.765  4.000 1.000 5.000    1.287    2.351   -0.723
## switch_2        39.000   0.000 3.352  3.000 1.000 5.000    1.314    1.947   -0.282
## switch_3        40.000   0.000 3.881  4.000 1.000 5.000    1.227    2.549   -0.807
## switch_4        41.000   0.000 2.837  3.000 1.000 4.000    1.149    1.745   -0.453
summary_corp_rep$descriptives$statistics$constructs
##        No. Missing   Mean Median    Min   Max Std.Dev. Kurtosis Skewness
## COMP 1.000   0.000 -0.000  0.075 -2.911 1.668    1.000    2.746   -0.449
## LIKE 2.000   0.000 -0.000  0.032 -2.300 1.698    1.000    2.351   -0.265
## CUSA 3.000   0.000  0.000  0.477 -3.783 1.329    1.000    3.759   -0.766
## CUSL 4.000   0.000  0.000  0.221 -3.073 1.173    1.000    3.570   -1.000

2.3.6 Bootstrapping the model

In PLS-SEM, we need to perform bootstrapping to estimate standard errors and compute confidence intervals.

We run the bootstrapping with the bootstrap_model() function with 1,000 subsamples (nboot = 1000) and set a seed (seed = 123) to obtain reproducible results.

Next, we summarize the bootstrap model with sum_boot_corp_rep <- summary(boot_corp_rep) and obtain results on model estimates such as the path coefficients with sum_boot_corp_rep$bootstrapped_paths.

boot_corp_rep <- bootstrap_model(seminr_model = corp_rep_pls_model,
  nboot = 1000,
  cores = NULL,
  seed = 123)
## Bootstrapping model using seminr...
## SEMinR Model successfully bootstrapped

sum_boot_corp_rep <- summary(boot_corp_rep)

sum_boot_corp_rep$bootstrapped_paths
##                Original Est. Bootstrap Mean Bootstrap SD T Stat. 2.5% CI 97.5% CI
## COMP  ->  CUSA         0.162          0.166        0.068   2.374   0.038    0.298
## COMP  ->  CUSL         0.009          0.011        0.056   0.165  -0.098    0.126
## LIKE  ->  CUSA         0.424          0.422        0.062   6.858   0.299    0.542
## LIKE  ->  CUSL         0.342          0.340        0.056   6.059   0.227    0.450
## CUSA  ->  CUSL         0.504          0.504        0.042  11.978   0.419    0.585
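
As a quick illustration (using the values printed above and the conventional two-tailed 5% critical value of 1.96), the bootstrap t statistics can be checked as follows:

t_stats <- c("COMP->CUSA" = 2.374, "COMP->CUSL" = 0.165, "LIKE->CUSA" = 6.858,
             "LIKE->CUSL" = 6.059, "CUSA->CUSL" = 11.978)
t_stats > 1.96   # only COMP -> CUSL is not significant; its 95% CI also includes zero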


The summary.boot_seminr_model object, i.e. sum_boot_corp_rep, contains the following sub-objects (Tab. 3):

Tab. 3 Elements of the summary.boot_seminr_model object.
Sub-object | Content
nboot | The number of bootstrap subsamples generated during bootstrapping.
bootstrapped_paths | The bootstrap estimated standard error, T statistic, and confidence intervals for the path coefficients.
bootstrapped_weights | The bootstrap estimated standard error, T statistic, and confidence intervals for the indicator weights.
bootstrapped_loadings | The bootstrap estimated standard error, T statistic, and confidence intervals for the indicator loadings.
bootstrapped_HTMT | The bootstrap estimated standard error, T statistic, and confidence intervals for the HTMT values.
bootstrapped_total_paths | The bootstrap estimated standard error, T statistic, and confidence intervals for the model's total effects.
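
The other elements listed in Tab. 3 are accessed in the same way; for example (output omitted here):

sum_boot_corp_rep$nboot                   # number of bootstrap subsamples
sum_boot_corp_rep$bootstrapped_loadings   # bootstrap results for the indicator loadings
sum_boot_corp_rep$bootstrapped_HTMT      # bootstrap results for the HTMT values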

3 Evaluation of reflective measurement models (Chapter 4)

3.1 Indicator reliability

For the reflective measurement model, we need to estimate the relationships between the reflectively measured constructs and their indicators (i.e., loadings). Indicator reliability can be calculated by squaring the loadings.

Low indicator reliability may result in biased construct results. Therefore, we evaluate indicator loadings as follows:

  • Indicator loadings above 0.708 are recommended, since they correspond to an explained variance (indicator reliability) of at least 50%.
  • Indicators with loadings between 0.40 and 0.708 should be considered for removal only if deleting them increases the internal consistency reliability or convergent validity above the recommended thresholds.
  • Indicators with very low loadings (below 0.40) should be removed.

We can get the loadings by inspecting the summary.seminr_model object’s loadings element (summary_corp_rep$loadings).

summary_corp_rep$loadings
##         COMP  LIKE  CUSA  CUSL
## comp_1 0.858 0.000 0.000 0.000
## comp_2 0.798 0.000 0.000 0.000
## comp_3 0.818 0.000 0.000 0.000
## like_1 0.000 0.879 0.000 0.000
## like_2 0.000 0.870 0.000 0.000
## like_3 0.000 0.843 0.000 0.000
## cusa   0.000 0.000 1.000 0.000
## cusl_1 0.000 0.000 0.000 0.833
## cusl_2 0.000 0.000 0.000 0.917
## cusl_3 0.000 0.000 0.000 0.843


We can get the indicator reliability by squaring the loadings.

summary_corp_rep$loadings^2
##         COMP  LIKE  CUSA  CUSL
## comp_1 0.736 0.000 0.000 0.000
## comp_2 0.638 0.000 0.000 0.000
## comp_3 0.669 0.000 0.000 0.000
## like_1 0.000 0.773 0.000 0.000
## like_2 0.000 0.757 0.000 0.000
## like_3 0.000 0.711 0.000 0.000
## cusa   0.000 0.000 1.000 0.000
## cusl_1 0.000 0.000 0.000 0.694
## cusl_2 0.000 0.000 0.000 0.841
## cusl_3 0.000 0.000 0.000 0.710

3.2 Internal consistency reliability

Internal consistency reliability is the extent to which indicators measuring the same construct are associated with each other.

Of the various measures of internal consistency reliability, Cronbach's alpha marks the lower bound (Trizano-Hermosilla & Alvarado, 2016), while the composite reliability ρC (Jöreskog, 1971) marks the upper bound. The exact (or consistent) reliability coefficient ρA usually lies between these bounds and may serve as a good representation of a construct's internal consistency reliability (Dijkstra, 2010, 2014; Dijkstra & Henseler, 2015).

A construct's measures are considered reliable when its internal consistency reliability values meet the following guidelines:

  • Recommended value of 0.80 to 0.90.
  • Minimum value of 0.70 (or 0.60 in exploratory research).
  • Maximum value of 0.95 to avoid indicator redundancy, which would compromise content validity (Diamantopoulos et al., 2012).

The reliability indicators can be found in summary_corp_rep$reliability and plotted with plot(summary_corp_rep$reliability).

summary_corp_rep$reliability
##      alpha  rhoC   AVE  rhoA
## COMP 0.776 0.865 0.681 0.832
## LIKE 0.831 0.899 0.747 0.836
## CUSA 1.000 1.000 1.000 1.000
## CUSL 0.831 0.899 0.748 0.839
## 
## Alpha, rhoC, and rhoA should exceed 0.7 while AVE should exceed 0.5
plot(summary_corp_rep$reliability)

3.3 Convergent validity

Convergent validity is the extent to which the construct converges in order to explain the variance of its indicators. The average variance extracted (AVE) is the mean of the squared loadings of all indicators associated with the construct. An AVE of 0.50 or higher is considered acceptable (Hair et al., 2021).

AVE values can also be accessed at summary_corp_rep$reliability.

summary_corp_rep$reliability
##      alpha  rhoC   AVE  rhoA
## COMP 0.776 0.865 0.681 0.832
## LIKE 0.831 0.899 0.747 0.836
## CUSA 1.000 1.000 1.000 1.000
## CUSL 0.831 0.899 0.748 0.839
## 
## Alpha, rhoC, and rhoA should exceed 0.7 while AVE should exceed 0.5

3.4 Discriminant validity

According to the older Fornell-Larcker criterion (Fornell & Larcker, 1981), the square root of the AVE of each construct should be higher than the construct's highest correlation with any other construct in the model. These results can be obtained from summary_corp_rep$validity$fl_criteria.

However, this metric is not suitable for discriminant validity assessment due to its poor performance in detecting discriminant validity problems (Henseler et al., 2015; Radomir & Moisescu, 2019).

summary_corp_rep$validity$fl_criteria
##       COMP  LIKE  CUSA  CUSL
## COMP 0.825     .     .     .
## LIKE 0.645 0.864     .     .
## CUSA 0.436 0.528 1.000     .
## CUSL 0.450 0.615 0.689 0.865
## 
## FL Criteria table reports square root of AVE on the diagonal and construct correlations on the lower triangle.
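
As a small worked example (values taken from the table above), the square root of COMP's AVE should exceed COMP's highest correlation with any other construct:

sqrt_ave_comp <- 0.825   # square root of COMP's AVE (diagonal element)
max_cor_comp  <- 0.645   # COMP's highest correlation with another construct (LIKE)
sqrt_ave_comp > max_cor_comp   # TRUE, in line with the Fornell-Larcker criterion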


We recommend the heterotrait-monotrait ratio (HTMT) of the correlations to assess discriminant validity (Henseler et al., 2015).

The HTMT is the mean value of the indicator correlations across constructs (i.e., the heterotrait-heteromethod correlations) relative to the (geometric) mean of the average correlations for the indicators measuring the same construct (i.e., the monotrait-heteromethod correlations).

Discriminant validity problems are present when HTMT values

  • exceed 0.90 for constructs that are conceptually very similar.
  • exceed 0.85 for constructs that are conceptually more distinct.

We can get the HTMT matrix by calling summary_corp_rep$validity$htmt.

summary_corp_rep$validity$htmt
##       COMP  LIKE  CUSA CUSL
## COMP     .     .     .    .
## LIKE 0.780     .     .    .
## CUSA 0.465 0.577     .    .
## CUSL 0.532 0.737 0.755    .


We can use bootstrap confidence intervals to test if the HTMT is significantly different from 1.00 (Henseler et al., 2015) or a lower threshold value such as 0.90 or 0.85, which should be defined based on the study context (Franke & Sarstedt, 2019).

We obtain the 90% bootstrap confidence intervals for the HTMT by calling sum_boot_corp_rep <- summary(boot_corp_rep, alpha = 0.10) and then inspect the sum_boot_corp_rep$bootstrapped_HTMT object.

sum_boot_corp_rep <- summary(boot_corp_rep, alpha = 0.10)

sum_boot_corp_rep$bootstrapped_HTMT
##                Original Est. Bootstrap Mean Bootstrap SD T Stat. 5% CI 95% CI
## COMP  ->  LIKE         0.780          0.782        0.041  19.009 0.716  0.849
## COMP  ->  CUSA         0.465          0.467        0.060   7.806 0.368  0.563
## COMP  ->  CUSL         0.532          0.534        0.059   8.961 0.438  0.631
## LIKE  ->  CUSA         0.577          0.577        0.044  13.153 0.502  0.647
## LIKE  ->  CUSL         0.737          0.736        0.041  17.872 0.669  0.802
## CUSA  ->  CUSL         0.755          0.755        0.034  22.232 0.699  0.809
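
As an illustration of the decision rule (upper bounds transcribed from the 95% CI column above), none of the 90% bootstrap confidence intervals reaches the conservative threshold of 0.90:

htmt_upper <- c("COMP->LIKE" = 0.849, "COMP->CUSA" = 0.563, "COMP->CUSL" = 0.631,
                "LIKE->CUSA" = 0.647, "LIKE->CUSL" = 0.802, "CUSA->CUSL" = 0.809)
all(htmt_upper < 0.90)   # TRUE: discriminant validity is supported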

4 Evaluation of formative measurement models (Chapter 5)

Relevant criteria for evaluating formative measurement models include the assessment of:

  1. Convergent validity.
  2. Indicator collinearity.
  3. Statistical significance and relevance of the indicator weights.

4.1 Model set up

4.1.1 Model and measurement details

Here, we work with an extended corporate reputation model as displayed below in Fig. 2.

Tab. 4 shows the model’s measurement details, i.e. the constructs, variable names and items for the formative constructs:

  • QUAL: The quality of a company’s products and services as well as its quality of customer orientation.
  • PERF: The company’s economic and managerial performance.
  • CSOR: The company’s corporate social responsibility.
  • ATTR: The company’s attractiveness.

Fig. 2 Extended corporate reputation model.


Tab. 4 Measurement details for the extended corporate reputation model.
Construct | Variable name | Item
Quality (QUAL) | qual_1 | The products/services offered by [the company] are of high quality.
Quality (QUAL) | qual_2 | [The company] is an innovator, rather than an imitator with respect to [industry].
Quality (QUAL) | qual_3 | [The company]'s products/services offer good value for money.
Quality (QUAL) | qual_4 | The services [the company] offered are good.
Quality (QUAL) | qual_5 | Customer concerns are held in high regard at [the company].
Quality (QUAL) | qual_6 | [The company] is a reliable partner for customers.
Quality (QUAL) | qual_7 | [The company] is a trustworthy company.
Quality (QUAL) | qual_8 | I have a lot of respect for [the company].
Performance (PERF) | perf_1 | [The company] is a very well-managed company.
Performance (PERF) | perf_2 | [The company] is an economically stable company.
Performance (PERF) | perf_3 | The business risk for [the company] is modest compared to its competitors.
Performance (PERF) | perf_4 | [The company] has growth potential.
Performance (PERF) | perf_5 | [The company] has a clear vision about the future of the company.
Corporate Social Responsibility (CSOR) | csor_1 | [The company] behaves in a socially conscious way.
Corporate Social Responsibility (CSOR) | csor_2 | [The company] is forthright in giving information to the public.
Corporate Social Responsibility (CSOR) | csor_3 | [The company] has a fair attitude toward competitors.
Corporate Social Responsibility (CSOR) | csor_4 | [The company] is concerned about the preservation of the environment.
Corporate Social Responsibility (CSOR) | csor_5 | [The company] is not only concerned about profits.
Attractiveness (ATTR) | attr_1 | [The company] is successful in attracting high-quality employees.
Attractiveness (ATTR) | attr_2 | I could see myself working at [the company].
Attractiveness (ATTR) | attr_3 | I like the physical appearance of [the company] (company, buildings, shops, etc.).
Single-item measure for the redundancy analysis | qual_global | Please assess the overall quality of [the company's] activities.
Single-item measure for the redundancy analysis | perf_global | Please assess [the company's] overall performance.
Single-item measure for the redundancy analysis | csor_global | Please assess the extent to which [the company] acts in socially conscious ways.
Single-item measure for the redundancy analysis | attr_global | Please assess [the company's] overall attractiveness.

4.1.2 Estimating and bootstrapping the model

For the extended model, we update the measurement and structural model specifications. Note that the four driver constructs QUAL, PERF, CSOR, and ATTR are formatively measured and estimated with mode_B, while COMP and LIKE are reflectively measured and estimated with mode_A (the default setting of the composite() function).

corp_rep_mm_ext <- constructs(
  composite("QUAL", multi_items("qual_", 1:8), weights = mode_B),
  composite("PERF", multi_items("perf_", 1:5), weights = mode_B),
  composite("CSOR", multi_items("csor_", 1:5), weights = mode_B),
  composite("ATTR", multi_items("attr_", 1:3), weights = mode_B),
  composite("COMP", multi_items("comp_", 1:3)),
  composite("LIKE", multi_items("like_", 1:3)),
  composite("CUSA", single_item("cusa")),
  composite("CUSL", multi_items("cusl_", 1:3)))

corp_rep_sm_ext <- relationships(
  paths(from = c("QUAL", "PERF", "CSOR", "ATTR"), to = c("COMP", "LIKE")),
  paths(from = c("COMP", "LIKE"), to = c("CUSA", "CUSL")),
  paths(from = c("CUSA"), to = c("CUSL"))
)


Once the model is set up, we estimate it with estimate_pls(), store the result in corp_rep_pls_model_ext, and summarize it with summary(corp_rep_pls_model_ext).

Further, we bootstrap the model with bootstrap_model(), summarize the result with 90% confidence intervals by calling summary(boot_corp_rep_ext, alpha = 0.10), and store the output in summary_boot_corp_rep_ext.

corp_rep_pls_model_ext <- estimate_pls(
  data = corp_rep_data,
  measurement_model = corp_rep_mm_ext,
  structural_model = corp_rep_sm_ext,
  missing = mean_replacement,
  missing_value = "-99")
## Generating the seminr model
## All 344 observations are valid.

summary_corp_rep_ext <- summary(corp_rep_pls_model_ext)

boot_corp_rep_ext <- bootstrap_model(seminr_model = corp_rep_pls_model_ext,
                                     nboot = 1000,
                                     seed = 123)
## Bootstrapping model using seminr...
## SEMinR Model successfully bootstrapped

summary_boot_corp_rep_ext <- summary(boot_corp_rep_ext, alpha = 0.10)

4.1.3 Reflective measurement model evaluation

PLS-SEM model estimates will change when any of the model relationships or variables are changed. We thus need to reassess the reflective measurement models to ensure that this portion of the model remains valid and reliable.


4.1.3.1 Indicator reliability

summary_corp_rep_ext$loadings
##         QUAL  PERF  CSOR  ATTR  COMP  LIKE  CUSA  CUSL
## qual_1 0.741 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_2 0.570 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_3 0.749 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_4 0.664 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_5 0.787 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_6 0.856 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_7 0.722 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_8 0.627 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## perf_1 0.000 0.846 0.000 0.000 0.000 0.000 0.000 0.000
## perf_2 0.000 0.690 0.000 0.000 0.000 0.000 0.000 0.000
## perf_3 0.000 0.573 0.000 0.000 0.000 0.000 0.000 0.000
## perf_4 0.000 0.717 0.000 0.000 0.000 0.000 0.000 0.000
## perf_5 0.000 0.638 0.000 0.000 0.000 0.000 0.000 0.000
## csor_1 0.000 0.000 0.771 0.000 0.000 0.000 0.000 0.000
## csor_2 0.000 0.000 0.571 0.000 0.000 0.000 0.000 0.000
## csor_3 0.000 0.000 0.838 0.000 0.000 0.000 0.000 0.000
## csor_4 0.000 0.000 0.617 0.000 0.000 0.000 0.000 0.000
## csor_5 0.000 0.000 0.848 0.000 0.000 0.000 0.000 0.000
## attr_1 0.000 0.000 0.000 0.754 0.000 0.000 0.000 0.000
## attr_2 0.000 0.000 0.000 0.506 0.000 0.000 0.000 0.000
## attr_3 0.000 0.000 0.000 0.891 0.000 0.000 0.000 0.000
## comp_1 0.000 0.000 0.000 0.000 0.824 0.000 0.000 0.000
## comp_2 0.000 0.000 0.000 0.000 0.821 0.000 0.000 0.000
## comp_3 0.000 0.000 0.000 0.000 0.844 0.000 0.000 0.000
## like_1 0.000 0.000 0.000 0.000 0.000 0.880 0.000 0.000
## like_2 0.000 0.000 0.000 0.000 0.000 0.869 0.000 0.000
## like_3 0.000 0.000 0.000 0.000 0.000 0.844 0.000 0.000
## cusa   0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000
## cusl_1 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.833
## cusl_2 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.917
## cusl_3 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.843
summary_corp_rep_ext$loadings^2
##         QUAL  PERF  CSOR  ATTR  COMP  LIKE  CUSA  CUSL
## qual_1 0.548 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_2 0.325 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_3 0.561 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_4 0.441 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_5 0.619 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_6 0.732 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_7 0.521 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## qual_8 0.393 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## perf_1 0.000 0.716 0.000 0.000 0.000 0.000 0.000 0.000
## perf_2 0.000 0.476 0.000 0.000 0.000 0.000 0.000 0.000
## perf_3 0.000 0.328 0.000 0.000 0.000 0.000 0.000 0.000
## perf_4 0.000 0.514 0.000 0.000 0.000 0.000 0.000 0.000
## perf_5 0.000 0.407 0.000 0.000 0.000 0.000 0.000 0.000
## csor_1 0.000 0.000 0.595 0.000 0.000 0.000 0.000 0.000
## csor_2 0.000 0.000 0.325 0.000 0.000 0.000 0.000 0.000
## csor_3 0.000 0.000 0.703 0.000 0.000 0.000 0.000 0.000
## csor_4 0.000 0.000 0.380 0.000 0.000 0.000 0.000 0.000
## csor_5 0.000 0.000 0.719 0.000 0.000 0.000 0.000 0.000
## attr_1 0.000 0.000 0.000 0.569 0.000 0.000 0.000 0.000
## attr_2 0.000 0.000 0.000 0.256 0.000 0.000 0.000 0.000
## attr_3 0.000 0.000 0.000 0.794 0.000 0.000 0.000 0.000
## comp_1 0.000 0.000 0.000 0.000 0.679 0.000 0.000 0.000
## comp_2 0.000 0.000 0.000 0.000 0.673 0.000 0.000 0.000
## comp_3 0.000 0.000 0.000 0.000 0.712 0.000 0.000 0.000
## like_1 0.000 0.000 0.000 0.000 0.000 0.774 0.000 0.000
## like_2 0.000 0.000 0.000 0.000 0.000 0.755 0.000 0.000
## like_3 0.000 0.000 0.000 0.000 0.000 0.713 0.000 0.000
## cusa   0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000
## cusl_1 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.694
## cusl_2 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.841
## cusl_3 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.710


4.1.3.2 Internal consistency reliability and convergent validity

summary_corp_rep_ext$reliability
##      alpha  rhoC   AVE  rhoA
## QUAL 0.878 0.894 0.518 1.000
## PERF 0.747 0.824 0.488 1.000
## CSOR 0.816 0.854 0.545 1.000
## ATTR 0.600 0.770 0.540 1.000
## COMP 0.776 0.869 0.688 0.786
## LIKE 0.831 0.899 0.747 0.836
## CUSA 1.000 1.000 1.000 1.000
## CUSL 0.831 0.899 0.748 0.839
## 
## Alpha, rhoC, and rhoA should exceed 0.7 while AVE should exceed 0.5


4.1.3.3 Discriminant validity


Fornell-Larcker criterion (Fornell & Larcker, 1981)

summary_corp_rep_ext$validity$fl_criteria
##       QUAL  PERF  CSOR  ATTR  COMP  LIKE  CUSA  CUSL
## QUAL 0.719     .     .     .     .     .     .     .
## PERF 0.788 0.699     .     .     .     .     .     .
## CSOR 0.700 0.626 0.738     .     .     .     .     .
## ATTR 0.689 0.665 0.592 0.735     .     .     .     .
## COMP 0.763 0.728 0.596 0.613 0.829     .     .     .
## LIKE 0.712 0.639 0.617 0.612 0.638 0.864     .     .
## CUSA 0.489 0.425 0.422 0.416 0.423 0.529 1.000     .
## CUSL 0.532 0.479 0.417 0.439 0.439 0.615 0.689 0.865
## 
## FL Criteria table reports square root of AVE on the diagonal and construct correlations on the lower triangle.


Heterotrait-monotrait ratio (HTMT) of the correlations (Henseler et al., 2015)

summary_corp_rep_ext$validity$htmt
##       QUAL  PERF  CSOR  ATTR  COMP  LIKE  CUSA CUSL
## QUAL     .     .     .     .     .     .     .    .
## PERF 0.918     .     .     .     .     .     .    .
## CSOR 0.809 0.738     .     .     .     .     .    .
## ATTR 0.896 0.911 0.826     .     .     .     .    .
## COMP 0.864 0.946 0.678 0.847     .     .     .    .
## LIKE 0.828 0.790 0.741 0.840 0.780     .     .    .
## CUSA 0.564 0.479 0.444 0.548 0.465 0.577     .    .
## CUSL 0.652 0.591 0.460 0.620 0.532 0.737 0.755    .
summary_boot_corp_rep_ext$bootstrapped_HTMT
##                Original Est. Bootstrap Mean Bootstrap SD T Stat. 5% CI 95% CI
## QUAL  ->  PERF         0.918          0.917        0.034  27.108 0.860  0.968
## QUAL  ->  CSOR         0.809          0.807        0.029  27.572 0.756  0.851
## QUAL  ->  ATTR         0.896          0.893        0.052  17.079 0.805  0.976
## QUAL  ->  COMP         0.864          0.864        0.029  30.249 0.817  0.909
## QUAL  ->  LIKE         0.828          0.827        0.029  28.292 0.776  0.875
## QUAL  ->  CUSA         0.564          0.563        0.045  12.592 0.487  0.631
## QUAL  ->  CUSL         0.652          0.651        0.045  14.446 0.574  0.723
## PERF  ->  CSOR         0.738          0.737        0.044  16.921 0.661  0.804
## PERF  ->  ATTR         0.911          0.911        0.046  19.877 0.833  0.986
## PERF  ->  COMP         0.946          0.948        0.038  25.069 0.881  1.003
## PERF  ->  LIKE         0.790          0.791        0.045  17.606 0.718  0.861
## PERF  ->  CUSA         0.479          0.478        0.061   7.898 0.380  0.575
## PERF  ->  CUSL         0.591          0.590        0.056  10.610 0.499  0.684
## CSOR  ->  ATTR         0.826          0.826        0.052  15.803 0.743  0.914
## CSOR  ->  COMP         0.678          0.675        0.047  14.526 0.594  0.751
## CSOR  ->  LIKE         0.741          0.740        0.041  18.272 0.673  0.805
## CSOR  ->  CUSA         0.444          0.443        0.049   8.992 0.360  0.522
## CSOR  ->  CUSL         0.460          0.457        0.052   8.922 0.369  0.544
## ATTR  ->  COMP         0.847          0.846        0.063  13.524 0.746  0.950
## ATTR  ->  LIKE         0.840          0.841        0.050  16.679 0.759  0.925
## ATTR  ->  CUSA         0.548          0.548        0.060   9.131 0.446  0.642
## ATTR  ->  CUSL         0.620          0.620        0.064   9.750 0.516  0.718
## COMP  ->  LIKE         0.780          0.782        0.041  19.009 0.716  0.849
## COMP  ->  CUSA         0.465          0.467        0.060   7.806 0.368  0.563
## COMP  ->  CUSL         0.532          0.534        0.059   8.961 0.438  0.631
## LIKE  ->  CUSA         0.577          0.577        0.044  13.153 0.502  0.647
## LIKE  ->  CUSL         0.737          0.736        0.041  17.872 0.669  0.802
## CUSA  ->  CUSL         0.755          0.755        0.034  22.232 0.699  0.809

4.2 Convergent validity

To examine the formatively measured constructs' convergent validity, we carry out a separate redundancy analysis for each construct (see Fig. 3). The survey contained global single-item measures (attr_global, csor_global, perf_global, and qual_global) with generic assessments of the four phenomena: attractiveness, corporate social responsibility, performance, and quality. We use these single items as measures of the dependent construct in the redundancy analyses.

Fig. 3 Redundancy analysis of formatively measured constructs.

To run the redundancy analysis for a formatively measured construct (e.g. attractiveness), we link it with an alternative measure of the same concept (attr_global). Here, the measurement model consists of two constructs (the multi-item formative measure and the global item) and the structural model consists of a single path. To perform the redundancy analysis, we first estimate each model with estimate_pls() and summarize it.

Subsequently, we check the path coefficient for convergent validity. A path coefficient of at least 0.70 (corresponding to an R2 of roughly 0.50) provides support for the formatively measured construct's convergent validity.

Attractiveness (ATTR)

# Create measurement model
ATTR_redundancy_mm <- constructs(
  composite("ATTR_F", multi_items("attr_", 1:3), weights = mode_B),
  composite("ATTR_G", single_item("attr_global"))
)

# Create structural model
ATTR_redundancy_sm <- relationships(
  paths(from = c("ATTR_F"), to = c("ATTR_G"))
)

# Estimate the model
ATTR_redundancy_pls_model <- estimate_pls(
  data = corp_rep_data,
  measurement_model = ATTR_redundancy_mm,
  structural_model = ATTR_redundancy_sm,
  missing = mean_replacement,
  missing_value = "-99")
## Generating the seminr model
## All 344 observations are valid.

# Summarize the model
sum_ATTR_red_model <- summary(ATTR_redundancy_pls_model)
sum_ATTR_red_model$paths
##        ATTR_G
## R^2     0.764
## AdjR^2  0.763
## ATTR_F  0.874


Corporate Social Responsibility (CSOR)

# Create measurement model
CSOR_redundancy_mm <- constructs(
  composite("CSOR_F", multi_items("csor_", 1:5), weights = mode_B),
  composite("CSOR_G", single_item("csor_global"))
)

# Create structural model
CSOR_redundancy_sm <- relationships(
  paths(from = c("CSOR_F"), to = c("CSOR_G"))
)

# Estimate the model
CSOR_redundancy_pls_model <- estimate_pls(
  data = corp_rep_data,
  measurement_model = CSOR_redundancy_mm,
  structural_model = CSOR_redundancy_sm,
  missing = mean_replacement,
  missing_value = "-99")
## Generating the seminr model
## All 344 observations are valid.

# Summarize the model
sum_CSOR_red_model <- summary(CSOR_redundancy_pls_model)
sum_CSOR_red_model$paths
##        CSOR_G
## R^2     0.735
## AdjR^2  0.734
## CSOR_F  0.857


Performance (PERF)

# Create measurement model
PERF_redundancy_mm <- constructs(
  composite("PERF_F", multi_items("perf_", 1:5), weights = mode_B),
  composite("PERF_G", single_item("perf_global"))
)

# Create structural model
PERF_redundancy_sm <- relationships(
  paths(from = c("PERF_F"), to = c("PERF_G"))
)

# Estimate the model
PERF_redundancy_pls_model <- estimate_pls(
  data = corp_rep_data,
  measurement_model = PERF_redundancy_mm,
  structural_model  = PERF_redundancy_sm,
  missing = mean_replacement,
  missing_value = "-99")
## Generating the seminr model
## All 344 observations are valid.

# Summarize the model
sum_PERF_red_model <- summary(PERF_redundancy_pls_model)
sum_PERF_red_model$paths
##        PERF_G
## R^2     0.657
## AdjR^2  0.656
## PERF_F  0.811


Quality (QUAL)

# Create measurement model
QUAL_redundancy_mm <- constructs(
  composite("QUAL_F", multi_items("qual_", 1:8), weights = mode_B),
  composite("QUAL_G", single_item("qual_global"))
)

# Create structural model
QUAL_redundancy_sm <- relationships(
  paths(from = c("QUAL_F"), to = c("QUAL_G"))
)

# Estimate the model
QUAL_redundancy_pls_model <- estimate_pls(
  data = corp_rep_data,
  measurement_model = QUAL_redundancy_mm,
  structural_model  = QUAL_redundancy_sm,
  missing = mean_replacement,
  missing_value = "-99")
## Generating the seminr model
## All 344 observations are valid.

# Summarize the model
sum_QUAL_red_model <- summary(QUAL_redundancy_pls_model)
sum_QUAL_red_model$paths
##        QUAL_G
## R^2     0.648
## AdjR^2  0.647
## QUAL_F  0.805
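
Collecting the four path coefficients from the redundancy analyses above shows that all of them exceed the 0.70 guideline, which supports the formatively measured constructs' convergent validity:

redundancy_paths <- c(ATTR = 0.874, CSOR = 0.857, PERF = 0.811, QUAL = 0.805)
redundancy_paths >= 0.70   # all TRUE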

4.3 Indicator collinearity

To check the formative measurement models for collinearity, we inspect the model summary object summary_corp_rep_ext for the indicator variance inflation factor (VIF) values by calling summary_corp_rep_ext$validity$vif_items.

VIF values of 5 or above indicate critical collinearity issues. However, collinearity problems can already occur at VIF values of 3 or higher (Becker et al., 2015; Mason & Perreault Jr, 1991).

summary_corp_rep_ext$validity$vif_items
## QUAL :
## qual_1 qual_2 qual_3 qual_4 qual_5 qual_6 qual_7 qual_8 
##  1.806  1.632  2.269  1.957  2.201  2.008  1.623  1.362 
## 
## PERF :
## perf_1 perf_2 perf_3 perf_4 perf_5 
##  1.560  1.506  1.229  1.316  1.331 
## 
## CSOR :
## csor_1 csor_2 csor_3 csor_4 csor_5 
##  1.560  1.487  1.735  1.556  1.712 
## 
## ATTR :
## attr_1 attr_2 attr_3 
##  1.275  1.129  1.264 
## 
## COMP :
## comp_1 comp_2 comp_3 
##  1.397  1.787  1.888 
## 
## LIKE :
## like_1 like_2 like_3 
##  1.945  2.000  1.811 
## 
## CUSA :
## cusa 
##    1 
## 
## CUSL :
## cusl_1 cusl_2 cusl_3 
##  1.802  2.564  1.933

4.4 Significance and relevance of the indicator weights

We consider the significance of the indicator weights by means of bootstrapping (bootstrap_model()). We assign the output to the boot_corp_rep_ext object and apply summary() with a significance level of 5% (alpha = 0.05) for two-tailed testing.

boot_corp_rep_ext <- bootstrap_model(
  seminr_model = corp_rep_pls_model_ext,
  nboot = 1000,
  cores = parallel::detectCores(), 
  seed = 123)
## Bootstrapping model using seminr...
## SEMinR Model successfully bootstrapped

summary_boot_corp_rep_ext <- summary(boot_corp_rep_ext, alpha = 0.05)


To assess the significance of the indicator weights in the formative measurement models, we inspect the summary_boot_corp_rep_ext$bootstrapped_weights object.

summary_boot_corp_rep_ext$bootstrapped_weights
##                  Original Est. Bootstrap Mean Bootstrap SD T Stat. 2.5% CI 97.5% CI
## qual_1  ->  QUAL         0.202          0.207        0.061   3.338   0.093    0.321
## qual_2  ->  QUAL         0.041          0.040        0.051   0.808  -0.063    0.135
## qual_3  ->  QUAL         0.106          0.104        0.065   1.629  -0.027    0.221
## qual_4  ->  QUAL        -0.005         -0.004        0.054  -0.085  -0.103    0.103
## qual_5  ->  QUAL         0.160          0.160        0.059   2.714   0.044    0.268
## qual_6  ->  QUAL         0.398          0.394        0.064   6.224   0.259    0.512
## qual_7  ->  QUAL         0.229          0.224        0.057   4.006   0.109    0.333
## qual_8  ->  QUAL         0.190          0.190        0.061   3.099   0.070    0.312
## perf_1  ->  PERF         0.468          0.465        0.068   6.887   0.326    0.594
## perf_2  ->  PERF         0.177          0.180        0.068   2.584   0.050    0.314
## perf_3  ->  PERF         0.194          0.189        0.054   3.603   0.089    0.304
## perf_4  ->  PERF         0.340          0.339        0.072   4.746   0.201    0.485
## perf_5  ->  PERF         0.199          0.197        0.062   3.184   0.075    0.323
## csor_1  ->  CSOR         0.306          0.300        0.083   3.671   0.139    0.467
## csor_2  ->  CSOR         0.037          0.035        0.069   0.536  -0.097    0.173
## csor_3  ->  CSOR         0.406          0.406        0.083   4.863   0.241    0.563
## csor_4  ->  CSOR         0.080          0.079        0.076   1.058  -0.076    0.220
## csor_5  ->  CSOR         0.416          0.416        0.089   4.662   0.245    0.607
## attr_1  ->  ATTR         0.414          0.415        0.071   5.848   0.273    0.544
## attr_2  ->  ATTR         0.201          0.196        0.063   3.165   0.074    0.322
## attr_3  ->  ATTR         0.658          0.655        0.062  10.549   0.537    0.770
## comp_1  ->  COMP         0.469          0.468        0.021  22.413   0.429    0.512
## comp_2  ->  COMP         0.365          0.366        0.017  21.421   0.335    0.400
## comp_3  ->  COMP         0.372          0.373        0.014  26.068   0.346    0.401
## like_1  ->  LIKE         0.419          0.420        0.014  29.343   0.393    0.448
## like_2  ->  LIKE         0.374          0.374        0.013  28.576   0.351    0.401
## like_3  ->  LIKE         0.363          0.363        0.014  26.477   0.337    0.390
## cusa  ->  CUSA           1.000          1.000        0.000       .   1.000    1.000
## cusl_1  ->  CUSL         0.369          0.369        0.016  23.494   0.338    0.401
## cusl_2  ->  CUSL         0.420          0.421        0.015  28.972   0.395    0.452
## cusl_3  ->  CUSL         0.365          0.365        0.015  24.427   0.335    0.393


To assess the indicators' relevance in the formative measurement models, we inspect the summary_boot_corp_rep_ext$bootstrapped_loadings object.

summary_boot_corp_rep_ext$bootstrapped_loadings
##                  Original Est. Bootstrap Mean Bootstrap SD T Stat. 2.5% CI 97.5% CI
## qual_1  ->  QUAL         0.741          0.738        0.045  16.619   0.640    0.819
## qual_2  ->  QUAL         0.570          0.568        0.054  10.636   0.454    0.668
## qual_3  ->  QUAL         0.749          0.744        0.039  19.281   0.663    0.815
## qual_4  ->  QUAL         0.664          0.658        0.045  14.606   0.567    0.738
## qual_5  ->  QUAL         0.787          0.780        0.034  23.222   0.711    0.842
## qual_6  ->  QUAL         0.856          0.848        0.031  27.547   0.781    0.901
## qual_7  ->  QUAL         0.722          0.713        0.042  17.090   0.626    0.790
## qual_8  ->  QUAL         0.627          0.622        0.049  12.706   0.520    0.711
## perf_1  ->  PERF         0.846          0.839        0.035  24.055   0.766    0.902
## perf_2  ->  PERF         0.690          0.686        0.047  14.665   0.587    0.775
## perf_3  ->  PERF         0.573          0.568        0.051  11.156   0.466    0.664
## perf_4  ->  PERF         0.717          0.715        0.050  14.249   0.614    0.815
## perf_5  ->  PERF         0.638          0.634        0.059  10.760   0.507    0.739
## csor_1  ->  CSOR         0.771          0.761        0.050  15.397   0.652    0.852
## csor_2  ->  CSOR         0.571          0.562        0.060   9.432   0.437    0.671
## csor_3  ->  CSOR         0.838          0.830        0.043  19.467   0.737    0.904
## csor_4  ->  CSOR         0.617          0.610        0.057  10.745   0.492    0.716
## csor_5  ->  CSOR         0.848          0.841        0.043  19.726   0.749    0.917
## attr_1  ->  ATTR         0.754          0.753        0.051  14.919   0.645    0.841
## attr_2  ->  ATTR         0.506          0.501        0.066   7.609   0.362    0.626
## attr_3  ->  ATTR         0.891          0.887        0.033  26.678   0.816    0.946
## comp_1  ->  COMP         0.824          0.822        0.021  39.717   0.778    0.858
## comp_2  ->  COMP         0.821          0.821        0.020  40.457   0.781    0.856
## comp_3  ->  COMP         0.844          0.843        0.019  43.658   0.804    0.878
## like_1  ->  LIKE         0.880          0.880        0.016  55.828   0.847    0.907
## like_2  ->  LIKE         0.869          0.867        0.017  50.469   0.832    0.899
## like_3  ->  LIKE         0.844          0.844        0.019  44.828   0.803    0.879
## cusa  ->  CUSA           1.000          1.000        0.000       .   1.000    1.000
## cusl_1  ->  CUSL         0.833          0.832        0.024  35.285   0.780    0.874
## cusl_2  ->  CUSL         0.917          0.917        0.010  89.030   0.894    0.935
## cusl_3  ->  CUSL         0.843          0.842        0.023  37.177   0.793    0.881


Indicators with significant weights can be retained. If an indicator's weight is not significant, the indicator should be considered for removal from the measurement model. However, such an indicator can still be retained if its loading is significant or at least 0.50, as this suggests that the indicator makes a sufficient absolute contribution to forming the construct.
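
A minimal sketch of this decision rule is shown below. It assumes that the bootstrapped_weights matrix carries the column names exactly as printed above ("2.5% CI" and "97.5% CI"); if the names differ, the columns can be addressed by position instead.

# Flag indicator weights whose 95% bootstrap CI includes zero
# (here: qual_2, qual_3, qual_4, csor_2, and csor_4, see the table above)
w <- summary_boot_corp_rep_ext$bootstrapped_weights
rownames(w)[w[, "2.5% CI"] < 0 & w[, "97.5% CI"] > 0]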

5 Evaluation of the structural model (Chapter 6)

5.1 Collinearity issues

To examine the VIF values for the predictor constructs, we inspect the vif_antecedents element within the summary_corp_rep_ext object. VIF values above 5 (or, more conservatively, above 3) may indicate collinearity issues among the predictor constructs.

summary_corp_rep_ext$vif_antecedents
## COMP :
##  QUAL  PERF  CSOR  ATTR 
## 3.487 2.889 2.083 2.122 
## 
## LIKE :
##  QUAL  PERF  CSOR  ATTR 
## 3.487 2.889 2.083 2.122 
## 
## CUSA :
##  COMP  LIKE 
## 1.686 1.686 
## 
## CUSL :
##  COMP  LIKE  CUSA 
## 1.716 1.954 1.412

5.2 Significance and relevance of the structural model relationships

To evaluate the relevance and significance of the structural paths, we inspect the bootstrapped_paths element nested in the summary_boot_corp_rep_ext object.

summary_boot_corp_rep_ext$bootstrapped_paths
##                Original Est. Bootstrap Mean Bootstrap SD T Stat. 2.5% CI 97.5% CI
## QUAL  ->  COMP         0.430          0.431        0.065   6.603   0.303    0.552
## QUAL  ->  LIKE         0.380          0.384        0.067   5.699   0.253    0.514
## PERF  ->  COMP         0.295          0.301        0.064   4.611   0.173    0.422
## PERF  ->  LIKE         0.117          0.123        0.073   1.613  -0.011    0.261
## CSOR  ->  COMP         0.059          0.059        0.054   1.084  -0.044    0.165
## CSOR  ->  LIKE         0.178          0.177        0.056   3.205   0.065    0.282
## ATTR  ->  COMP         0.086          0.084        0.055   1.565  -0.018    0.194
## ATTR  ->  LIKE         0.167          0.165        0.065   2.573   0.034    0.291
## COMP  ->  CUSA         0.146          0.147        0.071   2.047   0.007    0.281
## COMP  ->  CUSL         0.006          0.006        0.055   0.104  -0.104    0.115
## LIKE  ->  CUSA         0.436          0.435        0.062   7.069   0.312    0.555
## LIKE  ->  CUSL         0.344          0.343        0.056   6.175   0.231    0.449
## CUSA  ->  CUSL         0.505          0.505        0.042  12.074   0.420    0.586


The total effects help to assess the impact of the four exogenous driver constructs (ATTR, CSOR, PERF, and QUAL) on the outcome constructs CUSA and CUSL. They are stored in the bootstrapped_total_paths element of summary_boot_corp_rep_ext.

summary_boot_corp_rep_ext$bootstrapped_total_paths
##                Original Est. Bootstrap Mean Bootstrap SD T Stat. 2.5% CI 97.5% CI
## QUAL  ->  COMP         0.430          0.431        0.065   6.603   0.303    0.552
## QUAL  ->  LIKE         0.380          0.384        0.067   5.699   0.253    0.514
## QUAL  ->  CUSA         0.228          0.230        0.039   5.923   0.154    0.310
## QUAL  ->  CUSL         0.248          0.251        0.044   5.679   0.165    0.337
## PERF  ->  COMP         0.295          0.301        0.064   4.611   0.173    0.422
## PERF  ->  LIKE         0.117          0.123        0.073   1.613  -0.011    0.261
## PERF  ->  CUSA         0.094          0.098        0.040   2.373   0.024    0.179
## PERF  ->  CUSL         0.089          0.094        0.045   1.968   0.009    0.180
## CSOR  ->  COMP         0.059          0.059        0.054   1.084  -0.044    0.165
## CSOR  ->  LIKE         0.178          0.177        0.056   3.205   0.065    0.282
## CSOR  ->  CUSA         0.086          0.086        0.028   3.133   0.031    0.138
## CSOR  ->  CUSL         0.105          0.105        0.033   3.166   0.038    0.172
## ATTR  ->  COMP         0.086          0.084        0.055   1.565  -0.018    0.194
## ATTR  ->  LIKE         0.167          0.165        0.065   2.573   0.034    0.291
## ATTR  ->  CUSA         0.085          0.084        0.031   2.731   0.026    0.147
## ATTR  ->  CUSL         0.101          0.100        0.038   2.652   0.028    0.178
## COMP  ->  CUSA         0.146          0.147        0.071   2.047   0.007    0.281
## COMP  ->  CUSL         0.079          0.081        0.069   1.155  -0.052    0.213
## LIKE  ->  CUSA         0.436          0.435        0.062   7.069   0.312    0.555
## LIKE  ->  CUSL         0.564          0.563        0.061   9.219   0.444    0.676
## CUSA  ->  CUSL         0.505          0.505        0.042  12.074   0.420    0.586

5.3 Explanatory power

To consider the model’s explanatory power we analyze the R2 of the endogenous constructs and the f2 effect size of the predictor constructs.

R2 and adjusted R2 can be obtained from the summary_corp_rep_ext$paths element.

summary_corp_rep_ext$paths
##         COMP  LIKE  CUSA  CUSL
## R^2    0.631 0.558 0.292 0.562
## AdjR^2 0.627 0.552 0.288 0.558
## QUAL   0.430 0.380     .     .
## PERF   0.295 0.117     .     .
## CSOR   0.059 0.178     .     .
## ATTR   0.086 0.167     .     .
## COMP       .     . 0.146 0.006
## LIKE       .     . 0.436 0.344
## CUSA       .     .     . 0.505


The f2 effect size is stored in summary_corp_rep_ext$fSquare.

summary_corp_rep_ext$fSquare
##       QUAL  PERF  CSOR  ATTR  COMP  LIKE  CUSA  CUSL
## QUAL 0.000 0.000 0.000 0.000 0.144 0.094 0.000 0.000
## PERF 0.000 0.000 0.000 0.000 0.076 0.011 0.000 0.000
## CSOR 0.000 0.000 0.000 0.000 0.005 0.034 0.000 0.000
## ATTR 0.000 0.000 0.000 0.000 0.009 0.030 0.000 0.000
## COMP 0.000 0.000 0.000 0.000 0.000 0.000 0.018 0.000
## LIKE 0.000 0.000 0.000 0.000 0.000 0.000 0.159 0.138
## CUSA 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.403
## CUSL 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000

5.4 Predictive power

5.4.1 Generate predictions

To evaluate the model’s predictive power, we generate the predictions using the predict_pls() function.

We run the PLSpredict procedure with k = 10 folds (noFolds = 10) and 10 repetitions (reps = 10). In addition, we use the direct antecedents approach (technique = predict_DA) to consider both the antecedents and the mediator as predictors of the outcome. Initial simulation evidence suggests a higher accuracy for the DA approach compared to the competing earliest antecedents (EA) approach, which excludes the mediator(s) from the analysis (Ray et al., 2017).

Finally, we summarize the PLSpredict model and assign the output to the sum_predict_corp_rep_ext object.

predict_corp_rep_ext <- predict_pls(
  model = corp_rep_pls_model_ext,
  technique = predict_DA,
  noFolds = 10,
  reps = 10)

sum_predict_corp_rep_ext <- summary(predict_corp_rep_ext)

5.4.2 Inspect prediction errors

Two popular prediction statistics that quantify the amount of prediction error are the mean absolute error (MAE) and the root mean squared error (RMSE).

If the prediction error is highly skewed, the MAE is a more appropriate metric than the RMSE. To assess the distribution of the prediction error, we apply the plot() function to the object sum_predict_corp_rep_ext and set the indicator argument to the indicator of interest, here cusl_1, cusl_2, and cusl_3 in turn.

Note that calling par(mfrow = c(1, 3)) arranges the three output plots horizontally; par(mfrow = c(1, 1)) resets this plotting option afterwards.

par(mfrow=c(1,3))
plot(sum_predict_corp_rep_ext,
  indicator = "cusl_1")
plot(sum_predict_corp_rep_ext,
  indicator = "cusl_2")
plot(sum_predict_corp_rep_ext,
  indicator = "cusl_3")
par(mfrow=c(1,1))

We get the prediction errors by inspecting sum_predict_corp_rep_ext.

The prediction statistics’ raw values do not carry much meaning. Hence, researchers need to compare the RMSE (or MAE) values with a naïve linear regression model (LM) benchmark (Danks & Ray, 2018).

In comparing the RMSE (or MAE) values with the LM values, the following guidelines apply (Shmueli et al., 2019):

  • High predictive power: All indicators in the PLS-SEM analysis have lower RMSE (or MAE) values compared to the LM.
  • Medium predictive power: The majority (or the same number) of the indicators in the PLS-SEM analysis have lower RMSE (or MAE) values compared to the LM.
  • Low predictive power: A minority of the indicators in the PLS-SEM analysis have lower RMSE (or MAE) values compared to the LM.
  • Lack of predictive power: None of the indicators in the PLS-SEM analysis have lower RMSE (or MAE) values compared to the LM.

sum_predict_corp_rep_ext
## 
## PLS in-sample metrics:
##      comp_1 comp_2 comp_3 like_1 like_2 like_3  cusa cusl_1 cusl_2 cusl_3
## RMSE  1.023  1.080  1.111  1.102  1.453  1.479 0.985  1.181  1.225  1.298
## MAE   0.784  0.867  0.881  0.838  1.127  1.147 0.769  0.874  0.908  0.947
## 
## PLS out-of-sample metrics:
##      comp_1 comp_2 comp_3 like_1 like_2 like_3  cusa cusl_1 cusl_2 cusl_3
## RMSE  1.046  1.104  1.137  1.125  1.478  1.506 0.994  1.191  1.244  1.322
## MAE   0.799  0.885  0.897  0.856  1.145  1.168 0.777  0.880  0.923  0.964
## 
## LM in-sample metrics:
##      comp_1 comp_2 comp_3 like_1 like_2 like_3  cusa cusl_1 cusl_2 cusl_3
## RMSE  0.939  1.012  1.008  0.970  1.320  1.351 0.769  1.085  1.159  1.221
## MAE   0.717  0.800  0.786  0.746  1.022  1.052 0.608  0.794  0.872  0.910
## 
## LM out-of-sample metrics:
##      comp_1 comp_2 comp_3 like_1 like_2 like_3  cusa cusl_1 cusl_2 cusl_3
## RMSE  1.054  1.136  1.161  1.099  1.467  1.505 0.889  1.236  1.322  1.388
## MAE   0.803  0.894  0.895  0.828  1.123  1.169 0.693  0.885  0.987  1.035
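
For example, focusing on the outcome construct CUSL and using the out-of-sample RMSE values transcribed from the output above, all three indicators are predicted with a smaller error by PLS-SEM than by the LM benchmark, which is in line with the high predictive power guideline for this construct:

pls_rmse <- c(cusl_1 = 1.191, cusl_2 = 1.244, cusl_3 = 1.322)   # PLS out-of-sample RMSE
lm_rmse  <- c(cusl_1 = 1.236, cusl_2 = 1.322, cusl_3 = 1.388)   # LM out-of-sample RMSE
pls_rmse < lm_rmse   # all TRUE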

5.5 Predictive model comparisons

To perform model comparison, we set up three theoretically justifiable competing models (Model 1, Model 2, and Model 3 in Fig. 4).

Fig. 4 Alternative Models.

The models share the same measurement model, so we need to specify it only once. However, each model has a unique structural model; we therefore specify three structural models and assign them to structural_model1, structural_model2, and structural_model3. We then estimate three separate PLS models and summarize the results in sum_model1, sum_model2, and sum_model3.

# Create measurement model
measurement_model <- constructs(
  composite("QUAL", multi_items("qual_", 1:8), weights = mode_B),
  composite("PERF", multi_items("perf_", 1:5), weights = mode_B),
  composite("CSOR", multi_items("csor_", 1:5), weights = mode_B),
  composite("ATTR", multi_items("attr_", 1:3), weights = mode_B),
  composite("COMP", multi_items("comp_", 1:3)),
  composite("LIKE", multi_items("like_", 1:3)),
  composite("CUSA", single_item("cusa")),
  composite("CUSL", multi_items("cusl_", 1:3))
)

# Create structural models
# Model 1
structural_model1 <- relationships(
  paths(from = c("QUAL","PERF","CSOR","ATTR"), to = c("COMP", "LIKE")),
  paths(from = c("COMP","LIKE"),  to = c("CUSA", "CUSL")),
  paths(from = "CUSA", to = c("CUSL"))
)
# Model 2
structural_model2 <- relationships(
  paths(from = c("QUAL","PERF","CSOR","ATTR"), to = c("COMP", "LIKE", "CUSA")),
  paths(from = c("COMP","LIKE"),  to = c("CUSA", "CUSL")),
  paths(from = "CUSA", to = c("CUSL"))
)
# Model 3
structural_model3 <- relationships(
  paths(from = c("QUAL","PERF","CSOR","ATTR"), 
        to = c("COMP", "LIKE", "CUSA", "CUSL")),
  paths(from = c("COMP","LIKE"),  to = c("CUSA", "CUSL")),
  paths(from = "CUSA", to = c("CUSL"))
)

# Estimate and summarize the models
pls_model1 <- estimate_pls(
  data  = corp_rep_data,
  measurement_model = measurement_model,
  structural_model  = structural_model1,
  missing_value = "-99"
)
## Generating the seminr model
## All 344 observations are valid.
sum_model1 <- summary(pls_model1)

pls_model2 <- estimate_pls(
  data  = corp_rep_data,
  measurement_model = measurement_model,
  structural_model  = structural_model2,
  missing_value = "-99"
)
## Generating the seminr model
## All 344 observations are valid.
sum_model2 <- summary(pls_model2)

pls_model3 <- estimate_pls(
  data  = corp_rep_data,
  measurement_model = measurement_model,
  structural_model  = structural_model3,
  missing_value = "-99"
)
## Generating the seminr model
## All 344 observations are valid.
sum_model3 <- summary(pls_model3)


The matrix of the model’s information criteria includes the AIC and BIC value for each outcome construct and can be accessed by inspecting the it_criteria element in the summary objects (e.g. sum_model1$it_criteria).

sum_model1$it_criteria
##         COMP     LIKE     CUSA     CUSL
## AIC -333.825 -271.581 -113.728 -276.964
## BIC -314.622 -252.378 -102.206 -261.602
sum_model2$it_criteria
##         COMP     LIKE     CUSA     CUSL
## AIC -323.388 -275.188 -120.849 -276.965
## BIC -304.185 -255.985  -93.965 -261.603
sum_model3$it_criteria
##         COMP     LIKE     CUSA     CUSL
## AIC -316.781 -275.871 -124.286 -275.763
## BIC -297.577 -256.668  -97.401 -245.038


To extract and compare Schwarz’s (1978) Bayesian information criterion (BIC) for CUSA, we subset this matrix to return only the BIC row of the CUSA column (e.g. sum_model1$it_criteria["BIC", "CUSA"]).

To compare the BIC values for the three models, we assign each model’s BIC for CUSA to the vector itcriteria_vector and name the vector’s elements using the names() function. The model with the lowest BIC value should be selected.

itcriteria_vector <- c(sum_model1$it_criteria["BIC", "CUSA"],
                       sum_model2$it_criteria["BIC", "CUSA"],
                       sum_model3$it_criteria["BIC", "CUSA"])

names(itcriteria_vector) <- c("Model1", "Model2", "Model3")

itcriteria_vector
##     Model1     Model2     Model3 
## -102.20623  -93.96473  -97.40109
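
As a quick programmatic check (a small sketch), we can pick the model with the lowest BIC directly from this vector:

# Name of the model with the lowest BIC
names(which.min(itcriteria_vector))  # "Model1"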


Finally, we inspect itcriteria_vector and calculate the BIC-based Akaike weights using the compute_itcriteria_weights() function.

Akaike weights indicate a model’s relative likelihood and facilitate model comparisons in light of model selection uncertainty that may arise when BIC differences are small (Danks et al., 2020).

The higher a model’s Akaike weight, the more likely it is that this model best approximates the data-generating model. We compute the Akaike weights by passing itcriteria_vector to the compute_itcriteria_weights() function.

compute_itcriteria_weights(itcriteria_vector)
##     Model1     Model2     Model3 
## 0.90357310 0.01466713 0.08175977
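
For intuition, the weights can be reproduced by hand from the BIC differences using the standard Akaike-weight formula \(\small w_i = \exp(-0.5\,\Delta_i) / \sum_j \exp(-0.5\,\Delta_j)\) with \(\small \Delta_i = BIC_i - \min_j BIC_j\); a minimal sketch:

# BIC differences relative to the best (lowest-BIC) model
delta <- itcriteria_vector - min(itcriteria_vector)

# Akaike weights; should reproduce the compute_itcriteria_weights() output above
exp(-0.5 * delta) / sum(exp(-0.5 * delta))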

6 Mediation analysis (Chapter 7)

Mediation occurs when a construct, referred to as mediator construct, intervenes between two other related constructs (see Fig. 5 for a generic mediation model). Direct effects are the relationships linking two constructs with a single arrow (p1, p2, and p3). Indirect effects represent a sequence of relationships with at least one intervening construct. Thus, an indirect effect is a sequence of two or more direct effects (p1 and p2) and is represented visually by multiple arrows. Fig. 6 shows two mediation effects in the corporate reputation model, i.e. from COMP through CUSA to CUSL and from LIKE through CUSA to CUSL.
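
In terms of the path labels in Fig. 5, the indirect effect of the exogenous construct on the outcome is the product \(\small p_1 \cdot p_2\), and the total effect is the sum of the direct and indirect effects, \(\small p_3 + p_1 \cdot p_2\).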

**Fig. 5** Generic mediation model.

Fig. 5 Generic mediation model.


**Fig. 6** Corporate reputation model with highlighted mediation paths (dotted lines).

Fig. 6 Corporate reputation model with highlighted mediation paths (dotted lines).


The results for the indirect effects (\(\small p_1 \cdot p_2\)) can be found by inspecting the total_indirect_effects element within the summary_corp_rep_ext object.

summary_corp_rep_ext$total_indirect_effects
##       QUAL  PERF  CSOR  ATTR  COMP  LIKE  CUSA  CUSL
## QUAL 0.000 0.000 0.000 0.000 0.000 0.000 0.228 0.248
## PERF 0.000 0.000 0.000 0.000 0.000 0.000 0.094 0.089
## CSOR 0.000 0.000 0.000 0.000 0.000 0.000 0.086 0.105
## ATTR 0.000 0.000 0.000 0.000 0.000 0.000 0.085 0.101
## COMP 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.074
## LIKE 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.220
## CUSA 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## CUSL 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000


Specific indirect paths can be evaluated for significance by using the specific_effect_significance() function. This function takes a bootstrapped model object (boot_seminr_model = boot_corp_rep_ext), an antecedent construct name (from = "COMP", respectively from = "LIKE"), a mediator construct name (through = "CUSA"), and an outcome construct name (to = "CUSL") as arguments and returns the confidence interval (e.g. with alpha = 0.05) for the specific indirect path from the antecedent through the mediator to the outcome construct.

Note that for serial mediation models, the through argument can take multiple mediating constructs as arguments (e.g. through = c("construct1", "construct2")).

For example, the specific indirect effects for the paths from COMP through CUSA to CUSL and LIKE through CUSA to CUSL are:

specific_effect_significance(boot_seminr_model = boot_corp_rep_ext, 
  from = "COMP", 
  through = "CUSA", 
  to = "CUSL", 
  alpha = 0.05)
##  Original Est. Bootstrap Mean   Bootstrap SD        T Stat.       0.05% CI       0.95% CI 
##     0.07350093     0.07443445     0.03651814     2.01272396     0.00296709     0.14620781
specific_effect_significance(boot_corp_rep_ext, 
  from = "LIKE", 
  through = "CUSA", 
  to = "CUSL", 
  alpha = 0.05)
##  Original Est. Bootstrap Mean   Bootstrap SD        T Stat.       0.05% CI       0.95% CI 
##     0.22001302     0.21993527     0.03687955     5.96571802     0.14917504     0.29318977


We consider both the indirect effect (\(\small p_1 \cdot p_2\)) and the direct effect (\(\small p_3\)) to determine the type of mediation or non-mediation (Zhao et al., 2010); a small classification sketch follows the two lists below.

Zhao et al. (2010) identify three types of mediation:

  • Complementary mediation: The indirect effect as well as the direct effect are significant and point in the same direction. The product of indirect and direct effects (\(\small p_1 \cdot p_2 \cdot p_3\)) has a positive sign.
  • Competitive mediation: The indirect effect as well as the direct effect are significant, but point in opposite directions. The product of indirect and direct effects (\(\small p_1 \cdot p_2 \cdot p_3\)) has a negative sign.
  • Indirect-only mediation: The indirect effect is significant but not the direct effect.

In addition, they characterize two types of non-mediation:

  • Direct-only non-mediation: The direct effect is significant but not the indirect effect.
  • No-effect non-mediation: Neither the direct nor the indirect effect are significant.
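
The following is a minimal sketch that maps these decision rules to code (the helper classify_mediation() is hypothetical, not part of SEMinR). Here, indirect_sig and direct_sig indicate whether the respective bootstrap confidence interval excludes zero, and product_sign is the sign of \(\small p_1 \cdot p_2 \cdot p_3\).

# Hypothetical helper implementing the Zhao et al. (2010) decision rules
classify_mediation <- function(indirect_sig, direct_sig, product_sign) {
  if (indirect_sig && direct_sig) {
    if (product_sign > 0) "complementary mediation" else "competitive mediation"
  } else if (indirect_sig && !direct_sig) {
    "indirect-only mediation"
  } else if (!indirect_sig && direct_sig) {
    "direct-only non-mediation"
  } else {
    "no-effect non-mediation"
  }
}

# Example: significant indirect and direct effects with a positive product
classify_mediation(indirect_sig = TRUE, direct_sig = TRUE, product_sign = 1)
# -> "complementary mediation"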

The direct effects (e.g. COMP on CUSL and LIKE on CUSL) can be accessed by inspecting summary_corp_rep_ext$paths. The confidence intervals for the direct effects are stored in summary_boot_corp_rep_ext$bootstrapped_paths.

summary_corp_rep_ext$paths
##         COMP  LIKE  CUSA  CUSL
## R^2    0.631 0.558 0.292 0.562
## AdjR^2 0.627 0.552 0.288 0.558
## QUAL   0.430 0.380     .     .
## PERF   0.295 0.117     .     .
## CSOR   0.059 0.178     .     .
## ATTR   0.086 0.167     .     .
## COMP       .     . 0.146 0.006
## LIKE       .     . 0.436 0.344
## CUSA       .     .     . 0.505
summary_boot_corp_rep_ext$bootstrapped_paths
##                Original Est. Bootstrap Mean Bootstrap SD T Stat. 2.5% CI 97.5% CI
## QUAL  ->  COMP         0.430          0.431        0.065   6.603   0.303    0.552
## QUAL  ->  LIKE         0.380          0.384        0.067   5.699   0.253    0.514
## PERF  ->  COMP         0.295          0.301        0.064   4.611   0.173    0.422
## PERF  ->  LIKE         0.117          0.123        0.073   1.613  -0.011    0.261
## CSOR  ->  COMP         0.059          0.059        0.054   1.084  -0.044    0.165
## CSOR  ->  LIKE         0.178          0.177        0.056   3.205   0.065    0.282
## ATTR  ->  COMP         0.086          0.084        0.055   1.565  -0.018    0.194
## ATTR  ->  LIKE         0.167          0.165        0.065   2.573   0.034    0.291
## COMP  ->  CUSA         0.146          0.147        0.071   2.047   0.007    0.281
## COMP  ->  CUSL         0.006          0.006        0.055   0.104  -0.104    0.115
## LIKE  ->  CUSA         0.436          0.435        0.062   7.069   0.312    0.555
## LIKE  ->  CUSL         0.344          0.343        0.056   6.175   0.231    0.449
## CUSA  ->  CUSL         0.505          0.505        0.042  12.074   0.420    0.586


To evaluate whether CUSA acts as a complementary or competitive mediator for the effect of LIKE (and of COMP) on CUSL, we determine whether the product of indirect and direct effects (\(\small p_1 \cdot p_2 \cdot p_3\)) has a positive or negative sign.

We can subset the path matrix to access the path coefficients for p1, p2, and p3 (e.g. from LIKE to CUSA: summary_corp_rep_ext$paths["LIKE", "CUSA"]).

# Calculate the sign of p1*p2*p3 for LIKE as antecedent
summary_corp_rep_ext$paths["LIKE", "CUSL"] *
  summary_corp_rep_ext$paths["LIKE","CUSA"] * 
  summary_corp_rep_ext$paths["CUSA","CUSL"]
## [1] 0.07569007
# Calculate the sign of p1*p2*p3 for COMP as antecedent
summary_corp_rep_ext$paths["COMP", "CUSL"] *
  summary_corp_rep_ext$paths["COMP","CUSA"] * 
  summary_corp_rep_ext$paths["CUSA","CUSL"]
## [1] 0.0004163559

7 Moderation analysis (Chapter 8)

Moderation describes a situation in which the relationship between two constructs is not constant but depends on the values of a third variable, referred to as a moderator variable. The moderator variable (or construct) changes the strength or even the direction of a relationship between two constructs in the model.

Specifically, we introduce perceived switching costs (SC) as a moderator variable that can be assumed to negatively influence the relationship between CUSA and CUSL (Fig. 7). We assume that the higher SC, the weaker the relationship between these two constructs. We use an extended form of Jones, Mothersbaugh, and Beatty’s (2000) scale and measure switching costs reflectively using four indicators (Tab. 5).
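
Formally, using generic standardized coefficients \(\small \beta_1\), \(\small \beta_2\), and \(\small \beta_3\) as placeholders (not estimates from our model), the moderated relationship can be written as \(\small CUSL = \beta_1 \cdot CUSA + \beta_2 \cdot SC + \beta_3 \cdot (CUSA \times SC) + \varepsilon\), so that the effect of CUSA on CUSL equals \(\small \beta_1 + \beta_3 \cdot SC\). A negative \(\small \beta_3\) therefore implies that this effect weakens as SC increases.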

**Fig. 7** Corporate reputation model with the added moderator Switching Costs (SC) and interaction term (CUSA * SC).

Fig. 7 Corporate reputation model with the added moderator Switching Costs (SC) and interaction term (CUSA * SC).


Tab. 5 Measurement details for the corporate reputation model with the added moderator switching costs (SC).
construct variable name item
Switching Cost (SC) switch_1 It takes me a great deal of time to switch to another company.
Switching Cost (SC) switch_2 It costs me too much to switch to another company.
Switching Cost (SC) switch_3 It takes a lot of effort to get used to a new company with its specific ‘rules’ and practices.
Switching Cost (SC) switch_4 In general, it would be a hassle switching to another company.

Interaction terms are described in the measurement model function constructs() using the following methods:

  • product_indicator generates the interaction term by multiplying each indicator of the exogenous construct with each indicator of the moderator variable.
  • orthogonal is an extension of the product indicator approach, which generates an interaction term whose indicators do not share any variance with the indicators of the exogenous construct and the moderator. The orthogonalizing approach is typically used to handle multicollinearity in the structural model.
  • two_stage specifies the interaction term as the product of the latent variable scores of the exogenous construct and the moderator variable.

While specifying the measurement model, we add the interaction term between the independent variable CUSA and the moderator variable SC and apply the two-stage approach as follows: interaction_term(iv = "CUSA", moderator = "SC", method = two_stage).

Further, we add the path linking CUSA*SC to CUSL to the structural model in paths(from = c("CUSA", "SC", "CUSA*SC"), to = c("CUSL")).

# Create the measurement model
corp_rep_mm_mod <- constructs(
  composite("QUAL", multi_items("qual_", 1:8), weights = mode_B),
  composite("PERF", multi_items("perf_", 1:5), weights = mode_B),
  composite("CSOR", multi_items("csor_", 1:5), weights = mode_B),
  composite("ATTR", multi_items("attr_", 1:3), weights = mode_B),
  composite("COMP", multi_items("comp_", 1:3)),
  composite("LIKE", multi_items("like_", 1:3)),
  composite("CUSA", single_item("cusa")),
  composite("SC", multi_items("switch_", 1:4)),
  composite("CUSL", multi_items("cusl_", 1:3)),
  interaction_term(iv = "CUSA", moderator = "SC", method = two_stage))

# Create the structural model
corp_rep_sm_mod <- relationships(
  paths(from = c("QUAL", "PERF", "CSOR", "ATTR"), to = c("COMP", "LIKE")),
  paths(from = c("COMP", "LIKE"), to = c("CUSA", "CUSL")),
  paths(from = c("CUSA", "SC", "CUSA*SC"), to = c("CUSL"))
)


Next, we estimate the model with estimate_pls(), assign it to corp_rep_pls_model_mod, and store its summary in sum_corp_rep_mod. Further, we apply bootstrapping and summarize the bootstrapped model with an alpha level of 5% (summary(boot_corp_rep_mod, alpha = 0.05)).

# Estimate the new model with moderator
corp_rep_pls_model_mod <- estimate_pls(
  data = corp_rep_data,
  measurement_model = corp_rep_mm_mod,
  structural_model = corp_rep_sm_mod,
  missing = mean_replacement,
  missing_value = "-99"
)
## Generating the seminr model
## All 344 observations are valid.

# Extract the summary
sum_corp_rep_mod <- summary(corp_rep_pls_model_mod)

# Bootstrap the model
boot_corp_rep_mod <- bootstrap_model(
  seminr_model = corp_rep_pls_model_mod,
  nboot = 1000)
## Bootstrapping model using seminr...
## SEMinR Model successfully bootstrapped

# Summarize the results of the bootstrap
sum_boot_corp_rep_mod <- summary(boot_corp_rep_mod, alpha = 0.05)


In order to evaluate the moderating effect, we inspect the bootstrapped_paths element in the sum_boot_corp_rep_mod object.

sum_boot_corp_rep_mod$bootstrapped_paths
##                   Original Est. Bootstrap Mean Bootstrap SD T Stat. 2.5% CI 97.5% CI
## QUAL  ->  COMP            0.430          0.431        0.068   6.323   0.298    0.565
## QUAL  ->  LIKE            0.380          0.387        0.064   5.964   0.264    0.512
## PERF  ->  COMP            0.295          0.296        0.064   4.590   0.164    0.425
## PERF  ->  LIKE            0.117          0.124        0.069   1.696  -0.009    0.257
## CSOR  ->  COMP            0.059          0.063        0.053   1.104  -0.038    0.169
## CSOR  ->  LIKE            0.178          0.175        0.056   3.171   0.064    0.282
## ATTR  ->  COMP            0.086          0.086        0.053   1.620  -0.014    0.188
## ATTR  ->  LIKE            0.167          0.163        0.062   2.700   0.037    0.286
## COMP  ->  CUSA            0.146          0.148        0.065   2.250   0.022    0.280
## COMP  ->  CUSL           -0.020         -0.018        0.057  -0.354  -0.126    0.096
## LIKE  ->  CUSA            0.436          0.435        0.056   7.735   0.326    0.543
## LIKE  ->  CUSL            0.319          0.316        0.059   5.424   0.199    0.427
## CUSA  ->  CUSL            0.467          0.465        0.047   9.891   0.373    0.556
## SC  ->  CUSL              0.071          0.075        0.058   1.214  -0.039    0.194
## CUSA*SC  ->  CUSL        -0.071         -0.072        0.031  -2.276  -0.138   -0.016


To better comprehend the results of the moderator analysis, we use the slope_analysis() function to visualize the two-way interaction effect. To do so, we specify the model (moderated_model = corp_rep_pls_model_mod), the dependent variable (dv = "CUSL"), the moderator (moderator = "SC"), and the independent variable (iv = "CUSA"), as well as the legend’s position (leg_place = "bottomright").

slope_analysis(
  moderated_model = corp_rep_pls_model_mod,
  dv = "CUSL",
  moderator = "SC",
  iv = "CUSA",
  leg_place = "bottomright")
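
As a rough numeric complement to the plot, and assuming standardized construct scores (SD = 1), we can approximate the simple slope of CUSA on CUSL at low, mean, and high switching costs from the estimates above; a minimal sketch:

# Path estimates taken from the bootstrap summary above
b_cusa <- 0.467   # CUSA -> CUSL
b_int  <- -0.071  # CUSA*SC -> CUSL

# Simple slope of CUSA on CUSL at SC = -1 SD, mean, and +1 SD
b_cusa + b_int * c(`-1 SD` = -1, mean = 0, `+1 SD` = 1)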


These results provide clear support that SC exerts a significant and negative moderating effect on the relationship between CUSA and CUSL. The higher the SC, the weaker the relationship between CUSA and CUSL.

References

Becker, J.-M., Ringle, C. M., Sarstedt, M., & Völckner, F. (2015). How collinearity affects mixture regression results. Marketing Letters, 26(4), 643–659. https://doi.org/10.1007/s11002-014-9299-9

Danks, N. P., & Ray, S. (2018). Predictions from partial least squares models. In Applying partial least squares in tourism and hospitality research. Emerald Publishing Limited. https://doi.org/10.1108/978-1-78756-699-620181003

Danks, N. P., Sharma, P. N., & Sarstedt, M. (2020). Model selection uncertainty and multimodel inference in partial least squares structural equation modeling (PLS-SEM). Journal of Business Research, 113, 13–24. https://doi.org/10.1016/j.jbusres.2020.03.019

Diamantopoulos, A., Sarstedt, M., Fuchs, C., Wilczynski, P., & Kaiser, S. (2012). Guidelines for choosing between multi-item and single-item scales for construct measurement: A predictive validity perspective. Journal of the Academy of Marketing Science, 40(3), 434–449. https://doi.org/10.1007/s11747-011-0300-3

Dijkstra, T. K. (2010). Latent variables and indices: Herman Wold’s basic design and partial least squares. In Handbook of partial least squares (pp. 23–46). Springer. https://doi.org/10.1007/978-3-540-32827-8_2

Dijkstra, T. K. (2014). PLS’ Janus face: Response to Professor Rigdon’s “Rethinking partial least squares modeling: In praise of simple methods”. Long Range Planning, 47(3), 146–153. https://doi.org/10.1016/j.lrp.2014.02.004

Dijkstra, T. K., & Henseler, J. (2015). Consistent partial least squares path modeling. MIS Quarterly, 39(2), 297–316. https://doi.org/10.2307/26628355

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.1177/002224378101800104

Franke, G., & Sarstedt, M. (2019). Heuristics versus statistics in discriminant validity testing: A comparison of four procedures. Internet Research. https://doi.org/10.1108/IntR-12-2017-0515

Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2021). A primer on partial least squares structural equation modeling (PLS-SEM) (3rd ed.). SAGE.

Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2–24. https://doi.org/10.1108/EBR-11-2018-0203

Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8

Jones, M. A., Mothersbaugh, D. L., & Beatty, S. E. (2000). Switching barriers and repurchase intentions in services. Journal of Retailing, 76(2), 259–274. https://doi.org/10.1016/S0022-4359(00)00024-5

Jöreskog, K. G. (1971). Simultaneous factor analysis in several populations. Psychometrika, 36(4), 409–426. https://doi.org/10.1007/BF02291366

Mason, C. H., & Perreault Jr, W. D. (1991). Collinearity, power, and interpretation of multiple regression analysis. Journal of Marketing Research, 28(3), 268–280. https://doi.org/10.1177/002224379102800302

Radomir, L., & Moisescu, O. I. (2019). Discriminant validity of the customer-based corporate reputation scale: Some causes for concern. Journal of Product & Brand Management. https://doi.org/10.1108/JPBM-11-2018-2115

Ray, S., Danks, N., & Shmueli, G. (2017, June). The piggy in the middle: The role of mediators in PLS prediction.

Sarstedt, M., Hair Jr, J. F., Cheah, J.-H., Becker, J.-M., & Ringle, C. M. (2019). How to specify, estimate, and validate higher-order constructs in PLS-SEM. Australasian Marketing Journal (AMJ), 27(3), 197–211. https://doi.org/10.1016/j.ausmj.2019.05.003

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6(2), 461–464. https://doi.org/10.1214/aos/1176344136

Shmueli, G., Sarstedt, M., Hair, J. F., Cheah, J.-H., Ting, H., Vaithilingam, S., & Ringle, C. M. (2019). Predictive model assessment in PLS-SEM: Guidelines for using PLSpredict. European Journal of Marketing. https://doi.org/10.1108/EJM-02-2019-0189

Trizano-Hermosilla, I., & Alvarado, J. M. (2016). Best alternatives to Cronbach’s alpha reliability in realistic conditions: Congeneric and asymmetrical measurements. Frontiers in Psychology, 7, 769. https://doi.org/10.3389/fpsyg.2016.00769

Zhao, X., Lynch, J. G., & Chen, Q. (2010). Reconsidering Baron and Kenny: Myths and truths about mediation analysis. Journal of Consumer Research, 37(2), 197–206. https://doi.org/10.1086/651257