Content Validity-Based Evaluation Criteria System for Siting Wind-Solar Plants

The spatial assessment criteria system for hybridizing renewable energy sources, such as hybrid solar-wind farms, is critical in selecting ideal installation sites that maximize benefits, reduce costs, protect the environment, and serve the community. However, a systematic approach to designing indicator systems is rarely used in relevant site selection studies. Therefore, the current paper presents an inclusive framework based on content validity for creating an effective criteria system for siting wind-solar plants. To this end, the criteria considered in the related literature are captured, and the 10 most frequent indicators are identified. The Delphi technique is used to subject these commonly used factors to expert judgment, and further factors are considered according to expert recommendations. In this context, the assessment tool was a combination of questionnaires and interviews with experts whose scientific backgrounds reflect the measurement target. The item-level content validity index (I-CVI) is applied along with the modified Kappa statistic (k*) to analyze the experts' ratings and suggestions. The results show that 9 of the commonly used factors and 4 of the suggested factors were retained. These 13 criteria achieved high agreement among the experts, with I-CVIs ≥ 0.78 and k* values > 0.76. It can be concluded that the modified Kappa statistic used in this analysis has a significant effect on eliminating irrelevant factors. The current methodology and its outcomes might pave the way for making informed decisions on locating wind and solar farms.


Introduction
The depletion of fossil fuel reserves, rising fuel prices, and growing environmental concerns have increased renewable energy (RE) use in recent years [1]. Wind and solar energy are the most promising technologies among the various RE sources, as they are the fastest growing and most mature [2], [3]. Due to the resource fluctuation of single solar or wind energy plants, wind-solar hybrid farms are usually preferred [4]. Such farms increase energy reliability, reduce development costs, and decrease energy storage needs [5]. Nevertheless, highly efficient preplanning is required, and informed decisions about selecting suitable construction sites should be made. The decision is commonly based on multiple and varied factors covering the project's technical, economic, environmental, and social aspects, which are addressed by multi-criteria decision-making (MCDM) methods [6]. Thus, careful consideration should be given to designing a criteria system with efficient content compatible with wind and solar energy to ensure the success of such hybrid farms.
The MCDM efficiency of RE farms is not significantly affected by the increase or decrease in the number of evaluation criteria adopted, but by whether the criteria system is relevant and effectively influences the decision-making outcomes [7], [8]. Therefore, several techniques have been applied in the literature to design related criteria systems. Typically, after the goal is defined, assessment factors are considered based on a review of similar studies [9], [10]. Literature-statistics-based indicators can reflect the authors' experience and the merits of some implemented renewable energy projects. However, redundant factors are introduced in the literature [11]. As another effective method for selecting criteria, expert judgment has been widely considered [12], [13]. Expert opinions can refine literature-based criteria systems and rate them reasonably. Other methods of designing indicator systems, either stand-alone or integrated with the abovementioned methods, have rarely been recorded in the relevant publications. Characteristics of study areas, data availability, national regulations, etc., were considered, as shown in Table 1.
Table 1: Methods for selecting criteria considered in the literature.

Criteria selection based on: literature statistics; national regulations; data availability; expert judgments; study area attributes; authors' suggestion. References [1], [2], [4], [9]–[12], [14]–[40] are grouped by selection basis in the original table.

Previous studies of site suitability assessment for wind-solar farms have sought to design their criteria systems based on various sources. However, to our knowledge, most papers have not disclosed any approach to validating these systems. Despite attempts by some researchers to refine the selected criteria based on experts' opinions, they also did not address the multiplicity and conflict of the judgments issued. An exception to this statement is the study by Ali et al. [36]. Although that study applied the content validity index (CVI) to examine the experts' answers for each criterion questioned, it did not evaluate the entire content of the criteria system or the new factors that the experts might add. It is worth noting that content validity is defined as the degree to which the assessment tool items relate to and describe the targeted domain for a particular assessment purpose [41]. The term "assessment tool" refers to a specific manner of collecting data, such as questionnaires. All features of the measurement process, including survey items and response formats, are referenced in the items of an assessment tool. The domain indicates details of the subject utilized to present the measurement target information [42].
Against this backdrop, we believe a noticeable gap in the literature has not been filled. As such, the present research attempts to improve the practice by implementing the scale-level content validity index (S-CVI) along with the modified Kappa statistic (k*). This way, unnecessary indicators that may waste time and resources are eliminated from the decision-making process. The paper also considers the complementarity of the factors proposed by experts with the commonly used elements in the relevant literature, combining them into a unified criteria system with good content. A case study was organized to design a criteria system for siting wind-solar farms to showcase the suggested framework's performance more convincingly. It is the first report on creating a criteria system for assessing site suitability for wind-solar plants. The rest of the paper describes the methodological framework, which focuses on the content validity steps, the findings and discussion, and the conclusions drawn.

Methodology
The methodological framework of the present study is demonstrated in Figure 1. The methods consist of extracting the commonly used evaluation criteria, refining the criteria system based on expert opinions, and calculating the content validity index.

Extract standard evaluation criteria
A comprehensive review of the factor systems considered in site suitability assessment research for wind and solar hybrid farms worldwide was conducted. The Quality, Similarity, and Latest (QSL) approach was used to select relevant publications. In most published papers, criteria have been divided into evaluation and exclusion factors [21], [34], [38]. However, some researchers have grouped the indicators based on the specific study objective, the decision-making methodology used, or the criteria data type. For example, [37] separated factors into positive and negative.
In contrast, others have grouped criteria based on the type of data into quantitative and qualitative [36]. Regarding project management, criteria have been divided into benefits, opportunities, costs, and risks (BOCR) [24]. Despite the variety of clustering methods, grouping based on evaluation and exclusion factors remains unavoidable, since exclusion criteria are necessary to avoid unsuitable sites and thus reduce the research workload [43]. Furthermore, most indicators used in site suitability assessment for hosting wind and solar projects include restrictive and evaluative thresholds.
The evaluation criteria consider suitable alternatives for RE farm development [44]. They can classify or rank candidates that do not satisfy the exclusion criteria [45]. Therefore, evaluation criteria are mainly weighted. Previous studies have commonly divided the evaluation criteria into three main groups, namely, technical, economic, and environmental [1], [20], [32]. Other recent articles have added social factors to consider community acceptance of such projects [9], [36], [39]. In contrast, some authors have also considered risk factors [33], [39].
Technical criteria directly affect the efficiency and performance of wind-solar farms. The most important technical criteria are the natural resource factors for wind and solar energy [46]. For photovoltaics, solar radiation is a critical indicator: more intense radiation is vital in increasing electrical energy production from the available resources [47], [48]. For wind turbines, wind velocity is the most crucial natural resource in energy production [49]. Besides, acceptable wind speeds cool the PV cells [50]. Mehdi et al. argue that average wind velocity is inappropriate as an indicator [17]; instead, wind density should be adopted, as it is calculated from three climatic factors: wind speed, air temperature, and air pressure. Nevertheless, this requirement could increase the initial planning burden for wind-solar hybrid farms. Wind density can instead be adopted at the micro-siting stage to further refine the site suitability assessment.
Economic criteria minimize the costs incurred, which is often a crucial part of the decision-making process in any operational project such as RE plants [51], [52]. Economic indicators have been discussed maturely in the literature, especially factors of infrastructure availability, such as distance to roads and to the power transmission grid. Land inclination (slope) has also been defined as one of the economic criteria in several articles. From an engineering perspective, a steep slope increases the installation costs of RE modules [34]. The slope has been included among the technical criteria by other researchers, who reported that investigating the optimum tilt angle would improve the efficiency of photovoltaic generation [12]. Some relevant publications considered land price, construction cost, and electricity demand as economic sub-criteria [37].

Environmental protection is one of the primary reasons behind the trend toward RE investments. Thus, a set of environmental factors has attracted the interest of researchers and decision-makers. Land use/land cover (LULC) has been considered an environmental criterion since RE developments might destroy fertile land or affect biodiversity [35]. Remote arid lands with low use-value have been recommended in many published papers as highly suitable for wind-solar installations [1], [40], [53]. Nevertheless, some authors have suggested using LULC among the exclusion criteria [16]. Other studies also discuss environmental criteria for the visual and noise pollution caused by wind-solar equipment [11], [33]. Although there is agreement on considering the environmental aspects of solar-wind farm planning, controversy still rages regarding their definition and whether they should be evaluative or exclusionary.
Social criteria reflect people's attitudes toward RE projects. Negative public views can lead to resistance to project implementation [11]. In the broad literature, social criteria mainly consider the well-being of people near wind-solar farms. For instance, Algarín et al. [39] discussed people's reactions to the visual pollution and landscape distortion caused by the deployment of wind turbines and photovoltaics. Population density is also a vital social criterion for siting wind-solar farms [11]. Moreover, considering unemployment rates and the possibility of involving local people in the investment can reduce people's rejection of such projects [18]. Based on the above classifications, Table 2 summarizes the evaluation criteria used in the relevant articles.

Criteria system refinement
The best way to refine evaluation factors drawn from the literature is to subject them to experts' opinions.For this mission to be successful, correct practices should be followed in selecting an expert panel, designing the questionnaire, using the method of questioning, and content validation.

Expert panel
Commonly, expert panels are chosen from different scientific backgrounds, reflecting the diversity of criteria to be refined [39]. In the present study, RE, engineering, the environment, and strategic management were among the disciplines of the experts invited. These experts come from the academic and industrial sectors to make judgments based on a more comprehensive insight.
For content validation purposes, studies recommend that the number of experts be at least six and not exceed ten [55], [56]. Accordingly, nine experts with between 8 and 20 years of experience were appointed for this work. The relevant details of the experts are shown in Table 3.

Questionnaire design
A questionnaire form should be carefully prepared to ensure that the expert panel understands the rating task. Experts must be given ample definitions of the content domain and the items within that domain [55]. In the current paper, the questionnaire form consists of three sections. The first section explains the specific study objectives and how to respond to the questionnaire. In the second section, the evaluation criteria derived from the literature are exposed to the raters' judgments according to a recommended 4-point rating scale, where 1 indicates not relevant, 2 somewhat relevant, 3 quite relevant, and 4 highly relevant [56]. In the last section, experts are encouraged to suggest new factors or give feedback that would improve the criteria system for the target case study.

Questioning experts
Face-to-face interviews with experts are ordinarily preferred to obtain reliable ratings. This approach, however, is costly, time-consuming, and was unsafe during the COVID-19 pandemic. Therefore, our work adopted an online interview approach and Google Forms to implement the questionnaire process.
The Delphi technique, used to converge the opinions of a group of experts, was followed over several rounds to collect the ratings [42]. In the first round, items representing the top 10 evaluation criteria in the relevant studies were presented, and the experts were asked individually to rate each item on the adopted scale. Moreover, the experts were informed of other, less frequent criteria in the literature and then asked whether they had any suggested factors that could be added to the content. Based on the experts' initial answers, the questionnaire was updated by adding the newly suggested criteria and the rating outcomes of the previous round. The information was then returned to the experts in a new rating round until they reached an agreement.

Content validity index
The content validity index (CVI), an indicator of inter-rater agreement, is the most extensively used way to evaluate the content validity of multi-item questionnaires. The CVI takes two forms: the item-level CVI (I-CVI) and the scale-level CVI (S-CVI) [55]. The advantages of the CVI are its ease of calculation, ease of interpretation, and focus on agreement rather than consistency. In addition to the content validity of the full scale, the CVI method provides item-level information that can be used to include or exclude items. Another merit of the CVI method is its adjustment for chance agreement, where Polit et al. [57] succeeded in translating the I-CVI into a modified Kappa (k*) value. This index reflects the agreement among the raters that an item is relevant [58]. CVI-based content validity calculations can be summarized in the following steps:
1. Re-code the experts' ratings from the 1-4 scale to a Boolean scale of 0 or 1, in which 1 indicates expert agreement and 0 disagreement. Ratings of 3 and 4 are coded as 1, while ratings of 1 or 2 are coded as 0.
2. Compute the I-CVI, the proportion of agreement on the relevance of each questionnaire item. Referring to equation (1), the I-CVI is calculated as the sum of agreements (the number of experts giving agreement) divided by the total number of experts.
I-CVI = A / n (1)
where A = the number of agreements and n = the number of experts.
3. Determine the modified kappa statistic (k*), which gives evidence of the degree of agreement beyond chance. The probability of chance agreement (pc) is computed using equation (2), based on the binomial distribution, and k* then follows from equation (3):
pc = [n! / (A! (n − A)!)] × 0.5^n (2)
k* = (I-CVI − pc) / (1 − pc) (3)
The Kappa agreement indicator is Fair if k* = 0.40-0.59, Good if k* = 0.60-0.74, and Excellent if k* > 0.74.
4. Calculate the S-CVI, representing either the average of the I-CVI values over all questionnaire items or the average proportion relevance rated by all experts, as illustrated in equations (4) and (5), respectively:
S-CVI = Σ I-CVI / number of items (4)
S-CVI = Σ proportion relevance ratings / n (5)
where the proportion relevance rating is the mean relevance rating given by an individual expert. Referring to Polit's recommendations, the content is considered to have excellent validity if the I-CVIs of its items are ≥ 0.78, the k* values are > 0.74, and the S-CVI is ≥ 0.90 [57].
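The four steps above can be sketched in Python. This is a minimal illustration using hypothetical ratings, not the study's actual expert data, and the function names are our own:

```python
from math import comb

def i_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4, eq. (1)."""
    agreements = sum(1 for r in ratings if r >= 3)  # re-code 3/4 -> 1, 1/2 -> 0
    return agreements / len(ratings)

def modified_kappa(ratings):
    """Polit's modified kappa: I-CVI adjusted for chance agreement, eqs. (2)-(3)."""
    n = len(ratings)
    a = sum(1 for r in ratings if r >= 3)
    icvi = a / n
    pc = comb(n, a) * 0.5 ** n      # probability of chance agreement, eq. (2)
    return (icvi - pc) / (1 - pc)   # eq. (3)

def s_cvi_ave(items):
    """Scale-level CVI: average of the I-CVIs over all items, eq. (4)."""
    return sum(i_cvi(r) for r in items) / len(items)

# Hypothetical 4-point ratings from a panel of 9 experts for two items
item_a = [4, 4, 3, 4, 3, 4, 4, 3, 4]   # all 9 agree -> I-CVI = 1.0
item_b = [4, 3, 3, 4, 2, 4, 3, 1, 3]   # 7 of 9 agree -> I-CVI ~ 0.778
print(i_cvi(item_a), round(modified_kappa(item_b), 3))
```

For a 9-expert panel, an item needs at least 7 agreements before k* exceeds 0.74, which matches the cut-off reported in the Results.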

Results
This section summarizes the main findings from implementing the proposed framework to design a site suitability criteria system for wind-solar farms. First, the top 10 criteria used in the relevant site suitability assessment literature are explored. Following this, experts' opinions on the standard and suggested criteria are investigated. Finally, the experts' judgments are examined based on the content validity indices.

Refinement of the commonly used criteria
A survey of the relevant literature yielded 30 different factors that have been considered in assessing site suitability for wind-solar farms. However, not all criteria were used equally in the relevant studies. The researchers agreed on considering some indicators indispensable while differing on others due to the characteristics of the different study areas. Figure 2 shows a list of the top 10 criteria that were frequently examined in relevant publications. Four technical criteria are listed: solar radiation, wind speed, elevation, and air temperature. Slope, proximity to roads, proximity to transmission lines, and proximity to cities were also recorded among the standard economic criteria. Furthermore, the land cover and natural disaster criteria have taken a place on this list.
The experts provided significant ratings and suggestions that helped refine the criteria system. As indicated in Table 4, there was high agreement among the experts regarding adopting the top 10 criteria extracted from the literature. They declared that these criteria are vital in planning wind-solar projects worldwide. However, the earthquake criterion did not meet the agreement of all experts. Instead, many experts suggested adopting a natural disaster criterion as a more comprehensive factor covering various natural hazards, including earthquakes. Besides, many judges (≥7 out of 9) proposed using the cloud index, population density, and wind density. Cloud cover blocks a significant amount of solar radiation and differs from place to place worldwide. Moreover, population density reflects the rate of energy demand that power plants have to meet. On the other hand, other suggested criteria, such as humidity, aspect, energy demand, and people's attitudes, received only some experts' support (≤5). From this short review, it is noted that there is no complete agreement among the experts regarding the proposed criteria, which explains the importance of examining content validity to decide whether or not these criteria should be adopted.

CVI and k* analysis
Referring to equations (1)-(3), the calculations of the content validity indices yielded valuable findings, which can be observed in Table 5. Except for T10, all the top 10 criteria used in the literature had significant k* values, ranging from 0.76 to 1, indicating excellent agreement among the experts on adopting these factors in the targeted content. In addition, 4 of the 9 suggested criteria obtained excellent Kappa ratings, namely S2, S4, S7, and S8. While S1 and S5 had a fair rating, the rest of the proposed factors did not receive enough expert agreement to be rated on the Kappa statistic. As a result, across both criteria sources, only the 13 criteria with k* ≥ 0.76 can participate in creating content with excellent validity. Finally, the thirteen criteria and their expert evaluations are shown in Table 6 to verify the whole substance of the criteria system. The verification results based on equations (4) and (5) showed that the S-CVI achieved a value of 0.9, meeting the recommended threshold for content acceptance. If the criteria were not filtered based on the excellent-kappa threshold, the S-CVI value would fall below 0.9, to 0.75, indicating invalid content. This comparison demonstrates the significant role of the Kappa statistic in eliminating factors that weaken the content.

As a result, the design of an effective indicator system plays a vital role in reducing the effort, time, and cost of solving such problems, which was the key reason for conducting this study.
This paper provides an interesting framework to improve the practice of employing expert opinions in refining a criteria system derived from the relevant literature. Essentially, the framework was based on analyzing the experts' judgments according to content validity indices and the modified Kappa statistic. The methodology was tested in designing an effective indicator system that contributed to siting wind-solar farms. The results concluded that a criterion under test could successfully exceed the threshold of the content validity index and achieve the Kappa Excellent category when it yields the agreement of 7 experts out of 9. Based on the preceding, 13 criteria succeeded in representing the indicator system for the adopted case study, as illustrated in Figure 3.
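As a quick arithmetic check of that 7-out-of-9 threshold, equations (1)-(3) can be evaluated directly (a sketch using only the formulas stated in the Methodology):

```python
from math import comb

n, a = 9, 7                     # panel size, number of agreeing experts
icvi = a / n                    # eq. (1): ~0.778, at the 0.78 I-CVI threshold
pc = comb(n, a) * 0.5 ** n      # eq. (2): chance agreement, ~0.070
kappa = (icvi - pc) / (1 - pc)  # eq. (3): ~0.761, "Excellent" band (> 0.74)
print(round(kappa, 3))
```

With six agreements instead of seven, k* drops to about 0.60, only a "Good" rating, which is why 7 of 9 is the effective cut-off for this panel size.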
Broadly translated, our findings indicate that content validity indices can objectively contribute to refining criteria systems for MCDM problems. Notably, the indices do not require complete agreement among experts to include a criterion in the target system, a constraint that has been imposed in some relevant publications [2], [59]. It is worth mentioning that reaching complete agreement often requires repeated expert surveys, a cumbersome process that may be tainted with bias. On the other hand, the indices adopted in this study outperformed the empirical half-plus-one approach to judging the inclusion or exclusion of a criterion. The I-CVI and k* are neither overly lenient nor overly strict; instead, they combine the number of agreements and the number of experts in a specific mathematical formula.
Interestingly, the greater the positive or negative agreement among experts to include or exclude criteria, the closer the k* value is to the I-CVI value, as displayed in Figure 4. This result highlights the ability of the Kappa value to adjust for chance agreement [57]. Lastly, the utilization of the scale-level CVI in conjunction with the item-level CVI allowed validation of the entire domain, which is another merit of the proposed framework. The results of this study are in line with previous studies in terms of the number of experts recommended and the adoption of the main criteria for solar and wind energy resources [2], [24], [30], [33], [36]. For instance, technical criteria such as solar radiation, wind speed, and temperature included in this study were present in almost all of the similar studies indicated in Table 2. However, our criteria system did not adopt factors that have been used infrequently in the literature, such as wind direction [11], sunny hours [18], [20], [34], air pressure [9], [14], [28], and humidity [18], [38]. The reasons for this discrepancy are the different methods used in designing the criteria systems and stakeholders' opinions. Furthermore, the characteristics of the study area often impose the adoption of specific criteria that are not common in the relevant literature.
This study has limitations regarding the criteria, the MCDM problem, and the study area. The current work did not address exclusion criteria, and the proposed framework was applied only to a single decision-making problem, siting wind-solar farms. However, applying the current practice of designing a criteria system to any other decision-making process is straightforward.

Conclusion
A framework for designing a specific criteria system was proposed, involving three main stages: initial formation of the criteria content based on the literature, refinement of the content based on expert opinions, and content validity analysis. A comprehensive survey of relevant articles was conducted to extract the top 10 commonly used factors in assessing site suitability for wind-solar plants. The current study used the Delphi technique to enhance the criteria system through the judgments of nine carefully selected experts. Finally, the I-CVI, the modified Kappa index, and the S-CVI were applied to examine the expert ratings at the item and domain levels. Nine of the judged criteria and four of the suggested ones received an excellent Kappa value of ≥ 0.76. Moreover, the thirteen factors collectively achieved an S-CVI value of 0.9, forming valuable content for the criteria system. The reported criteria were solar radiation, wind speed, air temperature, slope, elevation, land cover, proximity to roads, proximity to the grid, proximity to cities, cloud index, natural disasters, population density, and wind density, covering technical, economic, environmental, and social aspects. As a valid criteria system, the announced findings will shorten the time needed by researchers and planners to investigate the site suitability of such projects. For future work, the current framework can be employed in designing criteria systems for various site suitability assessment problems. Including both evaluation and exclusion criteria in the targeted content would also be interesting.

Figure 2: Top 10 evaluation criteria in the relevant studies.

Figure 3: The valid criteria system for our case study.

Figure 4: Comparison of k* value and I-CVI for all studied criteria.

Table 2 :
Evaluation criteria used in the literature of siting wind-solar farms.

Table 3 :
Demographics of the experts involved in refining the criteria system.

Table 4 :
The experts' (e) ratings for refining the criteria system, where 1 indicates expert agreement and 0 denotes disagreement.

Table 5 :
Calculations of the content validity indices for both criteria sources considered in the present study.