About the ESG Index (2024)

The ESG Index comprises 65 variables built on datasets drawn exclusively from internationally recognized entities.

The ESGI follows a strict methodology:

Selection process

A number of criteria were considered during the selection process, detailed in the downloadable technical methodology.

Missing data

The processing of missing data is handled on a case-by-case basis depending on the structure of the datasets.

    • In the case of time series datasets with visible trends, we proceed with a linear extrapolation from the last five available years. This method allows for estimating parameters based on real past values.
    • The second approach is Last Observation Carried Forward (LOCF), a common statistical approach for time series data that consists of imputing the last available observation. As with the first method, only the last five available years are considered.
    • The last approach is multiple imputation through Predictive Mean Matching (PMM). This approach preserves the distributions in the data and ensures that imputed values are plausible, as it fills in values from real observations (Vink et al., 2014[1]). PMM provides a random value from a donor, selected by the closeness of the donor's regression-predicted value to that of the recipient. This implies that linear regressions are not used to generate imputed values but rather to determine the donor (Schenker & Taylor, 1996[2]).
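The three methods above can be sketched in a few lines. This is a minimal illustration with made-up data, not the index's actual code: the series, the predictor, and the single-draw PMM step are all hypothetical (a production setting would draw multiple imputations).

```python
import numpy as np
import pandas as pd

# Hypothetical yearly series with the three most recent years missing.
years = np.arange(2015, 2025)
values = pd.Series(
    [10.0, 11.0, 12.5, 13.0, 14.5, 15.0, 16.5, np.nan, np.nan, np.nan],
    index=years,
)

# 1) Linear extrapolation from the last five observed years.
obs = values.dropna().iloc[-5:]
slope, intercept = np.polyfit(obs.index, obs.values, 1)
extrapolated = values.copy()
missing_years = extrapolated[extrapolated.isna()].index
extrapolated[missing_years] = slope * missing_years + intercept

# 2) Last Observation Carried Forward (LOCF): propagate 16.5 forward.
locf = values.ffill()

# 3) Single-draw PMM sketch: regress the target on a predictor, then
#    impute each missing value with the *observed* value of the donor
#    whose regression prediction is closest to the recipient's.
rng = np.random.default_rng(0)
x = np.arange(20, dtype=float)              # hypothetical predictor
y = 2.0 * x + rng.normal(0.0, 1.0, 20)      # hypothetical target
y_miss = y.copy()
y_miss[[5, 12]] = np.nan
mask = np.isnan(y_miss)
b1, b0 = np.polyfit(x[~mask], y_miss[~mask], 1)
pred = b0 + b1 * x
donors = np.where(~mask)[0]
for i in np.where(mask)[0]:
    donor = donors[np.argmin(np.abs(pred[donors] - pred[i]))]
    y_miss[i] = y_miss[donor]   # a real observed value, not a prediction
```

Note how the regression in step 3 is only used to rank donors; the imputed value itself always comes from the observed data, which is what keeps the imputed distribution plausible.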

[1] Vink, G., Frank, L. E., Pannekoek, J., & van Buuren, S. (2014). Predictive mean matching imputation of semicontinuous variables. Statistica Neerlandica, 68(1), 61–90.

[2] Schenker, N., & Taylor, J. M. G. (1996). Partially parametric techniques for multiple imputation. Computational Statistics & Data Analysis, 22(4), 425–446.

Case deletion

For some variables, no PMM imputation was performed and only true values were considered in the analysis. This is due to the structure of the data and the absence of correlation with other variables. In the case of a missing value, the algorithm proportionally redistributes the corresponding weight across the remaining variables measuring the same indicator.
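A minimal sketch of that redistribution step, with illustrative weights (the function name and weights are assumptions, not taken from the methodology):

```python
import numpy as np

def redistribute_weights(weights, observed):
    """Zero out the weights of missing variables and rescale the rest
    so the weights for this indicator still sum to the original total."""
    w = np.asarray(weights, dtype=float)
    keep = np.asarray(observed, dtype=bool)
    kept = w * keep
    return kept * (w.sum() / kept.sum())

# Three variables measuring one indicator; the second one is missing.
w = redistribute_weights([0.5, 0.3, 0.2], [True, False, True])
# The 0.3 weight is shared out proportionally between the other two.
```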

Standardization

Aside from binary variables, all datasets were tested for skewness, then transformed and recoded if necessary. The mean and standard deviation are calculated and all variables are then standardized to allow for a proper aggregation in the global scoring. Several normalization methods exist. The one used here is that of z-scores, which converts datasets to a common scale with a mean of zero and a standard deviation of one.

Aggregation

The aggregation process converts all data points to a scale of 0-100, where 0 represents the lowest risk of ESG issues, and 100 corresponds to the highest risk of ESG issues.
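One common way to map scores onto such a 0–100 scale is min-max rescaling; this is a sketch of that approach under the stated risk convention, not necessarily the exact transformation used by the index.

```python
import numpy as np

def to_risk_scale(z):
    """Min-max rescale scores to 0-100, where 0 is the lowest observed
    ESG risk and 100 the highest (convention from the text)."""
    z = np.asarray(z, dtype=float)
    return 100.0 * (z - z.min()) / (z.max() - z.min())

# Hypothetical standardized scores for four entities.
risk = to_risk_scale([-1.2, 0.0, 0.4, 2.1])
```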

Measure of uncertainty

Based on the n datasets obtained from the multiple imputation process, a standard error and a 90 percent confidence interval are calculated for each dataset to reflect the variance around the different scores.
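A sketch of that calculation for a single score, assuming n = 5 hypothetical imputed datasets and a normal approximation for the 90 percent interval (a full multiple-imputation analysis would pool variance with Rubin's rules, which the text does not specify):

```python
import numpy as np

# Hypothetical scores for one entity across n = 5 imputed datasets.
scores = np.array([62.0, 64.5, 63.0, 61.5, 64.0])
n = len(scores)

mean = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)   # standard error across imputations
z90 = 1.645                            # two-sided 90% normal quantile
ci = (mean - z90 * se, mean + z90 * se)
```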
