KPI Energy Losses - Waterfall

For more details on energy losses data and the waterfall calculation, see the dedicated page in the reference section.

KpiEnergyWaterfall(perfdb)

Class used for getting energy waterfall values. Can be accessed via perfdb.kpis.energy.waterfall.

Parameters:

  • perfdb

    (PerfDB) –

    Top level object carrying all functionality and the connection handler.

Source code in echo_postgres/perfdb_root.py
def __init__(self, perfdb: e_pg.PerfDB) -> None:
    """Base class that all subclasses should inherit from.

    Parameters
    ----------
    perfdb : PerfDB
        Top level object carrying all functionality and the connection handler.

    """
    self._perfdb: e_pg.PerfDB = perfdb
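
A minimal usage sketch, assuming an already configured PerfDB instance (the constructor arguments are hypothetical and depend on your environment):

import echo_postgres as e_pg

# Hypothetical connection setup; the actual constructor arguments may differ.
perfdb = e_pg.PerfDB()

# The waterfall accessor is reached through the kpis.energy namespace.
waterfall = perfdb.kpis.energy.waterfall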

get(period, group_name, group_type_name, waterfall_type='relative_perc', include_p50_deviation=False)

Gets the energy waterfall values for a specific period, group name, group type name and waterfall type.

The resulting values will be positive or negative depending on how each loss impacted the final value. The first and last entries in the result are always positive, as they represent Gross and Measured (for the measured and target types) or Target and Measured (for relative_abs and relative_perc).

The output is meant to be used directly in a waterfall chart, where the first and last entries are the total values and the intermediate entries are the losses.

If include_p50_deviation is set to True, two extra entries are added at the beginning of the result: P50 and Target Adjustment. P50 is a total value and Target Adjustment is the difference between Target and P50.

Currently this method gets the values from perfdb.kpis.energy.losses.values, as it contains the lost energy values as well as the target and measured energy.

For more details on how the waterfall is calculated, especially on relative_abs and relative_perc, check the Reference/Energy Losses section of the documentation.
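
As an illustration (not part of the library), a sketch that feeds the result into a Plotly waterfall chart, continuing from the perfdb instance above. The period, group name, and group type are hypothetical, and DateTimeRange is assumed to come from the datetimerange package:

import plotly.graph_objects as go
from datetimerange import DateTimeRange

# Hypothetical period and group; replace with values that exist in your database.
period = DateTimeRange("2024-01-01", "2024-01-31")
losses = perfdb.kpis.energy.waterfall.get(
    period=period,
    group_name="Wind Farm A",
    group_type_name="Wind Farm",
    waterfall_type="relative_perc",
)

# First entry (Target) is a total, intermediate entries are relative losses,
# and the last entry (Measured) is again a total.
measure = ["absolute"] + ["relative"] * (len(losses) - 2) + ["total"]
fig = go.Figure(go.Waterfall(x=losses.index.to_list(), y=losses.to_list(), measure=measure))
fig.show()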

Parameters:

  • period

    (DateTimeRange) –

    Period for which to get the values. Hours and minutes are removed; only the dates are considered, and both endpoints are treated as inclusive.

  • group_name

    (str) –

    Name of the desired group.

  • group_type_name

    (str) –

    Name of the desired group type.

  • waterfall_type

    (Literal['measured', 'measured_perc', 'target', 'target_perc', 'relative_abs', 'relative_perc'], default: 'relative_perc' ) –

    Type of the waterfall to get. Can be one of:

    • measured: Actual values in MWh
    • measured_perc: Actual values in percentage
    • target: Target values in MWh
    • target_perc: Target values in percentage
    • relative_abs: Difference of measured and target values in MWh
    • relative_perc: Difference of measured and target values in percentage

    By default, relative_perc is used.

  • include_p50_deviation

    (bool, default: False ) –

    Whether to include the P50 deviation in the waterfall or not. Only applicable if waterfall_type is one of relative_abs or relative_perc. By default, False.

Returns:

  • Series –

    Series with results for the wanted group. The index is the name of the loss. Depending on the waterfall_type, special rows are added:

    • Gross: First row with the total value of the group. Applicable to all types except the relative types. If a percentage type is selected it will be equal to 100%.
    • Measured: Last row containing the actual measured values (net energy). Applicable to all types except target and target_perc.
    • Target: Depending on the waterfall_type:
      • target and target_perc: Last row with the target values (expected net energy).
      • relative_abs and relative_perc: First row with the target values (expected net energy).
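
For orientation only, a hypothetical relative_perc result could look like this (loss names and values are invented; the actual index depends on the configured loss types):

Target            1.000
Unavailability   -0.031
Curtailment      -0.012
Performance       0.005
Measured          0.962
Name: Loss, dtype: float64
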
Source code in echo_postgres/kpi_energy_waterfall.py
@validate_call
def get(
    self,
    period: DateTimeRange,
    group_name: str,
    group_type_name: str,
    waterfall_type: Literal["measured", "measured_perc", "target", "target_perc", "relative_abs", "relative_perc"] = "relative_perc",
    include_p50_deviation: bool = False,
) -> pd.Series:
    """Gets the energy waterfall values for a specific period, group name, group type name and waterfall type.

    The resulting values will be positive or negative depending on how each loss impacted the final value. The first and last entries in the result are always positive, as they represent Gross and Measured (for the measured and target types) or Target and Measured (for relative_abs and relative_perc).

    The output is meant to be used directly in a waterfall chart, where the first and last entries are the total values and the intermediate entries are the losses.

    If `include_p50_deviation` is set to True, two extra entries are added at the beginning of the result: `P50` and `Target Adjustment`. `P50` is a total value and `Target Adjustment` is the difference between Target and P50.

    Currently this method gets the values from `perfdb.kpis.energy.losses.values`, as it contains the lost energy values as well as the target and measured energy.

    For more details on how the waterfall is calculated, especially on relative_abs and relative_perc, check the `Reference/Energy Losses` section of the documentation.

    Parameters
    ----------
    period : DateTimeRange
        Period for which to get the values. Hours and minutes are removed; only the dates are considered, and both endpoints are treated as inclusive.
    group_name : str
        Name of the desired group.
    group_type_name : str
        Name of the desired group type.
    waterfall_type : Literal["measured", "measured_perc", "target", "target_perc", "relative_abs", "relative_perc"], optional
        Type of the waterfall to get. Can be one of:

        - **measured**: Actual values in MWh
        - **measured_perc**: Actual values in percentage
        - **target**: Target values in MWh
        - **target_perc**: Target values in percentage
        - **relative_abs**: Difference of measured and target values in MWh
        - **relative_perc**: Difference of measured and target values in percentage

        By default, **relative_perc** is used.
    include_p50_deviation : bool, optional
        Whether to include the P50 deviation in the waterfall or not. Only applicable if waterfall_type is one of `relative_abs` or `relative_perc`. By default, False.

    Returns
    -------
    Series
        Series with results for the wanted group. The index is the name of the loss. Depending on the `waterfall_type`, special rows are added:

        - `Gross`: First row with the total value of the group. Applicable to all types except the `relative` types. If a percentage type is selected it will be equal to 100%.
        - `Measured`: Last row containing the actual measured values (net energy). Applicable to all types except `target` and `target_perc`.
        - `Target`: Depending on the `waterfall_type`:
            - `target` and `target_perc`: Last row with the target values (expected net energy).
            - `relative_abs` and `relative_perc`: First row with the target values (expected net energy).
    """
    # checking if group exists
    group_ids = self._perfdb.objects.groups.instances.get_ids()
    if group_type_name not in group_ids:
        raise ValueError(f"group_type_name {group_type_name} does not exist")
    if group_name not in group_ids[group_type_name]:
        raise ValueError(f"group_name {group_name} does not exist")

    # adjusting period
    period.start = period.start.replace(hour=0, minute=0, second=0, microsecond=0)
    period.end = period.end.replace(hour=0, minute=0, second=0, microsecond=0)

    # getting definition of losses to get order and grouping in the Waterfall

    loss_def = self._perfdb.kpis.energy.losses.types.get(output_type="DataFrame")
    # removing "considered_in_waterfall" = False
    loss_def = loss_def[loss_def["considered_in_waterfall"]].copy()
    # sorting losses by order
    loss_def = loss_def.sort_values("loss_order")

    logger.info(
        f"Getting energy waterfall values for {period.start.date():%Y-%m-%d} to {period.end.date():%Y-%m-%d}, group {group_name}, group type {group_type_name} and waterfall type {waterfall_type}",
    )
    logger.info(f"The order of losses is {loss_def.index.to_list()}")

    # creating dict with loss order
    loss_order = loss_def["loss_order"].to_dict()

    # getting measured losses values
    df = self._perfdb.kpis.energy.losses.values.get(
        period=period,
        time_res="daily",
        aggregation_window=None,
        object_or_group_names=[group_name],
        object_group_types=[group_type_name],
        energy_losses_types=loss_def.index.to_list(),
    )

    # summing all days in the period
    df = df.reset_index(drop=False)
    df = (
        df[["energyloss_type_name", "measured", "measured_after_loss", "target", "target_after_loss"]]
        .groupby("energyloss_type_name")
        .sum()
    )

    # converting to MWh
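    # (assumption: the source values are in kWh, hence the division by 1000)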
    df = df / 1000

    # sorting losses by order considering the loss_order dict, where the value is the order
    df.index.name = "energyloss_type_name"
    df = (
        df.reset_index(drop=False)
        .sort_values("energyloss_type_name", key=lambda x: x.map(loss_order))
        .set_index("energyloss_type_name")
    )

    # adding loss_order to df for easier filtering
    df = df.merge(
        loss_def[["loss_order"]],
        left_index=True,
        right_index=True,
    )

    # summing values of losses that should be grouped based on waterfall_group
    # ! for this to work these losses must be sequential!
    waterfall_groups = loss_def["waterfall_group"].dropna().unique().tolist()
    if waterfall_groups:
        for group in waterfall_groups:
            group_losses = loss_def[loss_def["waterfall_group"] == group].index.to_list()
            # checking if group_losses are sequential in loss order
            min_group_order, max_group_order = (
                loss_def.loc[group_losses, "loss_order"].min(),
                loss_def.loc[group_losses, "loss_order"].max(),
            )
            losses_between_group = loss_def[
                (loss_def["loss_order"] >= min_group_order) & (loss_def["loss_order"] <= max_group_order)
            ].index.to_list()
            if set(group_losses) != set(losses_between_group):
                wrong_losses = set(losses_between_group) - set(group_losses)
                raise ValueError(
                    f"Losses in waterfall group {group} are not sequential in loss order. The following losses are in between: {wrong_losses}",
                )

            # creating a row with the sum of the group losses
            group_row = df.loc[group_losses].agg(
                {"measured": "sum", "measured_after_loss": "min", "target": "sum", "target_after_loss": "min", "loss_order": "min"},
            )
            group_row.name = group
            # dropping group losses from original df
            df = df.drop(group_losses)
            # separating df in rows before and after the group
            before_group = df[df["loss_order"] < min_group_order]
            after_group = df[df["loss_order"] > max_group_order]
            # adding group row to the end of before_group
            df = pd.concat([before_group, group_row.to_frame().T, after_group])

    # * Calculating waterfall

    match waterfall_type:
        # measured and target values in MWh
        case "measured" | "measured_perc" | "target" | "target_perc":
            # getting main col
            main_col = "measured" if "measured" in waterfall_type else "target"

            result = df.copy()

            # dropping "uncertainty" loss
            result = result.drop("uncertainty", errors="ignore")
            # adding "Measured" or "Target" to the end using after_loss of the last loss
            new_line = df.loc[df.index[-1]].copy()
            new_line.name = main_col.capitalize()
            result = pd.concat([result, new_line.to_frame().T])
            result.loc[main_col.capitalize(), main_col] = result.loc[main_col.capitalize(), f"{main_col}_after_loss"]
            # adding "Gross" to the beginning
            new_line = result.loc[result.index[0]].copy()
            new_line.name = "Gross"
            result = pd.concat([new_line.to_frame().T, result])
            result.loc["Gross", main_col] = result.loc["Gross", f"{main_col}_after_loss"] + result.loc["Gross", main_col]
            # converting to a Series in MWh
            result = result[main_col]
            result.name = "Loss"

            # reversing the sign of the losses
            result.iloc[1:-1] = -result.iloc[1:-1]

            # converting to percentage if needed
            if waterfall_type == f"{main_col}_perc":
                result = result / result["Gross"]

        # relative to target in MWh or percentage
        case "relative_abs" | "relative_perc":
            result = df.copy()

            # adding a "net_simulated" column to store the relative impact
            result["net_simulated"] = 0.0

            # calculating as_percentage loss for both target and measured
            result["target_as_perc"] = result["target"] / (result["target"] + result["target_after_loss"])
            result["measured_as_perc"] = result["measured"] / (result["measured"] + result["measured_after_loss"])

            # let's iterate over the losses to calculate the relative impact
            # the idea here is to start from the gross up of the target (or measured if you consider that uncertainty is a loss) and then simulate the net considering the target losses up to this point and the measured losses after this point
            # finally the impact is the difference between this simulated net and the previous simulated net
            # As an example, consider 3 losses A, B and C:
            # - grossed up = measured_after_loss / (1 - measured_as_perc).prod()
            # - for loss A:
            #     - net_simulated_A = grossed_up * (1 - target_as_perc_A) * (1 - measured_as_perc_B) * (1 - measured_as_perc_C)
            # - for loss B:
            #     - net_simulated_B = grossed_up * (1 - target_as_perc_A) * (1 - target_as_perc_B) * (1 - measured_as_perc_C)
            # - for loss C:
            #     - net_simulated_C = grossed_up * (1 - target_as_perc_A) * (1 - target_as_perc_B) * (1 - target_as_perc_C)
            # then the impacts are:
            # - impact_A = grossed_up - net_simulated_A
            # - impact_B = net_simulated_A - net_simulated_B
            # - impact_C = net_simulated_B - net_simulated_C
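            # Hypothetical numeric check (values invented for illustration): with two losses
            # where measured_as_perc = [0.10, 0.05] and the last measured_after_loss = 85.5 MWh,
            # grossed_up = 85.5 / ((1 - 0.10) * (1 - 0.05)) = 85.5 / 0.855 = 100.0 MWh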
            grossed_up = result.loc[result.index[-1], "measured_after_loss"] / (1 - result["measured_as_perc"]).prod()
            for loss in df.index:
                losses_up_to_this = result.index[: result.index.to_list().index(loss)]
                losses_after_this = result.index[result.index.to_list().index(loss) :]

                percent_target_loss_up_to_this = 1 - (1 - result.loc[losses_up_to_this, "target_as_perc"]).product()
                percent_measured_loss_after_this = 1 - (1 - result.loc[losses_after_this, "measured_as_perc"]).product()

                result.loc[loss, "net_simulated"] = (
                    grossed_up * (1 - percent_measured_loss_after_this) * (1 - percent_target_loss_up_to_this)
                )

            # calculating the relative impact
            # creating a vector of net_simulated and appending target_after_loss of the last loss to it
            net_simulated = pd.concat(
                [result["net_simulated"], pd.Series({df.index[-1]: df.loc[df.index[-1], "target_after_loss"]})],
            )
            # subtracting one value minus the previous to get the impact
            impact = net_simulated.values[:-1] - net_simulated.values[1:]

            # assigning impact to result
            result["Relative"] = impact

            # adding "Target" to the beginning using after_loss of the last loss
            new_line = result.loc[result.index[-1]].copy()
            new_line.name = "Target"
            new_line.loc["Relative"] = new_line["target_after_loss"]
            result = pd.concat([new_line.to_frame().T, result])
            # adding "Measured" to the end using after_loss of the last loss
            new_line = result.loc[result.index[-1]].copy()
            new_line.name = "Measured"
            new_line.loc["Relative"] = new_line["measured_after_loss"]
            result = pd.concat([result, new_line.to_frame().T])

            # getting only series
            result = result["Relative"]

            # adding P50 deviation if wanted
            if include_p50_deviation:
                # in case the current group type is not SPE, let's get all the SPEs that are part of the group
                if group_type_name != "SPE":
                    group_def = self._perfdb.objects.groups.instances.get(
                        object_group_names=[group_name],
                        object_group_types=[group_type_name],
                        output_type="DataFrame",
                    )
                    spe_names: list[str] = group_def.loc[(group_type_name, group_name), "spe_names"]
                else:
                    spe_names = [group_name]
                # getting target energy to find the resource assessment used
                target_energy = self._perfdb.kpis.energy.targets.get(
                    period=period,
                    time_res="daily",
                    object_or_group_names=spe_names,
                    object_group_types=["SPE"],
                    measurement_points=["Connection Point"],
                )
                target_resource_assessments = target_energy.reset_index(drop=False)[
                    ["object_or_group_name", "date", "target_resource_assessment_id"]
                ].set_index(["object_or_group_name", "date"])
                resource_assessment_ids = target_energy["target_resource_assessment_id"].unique().tolist()  # type: ignore # noqa: F841
                # TODO: currently the materialized view in resourceassessments.pxx only returns the default resource assessment. We need to fix this to allow getting Pxx for the correct pxx in each year
                # getting the P50 from the resource assessments
                p50 = self._perfdb.resourceassessments.pxx.get(
                    period=period,
                    time_res="daily",
                    group_names=spe_names,
                    group_types=["SPE"],
                    resource_types=["average_power"],
                    pxx=[0.5],
                    evaluation_periods=["longterm"],
                )
                # convert to MWh
                p50["value"] = p50["value"] / 1000 * 24  # from kW to MWh
                # adjust columns
                p50 = (
                    p50.reset_index(drop=False)[["group_name", "date", "value"]]
                    .rename(columns={"group_name": "object_or_group_name", "value": "p50"})
                    .set_index(["object_or_group_name", "date"])
                )
                # adjusting dtypes of indexes for merge
                p50.index = p50.index.set_levels(
                    [p50.index.levels[0].astype("string[pyarrow]"), p50.index.levels[1].astype("date32[pyarrow]")],
                )
                target_resource_assessments.index = target_resource_assessments.index.set_levels(
                    [
                        target_resource_assessments.index.levels[0].astype("string[pyarrow]"),
                        target_resource_assessments.index.levels[1].astype("date32[pyarrow]"),
                    ],
                )
                # merge
                p50 = p50.merge(
                    target_resource_assessments,
                    left_index=True,
                    right_index=True,
                )
                # sum of p50 for all SPEs in the group for the period
                total_p50 = p50["p50"].sum()
                # adding two rows at start for results: P50 and Target Adjustment

                result = pd.concat(
                    [
                        pd.Series(
                            {
                                "P50": total_p50,
                                "Target Adjustment": result.iloc[0] - total_p50,
                            },
                        ),
                        result,
                    ],
                )

            result.name = "Loss"

            # converting to percentage if needed
            if waterfall_type == "relative_perc":
                result = result / result["Target"]

    # creating a dict from loss name to display name
    loss_display_name = loss_def["display_name"].to_dict()
    # converting names in result to display names
    result = result.rename(index=loss_display_name)

    return result