InterleavedChunkEvaluator

ADLStream.evaluation.InterleavedChunkEvaluator

Interleaved chunks evaluator.

This evaluator incrementally updates the chosen metric by evaluating chunks of data sequentially.

Parameters:

    chunk_size (int): Number of instances per chunk. The particular case
        chunk_size = 1 corresponds to the prequential (interleaved
        test-then-train) approach. Required.
    metric (str): Loss function. Possible options can be found in
        ADLStream.evaluation.metrics. Required.
    results_file (str, optional): Name of the CSV file where results are
        written. If None, no CSV file is created. Defaults to "ADLStream.csv".
    dataset_name (str, optional): Name of the data to validate.
        Defaults to None.
    show_plot (bool, optional): Whether to plot the evolution of the metric.
        Defaults to True.
    plot_file (str, optional): Name of the plot image file. If None, no image
        is saved. Defaults to None.
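
As a quick orientation, the sketch below constructs the evaluator with the parameters described above. The metric name "mae" is assumed to be one of the options available in ADLStream.evaluation.metrics, and the file names and dataset label are purely illustrative.

from ADLStream.evaluation import InterleavedChunkEvaluator

# Sketch: score predictions in chunks of 20 instances with an assumed "mae"
# metric; results are appended to a CSV file and the metric evolution is
# plotted and saved as an image.
evaluator = InterleavedChunkEvaluator(
    chunk_size=20,
    metric="mae",                     # assumed to exist in ADLStream.evaluation.metrics
    results_file="mae_results.csv",   # illustrative file name
    dataset_name="my-stream",         # illustrative dataset label
    show_plot=True,
    plot_file="mae_plot.png",         # illustrative file name
)

The ADLStream framework, rather than the user, is expected to call run(context) on this object with an ADLStreamContext (see run below).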
Source code in ADLStream/evaluation/interleaved_chunks.py
class InterleavedChunkEvaluator(BaseEvaluator):
    """Interleave chunks Evaluator.

    THis evaluator incrementally updates the accuracy by evaluating chunks of data
    sequentially.


    Arguments:
        chunk_size (int): Number of instances per chunk.
            The particular case chunk_size = 1 corresponds to the prequential
            (interleaved test-then-train) approach.
        metric (str): Loss function.
            Possible options can be found in ADLStream.evaluation.metrics.
        results_file (str, optional): Name of the csv file where to write results.
            If None, no csv file is created.
            Defaults to "ADLStream.csv".
        dataset_name (str, optional): Name of the data to validate.
            Defaults to None.
        show_plot (bool, optional): Whether to plot the evolution of the metric.
            Defaults to True.
        plot_file (str, optional): Name of the plot image file.
            If None, no image is saved.
            Defaults to None.
    """

    def __init__(
        self,
        chunk_size,
        metric,
        results_file="ADLStream.csv",
        dataset_name=None,
        show_plot=True,
        plot_file=None,
        **kwargs
    ):
        self.chunk_size = chunk_size
        self.metric = metric
        super().__init__(
            results_file=results_file,
            dataset_name=dataset_name,
            show_plot=show_plot,
            plot_file=plot_file,
            ylabel=self.metric,
            **kwargs
        )

    def compute_metric(self):
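        # Score the next `chunk_size` buffered targets (y_eval) against the
        # corresponding model outputs (o_eval) with the configured metric.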
        new_metric = metrics.evaluate(
            self.metric,
            self.y_eval[: self.chunk_size],
            self.o_eval[: self.chunk_size],
        )

        return new_metric

    def evaluate(self):
        new_results = []
        instances = []
        instances_index = len(self.metric_history)
        # Chunks loop
        while (
            len(self.y_eval) >= self.chunk_size and len(self.o_eval) >= self.chunk_size
        ):
            # Get metric
            new_metric = self.compute_metric()

            # Save metric
            self.metric_history.append(new_metric)
            new_results.append(new_metric)

            # Remove eval data
            self.y_eval = self.y_eval[self.chunk_size :]
            self.o_eval = self.o_eval[self.chunk_size :]
            self.x_eval = self.x_eval[self.chunk_size :]

            # Add number of instances evaluated
            instances_index += 1
            instances.append(self.chunk_size * instances_index)

        return new_results, instances

evaluate(self)

Function that contains the main logic of the evaluator. In a generic scheme, this function should:

- Get validation metrics from the validation data (self.y_eval, self.o_eval and self.x_eval).
- Save the metrics in self.metric_history.
- Remove already evaluated data (y_eval, o_eval and x_eval) to keep memory free.
- Return the newly computed metrics and the count of instances evaluated.

Exceptions:

    NotImplementedError: This is an abstract method of the base evaluator
        which should be implemented.

Returns:

    tuple[list, list]: new_metrics (list), instances (list).
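
For intuition, a hypothetical walk-through (values and names are illustrative, assuming no chunks were evaluated before this call):

# Suppose chunk_size = 3 and six predictions are currently buffered:
#   y_eval = [y0, y1, y2, y3, y4, y5]   true values
#   o_eval = [o0, o1, o2, o3, o4, o5]   model outputs
# The chunk loop runs twice, the buffers are emptied, and evaluate() returns:
#   new_metrics = [metric(y0..y2, o0..o2), metric(y3..y5, o3..o5)]
#   instances   = [3, 6]   cumulative count of instances evaluated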

Source code in ADLStream/evaluation/interleaved_chunks.py
def evaluate(self):
    new_results = []
    instances = []
    instances_index = len(self.metric_history)
    # Chunks loop
    while (
        len(self.y_eval) >= self.chunk_size and len(self.o_eval) >= self.chunk_size
    ):
        # Get metric
        new_metric = self.compute_metric()

        # Save metric
        self.metric_history.append(new_metric)
        new_results.append(new_metric)

        # Remove eval data
        self.y_eval = self.y_eval[self.chunk_size :]
        self.o_eval = self.o_eval[self.chunk_size :]
        self.x_eval = self.x_eval[self.chunk_size :]

        # Add number of instances evaluated
        instances_index += 1
        instances.append(self.chunk_size * instances_index)

    return new_results, instances
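
Following the same generic scheme, the sketch below shows what a minimal custom evaluator could look like. It is only an illustration: the name SlidingAllEvaluator is hypothetical, and it assumes that BaseEvaluator and the metrics module can be imported from ADLStream.evaluation as used in the source above.

# Hypothetical evaluator that scores every buffered prediction in one go.
# Assumes BaseEvaluator and metrics.evaluate(name, y_true, y_pred) are
# importable/available as suggested by the source code shown on this page.
from ADLStream.evaluation import BaseEvaluator, metrics


class SlidingAllEvaluator(BaseEvaluator):
    def __init__(self, metric, **kwargs):
        self.metric = metric
        self.instances_seen = 0
        super().__init__(ylabel=metric, **kwargs)

    def evaluate(self):
        new_metrics, instances = [], []
        n = min(len(self.y_eval), len(self.o_eval))
        if n > 0:
            # Get the metric for all currently buffered instances.
            new_metrics.append(
                metrics.evaluate(self.metric, self.y_eval[:n], self.o_eval[:n])
            )
            self.metric_history.append(new_metrics[-1])
            # Remove already evaluated data to keep memory free.
            self.y_eval = self.y_eval[n:]
            self.o_eval = self.o_eval[n:]
            self.x_eval = self.x_eval[n:]
            # Report the cumulative number of instances evaluated.
            self.instances_seen += n
            instances.append(self.instances_seen)
        return new_metrics, instances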

run(self, context) inherited

Run evaluator

This function updates predictions from the context, evaluates them and updates the result file and result plot.

Parameters:

    context (ADLStreamContext): ADLStream context. Required.
Source code in ADLStream/evaluation/interleaved_chunks.py
def run(self, context):
    """Run evaluator

    This function updates predictions from the context, evaluates them and
    updates the result file and result plot.

    Args:
        context (ADLStreamContext): ADLStream context
    """
    self.start()
    while not context.is_finished():
        self.update_predictions(context)
        new_results, instances = self.evaluate()
        if new_results:
            self.write_results(new_results, instances)
            self.update_plot(new_results, instances)

    if self.plot_file:
        self.visualizer.savefig(self.plot_file)
    if self.show_plot:
        self.visualizer.show()
    self.end()

update_predictions(self, context) inherited

Gets new predictions from ADLStream context

Parameters:

    context (ADLStreamContext): ADLStream context. Required.
Source code in ADLStream/evaluation/interleaved_chunks.py
def update_predictions(self, context):
    """Gets new predictions from ADLStream context

    Args:
        context (ADLStreamContext): ADLStream context
    """
    x, y, o = context.get_predictions()
    self.x_eval += x
    self.y_eval += y
    self.o_eval += o
    self.write_predictions(o)