Error processing evaluation results during setup #5

@vaenyr

Description

Hi,

When following the README, during the `geak-eval setup -ds tbg` step, after the initial benchmarking, I get a lot of errors similar to:

Error processing mul_exponent_compensator.json, skipping...
Error processing multinomial_sampling.json, skipping...
Error processing nested_loops_processing.json, skipping...
Error processing parallel_attention.json, skipping...
Error processing parallel_retention_attention.json, skipping...
Error processing pow_scalar_tensor.json, skipping...
Error processing quant_transpose_kernel.json, skipping...
Error processing quantize_copy_kv.json, skipping...
Error processing quantize_global.json, skipping...
Error processing quantize_kv_copy.json, skipping...

These seem to be caused by the analysis script expecting the JSON files to be under `performance_metrics/perf_G/golden_metrics`, whereas they are actually saved under `performance_metrics/perf_G/{GPU}_golden_metrics` (basically the `golden_metrics_folder` argument here: https://github.com/AMD-AIG-AIMA/GEAK-eval/blob/master/geak_eval/initializations.py#L10).
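For what it's worth, a workaround on my side was to resolve the folder leniently instead of hardcoding the name. This is only a sketch based on the paths quoted above; the function name and the `perf_root` parameter are my own, not part of GEAK-eval's API:

```python
from pathlib import Path
from typing import Optional


def find_golden_metrics_dir(perf_root: str) -> Optional[Path]:
    """Locate the golden-metrics folder regardless of GPU prefix.

    The analysis step appears to look for 'golden_metrics', while setup
    writes '{GPU}_golden_metrics' (e.g. 'MI300_golden_metrics'). The
    folder names here are illustrative, taken from this issue report.
    """
    root = Path(perf_root)
    exact = root / "golden_metrics"
    if exact.is_dir():
        return exact
    # Fall back to any GPU-prefixed variant such as '<GPU>_golden_metrics'.
    matches = sorted(root.glob("*_golden_metrics"))
    return matches[0] if matches else None
```

Pointing the analysis script at whatever this returns made the "Error processing ... skipping" messages go away for me, but I don't know whether that is the intended fix.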

Now, I'm not sure what the intended difference between `TBG_PERF_GOLD_ROOT` and `NATIVE_PERF_GOLD_ROOT` is, so I'm not sure whether simply changing one to the other makes sense (intuitively, is the native root even needed during the setup phase?).

Are these errors safe to ignore?
