I ran the Llama3-8B example with W8A8 (INT8) and W8A8 (FP8), but I don't see any activation quantization parameters in the layers; each layer's parameters only contain `weight_scale` and `weight_zero_point`.
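For reference, this is roughly how I am inspecting the saved checkpoint (a minimal sketch; the checkpoint path is a placeholder for my local output directory):

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder path to the quantized checkpoint produced by the W8A8 example
MODEL_PATH = "./Meta-Llama-3-8B-W8A8"

model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, torch_dtype="auto")

# Print every quantization-related parameter registered on the model
for name, param in model.named_parameters():
    if "scale" in name or "zero_point" in name:
        print(name, tuple(param.shape), param.dtype)
```

This only prints `*.weight_scale` and `*.weight_zero_point` entries for every layer; nothing related to input/activation quantization shows up.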