
Apply activation quantization parameters selection #1466

Closed

gouda-youichi wants to merge 0 commits into SonySemiconductorSolutions:main from kkawa14:apply_actq_params_selection_for_node_inside_fln

Conversation

@gouda-youichi (Contributor)

Apply activation quantization parameters selection for Stage3-4.

Pull Request Description:

Checklist before requesting a review:

  • I set the appropriate labels on the pull request.
  • I have added/updated the release note draft (if necessary).
  • I have updated the documentation to reflect my changes (if necessary).
  • All functions and files are well documented.
  • All functions and classes have type hints.
  • There is a license in all files.
  • The function and variable names are informative.
  • I have checked for code duplications.
  • I have added new unit tests (if necessary).

@kkawa14 changed the title from "Apply activation quantization parameters selection for Stage3-4." to "Apply activation quantization parameters selection" on Jun 12, 2025
attr_cfg.set_weights_quantization_param(weights_params)

-if n.is_activation_quantization_enabled():
+if n.is_activation_quantization_enabled() or n.is_fln_quantization():
Review comment (Contributor):
The condition here should be based on the fusing info, not the node candidate mode, similar to the fix from PR #1462. The same applies to the condition in set_activation_quantization_param (the other change in this PR).
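The reviewer's suggestion can be sketched as follows. This is a minimal, self-contained illustration of driving the decision from the graph's fusing info instead of the node's own quantization mode; all names here (`Node`, `FusingInfo`, `should_set_activation_params`) are hypothetical and not MCT's actual API.

```python
# Hypothetical sketch: decide whether a node should receive activation
# quantization parameters based on the graph's fusing info, rather than
# on the node's candidate quantization mode. Names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    activation_quantization_enabled: bool = False


@dataclass
class FusingInfo:
    # Names of nodes that sit inside a fused block (FLN nodes).
    inner_fused_nodes: set = field(default_factory=set)

    def is_inside_fused_block(self, node: Node) -> bool:
        return node.name in self.inner_fused_nodes


def should_set_activation_params(node: Node, fusing_info: FusingInfo) -> bool:
    # Apply params either when the node quantizes its activation directly,
    # or when the fusing info marks it as an inner node of a fused block.
    return node.activation_quantization_enabled or fusing_info.is_inside_fused_block(node)
```

The point of routing the check through `FusingInfo` is that the fusing structure is a property of the graph, so the same source of truth is consulted everywhere, instead of duplicating mode flags on each node.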


"""
-assert self.quant_mode == ActivationQuantizationMode.QUANT
+assert self.quant_mode == ActivationQuantizationMode.QUANT or self.quant_mode == ActivationQuantizationMode.FLN_QUANT
Review comment (Contributor):
This assertion needs to be modified to check the node's is_enable_activation_quantization, and the same for FLN quantization (similar to all other places where this check occurs).
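The reviewer's point can be illustrated with a small sketch: guard the setter with node-level predicates rather than comparing the mode enum directly, mirroring the check used elsewhere. The class and method names below (`QuantNode`, `set_activation_quantization_param`) are stand-ins for illustration, not MCT's actual API.

```python
# Hypothetical sketch: the guard asserts on node-level predicates
# (is_activation_quantization_enabled / is_fln_quantization) instead of
# comparing the quantization-mode enum. Names are illustrative only.
class QuantNode:
    def __init__(self, act_quant: bool = False, fln_quant: bool = False):
        self._act_quant = act_quant
        self._fln_quant = fln_quant

    def is_activation_quantization_enabled(self) -> bool:
        return self._act_quant

    def is_fln_quantization(self) -> bool:
        return self._fln_quant

    def set_activation_quantization_param(self, params: dict) -> None:
        # Same predicate pair as the condition at the call site, so the
        # assertion cannot drift out of sync with the mode enum.
        assert self.is_activation_quantization_enabled() or self.is_fln_quantization(), \
            "Node must quantize its activation (directly or via FLN) to set params"
        self._params = params
```

Using the same predicates at the call site and inside the setter keeps the two checks consistent by construction.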

