Conversation

@mstasiaknordic
Contributor

The fast PWM120 instance works with a 320 MHz clock, so pwm_nrfx_get_cycles_per_sec needs to be adjusted.
It also uses cacheable RAM, so a sys_cache function call needs to be added to flush the cached sequence.
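
A minimal sketch of the two changes, assuming the nrfx sequence type (nrf_pwm_sequence_t) and Zephyr's sys_cache_data_flush_range(); illustrative only, not the exact shim code:

#include <zephyr/cache.h>
#include <nrfx_pwm.h>

/* Fast PWM120 runs from a 320 MHz clock instead of the usual 16 MHz. */
static uint64_t pwm120_cycles_per_sec(void)
{
	return 320UL * 1000UL * 1000UL;
}

/* Flush a sequence held in cacheable RAM so that EasyDMA reads what the
 * CPU wrote before playback starts. */
static void pwm_flush_seq(const nrf_pwm_sequence_t *seq)
{
	sys_cache_data_flush_range((void *)seq->values.p_raw,
				   seq->length * sizeof(uint16_t));
}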

@zephyrbot added the area: PWM (Pulse Width Modulation) and platform: nRF (Nordic nRFx) labels Oct 31, 2024
mstasiaknordic added a commit to mstasiaknordic/sdk-zephyr that referenced this pull request Oct 31, 2024
Fast PWM120 instance works with 320MHz clock, thus
pwm_nrfx_get_cycles_per_sec needs to be adjusted.
Also, it uses cacheable RAM, thus sys_cache function
needs to be added to flush the cached sequence.

Upstream PR: zephyrproject-rtos/zephyr#80672

Signed-off-by: Michał Stasiak <[email protected]>
Contributor

I think we need to check at runtime whether the d-cache operation is needed or not, depending on the seq.values.p_raw memory region attributes. You can take inspiration from this PR: https://github.com/zephyrproject-rtos/zephyr/pull/80015/files, see UARTE_IS_CACHEABLE.

Maybe in the future we could switch to DMM for this shim, but let's keep it simple as this is a hotfix.
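
A rough sketch of that runtime check; pwm_buf_is_cacheable() is a hypothetical helper standing in for a check derived from the memory-region attributes of seq.values.p_raw (cf. UARTE_IS_CACHEABLE in the linked PR):

#include <zephyr/cache.h>
#include <zephyr/sys/util.h>

static void pwm_maybe_flush_seq(const nrf_pwm_sequence_t *seq)
{
	/* Skip the d-cache operation entirely when no d-cache is enabled
	 * or the sequence buffer lives in non-cacheable RAM. */
	if (IS_ENABLED(CONFIG_DCACHE) && pwm_buf_is_cacheable(seq->values.p_raw)) {
		sys_cache_data_flush_range((void *)seq->values.p_raw,
					   seq->length * sizeof(uint16_t));
	}
}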

Comment on lines 245 to 260
Contributor

How about using this approach from UARTE:
036a85b
7c01174

Contributor Author

@mstasiaknordic Oct 31, 2024

Since it is a hotfix, could we postpone the changes until the DMM rework?

Contributor

I don't see a connection between this and DMM, but I'm fine with this staying temporarily if you post a follow-up PR.

@mstasiaknordic force-pushed the fast_nrf_pwm branch 2 times, most recently from 3ab9af4 to 4ec37b3 on October 31, 2024 11:37
Comment on lines 252 to 254
Member

Suggested change
#ifdef CONFIG_NRFX_PWM120
*cycles = 320ul * 1000ul * 1000ul;
#else

This won't work when enabling multiple instances.
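
To illustrate the point: a compile-time #ifdef picks one frequency for every instance, whereas the fast instance can be told apart at runtime by its register base. A sketch, assuming the NRF_PWM120 base symbol from the MDK:

static uint64_t pwm_clock_hz(const NRF_PWM_Type *p_reg)
{
#if defined(CONFIG_NRFX_PWM120)
	/* Only the fast PWM120 instance is clocked at 320 MHz. */
	if (p_reg == (const NRF_PWM_Type *)NRF_PWM120) {
		return 320UL * 1000UL * 1000UL;
	}
#endif
	/* All other instances keep the standard 16 MHz base clock. */
	return 16UL * 1000UL * 1000UL;
}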

Contributor Author

I believe the current solution will do, though it can be replaced with a suitable macro introduced in a future nrfx release.

Comment on lines 252 to 260
Member

-1, please describe the clock in DT and capture it; it is actually straightforward.

Contributor Author

I am afraid that would require adding a clock property to every nRF PWM node. If that's not an issue, then sure.

Member

All that need it, yes. Just check the UART case.

Contributor

@gmarull won't it imply a tree-wide change in all PWM nodes, in nRF52/53/91 devices as well? We wanted to avoid that.

Member

It can be made optional, see:

#define UARTE_GET_FREQ(idx) DT_PROP(DT_CLOCKS_CTLR(UARTE(idx)), clock_frequency)
#define UARTE_GET_BAUDRATE_DIV(idx) \
	COND_CODE_1(DT_CLOCKS_HAS_IDX(UARTE(idx), 0), \
		    ((UARTE_GET_FREQ(idx) / NRF_UARTE_BASE_FREQUENCY_16MHZ)), (1))

@mstasiaknordic force-pushed the fast_nrf_pwm branch 3 times, most recently from 211755d to ea712e3 on November 4, 2024 07:34
@mstasiaknordic force-pushed the fast_nrf_pwm branch 2 times, most recently from 686a78d to d87e42f on November 4, 2024 08:28
nika-nordic previously approved these changes Nov 4, 2024
@gmarull dismissed their stale review November 4, 2024 09:53

dt looks ok

anangl previously approved these changes Nov 4, 2024
@mstasiaknordic requested a review from anangl November 4, 2024 11:32
nika-nordic previously approved these changes Nov 5, 2024
Added hsfll120 clock for fast PWM120 nodes.

Signed-off-by: Michał Stasiak <[email protected]>
Fast PWM120 instance works with 320MHz clock, thus
pwm_nrfx_get_cycles_per_sec needs to be adjusted,
applying correct clock frequency.
Also, it uses cacheable RAM, thus sys_cache function
needs to be added to flush the cached sequence.

Signed-off-by: Michał Stasiak <[email protected]>
Add pin retention if GPD is enabled for nRF54H20.

Signed-off-by: Michał Stasiak <[email protected]>
@kartben merged commit d04b3e0 into zephyrproject-rtos:main Dec 2, 2024
25 checks passed
@mstasiaknordic deleted the fast_nrf_pwm branch January 2, 2025 07:40