
Share hwspinlock across drivers in the same cluster #103752

@Fymyte

Description

Summary

Currently, every instantiation of the HWSPINLOCK_DT_SPEC_GET...() macro creates a new struct k_spinlock.
While this is fine for the majority of use cases, it limits what can be done with hwspinlock.

For example, the following are not possible (or require tricks to access the same statically defined spinlock):

  • Sharing a common hardware resource between two drivers in the same cluster
  • Sharing a common hardware resource between multiple clusters, where two of the actors are on the same cluster

After the previous rework of hwspinlock (see RFC #92706), I received feedback from @mathieuchopstm (sorry for the ping) describing such a use case.

I also came across an issue in the Sequans hwspinlock driver, which expects the lock to only be taken once. However, because the application used two contexts, it was possible to lock the same spinlock twice at the same time, which resulted in the lock being released instead.

Describe the solution you'd like

Create a new mandatory API for hwspinlock drivers to implement, which returns a Zephyr struct k_spinlock for a given hwspinlock ID.

/**
 * @brief Callback API to get the regular spinlock for a specific hw spinlock
 *
 * This callback must be implemented.
 *
 * @param dev HW spinlock device instance.
 * @param id Spinlock identifier.
 *
 * @returns The Zephyr spinlock associated with the given hw spinlock ID, or NULL on error
 */
typedef struct k_spinlock *(*hwspinlock_api_get_spinlock)(const struct device *dev, uint32_t id);

/* ... */

__subsystem struct hwspinlock_driver_api {
	hwspinlock_api_get_spinlock get_spinlock;
	/* ... */
};
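
For illustration, a driver could implement this callback by statically allocating one struct k_spinlock per hardware lock and returning the one matching the requested ID. This is only a sketch; MY_HWSPINLOCK_NUM_LOCKS and my_hwspinlock_get_spinlock are hypothetical names, not part of any existing driver.

#define MY_HWSPINLOCK_NUM_LOCKS 32 /* hypothetical number of hardware locks */

static struct k_spinlock my_hwspinlock_locks[MY_HWSPINLOCK_NUM_LOCKS];

static struct k_spinlock *my_hwspinlock_get_spinlock(const struct device *dev, uint32_t id)
{
	ARG_UNUSED(dev);

	/* One shared k_spinlock per hardware lock ID, regardless of how many
	 * consumers reference this hwspinlock device.
	 */
	if (id >= MY_HWSPINLOCK_NUM_LOCKS) {
		return NULL;
	}

	return &my_hwspinlock_locks[id];
}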

With this in place, it becomes possible to remove every instance of struct hwspinlock_context from the hw_spin_lock() API:

static inline int hw_spin_trylock(const struct device *dev, uint32_t id, k_spinlock_key_t *key);
static inline k_spinlock_key_t hw_spin_lock(const struct device *dev, uint32_t id);
static inline void hw_spin_unlock(const struct device *dev, uint32_t id, k_spinlock_key_t key);

Instead of relying on ctx->lock, the API would get the spinlock by calling lock = api->get_spinlock(dev, id).
Example:

static inline k_spinlock_key_t hw_spin_lock(const struct device *dev, uint32_t id)
{
	const struct hwspinlock_driver_api *api = (const struct hwspinlock_driver_api *)dev->api;

	__ASSERT(api->lock != NULL, "hwspinlock lock callback must be implemented");
	__ASSERT(api->get_spinlock != NULL, "hwspinlock get_spinlock callback must be implemented");

	/* Take the local Zephyr spinlock first, then the hardware lock. */
	k_spinlock_key_t k = k_spin_lock(api->get_spinlock(dev, id));
	api->lock(dev, id);

	return k;
}
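
hw_spin_unlock() could be reworked in the same way. The sketch below assumes the driver API keeps an unlock callback alongside get_spinlock:

static inline void hw_spin_unlock(const struct device *dev, uint32_t id, k_spinlock_key_t key)
{
	const struct hwspinlock_driver_api *api = (const struct hwspinlock_driver_api *)dev->api;

	__ASSERT(api->unlock != NULL, "hwspinlock unlock callback must be implemented");
	__ASSERT(api->get_spinlock != NULL, "hwspinlock get_spinlock callback must be implemented");

	/* Release the hardware lock first, then the local Zephyr spinlock. */
	api->unlock(dev, id);
	k_spin_unlock(api->get_spinlock(dev, id), key);
}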

By doing this, a single Zephyr spinlock is tied to each hwspinlock ID, which gets rid of this issue.
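
On the consumer side, two drivers in the same cluster would simply call the API with the same device and ID, and would now contend on the same underlying spinlock. Hypothetical usage, where hwspinlock_dev and the ID value are placeholders:

/* Both drivers serialize access to the shared hardware resource through
 * hardware lock 3; the k_spinlock behind it is now the same for both.
 */
k_spinlock_key_t key = hw_spin_lock(hwspinlock_dev, 3);
/* ... access the shared resource ... */
hw_spin_unlock(hwspinlock_dev, 3, key);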

Alternatives

Alternatively, spinlock management could be left to each individual driver. This would give driver developers more freedom, but could lead to more bugs.

Additional Context

No response
