Introduction
Many sensors support being driven by an external clock. This will likely be a simple devicetree binding in the driver, but there's currently no way to see this information externally. It becomes more important as sensors start reporting time in ticks of their own clocks, which increases accuracy.
Problem description
My current first pass at the async sensor API uses nanosecond timestamps. While those work fine, they accumulate error over time. Using the internal clock of the SoC or the sensor's external clock allows us to get much more accurate timestamps. Additionally, on some platforms the SoC clock might skip ticks when the SoC goes into a low power mode. This would mean converting the decoded timestamps to ticks/cycles, but also including some way for the consumer to interpret the time.
This RFC touches on 2 issues:
- The sensor might be driven by a different clock than the one provided internally by the sensor. In these cases the actual sample rate might differ from the contracted ODR. For example, the ICM42688's highest ODR is 32kHz, but when connected to a 50kHz oscillator the actual sample rate will be 50kHz, via ODR * external_clock_hz / 32 (see the sketch after this list).
- The timestamp of the interrupt triggering the event will likely end up using cycles instead of ticks or nanoseconds, but even that may come from a counter separate from the system one.
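As a small illustration of the first point, here is a hypothetical helper, assuming the divisor in the formula above is the sensor's nominal 32kHz clock expressed in the same units as the other frequencies (the function name is illustrative, not a proposed API):

#include <stdint.h>

/* Effective sample rate when an external oscillator replaces the sensor's
 * nominal 32kHz clock, per the ICM42688 example above.
 */
static inline uint32_t sensor_effective_odr_hz(uint32_t contracted_odr_hz,
                                               uint32_t external_clock_hz)
{
    /* 32kHz ODR with a 50kHz oscillator: 32000 * 50000 / 32000 = 50000Hz */
    return (uint32_t)(((uint64_t)contracted_odr_hz * external_clock_hz) / 32000U);
}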
Proposed change
1. All sensors should report time in cycles
We would first introduce a new "chosen" node in Zephyr, zephyr,sensor-clock, in devicetree. This chosen node would allow an application to specify which clock is keeping time. This means that if the system clock is running at 300MHz (3 1/3 nanoseconds per cycle) but we have an external 50kHz oscillator driving our sensors, we're able to specify that all cycle measurements for sensors are 20,000 nanoseconds per cycle. If the chosen node isn't present, sensors are expected to use the system clock.
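A minimal devicetree sketch of what an application overlay could look like; the ext_osc50k label is hypothetical and stands in for whatever counter device drives the sensors:

/ {
    chosen {
        /* All sensor cycle timestamps are ticks of this 50kHz oscillator */
        zephyr,sensor-clock = &ext_osc50k;
    };
};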
2. Update the sensor data types and decoders
The current sensor data types include a header with a uint64_t base_timestamp, and each reading includes a uint32_t offset. Both are in nanoseconds. Instead, 2 changes will be made (see the sketch after this list):
- The base timestamp will, instead of being represented in nanoseconds, be measured in the units of the timestamp counter and be replaced with uint64_t base_cycles.
- The offset field will be renamed to uint32_t offset_cycles.
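A rough sketch of the resulting layout; the struct names and elided fields are illustrative, only base_cycles and offset_cycles come from this proposal:

#include <stdint.h>

struct sensor_data_header {
    uint64_t base_cycles;   /* was: uint64_t base_timestamp, in nanoseconds */
    /* ... other existing header fields ... */
};

struct sensor_data_reading {
    uint32_t offset_cycles; /* was: uint32_t offset, in nanoseconds */
    /* ... decoded channel values ... */
};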
Alone, these are pretty useless. The base_cycles are in the units of the counter (which might be the system clock) and the offset_cycles are determined by the external clock frequency plus the ODR that produced the data.
As an example, a 300MHz processor will have a cycle accuracy of 3ns (losing the fractional 1/3ns) and a sensor sampling at 32kHz will have a sample interval of 31,250ns. This means that if we were to only use the system clock, we would assume the samples were actually 31,248ns apart (the nearest multiple of the processor's cycle accuracy, rounded down). The error accumulates over time.
However, if offset_cycles were used, we would instead see 1, which corresponds to the highest sample rate. In this case there is no accumulated error when adding offsets. Finally, a base timestamp can be added to the equation to produce the most accurate ns timestamp possible. This can be done via a simple inline function:
#include <zephyr/device.h>
#include <zephyr/devicetree.h>
#include <zephyr/drivers/counter.h>
#include <zephyr/timing/timing.h>

static inline uint64_t sensor_get_ns_time(uint64_t base_cycles, uint32_t offset_cycles)
{
    uint64_t hz;

    base_cycles += offset_cycles;
#if DT_HAS_CHOSEN(zephyr_sensor_clock)
    /* Frequency of the dedicated sensor clock */
    hz = counter_get_frequency(DEVICE_DT_GET(DT_CHOSEN(zephyr_sensor_clock)));
#else
    /* Fall back to the system timing clock */
    hz = timing_freq_get();
#endif
    /* Probably add some check that base_cycles * 1,000,000,000 doesn't overflow the uint64_t first */
    return base_cycles * UINT64_C(1000000000) / hz;
}
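A usage sketch, where hdr and readings are hypothetical stand-ins for decoded sensor data: with the 50kHz sensor clock chosen earlier, consecutive readings one cycle apart decode to timestamps exactly 20,000ns apart:

uint64_t t0 = sensor_get_ns_time(hdr.base_cycles, readings[0].offset_cycles);
uint64_t t1 = sensor_get_ns_time(hdr.base_cycles, readings[1].offset_cycles);
/* With hz == 50000 and offsets 0 and 1: t1 - t0 == 20000ns exactly */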