Replies: 2 comments
-
The F Prime way is to state explicitly where data flows. The advantage is that the data flow is auditable, and in practice it rarely changes (not dynamically in flight systems, anyway). If you need to distribute data to a number of places, you would use the same pattern that is used for distributing rate group calls: write a simple component that takes a single input, then loops through a set of output ports and sends a copy of that data to each output port. F Prime's structure makes this an easy component to write, because you can scale the number of instances of a specific port definition with a variable in that component to define how many of that port you have.
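This is not actual F Prime code, but a minimal plain-C++ sketch of the fan-out pattern described above, with `std::function` standing in for output ports and all names (`FanOut`, `dataIn`, `PositionData`) chosen for illustration:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Illustrative payload type; in F Prime this would be a serializable type.
struct PositionData {
    double x, y, z;
};

// Sketch of the fan-out ("hub") component: one input, N outputs.
class FanOut {
  public:
    using OutputPort = std::function<void(const PositionData&)>;

    // Wire one more output port (done once, at topology-setup time).
    void connect(OutputPort port) { m_outputs.push_back(std::move(port)); }

    // Input port handler: forward a copy of the data to every output.
    void dataIn(const PositionData& data) {
        for (const auto& out : m_outputs) {
            out(data);
        }
    }

  private:
    std::vector<OutputPort> m_outputs;
};
```

In a real F Prime topology the number of output ports would be fixed in the component's model, and downstream components would be connected to it instead of directly to the data generator, so adding a consumer never touches the generator.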
-
For past projects I've worked on, we implemented what you described in your "centralized data system" example. We used the F Prime PolyDb to serve as the central data store. The generator component would write to the PolyDb, and any consumer component would poll the PolyDb to get the data whenever it needed it (oftentimes at a specified rate via a rate group). I suppose you could also use the TlmDB as the central data store if you are polling values that are already being written to your telemetry stream. I'm not sure this is the best way to implement it, but it is one way we have done things on my prior projects.
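PolyDb's actual port interfaces aren't reproduced here; the following is a generic sketch of the central-store polling pattern under assumed names (`DataStore`, `setValue`, `getValue`), to show the producer-writes / consumer-polls split:

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <string>

// Generic sketch of a PolyDb-like central data store; the API and
// entry names here are illustrative, not F Prime's actual interface.
class DataStore {
  public:
    // Producer side: write/overwrite the latest value for an entry.
    void setValue(const std::string& key, double value) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_values[key] = value;
    }

    // Consumer side: poll the latest value (e.g. on each rate group
    // tick); returns false if the entry has never been written.
    bool getValue(const std::string& key, double& value) const {
        std::lock_guard<std::mutex> lock(m_mutex);
        auto it = m_values.find(key);
        if (it == m_values.end()) {
            return false;
        }
        value = it->second;
        return true;
    }

  private:
    mutable std::mutex m_mutex;
    std::map<std::string, double> m_values;
};
```

The key property is the one described above: consumers poll at their own rate, so the generator never needs to know who they are.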
-
Hello, I've been investigating using fprime in the development of an avionics supervisory system platform. I recently read the page on Components and Ports and discovered that ports can't have a one-output-to-many-inputs relationship.
Is there a common pattern in use within fprime for sharing data that multiple components depend on?
For example, in the systems we work on it's common for one component to provide positional data. This data is often shared with the rest of the system via a pub/sub mechanism: when positional data is updated, it is published to a topic, and any other components that need positional information subscribe to that topic and get updates whenever new positional data is generated/published. While the number of publishers and subscribers will change between "deployments", each deployment ends up with a fixed number that can be statically allocated.
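As a concrete reference for the pattern being asked about, here is a minimal topic-based pub/sub sketch in plain C++ (not fprime code; `PubSub`, `subscribe`, and `publish` are illustrative names, and a real deployment would fix the subscriber counts statically as noted above):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal topic-based pub/sub: subscribers register a handler per
// topic; publishing invokes every handler registered for that topic.
class PubSub {
  public:
    using Handler = std::function<void(double)>;

    void subscribe(const std::string& topic, Handler h) {
        m_subs[topic].push_back(std::move(h));
    }

    void publish(const std::string& topic, double value) {
        for (const auto& h : m_subs[topic]) {
            h(value);
        }
    }

  private:
    std::map<std::string, std::vector<Handler>> m_subs;
};
```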
Another approach is to store "common" data in a centralized data store: instead of using a pub/sub topology, the data generator saves updates to position in the centralized store (reusing the positional data example), and components that depend on that data poll the store to see if there are updates. This can be combined with a message queue that sends notifications to known dependents (e.g. subscribers) so they know the data has been modified, but they are still required to manually retrieve the data from the central store.
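The store-plus-notification hybrid can be sketched like this (plain C++, illustrative names only): the store signals each registered dependent that something changed, but carries no payload, so dependents still fetch the data themselves.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Sketch of the "central store + change notification" hybrid: the
// generator writes an update, dependents are told "data changed"
// and then retrieve the value manually.
class NotifyingStore {
  public:
    using Notifier = std::function<void()>;

    void addDependent(Notifier n) { m_dependents.push_back(std::move(n)); }

    // Generator saves an update, then signals each known dependent.
    void update(double position) {
        m_position = position;
        for (const auto& n : m_dependents) {
            n();  // notification only; no payload
        }
    }

    // Dependents retrieve the data manually after being notified.
    double fetch() const { return m_position; }

  private:
    double m_position = 0.0;
    std::vector<Notifier> m_dependents;
};
```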
Any recommendations for how to share common data between multiple components in fprime? It seems inefficient to have to modify the data generator every time a downstream component that depends on a given set of data is added to or removed from the system.