why cld isn't enough for programmatic theory #269
Replies: 2 comments
-
searching further on the "spaghetti criticism" of Vensim, I found two interesting links:
"spaghetti" seems to be a derogatory term within SD (system dynamics)?
the cognitive upper bound of humans shouldn't be the complexity ceiling, since human cognition evolves. and without an aligned evaluation metric for measuring cognitive capacity (our understanding of cognition is limited, and its contexts of use are too diverse to evaluate with a unified metric), there's a high chance your estimate is bad, whether an under- or overestimate. also, is everyone prone to being simplistic? there's always a matcher that preys on simple models with unrealistically heroic assumptions. more on
-
this reminds me of Josh's class:
(screenshot) extracted from #250 (comment)
-
tl;dr
for the definition of programmatic theory, see CroninStoutenvanKnippenbergAMR.2021.0517.Final.pdf
from Section 7, "Of the Idea of Necessary Connexion" (Enquiry Concerning Human Understanding), David Hume argues:
Causal loop diagrams (CLDs) are problematic as a theoretical framework because they assume necessary connections between variables that, according to Hume, humans can never truly verify through empirical observation alone 🔍🤔. While CLDs draw deterministic arrows indicating direct causation, Hume shows that we only observe constant conjunctions between events, learning through statistical patterns and associations rather than explicit causal logic. This points to why we should favor probabilistic programs, which better capture uncertainty and the subtle patterns through which humans and animals actually process information and make decisions 🎲🧠.
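the gap between a deterministic CLD arrow and what Hume says we actually observe can be sketched in a few lines of Python. this is only an illustration under my own assumptions: the `observe` helper and the 0.8/0.2 probabilities are made up, not from the discussion:

```python
import random

random.seed(0)

# A CLD arrow "X -> Y" asserts a necessary connection, but all we can
# actually record is how often Y accompanies X: constant conjunction.
def observe(n=1000):
    data = []
    for _ in range(n):
        x = random.random() < 0.5
        # Y follows X only probabilistically -- there is no necessary
        # connection to observe, only a statistical pattern.
        y = random.random() < (0.8 if x else 0.2)
        data.append((x, y))
    return data

data = observe()
p_y_given_x = sum(y for x, y in data if x) / sum(1 for x, _ in data if x)
p_y_given_not_x = sum(y for x, y in data if not x) / sum(1 for x, _ in data if not x)
print(p_y_given_x, p_y_given_not_x)
```

the point: the best a CLD's arrow can honestly encode is the difference between those two conditional frequencies, which is exactly what a probabilistic program represents directly.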
Based on Hume's insight that knowledge emerges through "constant conjunction" - the repeated observation of one event reliably following another - I aim to develop an algorithm for mining test quantities with exchangeability. Rather than assuming direct causal links, exchangeability emerges from documenting these constant conjunctions systematically across different contexts. This approach guides model development at both individual and societal levels by focusing on what patterns consistently conjoin in practice (like Tesla's observation of how certain battery configurations reliably lead to better performance) rather than attempting to deduce optimal solutions from assumed causal relationships.
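one way to make "mining test quantities with exchangeability" concrete is a permutation test over conjunction rates in different contexts: under exchangeability, context labels carry no information, so shuffling them shouldn't change the statistic. a minimal sketch, assuming my own `rate`/`exchangeable` helpers and a conventional 0.05 threshold (none of which are from the discussion):

```python
import random

random.seed(1)

def rate(obs):
    """Conjunction rate: how often the consequent follows the antecedent."""
    hits = [y for x, y in obs if x]
    return sum(hits) / len(hits) if hits else 0.0

def exchangeable(ctx_a, ctx_b, n_perm=1000, alpha=0.05):
    """Permutation test: are conjunctions exchangeable across contexts?"""
    observed = abs(rate(ctx_a) - rate(ctx_b))
    pooled = ctx_a + ctx_b
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)  # under exchangeability, labels are arbitrary
        pa, pb = pooled[:len(ctx_a)], pooled[len(ctx_a):]
        if abs(rate(pa) - rate(pb)) >= observed:
            extreme += 1
    # high p-value: the observed difference is unremarkable under shuffling,
    # so the two contexts look exchangeable
    return extreme / n_perm >= alpha

# two contexts with the same generating process (rate ~0.7 in each)
ctx1 = [(True, random.random() < 0.7) for _ in range(200)]
ctx2 = [(True, random.random() < 0.7) for _ in range(200)]
print(exchangeable(ctx1, ctx2))
```

the design choice here is that the permutation test is the natural operationalization of exchangeability: instead of assuming a causal link, it asks whether the documented conjunctions would look the same if the contexts were interchangeable.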
using a CLD of Hume's critique of causal determinism