customer pull: bayesian in b-school (2023-26) #156
24 comments · 30 replies
-
Possible paper outline/points
-
(name includes typo)
A. unified information unit across data-draws-decision-data loop
B. statistical computation framework
C. decompose Vensim functionality
D. dynamic diagnostics
todo
-
Let data speak: automatically adjusting the unit as data flows in (key idea / hypothesis)
case: covid model (SIR)
figure:
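As a toy illustration of the SIR case only (not the full covid model), here is a minimal sketch; the parameter values and the daily-vs-hourly "unit" switch are my own assumptions, meant only to show how the update granularity could adjust as data arrive.

```python
# Minimal SIR sketch (illustrative only; parameter values are assumptions,
# not taken from the covid model discussed in this thread).
import numpy as np

def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=1.0):
    """Euler-integrate an SIR model; dt is the time 'unit' that could,
    in principle, be adjusted automatically as data flow in."""
    steps = int(days / dt)
    s, i, r = s0, i0, 0.0
    traj = np.empty((steps, 3))
    for t in range(steps):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        traj[t] = (s, i, r)
    return traj

if __name__ == "__main__":
    daily = simulate_sir(dt=1.0)      # coarse unit while data are sparse
    hourly = simulate_sir(dt=1 / 24)  # finer unit once richer data arrive
    print(daily[-1], hourly[-1])
```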
-
SD seminar on 10/20, 12:30-2:00pm EST, in the Jay W. Forrester conference room (E62-450). Prep meeting 7. Title: / Goal:
-
From the above diagram, we decided that Angie leads application (lower right blue bubble) and @tomfid leads modeling (upper left). To be constructive for opportunity 2 (make a bridge from Bayes to dynamic models), I'm drafting the invitation for three Bayes crowds:
What would be a good title for our talk?
This could potentially be preparation for the Prescriptive Bayes conference drafted in #144.
-
@chasfine said it would be useful, with the covid model, if we could describe how
today's agenda:
Below are the version updates of @hazhirr's models (assumed sharable since he allowed uploading the model here):
V7 (changes from V6): Changing the test and hospitalization sector completely.
V8 (with JS changes added): Changing variable names and cleaning up the model, as well as new risk perception formulations.
V9 (with HR): Modifying risk perception and hospitalization equations. Also demographic impact on fatality added in V9-HR.
V10 (with HR): Modifying post-mortem and fatality equations to fix a bug and be more transparent in fatality rates using more linear alternatives.
V11: Modified testing and calibration parts so that testing data, when available, is used to drive the model (rather than simulated testing data).
V13: Improved reading of testing from data (removing countries with only a couple of test data points). Reduced free parameters and ranges so calibration does not go to weird regions of parameter space.
V14: Updated fatality equation to further reduce degrees of freedom and be more consistent with the analytical solution to death. Also further fine-tuned the VOC ranges. Changed post-mortem test ratio to simplify. Added a new control panel for better tracking of the data.
V15: Changed test capacity formulations to include a government response time for testing distinct from the initial government response time. Also moved back to using simulation rather than data for testing, except when there is a lot of data (US). Updated VOC, not letting non-covid acuity get too low. Made sensitivity to risk country specific.
V16: Changed time constants for detection delay and resolution time to be based on data; made the acuity level fixed at 1.4, letting flu acuity vary instead. Made the symptom density impact not increase beyond 1.
V17: Removed travel section; added a pre-symptomatic stock distinct from those post-onset but pre-testing and used a higher-order delay; removed Hospital Capacity Expansion; added structure to capture hospital pressure depending on geographical distribution.
V18: Added zero-inflated Poisson structure to capture asymptomatics beyond the Poisson segment; added increase in infectivity with symptoms; added test sensitivity (false negatives) to the structure; cleaned up and improved robustness of extrapolation equations for testing and hospital.
V19: Added constraints for total death; fixed sensitivity of testing that was missing for post-mortem tests.
V20: Updating parameter ranges based on the V19 run; fixing hospital contact rate.
V21: Improving hospital demand to be limited to
V22: Limiting death drivers to only general parameters. Fixing post-mortem testing equation.
V23: Fixing a problem in negative demand for tests, to include recovered no-test. Removing country-specificity in negative demand for testing; adding min contact fraction to the list of calibrated parameters.
V24: Getting to negative binomial for calibration; correcting ranges for
V25: Adding test capacity, rates, and cumulative to enhance calibration with test rate.
V26 and V27: Fixing the inaccurate approximation of allocation of tests; adding changes in negative demand in response to recent infections.
V28: Making Total Asymptomatic Fraction a parameter; removing time to test for negatives; cleaning unit errors; making dread factor country specific.
V29: Fixing an error in hospital capacity; expanding minimum excess death attributable to COVID; expanding Max Hospital Demand.
V30: Simplifying hospital capacity to not cut admissions in stock management; excluding data below 50 cumulative.
V31: Separating hospital to be independent of the non-hospital part.
V33: Made infectiousness of symptomatic a step higher than asymptomatic.
V34: Make the hospital priority for untested dynamically respond to recent allocation received by non-covid hospital demand.
V35: Add reality check test flags.
V36: Fixing bugs in positive test fraction; making adj
V37: Improving prior penalty deviation; adding post-mortem as country specific; removing weight on reports as country specific.
V39: Adding impact of weather; removing country-specific time constants.
V40: Making all parameters both general and country specific.
V42: Extending the V40 architecture to all conceptual parameters that may be reported in the paper and updating the likelihood function to penalize larger variance.
V43: Simplifying variance formulation for random effects; making the penalty for excess mortality a real likelihood.
V44: Minor cleanups.
V45: Update variable names based on documentation writeup. Add structure for policy sensitivity at the end.
V46: Add flexibility to have real general variables as well.
V47: Incorporating differences in asymptomatic infection into estimation.
V48: Updating pre-symptomatic infectivity; adding parameter to stop calibration; adding test capacity for the future.
V49-51: Changes in parameters for the stdev of random effects and the excess death penalties.
V58: Add impact of learning and patient mix on IFR and adherence fatigue on behavioral response.
V62: Create a simpler behavioral response function.
V63: Clean up mapping of behavioral response to policies, ready for Dec 23 analysis. Improve vaccination sector to be consistent with the Stella model.
V64: Adding data to vaccination sector.
V65-66: Added error terms for the initial unrealistic wave; added structure to use average recent test rates rather than the last day.
V67: Add new variant rise to the model.
V68: Add leaky vaccines, death despite vaccine, and loss of immunity.
V70: Tighten the test demand variances; changing post-mortem testing formulation; changing vaccination priority multiplier.
V71: Add new calibration component for excess mortality using total excess mortality rather than the delta with covid counts.
V72: A state-resetting mechanism added to the model. Omicron variant added to the model.
V73: Updating delays in measurement and death reporting; adding a saturation effect on risk perception so it does not go to very high levels and stay there forever.
V74: Separating Naive and NVx groups in the model.
V75: Adding country-level maxVacFrac coming from Vaccine model estimations; adding projection process noise calibrated in the ProcessNoiseGen model.
V79: Adding structure for measurement timing of data; changing total test allocation based on demand rather than population of each group; adding acuity loss due to deaths. Adding BA5 variant. Adding outcome measures needed.
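Purely to clarify two of the changes above (V18's zero-inflated Poisson structure and V24's move to a negative binomial for calibration), here is a hedged sketch of what such observation likelihoods look like in general; the function names, parameters, and data are my own placeholders, not the model's actual calibration code.

```python
# Sketch of the two observation-model ideas referenced above (V18, V24).
# Not the actual Vensim/calibration code; names, values, and shapes are assumptions.
import numpy as np
from scipy import stats

def zip_loglik(counts, lam, pi_zero):
    """Zero-inflated Poisson log-likelihood: with probability pi_zero the count
    is a structural zero, otherwise it is Poisson(lam)."""
    counts = np.asarray(counts)
    pois = stats.poisson.pmf(counts, lam)
    lik = np.where(counts == 0,
                   pi_zero + (1 - pi_zero) * pois,
                   (1 - pi_zero) * pois)
    return np.log(lik).sum()

def negbin_loglik(counts, mean, dispersion):
    """Negative binomial log-likelihood parameterized by mean and dispersion,
    giving variance = mean + mean**2 / dispersion (overdispersed counts)."""
    p = dispersion / (dispersion + mean)
    return stats.nbinom.logpmf(np.asarray(counts), dispersion, p).sum()

print(zip_loglik([0, 0, 3, 5], lam=2.0, pi_zero=0.3))
print(negbin_loglik([10, 12, 7, 30], mean=12.0, dispersion=2.0))
```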
-
Özge Karanfil and I had a chat about her paper with John on evidence-based testing, which I feel can provide insight into our search for information sync between our two parallel cycles (1, 2) and one large cycle.
tool-builder:modeler + policy_maker:consumer from this figure is mapped to probabilistic computing system: SD + economics + statistics:practice
-
ppl-sd/econ/stat-application for preparing the 10/20 SD seminar talk.
Talk schedule (20 mins each):
A: atom_supply: ppl, Tom
B, C: matching supply with demand circle (Travis)
-
atom, bit, customer
C: seminar audience
tools:
Tool C1_segment
Tool A1_Processify
Tool B1_Professionalize
Tool B2_Collaborate
Tool A2_Automate
Tool B3_Acculturate
Tool A3_Platformize
Tool A4_Replicate
Tool A5_Capitalize
Tool C2_Evaluate
-
On replication vs segmentation: My thinking was that replication is entering a new (probably geographic) market with a (virtually) identical process, e.g., McDonald's builds virtually identical restaurants in new (US) locations with the exact same marketing, product offerings, processes, etc. Pure replication. Software products (e.g., Microsoft Office) require even simpler replication: essentially license and download a copy (which is identical to what everyone else gets). For Microsoft, replication is virtually effortless; for McDonald's it takes significant work. Segmentation, on the other hand, in my thinking, requires a new product variant for a new market. So, if McDonald's or Microsoft is offering a Korean version of their model to that local market, they have to do (perhaps far) more customization in product, marketing, and perhaps process, to achieve adoption in the new language, culture, etc. Perhaps this discussion raises the issue that replication and segmentation might occur on some continuum, rather than be orthogonal dimensions.
-
@tomfid asked for brief feedback on his definition of SD (right). I have three comments and one question at the end.
Just a feeling, but fixing every object to flow in continuous time may deter efficient uncertainty representation, just as modeling a large and slow world with a quantum field model (rather than a Newtonian one) is too luxurious and can in fact lower accuracy. Allowing different levels of discreteness for objects with different clockspeeds may be beneficial; based on this physics quadrant (classical vs relativity vs quantum mechanics/field), I drafted the figure below.
Q1. @tomfid, from the SD genome doc you shared, what do you mean by "purely statistical models"? Can't all Vensim models be cast as purely statistical models (and vice versa)?
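To make the clockspeed idea concrete, here is a minimal sketch of my own construction (not anything from Vensim or the SD genome doc): a fast variable is updated every fine step, while a slow variable changes only on a coarser, discrete schedule.

```python
# Sketch: mixing update granularities for fast vs. slow "clockspeed" objects.
# Purely illustrative; not a proposal for how Vensim's engine should work.
import numpy as np

dt = 0.1           # fine step for the fast, continuous-ish object
slow_every = 50    # the slow object only updates every 50 fine steps
fast, slow = 1.0, 10.0
history = []

for step in range(500):
    # fast object: continuous-time-style adjustment toward the slow object's level
    fast += dt * (slow - fast)
    # slow object: coarse, discrete jumps (a different level of discreteness)
    if step % slow_every == 0:
        slow *= 0.9
    history.append((step * dt, fast, slow))

print(history[-1])
```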
-
Framing tomorrow's talk as the customer's prior elicitation (i.e. a customer interview), it would be helpful if we could use today's session for constructing a signal-to-action map via simulated data testing (a toy sketch follows the cost table below).
Setting: three actions
Beachhead customer
Product: `BayesSD_vensim` with two main features:
Meeting schedule (input from either Tom or Charlie is noted with @):
1. sync Tom, Angie, Charlie's priors on
| term | def (related concept in E-strategy) |
|---|---|
| ValidApproxBiasCost( | Cost of approximation bias caused by approximating utility with testing hypothesis |
| ValidStatBiasCost( | Cost of statistical bias caused by approximating utility of beachhead customer population with that of N sampled customers |
| VerifApproxBiasCost( | Cost of approximation bias caused by approximate testing hypothesis |
| VerifConvgBiasCost( | Cost of optimization bias caused by limited resource (represented as time) |
| OpportunityCost( | Resource, time, and strategic costs of an experiment |
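As the toy sketch promised above: to show how the table's cost terms could feed a signal-to-action map via simulated data testing, here is a rough illustration. The three action names, every numeric value, and the additive combination of the costs are all my assumptions for discussion, not an agreed definition.

```python
# Toy signal-to-action map: score three candidate actions by simulating a noisy
# signal under the current prior and subtracting the (assumed) cost terms above.
# All values and the additive combination are placeholders for discussion.
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["ship_feature", "run_more_interviews", "pivot"]

def simulated_costs(action, n_customers=30):
    """Return a simulated signal and the sum of the five cost components."""
    signal = rng.normal(loc=0.5, scale=1.0 / np.sqrt(n_customers))  # fake interview signal
    valid_approx = 0.2                            # ValidApproxBiasCost(
    valid_stat = 1.0 / n_customers                # ValidStatBiasCost(
    verif_approx = 0.1                            # VerifApproxBiasCost(
    verif_convg = 0.05                            # VerifConvgBiasCost(
    opportunity = {"ship_feature": 0.3,
                   "run_more_interviews": 0.6,
                   "pivot": 1.0}[action]          # OpportunityCost(
    total = valid_approx + valid_stat + verif_approx + verif_convg + opportunity
    return signal, total

for action in ACTIONS:
    signal, total_cost = simulated_costs(action)
    net = signal - total_cost   # map: simulated signal minus assumed costs -> action score
    print(f"{action:22s} signal={signal:+.2f} cost={total_cost:.2f} net={net:+.2f}")
```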
-
Collected data on "Q1: What do your customers want and what are they willing to pay for?", "Q2: One wish / your vision for using Vensim? What new tech would delight you?", and "Q3: What topics in system dynamics should Vensim 11+ focus on?" from the on/offline audience (feel free to add, preferably with your name so we can follow up). Chat in the thread below.
-
Angie's digest (open discussion welcome!)
Not sure robustness is the goal. I don't like some of the myths around sensitivity checks; it is natural for the model to change according to its priors. Comparison and evaluation might better happen in parameter space ("is the prior that results in a certain outcome acceptable, within our reason?"); a small sketch follows this list.
@tomfid let's invite Jeroen to our Friday chat to discuss aggregation flexibility!
What does Asmeret mean by representative?
@tseyanglim what do you mean by "doing the stuff to the right to be more part of 'my job', to be done bespoke for any given audience"? I'm not following.
What information on Karim Chichakly and Asmeret Naugle would help me understand Karim's last comment?
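Here is the small sketch of the "compare in parameter space" point above: instead of only checking whether outcomes move under a new prior, inspect which parameter values the prior makes plausible and ask if those are within reason. The two lognormal priors and the 2.0 plausibility threshold below are made up for illustration.

```python
# Sketch of comparing priors in parameter space rather than only in outcome space.
# Priors and the plausibility threshold are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)

prior_a = rng.lognormal(mean=np.log(0.3), sigma=0.3, size=10_000)  # tighter prior
prior_b = rng.lognormal(mean=np.log(0.3), sigma=1.0, size=10_000)  # much vaguer prior

for name, draws in [("prior_a", prior_a), ("prior_b", prior_b)]:
    lo, hi = np.percentile(draws, [2.5, 97.5])
    share_implausible = np.mean(draws > 2.0)  # mass on values we'd reject a priori
    print(f"{name}: 95% interval [{lo:.2f}, {hi:.2f}], "
          f"P(value > 2.0) = {share_implausible:.2%}")
```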
-
Partial realization of a text-based diagram planner, from feedback during the talk from Johan Chu @advrk
-
I shared a recent Bayesian software paper with Hazhir and Tom and was glad to identify interest in BayesFlow and likelihood models. Simulation-Based Prior Knowledge Elicitation for Parametric Bayesian Models is one result of the research direction I was working on with Andrew and Paul back at Columbia. I'd be interested in discussing ways to apply this to improve Vensim's data structure. As long as the data structures are good enough, as below, managerially and statistically, I'm all for reverse-engineering the inference based on several graphical diagnostics, e.g. BayesFlow's invertible network. It is more approachable because it's Python-based, stems from Stan, and is tested with posterior recovery plots and SBC. Although its ODE solver is in its infancy, we can learn the following: based on Tom's draft, I developed an SD parameter table for SBC (attached); for our next step, discussion of concrete hypotheses and plots will be greatly helpful for experiments. My hypothesis:
3-5 require a thorough understanding of the mechanism behind your comment "process noise can change dynamics", but I am lost. For speedup, reparameterization, not-too-diffuse priors, a small HMC step size (0.01), and reduce_sum are the four strategies I am testing.
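Since the comment mentions SBC and posterior recovery plots, here is a minimal self-contained SBC sketch on a conjugate toy model (so the posterior is exact and no sampler is needed). Swapping in the SD parameter table and an MCMC posterior is the intended use; that substitution is my assumption, not something already implemented.

```python
# Minimal simulation-based calibration (SBC) sketch on a conjugate toy model
# (normal mean with known sigma), so posterior draws are exact and fast.
# Real use would swap in SD model parameters and an MCMC posterior.
import numpy as np

rng = np.random.default_rng(2)
N_SIMS, N_DRAWS, N_OBS, SIGMA = 1000, 99, 20, 1.0
ranks = []

for _ in range(N_SIMS):
    theta_true = rng.normal(0.0, 1.0)               # draw parameter from the prior
    y = rng.normal(theta_true, SIGMA, size=N_OBS)   # simulate data from it
    # exact conjugate posterior for the mean (prior N(0, 1), known sigma)
    post_var = 1.0 / (1.0 + N_OBS / SIGMA**2)
    post_mean = post_var * (y.sum() / SIGMA**2)
    draws = rng.normal(post_mean, np.sqrt(post_var), size=N_DRAWS)
    ranks.append(int(np.sum(draws < theta_true)))   # SBC rank statistic

# Under a correct implementation the ranks are uniform on {0, ..., N_DRAWS}.
hist, _ = np.histogram(ranks, bins=10, range=(0, N_DRAWS + 1))
print(hist)
```

Of the four speedup strategies, the reparameterization is the one that fits in a few lines: below is the non-centered form written in plain NumPy purely to show the transformation. In Stan it would live in the parameters/transformed parameters blocks; nothing here is the actual covid model code, and the values are arbitrary.

```python
# Non-centered ("funnel-free") reparameterization, shown in NumPy.
# Instead of sampling theta ~ Normal(mu, tau) directly (centered form),
# sample theta_raw ~ Normal(0, 1) and set theta = mu + tau * theta_raw.
import numpy as np

rng = np.random.default_rng(3)
mu, tau, n_groups = 0.0, 0.1, 8   # small tau is where the centered form struggles

theta_raw = rng.normal(0.0, 1.0, size=n_groups)  # what the sampler explores
theta = mu + tau * theta_raw                     # the quantity the model uses

print(theta)
```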
-
Integrating Bayesian Inference and System Dynamics with Case Studies in Epidemiology.pdf
Desire D1: develop an integrated workflow (from algorithm builders to stakeholders)
Belief B1: tools will get integrated, potentially with the help of a word-to-world architecture that provides a substrate for probabilistic reasoning, hence D1
Planning Plan3(B2, D2): for talk preparation, @jandraor, what input from @tomfid and me would help the most efficient generalization of your work?
-
I reached out to Tom and Hazhir:
@tomfid answered and shared his diagrams, which I like in that they gave me an individual-based perspective (complementing the idea- or knowledge-based one).
Hazhir liked Tom's operational graphs and the distinct bottlenecks on both the supply and demand side of getting Bayesian tools, data, and dynamic models together. He suggested one may also note the distinction between decision makers and 'problems': a general issue may be the frequency of problems that may benefit from the full suite of tools on the demand side. He thinks that for poorly defined problems with limited data it is not clear how much the full workflow adds value. I agree with Hazhir, and this is where an incrementally revealing workflow, along with its ability to identify the modeler's state, can be useful.
-
From the roadmap, I felt the customer pull might be the bottleneck, so I reached out to Bayes-in-business supporters.
Jeff Dotson, who just wrote MackeyDotson24_Bayesian Stats in Mangement.pdf, was willing to chime in, so I made the need-and-solution tables below.
Table 1: Bayesian Solutions and Corresponding Management Research Papers
Table 2: Entrepreneurship Research Needs and Corresponding Empirical Papers
-
Before 2024, the system dynamics audience was my beachhead, in that I had the MIT SD group to give me feedback and was collaborating closely with Vensim, which provides the platform.
From 2024, I made Bayesian entrepreneurship #234 my new beachhead. But since both are in the b-school, the technology needed and the way it should be wrapped are not too different.
Integrating Bayesian thinking with System dynamics: 10/20/2023 SD seminar
Building on #149, Tom and I are preparing the SD seminar with Charlie, Ozge, Travis, Abdullah, and Vikash's help.
I wish to cover SD's position (which can or cannot be framed as statistical regression) and its implications for automation (including gen-AI); a toy casting as regression is sketched after the questions below.
Questions:
1.1 what behavior should be learned for prescription? -> 1.2 minimum set of behaviors related to prescription?
2.1 how should it be learned for prescription? -> 2.2. minimum data samples for each learning?
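On the "can SD be framed as statistical regression" point, here is a hedged sketch of the simplest casting I have in mind: a first-order stock-and-flow equation rewritten as an autoregression whose coefficients can be estimated by least squares. It is only a toy identity under assumed values (dt, tau, inflow, the noise level), not a claim about Vensim models in general, and OLS stands in for whatever estimation one would actually use.

```python
# Toy: a first-order stock-and-flow equation written as a regression.
# S[t+1] = S[t] + dt*(inflow - S[t]/tau)  ==  S[t+1] = a*S[t] + b,
# with a = 1 - dt/tau and b = dt*inflow, so (a, b) are estimable by OLS.
# Values below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(4)
dt, tau, inflow, n = 1.0, 5.0, 2.0, 200

# simulate the SD model with a little process noise
s = np.empty(n)
s[0] = 0.0
for t in range(n - 1):
    s[t + 1] = s[t] + dt * (inflow - s[t] / tau) + rng.normal(0, 0.05)

# cast as regression: S[t+1] on S[t] plus an intercept
X = np.column_stack([s[:-1], np.ones(n - 1)])
a_hat, b_hat = np.linalg.lstsq(X, s[1:], rcond=None)[0]
print("a_hat, b_hat:", a_hat, b_hat)
print("implied tau, inflow:", dt / (1 - a_hat), b_hat / dt)
```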