⏰🔍💊 faster measurement merges behavior identification and algorithmic prescription #198
Replies: 2 comments
-
Charlie: This presentation is about how a person spends his/her time. Maybe the same model can be used on a different time scale, e.g., for how an entrepreneur spends his/her time over the course of nailing it, perhaps?

Angie: Yes, but we need to be very concrete about the definitions of "behavior" and "interpretable". Currently the framework is "identify the behavior, then optimize it", but as measurement gets faster this framework may change: #198. My approach is separating the industry effect ($I_0$) from the startup effect ($S_0$) so that I can create the counterfactual of $S_1$ given $I_0$; hardware-ness (being in a hardware industry) is an example of an industry effect (assumption: $\text{clockspeed}_I \ll \text{clockspeed}_S$). Based on Charlie's comment, I asked Yuebing whether she can share the code/logic on how she set a finite number of different activities (the goal is to understand behavior in startup operations).
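To make the $I_0$ vs. $S_0$ separation concrete, here is a minimal sketch of the decomposition as I read it. Additivity is my assumption, and all names and numbers are illustrative, not from the thread:

```python
# Minimal sketch of the industry-vs-startup decomposition (additivity is
# my assumption; all numbers are illustrative): because
# clockspeed_I << clockspeed_S, the industry effect is estimated from
# slow-moving peer averages and the startup effect is the residual.
import statistics

industry_peers = [0.10, 0.12, 0.11, 0.09]  # peer outcome rates (hypothetical)
startup_outcome_t0 = 0.25                  # observed startup outcome at t = 0
startup_outcome_t1 = 0.40                  # observed startup outcome at t = 1

I_0 = statistics.mean(industry_peers)      # industry effect, ~constant over t
S_0 = startup_outcome_t0 - I_0             # startup effect at t = 0
S_1 = startup_outcome_t1 - I_0             # counterfactual S_1 holding I at I_0
print(f"I_0 = {I_0:.3f}, S_0 = {S_0:.3f}, S_1 | I_0 = {S_1:.3f}")
```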
-
Ultimately, I'd like to persuade you that every intervention should be algorithmic. However, this requires understanding what to do when useful phenomena evolve faster than research methodology and measurement. Section 1 below discusses this.
My definition of "algorithmic" is "the initial and termination states can be identified, it has updating logic, and it finishes in a finite number of iterations" (a minimal sketch below).
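As a skeleton of this definition (the function name and the dosage example are my own, purely illustrative):

```python
# A minimal skeleton matching the definition above: identifiable initial
# and termination states, an updating rule, and finitely many iterations.
def algorithmic_intervention(state0, update, is_terminal, max_iter=100):
    state = state0                 # identifiable initial state
    for _ in range(max_iter):      # finite number of iterations
        if is_terminal(state):     # identifiable termination state
            return state
        state = update(state)      # updating logic
    return state

# Hypothetical example: halve a dosage until it falls below a threshold.
print(algorithmic_intervention(16.0, lambda s: s / 2, lambda s: s < 1.0))
```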
1. when phenomena evolve faster than methodology
Just as a chef assembles ingredients under a certain recipe with the help of measurement, a researcher combines phenomena under a theory with the help of measurement. I established the analogy in the table below. Do you have any comments/questions/additions for the last row, "how to adapt when $\theta_t$ evolves faster than $\phi_t$?", in the "researcher" column?
Angie's Q:
H answered:
- a gourmet would care more about the recipe and measurement
- academic or specialized audience may focus on methodological rigor, theoretical contribution, and relevance to existing literature
- test different combinations of ingredients and recipes on target customers
- perform pilot studies and pre-register designs
- follow established protocols and engage in open science practices
- tailor research outputs to audience needs
- attend interdisciplinary conferences to gather diverse research questions and trends
e.g. designing a mixed-methods study that combines surveys and interviews to explore new aspects of consumer behavior in digital marketplaces
- ensure the scalability of research practices by adopting standardized data-collection and analysis protocols that can be applied across studies, e.g., developing a template for data management and analysis that keeps research practices consistent
1. modular preparation (reusable recipe components that can be recombined)
2. innovative culture (experiment with new techniques and ingredients)
3. flexible supply chain management (quickly adapt to new menu needs)
1. Modular Frameworks
1.1 Develop Modular Research Frameworks: research designs with interchangeable or adaptable components, such as modular questionnaires or flexible experimental setups, that can be easily adjusted to study new phenomena or incorporate emerging theories. This reduces the time needed to design a new study from scratch (see the sketch after this list).
1.2 Utilize Existing Data Sets: use existing data sets for preliminary analysis or to test new hypotheses. A final clean test on a newly collected data set is still needed, though...
2. Innovation
2.1 Promote Interdisciplinary Collaboration: bring in fresh perspectives and methodologies that can be adapted to your research area
2.2 Rapid Research Prototyping: preliminary studies or exploratory analyses that can be quickly executed to test new ideas or methods. This approach supports the iterative refinement of research methodologies in response to new developments.
3. Flexible Supply Chain
3.1 Build Collaborator Network: Establish relationships with a broad network of collaborators, including other research institutions, industry partners, and cross-disciplinary teams. This network can provide rapid access to new theories, data sources, and analytical tools.
3.2 Adopt Open Science Practices: Engage in open science practices by sharing data, research instruments, and methodologies. This can facilitate the rapid dissemination and adoption of innovative research methods and encourage collaborative improvements.
3.3 Utilize Agile Research Methods: Implement agile methodologies, traditionally used in software development, to manage research projects. This involves breaking down large research questions into smaller, manageable tasks with short cycles of planning, execution, and evaluation. This allows for quick adjustments in response to new findings or changing phenomena.
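To make 1.1 concrete, here is a minimal sketch of a modular study pipeline, assuming interchangeable sampling/measurement/analysis components; all names are hypothetical:

```python
# Hypothetical modular study pipeline: a Study is assembled from
# interchangeable sampling, measurement, and analysis components, so a
# new phenomenon needs a swapped module, not a redesign from scratch.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Study:
    sample: Callable[[], list]              # sampling module
    instrument: Callable[[object], float]   # measurement module
    analyze: Callable[[list], float]        # analysis module

    def run(self) -> float:
        data = [self.instrument(unit) for unit in self.sample()]
        return self.analyze(data)

base = Study(sample=lambda: list(range(10)),
             instrument=lambda u: float(u),
             analyze=lambda xs: sum(xs) / len(xs))
# Swap only the instrument to study a different behavior on the same design.
variant = Study(base.sample, lambda u: float(u) ** 2, base.analyze)
print(base.run(), variant.run())
```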
(TBC)
2. two phases will start to overlap
Given the above is acceptable, the next question is whether the two phases, behavior identification and intervention prescription, will overlap. Currently, the first step, behavior identification, tests the null hypothesis "behavior A doesn't exist." After falsifying this, researchers design an intervention and test the second null hypothesis, "intervention B is not effective." Representing the existence of the behavior with $\theta = I(\text{measurement on population C supports existence of behavior A})$ and the effectiveness of the intervention with $\phi = I(\text{intervention B is effective})$, the ideal procedure of a current empirical OM paper factorizes the joint probability sequentially:

$$p(\theta = 1, \phi = 1) = p(\phi = 1 \mid \theta = 1)\, p(\theta = 1)$$

i.e., first establish that the behavior exists, then test the intervention conditional on it. However, the greatest leap of faith is this sequential separation. Once measurement becomes fast enough, for instance via sensor technology that extracts behavior implicitly, the two phases will begin to overlap. My question is: how do HT1 (behavior identification) and HT2 (intervention effectiveness) overlap, which in some sense is what a recommender engine is? A sketch of such a merged loop follows the example below.
- example of an identification-stage finding: an increase in global sourcing results in an increase in inventory investment (Jain14)
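As a minimal sketch of the merged loop (my construction, not from the readings): Thompson sampling on Bernoulli outcomes with Beta priors lets each interaction simultaneously update the belief about whether the behavior responds (the HT1 side) and prescribe the intervention (the HT2 side). The arm names and rates are hypothetical:

```python
# A merged identify-and-prescribe loop (Thompson sampling), in place of
# the sequential HT1-then-HT2 procedure. Outcomes are Bernoulli with
# Beta(1, 1) priors; arm names and true rates are hypothetical.
import random

arms = {"control": [1, 1], "intervention_B": [1, 1]}   # [alpha, beta] per arm
true_rate = {"control": 0.30, "intervention_B": 0.45}  # unknown to the algorithm

for t in range(1000):
    # prescription step (HT2 side): sample a plausible rate per arm, act greedily
    draws = {arm: random.betavariate(a, b) for arm, (a, b) in arms.items()}
    chosen = max(draws, key=draws.get)
    # identification step (HT1 side): the observed outcome updates the same belief
    success = random.random() < true_rate[chosen]
    arms[chosen][0 if success else 1] += 1

for arm, (a, b) in arms.items():
    print(f"{arm}: posterior mean {a / (a + b):.3f}, pulls {a + b - 2}")
```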
Comparing the interventions from our reading list, interactions can be divided into one-time vs. multi-time. Taking fairness and efficiency of an algorithm as the two main objectives, fairness should start from identifying the existence of a deviation (e.g., differences in motivation across races, deviation from the profit-maximizing value). Comparing a one-time example (What Motivates Innovative Entrepreneurs?) with a multi-time example (Demand Learning and Pricing for Varying Assortments), the one-time case leans toward identification while the multi-time case provides a new tool as the intervention; which is appropriate depends on how complex the underlying data-generating process is, and behavioral components and cognitive biases (e.g., the perception of being in the last queue) increase that complexity.
Another way to visualize the complexity of an intervention is the depth of its intervention tree. If we give a homogeneous treatment to every person, the tree depth is 1. However, if we customize the intervention through sequential steps of prior elicitation, the tree becomes deeper (a toy contrast is sketched below). I have a feeling most people define only the latter kind of intervention as "algorithmic". Is this correct?
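A toy contrast, with hypothetical policy and feature names, between a depth-1 homogeneous policy and a depth-3 policy that branches on sequentially elicited priors:

```python
# Toy contrast (hypothetical names): a depth-1 homogeneous policy vs. a
# depth-3 policy that branches on sequentially elicited priors.
def depth1_policy(person: dict) -> str:
    return "treatment_X"          # everyone gets the same intervention

def deep_policy(person: dict) -> str:
    if person["risk_averse"]:             # elicitation step 1
        if person["time_pressed"]:        # elicitation step 2
            return "default_option_nudge"
        return "detailed_disclosure"
    return "incentive_scheme"

print(depth1_policy({}),
      deep_policy({"risk_averse": True, "time_pressed": False}))
```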
3. connecting with optimization algorithms
Viewing "cut" as hypothesis to be falsified on the existing theory and hypothesis of human behavior, we are using data to falsify. Then by using different algorithms (benders decomposition or column and row generation in optimization algorithm ~ profit maxizing algorithm) the system is converging to (unknown) ideal formulation. This could be the form of decision tree or cut generating program.
Although Kris drew a line between algorithms in the management context and in the optimization context, I beg to differ. The gist of integer optimization algorithms such as cutting-plane (row generation) methods is how to systematically add cuts to the integrality-relaxed polyhedron $F = \{x \mid Ax \ge b,\ x \in \mathbb{R}^n\}$ until it reaches the ideal formulation $\mathrm{conv}(F \cap \mathbb{Z}^n)$. Changing gears from the geometric to the algebraic perspective, a systematic cut is a constraint that can be added across different subgroups. My guess is that systematic, replicable lab experiment designs are the analogue of such cuts.
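To make the "systematically add cuts until the relaxation reaches the ideal formulation" loop concrete, here is a minimal Kelley-style cutting-plane sketch on a convex toy function; the tangent-line cut family stands in for integer-programming cuts, and the point is the loop structure (falsify the current relaxation at the incumbent, add a cut, re-solve):

```python
# Kelley-style cutting-plane loop: each iteration "falsifies" the current
# relaxation at the incumbent point and adds a cut (here a tangent line),
# tightening the outer approximation toward the ideal description of the
# function. Toy problem: min x^2 over [-2, 3].
from scipy.optimize import linprog

f = lambda x: x * x      # true objective (stand-in for the ideal formulation)
grad = lambda x: 2 * x   # subgradient used to generate cuts

cuts_A, cuts_b = [], []  # each cut: g*x - t <= g*x_k - f(x_k), i.e. t >= tangent
x = 3.0                  # initial incumbent
for it in range(30):
    g = grad(x)
    cuts_A.append([g, -1.0])
    cuts_b.append(g * x - f(x))
    # minimize t over (x, t) subject to all accumulated cuts
    res = linprog(c=[0.0, 1.0], A_ub=cuts_A, b_ub=cuts_b,
                  bounds=[(-2.0, 3.0), (None, None)])
    x, t = res.x
    if f(x) - t < 1e-6:  # relaxation agrees with the true function: done
        break

print(f"x* = {x:.4f} after {it + 1} cuts")
```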
Kris gave a tip: "let the reviewers decide what they want to do (if we don't have experiment E, would this be a rejectable component?)"