-description: 'Master product analytics, A/B testing & power analysis: design stable metrics, validate randomization with A/A tests, plan sample size to de-risk features.'
-intro: How do you design product experiments that truly establish causality and avoid costly false conclusions? In this episode, Jakob Graff — Director of Data Science and Data Analytics at diconium, with prior analytics leadership at Inkitt, Babbel, King and a background in econometrics — walks through practical product analytics and A/B testing strategies focused on causality and reliable metrics. <br><br> We cover why randomized experiments mirror clinical trials, how experimentation de-risks features and builds organizational learning, and a concrete case study on subscription vs. points revenue metric design. Jakob explains experimentation platform trade-offs (third-party vs. in-house), traffic splitters, assignment tracking, and why A/A tests validate system trust. You’ll hear best practices for first tests (two-group simplicity), metric selection considering noise and seasonality, and how to plan duration with power analysis and sample-size calculations. The discussion also compares z/t/nonparametric tests, p-value intuition from A/A comparisons, frequentist vs Bayesian perspectives, and multi-armed test considerations. <br><br> Listen to learn practical steps for designing randomized experiments, selecting stable metrics, planning sample sizes, and interpreting results so your product analytics and A/B testing produce actionable, causal insights
+description: "Master product analytics, A/B testing & power analysis: design stable metrics, validate randomization with A/A tests, plan sample size to de-risk features."
+intro: "How do you design product experiments that truly establish causality and avoid costly false conclusions? In this episode, Jakob Graff — Director of Data Science and Data Analytics at diconium, with prior analytics leadership at Inkitt, Babbel, King and a background in econometrics — walks through practical product analytics and A/B testing strategies focused on causality and reliable metrics. <br><br> We cover why randomized experiments mirror clinical trials, how experimentation de-risks features and builds organizational learning, and a concrete case study on subscription vs. points revenue metric design. Jakob explains experimentation platform trade-offs (third-party vs. in-house), traffic splitters, assignment tracking, and why A/A tests validate system trust. You’ll hear best practices for first tests (two-group simplicity), metric selection considering noise and seasonality, and how to plan duration with power analysis and sample-size calculations. The discussion also compares z/t/nonparametric tests, p-value intuition from A/A comparisons, frequentist vs. Bayesian perspectives, and multi-armed test considerations. <br><br> Listen to learn practical steps for designing randomized experiments, selecting stable metrics, planning sample sizes, and interpreting results so your product analytics and A/B testing produce actionable, causal insights."
-description: Discover AI-driven computer vision and remote sensing strategies to scale
-  biodiversity monitoring, improve species ID, and inform conservation policy.
-intro: How can AI help close critical data gaps in biodiversity monitoring and turn
-  images and sensor data into actionable conservation decisions? In this episode Tanya
-  Berger-Wolf, a computational ecologist, director of TDAI@OSU, and co-founder of
-  the Wildbook project (Wild Me), walks through practical applications of AI for ecology,
-  biodiversity monitoring, and conservation. <br><br> We cover core techniques—computer
-  vision, machine learning, and remote sensing—and their use in image-based monitoring
-  with camera traps, drones, and species identification. Tanya explains individual
-  identification and longitudinal tracking, habitat mapping and change detection,
-  and the data challenges of labeling, class imbalance, and sparse observations. The
-  conversation addresses integration of heterogeneous datasets, model robustness (domain
-  shift and transfer learning), and ethical considerations including Indigenous knowledge
-  and equity. You’ll also hear about scalable platforms like Wildbook, citizen science
-  workflows for crowdsourcing and quality control, policy relevance, open data and
-  FAIR principles, edge deployment in the field, and building sustainable monitoring
-  programs. <br><br> Listen to gain concrete insights on tools, pitfalls, and next
-  steps for applying AI to conservation—what works now, what remains hard, and resources
-  to explore further.
+description: "Discover AI-driven computer vision and remote sensing strategies to scale biodiversity monitoring, improve species ID, and inform conservation policy."
+intro: "How can AI help close critical data gaps in biodiversity monitoring and turn images and sensor data into actionable conservation decisions? In this episode, Tanya Berger-Wolf, a computational ecologist, director of TDAI@OSU, and co-founder of the Wildbook project (Wild Me), walks through practical applications of AI for ecology, biodiversity monitoring, and conservation. <br><br> We cover core techniques—computer vision, machine learning, and remote sensing—and their use in image-based monitoring with camera traps, drones, and species identification. Tanya explains individual identification and longitudinal tracking, habitat mapping and change detection, and the data challenges of labeling, class imbalance, and sparse observations. The conversation addresses integration of heterogeneous datasets, model robustness (domain shift and transfer learning), and ethical considerations including Indigenous knowledge and equity. You’ll also hear about scalable platforms like Wildbook, citizen science workflows for crowdsourcing and quality control, policy relevance, open data and FAIR principles, edge deployment in the field, and building sustainable monitoring programs. <br><br> Listen to gain concrete insights on tools, pitfalls, and next steps for applying AI to conservation—what works now, what remains hard, and resources to explore further."

-description: 'Learn to build data teams and ethical AI in healthcare: actionable personalization, A/B testing for digital therapeutics, GDPR-safe experiments.'
-intro: How can AI power effective digital therapeutics while balancing personalization, rapid experimentation, and patient safety? In this episode, Stefan Gudmundsson — Director of Data, Analytics, and AI with a track record building ML and data teams at Sidekick Health, King, H&M, and CCP Games — walks through practical approaches for AI in healthcare and digital therapeutics. <br><br> We cover how machine learning is applied to diagnosis, drug discovery, and biologics (AlphaFold); Sidekick Health’s gamified digital therapeutics and quality-of-life goals; behavioral design that minimizes in-app time; and engagement strategies like charity incentives versus leaderboards. Stefan explains building the analytics foundation—data pipelines, dashboards, and experimentation capabilities—and why A/B testing and agenda-driven recommender systems are core to personalization. He also tackles data privacy and ethics (GDPR/HIPAA, de-identification), remote monitoring with wearables, clinical trials versus app experiments, managing medical risk, and hiring and scaling data, ML, and engineering teams. <br><br> Listen to get concrete frameworks for building data teams, running safe, measurable experiments, designing personalized interventions, and embedding ethical safeguards into AI-driven digital therapeutics
+description: "Learn to build data teams and ethical AI in healthcare: actionable personalization, A/B testing for digital therapeutics, GDPR-safe experiments."
+intro: "How can AI power effective digital therapeutics while balancing personalization, rapid experimentation, and patient safety? In this episode, Stefan Gudmundsson — Director of Data, Analytics, and AI with a track record building ML and data teams at Sidekick Health, King, H&M, and CCP Games — walks through practical approaches for AI in healthcare and digital therapeutics. <br><br> We cover how machine learning is applied to diagnosis, drug discovery, and biologics (AlphaFold); Sidekick Health’s gamified digital therapeutics and quality-of-life goals; behavioral design that minimizes in-app time; and engagement strategies like charity incentives versus leaderboards. Stefan explains building the analytics foundation—data pipelines, dashboards, and experimentation capabilities—and why A/B testing and agenda-driven recommender systems are core to personalization. He also tackles data privacy and ethics (GDPR/HIPAA, de-identification), remote monitoring with wearables, clinical trials versus app experiments, managing medical risk, and hiring and scaling data, ML, and engineering teams. <br><br> Listen to get concrete frameworks for building data teams, running safe, measurable experiments, designing personalized interventions, and embedding ethical safeguards into AI-driven digital therapeutics."

-description: 'Discover AI infrastructure strategies: open source orchestration, on-prem
-  economics and distributed training at scale to cut costs, boost performance and
-  control.'
-intro: How has the rise of ChatGPT reshaped the infrastructure needed to build and
-  run large language models, and when does open source orchestration make sense compared
-  to cloud or proprietary systems? In this episode we speak with Andrey Cheptsov,
-  founder and CEO of dstack — an open-source alternative to Kubernetes and Slurm designed
-  to simplify AI infrastructure orchestration. Drawing on his decade-plus at JetBrains
-  building developer tools, Andrey frames practical trade-offs between on-prem economics
-  and cloud spend, the maturity of open source orchestration tools, and patterns for
-  distributed training at scale. We cover core topics including open source orchestration
-  for AI workloads, cost and operational considerations for on-prem deployments, and
-  strategies to scale distributed training efficiently and reliably. Listen to understand
-  when an open source approach like dstack is appropriate, what to evaluate in orchestration
-  tools, and how to balance performance, cost, and control as you scale AI projects
-  post-ChatGPT. This episode is for engineering leaders and ML infrastructure teams
-  seeking actionable insights on AI infrastructure, orchestration tools, on-prem economics,
-  and distributed training best practices.
+description: "Discover AI infrastructure strategies: open source orchestration, on-prem economics and distributed training at scale to cut costs, boost performance and control."
+topics:
+  - AI infrastructure
+  - MLOps
+  - LLMs
+  - open-source
+  - tools
+intro: "How has the rise of ChatGPT reshaped the infrastructure needed to build and run large language models, and when does open source orchestration make sense compared to cloud or proprietary systems? In this episode we speak with Andrey Cheptsov, founder and CEO of dstack — an open-source alternative to Kubernetes and Slurm designed to simplify AI infrastructure orchestration. Drawing on his decade-plus at JetBrains building developer tools, Andrey frames practical trade-offs between on-prem economics and cloud spend, the maturity of open source orchestration tools, and patterns for distributed training at scale. We cover core topics including open source orchestration for AI workloads, cost and operational considerations for on-prem deployments, and strategies to scale distributed training efficiently and reliably. Listen to understand when an open source approach like dstack is appropriate, what to evaluate in orchestration tools, and how to balance performance, cost, and control as you scale AI projects post-ChatGPT. This episode is for engineering leaders and ML infrastructure teams seeking actionable insights on AI infrastructure, orchestration tools, on-prem economics, and distributed training best practices."

-description: 'Master AI product design: build algorithm-ready UX, run rapid experiments and craft data-driven roadmaps to prioritize innovation and ship measurable results.'
-intro: How do you design products that are “algorithm-ready” while running rapid experiments and building data-driven roadmaps? In this episode, Liesbeth Dingemans—strategy and AI leader, founder of Dingemans Consulting, former VP of Revenue at Source.ag and Head of AI Strategy at Prosus—walks through pragmatic approaches to AI product design that bridge vision and execution. <br><br> We cover algorithm-friendly UX and signal collection, a concrete interaction-design case study comparing TikTok and Instagram signals, and the Double Diamond framework for moving from problem framing to solution exploration. Liesbeth explains scoping and prioritization, parallel experiments and proofs of concept, one-week design sprints, appropriate timeframes for research-to-scale, and the role of designers, data scientists, engineers and product managers in shaping AI roadmaps. <br><br> Listeners will learn how to avoid rework by involving data science early, use scoping documents to challenge assumptions, create measurable experiments (the Task Force/“Jet Ski” model), and build data-driven pitches for long-term bets versus quarterly OKRs. Tune in for concrete frameworks and practices to make AI product design, rapid experiments, and data-driven roadmaps work in your organization
+description: "Master AI product design: build algorithm-ready UX, run rapid experiments and craft data-driven roadmaps to prioritize innovation and ship measurable results."
+intro: "How do you design products that are “algorithm-ready” while running rapid experiments and building data-driven roadmaps? In this episode, Liesbeth Dingemans—strategy and AI leader, founder of Dingemans Consulting, former VP of Revenue at Source.ag and Head of AI Strategy at Prosus—walks through pragmatic approaches to AI product design that bridge vision and execution. <br><br> We cover algorithm-friendly UX and signal collection, a concrete interaction-design case study comparing TikTok and Instagram signals, and the Double Diamond framework for moving from problem framing to solution exploration. Liesbeth explains scoping and prioritization, parallel experiments and proofs of concept, one-week design sprints, appropriate timeframes for research-to-scale, and the role of designers, data scientists, engineers and product managers in shaping AI roadmaps. <br><br> Listeners will learn how to avoid rework by involving data science early, use scoping documents to challenge assumptions, create measurable experiments (the Task Force/“Jet Ski” model), and build data-driven pitches for long-term bets versus quarterly OKRs. Tune in for concrete frameworks and practices to make AI product design, rapid experiments, and data-driven roadmaps work in your organization."