description: "Master product analytics, A/B testing & power analysis: design stable metrics, validate randomization with A/A tests, plan sample size to de-risk features."
intro: "How do you design product experiments that truly establish causality and avoid costly false conclusions? In this episode, Jakob Graff — Director of Data Science and Data Analytics at diconium, with prior analytics leadership at Inkitt, Babbel, King and a background in econometrics — walks through practical product analytics and A/B testing strategies focused on causality and reliable metrics. <br><br> We cover why randomized experiments mirror clinical trials, how experimentation de-risks features and builds organizational learning, and a concrete case study on subscription vs. points revenue metric design. Jakob explains experimentation platform trade-offs (third-party vs. in-house), traffic splitters, assignment tracking, and why A/A tests validate system trust. You’ll hear best practices for first tests (two-group simplicity), metric selection considering noise and seasonality, and how to plan duration with power analysis and sample-size calculations. The discussion also compares z/t/nonparametric tests, p-value intuition from A/A comparisons, frequentist vs Bayesian perspectives, and multi-armed test considerations. <br><br> Listen to learn practical steps for designing randomized experiments, selecting stable metrics, planning sample sizes, and interpreting results so your product analytics and A/B testing produce actionable, causal insights."
description: "Discover AI-driven computer vision and remote sensing strategies to scale biodiversity monitoring, improve species ID, and inform conservation policy."
intro: "How can AI help close critical data gaps in biodiversity monitoring and turn images and sensor data into actionable conservation decisions? In this episode Tanya Berger-Wolf, a computational ecologist, director of TDAI@OSU, and co-founder of the Wildbook project (Wild Me), walks through practical applications of AI for ecology, biodiversity monitoring, and conservation. <br><br> We cover core techniques—computer vision, machine learning, and remote sensing—and their use in image-based monitoring with camera traps, drones, and species identification. Tanya explains individual identification and longitudinal tracking, habitat mapping and change detection, and the data challenges of labeling, class imbalance, and sparse observations. The conversation addresses integration of heterogeneous datasets, model robustness (domain shift and transfer learning), and ethical considerations including Indigenous knowledge and equity. You’ll also hear about scalable platforms like Wildbook, citizen science workflows for crowdsourcing and quality control, policy relevance, open data and FAIR principles, edge deployment in the field, and building sustainable monitoring programs. <br><br> Listen to gain concrete insights on tools, pitfalls, and next steps for applying AI to conservation—what works now, what remains hard, and resources to explore further."
description: "Master AI product design: build algorithm-ready UX, run rapid experiments and craft data-driven roadmaps to prioritize innovation and ship measurable results."
intro: "How do you design products that are “algorithm-ready” while running rapid experiments and building data-driven roadmaps? In this episode, Liesbeth Dingemans—strategy and AI leader, founder of Dingemans Consulting, former VP of Revenue at Source.ag and Head of AI Strategy at Prosus—walks through pragmatic approaches to AI product design that bridge vision and execution. <br><br> We cover algorithm-friendly UX and signal collection, a concrete interaction-design case study comparing TikTok and Instagram signals, and the Double Diamond framework for moving from problem framing to solution exploration. Liesbeth explains scoping and prioritization, parallel experiments and proofs of concept, one-week design sprints, appropriate timeframes for research-to-scale, and the role of designers, data scientists, engineers and product managers in shaping AI roadmaps. <br><br> Listeners will learn how to avoid rework by involving data science early, use scoping documents to challenge assumptions, create measurable experiments (the Task Force/“Jet Ski” model), and build data-driven pitches for long-term bets versus quarterly OKRs. Tune in for concrete frameworks and practices to make AI product design, rapid experiments, and data-driven roadmaps work in your organization."
description: "Master algorithmic trading: backtesting and risk management—learn practical data sources, features, models & execution to build robust strategies."
topics:
- machine learning
- data science
- MLOps
- algorithmic trading
- tools
intro: "How do you turn a trading idea into a robust, risk-managed algorithm in Python? In this episode Ivan Brigida — analytics lead behind PythonInvest with 10+ years in statistical modeling, forecasting, econometrics and finance — walks through practical steps for algorithmic trading with Python, from data sourcing to deployment (and a clear reminder this is educational, not investment advice). <br><br> We cover where retail traders get market data (Yahoo, Quandl, Polygon), OHLCV and adjusted-close nuances, and a concrete mean-reversion example. Ivan explains backtesting methodology, common pitfalls like time-series data leakage, and walk-forward simulation for realistic validation. He breaks down risk management (stop-loss thresholds, position sizing), execution and trading fees, plus evaluation metrics (ROI, precision) and defining prediction targets (binary growth thresholds such as 5%). <br><br> On the modeling side you’ll hear practical feature engineering (time-window stats, handcrafted indicators), model choices (logistic regression, XGBoost, neural nets), explainability via feature importance, and deployment options (cron, Airflow, APIs, partial automation). Listen to gain actionable guidance for building, validating, and deploying algorithmic trading systems in Python."