When updating prediction weights, we typically iterate with `for model in models` wherever multiple prediction models are involved. That works today because we add a fixed set of models at startup and only ever work with those. In production, though, a new prediction model can be registered at any time and get picked up mid-flight by an already running acquisition session. Going forward we should snapshot the list of prediction models at the start of each acquisition and limit that session to the snapshot. Need to think about this.
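A minimal sketch of the snapshot idea, assuming a thread-safe registry that models join at any time; all names here (`ModelRegistry`, `AcquisitionSession`, `update`) are hypothetical, not the actual codebase:

```python
import threading


class ModelRegistry:
    """Registry that new prediction models can join at any time."""

    def __init__(self):
        self._lock = threading.Lock()
        self._models = []

    def register(self, model):
        with self._lock:
            self._models.append(model)

    def snapshot(self):
        # Frozen copy: later registrations don't affect the caller.
        with self._lock:
            return tuple(self._models)


class AcquisitionSession:
    """Pins the model set for the lifetime of one acquisition."""

    def __init__(self, registry):
        self._models = registry.snapshot()

    def update_weights(self, data):
        # Only the models pinned at session start are touched,
        # even if the registry has grown since.
        for model in self._models:
            model.update(data)
```

A session created before a late registration simply never sees the new model; whether late models should instead join the *next* acquisition automatically is the open design question.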