Table of contents
- Batch serving in depth.
- Does it always make sense to try to go online?
In today's article, I will discuss the ideas behind Batch prediction.
First of all, what do I mean by that?
Batch prediction = offline predictions of ML models, computed in batch and cached for later use.
I will focus on:
- Use cases of Batch serving with a common design template that you can use for your own system.
- When you need to move from batch serving to online and when you should sit tight.
Batch serving in depth.
In the last decade, big data processing has been dominated by batch systems like Spark. To leverage the existing technology stack, a company's first step into the Machine learning world is to just use the existing batch system to make predictions.
Let's look at an example of a batch system, taken from the amazing blog post (highly recommended read!)
Predictions are computed on batch features by running an offline ML model.
As a user interacts with the application, predictions are simply looked up.
Models are developed and prototyped offline and evaluated on historical data.
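The template above can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the names (`Model`, `batch_predict`, `serve`, `activity_score`) are all made up for this example, and the in-memory dict stands in for whatever key-value store your batch job would actually write to.

```python
# Minimal sketch of the batch-serving template: an offline model scores
# every user in a nightly batch job, results are cached, and serving is
# a plain key-value look-up. All names here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Model:
    """Stand-in for an offline-trained ML model."""
    weight: float = 0.5

    def predict(self, features: dict) -> float:
        # Toy scoring; a real model would be loaded from a model registry.
        return self.weight * features["activity_score"]


def batch_predict(model: Model, user_features: dict[str, dict]) -> dict[str, float]:
    """Nightly batch job: score every known user and cache the results."""
    return {user_id: model.predict(f) for user_id, f in user_features.items()}


def serve(cache: dict[str, float], user_id: str, default: float = 0.0) -> float:
    """Serving side: predictions are one look-up away; unknown users get a fallback."""
    return cache.get(user_id, default)


features = {"alice": {"activity_score": 0.8}, "bob": {"activity_score": 0.2}}
cache = batch_predict(Model(), features)
print(serve(cache, "alice"))  # 0.4
print(serve(cache, "carol"))  # 0.0 (cold-start fallback)
```

In a real system the batch job would run on Spark (or a similar engine) and the cache would be a key-value store such as Redis or DynamoDB, but the shape of the workflow is the same.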
This approach has some advantages:
- You don't need to worry about online serving infrastructure.
- The predictions of your ML models are always one look-up away, with near-zero serving latency.
- The workflow makes it super easy to collaborate with Data Scientists in the organization: they are free to play around with offline data and deploy models.
However, there are a few drawbacks:
- Features get stale.
- Predictions get stale.
- The continuous training loop is much slower.
And these are major drawbacks:
Imagine a user who is exploring a new movie category. If the user comes back a few hours later and the model has not yet picked up their new taste, there is a high chance they will not stick around.
Imagine a prediction system that detects abuse on your platform: stale models mean bad actors will cause more disruption before being caught!
Does it always make sense to try to go online?
Despite the major drawbacks, I still argue there are cases where it makes sense for your organization not to go online (yet).
For example, if your organization has never used machine learning solutions before, it is much safer to invest in an offline solution that leverages the systems already in place. If the experiment goes well, you can always make the case that it would work X% better if features and predictions were not stale.
Moreover, it could be the case that bringing the features online is too expensive, and the potential % boost just does not justify the project. If the features stay offline, it does not really matter whether your predictions are online ;).
In the next article, I will cover online predictions and continual learning.