Typically, we write the models in Python and bundle them up in Docker containers to deploy via the client’s favourite container orchestrator (Kubernetes, Amazon ECS, Docker Swarm). For larger-scale problems, we tend to use Amazon SageMaker as our workhorse. The choice, however, is normally determined by the nature of the problem and the capabilities and preferences of the client.
Customer-obsessed businesses measure how accurately they can predict these events. Using statistics, we can quantify the degree to which we actually understand our audience, and we can monitor our improvement over time.
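One simple way to put a number on this, assuming we log a predicted probability and an observed outcome for each event, is a proper scoring rule such as the Brier score (the figures below are made up for illustration):

```python
# Quantifying how well predicted probabilities match observed 0/1 outcomes.
def brier_score(predictions, outcomes):
    """Mean squared error between predicted probabilities and outcomes.
    Lower is better; a perfect forecaster scores 0."""
    pairs = list(zip(predictions, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)


# Tracking the score period over period monitors improvement over time.
weekly_scores = {
    "week_1": brier_score([0.9, 0.2, 0.7], [1, 0, 1]),
    "week_2": brier_score([0.95, 0.1, 0.8], [1, 0, 1]),
}
```

A falling score over successive weeks is direct evidence that our understanding of the audience is improving.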
Taking an empirical, bottom-up approach to simulation ensures that we capture the nuances surrounding the arrival of different types of customers, while accounting for the real-world complexities of finite and variable supply.
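A bottom-up simulation of this kind can be sketched as follows; the customer types, arrival rates, and stock level are illustrative assumptions, with arrivals drawn from a simple Poisson process per customer type:

```python
# Bottom-up simulation sketch: customers of different types arrive at
# (empirically estimated) rates and draw down a finite supply.
import random


def simulate_day(arrival_rates, stock, seed=None):
    """Simulate one day of arrivals against a finite stock.

    arrival_rates maps customer type -> expected arrivals per day (must be > 0).
    Returns (units_sold, customers_turned_away)."""
    rng = random.Random(seed)
    events = []
    for ctype, rate in arrival_rates.items():
        # Exponential inter-arrival times give a Poisson arrival process.
        t = rng.expovariate(rate)
        while t < 1.0:  # one day, in day units
            events.append((t, ctype))
            t += rng.expovariate(rate)
    events.sort()  # interleave the customer types in arrival order

    sold = turned_away = 0
    for _, ctype in events:
        if stock > 0:
            stock -= 1
            sold += 1
        else:
            turned_away += 1
    return sold, turned_away
```

Because the simulation is built from individual arrivals rather than aggregate demand, effects like stock-outs turning late arrivals away emerge naturally rather than being assumed.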
We can articulate the value that we will capture when the best algorithm is deployed.
We need to monitor the performance and reliability of the deployment. Is the engineering stable? Is the recommender playing nicely with the other systems?