Continuously Train + Deploy TensorFlow Serverless Models in Production with PipelineAI, Kafka, Kubernetes, and GPUs
At the end of the workshop, you can download the Docker image and run everything in your own cloud account.
Using the latest advancements in real-time AI from the open source PipelineAI project, we will build a continuous, serverless model training and deployment pipeline using GPUs, TensorFlow, Kafka, Kubernetes, JupyterLab, and OpenFaaS.
Streaming data is generated in real time by the other attendees, who use Slack to crowd-source the data labeling. This newly labeled data is used to continuously train and deploy new models.
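As a rough sketch of that continuous-training loop, the snippet below uses a plain Python list as a stand-in for the Kafka topic of Slack-labeled examples, and a dict as a stand-in for a retrained TensorFlow model. The batch size and the `consume_labels` helper are illustrative assumptions, not part of the actual PipelineAI workshop code.

```python
def consume_labels(stream, batch_size=32):
    """Accumulate crowd-labeled examples; trigger a retrain per full batch.

    `stream` stands in for a Kafka topic; each returned dict stands in
    for a newly trained model version (hypothetical sketch).
    """
    buffer, models = [], []
    for example in stream:
        buffer.append(example)
        if len(buffer) >= batch_size:
            # In the real pipeline this would launch a TensorFlow
            # training job on the freshly labeled batch.
            models.append({"version": len(models) + 1, "examples": len(buffer)})
            buffer = []  # start collecting the next batch
    return models

# Simulate 100 labeled messages arriving from Slack attendees.
labels = [{"text": f"msg-{i}", "label": i % 2} for i in range(100)]
print(consume_labels(labels))  # three full batches of 32 -> three model versions
```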
We will use OpenFaaS and Istio on Kubernetes to deploy the new models to a live production environment quickly and safely.
As with canary deployments of classic microservices, the new models are initially exposed to only a small fraction of production traffic; we then gradually shift more traffic to the new models.
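The traffic shift can be sketched with a simple weighted-routing simulation. In a real rollout the weights would live in an Istio route configuration and be updated between steps; here a seeded random router and the 5% / 25% / 50% ramp are illustrative assumptions.

```python
import random

def route(request_id, canary_weight, rng):
    """Send a fraction `canary_weight` of requests to the new (canary) model."""
    return "canary" if rng.random() < canary_weight else "stable"

rng = random.Random(42)
# Ramp the canary from 5% to 50% of traffic in steps, mimicking how
# Istio route weights would be bumped as the new model proves itself.
for weight in (0.05, 0.25, 0.50):
    hits = sum(route(i, weight, rng) == "canary" for i in range(10_000))
    print(f"weight={weight:.2f} -> {hits / 10_000:.1%} of traffic to canary")
```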
Using reinforcement learning, multi-armed bandits, and live metrics, we automatically route traffic to the winning models using reward functions such as MAXIMIZE(number of signups) and MINIMIZE(cost per prediction).
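To make the bandit idea concrete, here is a minimal epsilon-greedy sketch that routes requests between two model variants and rewards signups. The model names, signup rates, and epsilon value are made-up simulation parameters, not the workshop's actual routing logic.

```python
import random

# Hypothetical per-model conversion rates for the simulation only.
TRUE_SIGNUP_RATE = {"model-a": 0.04, "model-b": 0.11}

def epsilon_greedy(n_requests=20_000, epsilon=0.1, seed=7):
    """Route traffic to the model with the best observed signup rate,
    exploring a random model a fraction `epsilon` of the time."""
    rng = random.Random(seed)
    pulls = {m: 0 for m in TRUE_SIGNUP_RATE}
    rewards = {m: 0.0 for m in TRUE_SIGNUP_RATE}
    for _ in range(n_requests):
        if rng.random() < epsilon:  # explore: pick a random model
            model = rng.choice(list(TRUE_SIGNUP_RATE))
        else:  # exploit: pick the best observed model so far
            model = max(pulls, key=lambda m: rewards[m] / pulls[m] if pulls[m] else 0.0)
        pulls[model] += 1
        # Reward of 1 when the served user signs up: MAXIMIZE(number of signups).
        rewards[model] += 1.0 if rng.random() < TRUE_SIGNUP_RATE[model] else 0.0
    return pulls

print(epsilon_greedy())  # most traffic ends up on the higher-converting model-b
```

The same loop could instead minimize a cost signal (e.g. negative cost per prediction) by flipping the sign of the reward.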
All demos run on a hybrid-cloud, open source, GPU-based, Kubernetes cluster optimized for the machine learning and artificial intelligence use cases that we commonly see at PipelineAI.
The only prerequisites are a modern browser, an internet connection, and a good night's sleep.
**Everybody gets their own cloud-based GPU (latest Nvidia V100) for the day!**
No setup is required.
- Continuous Training (CT)
- Continuous Optimization (CO)
- Continuous Experimentation (CE)