PipelineAI Distributed Spark ML + TensorFlow AI + GPU Workshop

Everybody Gets Their Own GPU Cloud Instance! No Setup Required.


We use PipelineAI to build an end-to-end model training and deployment pipeline using Spark ML, TensorFlow AI, Kubernetes, and GPUs!

At the end of the workshop, you can download the Docker image and run everything in your own cloud account.

Topics Covered

Spark ML and TensorFlow AI

Storing and Serving Models with HDFS

Trade-offs of CPU vs. GPU, Scale Up vs. Scale Out

CUDA + cuDNN GPU Development Overview

TensorFlow Model Checkpointing, Saving, Exporting, and Importing

Distributed TensorFlow AI Model Training (Distributed TensorFlow)

TensorFlow's Accelerated Linear Algebra Framework (XLA): Just-in-Time (JIT) and Ahead-of-Time (AOT) Compilers

Centralized Logging and Visualization of Distributed TensorFlow Training (TensorBoard)

Distributed TensorFlow AI Model Serving/Predicting (TensorFlow Serving)

Centralized Logging and Metrics Collection (Prometheus, Grafana)

Continuous TensorFlow AI Model Deployment (TensorFlow, Airflow)

Hybrid Cross-Cloud and On-Premise Deployments (Kubernetes)

High-Performance and Fault-Tolerant Microservices (NetflixOSS)


The only prerequisites are a modern browser, an internet connection, and a good night's sleep.

We provide a GPU cloud instance for each attendee.

Continuous Model Training

Real-Time 360º Dashboards

Hybrid-Cloud Autoscaling

Register for PipelineAI Early Access