Upcoming Events

Global Meetups

PipelineAI organizes and sponsors the series of Advanced Spark and TensorFlow Meetups, with 20,000+ members throughout the world, including the following cities:

  • San Francisco
  • Chicago
  • New York
  • Washington DC
  • London
  • Berlin
  • Toronto
  • Madrid
  • Beijing
  • Chennai
  • Bangalore
  • Dubai

In 2016 alone, we hosted 60+ free meetup community events throughout the world.

While this dedication to the community requires a large amount of human and financial capital, we continue to push forward with close to 100 global community events planned in 2017.

Monthly PipelineAI Community Dev Sync (Online)

Everyone is invited!

Date/Time: 9am PT, Every Third Monday, Online.

Register HERE for the monthly sync meeting.

More info HERE on the monthly sync meeting.



Optimize and Deploy Distributed TensorFlow, Spark, and Scikit-Learn Models (Meetup London, May 2017)

Deploying High Performance TensorFlow in Production with GPUs (PyData London, May 2017)

Continuously Train & Deploy Spark ML and TensorFlow AI Models from Jupyter Notebook to Production (StartupML Conference, Jan 2017)

Recent Advancements in Data Science Workflows: From Jupyter-based Notebook to NetflixOSS-based Production (Big Data Spain Nov 2016)



Deploying High Performance TensorFlow in Production with GPUs - Strata London - May 24, 2017

High Performance TensorFlow + GPUs - GPU Tech Conference - San Jose, May 2017 (Slides)


PipelineAI Distributed Spark ML + TensorFlow AI + GPU Workshop

Everybody gets their own GPU for the duration of the workshop!

We will each build an end-to-end, continuous, distributed Spark ML and TensorFlow AI model training and deployment pipeline on our own GPU-based cloud instance.

The only prerequisites are a modern browser and an internet connection. We provide the rest, including a GPU-based cloud instance for each attendee.

At the end of the workshop, each attendee can download the Docker image and run everything in their own cloud account.


  • Spark ML
  • TensorFlow AI
  • Storing and Serving Models with HDFS
  • Trade-offs of CPU vs. GPU, Scale Up vs. Scale Out
  • CUDA + cuDNN GPU Development Overview
  • TensorFlow Model Checkpointing, Saving, Exporting, and Importing
  • Distributed TensorFlow AI Model Training (Distributed TensorFlow)
  • TensorFlow's Accelerated Linear Algebra Framework (XLA)
  • TensorFlow's Just-in-Time (JIT) and Ahead-of-Time (AOT) Compilers
  • Centralized Logging and Visualization of Distributed TensorFlow Training (TensorBoard)
  • Distributed TensorFlow AI Model Serving/Predicting (TensorFlow Serving)
  • Centralized Logging and Metrics Collection (Prometheus, Grafana)
  • Continuous TensorFlow AI Model Deployment (TensorFlow, Airflow)
  • Hybrid Cross-Cloud and On-Premise Deployments (Kubernetes)
  • High-Performance and Fault-Tolerant Micro-services (NetflixOSS)

Date -- Location -- Workshop

Apr 22 -- San Francisco -- Spark + TensorFlow + GPUs (SOLD OUT)

May 27 -- London -- Spark + TensorFlow + GPUs (SOLD OUT)

June 10 -- New York -- Spark + TensorFlow + GPUs (SOLD OUT)

July 8 -- San Francisco -- Spark + TensorFlow + GPUs (SOLD OUT)

Aug 12 -- Sydney Australia -- Spark + TensorFlow + GPUs (SOLD OUT)

Sept 23 -- New York and Washington DC -- Spark + TensorFlow + GPUs

Oct 28 -- London and Ireland -- Spark + TensorFlow + GPUs

PipelineAI Home

Register for PipelineAI Enterprise Edition

More Resources