Keynote Speakers

Keynotes

Gustavo Alonso

ETH Zurich, Switzerland
Keynote Title: How Hardware Evolution is Driving Software Systems

Wolfgang Reisig

Humboldt-Universität zu Berlin, Germany
Keynote Title: Conceptual Modeling of Event-Based Systems

Tyler Akidau

Google AI Research, USA
Keynote Title: Open Problems in Stream Processing: A Call To Action

Karthik Ramasamy

Streamlio Inc., USA
Keynote Title: Unifying Messaging, Queuing, Streaming & Light Weight Compute for Online Event Processing

Donald Kossmann

Microsoft Research, Redmond, USA

Industry Talks

Sergey Bykov

Microsoft, USA
Keynote Title: Drinking from the Firehose, with Virtual Streams and Virtual Actors

Olivier Tardieu

IBM Research, USA
Keynote Title: Serverless Composition of Serverless Functions

Gustavo Alonso

ETH Zurich, Switzerland

Keynote Title: How Hardware Evolution is Driving Software Systems
Abstract:

Computing systems are undergoing a multitude of interesting changes: from the platforms (cloud, appliances) to the workloads, data types, and operations (big data, machine learning). Many of these changes are driven by, or tackled through, innovation in hardware, even to the point of fully specialized designs for particular applications. In this talk, I will review some of the most important changes happening in hardware and discuss how they affect system design as well as the opportunities they create. I will focus on data processing, with an emphasis on streams and event-based systems, but also discuss applications in other areas. I will also briefly discuss how these trends are likely to result in a very different form of IT, and consequently of computer science, from the one we know today.

Biography:

Gustavo Alonso is a Professor of Computer Science at ETH Zürich. He studied telecommunications (electrical engineering) at the Madrid Technical University (ETSIT, Politécnica de Madrid). As a Fulbright scholar, he completed an M.S. and Ph.D. in Computer Science at UC Santa Barbara. After graduating from Santa Barbara, he worked at the IBM Almaden Research Center before joining ETHZ. His research interests encompass almost all aspects of systems, from design to run time. He works on distributed systems, data processing, and system aspects of programming languages. Most of his research these days is related to multi-core architectures, data centers, FPGAs, and hardware acceleration. Gustavo has received numerous awards for his research, including three Test-of-Time awards for work in databases, programming languages, and systems. He is a Fellow of the ACM and of the IEEE as well as a Distinguished Alumnus of the Department of Computer Science of UC Santa Barbara.

Wolfgang Reisig

Humboldt-Universität zu Berlin, Germany

Keynote Title: Conceptual Modeling of Event-Based Systems

Tyler Akidau

Google AI Research, USA

Keynote Title: Open Problems in Stream Processing: A Call To Action
Abstract:

In the last four years, stream processing has gone from niche to mainstream, with real-time data processing systems gaining traction not only in fast-moving startups, but also among their more skeptical and cautious enterprise brethren. In light of such pervasive adoption, is it safe to say we’ve finally reached the point where stream processing is a solved commodity? Are we done and ready to move on to the next big thing?

In this talk, I will argue that the answer to those questions is conclusively "no": stream processing as a field of research is alive and well. In fact, as streaming systems evolve to look more and more alike, the need for active exploration of new ideas is all the more pressing. And though streaming systems are more capable and robust than ever, they remain in many ways difficult to use, difficult to maintain, and difficult to understand. But we can change that.

I don’t claim to have all the answers; no one does. But I do have a few ideas of where we can start. And by sharing my thoughts on some of the more interesting open problems in stream processing, and encouraging others to share theirs, I’m hoping that we as a research community can work together to help move the needle just a little bit further.

Biography:

Tyler Akidau is a software engineer at Google, where he is the technical lead for the Data Processing Languages & Systems group, responsible for Google's Apache Beam efforts, Google Cloud Dataflow, and internal data processing tools like Google Flume, MapReduce, and MillWheel. He is also a founding member of the Apache Beam PMC, author of the 2015 Dataflow Model paper and the Streaming 101 and Streaming 102 articles, and co-author of the Streaming Systems book. Though deeply passionate and vocal about the capabilities and importance of stream processing, he is also a firm believer in batch and streaming as two sides of the same coin, with the real endgame for data processing systems being the seamless merging of the two.
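
To make the "two sides of the same coin" point concrete, here is a minimal sketch (not part of the keynote; the input data and pipeline are purely illustrative) using the Apache Beam Python SDK, where the same event-time windowed aggregation applies whether the source is bounded (batch) or unbounded (streaming):

import apache_beam as beam
from apache_beam.transforms.window import FixedWindows, TimestampedValue

# Illustrative events: (key, count, event-time in seconds).
events = [("user1", 1, 10.0), ("user2", 1, 20.0), ("user1", 1, 70.0)]

with beam.Pipeline() as p:
    (p
     | beam.Create(events)                                        # bounded input for the sketch
     | beam.Map(lambda e: TimestampedValue((e[0], e[1]), e[2]))   # attach event-time timestamps
     | beam.WindowInto(FixedWindows(60))                          # 60-second event-time windows
     | beam.CombinePerKey(sum)                                    # per-key counts within each window
     | beam.Map(print))

Swapping beam.Create for an unbounded source (for example, a Pub/Sub read) leaves the windowing and aggregation logic unchanged, which is the essence of the unified model described above.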

Karthik Ramasamy

Streamlio Inc., USA

Keynote Title: Unifying Messaging, Queuing, Streaming & Light Weight Compute for Online Event Processing
Abstract:

Online event processing applications abound, ranging from web and mobile applications to data processing pipelines. Such applications often require the ability to ingest, store, dispatch, and process events. Until now, supporting all of these needs has required a different system for each task -- stream processing engines, message queuing middleware, and pub/sub messaging systems. This has added unnecessary complexity to both the development and the operation of such applications, raising the barrier to adoption in the enterprise.

In this keynote, Karthik will outline the need to unify these capabilities in a single system that is easy to develop against and operate at scale, and will delve into how Apache Pulsar was designed to address this need with an elegant architecture. Apache Pulsar is a next-generation distributed pub/sub system that was originally developed and deployed at Yahoo and now runs in production at more than 100 companies. Karthik will explain how the architecture and design of Pulsar provide the flexibility to support developers and applications needing any combination of queuing, messaging, streaming, and lightweight compute for events. Furthermore, he will present real-life use cases showing how Apache Pulsar is used for event processing, ranging from data processing tasks to web applications.
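
As a rough illustration of the "any combination" point (this sketch is not from the keynote; the topic name, subscription names, and local broker URL are assumptions), Pulsar's Python client lets the same topic back both queue-style and stream-style consumers simply by choosing a subscription type:

import pulsar

client = pulsar.Client('pulsar://localhost:6650')  # assumes a local broker
topic = 'persistent://public/default/orders'

# Queuing semantics: a Shared subscription load-balances messages across its consumers.
worker = client.subscribe(topic, 'workers', consumer_type=pulsar.ConsumerType.Shared)

# Streaming semantics: an Exclusive (or Failover) subscription reads the topic's log in order.
stream = client.subscribe(topic, 'analytics', consumer_type=pulsar.ConsumerType.Exclusive)

# Each event is appended once to the topic and fans out to both subscriptions.
producer = client.create_producer(topic)
producer.send(b'order-created')

msg = worker.receive()      # delivered to exactly one consumer on the 'workers' subscription
worker.acknowledge(msg)
client.close()

Lightweight compute is layered on the same substrate via Pulsar Functions, which consume events from one topic and publish their results to another.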

Biography:

Karthik Ramasamy is the co-founder and CEO of Streamlio, a company focused on building next-generation event processing infrastructure using Apache Pulsar. Before Streamlio, he was the engineering manager and technical lead for real-time infrastructure at Twitter, where he co-created Twitter Heron, which was open sourced and is used by several companies. He has two decades of experience working with companies such as Teradata, Greenplum, and Juniper in their rapid-growth stages, building parallel databases, big data infrastructure, and networking. He also co-founded Locomatix, a company specializing in real-time stream processing on Hadoop and Cassandra using SQL, which was acquired by Twitter. Karthik has a Ph.D. in computer science from the University of Wisconsin-Madison with a focus on big data and databases; several of his research projects from that time were later spun off as a company acquired by Teradata. Karthik is the author of several publications, patents, and the popular book "Network Routing: Algorithms, Protocols and Architectures".

Sergey Bykov

Microsoft, USA

Keynote Title: Drinking from the Firehose, with Virtual Streams and Virtual Actors
Abstract:

Event Stream Processing is a popular paradigm for building robust and performant systems in many different domains, from IoT to fraud detection to high-frequency trading. Because of the wide range of scenarios and requirements, it is difficult to conceptualize a unified programming model that would be equally applicable to all of them. Another tough challenge is how to build streaming systems with cardinalities of topics ranging from hundreds to billions while delivering good performance and scalability.

In this session, Sergey Bykov will talk about the journey of building Orleans Streams, which originated in gaming and monitoring scenarios and quickly expanded beyond them. He will cover the programming model of virtual streams that emerged as a natural extension of Orleans' virtual actor model, the architecture of the underlying runtime system, and the compromises and hard choices made in the process. Sergey will share the lessons learned from running the system in production, as well as future ideas and opportunities that remain to be explored.
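
For readers unfamiliar with the virtual actor model, the following is a deliberately simplified sketch in Python. It is not Orleans code (Orleans is a .NET framework with its own API), and the runtime class here is hypothetical; it only illustrates the core idea that virtual streams inherit: entities are addressed purely by identity and activated on demand.

# Hypothetical illustration of the "virtual" idea: callers never create, place, or
# destroy actors explicitly; referring to an identity is enough to activate it.
class DeviceActor:
    def __init__(self, actor_id):
        self.actor_id = actor_id
        self.readings = []

    def on_event(self, reading):
        self.readings.append(reading)

class VirtualActorRuntime:
    """Toy runtime: resolves an identity to an activation, creating one lazily."""
    def __init__(self):
        self._activations = {}

    def get(self, actor_id):
        # A real runtime would also handle placement, recovery, and eviction.
        if actor_id not in self._activations:
            self._activations[actor_id] = DeviceActor(actor_id)
        return self._activations[actor_id]

runtime = VirtualActorRuntime()
runtime.get("sensor-42").on_event(21.5)   # "sensor-42" is activated on first use
runtime.get("sensor-42").on_event(22.0)   # same logical actor, no explicit lifecycle

Virtual streams apply the same principle to stream identities: a stream logically exists as soon as a producer or consumer refers to it, without any explicit creation or lifecycle management.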

Biography:

Sergey Bykov joined Microsoft in 2001 and worked in several product groups, from BizTalk and Host Integration Server to embedded operating systems for Point of Sale terminals to Bing. The mediocre state of developer tools for cloud services and distributed systems at the time inspired him to join Microsoft Research to start the Orleans project, in order to qualitatively improve developer productivity in that area.

The Orleans Framework implemented a radically new approach to building scalable distributed applications and cloud services via a simple and intuitive programming model. Orleans has been used for years to power blockbuster games like Halo, Gears of War, and Age of Empires, within Skype, Azure, and a number of other Microsoft product groups, as well as for IoT, financial modeling, and many other domains by Microsoft customers. Clones of Orleans created for the JVM, Go, and Erlang only confirm the success of the Orleans model.

Orleans has become one of the most successful Open Source .NET projects, with a vibrant world-wide community of contributors, and is a showcase of the New Microsoft. Sergey continues leading Orleans along with several other innovative projects within the Microsoft Gaming organization.

Olivier Tardieu

IBM Research, USA

Keynote Title: Serverless Composition of Serverless Functions
Abstract:

In a few short years, the Function-as-a-Service paradigm has revolutionized how we think about distributed event processing. Within seconds, developers can deploy and run event-driven functions in the cloud that scale on demand. CIOs love the high availability and zero administration. CFOs love the fine-grained, pay-as-you-go pricing model.

Most FaaS platforms today are biased toward (if not restricted to) simple functions: functions that run only for a short while, use limited CPU and memory, and process relatively small amounts of data. In this talk, we will take a closer look at FaaS platforms (using Apache OpenWhisk as our exemplar) to understand these trade-offs.

Functions are not applications. To build compelling FaaS applications, we need to compose functions. In this talk, we will compare approaches to function composition from both a developer's and a system's perspective, and show how composition can dramatically expand the scope of FaaS.
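
As a minimal sketch of the simplest end of that spectrum (the action names and payload fields below are illustrative, not taken from the talk), OpenWhisk lets Python functions be deployed as actions and chained into a sequence with the standard wsk CLI; the richer composition models the talk compares build on these same building blocks.

# validate.py -- an OpenWhisk Python action: the platform calls main(args) with the
# request parameters and expects a JSON-serializable dict back.
def main(args):
    if 'event' not in args:
        return {'error': 'missing event'}
    return {'event': args['event'], 'valid': True}

# A second action (e.g. enrich.py, defining its own main) would receive this dict as
# its input. The simplest built-in composition is an OpenWhisk sequence, which pipes
# the output of one action into the next:
#
#   wsk action create validate validate.py
#   wsk action create enrich enrich.py
#   wsk action create pipeline --sequence validate,enrich
#   wsk action invoke pipeline --result --param event '{"id": 1}'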

Biography:

Dr. Olivier Tardieu is a Principal Research Staff Member at IBM T.J. Watson, USA. He received a Ph.D. in Computer Science from Ecole des Mines de Paris, France in 2004, and joined IBM Research in 2007. His research focuses on making developers more productive with better programming models and methodologies. His passion for programming language design has driven him to explore a wide range of topics including high-performance computing, digital circuit design, log analysis, and stream processing. Today he devotes his energy to cloud computing, and in particular the serverless paradigm.

Important Dates

Abstract submission for research track: March 8th, 2019 (extended from February 19th)
Research and industry paper submission: March 8th, 2019 (extended from February 26th)
Tutorial proposal submission: April 5th, 2019 (extended from March 22nd)
Grand challenge solution submission: April 22nd, 2019 (extended from April 7th)
Author notification (research and industry tracks): April 19th, 2019 (extended from April 9th)
Poster, demo & doctoral symposium submission: April 22nd, 2019