Tutorial: A Modern Approach to Situation Awareness: The Ultimate Challenge for Event Processing


In this tutorial we sketch a modern approach to situation awareness, which we perceive as the ultimate challenge for event processing. The following list is a subset of the challenges that we intend to discuss: how to capture, organise, and tamper-proof incoming data and requests; how to recognise abnormal conditions with ultra-high sensitivity and ultra-low false-positive/false-negative rates; how to interact with domain experts in their domain jargon; and how to create and use shared experience within a group of domain experts.


Dr. Christoph Brandt, TU Darmstadt, christoph.brandt@gmx.net: I have 30+ years’ experience in software engineering, compiler technology, domain languages, language engineering, algebraic graph transformation, triple graph grammars, and other areas. I developed an enterprise modeling framework that supports evolving domain languages using algebraic graph transformation (AGT) and triple graph grammars (TGGs) based on AGT. For the first time, it provides support for the propagation of constraints across modeling domains integrated by TGGs, as well as for the horizontal integration and synchronization of organizational domain models in the finance industry with the help of TGGs. Other achievements include the development of business continuity processes using the generative power of graph grammars; the application of TGGs to align the syntax of a language with its semantics in a way that supports language evolution with regard to both; and, for the first time, the development of an integrated compositional model encompassing syntax, semantics, and pragmatics as the three main dimensions of language, able to support compositional language engineering.

Dieter Gawlick, Oracle, dieter.gawlick@oracle.com: I have 50+ years’ experience in data(base) management, event management, and other areas. I developed the first commit process, including group/fast commit, and developed the foundation for critical operational characteristics of databases: high performance and scalability as well as high reliability (stand-by technology, and the interaction between OLTP systems and ATMs). Other achievements are the RAID 5 technology and the LSM (log-structured merge) tree. This was followed by integrating queues and workflow into databases, generalizing subscription technology with an expression data type, and developing a framework for situation awareness that integrates a wide set of technologies: multi-model databases, workflow, provenance, event processing, temporal property graphs, ML, and TGGs, just to name a few.

Tutorial: An Outlook to Declarative Languages for Big Streaming Data


In the Big Data context, data streaming systems have been introduced to tame velocity and enable reactive decision making. However, approaching such systems is still complex due to the paradigm shift they require, i.e., moving from scalable batch processing to continuous analysis and detection. Declarative languages play a crucial role in fostering the adoption of stream processing solutions. In this full-day tutorial, we aim to introduce various approaches for declarative querying in state-of-the-art big data streaming frameworks. In addition, we provide guidelines and hands-on experience on developing and deploying stream processing applications using a variety of SQL-like languages, such as EPL, KSQL, Flink-SQL, Spark Streaming SQL, and more. In particular, the goals of the tutorial are to:

  • Provide the fundamental notion of processing streams with declarative languages,
  • Offer an overview of current challenges and state of the art for streaming query languages,
  • Showcase different technologies for processing streams, and
  • Outline the process of developing and deploying stream processing applications.
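To give a flavour of the declarative windowing model these SQL-like languages share, the following is a minimal Python sketch (a toy illustration, not the syntax or engine of EPL, KSQL, or Flink-SQL; all names are ours) of what a tumbling-window count query computes over a stream of timestamped events:

```python
from collections import defaultdict

def tumbling_window_count(events, size):
    """Group timestamped events into tumbling windows of `size` time
    units and count events per (window, key) -- roughly what a streaming
    'SELECT key, COUNT(*) ... GROUP BY window, key' query computes."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // size) * size  # window this event falls into
        counts[(window_start, key)] += 1
    return dict(counts)

# A stream of (timestamp, key) events, e.g. page views per user.
stream = [(1, "a"), (3, "b"), (4, "a"), (11, "a"), (12, "b"), (19, "b")]
result = tumbling_window_count(stream, size=10)
# -> {(0, 'a'): 2, (0, 'b'): 1, (10, 'a'): 1, (10, 'b'): 2}
```

The declarative languages covered in the tutorial let developers state this window/group/aggregate intent directly, leaving the incremental, distributed evaluation to the engine.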
Speakers:

    Riccardo Tommasini has been a Ph.D. student at the Department of Electronics and Information of the Politecnico di Milano since 2015. His research interests span Stream Processing, Semantic Technologies, Description Logics, Programming Languages, and Benchmarking. His ongoing PhD thesis investigates the velocity aspects of the Web of Data. Riccardo’s tutorial activities comprise the Big Data Tutorial at the Kno.e.sis Center, Wright State University, Dayton, Ohio (2015), and Stream Reasoning Tutorials at ISWC 2017, ICWE 2018, ESWC 2019, and TheWebConf 2019.

    Sherif Sakr is the Head of the Data Systems Group at the Institute of Computer Science, University of Tartu, Estonia. He received his PhD degree in Computer and Information Science from Konstanz University, Germany, in 2007. Sherif is an ACM Senior Member and an IEEE Senior Member. In 2017, he was appointed to serve as an ACM Distinguished Speaker and as an IEEE Distinguished Speaker. He is currently serving as the Editor-in-Chief of the Springer Encyclopedia of Big Data Technologies. Prof. Sakr’s research interest is data and information management in general, particularly big data processing systems, big data analytics, and data science. Prof. Sakr has published more than 100 refereed research publications in international journals and conferences such as: Proceedings of the VLDB Endowment (PVLDB), IEEE Transactions on Parallel and Distributed Systems (IEEE TPDS), IEEE Transactions on Services Computing (IEEE TSC), IEEE Transactions on Big Data (IEEE TBD), ACM Computing Surveys (ACM CSUR), SIGMOD, ICDE, and EDBT. He has delivered several tutorials at various venues, including WWW’12, IC2E’14, CAiSE’14, the EDBT Summer School 2015, the 2nd ScaDS International Summer School on Big Data 2016, the 3rd Keystone Training School on Keyword Search in Big Linked Data 2017, and ISWC 2019.

    Marco Balduini earned his doctorate in 2019 from Politecnico di Milano and is currently a post-doc at the Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB). His research work focuses on Big Data, Data Processing, Semantic Technologies, Data Science, and Data Integration. His major interest is the management of heterogeneous streams of high-volume spatio-temporal data. He has contributed to the Stream Reasoning research field, and his work was applied in analyzing heterogeneous data streams from Social Media, Mobile Telecommunication, and IoT in collaboration with Telecom Italia and Siemens. He was actively involved in the development of the C-SPARQL Engine, in the W3C Community Group on RDF Stream Processing (RSP), and in the organization of Stream Reasoning tutorials at ISWC 2013, ISWC 2014, ISWC 2015, and ISWC 2016. He participated in the research activities of the EU FP7 projects LarKC and ModaClouds and in the EIT projects City Data Fusion for Event Management, Crowd Insights, and Welcome. He is also a co-founder of Fluxedo, a start-up that exploits cutting-edge technology in the field of big data management and analysis.

    Emanuele Della Valle is an Assistant Professor of Software Project Management at the Department of Electronics and Information of the Politecnico di Milano. In more than 15 years of research, his research interests have covered Big Data, Stream Processing, Semantic Technologies, Data Science, Web Information Retrieval, and Service-Oriented Architectures. He branded the stream reasoning research field. The semantic and syntactic extensions he proposed to the Semantic Web stack (i.e., RDF streams and Continuous SPARQL) are currently on the path towards standardization at the W3C in the RDF Stream Processing Community Group. His work on Stream Reasoning was applied in analyzing Social Media, Mobile Telecom, and IoT data streams in collaboration with Telecom Italia, IBM, Siemens, Oracle, Indra, and Statoil. In 2015, he started up a company (Fluxedo) to commercialize the open-source results of Stream Reasoning research. His education activities include lecturing on Web Science, Software Project Management, Semantic Technologies, Stream Processing, and Big Data technologies.

    Tutorial: Correctness and Consistency of Event-Based Systems


    Event-based systems encounter challenges of correctness and consistency. Correctness means that the execution results match the intention of the designer. Consistency means that the different data elements that co-exist within a system are internally consistent with respect to the system's requirements. In this tutorial we cover the different aspects of correctness and consistency. We discuss issues of correctness with respect to the temporal properties of the system, such as the order of events and the boundaries of time windows. We further discuss the different aspects of fine-tuning required in event-based systems where different semantic interpretations are possible, such as repeating events or consumption. The consistency discussion relates to two classic issues in the data management world: data dependencies and integrity constraint enforcement. Since event-based systems typically consist of a loosely coupled component architecture in a distributed environment, the challenge is compliance with data dependencies and assertions about consistency. Last but not least, we discuss the validation of event-based systems by using static and dynamic analysis.
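As a toy illustration of why consumption policies matter (this is our own sketch, not the semantics of any particular engine), consider matching the pattern "A followed by B": whether a matched A is consumed, or remains available for later matches, changes the output of the very same event stream.

```python
def match_pairs(events, consume):
    """Match the pattern 'A followed by B' over an event list.
    With consume=True each A participates in at most one match;
    with consume=False every pending A matches every later B."""
    pending_a, matches = [], []
    for i, e in enumerate(events):
        if e == "A":
            pending_a.append(i)
        elif e == "B":
            if consume:
                if pending_a:
                    matches.append((pending_a.pop(0), i))  # pair and consume oldest A
            else:
                matches.extend((a, i) for a in pending_a)  # reuse all pending As
    return matches

events = ["A", "A", "B", "B"]
match_pairs(events, consume=True)   # -> [(0, 2), (1, 3)]
match_pairs(events, consume=False)  # -> [(0, 2), (1, 2), (0, 3), (1, 3)]
```

Two matches under one policy, four under the other: exactly the kind of semantic fine-tuning decision the tutorial examines.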


    Prof. Opher Etzion, Yezreel Valley Academic College, opher.etzion@gmail.com: Opher Etzion serves as Professor of IS, head of the Information Systems department, and head of the Technological Empowerment Institute in Yezreel Valley Academic College. During the years 1997-2014 he served in various roles in IBM, most recently as IBM Senior Technical Staff Member and chief scientist of event processing in IBM Haifa Research Lab. He has also been the chair of EPTS (Event Processing Technical Society). In parallel he served as a senior teaching fellow at the Technion – Israel Institute of Technology, where over the years he supervised 6 PhD and 24 MSc theses. He has authored or co-authored papers in refereed journals and conferences on topics related to active databases, temporal databases, rule-based systems, event processing, and autonomic computing, and gave several keynote addresses and tutorials. He is the co-author of Event Processing in Action (with Peter Niblett), a comprehensive technical book about event processing. Prior to joining IBM in 1997, he was a faculty member and Founding Head of the Information Systems Engineering department at the Technion, and held professional and managerial positions in industry and in the Israel Air Force. He is a senior member of ACM, and has been general chair and program chair of various conferences such as CoopIS 2000 and ACM DEBS 2011. He won several prestigious awards over the years, such as the Israel Air Force commander award, the highest air-force award (1983), the IBM Outstanding Innovation Award (twice – in 2002 and 2013), and the IBM Corporate Award (the highest IBM award, in 2010) for the pioneering work on event processing. He was recognized as a Distinguished Speaker by ACM.

    Tutorial: Developing Distributed Systems with Multitier Programming


    Developing distributed systems is a complex task that requires programming different peers, often using several languages on different platforms, writing communication code, and handling data serialization and conversion.

    We show how the multitier programming paradigm can alleviate these issues by supporting a development model where all peers in the system can be written in the same language and coexist in the same compilation units, communication code is automatically inserted by the compiler, and the language abstracts over data conversion and serialization. We present multitier programming abstractions, demonstrate their applicability step by step through the development of small applications, and discuss larger case studies on distributed stream processing systems such as Apache Flink and Apache Gearpump.
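The core idea, that code for several peers lives in one compilation unit and the compiler generates the cross-peer communication, can be mimicked in a language-neutral toy sketch (ScalaLoci itself is a Scala language; everything below, including the `on` placement decorator and the simulated network, is hypothetical and only illustrates the concept):

```python
# Toy sketch of placement annotations: each function is tagged with the
# peer it runs on, and cross-peer calls go through a (here simulated)
# network layer that a real multitier compiler would generate.
NETWORK_LOG = []

def on(peer):
    """Mark a function as placed on the given peer."""
    def wrap(fn):
        fn.peer = peer
        return fn
    return wrap

def remote_call(caller_peer, fn, *args):
    """Invoke fn; if it is placed on another peer, record the implied message."""
    if fn.peer != caller_peer:
        NETWORK_LOG.append((caller_peer, fn.peer, fn.__name__))
    return fn(*args)

@on("server")
def aggregate(values):
    return sum(values)

@on("client")
def show(total):
    return f"total = {total}"

# Client-side code transparently using server-placed logic:
total = remote_call("client", aggregate, [1, 2, 3])  # crosses the network
msg = remote_call("client", show, total)             # local call, no message
```

In a real multitier language the placement lives in the type system and the remote-call plumbing, serialization included, is inserted by the compiler rather than written by hand.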


    Pascal Weisenburger: Pascal is a PhD student at the Technical University of Darmstadt. His research interests focus on programming language design, in particular multitier programming, reactive programming, and event-based systems. He is the main developer of the ScalaLoci multitier programming language. His recent publications appear at the OOPSLA and ECOOP conferences.

    Guido Salvaneschi: Guido is an assistant professor at the Technical University of Darmstadt. His current research interests focus on programming language design of reactive applications, such as event-based languages, dataflow languages and functional reactive programming. His work includes the integration of different paradigms, incrementality and distribution. He obtained his PhD from Dipartimento di Elettronica e Informazione at Politecnico di Milano, under the supervision of Prof. Carlo Ghezzi with a dissertation on context-oriented programming and language-level techniques for adaptive software. He has co-organized the REBLS workshop at SPLASH for several years and has been program chair of the (Programming) conference. Some of Guido’s recent publications appear in OOPSLA, PLDI, ECOOP, ICFP, FSE, ICSE and DEBS.

    Important Dates

    Abstract submission for research track: March 8th, 2019 (extended from February 19th)
    Research and industry paper submission: March 8th, 2019 (extended from February 26th)
    Tutorial proposal submission: April 5th, 2019 (extended from March 22nd)
    Grand challenge solution submission: April 22nd, 2019 (extended from April 7th)
    Author notification, research and industry track: April 19th, 2019 (extended from April 9th)
    Poster, demo & doctoral symposium submission: May 3rd, 2019 (extended from April 22nd)
    Early registration: May 31st, 2019