IEEE Streaming Workshop / Keynote Announcement

We are delighted to announce Pablo Estrada from Google and Fabian Hueske from Snowflake as the two keynote speakers of our seventh Workshop on Real-time Stream Analytics, Stream Mining, CER/CEP & Stream Data Management in Big Data.

The workshop is held jointly with the 2022 IEEE International Conference On Big Data, and will take place in Osaka, Japan, from the 17th to the 20th of December, 2022. It will bring together major players to discuss, explore and refine new opportunities and use cases.

We will welcome two keynote speakers:

  • Pablo Estrada is a PMC member of Apache Beam and has been a software engineer on Google Cloud Dataflow for the past six years. Batch and stream processing systems have coalesced around SQL and fluent, unified APIs, the result of a long history of experimentation. In his talk, Pablo will connect the dots and explore how these systems led to the Dataflow model and Apache Beam.
  • Fabian Hueske works as a software engineer on streaming at Snowflake. He is a PMC member of Apache Flink and a co-founder of data Artisans (now Ververica). Over the past two years, tool vendors (notably Snowflake) have partnered with INCITS to work on SQL streaming extensions. In his talk, Fabian will present this project and the next steps for extending SQL to support streaming data.

Ready To Join Us?

We invite you to submit your work and join us to foster cooperation and exchange between researchers and practitioners.

The CFP and the important dates are summarised on the workshop website, where you can also submit your work.
