
Activity Conferences

27-02-2020

Thirty-Fourth AAAI Conference On Artificial Intelligence: A Summary

Two weeks ago, our young research engineers Hounaida Zemzem and Rania Saidi were in New York for the Thirty-Fourth AAAI Conference On Artificial Intelligence. The conference promotes research in artificial intelligence and fosters scientific exchange between researchers, practitioners, scientists, students, and engineers in AI and its affiliated disciplines. Rania and Hounaida attended dozens of technical paper presentations, workshops, and tutorials on their favourite research areas: reinforcement learning for Hounaida and graph theory for Rania. What were the big trends and their favourite talks? Let’s find out with them!

 

The Big Trends:

Rania says: “The conference focused mostly on advanced AI topics such as graph theory, NLP, Online Learning, Neural Nets Theory and Knowledge Representation. It also looked into real-world applications such as online advertising, email marketing, health care, recommender systems, etc.”

Hounaida adds: “I thought it was very successful given the large number of attendees as well as the quality of the accepted papers (7,737 submissions were reviewed and 1,591 accepted). The talks showed the power of AI to tackle problems or improve situations in various domains.”

 

Favourite talks and tutorials

Hounaida explains: “Several of the sessions I attended were very insightful. My favourite talk was given by Mohammad Ghavamzadeh, an AI researcher at Facebook. He gave a tutorial on Exploration-Exploitation in Reinforcement Learning. The tutorial by William Yeoh, assistant professor at Washington University in St. Louis, was also amazing. He talked about Multi-Agent Distributed Constrained Optimization. Both their talks were clear and funny.”

 

Rania’s feedback? “One of my favourite talks was given by Yolanda Gil, the president of the Association for the Advancement of Artificial Intelligence (AAAI). She gave a personal perspective on AI and its watershed moments, demonstrated the utility of AI in addressing future challenges, and insisted on the fact that AI is now necessary to science. I also learned a lot about the state of the art in graph theory. The tutorial given by Yao Ma, Wei Jin, Lingfei Wu and Tengfei Ma was really interesting. They explained Graph Neural Networks: Models and Applications. Finally, the tutorial presented by Chengxi Zang and Fei Wang about Differential Deep Learning on Graphs and its Applications was excellent. Both were really inspiring and generated a lot of ideas about how to continue to expand my research in the field!”

 

Favourite papers

A personal selection by Rania & Hounaida of interesting papers to check out:

For Hounaida:

 

For Rania:

 

Final thoughts

After attending their first conference as Euranovians, what will Rania & Hounaida remember? Hounaida concludes: “Going to New York for the AAAI-20 Conference as one of the ENX data scientists was an amazing experience. I met many brilliant and sharp international experts in various fields. I enjoyed the one-week talks with so many special events, offline discussions, and the night strolls!”

Activity Conferences

20-02-2020

Schloss Dagstuhl: where computer science meets

Which direction are stream and complex event processing going to take? Last week, the world’s best-known international researchers met in Schloss Dagstuhl, Germany, to present and discuss their research. Among those present were Avigdor Gal, Professor at the Israel Institute of Technology, Alessandro Margara, Assistant Professor at the Polytechnic University of Milan, and Till Rohrmann, engineering lead at Ververica.

Invited to talk about the requirements and needs from the industry, our R&D director Sabri Skhiri explains: “The seminar brought together world-class computer scientists and practitioners working on complex event recognition, distributed systems, databases, stream reasoning and artificial intelligence. Our objective was to disseminate the recent foundational results in each of these isolated fields among all participants, to identify the open problems that need to be resolved, and to establish new research collaborations among these fields”.

What were the big trends and takeaways gathered by those brilliant minds? Let’s find out with Sabri!

 

 

The Big Trends

This seminar is a bit particular as it does not show any single trend but rather gives a picture of all the communities working on CER in one way or another. I was fascinated by the diversity of researchers. I did not expect to see such a rich variety of fields: knowledge representation, spatial reasoning, logic-based reasoning, data management, learning-based approaches, event-driven processing, process mining, database theory, stream mining, and more. In my view, the composite event recognition models that are best at recognising complex events include:

  1. Data flow model
  2. Ontology-based and reasoning model
  3. Symbolic reasoning model
  4. Automata-based model

We also identified common challenges across these models and communities. The three priority topic areas we identified are:

  1. Expressivity: composability & hierarchies
  2. Evaluation strategy, parallelization and distribution
  3. Uncertainty management

 

Favourite Talk

Kurt Rothermel from TU Stuttgart – Time-sensitive Complex Event Processing

My first reaction to load shedding was: “It is useless since customers do not want to lose any event; that is why so much effort is spent today on exactly-once semantics…“. However, there is a trend today in stream processing towards trading off between cost, latency, and correctness. Tyler Akidau described this challenge as a choice between one of three propositions: fast and correct, cheap and correct, or fast and cheap. Tyler was talking about streaming, but the rule applies in the same way in a CEP context. The load shedding strategy falls squarely into the third proposition. In this perspective, Kurt’s work is highly relevant.
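To make the idea concrete (this is a generic illustration, not the technique from Kurt’s talk): a load shedder is essentially an operator that drops events once the input exceeds the rate the downstream pipeline can sustain, trading result completeness for bounded latency and cost. A minimal Java sketch, with all names and the shedding policy invented for illustration:

```java
import java.util.function.Consumer;

/** Minimal budget-based load shedder: forwards at most maxEventsPerSecond
 *  events downstream and silently drops the excess. Illustrative only; real
 *  CEP shedders use smarter, utility- or state-aware policies. */
public class SimpleLoadShedder<E> {
    private final Consumer<E> downstream;
    private final long maxEventsPerSecond;
    private long windowStart = System.nanoTime();
    private long forwardedInWindow = 0;

    public SimpleLoadShedder(Consumer<E> downstream, long maxEventsPerSecond) {
        this.downstream = downstream;
        this.maxEventsPerSecond = maxEventsPerSecond;
    }

    public void onEvent(E event) {
        long now = System.nanoTime();
        if (now - windowStart >= 1_000_000_000L) {   // start a new 1-second window
            windowStart = now;
            forwardedInWindow = 0;
        }
        if (forwardedInWindow < maxEventsPerSecond) {
            forwardedInWindow++;
            downstream.accept(event);                // within budget: forward
        }
        // else: the event is shed to keep latency and cost bounded,
        // at the price of a less complete (less correct) result downstream.
    }
}
```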

 

Favourite Tutorial

Jacopo Urbani & Fredrik Heintz – Stream Reasoning

Concretely, stream reasoning is incremental reasoning over rapidly changing information. The tutorial opened new perspectives on stream processing for me. It tried to answer a very interesting question: how can you provide reasoning about context from streams of data? I definitely come from the database and event-based systems communities and I did not know at all that stream reasoning was so mature. This community has been evolving from a continuous version of SPARQL to a complete distributed stream reasoning semantics. It is interesting to see that the work we have done on the LEAD algebra and semantics is deeply inspired by this community. However, we have never used any reasoning logic on top of LEAD. After a few hours of the tutorial, I realised that (1) reasoning can be used for query rewriting and optimisation and (2) it is worth evaluating at least BigSR, the LARS implementation on Flink.

 

Avigdor Gal & Ruben Mayer – Distributed and Event-Based Systems

Avigdor is a kind of pop star for the stream processing and distributed systems community, or at least for me! The papers he published about a probabilistic CEP engine with late arrival and event uncertainty were visionary.

The speakers started by explaining the basics of stream processing then went deeper into the event recognition language and architecture. They detailed pub/sub applied to event recognition and explained the data flow model, which consists of a single unified data processing model where the stream and batch paradigms are the same.  This last part was based on Tyler Akidau’s paper.

A second part of the talk focused on elasticity on streams. Stream fission puts operators among different categories:

  • Firstly, key-based operators, i.e. a group-by operation (as in SQL)
  • Secondly, window-based operators, which make it possible to split processing that needs multiple event types correlated with different keys within the same operator
  • Finally, pane-based operators, which enable a split-merge strategy where you distribute the processing and merge the results.

Interestingly, Avigdor presented his work about late-arrival processing from a probabilistic viewpoint and not from the watermark perspective. Usually, modern stream processing frameworks use watermarks in order to take into account events that arrive later. Avigdor presented a probabilistic approach to this issue.

 

What are late-arrival events?

Imagine we want to count the number of cars entering a road segment every three minutes: we have a “tumbling window” of 3 minutes. If an event (i.e. a car) arrives at 2’55’’ in the window but is stuck somewhere in the network for 6 seconds, it is called a late-arrival event. The processing time (the time at which the CEP engine processes the event) is delayed compared to the event time (the time at which the event really occurred).
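In Flink terms, the three-minute car count with a few seconds of tolerance for late events could be sketched roughly as follows. This is a hedged illustration, not code from the seminar: the event class, source and the 6-second bound are assumptions, and the WatermarkStrategy API shown here is the one from recent Flink releases.

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class CarCountJob {

    // Hypothetical event: one car detection, carrying its own event-time timestamp.
    public static class CarEvent {
        public String segmentId;
        public long timestampMillis;
        public CarEvent() {}
        public CarEvent(String segmentId, long timestampMillis) {
            this.segmentId = segmentId;
            this.timestampMillis = timestampMillis;
        }
    }

    // Counts the cars that fall into each window.
    public static class CountCars implements AggregateFunction<CarEvent, Long, Long> {
        @Override public Long createAccumulator() { return 0L; }
        @Override public Long add(CarEvent value, Long acc) { return acc + 1; }
        @Override public Long getResult(Long acc) { return acc; }
        @Override public Long merge(Long a, Long b) { return a + b; }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for a real source (Kafka, road sensors, ...).
        DataStream<CarEvent> cars = env.fromElements(new CarEvent("segment-42", 0L));

        cars
            // Event time with a 6-second tolerance for out-of-order (late) arrivals.
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<CarEvent>forBoundedOutOfOrderness(Duration.ofSeconds(6))
                    .withTimestampAssigner((car, ts) -> car.timestampMillis))
            .keyBy(car -> car.segmentId)
            // Tumbling 3-minute windows driven by event time, not processing time.
            .window(TumblingEventTimeWindows.of(Time.minutes(3)))
            .aggregate(new CountCars())
            .print();

        env.execute("cars-per-road-segment");
    }
}
```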

Note that for CEP, there is clearly a trade-off between timeliness and accuracy: the slack time will increase the delay before delivering your result but will also increase your accuracy. There is always a trade-off between cost, latency and correctness, and usually, you can only pick two of the three.

Fun fact: if you need to explain what event time and processing time are to your mother (yeah, don’t underestimate the power of this kind of discussion at Christmas dinner), the best way is to use the Star Wars analogy. From an event-time perspective (the time at which the story really happened), you should follow episodes 1, 2, 3, 4, 5, 6, 7, 8, 9. But if you take the processing time (the time at which we received the episodes), it is 4, 5, 6, 1, 2, 3, 7, 8, 9. Isn’t it great?!

 

Final Thoughts

CER has been explored from many viewpoints. However, never in research history had there been a meeting gathering representatives of all these communities. This was the objective of this seminar. Having all these people in a castle in the middle of nowhere was a blast! I had very passionate discussions during meals, but also at night in the library, with the most brilliant brains in stream processing and CEP. I also had some fun discussions comparing Star Trek Discovery and Picard! Finally, the most important things I will remember after this seminar… are the endless ping-pong games with Till Rohrmann and Alessandro Margara :-).

Activity Conferences

09-01-2020

Fourth Workshop on Real-Time and Stream Analytics in Big Data: key takeaways

Last December, Eura Nova’s research centre held the fourth workshop on real-time and stream analytics in big data at the 2019 IEEE Conference on Big Data in Los Angeles. The workshop brought together leading players including Confluent, Apache Pulsar, the University of Virginia and Télécom ParisTech, as well as 8 renowned speakers from 6 different countries. We received more than 30 applications and we are proud to have hosted such interesting presentations of papers in stream mining, IoT, and industry 4.0.

The workshop was a real success with many interesting questions and comments. If you could not attend, our R&D engineer Syrine Ferjaoui brought back important elements from the presentations for you.

 

First keynote speaker:

First of all, the workshop started with the keynote of Matteo Merli, PMC member at Apache Pulsar. His talk “Messaging and Streaming” explained how Pulsar can be a unified infrastructure that supports messaging and streaming.

Matteo introduced messaging as events that are being created and streaming as analysing events that just happened. These are two different processing concepts but they need a single infrastructure. He then explained the architecture view of Pulsar, which has separate layers between the brokers and the bookies (BookKeeper instances that handle persistent storage of messages). This means that brokers and bookies can be added independently, traffic can be shifted very quickly across brokers, and new bookies will ramp up on traffic quickly. This segmented distribution makes the architecture of Pulsar more flexible and dynamic.

Pulsar has other interesting features such as durability, low latency, high throughput, high availability, a unified messaging model, high scalability, native computing, and more. The roadmap includes working on a Pulsar storage API to allow direct access to data stored in Pulsar and to retrieve and process data more efficiently. They are also working on higher-level messaging features.
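To make the unified messaging/streaming point concrete, here is a hedged sketch using the Pulsar Java client (the broker URL, topic and subscription names are made up): a consumer with a subscription covers the messaging pattern, while a reader attached to a position in the topic gives the streaming-style replay that a processing engine could use to bootstrap its state.

```java
import org.apache.pulsar.client.api.*;

public class PulsarQuickstart {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // assumed local broker
                .build();

        // Messaging: publish events to a topic.
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://public/default/orders")
                .create();
        producer.send("order-created".getBytes());

        // Messaging: a shared subscription with per-message acknowledgement.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/orders")
                .subscriptionName("billing")
                .subscriptionType(SubscriptionType.Shared)
                .subscribe();
        Message<byte[]> msg = consumer.receive();
        consumer.acknowledge(msg);

        // Streaming: a reader replays the topic from a chosen position,
        // which is how a stream processor would rebuild or bootstrap state.
        Reader<byte[]> reader = client.newReader()
                .topic("persistent://public/default/orders")
                .startMessageId(MessageId.earliest)
                .create();
        while (reader.hasMessageAvailable()) {
            Message<byte[]> event = reader.readNext();
            System.out.println(new String(event.getValue()));
        }

        client.close();
    }
}
```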

 

Second keynote speaker:

The second keynote was given by John Roesler, a Kafka committer at Confluent. He talked about Kafka Streams and the evolution of streaming paradigms.

To design software, we, developers, used to separate the application logic from the database. To scale the database capacity, we then started to use a search index to do ETL jobs and query the database in a fast and optimal way. However, this created bugs in the software, added data consistency issues, and created more complexity in the system. Later, we started to use HDFS for a more flexible design. While enabling replication and distributed storage, this solution added more latency and supported batch processing only. It did not meet the needs of real-time processing use cases.

At this point, streaming helped a lot. The next step was to add a streaming platform that reads from sources, does some computation, and sinks the result somewhere else. The Kafka Streams design is a set of stateful lambda functions, which makes it a good fit for a microservices architecture. With Kafka Streams’ new updates, the app logic is linked to a relational database with ACID guarantees.
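As a rough illustration of that design (the topic names, serdes and aggregation are placeholder assumptions, not from the keynote), a Kafka Streams application is a small set of stateful functions wired between Kafka topics, with the state kept in local stores backed by changelog topics:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class OrderCountsApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-counts");      // also prefixes state stores
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Source: read the "orders" topic; key = customer id, value = order payload.
        KStream<String, String> orders = builder.stream("orders");

        // Stateful step: count orders per customer; the state lives in a local
        // store that is backed by a Kafka changelog topic for fault tolerance.
        KTable<String, Long> ordersPerCustomer = orders.groupByKey().count();

        // Sink: write the continuously updated counts to another topic.
        ordersPerCustomer.toStream()
                .to("orders-per-customer", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```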

Finally, John Roesler considers that “software is a fractal”, a never-ending pattern: a software architecture is complex and even when we zoom into a single component, it is still complex. But for the Kafka Streams’ design, when we zoom out, it looks like a set of services interacting and connected to each other and this simplifies the aforementioned designs.

John concluded by mentioning open problems that can be dealt with in stream processing, including semantics, observability, operability, and maintainability.

 

Workshop Invited Speakers:

After the keynotes, 8 selected papers were presented, covering mainly these 6 topics: (1) Stream Processing for IoT, (2) Serverless and HPC (High Performance Computing), (3) Collaborative Streaming, (4) Stream Mining, (5) Image Mining and (6) Real-time Machine Learning. Some papers are not yet available, as they will be published in the proceedings of the IEEE Big Data Conference. In the meantime, do not hesitate to contact our R&D department at research@euranova.eu to discuss how you can leverage stream processing in your projects.

Sören and Wilhelm are engineers in the Software Engineering Group from Kiel University. They propose a stream processing architecture which allows for aggregating sensors in hierarchical groups, supports multiple hierarchies in parallel, provides reconfiguration at runtime, and preserves the scalability and reliability qualities of streaming.

Andre Luckow, head of Blockchain and Emerging Technologies at BMW Group, and Shantenu Jha, associate professor at Rutgers University, presented StreamInsight, which provides insight into the performance of streaming applications and infrastructure, their selection, configuration, and scaling behaviour.

The paper was written by Tobias Grubenmann, researcher at The University of Hong Kong, in collaboration with Daniele Dell’Aglio and Abraham Bernstein, researchers at the University of Zurich. They present Collaborative Stream Processing (CSP), a model where the costs, which are set exogenously by providers, are shared between multiple consumers, the collaborators. For this, they identify the important requirements for CSP to establish trust between the collaborators and propose a CSP algorithm adhering to these requirements.

  • Kennard-Stone Balance Algorithm for Time-series Big Data Stream Mining (Tengyue Li, Simon Fong, and Raymond Wong)

Tengyue Li and Simon Fong (researcher and associate professor at the University of Macau, China) and Raymond Wong (associate professor at UNSW Sydney) worked on the Kennard-Stone Balance algorithm used as a new data conversion method. Training a prediction model effectively using big data streams poses certain challenges in machine learning. In this paper, the authors apply the Kennard-Stone algorithm on time-series to extract a meaningful representation of big data streams, which improves the performance of a machine learning model.
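For readers unfamiliar with it, the classic Kennard-Stone procedure greedily picks the samples that best cover the data space: seed with the two most distant points, then repeatedly add the point whose distance to its nearest selected point is largest. The sketch below shows only this base algorithm; the time-series “Balance” variant from the paper is not reproduced here.

```java
import java.util.ArrayList;
import java.util.List;

/** Classic Kennard-Stone sample selection (max-min criterion). Illustrative
 *  sketch of the base algorithm only, not the paper's Balance variant. */
public class KennardStone {

    public static List<Integer> select(double[][] data, int k) {
        int n = data.length;
        List<Integer> selected = new ArrayList<>();

        // Seed: the two points that are farthest apart.
        int a = 0, b = 1;
        double best = -1;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                double d = dist(data[i], data[j]);
                if (d > best) { best = d; a = i; b = j; }
            }
        }
        selected.add(a);
        selected.add(b);

        // Greedy step: repeatedly add the point whose distance to its nearest
        // already-selected point is largest, so the selection covers the space.
        while (selected.size() < Math.min(k, n)) {
            int bestIdx = -1;
            double bestMinDist = -1;
            for (int i = 0; i < n; i++) {
                if (selected.contains(i)) continue;
                double minDist = Double.MAX_VALUE;
                for (int s : selected) {
                    minDist = Math.min(minDist, dist(data[i], data[s]));
                }
                if (minDist > bestMinDist) { bestMinDist = minDist; bestIdx = i; }
            }
            selected.add(bestIdx);
        }
        return selected;
    }

    private static double dist(double[] x, double[] y) {
        double sum = 0;
        for (int i = 0; i < x.length; i++) sum += (x[i] - y[i]) * (x[i] - y[i]);
        return Math.sqrt(sum);
    }
}
```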

 

  • Assessing the Effects of TV Ad Events on Digital Search: On the Selection of Outcome Measures (Shawndra Hill, Anthony Colas, H. Andrew Schwartz, and Gordon Burtch)

Shawndra Hill (Microsoft), Anthony Colas (University of Florida), H. Andrew Schwartz (State University of New York at Stony Brook) and Gordon Burtch (University of Minnesota) explained their work on the interactions between TV content and online behaviours such as response to digital advertising. They developed AdMiner, a tool that can track online activity around a brand and provide actionable insights into ad campaigns.

 

Austin Harris, Jose Stovall, and Mina Sartipi (researchers and CUIP director at the University of Tennessee at Chattanooga) have helped to create Chattanooga’s smart corridor, used to test new technologies and generate data-driven outcomes. In their talk, they presented the corridor, used as a test bed for research in smart city developments in a real-world environment. The wireless communication infrastructure and network of sensors in combination with data analytics provide a means of monitoring and controlling city resources and infrastructure in real time.

 

Sebastian Trinks and Carsten Felde (TU Bergakademie Freiberg) presented how image mining can help avoid errors and low quality in printed prototypes in real time. This can result in saving resources and increasing efficiency when developing new products.

 

This year, IEEE Big Data held the Real-time Machine Learning Competition on Data Streams. As the competition is focused on streaming, its online platform required a specific infrastructure that meets data stream mining requirements. Dihia Boulegane is a Ph.D. student at Télécom ParisTech working in collaboration with Orange Labs on machine learning for IoT networks monitoring. She was in charge of implementing the streaming engine of the dedicated platform of the competition. Dihia explained its components, the technologies used, and the challenges met to build the platform. At the end, the platform was able to provide multiple streams to multiple users, to receive multiple streams, to process them and to provide the leader board and live results.

 

Special thanks to our keynote guests, Matteo Merli and John Roesler, and all the attendees and speakers! We are looking forward to an even more successful workshop in the coming edition of the IEEE Big Data Conference. Stay tuned for paper submission dates!

 

Activity Conferences

03-01-2020

IEEE Big Data 2019: a summary

At the beginning of the month, our R&D director Sabri Skhiri and our R&D engineer Syrine Ferjaoui travelled to Los Angeles to attend IEEE Big Data Conference. It is one of the most influential academic gatherings in distributed machine learning. This year, it featured 879 authors, shortlisted from 2009 applicants. They came from 28 countries and presented 210 papers. Back in Belgium, Sabri and Syrine give you their opinion on the event itself and the important elements from the keynotes, the tutorials, the workshops and the interesting papers.

 

The Big Trends

Sabri says: “The main trends were deep learning, NLP, privacy-preserving approaches, GAN, graph mining and stream mining. In my view, the level of the papers was quite good. Authors are becoming ever more skilled in data science, maths and algorithms. This goes to show that to be a good data scientist, you need an extensive set of advanced skills. Interestingly, there was almost nothing about distributed computing! This is a big move compared to the previous editions. The only presentations that had something to do with distributed systems were about optimisation strategies, an area similar to what our ECCO team researches. The Big Data Conference focuses on data science; it does not really look into its scalability.  Distributed computing topics tend to be dealt with at conferences like DEBS, VLDB, USENIX, SIGMOD, etc. As a result, this conference is an amazing place to see hundreds of data science use cases with, most of the time, an interesting contribution.”

 

The Keynotes

 

The keynotes were focused on data science as well. We even heard the term “Big Data Science”.

Keynote 1: Responsible Data Science by Lise Getoor – Professor at UC Santa Cruz

Syrine says: “The first keynote was my favourite. Lise started by comparing machine learning to a black box. The goal was to unpack the box and invite people to use data science and to use it wisely. To support ethical decision-making, we should move away from maximising AI systems’ autonomy and move toward human-centric systems. To do this, we should make sure that human-centric systems have three qualities: (1) be knowledge-based, (2) be data-driven, and (3) support human values. Achieving responsible data science requires both machine learning and ethics.”

 

Keynote 2: DataCommons “Google for Data” by Ramanathan Guha – Google

Guha presented DataCommons, a project started by Google to combine data from different open sources. Syrine explains: “Google’s DataCommons project allows users to pretend that the Web is one website, enabling developers to pretend all this data is in one database. The long-term vision of Google is to aggregate all data from publicly available sources (Medicare, Wikidata, sequence data, Landsat, CDC, Census…) into a single Open Knowledge Graph. The goal is to reduce or eliminate the data download-clean-store process. Instead, users can access and use already cleaned data in the cloud. Data can be public or private (internet & intranet). This will avoid repeated data wrangling and ease the burden of data storage, indexing, etc.”

 

The Tutorials

This year, IEEE Big Data held nine tutorials. Our R&D director explains: “At this type of event, tutorials are always a good way to learn a complete state of the art in a couple of hours. I particularly appreciated the tutorial on “Taming Unstructured Big Data: Automated Information Extraction for Massive Text” by the team of the famous Jiawei Han (he is a kind of pop star in data mining and the father of Graph Cube). I found out that many papers about named entity relations were published in the past two years. The idea is to be able to extract supervised, semi-supervised, and unsupervised relations between entities: for instance, discovering that “Trump” is “President of” “USA”. They also propose new approaches to integrate knowledge bases such as DBPedia or YAGO to infer new unknown relations from a corpus. This is just amazing!”

 

Syrine adds: “The tutorial on NewSQL principles, systems, and current trends was interesting as it explained why we should consider using NoSQL/NewSQL to deal with data interconnections and very high scalability. After attending this tutorial, I was motivated to order this book about Principles of Distributed Database Systems. For fans of deep learning, the tutorial “Deep Learning on Big Data with Multi-Node GPU Jobs” covers a lot about large-scale GPU-based deep-learning systems. If you missed the conference, all resources can be found on this link.”

 

The Workshops

The EURA NOVA research centre organised the fourth workshop on Real-time and Stream Analytics in Big Data at the 2019 IEEE Conference on Big Data. We were really happy to welcome Matteo Merli from Apache Pulsar and John Roesler from Confluent as keynote speakers. Thank you to them and to all the attendees and speakers! They had a great time, with captivating talks and a lot of interesting questions and comments. The summary of the event will soon be available on our website. The slides of the keynotes are available here:

 

 

Favourite Papers

A personal selection of interesting papers:

The paper tackles a really interesting problem faced by a lot of data scientists. Introducing active learning is a cool idea, and so is the way they used a mathematical trick to make their approach feasible.

Su Won Bae, from Mobilewalla, presented how they can define a complete customer acquisition model by mixing their data with their customer’s data (in this case, a worldwide leader in food delivery). Sabri says: “The quality of data science models highly depends on the data they can train on. I am convinced we will go in the same direction as Mobilewalla in the future to have richer models. However, mixing data must be done with care as it may raise some privacy issues; the purpose has to have a legal ground.”

The speaker presented MorphMine, a method for unsupervised morpheme segmentation. It generates morpheme candidates that are then filtered using entropy to select the best morphemes from a corpus. These morphemes can be used to greatly improve word embedding models and downstream machine learning tasks.

 

 

Conferences

03-01-2020

GraphOpt: Framework for Automatic Parameters Tuning of Graph Processing Frameworks

Finding the optimal configuration of a black-box system is a difficult problem that requires a lot of time and human labor. Big data processing frameworks are among the increasingly popular systems whose tuning is complex and time-consuming. The challenge of automatically finding the optimal parameters of big data frameworks has attracted a lot of research in recent years. Some of the studies focused on optimizing specific frameworks such as distributed stream processing, or on finding the best cloud configurations, while others proposed general services for optimizing any black-box system. In this paper, we introduce a new use case in the domain of automatic parameter tuning: optimizing the parameters of distributed graph processing frameworks. This task is notably difficult given the particular challenges of distributed graph processing, which include graph partitioning and the iterative nature of graph algorithms.

To address this challenge, we designed and implemented GraphOpt: an efficient and scalable black-box optimization framework that automatically tunes distributed graph processing frameworks. GraphOpt implements state-of-the-art optimization algorithms and introduces a new hill-climbing-based search algorithm. These algorithms are used to optimize the performance of two major graph processing frameworks: Giraph and GraphX. Extensive experiments were run on GraphOpt using multiple graph benchmarks to evaluate its performance and show that it provides up to 47.8% improvement compared to random search and an average improvement of up to 5.7%.
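GraphOpt’s own search algorithms are described in the paper; purely as a generic illustration of what hill-climbing-based parameter tuning looks like, the sketch below searches an integer configuration space by repeatedly nudging one parameter and keeping the move only if the measured runtime improves. The parameter vector and cost function are placeholders, not GraphOpt’s actual interface.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.function.ToDoubleFunction;

/** Generic hill-climbing search over an integer parameter vector.
 *  A textbook illustration of the idea, not GraphOpt's algorithm. */
public class HillClimbingTuner {

    public static int[] tune(int[] start, int[] min, int[] max,
                             ToDoubleFunction<int[]> runtimeSeconds, int maxEvaluations) {
        Random rnd = new Random(42);
        int[] current = start.clone();
        double currentCost = runtimeSeconds.applyAsDouble(current);

        for (int eval = 1; eval < maxEvaluations; eval++) {
            // Propose a neighbour: nudge one randomly chosen parameter up or down.
            int[] neighbour = current.clone();
            int p = rnd.nextInt(neighbour.length);
            int step = rnd.nextBoolean() ? 1 : -1;
            neighbour[p] = Math.max(min[p], Math.min(max[p], neighbour[p] + step));

            // Each evaluation means actually running the graph job with that
            // configuration and measuring its runtime (the expensive part).
            double cost = runtimeSeconds.applyAsDouble(neighbour);
            if (cost < currentCost) {          // keep the move only if it improves
                current = neighbour;
                currentCost = cost;
            }
        }
        System.out.println("best config " + Arrays.toString(current) + " -> " + currentCost + "s");
        return current;
    }
}
```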

 

The paper was published at the third IEEE International Workshop on Benchmarking, Performance Tuning and Optimization for Big Data Applications (BPOD 2019).

Do not hesitate to contact our R&D department at research@euranova.eu to discuss how you can leverage graph processing in your projects.

Muaz Twaty, Amine Ghrab, Sabri Skhiri: GraphOpt: a Framework for Automatic Parameters Tuning of Graph Processing Frameworks. 2019 IEEE International Conference on Big Data (Big Data) Workshops, Los Angeles, CA, USA.

Download file (.pdf)

Activity Conferences

24-10-2019

Flink Forward: The Key Takeaways

In early October 2019, six EURA NOVA engineers travelled to Berlin to attend the Flink Forward Conference, dedicated to Apache Flink users and stream processing communities.

In this article, they will give you their opinion about Ververica’s main announcement, the impact of Ververica’s acquisition by Alibaba, the big trends, and a selection of their favourite talks.

 

Alibaba!

This is the first Flink Forward conference since the acquisition of Ververica (formerly known as data Artisans) by Alibaba, which has been one of the largest users of Flink and its second-largest contributor for years. Our R&D director Sabri Skhiri says: “The only significant impact of this acquisition on the conference is that the venue is now the Berlin Business Center instead of the Kulturbrauerei. There, we could see that the Apache Flink user community has grown significantly, as have their commits on Flink. This edition was a bit more business and enterprise-oriented than previous ones, although it still had its technical DNA and a lot of technical talks. All in all, this was a very good mix. Alibaba folks are deeply committed to open source and to creating technology impact. We saw a lot of activity from them, such as the integration of the Blink SQL runner, the Hive integration and the new scheduling model. In summary, a great event.”

 

First Keynote Announcement

Keynote: Stream Processing and Applications in the Modern Age (Stephan Ewen)

During the first keynote, Ververica took the opportunity to announce the launch of Stateful Functions (statefun.io), an open-source framework built on top of Flink to run stateful serverless functions. It bridges the gap between Function as a Service and stream processing.

Sabri says: “Last year, they announced their streaming ledger that brings ACID transactions between states to stream processing applications. This year, they announced the launch of Stateful Functions, a framework that reduces the complexity of building and orchestrating stateful applications at scale. In the streaming world, this announcement does not change a lot of things. However, in the microservice community, this opens new doors in terms of design patterns, especially in the way data feeding and stateful operations can be designed more flexibly.”

You can find the video of the presentation here.

 

The Big Trends

1. Unified batch and streaming

A significant trend of this edition is the “Unified Stream and Batch” motto. Our R&D engineer Syrine Ferjaoui says: “Flink currently features different APIs, the DataSet API for batch processing and the DataStream API for stream processing. In addition, the Table API is already a unified API on top of both (DataSet and DataStream) with declarative-style programming. Now, they are working on a solution to truly unify the batch and streaming APIs.”

Sabri adds: “In Flink 1.9, they released the State Processor API, with which a state created in batch can be used in a stream application – interesting for bootstrapping/backfilling states. But the community is going further by proposing in Flink 2.0 a unique Data API that will merge DataSet and DataStream while still taking advantage of the batch properties to optimise the execution.

Every talk was exploring in one way or another how this unification can be pushed forward. For instance, in the Pulsar talk, they were thinking about using Pulsar as a back end to transparently bootstrap a state and then switch to the stream, using (1) Pulsar’s capability in terms of segment storage and (2) the unified data stream API in Flink.”
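To see what “unified” already means in practice, the Table API lets the same declarative pipeline run in either mode; only the environment settings change. A hedged sketch (the table and columns are made up, and the exact builder methods vary between Flink versions):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class UnifiedPipeline {
    public static void main(String[] args) {
        // Swap inStreamingMode() for inBatchMode() and the query below is unchanged.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .inStreamingMode()
                .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // "orders" would be registered from a connector (Kafka for streaming,
        // files for batch); the declarative query itself is mode-agnostic.
        Table revenuePerCustomer = tEnv.sqlQuery(
                "SELECT customer_id, SUM(amount) AS revenue " +
                "FROM orders GROUP BY customer_id");

        revenuePerCustomer.printSchema();
    }
}
```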

 

2. “Enterprise-grade” Flink

Flink is clearly moving toward an “enterprise-grade” technology. Sabri says: “The first signal is that Cloudera adopted Apache Flink into its Data Platform. Also, AWS Kinesis Data Analytics now integrates Flink. Adoption by such big players goes to show that Flink is well on the way to gaining enterprise-grade support. The second signal is the release of the Ververica Platform, which highly facilitates enterprise-grade operations. Thirdly, the integration of the Hive Metastore with the pluggable catalogue architecture is a significant step towards better governance and metadata management. Finally, there were many talks about lowering the barrier to deploying Flink in production. The topics included APIs, configuration, memory management, K8S operators, etc.”

 

3. The ML path

Finally, regarding ML/AI, there is still a lot of work to do to close the gap with the Spark ecosystem. However, the Alibaba folks are working hard on this topic and we can already see the first results. Sabri says: “The refactoring of the Flink ML interface to work on the Flink Table API is excellent. There is an excellent vision of integrating Flink as a data prep engine for ML and as a serving layer, and the roadmap looks great.”

 

Interesting talks

A personal selection by Charles & Christophe of interesting talks to check out:

For Charles, our data architect:

  • Aljoscha Krettek & Timo Walther, respectively a co-founder at Ververica and a PMC member of Apache Flink, work on the Flink APIs. They gave a summary of recent contributions to Flink’s Table & SQL APIs. It was a very good overview of what is going on in terms of refactoring and where things are heading.
  • Roman Grebennikov is a software developer at Findify AB. His talk focused on Flink’s serialisation framework and common problems happening around it. He illustrated and explained several ways to optimise Flink jobs by taking care of serialisation, which in most cases represents about 60% of the processing.

For Christophe, our software engineer:

  • Konstantin Knauf is the head of product for the Ververica Platform, based on Apache Flink. He discussed Apache Flink worst practices by sharing anecdotes and hard-learned lessons of adopting distributed stream processing. It was a humorous list of general good practices when working with Flink, from planning and requirements to deployment and maintenance.
  • Aaron Levin and Mike Mintz are software engineers in Stripe’s streaming team. They talked about the many challenges they encountered writing a specialised dual source. This talk was a very well-told story about a simple use case with a strong constraint: all-time deduplication of transactions at Stripe (a payment platform). It was funny, insightful, full of lessons learned, and echoed some of digazu’s features: the history replayer.

 

Activity Conferences

18-10-2019

Kafka Summit: The Key Takeaways

At the beginning of the month, our software engineer Christophe Philemotte was in San Francisco to give a presentation at the Kafka Summit organised by Confluent. The Kafka Summit is one of the main events for data architects, engineers, DevOps professionals, and developers who want to learn about streaming data. In this article, Christophe shares with you the latest trends from the conference.

 

Main observations

This year, one of the most important takeaways at the conference was that Confluent is working towards building an active database with KSQL.

Christophe details: “KSQL is the streaming SQL engine that enables real-time data processing against Apache Kafka. With KSQL, Confluent is embracing the SQL streaming and the integration of its stack into it. They also aim to have the interactivity we already have with a classic database. In short, they are moving towards this new paradigm of active data and passive query where KSQL would make it easy to read, write, and process streaming data in real-time, at scale, using SQL-like semantics. Still, KSQL shouldn’t be chosen over Flink, for instance, without proper consideration of its limitations. For example, real checkpointing and savepoint are missing, as well as global shuffling. There are still constraints on partitioning in some operators and there is no global windowing.”

While talking about SQL streaming, they also mentioned user-defined functions and machine learning integration. You can find more information on the summit website.

Another interesting point was the shared approaches and themes addressed by different companies. For example, 30% of the talks were about operations. About 5 talks were dedicated to how to deploy on Kubernetes, and several other speakers mentioned that deploying on Kubernetes was their target. Real-time analytics, integration/ETL/DataOps, and of course data pipelines were also often mentioned.

 

Keynotes talks:

During the first keynote talk, Jun Rao, the co-founder of Confluent, looked back at Apache Kafka’s last years and what brought them to where they are today. Christophe says: “One interesting point was the concept of democratising data. They envision Kafka as a one-stop self-service shop for devs, data scientists, etc. Still, the users have to overcome a lot of challenges such as operations, integrations, security, or cold storage. Challenges that we are solving with digazu.”

You can find the video of the presentation here.

Jay Kreps, the CEO of Confluent as well as one of the co-creators of Apache Kafka started his talk by discussing the sentence “Software is eating the world”, by Marc Andreessen. Christophe adds: “The idea is that software must be integrated into an ecosystem of other software. The users are no longer just humans. In some cases, the software will be used almost exclusively by other software.”

Jay Kreps also talked about the new steps for Apache Kafka. He announced that the next release of Kafka KSQL in November will enable users to directly register inputs and outputs thanks to Kafka Connect source and sink connectors. They are also working on better interactivity that will allow users to see the results more quickly in the KSQL CLI.

You can find the video of the presentation here.

 

Our Favourite Use Cases:

Kafka on Kubernetes: Keeping It Simple (Nikki Thean)

Nikki Thean is a staff engineer at Etsy, where she helped deploy Kafka. She talked about Etsy’s cloud migration and how running Kafka on Kubernetes was the best option for them, and not half as complicated as they thought it would be. Christophe explains: “At the DataWorks Summit in Barcelona, the message was that K8S resource management was not yet ready to replace YARN. We now see that K8S is the new YARN for many people who are using it to deploy their clusters. For example, Etsy or Confluent Cloud.”

In her talk, Nikki Thean explained how a Kafka-on-K8S setup works. Christophe explains: “The main lessons from her talk are:

  • We can start simply without an operator.
  • We must pay attention to the Kubernetes liveness and readiness probes. They can be used to make a service more robust and more resilient since K8S can restart them if necessary. However, if these probes are not configured carefully, they will kill the brokers unnecessarily.
  • Considering the price of deploying across multiple zones on Google Cloud Platform, a good solution is to deploy at least Zookeeper (the most critical element of the cluster) across multiple zones. Given the low volume of data, it will not be too expensive, and Zookeeper will make it possible to identify which Kafka node has the data.”

You can find the video and the slides of the presentation here.

 

Mission-Critical, Real-Time Fault-Detection for NASA’s Deep Space Network using Apache Kafka (Rishi Verma)

Rishi Verma is a manager at the NASA Jet Propulsion Laboratory. He talked about the new software system being deployed by NASA to upgrade its Deep Space Network (DSN), which operates spacecraft communication links for NASA deep-space missions. Christophe says: “It was a super interesting use case! The DSN Complex Event Processing (DCEP) software assembly is a new software system that brings next-generation “Big Data” infrastructural tools into the DSN to do IoT with their legacy assets. The objective is to correlate real-time network data with other critical data assets (in their example, an old radio antenna). They collect all the data in Kafka, process it, and then predict signal loss on the basis of weather conditions.”

You can find the video and the slides of the presentation here.

 

 

Cross the Streams Thanks to Kafka and Flink (Christophe Philemotte)

Christophe is the CTO of digazu, the batch and real-time data sharing platform developed by EURA NOVA. In his talk, he explained how you could build a similar data platform and how you could plug Flink into the Kafka ecosystem, as well as what the common pitfalls are and what Flink requires to be deployed on Kubernetes.

Christophe says: “The feedback was positive and I received a lot of questions during the Q&A session and after the talk, notably about Flink vs KSQL vs Spark. Another question that I received a lot was when to use the Table, SQL or DataStream API. My answer was that the Table and SQL APIs are two different flavours of the same API: with the Table API you have a LINQ-like experience, while with the SQL API you have a SQL experience. They are both perfect for data processing that can be expressed simply in SQL, which covers a lot of cases. The DataStream API is a lower-level API compared with the Table and SQL APIs. It gives more control over what you can do, which also means it requires a thorough understanding of Flink’s core mechanisms. Going for the DataStream API is usually a good choice either when your stream processing cannot be expressed in SQL and requires a specific implementation, or when you need to optimise the processing.”
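As a hedged illustration of the extra control the DataStream API gives you (the event type, key and timeout are invented for the example), a KeyedProcessFunction can keep arbitrary per-key state and register its own timers, something that is hard to express in SQL:

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

/** Emits an alert when no event has been seen for a key for 10 minutes.
 *  Sketch only: the surrounding job would key the stream by customer id,
 *  assign event-time timestamps upstream, and apply this function with
 *  .process(new InactivityAlert()). */
public class InactivityAlert extends KeyedProcessFunction<String, String, String> {
    private static final long TIMEOUT_MS = 10 * 60 * 1000;
    private transient ValueState<Long> lastTimer;

    @Override
    public void open(Configuration parameters) {
        lastTimer = getRuntimeContext().getState(
                new ValueStateDescriptor<>("last-timer", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) throws Exception {
        // Cancel the previous timer for this key and arm a new one.
        Long previous = lastTimer.value();
        if (previous != null) {
            ctx.timerService().deleteEventTimeTimer(previous);
        }
        long fireAt = ctx.timestamp() + TIMEOUT_MS;   // requires event-time timestamps upstream
        ctx.timerService().registerEventTimeTimer(fireAt);
        lastTimer.update(fireAt);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        out.collect("inactive: " + ctx.getCurrentKey());
    }
}
```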

The sandbox provided was also very popular.

You can find the video and the slides of the presentation here.

 

Our Favourite User Practice:

 

Please Upgrade Apache Kafka. Now. (Gwen Shapira)

Gwen Shapira is a software engineer at Confluent working on core Apache Kafka. She reviewed all the recent releases and made suggestions on how to de-risk upgrades.

Christophe says: “Gwen Shapira talked about why it is essential to upgrade even though it is risky and time-consuming. She explained that each new release fixes from 30 to 140 bugs and listed the improvements you will get from upgrading”. Among them:

  • The Apache Kafka team is working on improvements to make replication more reliable. For example, watermarking has been greatly improved.
  • They are working on controller design towards the removal of Zookeeper.
  • Finally, some releases are critical for specific reasons (e.g. proper resolution of IP when you work with K8S, JBOD, or EOS).

 

In the second part of her talk, Gwen Shapira made suggestions on how to upgrade as safely as possible. Christophe explains: “She recommended taking good care of backup configuration and documentation. Regarding documentation, she recommended reading the list of notable changes, acting upon text in bold font, and, once you have finished reading, going over it all again!”

Christophe’s last word? “Be sure to check out slide 35: it lists the ways not to upgrade!”

You can find the video and the slides of the presentation here.

Activity Conferences

27-09-2019

4th Workshop on Real-time & Stream Analytics in Big Data

EURA NOVA Research centre is proud and excited to organize the fourth workshop on Real-time and Stream analytics in Big Data, collocated with the 2019 IEEE conference on Big Data. The workshop will take place in December in Los Angeles, USA.

Stream processing and real-time analytics in data science have become some of the most important topics in Big Data. To refine new opportunities and use cases required by the industry, we are bringing together experts passionate about the subject.

This year, we are excited to have two amazing keynote speakers from Confluent (Kafka Streams) and Apache Pulsar:

  • Matteo Merli is one of the co-founders of Streamlio; he serves as the PMC chair for Apache Pulsar and is a member of the Apache BookKeeper PMC. Previously, he spent several years at Yahoo building database replication systems and multi-tenant messaging platforms. Matteo was the co-creator and lead developer of the Pulsar project within Yahoo.
  • John Roesler is a software engineer at Confluent and a contributor to Apache Kafka, primarily to Kafka Streams. Before that, he spent eight years at Bazaarvoice, on a team designing and building a large-scale streaming database and a high-throughput declarative Stream Processing engine.

 

If you want to join us, authors from industry and academia are invited to contribute to the conference by submitting articles. Check out the workshop website to find all the information you will need. Your paper will be reviewed by a prestigious panel of international experts from both the academic and the industrial worlds.

 

Activity Conferences

28-08-2019

ACL 2019: Takeaways from the conference

Last month, our R&D Project Director Cécile Pereira and our PhD student Léo Bouscarrat travelled to Florence to attend and present at ACL 2019. ACL is one of the biggest conferences in Natural Language Processing. This year, all the records were broken, with more than 3,500 attendees, 660 papers accepted to the main conference, 9 tutorials and more than 20 workshops. All the talks of the main conference were recorded and are accessible online. In this article, Cécile and Léo share with you the latest trends from the conference!

 

 

Big trends

 

A new paradigm in NLP?

This year, ACL’s selection of topics has shown the importance that self-training methods such as BERT (Devlin et al., 2019) or XLNet (Yang et al., 2019) have taken. These methods consist of feeding huge models a vast amount of data and training them on easy tasks (for example, predicting masked words in the original sentence or predicting whether two sentences follow each other).

These models should then be able to learn faster and with less data on a more specific and complex task. With this method, the way to train a model to solve an NLP task has changed. Here is the new paradigm:

  1. Select a pre-trained model (trained with self-training)
  2. Add a layer on the output of this model (it will depend on your task) and fine-tune the model by giving the inputs and outputs of your task
  3. Evaluate your model

Many papers were using this paradigm to achieve state of the art on several tasks (out of the 660 papers of the main conference, 47 have the word BERT in their abstract).

Contextual embeddings, like BERT, take into account the context of the sentence into the embeddings of the words. BERT can be used for a large variety of tasks including but not limited to classification (Reimers et al., Chalkidis et al.), named entity recognition (Arkhipov et al., Emelyanov and Artemova) and question answering (Li et al., Liu et al.).

So it is working. But the remaining question is why?

Several presentations discussed the explainability of BERT (for example Jawahar et al. and Clark et al.). These papers show that, as the different layers learn different things, the different heads seem to specialise in certain types of words or certain syntactic or semantic tasks.

The conference highlighted the need for adversarial training and testing, as those models are very good at learning bias in the dataset (Niven and Yao, Jiang and Bansal). For those not familiar with the concept, adversarial training and testing consist of training and/or testing on an adversarial dataset. This dataset is composed of examples, often generated ones, where the model fails to predict the correct answers. Adversarial training is generally used to verify whether the models learn bias in the dataset (like the negation in Niven and Yao). It can also improve the quality of the models.

 

Improving the experiments in NLP

Several presentations showed that adversarial training can improve the results and robustness of the models (Zhu et al. and Jiang and Bansal; Mohit Bansal’s slides are available here).

The meeting was also a moment to discuss the impact of the use of standard splits on benchmark data. A standard split means that, if you want to work on a specific task, you will generally look for the training, validation and test splits used in other publications and use the same ones.

However, Gorman and Bedrick argue that random splits should be preferred. They demonstrate this by trying to reproduce the results of nine part-of-speech taggers on a specific dataset. They reproduced the same rankings on the standard splits. However, when they did it on random splits, the ranks of the taggers, on the same metric, varied.

This showed that getting a better ranking on a specific split doesn’t mean that you are better in general. Since in some fields of research the improvements between each paper are small, the use of a standard split does not guarantee that a model is really better than another on the task. Random splits could improve this by adding a notion of variance to the performances.

 

Domain adaptation

The last trend in NLP consists of using models or embeddings learned on huge datasets of general data from sources such as Wikipedia, books or newspapers.

When you want to work on specialised domains such as the biomedical, legal or financial domains, you need specialised embeddings. However, you generally don’t have enough specialised data to re-train the embeddings or the models.

A solution is to use and modify pre-trained models for your specific task. This is called domain adaptation. There are several ways to do it. For example, Boukkouri et al. combined a general embedding and a smaller one learned on their domain. Hu et al. fine-tuned a general model on their data. These methods make it possible to use recent models (which need a lot of data) on specific domains that do not meet those requirements.

 

 

Trendy topics

 

Machine translation

Machine translation is still a huge topic, with no less than 46 papers in the main conference (according to the ACL 2019 chair blog post), an entire two-day workshop dedicated to it, and Liang Huang’s invited talk. Liang Huang is a principal scientist at Baidu Silicon Valley AI Lab; he talked about the current state of simultaneous translation and Baidu Research’s new approach. They were able to do English/Chinese translation with only 3 seconds of delay. The demo is available here: https://simultrans-demo.github.io/. One can also note that the ACL best long paper award was on this topic (Zhang et al.)!

 

Conversational systems

Conversational systems (also called chatbots) were also a trendy topic, with 52 papers, a workshop, and the invited talk from Pascale Fung.

Pascale Fung is a Professor at the Hong Kong University of Science & Technology. She presented the state of the art of conversational systems. For her, recent advances are going in three directions: learning to memorise, learning to personalise and learning to empathise.  She presented her current work on conversational systems that can empathise, showing that improvements have been made but there is still work to do. She ended with questions about the ethics of this sector: how can we build systems that are secure, safe and fair for all?

 

Knowledge graph

Knowledge graphs are also pretty trendy; they seem to be a good way to add knowledge to models. They can be used for question answering or conversational systems. The blog post of Michael Galkin reviews the most interesting articles in this area.

 

Bias in NLP

After recent papers showed that models in NLP are biased (Bolukbasi et al., 2016; Caliskan et al., 2017), there is more and more work on what we can do about it, reflected by a session and a workshop during the meeting (https://genderbiasnlp.talp.cat/).

Several works about removing gender bias from models have been previously published. But the work of Gonen and Goldberg explains that, for now, it’s only “Lipstick on a pig”.

We observed two main areas on the topic. Firstly, removing/controlling gender bias in the models (like in automatic translation, Habash et al., Escudé Font et al., Ik Cho et al.). Secondly, measuring bias in the models and society (with articles proposed by sociologists, like Karve et al., Hitti et al., Basta et al., Kurita et al.).

 

Summarization

There were several papers about summarization (including our own paper https://arxiv.org/abs/1907.07323), which have been summarized by RecitalAI on their GitHub.

 

 

Conclusions

ACL was a great place to measure the trends in the NLP field. As models are becoming better, data scientists are applying them to a large variety of topics including automatic translation, search engines, and chatbots.

As the NLP community and topics are becoming bigger and bigger, we hope that this summary of our biased takeaways from the meeting could help you navigate the nearly 700 ACL papers of this year.

 

Activity Conferences

16-07-2019

DEBS 2019: A Summary

Last month, our R&D engineer Anas Albassit and director Sabri Skhiri travelled to Germany to attend and present at DEBS 2019, one of the most specialised conferences in Distributed Event-Based Systems. DEBS has a long history: from active databases to streaming engines and distributed publish-subscribe systems, it has always been the pioneer of distributed and high-performance systems. In this article, Anas and Sabri share with us what they learned there and what struck them as particularly useful.

 

Big Trends

 

This edition focused on streaming languages, scheduling, elasticity, distributed event processing, platforms and middleware. Our R&D director Sabri Skhiri says: “For someone working in distributed computing and data management, DEBS is one of the major conferences, along with SIGMOD and VLDB. Even though it is quite small (80 participants vs 1,000 for IEEE Big Data), this is a niche conference of experts from a small yet amazingly talented community of researchers. The keynotes were just great, with a good balance between pure research and industry. This conference tackling distributed computing and streaming is heaven for data scientists and architects like us!”

 

 

Keynotes

 

Open Problems in Stream Processing: A Call To Action (Tyler Akidau)

 

Tyler Akidau is the technical lead for the Data Processing Languages & Systems group at Google. He argues that, even though stream processing has gone from niche to mainstream, this is just the beginning. For him, the need for active exploration of new ideas is all the more pressing. Sabri reacts: “Streaming has been there for 30 years. We have Spark, Flink, Dataflow, KStream, MSF Trill. But is that all we can do? Is there nothing left to do? Tyler Akidau brilliantly showed that stream processing as a field of research is alive and well.”

The talk was mainly about raising open or partially open questions in the streaming world.

  • Firstly, the evolution of a pipeline over time may require changes in the persisted state. In that case, how do we gracefully update everything online without stopping the working pipeline? How can we auto-tune or auto-build optimised systems? Do we need to rethink the way we build such systems now?
  • Secondly, Tyler Akidau moved on to the importance of SQL in streams and the missing parts to fill in there. One of the main topics was the mathematical formalisation of streaming operators, in addition to providing richer standards and clarifying the ambiguities coming from the nature of streams, like out-of-order events and latency.
  • The next point focused on the trade-off triangle between latency, cost and correctness. How do we figure out what we need? How do we describe every term correctly from a system-behaviour point of view? Is it possible to automatically prioritise different factors depending on the urgency of a task (e.g. streaming vs. batch)?
  • Last but not least, how can we improve stream processing? What kind of database optimisations can be adopted here, and can we think of more optimisations that apply only to streams?

 

Tyler Akidau concluded by pointing out that even though streaming systems are more capable and robust than ever, they often remain difficult to use, difficult to maintain, and difficult to understand.

[EDIT] Thank you Tyler for reaching out and for sharing your slides with us! They are available at the following link. If you would like to discuss more insights from the talk, do not hesitate to contact our researchers at research@euranova.eu.

 

 

Interesting Papers

 

STRETCH: Scalable and Elastic Deterministic Streaming Analysis with Virtual Shared-Nothing Parallelism (Hannaneh Najdataei)

 

Hannaneh Najdataei, Researcher and PhD Student at the Chalmers University of Technology in Sweden, presented her framework STRETCH.

Anas explains: “The performance of a streaming engine depends on the throughput and latency of stateful analysis. To achieve the best performance, we need to process a large amount of data (i.e. to be scalable) while handling fluctuations in the data rate (i.e. to be elastic). Distributed processing requires the ability to parallelise the processing elastically. Optimally, we should reduce the number of parallel operators when the workload decreases and add operators when more resources are needed. For stateful operators, elasticity reconfigurations require redistributing the states according to the new cluster configuration (i.e. fewer or more operators). In this case, we need to find a trade-off between a share-nothing and a share-all state architecture.”

Sabri adds: “The paper proposes STRETCH, a virtual shared-nothing parallelism concept that does not require state transfer. The idea is that all workers read the same sequence of input tuples through an intra-node streaming framework. What is surprising in this paper is the parallelism model: all workers get the same sequence of tuples to guarantee the deterministic execution of the stream, whereas in streaming you usually distribute your tuples per key. Still, they obtained impressive results, matching the throughput and latency figures of state-of-the-art solutions while also achieving fast elastic reconfigurations.”
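To make the virtual shared-nothing idea more concrete, here is a minimal, hypothetical sketch (not the actual STRETCH implementation): every worker reads the same input sequence, state lives in memory shared by the workers on the node, and each worker only processes the keys currently assigned to it, so scaling in or out only changes the key assignment rather than shipping state around.

```python
from collections import defaultdict

# Toy illustration of virtual shared-nothing parallelism (not the STRETCH code).
# All workers see the same sequence of tuples; state is kept in node-local
# shared memory, so an elastic reconfiguration only reassigns keys.

shared_state = defaultdict(int)   # per-key running sum, shared by all workers

def worker(worker_id, num_workers, tuples):
    """Process only the keys assigned to this worker; skip the rest."""
    for key, value in tuples:
        if hash(key) % num_workers != worker_id:
            continue                       # another worker owns this key
        shared_state[key] += value         # stateful aggregation

stream = [("a", 1), ("b", 2), ("a", 3), ("c", 5)]

# Run with 2 workers; "scaling out" to 3 would only change num_workers in the
# hash-based key assignment, and no state would need to move.
for w in range(2):
    worker(w, 2, stream)
print(dict(shared_state))   # {'a': 4, 'b': 2, 'c': 5}
```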

To know more about the STRETCH framework, you can find the slides of the presentation here and the paper here.

 

Uncertainty-Aware Event Analytics over Distributed Settings (Nikos Giatrakos)

 

Nikos Giatrakos is a PhD researcher at the Technical University of Crete. He presented his work on uncertainty-aware event analytics. Sabri reacts: “Getting high performance by sampling the input stream and sacrificing a bit of result precision is the new trend in research. The idea is to parse only some of the events so you can handle a bigger load, while still controlling the level of uncertainty you have on the result. I see two great applications: (1) getting approximate results when needed, and (2) proactive detection before events happen.”

While the idea of filtering by controlling the probability of the error is not new, the paper had several novel points:

  • The decomposition of the error into filters from a pattern matching query
  • The conditional probability assertion for a wide range of aggregation functions
  • A central coordinator for calculating the global PDF and then detecting the pattern
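As a rough illustration of the load-shedding idea Sabri describes (a generic sampling sketch with assumed parameters, not the method of the paper), one can inspect each event only with probability p, scale the observed count back up, and bound the uncertainty of the estimate with a standard confidence interval:

```python
import math
import random

def sampled_count(events, predicate, p=0.1, z=1.96):
    """Estimate how many events match `predicate` while only inspecting a
    fraction p of the stream; returns the estimate and a ~95% half-width."""
    seen = sum(1 for e in events if random.random() < p and predicate(e))
    estimate = seen / p
    # Binomial-style error bound on the scaled-up count (illustrative only).
    half_width = z * math.sqrt(seen * (1 - p)) / p if seen else float("inf")
    return estimate, half_width

events = [random.randint(0, 99) for _ in range(100_000)]
est, err = sampled_count(events, lambda e: e > 90, p=0.05)
print(f"~{est:.0f} events above 90 (+/- {err:.0f})")
```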

To know more about his paper, you can find the slides of the presentation here and the paper here.

 

LEAD: A Formal Specification For Event Processing (Anas Al Bassit)

 

On the fourth day of the conference, our R&D engineer Anas presented his paper proposing a formal specification for CEP languages.

Processing event streams is an increasingly important area for modern businesses aiming to detect and efficiently react to critical situations in near real-time. Due to CEP languages’ limitations and imprecise semantics, describing interesting situations remains challenging. In this paper, Anas presents a formal specification for processing complex events. The paper provides an algebra that consists of a set of operators for constructing complex events (patterns), temporally restricting the construction process and choosing among several selection and consumption policies.
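To give a flavour of what such operators look like in practice, here is a toy pattern matcher (a simplified illustration, not the LEAD algebra itself): it detects a sequence of an A event followed by a B event within a time window, with a first-match selection policy and a consumption policy that consumes matched events.

```python
# Toy sequence detection: SEQ(A, B) within `window` seconds,
# first-match selection, matched events are consumed.
def detect_seq(events, first_type, second_type, window):
    """events: list of (timestamp, type) tuples, sorted by time."""
    matches, pending = [], None
    for ts, etype in events:
        if pending and ts - pending[0] > window:
            pending = None                     # partial match timed out
        if etype == first_type and pending is None:
            pending = (ts, etype)              # start a new partial match
        elif etype == second_type and pending:
            matches.append((pending, (ts, etype)))
            pending = None                     # consume both events
    return matches

events = [(0, "A"), (3, "B"), (10, "A"), (25, "B")]
print(detect_seq(events, "A", "B", window=5))   # only the first pair matches
```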

To know more about his paper, you can find the slides of the presentation here and the paper here.

 

 

Tutorials

 

Correctness and Consistency of Event-based Systems  (Opher Etzion)

 

The second day of the conference was dedicated to tutorials from experts in the field. Anas gives insights into his favourite tutorial: Correctness & Consistency of Event-Based Systems. He explains: “The speaker was Opher Etzion, one of the pioneers in the domain of event processing. The tutorial lasted about 4 hours. What is interesting is that the speaker demonstrated with examples that building an event-based system is not trivial. What is more, a lot of existing systems are incorrect and give inconsistent results due to problems in their semantics. To ensure correctness, you at least have to understand the sources of latency in your system and ensure fairness between all the agents, in addition to defining a set of policies to tell the system when, how, where and what events you are looking for.”

Activity Conferences

19-04-2019

DataWorks Summit: the big trends

Last month, four EURA NOVA engineers travelled to Barcelona to attend the DataWorks Summit. The conference is organised by Hortonworks, which has since merged with Cloudera, and focuses on how to apply open-source big data technology to accelerate digital transformation initiatives. They came back with a lot to say about the hot topics in AI, machine learning, architecture, the cloud, and the use cases! In this article, they share what they learned there and what struck them as particularly useful.

 

Big Trends

 

Data architecture

This year, one of the most important trends at the conference was data management and data architecture. Our R&D director Sabri Skhiri says: “There was a real focus on taking data lakes to their next stage and on making them actionable for AI and machine learning. The notion of data hubs was often mentioned, notably during the keynote speeches by Cloudera, IBM, and Pure Storage. However, most platform vendors have not yet been able to provide a fully-fledged ecosystem that allows the exploration, governance, and industrialisation of big data”.

AI industrialisation

This brings us to the second motto of the conference: AI industrialisation is a must. Our data engineer Khalil Amdouni explains: “The conference has been migrating towards AI topics. In the past, the conference used to focus mostly on data ingestion and data processing. It has been moving towards data science. Everyone is talking about AI and machine learning and how to put data science models into production. It’s looking into how to move from data exploration to industrialisation; we heard a lot about Cloudera’s Data Science Workbench etc.”

Production environment

The third trend of the conference was the separation between data processing tools and AI frameworks. Khalil explains: “Spark, Cloudera, and Kubernetes are now all providing production environments (data science management platforms such as Cloudera Data Science Workbench, the Databricks Runtime ML, Kubeflow…) to integrate with machine learning frameworks such as TensorFlow or Python libraries.” Sabri adds: “This is interesting, but we should first speak about “productisation”, data science model lifecycles, and continuous integration and delivery. There are still a lot of shortcomings, like the fact that you need to centralise all your data in one partition before starting your favourite AI framework”.

Data governance

Another hot topic of the conference was data governance and compliance with regulations. Our R&D director goes on to say: “Everybody is speaking about the importance of being GDPR-compliant and is proposing tools like Atlas, Egeria, IBM InfoSphere, … but no one says how to actually comply with the GDPR during model deployments or how to deal with access policy management.”

 

 

Favourite Talks

 

Stream, Stream, Stream: Different Streaming Methods with Spark and Kafka

Itai Yaffe presented the journey made by Nielsen’s Marketing Cloud division (NMC) to provide its customers with real-time analytics tools to profile their target audiences. To achieve this goal, NMC needed to continuously transform its data infrastructure to ingest billions of events per day in a scalable and yet cost-efficient manner.

Sabri says: “The first version of NMC’s architecture included CSV files and standalone Java applications, with an OLAP database to expose the results. To reach their goal, NMC’s teams had to scale the process up to handle 10 times as much data”.

Their first step was to change the architecture: they moved to Kafka to ingest data, they leveraged Spark to stream and to aggregate data, and they used HDFS to store data.
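For readers unfamiliar with that stack, the pattern looks roughly like the following Spark Structured Streaming sketch (broker, topic, and path names are made up; the actual NMC pipeline was considerably more involved):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("events-aggregation").getOrCreate()

# Ingest raw events from Kafka (broker and topic names are placeholders).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "raw-events")
          .load()
          .selectExpr("CAST(key AS STRING) AS campaign",
                      "CAST(timestamp AS TIMESTAMP) AS ts"))

# Aggregate per campaign over 1-hour windows.
counts = (events
          .withWatermark("ts", "2 hours")
          .groupBy(window(col("ts"), "1 hour"), col("campaign"))
          .count())

# Persist the aggregates to the data lake (HDFS path is illustrative).
query = (counts.writeStream
         .outputMode("append")
         .format("parquet")
         .option("path", "hdfs:///lake/aggregated-events")
         .option("checkpointLocation", "hdfs:///lake/checkpoints/agg")
         .start())
```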

Sabri explains: “The issue here was that they had to manage the statefulness of the Spark applications on HDFS by themselves. In addition, the system was error-prone in case of failure. They tried again and looked into Spark Structured Streaming, then tried to combine Spark Streaming with batch ETLs and finally decided to use Kafka to imitate streaming over their data lake. This evolution made the situation really interesting from a business and architectural point of view. Their business goal is to support decision making with machine learning to deliver reports on campaigns. Over the years, they adapted their architecture to go further and reach that objective”.

Our architect Cyrille Duverne adds: “Their story showed how much effort is required to build a long-term architecture. Tools are not enough; you first need the use cases that lead to an architectural vision. Only then can you choose the tools that will support the vision.  To build this architecture, you need time and people with the right skills”.

To know more about NMC’s journey, you can find the slides of the presentation here.

 

Federated Learning

Chris Wallace is a data scientist at Cloudera Fast Forward Labs. He presented how his team leveraged federated learning to predict maintenance problems in a setting where a manufacturer’s customers are unwilling to share the details of how their components failed, yet still expect the manufacturer to provide a strategy for maintaining the faulty parts.

Our architect Cyrille Duverne explains: “In this case, federated learning is a kind of distributed deep learning where you train the model on decentralised data. The main idea is that a network of nodes shares models rather than training data with the server. Each node holds a copy of the model and trains it on its own data. Each node then sends its trained model back to the central server, which averages them and sends the new model back to the nodes. The process is repeated until the final version of the model is reached.”
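The averaging step Cyrille describes is essentially federated averaging; a minimal numpy sketch (illustrative only, with synthetic data and a simple linear model) could look like this:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each node trains its copy of a linear model on its own data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three nodes, each holding private data that never leaves the node.
nodes = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    nodes.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                              # federated rounds
    local_models = [local_update(global_w, X, y) for X, y in nodes]
    global_w = np.mean(local_models, axis=0)     # the server averages the models
print(global_w)                                  # should approach [2, -1]
```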

Our data scientist Malian De Ron explains: “I find federated learning very interesting. As data scientists, we can work directly on updating models, but we don’t have access to all the training data. Federated learning can be useful for use cases where the customers want to keep their data anonymous. For example, we work for a financial company that works with a bank. Neither of them is willing to share their data. By using federated learning, the training data could remain in its original location, which could satisfy our customer’s privacy concerns.”

To know more about federated learning, you can find the slides of the presentation here.

 

Data governance with Egeria: The industry’s first open metadata standard

John Mertic is the director of program management for ODPi, the Linux Foundation’s Open Data Platform initiative. He talked about their new open metadata standard Egeria, introduced in September. John Mertic explained how the standard supports the free flow of standardised metadata between different technologies and vendor platforms, enabling organisations to locate, manage, and use their data resources more effectively.

Sabri says: “Companies have 40 years of evolution embedded in their IT systems, resulting in highly complex data lineage and data silos. In the complex new world of big data and real time, security models have to track data throughout the organisation. This is why data governance and metadata management are hot topics in conferences. Everybody is talking about them and proposing tools such as Egeria, IBM InfoSphere, or Atlas. I talked with the IBM InfoSphere people and got an overview of the Egeria tool. It can be used to federate the IBM InfoSphere Information Governance Catalog, Apache Atlas and even other Egeria cohorts. The IBM Governance Catalog can pull information directly from Egeria and integrate the metadata, the lineage, and even tags from Atlas”.

To know more about Egeria, please find the slides of the presentation here.

 

 

Final Thoughts

 

When working with clients as they make their journey to the new digital world, we noticed recurrent problems in the areas of data access, usage, and governance. In many conferences, we hear stories of companies facing these challenges and making a lot of ad hoc choices while lacking a long-term architectural vision. To tackle these challenges, our R&D director Sabri Skhiri designed the Data Architecture Vision (DAV), which later led to digazu.

The DataWorks conference highlighted the need to take data lakes to their next stage. The digazu platform, with its integrated and managed data lake, meets that need. It is a true data hub that integrates real-time and batch dataflows, collects data from multiple sources, stores it, and distributes it to applications and users across the whole organisation.

Another need mentioned at the conference was that of providing companies with production environments to deploy models. Leveraging ever-increasing amounts of data to provide new services or solve problems requires increasing resources in terms of expertise, time and money. digazu offers a scalable way to keep data pipelines open for business in real time or batches without an army of data experts, lines of code, or complex training.

A third need highlighted at the conference is for companies to achieve good data governance. There are already excellent governance tools such as Atlas, Egeria, and IBM InfoSphere to support the free flow of standardised metadata. digazu opens the door to automated regulatory compliance by providing ready-to-use connectors to data management and governance tools.

To learn more about digazu, visit digazu.com

 

Activity Conferences

27-12-2018

IEEE Big Data 2018: a summary

At the beginning of the month, our R&D director Sabri Skhiri and our R&D engineer Syrine Ferjaoui travelled to Seattle to attend IEEE Big Data. The conference is one of the most influential in the domain, gathering more than 1,100 attendees, 5 keynotes, 9 tutorials, and 8 daily tracks in parallel. Back in Belgium, our R&D director gives you his opinion on the conference itself and the important elements from the keynotes, tutorials, workshops, and interesting papers.

 

Favourite Talks

 

Keynote 1: Decentralized Machine Learning – Google AI

The IEEE Big Data conference started with the inspiring keynote of Blaise Agüera y Arcas, a distinguished researcher at Google AI. Our director details: “The straightforward thesis of the talk is that we can, and we must, use mobile devices for local deep neural network computing. Blaise Agüera explained that since the launch of TensorFlow, Google Brain has built specialised hardware servers to run deep neural network jobs efficiently. Nowadays, we find on the market specialised chips that are smaller than a 1-cent coin and cost less than a cappuccino. Using them, you can run deep neural net jobs very efficiently on mobile devices at low frequency and low energy, even continuously. For example, the Google camera embeds deep neural nets and does not need to send data to the server side for face or situation detection. But Dr Blaise is going further. He works on reusing existing techniques from distributed neural networks, aggregating the learned gradients in a parameter server and sharing them with all devices. This is what we call federated learning, and it has impacted many research areas, such as edge computing. The idea of edge computing is to execute light tasks on the edge of the network in order to offload the server/cloud. But here, this changes the game since the nature of the job is not light anymore. In addition, the concept of federated learning does not try to offload the server but changes the role of the server into a coordinator between edge devices. Secondly, it has impacted neural net compression. The question is then: do we still need to compress networks when we can either distribute the neural net on the server side or have specialised chips on the device side?”

 

Keynote 2: Big Data for Speech and Language Processing – Microsoft Research

The second keynote was given by Xuedong Huang, a Microsoft Technical Fellow of Microsoft Cloud and AI. He presented the latest advances in speech recognition and Text To Speech (TTS). The key papers behind this technology can be found here and on the research group page. Our director explains: “The first part of the keynote was about the Microsoft live captioning that will soon be integrated natively in PowerPoint. That is just impressive. Everything the speaker says is captured by the tool. I personally tested the Translator Android application and it works just fine! The second part of the keynote focused on Text To Speech (TTS). The speaker showed a set of very interesting examples of how voice can be modelled. For instance, if the system learns a model out of hours of discussions, it can apply my voice to Chinese or Arabic, or it can learn from a group of people in order to get a better accent and expression”.

 

The Tutorials

This year, IEEE Big Data organised 9 tutorials. Our R&D director explains: “This is probably what I like the most at an academic conference. A research group presents a complete state-of-the-art review of their domain and usually positions their own work in the story. My favourite was Progress in Zeroth Order Optimization and Its Applications to Adversarial Robustness in Deep Learning. It was one of the coolest research topics I have seen so far. They discussed how you can fool a deep neural network into producing a wrong classification. The idea is great: finding the minimal noise you can add to a picture in order to increase the probability of a wrong classification. In this setting, you don’t know anything about the classifier, but you can submit images and you will get a label. Indeed, that looks like a black-box optimisation setting, which is precisely why they use zeroth-order optimisation. The research topic is so cool: you can manage to fool the classifier into recognising a piano in an image picturing a bagel! Can you imagine the impact, in the era of the electronic passport, where image recognition is starting to be used in the signature process? What if I could find out how to fool the algorithm into classifying me as someone else with just a few grey pixels on my picture?”
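To illustrate the black-box setting Sabri describes, here is a stripped-down zeroth-order attack sketch (a generic finite-difference estimator against a toy “classifier”, not the tutorial’s algorithms): the gradient of the target-class score is estimated purely from queries and used to nudge the image.

```python
import numpy as np

def black_box_score(image):
    """Stand-in for an unknown classifier: returns the score of the target
    (wrong) class. In a real attack we could only query it, never inspect it."""
    secret_direction = np.linspace(-1, 1, image.size)
    return float(secret_direction @ image.ravel())

def zeroth_order_attack(image, steps=50, mu=1e-3, lr=0.05):
    x = image.copy().ravel()
    for _ in range(steps):
        u = np.random.normal(size=x.shape)
        # Finite-difference gradient estimate from two queries along direction u.
        g = (black_box_score(x + mu * u) - black_box_score(x - mu * u)) / (2 * mu) * u
        x += lr * g / (np.linalg.norm(g) + 1e-12)   # small step up the target score
    return x.reshape(image.shape)

image = np.zeros((8, 8))
adversarial = zeroth_order_attack(image)
print(black_box_score(image), "->", black_box_score(adversarial))
```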

 

The Workshops

EURA NOVA research centre organised the third workshop on Real-time and Stream Analytics in Big Data, co-located with the 2018 IEEE conference on Big Data. Our Research Director Sabri Skhiri talked about data management, and stream and real-time analytics. Thank you to our keynote speaker Fabian Hueske and to all the attendees and speakers! They had a great time, with captivating talks and a lot of interesting questions and comments. The summary of the event is available on our website. The slides of the opening session and the slides of the second keynote are available here.

 

Final Feelings

In its early years, IEEE Big Data was mainly focused on big data infrastructure. In the following years, the conference became data science oriented, with a significant increase in the number and complexity of data science use cases. When we asked how he felt about the event, Sabri explained: “I have been attending this conference since its first edition. The most important shift I have seen is really about the content. This year, the infrastructure papers have almost disappeared; the vast majority of the publications are on data science. We can really see that it is becoming a conference for ML practitioners. The side effect is that the topics discussed have become more complex. Machine learning notions are assumed to be known, and deep neural networks are becoming the norm. Going further, the authors are also good at using distributed frameworks, especially Spark. For them, the infrastructure is not a problem anymore; it is part of the daily job”.

 

The Papers

A personal selection of interesting papers:

Activity Conferences

15-11-2018

Improving Topic Quality by Promoting Named Entities in Topic Modeling

In July, our R&D engineer Katherine Krasnoschok was in Melbourne, Australia to attend the ACL conference. She presented her poster on topic modelling. Her paper, co-written with Salim Jouili, indicates that involving more named entities positively influences the overall quality of topics.

News-related content has been extensively studied in both topic modeling research and named entity recognition. However, the expressive power of named entities and their potential for improving the quality of discovered topics have not received much attention. In this paper, we use named entities as domain-specific terms for news-centric content and present a new weighting model for Latent Dirichlet Allocation. Our experimental results indicate that involving more named entities in topic descriptors positively influences the overall quality of topics, improving their interpretability, specificity and diversity.
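As a crude way to see the effect of such a weighting (this is not the paper’s model, just a hypothetical sketch using scikit-learn), one can simply boost the counts of named-entity tokens before fitting LDA:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["Apple unveils new iPhone in California",
        "Microsoft opens data centre in Belgium",
        "Local bakery wins bread competition"]
named_entities = {"apple", "iphone", "california", "microsoft", "belgium"}

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs).toarray()

# Naive entity boost: multiply the counts of named-entity terms by 3.
for term, idx in vectorizer.vocabulary_.items():
    if term in named_entities:
        counts[:, idx] *= 3

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vectorizer.get_feature_names_out()
for topic in lda.components_:
    print([terms[i] for i in topic.argsort()[-4:]])   # top terms per topic
```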

Katsiaryna Krasnashchok, Salim Jouili, Improving Topic Quality by Promoting Named Entities in Topic Modeling, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Vol. 2. 2018.

 

Download file (.pdf)

Activity Conferences

02-11-2018

Spark+AI Summit: a summary

A few weeks ago, Sabri Skhiri and Florian Demesmaeker were in London to attend the Spark+AI summit. They came back with a lot to say about the new features of Spark and the presented use cases! In this article, they will give you their opinion about Databricks’ main announcement, the intakes of their favourite talks and training, and what they thought of the new name of the conference.

 

A new name

This year, Databricks expanded the summit’s scope and renamed it “Spark + AI Summit”. The goal of Databricks, announced by its co-founder Ali Ghodsi, is to unify the worlds of data and AI.

Florian Demesmaeker, our R&D engineer, explains: “In some of the keynote talks, the speakers talked about use cases where the job of the data engineer is strongly reduced. The data scientists can easily experiment with data, travelling back and forth in time. This means more focus on AI, rather than on the data engineering part that makes all data accessible to the data scientists”.

 

Main announcement

In line with this change of name, Databricks announced the release of a complete data science lifecycle on the cloud.

Sabri Skhiri, our R&D Director, explains: “It is interesting to see that the change in the event name is actually very visible in the change of Databricks’ strategy. Their tools are now completely dedicated to stream ETL, and there is a huge focus on integrated data management”.

Databricks’ new features include Databricks Delta, which creates data pipelines and provides data views and exploration features. Secondly, the Databricks Runtime ML is a ready-to-use environment providing a set of pre-loaded ML frameworks where the data scientist can play with data. Finally, the MLflow tool simplifies ML model development at enterprise scale.
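For readers who have not used it, MLflow’s tracking API boils down to a few calls around a training run; a minimal sketch (the parameters and the metric value are made up):

```python
import mlflow

# Minimal MLflow tracking example: log hyper-parameters, a metric and an
# artifact so the run can be compared and reproduced later.
with mlflow.start_run(run_name="demo-run"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)

    accuracy = 0.93                      # would come from an actual evaluation
    mlflow.log_metric("accuracy", accuracy)

    with open("notes.txt", "w") as f:
        f.write("trained on the latest data snapshot")
    mlflow.log_artifact("notes.txt")
```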

Our R&D Director adds: “Together, these features provide a complete and unified approach to the machine learning lifecycle and pipeline automation. This looks like a very competitive SaaS offer for integrated data management, available on AWS and Azure. However, metadata management and security are still the missing pieces”.

 

The training day

The first day of the conference was dedicated to training workshops that included a mix of instruction and hands-on exercises to help attendees improve their Apache Spark skills.

Florian gives insights into his favourite training Tuning and Best Practices. He explains: “The aim of the training was to make programmers aware of how Spark works internally, in order to be able to write optimised applications. They presented a few situations, each one showing one relatively slow process. Then they presented a step-by-step procedure to debug the situation and to find the points that could be improved in the current situation. In summary, tips and tricks to adapt to different situations”.

 

Favourite talks

The sessions at the conference covered data engineering and data science contents along with best practices for productionising AI. The talks were divided into roughly two categories: Spark programming and deployment, and applications on top of Spark (AI applications).

Florian Demesmaeker explains: “I attended 28 talks. The keynotes from Databricks were quite interesting, they presented Delta and MLflow. I also enjoyed the talks about tools to optimise the internals of Spark, these provided good technical details. Other talks were about use cases on top of Spark, it was interesting to see what challenges other companies face and how they address them”.

Sabri Skhiri adds: “The talk Learning to Rank Datasets for Search was very inspiring. Oscar Castañeda-Villagrán, a data scientist working at Xoom (a PayPal service), talked about learning to rank R data sets. The idea is that we can extract metadata when a data pipeline arrives in the lake. Going further, you can not only extract metadata but also calculate a kind of relevance judgment score that can be used to bootstrap the learning-to-rank process. In this way, a user can search for and retrieve the relevant R data sets in the lake. A very good idea for metadata-driven exploration”.

 

 

Early September 2018, 8 EURA NOVA engineers travelled to Berlin to attend the Flink Forward Conference, dedicated to Apache Flink users and stream processing communities. You can read their feedback here.

Activity Conferences Insights

05-10-2018

Flink Forward 2018: What You Want to Know and What You (Will) Need to Know.

Early September 2018, 8 EURA NOVA engineers travelled to Berlin to attend the Flink Forward Conference, dedicated to Apache Flink users and stream processing communities.

They came back with a lot to say about the hot topics in stream processing and the presented use cases! In this article, they will give you their opinion about data Artisans’ main announcement, the intakes of their favourite talks, and what they thought makes Flink Forward different from other conferences.

 

First keynote announcement:

During the keynote speech, data Artisans announced that they now bring ACID transactions directly on streaming data with data Artisans Streaming Ledger.

Charles Bonneau, our software architect, says: “This feature allows ACID transactions across multiple operators’ event-processing operations and internal states. This means that streaming applications can now update multiple states in one transaction. For example, an application that transfers money from one bank account to another can finally be implemented using Flink with strong consistency guarantees. Both bank accounts will have their balance updated at the same time, as if there were a master data-management state”.
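Conceptually, the guarantee Charles describes means the two account states must change as one atomic unit. The toy sketch below illustrates that property with a plain lock; it is only an illustration of the semantics, not how data Artisans’ Streaming Ledger is implemented:

```python
import threading

balances = {"alice": 100, "bob": 50}
ledger_lock = threading.Lock()

def transfer(src, dst, amount):
    """Atomically move money between two keyed states: either both balances
    change or neither does (isolation here comes from a single lock)."""
    with ledger_lock:
        if balances[src] < amount:
            return False                 # abort: no partial update is visible
        balances[src] -= amount
        balances[dst] += amount
        return True

transfer("alice", "bob", 30)
print(balances)                          # {'alice': 70, 'bob': 80}
```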

For Sabri Skhiri, our R&D director, this opens the doors to a brand new range of applications, especially in data-driven real-time services but also in streaming data management. He explains: “They are pushing forward the concept of streaming. Now, you could imagine a master data-management state that can be updated by operational streaming applications in real time. This will allow even more complex and advanced use cases of stream processing!”.

 

Favourite talks:

In 2 days, each Euranovian attended about 18 talks and use case presentations, with speakers from tech giants such as IBM, Netflix, Alibaba, and Uber as well as speakers from smaller companies.

Charles explains: “The conclusions are reassuring: most of them face the same issues that we see at our clients’, and our solutions are all valuable. They include a stream-first data architecture, a stream-first data pipeline product, and Flink development skills. Even though a number of companies are at the very edge of the technology and their issues do not yet require continuous flows of a considerable number of events, we are ready”.

For our R&D Director Sabri Skhiri, the keynote speech from Lightbend was one of the most interesting ones. He explains: “Viktor Klang, Lightbend deputy CTO, talked about the convergence between microservices and stream processing.  At EURA NOVA, we have been advocating for this convergence for more than a year in our architecture practice. The idea is simple: asynchronous microservices can be designed as stream processing stages. This is fantastic because it makes modern stateful stream processing frameworks the perfect target for implementing reactive microservices. With stateful deployment, exactly once semantics, high availability and ACID access to states, microservices can become stateful streaming apps.”

 

Vision-oriented Flink Conference:

Our colleagues came back with sparkles in their eyes. When we asked them how they felt about the event, Sabri Skhiri explained:

“Very often, this type of conference tends to be business-oriented, focused on how to make the framework easy to use and available to as many people as possible. By contrast, this year’s Flink Forward conference was all about innovation and vision. data Artisans shared their vision of what the Flink framework will be within 3 to 5 years and talked about the role stream processing and big data play within this vision. In fact, almost all the talks were very technical. They were testimonies from big names in the industry, such as Alibaba, Netflix, and ING, about problems encountered in the field and how they were solved, often in out-of-the-box ways. The Flink-Alibaba partnership is a sharing one: Alibaba are way ahead with their technology; they keep their lead for a year and then share their work and make their code open source. data Artisans have a great long-term vision of stream processing. I can see a lot of very interesting architecture discussions in the coming months!”

 

Stream Processing Technology:

While most frameworks cannot process large streams of live data and provide results in real time, Flink provides a single runtime for streaming and batch processing while being highly scalable.

Cyrille Duverne, our Lead Data Architect, confirms: “Flink is definitely a real-time processor! We’re speaking about true real time, not only mini batches etc… Plus, the introduction of ACID transaction management in the new version of data Artisans’ Flink distribution creates a good marketing edge”.

Sabri Skhiri and our R&D engineer Florian Demesmaeker were at the Spark Summit this week. Stay tuned for part 2 with their feedback!

Activity Conferences

21-09-2018

Data Mining and ML Techniques Supporting TBS Concept Deployment

Our paper “Data Mining and Machine Learning Techniques supporting Time-based Separation Concept Deployment”, co-written with Eurocontrol and WaPT, has been accepted by the 37th Digital Avionics Systems Conference (DASC) in London, U.K.

The paper presents two methods to allow air traffic controllers to deliver separation minima accurately and safely, on the basis of time intervals instead of distances.

Importantly, in strong headwind conditions,  the aircraft’s groundspeed during approach decreases, meaning that keeping the distance-based separation method results in  lower landing rates. At a time of intensified air traffic, this situation leads to considerable delays at airports with significant costs to operators and travellers.

With the new methods presented in the paper, capacity can increase by up to 14% in strong wind conditions, and by up to 8% in moderate wind conditions.

[EDIT] The paper has been presented in September at DASC 2018, you can find the full version below. If you wish to go deeper into the subject, do not hesitate to contact our research department at research@euranova.eu.

 

The abstract

The Time-Based Separation (TBS) concept consists in the definition of separation minima for aircraft on the final approach to a runway based on time intervals instead of distances, as applied in Distance-Based Separation (DBS) operations.

TBS allows for dynamic distance separation reductions in strong headwind conditions so as to preserve time spacing across all wind conditions. However, TBS application entails the use of a support tool providing separation distance indicators that depend on the applicable time separation minimum and on the aircraft speed profile, which itself depends on the headwind conditions.
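The intuition can be captured with a back-of-the-envelope calculation (the numbers below are purely illustrative and not taken from the paper): the distance needed to keep a fixed time separation shrinks as headwind reduces groundspeed, whereas a fixed distance-based minimum stretches the time between landings.

```python
# Illustrative numbers only: 90 s time separation, 5 NM distance-based minimum.
TIME_SEPARATION_S = 90
DBS_MINIMUM_NM = 5.0

def tbs_distance_nm(groundspeed_kt):
    """Distance that preserves the time separation at a given groundspeed."""
    return groundspeed_kt * TIME_SEPARATION_S / 3600.0

for wind, groundspeed in [("calm", 160), ("strong headwind", 120)]:
    d = tbs_distance_nm(groundspeed)
    dbs_time = DBS_MINIMUM_NM / groundspeed * 3600
    print(f"{wind}: TBS spacing ~ {d:.1f} NM, "
          f"fixed {DBS_MINIMUM_NM} NM spacing takes {dbs_time:.0f} s")
```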

This paper details two methodologies allowing a system to compute those TBS indicators so as to allow Air Traffic Controllers to accurately and safely deliver the TBS minima using a separation delivery support tool. The first approach is based on “analytical” data mining and modelling whereas the second one is based on a Machine Learning (M/L) procedure.

In the framework of the deployment of the TBS concept in Vienna airport (LOWW), those approaches are developed and tested using a database covering one year of traffic and corresponding local meteorological data.

The operation of TBS with indicators computed using either approach leads to a substantial reduction of time separations compared to a DBS strategy. However, given the large uncertainties related to both leader and follower aircraft speed profiles, the buffers could be designed only for the most frequent pairs. With the M/L approach (resp. the “analytical” approach), the capacity benefits related to the application of TBS with a separation support tool are of the order of 8% (resp. 2%) in moderate wind conditions, and up to 14% (resp. 10%) in strong wind conditions.

 

De Visscher, I.; Stempfel, G.; Rooseleer, F. & Treve, V.; Data mining and Machine Learning techniques supporting Time-Based Separation concept deployment, in 37th Digital Avionics Systems Conference (DASC), pp 594-603, London, UK, September 23-27, 2018

Download file (.pdf)

Activity Conferences

05-07-2018

Third Workshop on Real-time & Stream Analytics in Big Data

EURA NOVA research centre is proud and excited to organise the third workshop on Real-time and Stream Analytics in Big Data, co-located with the 2018 IEEE Conference on Big Data. The workshop will take place in December in Seattle, USA.

As the world becomes more connected, a flood of digital data is being generated, in high volume and at high velocity. For industries such as financial markets, telecommunications, smart cities, manufacturing, or healthcare, there is an increasing need to process and analyse these data streams in real time.

Over the past two years, we have seen another use of stream and complex event processing emerge: data management. New architecture patterns have been proposed to address data pipelines and data management within the enterprise.

After the success of the first two editions, the workshop is an excellent opportunity to engage in discussions with experts and researchers and to refine new opportunities and use cases required by the industry.

Authors are invited to contribute to the conference by submitting articles in the following areas (among others): scalable real-time decision algorithms, IoT analytics & stream mining, data pipelines & data management with streams, and stream ETL and real-time data warehousing.

 

Want to submit a paper? Check out the workshop website to find all the information you  will need. Your paper will be reviewed by a prestigious panel of international experts from both the academic and the industrial worlds.

Conferences Graph

14-06-2018

Graph BI & Analytics: Current State and Future Challenges

Our paper “Graph BI & Analytics: Current State and Future Challenges” has been accepted for publication at the 20th International Conference on Big Data Analytics and Knowledge Discovery, taking place in Regensburg, Germany.

The paper presents the state of the art of graph BI & analytics, with a focus on graph warehousing. We survey the topics of graph modelling, management, querying, and processing in graph warehouses. Then we conclude by discussing future research directions for solving complex graph problems, building native graph components and intelligent techniques to assist end-users in building and analysing the graph.

More importantly, the paper calls for the development of intelligent, efficient and industry-grade graph data warehousing systems to support the structure-driven management and analytics of data efficiently. While adopting a template that is similar to the traditional BI systems, the graph BI that is presented here extends current systems with graph analytics capabilities that deliver graph-derived insights.

[EDIT] The paper was presented in September at DaWaK 2018; you can now find the full version below. If you wish to go deeper into the subject, don’t hesitate to contact our research department at research@euranova.eu.

 

Abstract. In an increasingly competitive market, making well-informed decisions requires the analysis of a wide range of heterogeneous, large and complex data. This paper focuses on the emerging field of graph warehousing. Graphs are widespread structures that yield a great expressive power. They are used for modeling highly complex and interconnected domains, and for efficiently solving emerging big data applications. This paper presents the current status and open challenges of graph BI and analytics, and motivates the need for new warehousing frameworks aware of the topological nature of graphs. We survey the topics of graph modeling, management, processing and analysis in graph warehouses. Then we conclude by discussing future research directions and positioning them within a unified architecture of a graph BI & analytics framework.

 

Amine Ghrab, Oscar Romero, Salim Jouili, Sabri Skhiri, Graph BI & Analytics: Current State and Future Challenges. DaWaK 2018, 3-18

Download file (.pdf)

Conferences

19-02-2018

The Next Activities of our R&D Centre in Marseille

The French branch of EURA NOVA will take part in two great tech events in the following days and weeks.

 

On the 22nd of February, data scientist Thomas Peel will give a talk titled “Machine Learning à l’ère du RGPD” (Machine learning in the era of the GDPR) on the opening day of the Colloquium intelligence artificielle, machine learning, data science, to be held at the grand amphitheatre of the Saint-Charles campus in Marseille. Other great speakers from INRIA, Google, Provence Innovation, and Criteo will be featured. The event is free but registration is mandatory.

 

Practical information:

What? Colloquium intelligence artificielle, machine learning, data science

When? Thursday 22nd of February

Where? Grand amphithéâtre, campus Saint-Charles – 3, place Victor Hugo – case 39 – 13331 Marseille Cedex 03

Registration: https://framaforms.org/conferences-ia-data-science-machine-learning-i2mlis-1518019875

 

On the 12th of March, the French branch of EURA NOVA is organising the Marseille Community Event, supported by the Neo4j GraphTour. Two speakers are already announced: R&D project manager Cécile Péreaira will present a text-mining use case with Neo4j in biology, and data scientist Antoine Bonnefoy will sum up the Parisian Neo4j conference from technology and business viewpoints. After the talks, all attendees will be offered a casual dinner to continue the discussion.

 

Practical information:

What? Marseille Community Event – Neo4j GraphTour

When? Monday the 12th of March, from 6:30 PM to 8:30 PM

Where? Le Wagon, 167 Rue Paradis,  Marseille

Registration: https://www.eventbrite.fr/e/billets-neo4j-graphtour-marseille-community-event-42714338737?utm_campaign=new_event_email&utm_medium=email&utm_source=eb_email&utm_term=viewmyevent_button