Last month, our R&D engineer Anas Albassit and director Sabri Skhiri travelled to Germany to attend and present at DEBS 2019, one of the most specialised conferences in Distributed Event-Based Systems. DEBS has a long history: from active databases to streaming engines and distributed publish-subscribe systems, it has always been a pioneer of distributed and high-performance systems. In this article, Anas and Sabri share what they learned there and what struck them as particularly useful.
This edition focused on streaming languages, scheduling, elasticity, distributed event processing, platforms, and middleware. Our R&D director Sabri Skhiri says: “For someone working in distributed computing and data management, DEBS is one of the major conferences, together with SIGMOD and VLDB. Even though it is quite small (80 participants vs 1000 for IEEE Big Data), this is a niche conference of experts from a small yet amazingly talented community of researchers. The keynotes were just great, with a good balance between pure research and industry. This conference tackling distributed computing and streaming is heaven for data scientists and architects like us!”
Tyler Akidau is the technical lead for the Data Processing Languages & Systems group at Google. He argues that, even though stream processing has gone from niche to mainstream, this is just the beginning. For him, the need for active exploration of new ideas is all the more pressing. Sabri reacts: “Stream processing has been around for 30 years. We have Spark, Flink, Dataflow, KStream, Microsoft’s Trill. But is that all we can do? Is there nothing left to do? Tyler Akidau brilliantly showed that stream processing as a field of research is alive and well.”
The talk mainly raised open or partially answered questions in the streaming world.
Tyler Akidau concluded by pointing out that even though streaming systems are more capable and robust than ever, they often remain difficult to use, difficult to maintain, and difficult to understand.
[EDIT] Thank you Tyler for reaching out and for sharing your slides with us! They are available at the following link. If you would like to discuss more insights from the talk, do not hesitate to contact our researchers at firstname.lastname@example.org.
Hannaneh Najdataei, Researcher and PhD Student at the Chalmers University of Technology in Sweden, presented her framework STRETCH.
Anas explains: “The performance of a streaming engine depends on the throughput and latency of stateful analysis. To achieve the best performance, we need to process a large amount of data (i.e. to be scalable) while handling fluctuations in data rate (i.e. to be elastic). Distributed processing requires the ability to parallelise the processing elastically. Optimally, we should reduce the number of parallel operators when the workload decreases and add operators when more resources are needed. For stateful operators, elasticity reconfigurations require redistributing the state according to the new cluster configuration (i.e. fewer or more operators). In this case, we need to find a tradeoff between a share-nothing and a share-all state architecture.”
Sabri adds: “The paper proposes STRETCH, a virtual share-nothing parallelism concept that does not require state transfer. The idea is that all workers read the same sequence of input tuples through an intra-node streaming framework. What is surprising in this paper is the parallelism model: all workers get the same sequence of tuples to guarantee the deterministic execution of the stream, whereas in streaming you usually distribute the tuples per key. Still, they obtained impressive results, matching the throughput and latency figures of the best state-of-the-art solutions while also achieving fast elastic reconfigurations.”
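To give a feel for the idea, here is a minimal sketch of our own (not STRETCH’s actual implementation): every worker consumes the full input sequence, but each one keeps state only for the keys it owns, so a reconfiguration only changes key ownership and never ships state between workers.

```python
class Worker:
    def __init__(self, worker_id, num_workers):
        self.worker_id = worker_id
        self.num_workers = num_workers
        self.state = {}  # keyed state: only the keys this worker owns

    def owns(self, key):
        # Simple modulo ownership; a stand-in for the real assignment.
        return hash(key) % self.num_workers == self.worker_id

    def process(self, event):
        key, value = event
        if self.owns(key):  # ignore tuples owned by other workers
            self.state[key] = self.state.get(key, 0) + value

# Every worker reads the SAME input sequence (deterministic execution).
stream = [("a", 1), ("b", 2), ("a", 3), ("c", 5)]
workers = [Worker(i, 3) for i in range(3)]
for event in stream:
    for w in workers:
        w.process(event)

# Scaling in or out only changes `num_workers`, i.e. key ownership;
# since all workers see the full input, no state has to be shipped.
print([w.state for w in workers])
```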
Nikos Giatrakos is a PhD researcher from the Technical University of Crete. He presented his work on uncertainty-aware event analytics. Sabri reacts: “Getting high performance by sampling the input stream and sacrificing a bit of result precision is the new trend in research. The idea is to parse only some of the events in order to handle a bigger load, while still controlling the level of uncertainty in the result. I see two great applications: (1) getting approximate results when needed, and (2) proactive detection before events happen.”
While the idea of filtering by controlling the probability of the error is not new, the paper brought several novel points.
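As a rough illustration of the general principle behind such approaches (our own toy example, not the paper’s method): inspect each event with probability p and rescale the aggregate, trading a quantifiable error for throughput.

```python
import random

def sampled_count(events, p=0.1, predicate=lambda e: True, seed=42):
    """Estimate how many events match `predicate` while inspecting
    only a fraction p of the stream (unbiased, Horvitz-Thompson style)."""
    rng = random.Random(seed)
    hits = sum(1 for e in events if rng.random() < p and predicate(e))
    return hits / p

events = range(100_000)
estimate = sampled_count(events, p=0.05, predicate=lambda e: e % 7 == 0)
print(f"estimated matches: {estimate:.0f} (true value: {100_000 // 7})")
```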
On the fourth day of the conference, our R&D engineer Anas presented his paper, which proposes a formal specification for CEP languages.
Processing event streams is an increasingly important area for modern businesses aiming to detect and efficiently react to critical situations in near real-time. Due to CEP languages’ limitations and imprecise semantics, describing interesting situations remains challenging. In this paper, Anas presents a formal specification for processing complex events. The paper provides an algebra that consists of a set of operators for constructing complex events (patterns), temporally restricting the construction process and choosing among several selection and consumption policies.
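To make this concrete, here is a toy sequence pattern in the spirit of such operators (a sketch of our own, not the algebra from the paper): detect “A followed by B” within a time window, with simple selection and consumption policies.

```python
def match_seq(events, first, second, within):
    """Detect the complex event 'first followed by second within `within`
    time units'. Events are (type, timestamp) pairs. Toy policies:
    keep the latest `first` (selection), consume both on match."""
    matches, pending = [], None
    for etype, ts in events:
        if pending is not None and etype == second and ts - pending <= within:
            matches.append((pending, ts))
            pending = None          # consumption policy: consume on match
        elif etype == first:
            pending = ts            # selection policy: keep the latest A
    return matches

events = [("A", 1), ("B", 2), ("A", 5), ("C", 6), ("B", 20)]
print(match_seq(events, "A", "B", within=4))  # -> [(1, 2)]
```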
The second day of the conference was dedicated to tutorials from experts in the field. Anas gives insights into his favourite training, Correctness & Consistency of Event-Based Systems. He explains: “The speaker was Opher Etzion, one of the pioneers in the domain of event processing. The tutorial lasted about 4 hours. What is interesting is that the speaker demonstrated with examples that building an event-based system is not trivial. What is more, a lot of existing systems are incorrect and give inconsistent results due to problems in their semantics. To ensure correctness, you at least have to understand the sources of latency in your system and ensure fairness between all the agents, in addition to defining a set of policies telling the system when, how, where and what events you are looking for.”
Last month, four EURA NOVA engineers travelled to Barcelona to attend the Dataworks Summit. The conference is organised by Hortonworks, which has since merged with Cloudera, and focuses on how to apply open-source big data technology to accelerate digital transformation initiatives. They came back with a lot to say about the hot topics in AI, machine learning, architecture, the cloud, and the use cases! In this article, they share what they learned there and what struck them as particularly useful.
This year, one of the most important trends at the conference was data management and data architecture. Our R&D director Sabri Skhiri says: “There was a real focus on taking data lakes to their next stage and on making them actionable for AI and machine learning. The notion of data hubs was often mentioned, notably during the keynote speeches by Cloudera, IBM, and Pure Storage. However, most platform vendors are not yet able to provide a fully-fledged ecosystem that allows the exploration, governance, and industrialisation of big data”.
This brings us to the second motto of the conference: AI industrialisation is a must. Our data engineer Khalil Amdouni explains: “The conference has been migrating towards AI topics. In the past, the conference used to focus mostly on data ingestion and data processing. It has been moving towards data science. Everyone is talking about AI and machine learning and how to put data science models into production. It’s looking into how to move from data exploration to industrialisation; we heard a lot about Cloudera’s Data Science Workbench etc.”
The third trend of the conference was the separation between data processing tools and AI frameworks. Khalil explains: “Spark, Cloudera, and Kubernetes are now all providing production environments (data science management platforms such as the Cloudera Data Science Workbench, the Databricks Runtime ML, Kubeflow…) to integrate with machine learning frameworks such as TensorFlow or Python.” Sabri adds: “This is interesting, but we should first speak about “productisation”, data science model lifecycles, and continuous integration and delivery. There are still a lot of shortcomings, like the fact that you need to centralise all your data in one partition before starting your favourite AI framework”.
Another hot topic of the conference was data governance and compliance with regulations. Our R&D director goes on to say: “Everybody is speaking about the importance of being GDPR compliant and proposing tools like Atlas, Egeria, or IBM InfoSphere, but no one says how to actually comply with the GDPR during model deployment or how to deal with access policy management.”
Stream, Stream, Stream: Different Streaming Methods with Spark and Kafka
Itai Yaffe presented the journey made by Nielsen’s Marketing Cloud division to provide its customers with real-time analytics tools to profile their target audiences. To achieve its goal, NMC needed to continuously transform its data infrastructure to ingest billions of events per day in a scalable and yet cost-efficient manner.
Sabri says: “The first version of NMC’s architecture included CSV files and standalone Java applications, with an OLAP database to expose the results. To reach their goal, NMC’s teams had to scale the process up to handle 10 times as much data”.
Their first step was to change the architecture: they moved to Kafka to ingest data, they leveraged Spark to stream and to aggregate data, and they used HDFS to store data.
Sabri explains: “The issue here was that they had to manage the statefulness of the Spark applications on HDFS by themselves. In addition, the system was error-prone in case of failure. They tried again and looked into Spark Structured Streaming, then tried to combine Spark Streaming with batch ETLs and finally decided to use Kafka to imitate streaming over their data lake. This evolution made the situation really interesting from a business and architectural point of view. Their business goal is to support decision making with machine learning to deliver reports on campaigns. Over the years, they adapted their architecture to go further and reach that objective”.
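For illustration, here is a minimal sketch of the Kafka-to-Spark-to-HDFS stage described above, using Spark Structured Streaming (the topic name, schema, and paths are invented):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StructType, StringType, LongType

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()
schema = StructType().add("campaign_id", StringType()).add("ts", LongType())

# 1. Ingest events from Kafka.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

parsed = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*")
          .withColumn("event_time", (col("ts") / 1000).cast("timestamp")))

# 2. Aggregate per campaign in 1-minute windows; Spark manages the
#    state, with a watermark bounding how late events may arrive.
counts = (parsed.withWatermark("event_time", "10 minutes")
          .groupBy(window(col("event_time"), "1 minute"), col("campaign_id"))
          .count())

# 3. Persist the aggregates to the data lake on HDFS.
(counts.writeStream.outputMode("append").format("parquet")
 .option("path", "hdfs:///lake/aggregates")
 .option("checkpointLocation", "hdfs:///checkpoints/aggregates")
 .start())
```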
Our architect Cyrille Duverne adds: “Their story showed how much effort is required to build a long-term architecture. Tools are not enough; you first need the use cases that lead to an architectural vision. Only then can you choose the tools that will support the vision. To build this architecture, you need time and people with the right skills”.
To know more about NMC’s journey, you can find the slides of the presentation here.
Chris Wallace is a data scientist at Cloudera Fast Forward Labs. He presented how his team leveraged federated learning to predict maintenance problems in a setting where a manufacturer’s customers are unwilling to share the details of how their components failed, yet still want the manufacturer to provide them with a strategy to maintain the faulty parts.
Our architect Cyrille Duverne explains: “In this case, federated learning is a kind of distributed deep learning where you train the model on decentralised data. The main idea is that a network of nodes shares models rather than training data with the server. Each node has the untrained model that they will train using the data they have. Each node then sends a copy of its trained model back to the central server that will take the average and send the new model to the different nodes. The process is repeated until the final version of the model is reached.”
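A minimal sketch of one such round of federated averaging, with plain NumPy weight vectors standing in for a full neural network (our simplification, not Cloudera’s code):

```python
import numpy as np

def local_update(global_weights, data, labels, lr=0.1, epochs=5):
    """A node trains the shared model on its own data only
    (toy linear model fitted by gradient descent)."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_weights, nodes):
    """The server averages the models sent back by the nodes;
    raw training data never leaves a node."""
    local_models = [local_update(global_weights, X, y) for X, y in nodes]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
nodes = []
for _ in range(3):  # three customers, each holding private data
    X = rng.normal(size=(50, 2))
    nodes.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):  # repeat until the final model is reached
    w = federated_round(w, nodes)
print(w)  # approaches [2.0, -1.0] without pooling any data
```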
Our data scientist Malian De Ron explains: “I find federated learning very interesting. As data scientists, we can work directly on updating models, but we don’t have access to all the training data. Federated learning can be useful for use cases where the customers want to keep their data anonymous. For example, we work for a financial company that works with a bank. Neither of them is willing to share their data. By using federated learning, the training data could remain in its original location, which could satisfy our customer’s privacy concerns.”
To know more about federated learning, you can find the slides of the presentation here.
Data governance with Egeria: The industry’s first open metadata standard
John Mertic is the director of program management for ODPi, the Linux Foundation’s Open Data Platform initiative. He talked about their new open metadata standard Egeria, introduced in September. John Mertic explained how the standard supports the free flow of standardised metadata between different technologies and vendor platforms, enabling organisations to locate, manage, and use their data resources more effectively.
Sabri says: “Companies have 40 years of evolution embedded in their IT systems, resulting in high complexity of data lineage and data silos. In the complex new world of big data and real time, security models have to track data throughout the organisation. This is why data governance and metadata management are hot topics in conferences. Everybody is talking about it and proposing tools such as Egeria, IBM InfoSphere, or Atlas. I talked with IBM InfoSphere people and I had an overview of the Egeria tool. It can be used to federate the IBM InfoSphere Information Governance Catalog, Apache Atlas, and even other Egeria cohorts. The IBM Governance Catalog can pull information directly from Egeria and integrate the metadata, the lineage, and even tags from Atlas”.
To know more about Egeria, please find the slides of the presentation here.
When working with clients as they make their journey to the new digital world, we noticed recurrent problems in the areas of data access, usage, and governance. In many conferences, we hear stories of companies facing these challenges and making a lot of ad hoc choices but lacking a long-term architectural vision. To crack the challenges, our R&D director Sabri Skhiri designed the Data Architecture Vision (DAV), which later led to digazu.
The Dataworks conference highlighted the need to take data lakes to their next stage. The digazu platform, with its integrated and managed data lake, meets that need. It is a true data hub that integrates real-time and batch dataflows, collects data from multiple sources, stores it, and distributes it to applications and users across the whole organisation.
Another need mentioned at the conference was that of providing companies with production environments to deploy models. Leveraging ever-increasing amounts of data to provide new services or solve problems requires increasing resources in terms of expertise, time and money. digazu offers a scalable way to keep data pipelines open for business in real time or batches without an army of data experts, lines of code, or complex training.
A third need highlighted at the conference is for companies to achieve good data governance. There are already excellent governance tools such as Atlas, Egeria, or IBM InfoSphere to support the free flow of standardised metadata. digazu opens the door to automated regulatory compliance by providing ready-to-use connectors to data management and governance tools.
To learn more about digazu, visit digazu.com
At the beginning of the month, our R&D director Sabri Skhiri and our R&D engineer Syrine Ferjaoui travelled to Seattle to attend IEEE Big Data. The conference is one of the most influential in this domain, gathering more than 1100 attendees, 5 keynotes, 9 tutorials, and 8 daily tracks in parallel. Back in Belgium, our R&D director gives you his opinion on the conference itself and the important elements from the keynotes, the tutorials, the workshops, and the interesting papers.
Keynote 1: Decentralized Machine Learning – Google AI
The IEEE Big Data conference started with the inspiring keynote of Blaise Agüera y Arcas, a distinguished researcher at Google AI. Our director details: “The straightforward thesis of the talk is that we can, and we must, use the mobile device for local deep neural network computing. Blaise Agüera explained that since the launch of TensorFlow, Google Brain has built specialised hardware servers to run deep neural network computing jobs efficiently. Nowadays, we find on the market specialised chips that are smaller than a 1-cent coin and cost less than a cappuccino. Using them, you can run deep neural net computing jobs very efficiently on mobile, at low frequency, low energy, and even continuously. For example, the Google camera embeds deep neural nets and does not need to send data to the server side for face or situation detection. But Dr Blaise is going further: he works on reusing existing techniques from distributed neural networks, sharing the learned gradients through a parameter server and distributing them to all devices. This is what we call federated learning, and it has impacted many research areas, such as edge computing. The idea of edge computing is to execute light tasks on the edge of the network in order to offload the server/cloud. But here, this changes the game, since the nature of the job is not light anymore. In addition, federated learning does not try to offload the server, but changes the role of the server into a coordinator between edge devices. Secondly, it has impacted neural net compression. The question is then: do we still need to compress networks when we can either distribute the neural net on the server side or have specialised chips on the device side?”
Keynote 2: Big Data for Speech and Language Processing – Microsoft Research
The second keynote speaker, Xuedong Huang, is a Microsoft Technical Fellow in Microsoft Cloud and AI. He presented the latest advances in speech recognition and text-to-speech (TTS). The key papers behind this technology can be found here and on the research group page. Our director explains: “The first part of the keynote was about Microsoft’s live captioning, which will soon be integrated natively in PowerPoint. That is just impressive: everything the speaker says is captured by the tool. I personally tested the Translator Android application and it works just fine! The second part of the keynote focused on TTS. The speaker showed a set of very interesting examples of how voice can be modelled. For instance, if the system learns a model from hours of discussions, it can render my voice in Chinese or Arabic, or it can learn from a group of people in order to get a better accent and expression”.
This year, IEEE Big Data organised 9 tutorials. Our R&D director explains: “This is probably what I like the most at an academic conference. A research group presents a complete state-of-the-art review in their domain and usually positions their own work in the story. My favourite was Progress in Zeroth Order Optimization and Its Applications to Adversarial Robustness in Deep Learning. It was one of the coolest research topics I have seen so far. They discussed how you can fool a deep neural network in order to get a wrong classification. The idea is great: finding the minimal noise you can add to a picture in order to increase the probability of a wrong classification. In this setting, you don’t know anything about the classifier, but you can submit images and you will get a label. Indeed, that looks like a black-box optimisation setting. That is precisely why they use zeroth-order optimisation. The research topic is so cool: you can fool the classifier into recognising a piano in an image picturing a bagel! Can you imagine the impact, in the era of the electronic passport, where image recognition starts to be used in the signature process? What if I can find how to fool an algorithm to be classified as someone else with just a few grey pixels on my picture?”
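The core trick can be sketched in a few lines (our own toy version, not the tutorial’s algorithms): estimate the gradient of the black-box score by finite differences over random directions, then take small signed steps.

```python
import numpy as np

def zo_gradient(f, x, mu=0.01, n_samples=20, rng=None):
    """Zeroth-order gradient estimate of f at x: probe random
    directions and difference the outputs; no backprop needed."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        grad += (f(x + mu * u) - f(x)) / mu * u
    return grad / n_samples

def attack(f, image, steps=100, eps=0.01):
    """Nudge the image to increase the black-box score f
    (e.g. the probability of a wrong label), step by step."""
    x = image.copy()
    for _ in range(steps):
        x += eps * np.sign(zo_gradient(f, x))  # FGSM-style step
        x = np.clip(x, 0.0, 1.0)               # stay a valid image
    return x

# Stand-in black box for the demo; in an attack, f would be the
# classifier's probability of the target (wrong) class.
f = lambda x: -np.sum((x - 0.8) ** 2)
adv = attack(f, np.zeros((4, 4)))
```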
EURA NOVA Research Centre organised the third workshop on Real-time and Stream analytics in Big Data, co-located with the 2018 IEEE conference on Big Data. Our Research Director Sabri Skhiri talked about data management and stream and real-time analytics. Thank you to our keynote speaker Fabian Hueske and to all the attendees and speakers! They had a great time, with captivating talks and a lot of interesting questions and comments. The summary of the event is available on our website. The slides of the opening session and the slides of the second keynote are available here.
In its early years, IEEE Big Data was mainly focused on big data infrastructure. In the following years, the conference became data science oriented, with a significant increase in the number and the complexity of data science use cases. When we asked how he felt about the event, Sabri explained: “I have been attending this conference since its first edition. The most important shift I have seen is really about the content. This year, the infrastructure papers have almost disappeared. On the other hand, the vast majority of the publications are on data science. We can really see that it is becoming a conference for ML practitioners. The side effect is that the topics discussed have become more complex: machine learning notions are assumed to be known, and deep neural networks are becoming the norm. Going further, the authors are also good at using distributed frameworks, especially Spark. For them, the infrastructure is not a problem anymore; it is part of the daily job”.
A personal selection of interesting papers:
In July, our R&D engineer Katherine Krasnoschok was in Melbourne, Australia to attend the ACL conference. She presented her poster on topic modelling. Her paper, co-written with Salim Jouili, indicates that involving more named entities positively influences the overall quality of topics.
News-related content has been extensively studied in both topic modeling research and named entity recognition. However, the expressive power of named entities and their potential for improving the quality of discovered topics has not received much attention. In this paper, we use named entities as domain-specific terms for news-centric content and present a new weighting model for Latent Dirichlet Allocation. Our experimental results indicate that involving more named entities in topic descriptors positively influences the overall quality of topics, improving their interpretability, specificity and diversity.
Katsiaryna Krasnashchok, Salim Jouili, Improving Topic Quality by Promoting Named Entities in Topic Modeling, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Vol. 2. 2018.
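As a rough illustration of the general idea (a simplification of ours, not the paper’s exact weighting model), one can up-weight named-entity terms in the document-term matrix before fitting LDA, so that entities are promoted in the topic descriptors:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["Brussels hosts the European Commission summit",
        "The summit in Brussels discussed trade policy",
        "New trade rules were proposed by the Commission"]
named_entities = {"brussels", "european", "commission"}  # e.g. from an NER step

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs).toarray().astype(float)

boost = 2.0  # illustrative factor; the paper derives weights differently
for term, idx in vec.vocabulary_.items():
    if term in named_entities:
        X[:, idx] *= boost  # named-entity terms count more

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
for topic in lda.components_:
    top = topic.argsort()[::-1][:3]
    print([vec.get_feature_names_out()[i] for i in top])
```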
A few weeks ago, Sabri Skhiri and Florian Demesmaeker were in London to attend the Spark+AI Summit. They came back with a lot to say about the new features of Spark and the presented use cases! In this article, they will give you their opinion about Databricks’ main announcement, the takeaways from their favourite talks and training, and what they thought of the new name of the conference.
A new name
This year, the summit expanded its scope and was renamed “Spark + AI Summit”. The goal of Databricks, announced by its co-founder Ali Ghodsi, is to unify data and AI.
Florian Demesmaeker, our R&D engineer, explains: “In some of the keynote talks, the speakers talked about use cases where the job of the data engineer is strongly reduced. The data scientists can easily experiment with data, travelling back and forth in time. This means more focus on AI, rather than on the data engineering part that makes all data accessible to the data scientists”.
In line with this change of name, Databricks announced the release of a complete data science lifecycle on the cloud.
Sabri Skhiri, our R&D Director, explains: “It is interesting to see that the change in the event name is actually very visible in the change of Databricks’ strategy. Their tools are now completely dedicated to stream ETL, and there is a huge focus on integrated data management”.
Databricks’ new features include Databricks Delta, which creates data pipelines and provides data views and exploration features. Secondly, the Databricks Runtime ML is a ready-to-use environment providing a set of pre-loaded ML frameworks where the data scientist can play with data. Finally, the MLflow tool simplifies ML model development at enterprise scale.
Our R&D Director adds: “Together, these features provide a complete and unified approach to the machine learning lifecycle and pipeline automation. This looks like a very competitive SaaS offer for integrated data management, available on AWS and Azure. However, metadata management and security are still the missing pieces”.
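As a taste of what MLflow’s tracking API covers, the sketch below logs the parameters, metrics, and model artifact of a run (a minimal example of ours; the dataset and model are placeholders):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators).fit(X_tr, y_tr)
    mlflow.log_param("n_estimators", n_estimators)           # reproducibility
    mlflow.log_metric("accuracy", model.score(X_te, y_te))   # run comparison
    mlflow.sklearn.log_model(model, "model")                 # versioned artifact
```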
The training day
The first day of the conference was dedicated to training workshops, which included a mix of instruction and hands-on exercises to help attendees improve their Apache Spark skills.
Florian gives insights into his favourite training, Tuning and Best Practices. He explains: “The aim of the training was to make programmers aware of how Spark works internally, in order to be able to write optimised applications. They presented a few situations, each one showing one relatively slow process, and then a step-by-step procedure to debug it and find the points that could be improved. In summary, tips and tricks that can be adapted to different situations”.
The sessions at the conference covered data engineering and data science content, along with best practices for productionising AI. The talks were divided into roughly two categories: Spark programming and deployment, and applications on top of Spark (AI applications).
Florian Demesmaeker explains: “I attended 28 talks. The keynotes from Databricks were quite interesting, they presented Delta and MLflow. I also enjoyed the talks about tools to optimise the internals of Spark, these provided good technical details. Other talks were about use cases on top of Spark, it was interesting to see what challenges other companies face and how they address them”.
Sabri Skhiri adds: “The talk Learning to Rank Datasets for Search was very inspiring. Oscar Castañeda-Villagrán, a data scientist working at Xoom (a PayPal service), talked about learning to rank R datasets. The idea is that we can extract metadata when the data pipeline arrives in the lake. Going further, you can not only extract metadata but also calculate a kind of judgment relevance score that is used for bootstrapping the learning-to-rank process. In this way, a user can search and retrieve the relevant R datasets in the lake. A very good idea for metadata-driven exploration”.
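A stripped-down sketch of that bootstrapping idea (ours, not Xoom’s pipeline): train a model on metadata features and bootstrapped relevance scores, then use it to rank candidate datasets at search time. All feature names and values here are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical metadata extracted when a dataset lands in the lake:
# [row count, column count, query-term overlap with the schema].
features = np.array([[1e6, 12, 0.9],
                     [5e3,  4, 0.2],
                     [2e5, 30, 0.7],
                     [8e2,  2, 0.1]])
# Bootstrapped judgment relevance scores computed at ingestion time.
relevance = np.array([3.0, 0.5, 2.2, 0.1])

ranker = GradientBoostingRegressor().fit(features, relevance)

# At search time, score candidate datasets and return the best first.
candidates = np.array([[4e5, 8, 0.8], [1e3, 3, 0.3]])
order = np.argsort(-ranker.predict(candidates))
print(order)  # dataset indices, most relevant first
```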
Early September 2018, 8 EURA NOVA engineers travelled to Berlin to attend the Flink Forward Conference, dedicated to Apache Flink users and stream processing communities.
They came back with a lot to say about the hot topics in stream processing and the presented use cases! In this article, they will give you their opinion about data Artisans’ main announcement, the takeaways from their favourite talks, and what they thought makes Flink Forward different from other conferences.
First keynote announcement:
During the keynote speech, data Artisans announced that they now bring ACID transactions directly on streaming data with data Artisans Streaming Ledger.
Charles Bonneau, our software architect, says: “This feature allows ACID transactions between multiple operators’ event-processing operations and internal states. This means that streaming applications can now update multiple states in one transaction. For example, an application that transfers money from one bank account to another can finally be implemented using Flink with strong consistency guarantees. Both bank accounts will have their balance updated at the same time as if there was a master data-management state”.
For Sabri Skhiri, our R&D director, this opens the doors to a brand new range of applications, especially in data-driven real-time services but also in streaming data management. He explains: “They are pushing forward the concept of streaming. Now, you could imagine a master data-management state that can be updated by operational streaming applications in real time. This will allow even more complex and advanced use cases of stream processing!”.
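To see what the guarantee means in practice, here is a toy, single-process illustration of the semantics (emphatically not the Streaming Ledger API): the transfer updates both account states atomically or not at all.

```python
import threading

accounts = {"alice": 100, "bob": 0}   # two pieces of operator state
lock = threading.Lock()

def transfer(src, dst, amount):
    """Atomically update two states, as a transactional streaming
    event handler would: commit both changes or neither."""
    with lock:                        # poor man's transaction
        if accounts[src] < amount:
            return False              # abort: nothing has changed
        accounts[src] -= amount
        accounts[dst] += amount       # both updates commit together
        return True

transfer("alice", "bob", 40)
print(accounts)  # {'alice': 60, 'bob': 40}, never a partial update
```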
In 2 days, each Euranovian attended about 18 talks and use case presentations, with speakers from tech giants such as IBM, Netflix, Alibaba, and Uber as well as speakers from smaller companies.
Charles explains: “The conclusions are reassuring: most of them face the same issues that we see at our clients’, and our solutions are all valuable. They include a stream-first data architecture, a stream-first data pipeline product, and Flink development skills. Even though a number of companies are at the very edge of the technology and their issues do not yet require continuous flows of considerable amounts of events, we are ready”.
For our R&D Director Sabri Skhiri, the keynote speech from Lightbend was one of the most interesting ones. He explains: “Viktor Klang, Lightbend deputy CTO, talked about the convergence between microservices and stream processing. At EURA NOVA, we have been advocating for this convergence for more than a year in our architecture practice. The idea is simple: asynchronous microservices can be designed as stream processing stages. This is fantastic because it makes modern stateful stream processing frameworks the perfect target for implementing reactive microservices. With stateful deployment, exactly once semantics, high availability and ACID access to states, microservices can become stateful streaming apps.”
Vision-oriented Flink Conference:
Our colleagues came back with sparkles in their eyes. When we asked them how they felt about the event, Sabri Skhiri explained:
“Very often, this type of conference tends to be business-oriented. They are focused on how to make the framework easy to use and available to as many people as possible. By contrast, this year’s Flink Forward conference was all about innovation and vision. data Artisans shared their vision of what the Flink framework will be within 3 to 5 years and talked about the role stream processing and big data play within this vision. In fact, almost all the talks were very technical. They were testimonies from big names in the industry, such as Alibaba, Netflix, and ING, about problems encountered in the field and how they were solved, often in out-of-the-box ways. The Flink-Alibaba partnership is a sharing one: Alibaba is way ahead with its technology; it keeps its lead for a year, then shares its work and open-sources the code. data Artisans have a great long-term vision of stream processing. I can see a lot of very interesting architecture discussions in the coming months!”
Stream Processing Technology:
Where most frameworks cannot process large streams of live data and provide results in real time, Flink offers a single runtime for both streaming and batch processing while remaining highly scalable.
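The point can be made concrete with today’s PyFlink Table API, where the same program runs in either mode by flipping one setting (a minimal sketch of ours; PyFlink itself postdates this conference):

```python
from pyflink.table import EnvironmentSettings, TableEnvironment
from pyflink.table.expressions import col

# Same program, one runtime: switch to in_batch_mode() for batch.
settings = EnvironmentSettings.in_streaming_mode()
t_env = TableEnvironment.create(settings)

table = t_env.from_elements([(1, "a"), (2, "b"), (3, "a")],
                            ["amount", "key"])
result = (table.group_by(col("key"))
          .select(col("key"), col("amount").sum.alias("total")))
result.execute().print()  # per-key totals, streamed or batched
```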
Cyrille Duverne, our Lead Data Architect, confirms: “Flink is definitely a real-time processor! We are speaking about true real time, not just micro-batches. Plus, the introduction of ACID transaction management in the new version of data Artisans’ Flink distribution creates a good marketing edge”.
Sabri Skhiri and our R&D engineer Florian Demesmaeker were at the Spark Summit this week. Stay tuned for part 2 with their feedback!
Our paper “Data Mining and Machine Learning Techniques supporting Time-based Separation Concept Deployment”, co-written with Eurocontrol and WaPT, has been accepted by the 37th Digital Avionics Systems Conference (DASC) in London, U.K.
The paper presents two methods to allow air traffic controllers to deliver separation minima accurately and safely, on the basis of time intervals instead of distances.
Importantly, in strong headwind conditions an aircraft’s groundspeed during approach decreases, meaning that keeping the distance-based separation method results in lower landing rates. At a time of intensified air traffic, this situation leads to considerable delays at airports, with significant costs to operators and travellers.
With the new methods presented in the paper, capacity can increase by up to 14% in strong wind conditions, and by up to 8% in moderate wind conditions.
[EDIT] The paper was presented in September at DASC 2018; you can find the full version below. If you wish to go deeper into the subject, do not hesitate to contact our research department at email@example.com.
The Time-Based Separation (TBS) concept consists in the definition of separation minima for aircraft on the final approach to a runway based on time intervals instead of distances, as applied in Distance-Based Separation (DBS) operations.
TBS allows for dynamic reductions of the separation distance in strong headwind conditions, so as to preserve time spacing across all wind conditions. However, applying TBS entails the use of a support tool providing separation distance indicators that depend on the applicable time separation minimum and on the aircraft speed profile, which itself depends on the headwind conditions.
This paper details two methodologies for computing those TBS indicators so that Air Traffic Controllers can accurately and safely deliver the TBS minima using a separation delivery support tool. The first approach is based on “analytical” data mining and modelling, whereas the second one is based on a Machine Learning (M/L) procedure.
As part of the deployment of the TBS concept at Vienna airport (LOWW), both approaches are developed and tested using a database covering one year of traffic and the corresponding local meteorological data.
The operation of TBS with indicators computed using either approach leads to a substantial reduction of time separations compared to a DBS strategy. However, given the large uncertainties related to both leader and follower aircraft speed profiles, the buffers could be designed only for the most frequent pairs. With the M/L approach (resp. the “analytical” approach), the capacity benefits related to the application of TBS with a separation support tool are of the order of 8% (resp. 2%) in moderate wind conditions, and up to 14% (resp. 10%) in strong wind conditions.
De Visscher, I.; Stempfel, G.; Rooseleer, F. & Treve, V.; Data mining and Machine Learning techniques supporting Time-Based Separation concept deployment, in 37th Digital Avionics Systems Conference (DASC), pp 594-603, London, UK, September 23-27, 2018
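In highly simplified form (our own sketch, not the paper’s data mining or M/L methodology), such a support tool turns the time separation minimum into a distance indicator by integrating the follower’s predicted groundspeed over that time interval:

```python
def tbs_distance_indicator(time_separation_s, groundspeed_profile, dt=1.0):
    """Distance flown by the follower during the required time
    separation, integrating a wind-dependent groundspeed profile (m/s)."""
    return sum(groundspeed_profile(t) * dt
               for t in range(int(time_separation_s)))

# Toy profile: approach groundspeed decaying under a strong headwind.
profile = lambda t: 75.0 - 10.0 * (t / 90.0)   # m/s over a 90 s horizon

d = tbs_distance_indicator(90, profile)
print(f"separation indicator: {d / 1852:.2f} NM")  # metres -> nautical miles
```

Under headwind the profile drops, so the computed distance shrinks while the time spacing is preserved, which is exactly how TBS recovers the landing rates that DBS loses.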
EURA NOVA Research Centre is proud and excited to organise the third workshop on Real-time and Stream analytics in Big Data, co-located with the 2018 IEEE conference on Big Data. The workshop will take place in December in Seattle, USA.
As the world becomes more connected, a flood of digital data is generated, in high volume and at high velocity. For industries such as financial markets, telecommunications, smart cities, manufacturing, or healthcare, there is an increasing need to process and analyse these data streams in real time.
Over the past two years, we have seen another usage of stream & complex event processing emerge: data management. New architecture patterns have been proposed to address data pipelines and data management within the enterprise.
After the success of the first two editions, this workshop is an excellent opportunity to engage in discussions with experts and researchers, and to refine new opportunities and use cases required by the industry.
Authors are invited to contribute to the conference by submitting articles in the following areas, among others: scalable real-time decision algorithms, IoT analytics & stream mining, data pipelines & data management with streams, and stream ETL & real-time data warehousing.
Want to submit a paper? Check out the workshop website to find all the information you will need. Your paper will be reviewed by a prestigious panel of international experts from both the academic and the industrial worlds.
Our paper “Graph BI & Analytics: Current State and Future Challenges” has been accepted for publication at the 20th International Conference on Big Data Analytics and Knowledge Discovery, taking place in Regensburg, Germany.
The paper presents the state of the art of graph BI & analytics, with a focus on graph warehousing. We survey the topics of graph modelling, management, querying, and processing in graph warehouses. We then conclude by discussing future research directions for solving complex graph problems, building native graph components, and designing intelligent techniques to assist end-users in building and analysing graphs.
More importantly, the paper calls for the development of intelligent, efficient and industry-grade graph data warehousing systems that efficiently support the structure-driven management and analytics of data. While adopting a template similar to traditional BI systems, the graph BI presented here extends current systems with graph analytics capabilities that deliver graph-derived insights.
[EDIT] The paper was presented in September at DaWaK 2018; you can now find the full version below. If you wish to go deeper into the subject, don’t hesitate to contact our research department at firstname.lastname@example.org.
Abstract. In an increasingly competitive market, making well-informed decisions requires the analysis of a wide range of heterogeneous, large and complex data. This paper focuses on the emerging field of graph warehousing. Graphs are widespread structures that yield a great expressive power. They are used for modeling highly complex and interconnected domains, and efficiently solving emerging big data applications. This paper presents the current status and open challenges of graph BI and analytics, and motivates the need for new warehousing frameworks aware of the topological nature of graphs. We survey the topics of graph modeling, management, processing and analysis in graph warehouses. Then we conclude by discussing future research directions and positioning them within a unified architecture of a graph BI & analytics framework.
Amine Ghrab, Oscar Romero, Salim Jouili, Sabri Skhiri, Graph BI & Analytics: Current State and Future Challenges. DaWaK 2018, 3-18
The French branch of EURA NOVA will take part in two great tech events in the following days and weeks.
On the 22nd of February, data scientist Thomas Peel will give a talk titled “Machine Learning à l’ère du RGPD” (Machine learning and the General Data Protection Regulation) on the opening day of the Colloquium intelligence artificielle, machine learning, data science to be held at the grand amphitheatre of the Saint-Charles campus in Marseille. Other great speakers from INRIA, Google, Provence Innovation, and Criteo will be featured. The event is free but registration is mandatory.
What? Colloquium intelligence artificielle, machine learning, data science
When? Thursday 22nd of February
Where? Grand amphithéâtre, campus Saint-Charles – 3, place Victor Hugo – case 39 – 13331 MARSEILLE Cedex 03
On the 12th of March, the French branch of EURA NOVA is organising the Marseille Community Event, supported by the Neo4j GraphTour. Two speakers are already announced: R&D project manager Cécile Péreaira will present a text-mining use case with Neo4j in biology, and data scientist Antoine Bonnefoy will sum up the Parisian Neo4j conference from technology and business viewpoints. After the talks, all attendees will be offered a casual dinner to continue the discussion.
What? Marseille Community Event – Neo4j GraphTour
When? Monday the 12th of March, from 6:30 PM to 8:30 PM
Where? Le Wagon, 167 Rue Paradis, Marseille