
IEEE CloudCom 2011 Trends and hot topics – Part 2

In this post I will continue the discussion of the IEEE CloudCom conference that I started in the previous post.

Standardisation efforts

The standardisation of the cloud is still a hot topic, mainly because it should avoid vendor lock-in and provide a solid foundation for cloud interoperability. However, this challenge can be divided into two sub-challenges:

  • The number of SDOs, each trying to impose its own standard: when the IEEE chairman presented the SDOs involved in cloud standards, we saw a slide with three columns of SDOs, each one pushing its own standard. In addition, the panel discussion highlighted that there is almost no communication between them, but rather strong political competition.
  • The level of the standardisation: mainly focused on the infrastructure, in terms of storage, computing and network resources.

I would like to discuss this second point. This focus felt really out of scope with respect to market expectations. Let me present my personal experience of cloud operator expectations:
1. The core services migration issue: cloud bursting on Amazon is great, and so is the opposite direction, i.e. taking a service running on Amazon and migrating it either to your own cloud or to another public cloud. However, when you develop a service on Amazon, you are tempted to use things that make your life easier: RDS for your relational data, SQS for messaging, ElastiCache for sharing state between elastic instances, SimpleDB for your key-value store, CloudWatch for auto-scaling, etc. At the end of the day, even if we manage to reach, in 2-3 years, a consensus on computing, storage and network resource standards, you will still absolutely not be able to migrate your service. The standardisation of these core services is not trivial, but it is feasible; unfortunately, it is currently out of the standardisation debate.
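To make the lock-in argument concrete, here is a minimal sketch (all names are hypothetical, not any real SDK) of the abstraction a core-services standard would enable: the service codes against a neutral interface, and migrating clouds means swapping the backend, not rewriting the service.

```python
from abc import ABC, abstractmethod
from collections import deque
from typing import Optional

class MessageQueue(ABC):
    """Hypothetical provider-neutral queue contract: the service depends
    on this interface, never on a vendor SDK directly."""
    @abstractmethod
    def send(self, message: str) -> None: ...
    @abstractmethod
    def receive(self) -> Optional[str]: ...

class InMemoryQueue(MessageQueue):
    """Stand-in backend; an SQS- or RabbitMQ-backed class would
    implement the same two methods."""
    def __init__(self):
        self._messages = deque()
    def send(self, message: str) -> None:
        self._messages.append(message)
    def receive(self) -> Optional[str]:
        return self._messages.popleft() if self._messages else None

def process_orders(queue: MessageQueue) -> Optional[str]:
    # Service logic only sees the interface, so swapping the cloud
    # provider means swapping the MessageQueue implementation.
    queue.send("order-42")
    return queue.receive()

print(process_orders(InMemoryQueue()))  # → order-42
```

Without an agreed contract of this kind, every one of the services listed above (RDS, SQS, ElastiCache, ...) is a separate migration barrier.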

2. Life-cycle management and governance process issues: when you operate a cloud distributed over several data centres, hosting a significant number of VMs, and you have to provision VMs for users, departments or divisions, your primary attention will be on the governance processes you put in place for guaranteeing:

  • Compliance with internal policies: a department X should receive a VM from the virtual DC Y only if the approval process Z has been accepted
  • Compliance with standard processes such as ITIL: how do I represent and wire my governance processes for switching my service from pre-production to production? What are the approval and provisioning processes that must be executed to be compliant?

In this context, the provisioning processes, along with all processes linked to infrastructure or service life-cycle management, must be standardised, or at least the set of APIs exposed by all the cloud layers to support those processes must be.
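The "department X / virtual DC Y / approval process Z" rule above can be sketched as a provisioning policy check. This is a toy illustration of the kind of check such an API would standardise; the policy table and names are invented for the example.

```python
# Hypothetical policy table: (department, virtual DC) -> required approval process.
PROVISIONING_POLICIES = {
    ("finance", "vdc-eu-1"): "security-review",
    ("research", "vdc-us-2"): "capacity-review",
}

def can_provision(department: str, virtual_dc: str, completed_approvals: set) -> bool:
    """A VM may be provisioned only if the approval process required by
    the policy (if any) for this department/DC pair has been completed."""
    required = PROVISIONING_POLICIES.get((department, virtual_dc))
    return required is None or required in completed_approvals

print(can_provision("finance", "vdc-eu-1", {"security-review"}))  # True
print(can_provision("finance", "vdc-eu-1", set()))                # False
```

A standardised life-cycle API would let such checks be wired uniformly across cloud layers, instead of being re-implemented per provider.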

3. Hybrid cloud – yes, but what about the brokering API? In the hybrid cloud scenario, part of your service runs on your private cloud and another part runs on a public cloud. For some applications the location of the storage matters; therefore, when you ask your service to store data, you would like to delegate this data routing to an external broker that can route the data to where it must be stored according to specific policies. The way you define those policies, and the behaviour of this kind of broker, should be standardised.
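The broker behaviour described above can be sketched in a few lines: a first-match policy list that routes each record to a storage location. The classification tags and target names are assumptions for the sake of the example.

```python
# Hypothetical routing policies: a record tagged "confidential" must stay
# on the private cloud; everything else may go to the public cloud.
POLICIES = [
    (lambda record: record.get("classification") == "confidential", "private-cloud"),
    (lambda record: True, "public-cloud"),  # default rule, always matches last
]

def route(record: dict) -> str:
    """Return the storage target for a record: the first matching policy wins."""
    for predicate, target in POLICIES:
        if predicate(record):
            return target

print(route({"id": 1, "classification": "confidential"}))  # private-cloud
print(route({"id": 2}))                                    # public-cloud
```

It is exactly this policy format and first-match semantics that would need to be standardised for brokers from different vendors to be interchangeable.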

I am not saying that the current standardisation efforts are useless, not at all: they are the key foundation of the stack. However, they should at least be extended to richer use cases.

The grid & cloud for eScience

As one could expect from the conference, there were plenty of talks about grids, and especially grid infrastructures such as the one EGI [1] proposes. I have to say that it is quite unusual to see cloud people and grid people sitting at the same table; actually, this is both true and not true. OCCI [2], the standardisation effort for cloud management, is led by the OGF, the Open Grid Forum; on the other hand, these two communities have different backgrounds and needs. For instance, when the cloud community speaks about platforms, they mean application servers and service management for deploying elastic services, while the grid community means gLite, UNICORE, dCache or Globus [3][4], grid middleware for distributing jobs across the grid. This segmentation can easily be seen from the papers: the grid ones lean towards distributed algorithms for scientific computation, while the cloud ones are more oriented towards services, infrastructure and platform architectures, virtualisation optimisation, and distributed data processing.

Anyway, the interesting thing here is that there is real involvement of the grid community in adopting the cloud infrastructure as a foundation for managing "virtual grid nodes": grid nodes can then be VM-based and can scale in and out dynamically. As a result, the cloud and grid communities work together on defining a better cloud infrastructure foundation, even if they have different needs. In addition, the scientific grid community is not new: it has existed since the late '90s, it is well organised, it has amazing infrastructures connected world-wide, and it has already dealt with standardisation issues. This is a really good thing.


[1] EGI, the European Grid Infrastructure
[2] OCCI, the Open Cloud Computing Interface web site
[3] Globus, the Globus web site
[4] Ian Foster, "Globus Toolkit Version 4: Software for Service-Oriented Systems", IFIP International Conference on Network and Parallel Computing, Springer-Verlag LNCS 3779, pp. 2-13, 2006.

