Table of contents
This article collects three essays written during the [Cloud and Edge Computing](https://courses.helsinki.fi/en/csm13103/124911008) class.
Definition of edge and fog computing:
Cloud computing is very useful for many tasks, but it isn't the solution for everything. For some tasks the cloud is simply too far from the user, and in recent years new paradigms have been born. Their main idea is to deploy something like cloud computing at the edge of the Internet, that is, closer to the end user. In this article of 2016 the authors define, clarify, and compare three different edge paradigms, and then focus on their security aspects.
The first paradigm is Fog computing. It is an extension of cloud computing closer to the users, generating a three-tier architecture (users, fog nodes, data center). It is often related to IoT.
The second is Mobile Edge Computing, which represents the execution of some cloud services at the edge of the Internet, for example at the edge of 5G networks.
The last one is Mobile Cloud Computing, which is the complete delegation of some tasks from mobile devices to a small cloud infrastructure near them.
Security is especially important in this new field, because edge nodes can be targeted or subverted at any moment, including physically.
Edge and fog computing move processing away from the central nodes at the core of the Internet (the data centers) to the other logical extreme, the edge of the Internet, first of all in order to obtain low latency. In this article the authors advocate Edge-centric Computing as a new paradigm that brings applications, data, and services away from centralized nodes to the edge, closer to the data source. It is based on a decentralized model in which heterogeneous cloud resources talk to each other and are controlled by a variety of entities. In practice it consists of deploying small data centers around cities in order to handle, with low latency, the heavy computation needed by mobile devices such as smartphones and sensors.
In my opinion, this definition of edge and fog computing is reasonable: it is important to detach some critical computation from the usual data centers and deploy it at edge locations. It is also reasonable to adopt a decentralized architecture to gain all the benefits of peer-to-peer protocols.
An example of Edge Computing: EdgeCourier
This 2017 article deals with the following scenario: the owner of a document and their collaborators modify it from their mobile devices, and it must be synchronized in real time. In this scenario low-bandwidth synchronization is critical, because it is convenient for the user not to generate a lot of traffic (for both financial cost and battery life).
The authors find that the most common synchronization systems use a lot of bandwidth because, in general, they are not optimized for office documents. More precisely, documents are almost everywhere represented in the Office Open XML or OpenDocument standards, and both use a ZIP archive containing the parts of the document.
In this article they propose a new approach called e-sync to synchronize these documents (that is, these ZIP archives): an incremental approach in which only the changes made since the last synchronization are transmitted.
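The core observation can be sketched in a few lines: since both document formats are ZIP archives, an incremental sync only needs to transmit the archive members that changed. This is a minimal illustration of the idea, not the paper's actual e-sync implementation; the function name is my own.

```python
import zipfile

def changed_entries(old_doc, new_doc):
    """Return the names of ZIP members that differ between two versions
    of an OOXML/ODF document (compared by stored CRC-32).
    An incremental sync would only need to transmit these members."""
    with zipfile.ZipFile(old_doc) as old, zipfile.ZipFile(new_doc) as new:
        old_crc = {info.filename: info.CRC for info in old.infolist()}
        return [info.filename for info in new.infolist()
                if old_crc.get(info.filename) != info.CRC]
```

A small edit to a large document typically touches only one or two members (e.g. `word/document.xml`), so the savings over re-uploading the whole archive can be large.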
This solution alone would increase the load on the servers of the cloud companies, which therefore would not adopt it soon. I think it is pretty plausible to have 5 saves per minute per user (users like to save their documents continuously); at roughly 0.5 MB per save that is 300 saves and about 150 MB of traffic per hour, which is too much considering all the users who work with synchronized documents. For these reasons, the authors propose a three-tier architecture with an edge computing layer between the user's mobile device and the cloud storage service.
This concept is called EPS: computing nodes deployed at the edge of the network, with EPS instances running in a virtualized environment. These nodes talk to both the users' mobile devices and the cloud storage service, but in two different ways: on one side they act as sync-receivers for the mobile devices and push the latest documents to the cloud storage; on the other they pull the latest documents from the cloud storage and act as sync-senders towards the mobile devices.
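The two roles of an EPS instance can be sketched as follows. This is a hypothetical simplification (class and method names are mine, and the "delta" is modeled as a plain dict of changed parts); the point is that the cloud side sees ordinary full-document uploads and downloads, while the device side exchanges only deltas.

```python
class EpsInstance:
    """Sketch of an EPS edge node: receives incremental syncs from
    nearby mobile devices and exchanges full documents with an
    unmodified cloud storage service."""

    def __init__(self, cloud):
        self.cloud = cloud   # unmodified cloud storage API (upload/download)
        self.cache = {}      # latest full copy of each document, by id

    def on_device_sync(self, doc_id, changed_parts):
        # Sync-receiver role: apply the device's delta to the cached copy...
        doc = self.cache.setdefault(doc_id, {})
        doc.update(changed_parts)
        # ...then push the reassembled full document to the cloud as-is.
        self.cloud.upload(doc_id, doc)

    def on_cloud_update(self, doc_id):
        # Sync-sender role: pull the latest full copy from the cloud,
        # compute the delta against the cache, and forward only that.
        latest = self.cloud.download(doc_id)
        cached = self.cache.get(doc_id, {})
        delta = {k: v for k, v in latest.items() if cached.get(k) != v}
        self.cache[doc_id] = latest
        return delta
```

This mirrors the key design point of the paper: the cloud storage service does not need any modification, since all the delta logic lives at the edge.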
In my opinion, there is no more time to talk about edge computing in a general and vague way: it is time to start deploying real solutions to real problems in order to make this technology real, and maybe give new ideas to the developer community. I think the next step is to study in depth the protocols for moving around, in a dynamic environment, the VMs that serve the mobile devices.
In conclusion, this article is very interesting because:
- it starts to study a specific application of edge computing
- it introduces the good idea of e-sync
- with this approach the cloud storage services do not need to change, their load is lightened, and only new apps that use the new protocol with the new edge nodes are required
- it can effectively reduce document synchronization bandwidth with negligible overheads
- it uses in a very smart way concepts (that we have seen in this course) like Cloudlets and the Xen virtualization environment.
This paper has not yet received the popularity it deserves.
Rethinking Networking for “Five Computers”
This paper deals with the problem of technically modifying a pre-existing protocol (TCP) so that today's few, big server operators can collaborate and share information about the status of the network, and then adapt their behavior using this shared information. This approach is called Phy.
The authors think it is plausible for a centralized service to coordinate senders to avoid congestion and packet loss, for two main reasons:
- First, they give the example of Netflix, which in 2015 generated 37% of Internet traffic with only a few thousand servers.
- Second, they find that dynamically changing the parameters of TCP Cubic according to the level of congestion significantly improves throughput and queueing delay.
So senders could share information about network conditions and set their TCP Cubic parameters accordingly. They can share this information in different ways; the goal is just to ensure that sufficiently up-to-date information on the state of the network is available to the individual hosts.
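To make "setting TCP Cubic parameters" concrete, here is the standard CUBIC window function, with a toy illustration of how a shared congestion signal could scale the aggressiveness factor C. The tuning rule is my own invented example, not the one from the paper; the window formula follows RFC 8312 (beta = 0.7).

```python
def cubic_window(t, w_max, c, beta=0.7):
    """CUBIC congestion window t seconds after a loss event:
    W(t) = C * (t - K)^3 + W_max,
    where K = cbrt(W_max * (1 - beta) / C) is the time needed to
    grow back to W_max (the window right after the loss is beta * W_max)."""
    k = (w_max * (1 - beta) / c) ** (1 / 3)
    return c * (t - k) ** 3 + w_max

def tuned_c(base_c, congestion_level):
    """Hypothetical tuning rule: the higher the congestion level
    reported by peers, the smaller C, i.e. the gentler the probing."""
    return base_c / (1 + congestion_level)
```

With the default C = 0.4, a sender that learns from its peers that the path is congested would probe for bandwidth more cautiously (smaller C, flatter cubic curve), while on an idle path it could keep the default or increase it.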
They tested this approach in two configurations:
- when half or fewer of the senders use this approach, throughput and delay are still improved compared to the default case;
- in combination with a machine-learning-based approach (Remy), it obtains very good results, better than Phy and Remy alone.
Finally, this sharing of information would provide a diversity of viewpoints on the network that can enable effective problem diagnosis.
ICON: Intelligent Container Overlays
This paper introduces the concept of ICONs: self-managing entities that provide applications or services using a totally different approach from state-of-the-art container or VM orchestration systems. The latter are based on centralized controllers and are mostly limited to closed environments; ICONs, instead, are intended to work in an open scenario and benefit from decentralized control.
ICONs can be any kind of virtualized entity and they form an overlay organized as a logical tree that grows organically as containers migrate or deploy replicas of themselves.
So ICONs can choose actions by solving optimization problems using their local information about the network, but they also collect information and propagate it up the tree.
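The upward propagation can be illustrated with a minimal tree aggregation. This is my own sketch (class name and metrics are invented for illustration): each node only knows its local measurements, but a recursive roll-up gives every parent a coarse summary of its whole subtree.

```python
class IconNode:
    """Sketch of an ICON in the logical tree: each node keeps only
    local metrics, and summaries are aggregated up toward the root."""

    def __init__(self, name, latency_ms, children=()):
        self.name = name
        self.latency_ms = latency_ms   # local observation only
        self.children = list(children)

    def aggregate(self):
        """Propagate information up the tree: return this subtree's
        node count and worst observed latency."""
        count, worst = 1, self.latency_ms
        for child in self.children:
            child_count, child_worst = child.aggregate()
            count += child_count
            worst = max(worst, child_worst)
        return count, worst
```

A parent can then make placement or migration decisions (e.g. spawn a replica near a high-latency subtree) without any centralized controller seeing the full picture.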
One of the main objectives of ICONs is to help the application owner. The authors propose that the owner cannot control every single ICON exactly, but can steer the ICONs by setting parameters, defining targets, or providing some code.
The owner can assign a certain budget to the application they want to deploy, and the ICONs operate as well as they can within that budget. So there is a tradeoff between user experience and budget, and the owner can regulate it.
There are three possible ways to discover the closest ICON:
- an IPv6 anycast address
- using DNS, similarly to CDNs
According to the preliminary evaluation presented in the paper, looking at the number of messages and the adaptation time, ICONs appear to be better than the centralized approach.
Feature image from Thomas Jensen on Unsplash
 Rodrigo Roman, Javier Lopez, Masahiro Mambo, “Mobile edge computing, Fog et al.: A survey and analysis of security threats and challenges”, Future Generation Computer Systems, Volume 78, Part 2, 2018, Pages 680–698.