Distributed file system for cloud

A distributed file system for cloud is a file system that allows many clients to have access to the same data/file, providing important operations (create, delete, modify, read, write). Each file may be partitioned into several parts called chunks. Each chunk may be stored on different remote machines, facilitating the parallel execution of applications. Typically, data is stored in files in a hierarchical tree, where the nodes represent directories. There are several ways to share files in a distributed architecture: each solution must be suitable for a certain type of application, depending on how complex the application is. Meanwhile, the security of the system must be ensured. Confidentiality, availability and integrity are the main keys for a secure system.

Users can share resources from any computer or device, anywhere, through the Internet thanks to cloud computing, which is typically characterized by scalable and elastic resources, such as physical servers, applications and other services that are virtualized and allocated dynamically. Synchronization is required to make sure that all devices are up to date.

Distributed file systems also enable many large, medium-sized and small enterprises to store and access their remote data exactly as they do local data, facilitating the use of variable resources.

Overview

History

Today, there are many implementations of distributed file systems. The first file servers were developed by researchers in the 1970s. Sun Microsystems' Network File System became available in the 1980s. Before that, people who wanted to share files used the sneakernet method, physically transporting files on storage media from place to place. Once computer networks started to proliferate, it became obvious that the existing file systems had many limitations and were unsuitable for multi-user environments. Users initially used FTP to share files.[1] FTP first ran on the PDP-10 at the end of 1973. Even with FTP, files needed to be copied from the source computer onto a server and then from the server onto the destination computer. Users were required to know the physical addresses of all computers involved with the file sharing.[2]

Supporting techniques

Cloud computing relies on important techniques to maximize the performance of the whole system. Modern data centers provide a huge environment, with data center networking (DCN) connecting a large number of computers characterized by different storage capacities. The MapReduce framework has demonstrated its performance for data-intensive computing applications in parallel and distributed systems. Moreover, virtualization techniques are employed to provide dynamic resource allocation and to allow multiple operating systems to coexist on the same physical server.
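
As an illustration of the programming model mentioned above, the following is a minimal, single-process Python sketch of the map and reduce phases of a word count; a real framework such as Hadoop MapReduce distributes these phases over many machines, and the function names here are purely illustrative.

```python
# Minimal, in-process sketch of the MapReduce model (word count).
# Real frameworks schedule map and reduce tasks across a cluster.
from collections import defaultdict

def map_phase(documents):
    """Map step: emit (word, 1) pairs for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce step: sum the counts emitted for each distinct word."""
    grouped = defaultdict(int)
    for word, count in pairs:
        grouped[word] += count
    return dict(grouped)

if __name__ == "__main__":
    docs = ["the cloud stores data", "the data is distributed"]
    print(reduce_phase(map_phase(docs)))  # e.g. {'the': 2, 'data': 2, ...}
```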

Applications

Because cloud computing provides large-scale computing, thanks to its ability to supply the needed CPU and storage resources to the user with complete transparency, it is particularly suitable for different types of applications that require large-scale distributed processing. Such data-intensive computing needs a high-performance file system that can share data between virtual machines (VMs).[3]

The application of the cloud computing and cluster computing paradigms is becoming increasingly important in industrial data processing and in scientific applications such as astronomy or physics, which frequently demand the availability of a huge number of computers to carry out the required experiments. Cloud computing represents a new way of using a computing infrastructure: the needed resources are allocated dynamically, released once a task is finished, and paid for only as they are used. This kind of service is often provided in the context of a service-level agreement.[4]

Architectures

Most distributed file systems are built on the client-server architecture, but other, decentralized, solutions exist as well.

Client-server architecture

Network File System (NFS) uses a client-server architecture. It allows sharing files between a certain number of machines on a network as if they were located locally, providing a standardized view of the local file system. The NFS protocol allows heterogeneous client processes, possibly running on different operating systems and machines, to access files on a remote server, while ignoring the actual location of the files. Relying on a single server, however, means that the NFS protocol suffers from potentially low availability and poor scalability. Using multiple servers does not solve the problem, since each server works independently.[5] The model of NFS is a remote file service. This model is also called the remote access model, which is in contrast with the upload/download model:

  • Remote access model: provides transparency; the client has access to a file and can make requests against the remote file (while the file remains on the server).[6]
  • Upload/download model: the client can access the file only locally. The client has to download the file, make its modifications, and upload it again so that it can be used by other clients (see the sketch below).
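
To make the contrast concrete, here is a minimal sketch of the two models; the ToyFileServer class and its methods are hypothetical placeholders for illustration, not the NFS protocol or API.

```python
# Toy contrast between the remote access and upload/download models.
class ToyFileServer:
    """In-memory stand-in for a remote file server (illustrative only)."""
    def __init__(self):
        self.files = {"/doc.txt": b"hello world"}

    def write(self, path, offset, data):        # remote access: runs on the server
        buf = bytearray(self.files[path])
        buf[offset:offset + len(data)] = data
        self.files[path] = bytes(buf)

    def download(self, path):                   # used by upload/download model
        return self.files[path]

    def upload(self, path, content):
        self.files[path] = content

def remote_access_write(server, path, offset, data):
    """Remote access model: the request is sent to the server; the file stays there."""
    server.write(path, offset, data)

def upload_download_write(server, path, offset, data):
    """Upload/download model: fetch the whole file, modify locally, push it back."""
    content = bytearray(server.download(path))   # 1. download
    content[offset:offset + len(data)] = data    # 2. modify locally
    server.upload(path, bytes(content))          # 3. upload again

if __name__ == "__main__":
    s = ToyFileServer()
    remote_access_write(s, "/doc.txt", 0, b"HELLO")
    upload_download_write(s, "/doc.txt", 6, b"WORLD")
    print(s.files["/doc.txt"])   # b'HELLO WORLD'
```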

The file system offered by NFS is almost the same as the one offered by Unix systems. Files are hierarchically organized into a naming graph in which directories and files are represented by nodes.

Cluster-based architectures

This architecture is rather an improvement over client-server architectures, in a way that improves the execution of parallel applications. The technique used here is file striping: a file is split into multiple chunks, which are "striped" across several (many) storage servers. The goal is to allow access to different parts of a file in parallel. If the application does not benefit from this technique, then it can be more convenient simply to store different files on different servers. However, when it comes to organizing a distributed file system for large data centers, such as Amazon and Google, that offer services to web clients allowing multiple operations (reading, updating, deleting, ...) on a huge number of files distributed among a massive number of computers, cluster-based solutions become more interesting. Note that hosting a massive number of computers opens the door for more hardware failures, because more server machines mean more hardware and thus a higher probability of hardware failure.[7] Two of the most widely used DFSs are the Google File System (GFS) and the Hadoop Distributed File System (HDFS). In both systems, the file system is implemented by user-level processes running on top of a standard operating system (Linux, in the case of GFS).[8]
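
The striping technique itself can be sketched in a few lines; the chunk size and server names below are illustrative (GFS, for instance, uses 64 MB chunks), not the layout policy of any particular system.

```python
# Sketch of file striping: the file's bytes are cut into fixed-size chunks and
# assigned round-robin to several storage servers (all values illustrative).
CHUNK_SIZE = 4  # bytes, artificially small for the example

def stripe(data: bytes, servers: list[str]) -> dict[str, list[bytes]]:
    """Split `data` into fixed-size chunks and stripe them round-robin across servers."""
    placement = {s: [] for s in servers}
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for index, chunk in enumerate(chunks):
        placement[servers[index % len(servers)]].append(chunk)
    return placement

if __name__ == "__main__":
    print(stripe(b"abcdefghijklmnop", ["server-1", "server-2", "server-3"]))
```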

Design principles

Goals

Google File System (GFS) and Hadoop Distributed File System (HDFS) are specifically built for handling batch processing on very large data sets. For that, the following hypotheses must be taken into account:[9]

  • High availability: the cluster can contain thousands of file servers and some of them can be down at any time
  • A server belongs to a rack, a room, a data center, a country and a continent, in order to precisely identify its geographical location
  • The size of a file can vary from many gigabytes to many terabytes. The file system should be able to support a massive number of files
  • Need to support append operations and allow file contents to be visible even while a file is being written
  • Communication is reliable among working machines: TCP/IP is used with a remote procedure call RPC communication abstraction. TCP allows the client to know almost immediately that there is a problem and it can try to set up a new connection.[10]

Load balancing

Load balancing is essential for efficient operations in distributed environments. It means distributing the amount of work to do between different servers,[11] fairly, in order to get more work done in the same amount of time and serve clients faster. In this case, consider a large-scale distributed file system. The system contains N chunkservers in a cloud (N can be 1000, 10000, or more), where a certain number of files are stored. Each file is split into several parts or chunks of fixed size (for example 64 megabytes). The load of each chunkserver is proportional to the number of chunks hosted by the server.[12] In a load-balanced cloud, the resources can be well used while maximizing the performance of MapReduce-based applications.
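
A toy illustration of this load measure follows, assuming the load of a chunkserver is simply the number of chunks it hosts, compared with the uniform (ideal) share; the server names and counts are made up.

```python
# Sketch of the load measure described above: load = hosted chunks / ideal share.
def chunk_loads(placement: dict[str, int]) -> dict[str, float]:
    """Return each server's load relative to the ideal uniform load."""
    total_chunks = sum(placement.values())
    ideal = total_chunks / len(placement)
    return {server: count / ideal for server, count in placement.items()}

if __name__ == "__main__":
    # Three chunkservers hosting 10, 50 and 60 chunks: the first is under-loaded.
    print(chunk_loads({"cs-1": 10, "cs-2": 50, "cs-3": 60}))
```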

Load rebalancing

In a cloud computing environment, failure is the norm,[13][14] and chunkservers may be upgraded, replaced, and added to the system. Files can also be dynamically created, deleted, and appended. That leads to load imbalance in a distributed file system, meaning that the file chunks are not distributed equitably between the nodes.

Distributed file systems in clouds such as GFS and HDFS rely on central servers (the master for GFS and the NameNode for HDFS) to manage the metadata and the load balancing. The master rebalances replicas periodically: data must be moved from one DataNode/chunkserver to another if the free space on the first falls below a certain threshold.[15] However, this centralized approach can create a bottleneck at those servers, as they become unable to manage a large number of file accesses. Consequently, handling the load-imbalance problem through the central nodes complicates the situation further, as it increases their already heavy loads. The load-rebalancing problem is NP-hard.[16]

In order to get a large number of chunkservers to work in collaboration, and to solve the problem of load balancing in distributed file systems, several approaches have been proposed, such as reallocating file chunks so that the chunks are distributed across the system as uniformly as possible while reducing the movement cost as much as possible.[12]
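
As a hedged sketch only: one naive greedy strategy in this spirit moves chunks from the most loaded server to the least loaded one until the loads are nearly uniform. Real proposals (such as the one cited above) also weigh movement cost, replica placement and network topology, which this toy version ignores.

```python
# Naive greedy rebalancing sketch (illustrative, not a published algorithm).
def rebalance(placement: dict[str, list[str]], tolerance: int = 1) -> list[tuple[str, str, str]]:
    """Return a list of (chunk, source, destination) moves that evens out the load."""
    moves = []
    while True:
        loads = {server: len(chunks) for server, chunks in placement.items()}
        heaviest = max(loads, key=loads.get)
        lightest = min(loads, key=loads.get)
        if loads[heaviest] - loads[lightest] <= tolerance:
            break
        chunk = placement[heaviest].pop()        # move one chunk per step
        placement[lightest].append(chunk)
        moves.append((chunk, heaviest, lightest))
    return moves

if __name__ == "__main__":
    layout = {"cs-1": ["c1", "c2", "c3", "c4", "c5"], "cs-2": ["c6"], "cs-3": []}
    print(rebalance(layout))
    print(layout)   # loads now differ by at most one chunk
```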

Google file system

Description

Google, one of the biggest internet companies, has created its own distributed file system, named Google File System (GFS), to meet the rapidly growing demands of Google's data processing needs, and it is used for all cloud services. GFS is a scalable distributed file system for data-intensive applications.[21] It provides fault-tolerant, high-performance data storage to a large number of clients.

GFS uses MapReduce, which allows users to create programs and run them on multiple machines without thinking about parallelization and load-balancing issues. The GFS architecture is based on a single master, multiple chunkservers and multiple clients.[17]

The master server, running on a dedicated node, is responsible for coordinating storage resources and managing the files' metadata (the equivalent of, for example, inodes in classical file systems).[9] Each file is split into multiple chunks of 64 megabytes. Each chunk is stored on a chunkserver. A chunk is identified by a chunk handle, a globally unique 64-bit number assigned by the master when the chunk is first created.

As stated previously, the master maintains all of the files' metadata, including file names, directories, and the mapping of files to the list of chunks that contain each file's data. The metadata is kept in the master's main memory, along with the mapping of files to chunks. Updates to this data are logged to disk in an operation log, which is also replicated onto remote machines. When the log becomes too large, a checkpoint is taken, and the main-memory data is stored in a B-tree structure to facilitate mapping it back into main memory.[18]
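
The bookkeeping described above can be pictured with a toy, in-memory "master"; this is only an illustration of the file-to-chunk-handle and chunk-handle-to-location mappings, not the actual GFS implementation.

```python
# Toy master keeping the file-to-chunk mapping in memory (illustrative only).
import itertools
import random

class ToyMaster:
    def __init__(self):
        self.files = {}            # path -> ordered list of chunk handles
        self.locations = {}        # chunk handle -> chunkservers holding a replica
        self._handles = itertools.count(1)   # stands in for unique 64-bit handles

    def create_chunk(self, path, chunkservers, replication=3):
        handle = next(self._handles)
        self.files.setdefault(path, []).append(handle)
        self.locations[handle] = random.sample(chunkservers, replication)
        return handle

if __name__ == "__main__":
    master = ToyMaster()
    servers = [f"cs-{i}" for i in range(5)]
    master.create_chunk("/logs/2024.log", servers)
    print(master.files, master.locations)
```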

Fault tolerance

For fault tolerance, a chunk is replicated onto multiple (by default, three) chunkservers.[19] A chunk is available on at least one chunkserver. The advantage of this scheme is its simplicity. The master is responsible for allocating the chunkservers for each chunk, and it is contacted only for metadata information. For all other data, the client has to interact with the chunkservers.

Moreover, the master keeps track of where each chunk is located. However, it does not attempt to maintain the chunk locations precisely, but only occasionally contacts the chunkservers to see which chunks they have stored.[20] The master does not become a bottleneck, despite all the work it has to accomplish: when a client wants to access data, it communicates with the master to find out which chunkserver is holding that data, and afterwards the communication takes place directly between the client and the chunkserver concerned.

In GFS, most files are modified by appending new data rather than overwriting existing data. Once written, the files are usually only read, and often only sequentially rather than randomly, which makes this DFS most suitable for scenarios in which many large files are created once but read many times.[22][23]

File process

When a client wants to write to or update a file, the master assigns a replica for this operation; it will be the primary replica, since it is the first one to receive the modification from clients. The process of writing is decomposed into two steps:[9]

  • Sending: first, and by far most importantly, the client contacts the master to find out which chunkservers hold the data. The client is given a list of replicas identifying the primary chunkserver and the secondary ones. The client then contacts the nearest replica chunkserver and sends the data to it. This server sends the data to the next closest one, which forwards it to yet another replica, and so on. The data is thereby propagated but not yet written to a file; it sits in a cache.
  • Writing: when all the replicas have received the data, the client sends a write request to the primary chunkserver, identifying the data that was sent in the sending phase. The primary then assigns a sequence number to the write operations it has received, applies the writes to the file in serial-number order, and forwards the write requests in that order to the secondaries. Meanwhile, the master is kept out of the loop.

Consequently, we can differentiate two types of flow: the data flow and the control flow. The data flow is associated with the sending phase and the control flow with the writing phase. This guarantees that the primary chunkserver takes control of the write order. Note that when the master assigns the write operation to a replica, it increments the chunk version number and informs all of the replicas containing that chunk of the new version number. Chunk version numbers make it possible to detect whether a replica missed an update because its chunkserver was down.[24]
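
The two phases can be summarized in the following sketch; the classes and method names are hypothetical, and the real protocol additionally handles leases, chunk version numbers and failures.

```python
# Sketch of the two-phase GFS-style write (illustrative classes, not the real protocol).
class Replica:
    def __init__(self, name):
        self.name, self.cache, self.log = name, {}, []

    def push(self, write_id, data):        # "sending" phase: buffered, not yet applied
        self.cache[write_id] = data

    def apply(self, serial, write_id):     # "writing" phase: apply in serial order
        self.log.append((serial, self.cache.pop(write_id)))

def gfs_write(client_data, primary, secondaries, write_id):
    chain = [primary] + secondaries
    # 1. sending: data is pushed to every replica (in GFS it is pipelined
    #    from the nearest replica onwards); it only sits in their caches.
    for replica in chain:
        replica.push(write_id, client_data)
    # 2. writing: the primary picks the serial number and forwards the order.
    serial = len(primary.log)
    for replica in chain:
        replica.apply(serial, write_id)

if __name__ == "__main__":
    p, s1, s2 = Replica("primary"), Replica("sec-1"), Replica("sec-2")
    gfs_write(b"new record", p, [s1, s2], write_id="w-42")
    print(p.log, s1.log, s2.log)
```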

Some newer Google applications did not work well with the 64-megabyte chunk size. To deal with that, GFS started in 2004 to implement the Bigtable approach.[1]

Hadoop distributed file system

HDFS, developed by the Apache Software Foundation, is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes). Its architecture is similar to that of GFS, i.e. a master/slave architecture. HDFS is normally installed on a cluster of computers. The design concept of Hadoop draws on Google's work, namely the Google File System, Google MapReduce and Bigtable, which map respectively to the Hadoop Distributed File System (HDFS), Hadoop MapReduce, and Hadoop Base (HBase).[25]

An HDFS cluster consists of a single NameNode and several DataNode machines. The NameNode, a master server, manages and maintains the metadata of the storage DataNodes in its RAM. DataNodes manage the storage attached to the nodes that they run on. The NameNode and DataNodes are software programs designed to run on everyday-use machines, which typically run a GNU/Linux OS. HDFS can be run on any machine that supports Java, and such a machine can therefore run either the NameNode or the DataNode software.[26]

More explicitly, a file is split into one or more equal-size blocks, except for the last block, which may be smaller. Each block is stored on multiple DataNodes to guarantee high availability. By default, each block is replicated three times, a process called "block-level replication".[27]

The NameNode manages the file system namespace operations, such as opening, closing and renaming files and directories, and regulates file access. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients, managing block allocation or deletion, and replicating blocks.[28]

When a client wants to read or write data, it contacts the NameNode and the NameNode checks where the data should be read from or written to. After that, the client has the location of the DataNode and can send read or write requests to it.
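
A toy sketch of this interaction, with made-up class and method names rather than the actual Hadoop API, is shown below; note that file data never flows through the NameNode.

```python
# Toy NameNode/DataNode read path (illustrative only, not the Hadoop API).
class NameNode:
    def __init__(self):
        # path -> list of (block id, DataNodes holding a replica)
        self.block_map = {"/data/file.txt": [("blk_1", ["dn-1", "dn-2", "dn-3"])]}

    def get_block_locations(self, path):
        return self.block_map[path]

class DataNode:
    def __init__(self, name, blocks):
        self.name, self.blocks = name, blocks

    def read_block(self, block_id):
        return self.blocks[block_id]

if __name__ == "__main__":
    namenode = NameNode()
    datanodes = {f"dn-{i}": DataNode(f"dn-{i}", {"blk_1": b"payload"}) for i in (1, 2, 3)}
    # The client asks the NameNode where the blocks are, then reads them directly.
    for block_id, locations in namenode.get_block_locations("/data/file.txt"):
        print(datanodes[locations[0]].read_block(block_id))
```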

The HDFS is typically characterized by its compatibility with data rebalancing schemes. In general, managing the free space on a DataNode is very important. Data must be moved from one DataNode to another one if its free space is not adequate, and in the case of creating additional replicas, data should move to assure the balance of the system.[27]

Other examples

Distributed file systems can be classified into two categories. The first category of DFS is designed for internet services, such as GFS. The second category includes DFSs that support intensive applications, usually executed in parallel.[29] Some examples from the second category are Ceph FS, the Fraunhofer File System (FhGFS), the Lustre File System, the IBM General Parallel File System (GPFS) and the Parallel Virtual File System.

Ceph file system is a distributed file system that provides excellent performance and reliability.[30] It faces several challenges: dealing with huge files and directories, coordinating the activity of thousands of disks, providing parallel access to metadata on a massive scale, handling both scientific and general-purpose workloads, authenticating and encrypting at scale, and growing or shrinking dynamically due to frequent device decommissioning, device failures and cluster expansions.[31]

FhGFS is the high-performance parallel file system from the Fraunhofer Competence Centre for High Performance Computing. Its distributed metadata architecture has been designed to provide the scalability and flexibility needed to run the most widely used HPC applications.[32]

Lustre File System has been designed and implemented to deal with the issue of bottlenecks traditionally found in distributed systems. Lustre is characterized by its efficiency, scalability and redundancy.[33] GPFS was also designed with the goal of removing the bottlenecks.[34]

Communication

High performance in distributed file systems requires efficient communication between computing nodes and fast access to the storage systems. Operations such as open, close, read, write, send and receive need to be fast to ensure that performance. For each read or write request, the remote disk is accessed, which may take a long time due to network latency.[35]

The data communication (send/receive) operations transfer data from the application buffer to the kernel on the machine. TCP controls the process of sending data and is implemented in the kernel. However, in the event of network congestion or errors, TCP may not send the data directly. While transferring data from a buffer in the kernel to the application, the machine does not read the byte stream from the remote machine; in fact, TCP is responsible for buffering the data for the application.[36]

A high level of communication performance can be achieved by choosing, at the application level, the buffer size used for file reading and writing, or file sending and receiving. Explicitly, the buffer mechanism is built on a circular linked list.[37] It consists of a set of BufferNodes. Each BufferNode has a DataField, which holds the data, and a pointer called NextBufferNode, which points to the next BufferNode. To find the current position, two pointers are used, CurrentBufferNode and EndBufferNode, which represent the positions of the last write and the last read in the BufferNode ring. If the BufferNode has no free space, it sends a wait signal to the client, telling it to wait until space becomes available.[38]
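
A minimal sketch of such a circular buffer follows; the wait signal is reduced here to returning False when the ring is full, and the naming mirrors the description above rather than any specific implementation.

```python
# Circular linked list of BufferNodes with separate write and read pointers.
class BufferNode:
    def __init__(self):
        self.data = None
        self.next = None                            # NextBufferNode

class CircularBuffer:
    def __init__(self, size):
        nodes = [BufferNode() for _ in range(size)]
        for i, node in enumerate(nodes):
            node.next = nodes[(i + 1) % size]       # close the ring
        self.write_node = nodes[0]                  # CurrentBufferNode (last write)
        self.read_node = nodes[0]                   # EndBufferNode (last read)
        self.count, self.size = 0, size

    def put(self, data):
        if self.count == self.size:
            return False                            # full: the client must wait
        self.write_node.data = data
        self.write_node = self.write_node.next
        self.count += 1
        return True

    def get(self):
        if self.count == 0:
            return None
        data, self.read_node.data = self.read_node.data, None
        self.read_node = self.read_node.next
        self.count -= 1
        return data

if __name__ == "__main__":
    ring = CircularBuffer(2)
    print(ring.put(b"a"), ring.put(b"b"), ring.put(b"c"))  # True True False (full)
    print(ring.get(), ring.get())                          # b'a' b'b'
```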

Cloud-based synchronization of distributed file systems

More and more users have multiple devices with ad hoc connectivity, and these devices need to be synchronized. An important point, therefore, is to maintain user data by synchronizing replicated data sets between an arbitrary number of servers. This is useful for backups and also for offline operation: when the user's network conditions are poor, the user's device selectively replicates a part of the data, which can then be modified later and offline. Once network conditions improve, the device is synchronized.[39] Two approaches exist to tackle the distributed synchronization issue: user-controlled peer-to-peer synchronization and the cloud master-replica synchronization approach.[39]

  • User-controlled peer-to-peer: software such as rsync must be installed on all of the users' computers that contain their data. The files are synchronized peer-to-peer, with the users supplying the network addresses of all devices and the synchronization parameters, making this a manual process.
  • Cloud master-replica synchronization: widely used by cloud services; a master replica containing all the data to be synchronized is kept as a central copy in the cloud, and all updates and synchronization operations are pushed to this central copy, offering a high level of availability and reliability in case of failure.

Security keys

In cloud computing, the most important security concepts are confidentiality, integrity and availability ("CIA"). Confidentiality is indispensable in order to keep private data from being disclosed and to maintain privacy, while integrity ensures that data is not corrupted.[40]

Confidentiality

Confidentiality means that data and computation tasks are confidential: neither the cloud provider nor other clients can access the client's data. Much research has been done on confidentiality, because it is one of the crucial points that still presents challenges for cloud computing. A lack of trust toward cloud providers is a related issue.[41] The cloud infrastructure must ensure that customers' data will not be accessed by unauthorized parties.
The environment becomes insecure if the service provider:[42]

  • can locate consumer's data in the cloud
  • has the privilege to access and retrieve consumer's data
  • can understand the meaning of data (types of data, functionalities and interfaces of the application and format of the data).

If these three conditions are satisfied simultaneously, the situation becomes very dangerous.

The geographic location of data stores influences privacy and confidentiality. The location of clients should also be taken into account: clients in Europe may not want to use data centers located in the United States, because the confidentiality of their data cannot then be guaranteed. To address this problem, some cloud computing vendors have included the geographic location of the hosting as a parameter of the service-level agreement made with the customer,[43] allowing users to choose for themselves the locations of the servers that will host their data.

One approach that helps address the confidentiality issue is data encryption;[44] otherwise, there are serious risks of unauthorized use. In the same context, other solutions exist, such as encrypting only sensitive data[45] and supporting only certain operations, in order to simplify computation.[46] Furthermore, cryptographic techniques and tools such as FHE (fully homomorphic encryption) are also used to strengthen privacy preservation in the cloud.[47]

Availability

Availability is generally addressed by replication.[48][49][50][51] Meanwhile, consistency must be guaranteed. However, consistency and availability cannot both be achieved at the same time: either consistency is relaxed and the system remains available, or consistency is made a priority and the system is sometimes unavailable.[52] On the other hand, data must have an identity to be accessible. For instance, Skute[48] is a mechanism based on a key/value store that allows dynamic data allocation in an efficient way. Each server is identified by a label of the form "continent-country-datacenter-room-rack-server". The server can refer to multiple virtual nodes, each node holding a selection of data (or multiple partitions of multiple data sets). Each piece of data is identified by a key space generated by a one-way cryptographic hash function (e.g. MD5) and is located by the hash value of this key. The key space may be partitioned into multiple partitions, each partition referring to a piece of data. To perform replication, virtual nodes are replicated and referenced by other servers. To maximize data availability and durability, the replicas must be placed on different servers, and every server should be in a different region, because data availability increases with geographical diversity. The process of replication includes an evaluation of data availability, which must stay above a certain minimum; otherwise, the data is replicated to another chunk server. Each partition i has an availability value given by the following formula:

avail_i = \sum_{j=0}^{|s_i|} \sum_{k=j+1}^{|s_i|} conf_j \cdot conf_k \cdot diversity(s_j, s_k)

where s_i denotes the set of servers hosting the replicas of partition i, conf_j and conf_k are the confidence values of servers j and k (relying on technical factors such as hardware components and non-technical ones such as the economic and political situation of a country), and diversity(s_j, s_k) is the geographical distance between servers j and k.[53]
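
For a numeric illustration of this formula, the following sketch computes the availability of a partition from made-up confidence scores and a toy diversity function; all values are hypothetical.

```python
# Toy evaluation of avail_i: sum conf_j * conf_k * diversity over replica pairs.
from itertools import combinations

def diversity(loc_a: str, loc_b: str) -> float:
    """Toy geographic diversity: 1.0 for different sites, 0.1 for the same site."""
    return 1.0 if loc_a != loc_b else 0.1

def availability(replicas: list[tuple[float, str]]) -> float:
    """Each replica is (confidence, location); sum over every pair of replicas."""
    return sum(cj * ck * diversity(lj, lk)
               for (cj, lj), (ck, lk) in combinations(replicas, 2))

if __name__ == "__main__":
    # Three replicas: two in the same data center, one on another continent.
    print(availability([(0.9, "eu-dc1"), (0.8, "eu-dc1"), (0.95, "us-dc3")]))
```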

Replication is a good solution for ensuring data availability, but it costs too much in terms of storage space.[54] DiskReduce[54] is a modified version of HDFS, based on RAID technology (RAID-5 and RAID-6), that allows asynchronous encoding of replicated data: a background process looks for widely replicated data and deletes the extra copies after encoding. Another approach is to replace replication with erasure coding.[55] In addition, to ensure data availability there are many approaches that allow data recovery: the data is encoded, and once part of it is lost, it can be recovered from fragments constructed during the coding phase.[56] Other approaches that apply different mechanisms to guarantee availability include the Reed-Solomon code of Microsoft Azure and RaidNode for HDFS; Google is also still working on a new approach based on an erasure-coding mechanism.[57]

To date, no RAID implementation has been established for cloud storage.[55]

Integrity

Integrity in cloud computing implies both data integrity and computing integrity: data has to be stored correctly on cloud servers, and in case of failures or incorrect computation, problems have to be detected.

Data integrity is easy to achieve thanks to cryptography (typically through message authentication codes, or MACs, on data blocks).[58]
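
As a simple illustration of this idea, the sketch below tags data blocks with an HMAC-SHA256 (from the Python standard library) under a client-held key and verifies them later; key management and block indexing are omitted.

```python
# Block-level integrity with a MAC: the client keeps the key, the cloud stores
# (block, tag) pairs, and any tampering with a block is detected on verification.
import hashlib
import hmac
import os

KEY = os.urandom(32)   # held by the client, never given to the cloud provider

def tag(block: bytes) -> bytes:
    return hmac.new(KEY, block, hashlib.sha256).digest()

def verify(block: bytes, stored_tag: bytes) -> bool:
    return hmac.compare_digest(tag(block), stored_tag)

if __name__ == "__main__":
    block = b"chunk of user data"
    t = tag(block)
    print(verify(block, t))          # True: block is intact
    print(verify(block + b"!", t))   # False: tampering is detected
```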

Data integrity can be affected in different ways, either by malicious events or by administration errors (e.g. during backup and restore, data migration, or changing memberships in P2P systems).[59]

Several mechanisms exist for checking data integrity. For instance:

  • HAIL (High-Availability and Integrity Layer): a distributed cryptographic system that allows a set of servers to prove to a client that a stored file is intact and retrievable.[60]
  • PORs (proofs of retrievability for large files):[61] based on a symmetric cryptographic system, with a single verification key that must be stored in a file to improve its integrity. This method encrypts a file F and then generates a random string called a "sentinel" that is appended to the end of the encrypted file. The server cannot locate the sentinel, since it is impossible to differentiate from other blocks, so a small change to the file indicates whether it has been altered.
  • PDP (provable data possession) checking mechanisms: a class of efficient and practical methods that provide an efficient way to check data integrity on untrusted servers:
    • PDP:[62] before storing data on a server, the client must store some metadata locally. At a later time, and without downloading the data, the client can ask the server to verify that the data has not been falsified. This approach is used for static data.
    • Scalable PDP:[63] this approach is based on a symmetric key, which is more efficient than public-key encryption. It supports some dynamic operations (modification, deletion and append) but cannot be used for public verification.
    • Dynamic PDP:[64] this approach extends the PDP model to support several update operations such as append, insert, modify and delete, which makes it well suited for intensive computation.

Economic aspects

The cloud computing economy is growing rapidly: US government spending on cloud computing is expected to grow at a compound annual growth rate (CAGR) of about 40%, reaching roughly 7 billion dollars by 2015.[65]

More and more companies are utilizing cloud computing to manage massive amounts of data and to overcome a lack of storage capacity. Companies can use resources as a service to meet their computing needs without having to invest in infrastructure: they pay only for what they use (the pay-as-you-go model).[66]

Every application provider has to periodically pay the cost of each server where replicas of its data are stored. The cost of a server is generally determined by the quality of the hardware, the storage capacity, and its query-processing and communication overhead.[67]

Cloud computing makes it easy for enterprises to scale their services with client demand. The pay-as-you-go model has also eased the burden on startup companies that wish to benefit from compute-intensive business. Cloud computing also offers an enormous opportunity to many third-world countries that otherwise lack such computing resources, thereby enabling IT services. Cloud computing can lower IT barriers to innovation.[68]

Despite the wide utilization of cloud computing, efficient sharing of large volumes of data in an untrusted cloud is still a challenging research topic.

References

  1. Sun Microsystems, p. 1.
  2. Fabio Kon, p. 1
  3. Kobayashi et al. 2011, p. 1.
  4. Angabini et al. 2011, p. 1.
  5. Di Sano et al. 2012, p. 2.
  6. Andrew & Maarten 2006, p. 492.
  7. Andrew & Maarten 2006, p. 496
  8. Humbetov 2012, p. 2
  9. Krzyzanowski 2012, p. 2
  10. Pavel Bžoch, p. 7.
  11. Kai et al. 2013, p. 23.
  12. Hsiao et al. 2013, p. 2.
  13. Hsiao et al. 2013, p. 952.
  14. Ghemawat, Gobioff & Leung 2003, p. 1.
  15. Ghemawat, Gobioff & Leung 2003, p. 8.
  16. Hsiao et al. 2013, p. 953.
  17. Di Sano et al. 2012, pp. 1–2
  18. Krzyzanowski 2012, p. 4
  19. Di Sano et al. 2012, p. 2
  20. Andrew & Maarten 2006, p. 497
  21. Humbetov 2012, p. 3
  22. Humbetov 2012, p. 5
  23. Andrew & Maarten 2006, p. 498
  24. Krzyzanowski 2012, p. 5
  25. Fan-Hsun et al. 2012, p. 2
  26. Azzedin 2013, p. 2
  27. Adamov 2012, p. 2
  28. Yee & Thu Naing 2011, p. 122
  29. Soares et al. 2013, p. 158
  30. Weil et al. 2006, p. 307
  31. Maltzahn et al. 2010, p. 39
  32. Jacobi Lingemann, p. 10
  33. Schwan Philip 2003, p. 401
  34. Jones, Koniges & Yates 2000, p. 1
  35. Upadhyaya et al. 2008, p. 400.
  36. Upadhyaya et al. 2008, p. 403.
  37. Upadhyaya et al. 2008, p. 401.
  38. Upadhyaya et al. 2008, p. 402.
  39. Uppoor, Flouris & Bilas 2010, p. 1
  40. Zhifeng & Yang 2013, p. 854
  41. Zhifeng & Yang 2013, pp. 845–846
  42. Yau & An 2010, p. 353
  43. Vecchiola, Pandey & Buyya 2009, p. 14
  44. Yau & An 2010, p. 352
  45. Miranda & Siani 2009
  46. Naehrig & Lauter 2013.
  47. Zhifeng & Yang 2013, p. 854.
  48. Bonvin, Papaioannou & Aberer 2009, p. 206
  49. Cuong et al. 2012, p. 5
  50. A., A. & P. 2011, p. 3
  51. Qian, D. & T. 2011, p. 3
  52. Vogels 2009, p. 2
  53. Bonvin, Papaioannou & Aberer 2009, p. 208
  54. Carnegie et al. 2009, p. 1
  55. Wang et al. 2012, p. 1
  56. Abu-Libdeh, Princehouse & Weatherspoon 2010, p. 2
  57. Wang et al. 2012, p. 9
  58. Juels & Oprea 2013, p. 4
  59. Zhifeng & Yang 2013, p. 5
  60. Bowers, Juels & Oprea 2009
  61. Juels & S. Kaliski 2007, p. 2
  62. Ateniese et al. 2007
  63. Ateniese et al. 2008, pp. 5, 9
  64. Erway et al. 2009, p. 2
  65. Lori M. Kaufman 2009, p. 2
  66. Angabini et al. 2011, p. 1
  67. Bonvin, Papaioannou & Aberer 2009, p. 3
  68. Marston et al. 2011, p. 3.
