Verification Control Algorithm of Data Integrity Verification in Remote Data Sharing

  • Xu, Guangwei (School of Computer Science and Technology, Donghua University) ;
  • Li, Shan (School of Computer Science and Technology, Donghua University) ;
  • Lai, Miaolin (School of Computer Science and Technology, Donghua University) ;
  • Gan, Yanglan (School of Computer Science and Technology, Donghua University) ;
  • Feng, Xiangyang (School of Computer Science and Technology, Donghua University) ;
  • Huang, Qiubo (School of Computer Science and Technology, Donghua University) ;
  • Li, Li (College of Architecture and Urban Planning, Tongji University) ;
  • Li, Wei (School of Computer Science and Technology, Donghua University)
  • Received : 2020.05.08
  • Accepted : 2022.01.04
  • Published : 2022.02.28

Abstract

Cloud storage's elastic expansibility not only provides flexible services for data owners to store their data remotely, but also reduces the storage operation and management costs of data sharing. However, data outsourced to the storage space of a cloud service provider raises security concerns about data integrity, and data integrity verification has become an important technology for detecting the integrity of remote shared data. Allowing users without data access rights to verify the data integrity causes unnecessary overhead to the data owner and the cloud service provider; in particular, malicious users who constantly launch data integrity verification greatly waste service resources. Since the data owner is a consumer purchasing cloud services, he bears both the cost of data storage and that of data verification. This paper proposes a verification control algorithm for the data integrity verification of remotely outsourced data. It designs an attribute-based encryption verification control algorithm for multiple verifiers. Moreover, the data owner and the cloud service provider construct a common access structure together and generate a verification sentinel that checks the authority of verifiers according to the access structure. Finally, since the cloud service provider cannot know the data owner's part of the access structure or the sentinel generation operation, it can only authorize verifiers satisfying the access policy to verify the integrity of the corresponding outsourced data. Theoretical analysis and experimental results show that the proposed algorithm achieves fine-grained access control over multiple verifiers for the data integrity verification.

1. Introduction

Cloud computing is attractive as a cost-effective and high-performance model. However, the reliability of cloud infrastructure has aroused data owners' concern because cloud infrastructure often encounters security issues [1-2]. Cloud computing deployed by a cloud service provider (CSP) is a technical black box for users, which makes it convenient to use and manage. Unfortunately, the nature of the black box leads to a lack of transparency in CSP's behavior and of regulatory mechanisms for CSP, causing users to distrust the cloud service provider [3]. To solve these problems, many solutions have been proposed to verify the integrity of remotely stored data [4-17]. In a data sharing environment, when users download data resources over the Internet, they are very concerned about whether the data resources have been tampered with or damaged. Therefore, data integrity needs to be verified before downloading to ensure the availability of the data [6]. However, the cloud storage service provided by CSP is not free, and the data owner as a consumer needs to pay CSP for resource consumption. Therefore, CSP's resource consumption caused by the data integrity verification is borne by the data owner; it includes the transmission and computation overhead of CSP performing remote data integrity verification while users download data. It follows that if CSP allows users without data access rights to verify the data integrity, unnecessary expenses are imposed on the data owner. Moreover, allowing users without data access rights to launch the data integrity verification makes CSP vulnerable to resource exhaustion attacks [18]. According to the existing verification process, after receiving a verification request from any verifier, CSP consumes its computation resources to generate the verification proof for the corresponding verified data. In this case, it is unreasonable for the data owner to pay for all the verification. This attack has been introduced as economic denial of sustainability (EDOS) [19-21], which places the data owner's finances under attack. Although the user cannot recover an entire block from the verification proof generated by CSP during the verification process, the proof generation consumes CSP's computation resources and causes additional overhead for the data owner.

At present, many attribute-based access control algorithms have been proposed. However, in the existing process of data integrity verification, only the data owner sets the access policy, and CSP then performs access control on the verifiers according to the data owner's access policy, which leads to some new problems: (a) the data owner and the verifier are likely to conspire to falsify the verification result, making the data integrity verification result unreliable; (b) when a verifier needs to apply for an attribute key, he cannot get the key in time and must wait for the data owner to distribute it, since the data owner is not always online; (c) the data owner's resources and computation power are limited, so when multiple users apply for attribute keys from the data owner, this may place a serious burden on him. Thus, we propose a verification control algorithm of multiple verifiers (MV-VCP) to resolve these problems in this paper. The main work of this article is as follows:

1) In order to reduce the verification overhead wasted when multiple verifiers without access permission launch the data integrity verification, this paper designs an attribute-based encryption verification control algorithm for multiple verifiers.

2) In the algorithm, data owner and CSP construct an access structure together, and generate a verification sentinel that checks the authority of verifiers according to the access structure.

3) Since the access structure is co-generated by the data owner and CSP, CSP cannot know the attributes of the part of the access structure constructed by the data owner, and the data owner outsources the sentinel generation operation to CSP.

4) CSP authenticates verifiers, and only the verifiers who meet the access policy can launch the data integrity verification to the corresponding data stored on CSP.

2. Related Work

2.1 Data integrity verification

With the continuous popularity of cloud storage, remote data integrity verification has received more and more attention since Ateniese et al. [7] and Shacham et al. [8] proposed provable data possession (PDP) and proofs of retrievability (POR) respectively. The main achievements in this field are as follows.

(1) Public verification scheme. Shacham and Waters [8] proposed compact proofs of retrievability, generated by a publicly verifiable homomorphic scheme based on the BLS signature [9]. Zhang et al. [22] developed a public verification scheme for cloud storage and proposed to use indistinguishability obfuscation algorithms to process data.

(2) Identity-based integrity verification. Yu et al. [5] proposed identity-based remote data integrity checking (RDIC), using key-homomorphic cryptographic primitives to reduce the system complexity and the cost of establishing and managing the PKI-based public key authentication framework in RDIC. Yang et al. [10] proposed a public audit protocol for shared cloud data, supporting identity privacy and identity traceability. Jiang et al. [14] designed a data integrity verification scheme to prevent collusion between CSP and revoked group users when sharing data. Li et al. [3] proposed fuzzy identity-based data integrity verification for cloud storage systems, introducing fuzzy identity-based auditing to address complex key management.

2.2 Access control

The access control algorithm is mainly used to ensure that authorized users access information resources in accordance with the relevant conventions and within their authorized scope, and that resources are not accessed by illegal users. Due to the large number of users in cloud computing and the complex relationship between roles and permissions, attribute-based access control is more suitable for implementing fine-grained access control [23-26]. Attribute-based access control can provide anonymous authentication, and further defines access control policies based on different attributes of the requester, the environment, and the data object. Attribute-based encryption schemes [27-29] can meet these requirements. Next, the development of attribute-based encryption and of attribute-based access control is explained separately.

(1) Attribute-based encryption scheme

In order to support data owners in performing fine-grained data access control in semi-trusted public cloud storage, attribute-based encryption (ABE) was introduced [28-29]. At present, many attribute-based encryption schemes have been proposed, mainly divided into key-policy attribute-based encryption (KP-ABE) [28] and ciphertext-policy attribute-based encryption (CP-ABE) [23,29]. Among them, CP-ABE is very practical in public cloud storage [23,29]: only users who match the access policy can decrypt the related ciphertext, which increases the flexibility of the data access control mechanism. Li et al. [24] proposed an attribute-based ICN naming access control scheme, which implements flexible attribute authorization by setting attribute rankings to enable comparison between attributes.

(2) Attribute-based access control

Idziorek et al. [26] proposed an attribute-based method to identify malicious clients; however, it treats the underlying applications as black boxes and does not completely eliminate attacks at the algorithm and protocol level. To solve this problem, Xue et al. [18] proposed combining cloud-side access control with the existing data-owner-side CP-ABE access control to ensure that only users who comply with the data owner's access policy can download the corresponding data from CSP. CSP is only responsible for judging whether a user complies with the access policy, which preserves the user's privacy and prevents users from launching an EDOS attack on the data owner [20-21].

In the existing access control schemes, the data owner encrypts the data using the CP-ABE algorithm and thereby achieves fine-grained access control. However, even if the data owner encrypts the data uploaded to CSP according to the existing schemes, users who do not meet the conditions can still download the data and launch the data integrity verification. Unauthorized downloads also reduce security by facilitating offline analysis and by leaking information such as data length or update frequency. At the same time, the computation overhead caused by the proof generation in the data integrity verification is relatively large; if verifiers are not constrained before the verification, malicious verifiers can continuously launch verification on CSP, significantly increasing the data owner's cost. Therefore, it is extremely important to control the verification.

3. System Model and Problem Statements

3.1 Data integrity verification model

In the data storage service model, there are generally the data owner (DO), the cloud service provider (CSP), and the user, who can also act as the third party verifier (TPA), as shown in Fig. 1. When the user needs the data owner's data, he downloads the corresponding data from CSP. If the data is corrupted at this point, it causes great trouble to the user, so users are very concerned about the integrity of data on CSP. The traditional verification model uses a third-party verification scheme [7] to ensure the fairness of the data verification. In reality, however, there is often no real third party verifier to help the data owner verify the integrity of data, so the user who needs to download the data from CSP becomes the verifier. Moreover, the resources on CSP are not free to use, and CSP needs to perform corresponding calculations while users verify the data integrity. Since the data owner is the consumer who purchases CSP's services, the cost of the data integrity verification is borne by the data owner accordingly, and the data owner is therefore very concerned about the verification costs incurred by CSP calculating the data proof. Data integrity verification mainly consists of the following five steps; a toy sketch of the flow follows the step list:


Fig. 1. Data integrity verification model

(1) KeyGen(λ) → (sk, pk). The algorithm is executed by the data owner. The data owner enters a security parameter λ, and then outputs a private key sk and a public key pk.

(2) TagGen(F, sk) → Φ. The tag generation algorithm is executed by the data owner before uploading the data, taking the encrypted data F and the private key sk as input. The data owner first divides the data into n blocks, then calculates a tag σi, i ∈ [1, n], for each block and merges these tags into the tag set Φ = {σi}i∈[1,n]. The data owner sends the tag set Φ and the data F to CSP.

(3) Chall(F) → C. The data owner randomly selects the index numbers of some data blocks to be verified, and sends a verification request to TPA. Based on this, TPA generates a corresponding random number for each extracted data block and forms the challenge set C = {(i, vi)}i∈Q to challenge CSP, where Q is the index set of the extracted data blocks B and the vi are random numbers.

(4) ProofGen(F, Φ, C) → P. CSP responds to the challenge and calculates the verification proof P using the extracted data blocks B stored in its storage space, the data tags in Φ corresponding to the extracted blocks, and the challenge information C provided by TPA; finally, the proof P is returned to TPA.

(5) VerifyData(C, P, pk) → 0/1. TPA uses the received verification proof P, the public key pk, the challenge information C, and the data tags Φ to determine the integrity of the challenged data blocks, and outputs 1 if these data blocks are intact and 0 otherwise.
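
To make the flow concrete, the following is a minimal Python sketch of the five steps. It assumes a toy multiplicative group modulo a 64-bit prime in place of the BLS pairing group, and a private (sk-based) check in VerifyData in place of the public pairing check; all names and constants are illustrative rather than the scheme's actual parameters.

import hashlib
import random

p = 2**64 - 59                   # a 64-bit prime (toy size, not secure)
g = 5                            # fixed generator (assumed)

def H(i):                        # hash a block index into the group
    return int.from_bytes(hashlib.sha256(str(i).encode()).digest(), "big") % p

def key_gen():                   # (1) KeyGen
    sk = random.randrange(2, p - 1)
    return sk, pow(g, sk, p)

def tag_gen(blocks, sk):         # (2) TagGen: sigma_i = (H(i) * g^{m_i})^{sk}
    return [pow(H(i) * pow(g, m, p) % p, sk, p) for i, m in enumerate(blocks)]

def chall(n, c):                 # (3) Chall: c random indices with random v_i
    return [(i, random.randrange(1, p)) for i in random.sample(range(n), c)]

def proof_gen(blocks, tags, C):  # (4) ProofGen: aggregated proof (mu, sigma)
    mu = sum(v * blocks[i] for i, v in C)
    sigma = 1
    for i, v in C:
        sigma = sigma * pow(tags[i], v, p) % p
    return mu, sigma

def verify_data(C, proof, sk):   # (5) VerifyData: sigma == (prod H(i)^{v_i} * g^mu)^{sk}
    mu, sigma = proof
    rhs = pow(g, mu, p)
    for i, v in C:
        rhs = rhs * pow(H(i), v, p) % p
    return sigma == pow(rhs, sk, p)

blocks = [random.randrange(p) for _ in range(8)]
sk, pk = key_gen()               # pk is unused in this private-check toy;
tags = tag_gen(blocks, sk)       # the real scheme verifies publicly via pairings
C = chall(len(blocks), 3)
print(verify_data(C, proof_gen(blocks, tags, C), sk))   # True if the blocks are intact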

3.2 Security Model

The security model of this solution is defined as a selective game between the attacker A and the challenger B. The model is specifically defined as follows:

Init. The adversary A chooses a challenge access structure (M*, ρ*), where M* is an l* × n* matrix, and ρ* maps each row of M* to an attribute.

Setup. The challenger runs the Setup algorithm and gives the public parameters PK to the adversary A.

Phase 1. The adversary A issues queries for secret keys SK; none of the queried private keys satisfies the challenge access structure (M*, ρ*).

Challenge. The adversary A submits two equal-length messages μ0 and μ1 to the challenger B. B randomly chooses b ∈ {0,1} and encrypts μb under the challenge access structure (M*, ρ*). Finally, it sends the generated challenge ciphertext CT* to the adversary.

Phase 2. Phase 2 is the same as Phase 1.

Guess. The adversary outputs a guess b′ of b. The advantage of A in this game is defined as \(Adv_{A}=\left|\operatorname{Pr}\left(b^{\prime}=b\right)-\frac{1}{2}\right|\).

3.3 Problem Statement

(1) Users who do not have data verification rights challenge the data integrity on CSP, causing a waste of verification overhead;

(2) The data owner and the verifier are likely to conspire to falsify the verification results, making the verification results unreliable;

(3) The data owner is not always online. When a verifier needs to apply for the attribute key, he has no choice but to wait for the data owner to come online to distribute the key;

(4) The data owner has only limited resources and computation power. When multiple users apply for attribute keys from the data owner, this may place a serious burden on him.

4. Verification control algorithm of multiple verifiers

As can be seen from the foregoing, the proof generation consumes CSP's resources and incurs corresponding costs for the data owner. Usually, only users with the right to access the data should verify its integrity. Users without data access rights who verify the data integrity on CSP cause unnecessary overhead to the data owner, and malicious users may even continuously launch the data verification on CSP so as to mount EDOS attacks on the data owner. Thus, authentication control should be performed on the user before the verification. A detailed description of the verification control algorithm for multiple verifiers follows.

4.1 Construction of access structure

The access structure is a concrete manifestation of the access policy. The data owner or CSP needs a specific access structure to formulate the access policy. In the verification control algorithm, the access structure is used to authenticate the user: a user is authorized to verify the data integrity on CSP if and only if his attribute set satisfies the access structure constructed jointly by CSP and the data owner.

Referring to [30] for a (t, n) threshold gate access structure (P1, P2, ..., Pn), we construct the LSSS matrix M* on Zp as

\(M^{*}=\left(\begin{array}{ccccc}1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 2^{2} & \cdots & 2^{t-1} \\ 1 & 3 & 3^{2} & \cdots & 3^{t-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & n & n^{2} & \cdots & n^{t-1}\end{array}\right)\).       (1)
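
A minimal sketch of formula (1), assuming plain Python integers over Zp: row i of the (t, n) threshold matrix is the Vandermonde row (1, i, i^2, ..., i^{t-1}), so any t rows are linearly independent and can reconstruct the secret.

def threshold_lsss_matrix(t, n, p):
    # Row i (1-indexed) is (1, i, i^2, ..., i^{t-1}) over Z_p.
    return [[pow(i, j, p) for j in range(t)] for i in range(1, n + 1)]

# Example: the (2, 3) gate used for A2 in Section 4.1 gives
# [[1, 1], [1, 2], [1, 3]], i.e., the matrix M2 below.
print(threshold_lsss_matrix(2, 3, 2**31 - 1))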

In this paper, the access structure is generated by the data owner and CSP together. The data owner formulates the access structure A = (A1, A2, 2) for the data F, where A1 is the access structure constructed by the data owner, A2 is the access structure constructed by CSP, and the outer gate is a 2-of-2 threshold. The data owner constructs the LSSS access structure (M, ρ) of this outer gate according to formula (1), where the matrix M is

\(M=\left(\begin{array}{ll}1 & 1 \\ 1 & 2\end{array}\right)\).       (2)

The data owner sets the corresponding access structure A1 according to the characteristics of the data F, with attribute set S1 = {x1, ..., xl1}, the set of attributes included in the access policy A1. The data owner generates the LSSS access structure (M1, ρ) corresponding to A1 according to formulas (1) and (2), where the matrix M1 is

\(M_{1}=\left(\begin{array}{ccc}a_{1,1} & \cdots & a_{1, n_{1}} \\ \vdots & \ddots & \vdots \\ a_{l_{1}, 1} & \cdots & a_{l_{1}, n_{1}}\end{array}\right)\).       (3)

CSP formulates the corresponding access structure A2, represented as a character string, with attribute set S2 = {y1, ..., yl2}, the set of attributes contained in A2. CSP sends A2 and S2 to the data owner, and generates the LSSS access structure (M2, ρ) according to A2, where M2 is

\(M_{2}=\left(\begin{array}{ccc}b_{1,1} & \cdots & b_{1, n_{2}} \\ \vdots & \ddots & \vdots \\ b_{l_{2}, 1} & \cdots & b_{l_{2}, n_{2}}\end{array}\right)\).       (4)

After the data owner and CSP generate M1 and M2 respectively, the data owner inserts the access structures (M1, ρ) and (M2, ρ) into (M, ρ) to form the access structure A. Let l1 and n1 be the numbers of rows and columns of the matrix M1, and l2 and n2 those of the matrix M2. According to formula (2), the matrix M is calculated as

\(M=\left(\begin{array}{ccc}\vec{v}_{1} \otimes \vec{u}_{1} & \widetilde{M}_{1} & 0 \\ \vec{v}_{2} \otimes \vec{u}_{1} & 0 & \widetilde{M}_{2}\end{array}\right)=\left(\begin{array}{cccccccc}a_{1,1} & a_{1,1} & a_{1,2} & \cdots & a_{1, n_{1}} & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{l_{1}, 1} & a_{l_{1}, 1} & a_{l_{1}, 2} & \cdots & a_{l_{1}, n_{1}} & 0 & \cdots & 0 \\ b_{1,1} & 2 b_{1,1} & 0 & \cdots & 0 & b_{1,2} & \cdots & b_{1, n_{2}} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ b_{l_{2}, 1} & 2 b_{l_{2}, 1} & 0 & \cdots & 0 & b_{l_{2}, 2} & \cdots & b_{l_{2}, n_{2}}\end{array}\right)\).       (5)

From formula (5) we can see that the data owner inserts the access structures (M1, ρ) and (M2, ρ) into (M, ρ); the computational complexity of generating the matrix M is

Cp = n1l1 + n2l2 + l1 + l2.       (6)

The pseudo code of the construction of access structure is shown in Algorithm 1.

Algorithm 1. Access structure generation algorithm

Input: M1 and M2;

Output: M; // The matrix M will be generated according to formula (5)

1. n1 = column length of M1, l1 = row length of M1;

2. n2 = column length of M2, l2 = row length of M2; // DO inserts the LSSS matrix M1 into matrix M.

3. for i = 0 to l1 − 1 do

4. for j = 0 to n1 + n2 − 1 do

5. if (j = 0) then M[i, j] = M1[i, 0];

6. else if (j ≥ 1 and j ≤ n1) then M[i, j] = M1[i, j − 1];

7. else M[i, j] = 0; end if

8. end for

9. end for // CSP inserts the LSSS matrix M2 into matrix M.

10. for i = l1 to l1 + l2 − 1 do

11. for j = 0 to n1 + n2 − 1 do

12. if (j = 0) then M[i, j] = M2[i − l1, 0];

13. else if (j = 1) then M[i, j] = 2 × M2[i − l1, 0];

14. else if (j ≥ 2 and j ≤ n1) then M[i, j] = 0;

15. else M[i, j] = M2[i − l1, j − n1]; end if

16. end for

17. end for

18. return M;

For example, assume that the access structure A1 = ((x1, x2, x3, 2), x4, x5, 3) is formulated by the data owner. According to formulas (1) and (2), the LSSS matrix M1 corresponding to the access structure A1 is

\(M_{1}=\left(\begin{array}{llll}1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 2 \\ 1 & 1 & 1 & 3 \\ 1 & 2 & 4 & 0 \\ 1 & 3 & 9 & 0\end{array}\right)\).

Assuming that the access structure A2 = (y1, y2, y3, 2) is formulated by CSP, according to formula (1), the LSSS matrix M2 corresponding to the access structure A2 is

\(M_{2}=\left(\begin{array}{ll}1 & 1 \\ 1 & 2 \\ 1 & 3\end{array}\right)\).

The access policies A1 and A2 constitute the common access structure A = (((x1, x2, x3, 2), x4, x5, 3), (y1, y2, y3, 2), 2). Therefore, according to formula (5), M1 and M2 are inserted into M to form the LSSS access structure (M, ρ) corresponding to the access structure A; the generation process is shown in Fig. 2.


Fig. 2. The generation of matrix M

In Fig. 2, A1 is the access structure formulated by the data owner, and A2 is the access structure formulated by CSP. The LSSS matrices M1 and M2 corresponding to A1 and A2 are generated respectively and inserted into the matrix M to form the LSSS matrix corresponding to the access structure A. Finally, the LSSS access structure (M, ρ) is used for the subsequent verification control of the user: only if the attribute set owned by the user satisfies A can the user launch the data integrity verification on CSP. The construction can be checked with the short script below.
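
The following is a runnable rendering of Algorithm 1 / formula (5) under 0-based indexing, with the matrices of the Fig. 2 example as input; everything outside the two published matrices is illustrative.

def merge_lsss(M1, M2):
    l1, n1 = len(M1), len(M1[0])
    l2, n2 = len(M2), len(M2[0])
    M = [[0] * (n1 + n2) for _ in range(l1 + l2)]
    for i in range(l1):              # DO rows: (a_i1, a_i1, a_i2..a_in1, 0..0)
        M[i][0] = M1[i][0]
        M[i][1] = M1[i][0]
        for j in range(1, n1):
            M[i][j + 1] = M1[i][j]
    for i in range(l2):              # CSP rows: (b_i1, 2*b_i1, 0..0, b_i2..b_in2)
        r = l1 + i
        M[r][0] = M2[i][0]
        M[r][1] = 2 * M2[i][0]
        for j in range(1, n2):
            M[r][n1 + j] = M2[i][j]
    return M

# A1 = ((x1, x2, x3, 2), x4, x5, 3) and A2 = (y1, y2, y3, 2) from the example.
M1 = [[1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 1, 3], [1, 2, 4, 0], [1, 3, 9, 0]]
M2 = [[1, 1], [1, 2], [1, 3]]
for row in merge_lsss(M1, M2):
    print(row)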

4.2 Attribute key generation and distribution

In the process of verification control, since the data owner is not always online, he outsources his attribute keys to CSP, and CSP distributes the keys on his behalf. However, key distribution by CSP instead of the data owner brings new problems. If the data owner stores the attribute keys on CSP in the clear, CSP can use the corresponding attribute keys to unlock the access policy. Moreover, the user sends his attribute set to CSP when applying for the attribute key, and the distribution of the attribute key is the only step that leaks identity information to each attribute authority. Therefore, the user needs to hide his attributes when applying for the attribute key from CSP.

It can be seen from Section 4.1 that the access structure A is constructed by the data owner and CSP together, and CSP cannot know the attribute values in the part of the access policy constructed by the data owner. Therefore, the attribute keys corresponding to the attributes in the attribute set S1 are generated by the data owner before uploading the data. However, sending the attribute keys directly to CSP in plaintext brings new problems: CSP could use the corresponding attribute keys to unlock the access policy. At the same time, the attribute values in S1 are generally related to the data content or the data owner's information, so publishing them directly to CSP may cause privacy leakage. The data owner therefore generates an index of each attribute key during key generation, and encrypts the attribute key before uploading it.

AttKeyGenDO(S1): It is executed by the data owner before uploading the data, with the attribute set S1 as input. The data owner selects a random number u1 and calculates Kx = hx^u1 for each x ∈ S1. To prevent CSP from obtaining Kx, the data owner further calculates the outsourced attribute key Kx′ = (Kx)^(1/r) for x ∈ S1, where r is a random number, and encrypts the random number r under the attribute value with the symmetric encryption algorithm AES to generate r′, i.e., r′ = AES.Enc(x, r). In order to enable the user to find the corresponding attribute key on CSP through the attribute value, for all x ∈ S1 the data owner also calculates the index of the attribute key tx = H2(e(H1(x), g^sk)). The data owner sends sk1′ = {Kx′, tx, r′}x∈S1 to CSP.

When a user purchases or registers for a service from CSP, he executes AttGenuser(S′) to generate an attribute set S′ and sends it to CSP. In AttGenuser(S′), in order to protect the user's privacy, the user hides some attributes.

AttGenuser(S′): It is executed by the user, taking the attribute set S′ owned by the user as input; it is generally run when the user registers with CSP or purchases services. The user attribute set is S′ = S1′ ∪ S2′, where S1′ and S2′ are the parts of S′ related to S1 and S2 respectively. Referring to the example in Section 4.1, when a user has made a purchase on CSP, one possible user attribute set is S′ = {x1, x2, x4, x5, y1, y2}. Since the user's privacy is very important and the attributes in the access structure formulated by the data owner generally include data information, the user does not expect CSP to know which data he has viewed when he browses data or registers with CSP. Therefore, the attributes in S1′ need to be hidden at the stage of applying for the attribute key. For all xi ∈ S1′, the user calculates yi = H1(xi)^uk and pku = (g^sk)^(1/uk). Finally, the user sends Y = {yi}, pku and S2′ to CSP to apply for the attribute key.

After CSP receives Y = {yi}, pku and S2′, it searches for the corresponding attribute keys according to the indexes tx and the blinded attributes in Y, generates the attribute key SK2 related to S2′, and returns SK1 and SK2 to the user.

AttGenCSP(S2′): It is executed by CSP and takes the attribute set S2′ as input. CSP searches for the corresponding attribute key according to the data owner's indexes {tx}x∈S1. If

tx = H2(e(yi, pku))       (7)

holds, CSP saves Kx′ and rx′ to SK1.

CSP selects the random number β and calculates the attribute key SK2 for the attribute values in S2′ as

\(SK_{2} = \left(K = g^{\alpha_{S_{2}}} g^{ak},\ L_{1} = g^{ak},\ L_{2} = h^{ak},\ \forall x_{i} \in S_{2}': K_{x_{i}} = h_{x_{i}}^{\beta}\right)\).       (8)

CSP sends SK1 and SK2 to the corresponding user. After receiving SK1 and SK2, the user can unblind the outsourced attribute key Kx′ to obtain the corresponding attribute key Kx.

AttGenuser(SK1, SK2): After receiving SK1 and SK2, the user decrypts r′ to obtain r = AES.Dec(x, r′), and calculates the attribute key Kx = (Kx′)^r (a sketch of this unblinding appears after Algorithm 2).
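
The blinded index lookup of formula (7) can be illustrated with a toy model in which group elements are represented by their exponents in Zq and the pairing by multiplication mod q, which preserves the bilinear relation being checked; a real deployment would use an actual bilinear pairing (e.g., through a PBC binding), and the attribute string below is hypothetical.

import hashlib
import random

q = 2**127 - 1                          # Mersenne prime as toy group order

def H1(attr):                           # hash an attribute into the group
    return int.from_bytes(hashlib.sha256(attr.encode()).digest(), "big") % q

def H2(elem):                           # hash a GT element to a key index
    return hashlib.sha256(elem.to_bytes(16, "big")).digest()

def e(a, b):                            # mock pairing on exponent form
    return a * b % q

sk = random.randrange(1, q)             # data owner's secret key
x = "dept:finance"                      # an attribute in S1 (hypothetical)
t_x = H2(e(H1(x), sk))                  # index t_x stored on CSP

uk = random.randrange(1, q)             # user's blinding factor
y_i = H1(x) * uk % q                    # blinded attribute H1(x)^{uk}
pk_u = sk * pow(uk, -1, q) % q          # (g^{sk})^{1/uk}

print(H2(e(y_i, pk_u)) == t_x)          # True: CSP finds the key without learning x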

The pseudo code of the attribute key generation and distribution is shown in Algorithm 2.

Algorithm 2. Attribute key generation algorithm

Input:(M, ρ), S1, S2, and S′;

Output:SK1, SK2;

1. AttKeyGenDO(S1), CSP←SK′1; // Generating attribute key for data owner

2. AttGenuser(S′), CSP←{Y, pku, S2′}; // User requests the appropriate attribute key

3. for x in S2′do CSP computes SK2; end for // Generating attribute key

4. for yi in Y do

5. for tx in SK′1 do

6. if formula (7) holds then SK1←Kx′, rx′; end if

7. end for

8. end for

9. return SK1, SK2;
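
The outsourced-key round trip of Section 4.2 can be sketched in the same toy exponent model. Here the data owner blinds Kx with 1/r, wraps r under the attribute value with AES-GCM, and the user unblinds with r, matching Kx = (Kx′)^r in AttGenuser; AES-GCM is taken from the third-party `cryptography` package, and the attribute value is hypothetical.

import hashlib
import os
import random
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

q = 2**127 - 1                           # toy group order, exponent form

def aes_key(attr):                       # derive a 256-bit AES key from attribute x
    return hashlib.sha256(("attr-key:" + attr).encode()).digest()

# --- data owner, before upload ---
x = "dept:finance"                       # attribute in S1 (hypothetical value)
K_x = random.randrange(2, q)             # attribute key h_x^{u1}, exponent form
r = random.randrange(2, q)
K_x_out = K_x * pow(r, -1, q) % q        # outsourced key K_x' = (K_x)^{1/r}
nonce = os.urandom(12)
r_cipher = AESGCM(aes_key(x)).encrypt(nonce, r.to_bytes(16, "big"), None)
# The data owner uploads (K_x_out, nonce, r_cipher); CSP never sees x or r.

# --- user, after receiving SK1 from CSP ---
r_rec = int.from_bytes(AESGCM(aes_key(x)).decrypt(nonce, r_cipher, None), "big")
K_x_rec = K_x_out * r_rec % q            # K_x = (K_x')^r in exponent form
print(K_x_rec == K_x)                    # True: only a holder of x can unblind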

4.3 Verification authority detection

In this paper, the data owner and CSP jointly generate sentinels for verification control. The access structure (M, ρ) is constructed by the data owner and CSP, and neither party can know the attribute values of the part of the access structure constructed by the other. Thus, the verification sentinel needs to be generated in two parts. Specifically, the data owner generates the verification sentinel STDO for the access structure constructed by himself, together with the outsourcing keys ps,i; CSP generates Di according to ps,i, and then merges STDO and the Di into the verification sentinel ST.

SentinelGenDO(M, ρ): The data owner inputs the access structure (M, ρ) to generate part of the verification sentinel STDO. He randomly selects the secret s ∈ Zp and generates a vector \(\vec{v}=(s, z_{2}, \ldots, z_{n}) \in Z_{p}^{n}\), where z2, ..., zn are used to share the secret s. For i ∈ [1, l1], the data owner calculates \(\lambda_{i}=M_{i} \cdot \vec{v}\), where Mi is the ith row of the matrix M. He chooses a random number u2 ∈ Zp and calculates STDO as

\(ST_{DO} = \left(D = pf \cdot e(g, g)^{as},\ D_{1}' = g^{au_{2}},\ \{D_{i} = g^{a\lambda_{i}} h_{\rho(i)}^{u_{2}}\}_{i \in [1, l_{1}]}\right)\),       (9)

where pf ∈ Zp, and the function ρ is an injective function that maps each row of the matrix M to an attribute in the attribute set S, namely ρ(i) ∈ S. STDO is an intermediate verification sentinel generated by the data owner, which is used to check the user's verification authority. Since s is the key on which the verification control finally depends, in order to protect the data owner's privacy, the data owner cannot send the secret s in the clear to CSP. Thus, the data owner calculates the outsourcing key ps,i of the secret s by

\(p_{s,i} = g^{a\left(s \cdot m_{i,1} - m_{i,1}\right)}\),       (10)

where i ∈ [l1 + 1, l1 + l2] and mi,1 is the element in the ith row and first column of the matrix M. Then hashpf = H(pf) is calculated, where H(·) is a collision-resistant hash function. The data owner uploads the outsourcing keys Ps = {ps,i}ρ(i)∈S2, the outsourcing vector \(\vec{v}'\), the verification sentinel STDO and hashpf to CSP for storage.

SentinelGenCSP((M, ρ), \(\vec{v}'\), STDO, Ps): CSP inputs the access structure (M, ρ), the outsourcing vector \(\vec{v}'\), the outsourcing keys Ps generated by the data owner, and the intermediate verification sentinel STDO. CSP calculates \(\lambda_{i}'=M_{i} \cdot \vec{v}'\) for i ∈ [l1 + 1, l1 + l2], where Mi is the ith row of M. CSP chooses a random k ∈ Zp and calculates Di by

\(D_{i} = g^{a\lambda_{i}'} \cdot p_{s,i} \cdot h_{\rho(i)}^{k} = g^{a\lambda_{i}} h_{\rho(i)}^{k}\),       (11)

where \(\lambda_{i}=M_{i} \cdot \vec{v}\). CSP obtains the sentinel \(ST_{CSP} = (D_{2}' = g^{ak},\ \{D_{i}\}_{i \in [l_{1}+1, l_{1}+l_{2}]})\).

Finally, the verification sentinel ST = <STDO, STCSP> is output, and ST is then used to detect the user's verification authority.

AuthProofGen(SK1, SK2, ST): With this algorithm the user generates pf′ and proves to CSP that he satisfies the access structure (M, ρ) and may verify the data integrity. Assume that the user's attribute set S′ meets the access policy constructed jointly by the data owner and CSP. If the λi are valid shares of the secret s, then a set of coefficients {ωi ∈ Zp}i∈I can be found in polynomial time (e.g., by Lagrange interpolation) such that ∑i∈I ωiλi = s, where I = {i: ρ(i) ∈ S′} ⊂ {1, ..., l}. Then the user calculates

\(T=\frac{e\left(D_{1}^{\prime} D_{2}^{\prime}, K\right)}{\prod_{i \in I}\left(e\left(D_{i}, L\right) e\left(D_{1}^{\prime} D_{2}^{\prime}, K_{\rho(i)}\right)\right)^{\omega_{i}}}=e(g, g)^{a s}\).      (12)

The user calculates pf′ = D/T and sends pf′ to CSP to prove that the attribute set he possesses meets the access structure (M, ρ), so that he can verify the data integrity on CSP. A short sketch of the coefficient search behind formula (12) follows.
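
The coefficient search reduces to linear algebra over Zp: find ω with ∑i∈I ωiMi = (1, 0, ..., 0), so that ∑i∈I ωiλi = (∑i∈I ωiMi) · v⃗ = s. A self-contained sketch follows (toy prime, Gaussian elimination mod p, assuming the selected rows I form a full-rank system).

import random

p = 2**31 - 1

def shares(M, s):                        # lambda_i = M_i . v with v = (s, z_2, ...)
    n = len(M[0])
    v = [s] + [random.randrange(p) for _ in range(n - 1)]
    return [sum(m * x for m, x in zip(row, v)) % p for row in M]

def omegas(rows):                        # solve sum_j w_j * rows[j] = (1, 0, ..., 0) mod p
    n, k = len(rows[0]), len(rows)
    A = [[rows[j][i] for j in range(k)] + [1 if i == 0 else 0] for i in range(n)]
    piv = []
    for col in range(k):
        r = next((r for r in range(len(piv), n) if A[r][col] % p), None)
        if r is None:
            continue
        A[len(piv)], A[r] = A[r], A[len(piv)]
        r = len(piv)
        inv = pow(A[r][col], -1, p)
        A[r] = [a * inv % p for a in A[r]]
        for rr in range(n):
            if rr != r and A[rr][col]:
                A[rr] = [(a - A[rr][col] * b) % p for a, b in zip(A[rr], A[r])]
        piv.append(col)
    w = [0] * k
    for r, col in enumerate(piv):
        w[col] = A[r][k]
    return w

M = [[1, 1], [1, 2], [1, 3]]             # the (2, 3) gate M2 from Section 4.1
s = 123456789
lam = shares(M, s)
I = [0, 2]                               # any two rows satisfy the gate
w = omegas([M[i] for i in I])
print(sum(wi * lam[i] for wi, i in zip(w, I)) % p == s)   # True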

Verify(pf′, g^uk): After CSP receives pf′, it verifies whether the user has the authority to verify the data integrity on CSP, i.e., whether the attribute set owned by the user satisfies the access structure (M, ρ), by checking whether

hashpf = H(pf′).       (13)

If formula (13) holds, the attributes owned by the user satisfy the access structure and the verification can proceed; otherwise, CSP rejects the user's verification request.

The pseudo code of the process of verification authority detection is shown in Algorithm 3.

Algorithm 3. Verification authority detection algorithm

Input: ST, A;

Output: true/false;

1. for i:= 1 to l1 do

2. STDO←formula (9); end for // Generating STDO for data owner.

3. for i = l1 + 1 to l1 + l2 do

4. ps,i ← formula (10); end for // Generating the outsourced keys ps,i

5. CSP ← PS, \(\vec{v}'\), STDO, hashpf;

6. for i = l1 + 1 to l1 + l2 do

7. Di ← formula (11); end for // Generating the sentinel STCSP

8. CSP generates ST;

9. if user sends S′ to CSP then CSP sends (ST, A) to user; // Detecting user's authority

10. User computes pf′ and sends pf′ to CSP;

11. if formula (13) holds then return true;

12. else return false;

13. end if

5. Algorithm Analysis

In this paper, the proposed algorithm has the access structure constructed jointly by the data owner and CSP to realize the verification control of users, so that users without data access rights cannot launch the data integrity verification against the data owner. In order to illustrate the feasibility of the algorithm, a security analysis is conducted in this section, and the theoretical analysis of computational complexity and of storage and transmission overhead is presented in Section 6.1.

The discrete logarithm problem (DL problem) is stated as follows: suppose a ∈ Zp*, p is a large prime, and g1 is a generator of the group G1, where g1, g1^a ∈ G1; take g1^a as input and output a.

Definition 1. Discrete logarithm hypothesis (DL hypothesis). For ε > 0, the advantage of any attacker in solving the DL problem on group G1 with a polynomial-time algorithm Θ is defined as follows:

\(AdvDL_{\Theta}=\operatorname{Pr}\left[\Theta\left(g_{1}, g_{1}^{a}\right)=a: a \stackrel{R}{\leftarrow} Z_{p}\right] \leq \varepsilon\).

It can be seen from the above formula that solving the DL problem amounts to a brute-force collision on Θ with a random number a, which succeeds with probability \(\frac{1}{p}<\varepsilon\). Let p be a sufficiently large prime; then the advantage in solving the DL problem can be ignored, because the success probability is close to zero. That is to say, it is computationally infeasible to solve the DL problem on the group G1 under the above hypothesis [28].

Definition 2. Decisional q-BDHE hypothesis. Assume that G is a bilinear group of prime order p, g and h are two independent generators of the group, and a ∈ Zp is selected at random; define \(y_{g,a,l}=(g_{1}, g_{2}, \ldots, g_{l}, g_{l+2}, \ldots, g_{2l}) \in G^{2l-1}\), where \(g_{i}=g^{\left(a^{i}\right)}\). An algorithm B that outputs a guess z ∈ {0,1} is said to have advantage ε in solving the decisional q-BDHE problem in G if \(\left|\operatorname{Pr}\left[B\left(g, h, y_{g,a,l}, e\left(g_{l+1}, h\right)\right)=0\right]-\operatorname{Pr}\left[B\left(g, h, y_{g,a,l}, Z\right)=0\right]\right| \geq \varepsilon\), where Z is a random element of GT. If no polynomial-time algorithm solves this decisional problem with a non-negligible advantage, then the hypothesis holds in groups G and GT.

5.1 Robustness of verification control

Users whose attribute sets do not satisfy the access structure cannot pass the verification control. Suppose the decisional q-BDHE hypothesis holds; then no polynomial-time adversary can selectively break the algorithm by challenging the LSSS matrix.

Init. Suppose adversary A has a non-negligible advantage ε = AdvA in breaking this algorithm. Adversary A chooses an access structure (M*, ρ*), where M* has l* rows and n* columns.

Setup. The simulator chooses a random number a′ ∈ Zp and implicitly sets α = a′ + a^(q+1), so that \(e(g, g)^{\alpha}=e\left(g^{a}, g^{a^{q}}\right) e(g, g)^{a^{\prime}}\). For each attribute x, 1 ≤ x ≤ |S|, it chooses a random number zx. Let X denote the set of indices i such that ρ*(i) = x. The simulator calculates hx by \(h_{x}=g^{z_{x}} \prod_{i \in X} g^{a M_{i, 1}^{*} / b_{i}} \cdot g^{a^{2} M_{i, 2}^{*} / b_{i}} \cdots g^{a^{n^{*}} M_{i, n^{*}}^{*} / b_{i}}\). If X = ∅, then \(h_{x}=g^{z_{x}}\).

Phase 1. In this phase, adversary A generates an attribute set S that does not satisfy M* and sends it to the simulator to obtain the private key. The simulator selects a random number r ∈ Zp and a vector ω = (ω1, ..., ωn*) ∈ Zp^(n*) with ω1 = −1. By the definition of the LSSS matrix, if the attribute set S does not satisfy the access structure (M*, ρ*), there must exist such an ω with ω · Mi* = 0 for every i with ρ*(i) ∈ S. The simulator implicitly defines t as \(t=r+\omega_{1} a^{q}+\omega_{2} a^{q-1}+\cdots+\omega_{n^{*}} a^{q-n^{*}+1}\), and lets \(L=g^{r} \prod_{i=1, \ldots, n^{*}}\left(g^{a^{q+1-i}}\right)^{\omega_{i}}=g^{t}\). The simulator calculates K by \(K=g^{a^{\prime}} g^{a r} \prod_{i=2, \ldots, n^{*}}\left(g^{a^{q+2-i}}\right)^{\omega_{i}}\).

For all x ∈ S, the simulator calculates Kx by

\(K_{x}=g^{\left(v_{x}+\beta d_{x}\right) t \beta} \cdot \prod_{i \in X} \prod_{j=1, \ldots, n^{*}}\left(g^{\left(\frac{a^{j}}{b_{i}}\right) r \beta^{2}} \prod_{k=1, \ldots, n^{*}, k \neq j}\left(g^{a^{q+1+j-k} / b_{i}}\right)^{\omega_{k} \beta^{2}}\right)^{M_{i, j}^{*}}\).

Challenge. The adversary generates two plaintexts m0 and m1, and sends them to the simulator. The simulator randomly selects b ∈ {0,1}, and then calculates \(C=m_{b} \cdot T \cdot e\left(g^{s}, g^{a^{\prime}}\right)\) and \(C^{\prime}=g^{s}\). Applying the vector \(\vec{v}=\left(s, s a+y_{2}^{\prime}, s a^{2}+y_{3}^{\prime}, \ldots, s a^{n^{*}-1}+y_{n^{*}}^{\prime}\right) \in \mathbb{Z}_{p}^{n^{*}}\), we have

\(C_{i}=\left(g^{v^{*}(i)} \cdot H\left(\rho^{*}(i)\right)\right)^{\gamma r_{i}^{\prime}} \cdot\left(\prod_{j=1, \ldots, n^{*}} g^{a M_{i, j} y_{j}}\right) \cdot\left(g^{b_{i} s}\right)^{-\gamma\left(v_{\rho^{*}(i)}+d_{\rho^{*}(i)}\right)} \cdot\left(\prod_{k \in R_{i}} \prod_{j=1, \ldots, n^{*}}\left(g^{a^{j} s\left(b_{i} / b_{k}\right)}\right)^{\gamma M_{k, j}^{*}}\right)\).

Phase 2. Repeat phase 1.

Guess. The adversary outputs a guess b′ of b. If b′ = b, the simulator outputs 0 to guess \(T=e(g, g)^{a^{q+1} s}\); otherwise, it outputs 1, and T is a random element of the group G.

The advantage of simulator B in obtaining the correct guess is \(\operatorname{Pr}\left[B\left(\vec{y}, T=e(g, g)^{a^{q+1} s}\right)=0\right]=\frac{1}{2}+A d v_{A}\).

5.2 Resisting EDOS attacks

The security of many ABE schemes [28-29], like that of the scheme in this paper, rests on the assumption that no probabilistic polynomial-time algorithm can solve the q-DBDH problem with a non-negligible advantage. This assumption is reasonable because the DL problem is widely considered intractable in large number domains [11-12], and the selected group is a cyclic multiplicative group of prime order, in which the q-DBDH problem is considered hard. Therefore, a malicious user cannot keep challenging CSP with download requests so that CSP continues to provide downloads, which would cause resource consumption and an EDOS attack. To prevent this kind of attack, CSP uses the verification sentinel ST to check whether a user has download permission before data downloading. The size of ST is much smaller than the size of the data on CSP, and the computation overhead of CSP executing the verification authority detection is much smaller than that of CSP generating the integrity proof. Therefore, an EDOS attack cannot succeed.

5.3 Resisting collusion attacks

Because the access structure is constructed jointly by the data owner and CSP, if malicious users collude with the data owner, they only obtain the access structure A1 and can generate the LSSS matrix M1 by formula (3); they cannot obtain the access structure A2 provided by CSP, so they cannot generate the LSSS matrix M2 by formula (4). Therefore, the LSSS matrix M cannot be calculated by formula (5); that is, the malicious users do not obtain permission to verify the data integrity on CSP. Similarly, if malicious users collude with CSP, they cannot be granted the verification authorization either, since the LSSS matrix M still cannot be calculated by formula (5).

Moreover, even if malicious users collude with each other, they cannot obtain the verification authorization either. The reason is as follows. When a malicious user obtains SK1 and SK2 from CSP, he needs to unblind the outsourced attribute key Kx′ = (Kx)^(1/r) to obtain the final attribute key Kx, which first requires decrypting r′ to get r = AES.Dec(x, r′); but decrypting r′ requires the attribute x. Even if malicious users collude with each other to exchange attributes, Section 5.1 shows that, by Definition 2, the adversary cannot selectively break the algorithm by challenging the LSSS matrix in polynomial time.

6. Performance Analysis

6.1 Theoretical analysis

6.1.1 Computational complexity

Let the multiplication operation consumption in G be MulG, the exponentiation operation consumption be ExpG, and the bilinear pairing operation (e: G × G → GT) consumption be Pair. The complexity of the algorithm is analyzed from three aspects: the computation overhead generated by the data owner, by CSP, and by the user.

(1) Computation overhead of data owner

The data owner executes SentinelGenDO((M, ρ), PK) to generate the verification sentinel STDO and AttKeyGenDO(S1) to generate the intermediate keys before uploading the data F. Assuming that there are l1 attributes in the attribute set S1, SentinelGenDO((M, ρ), PK) consumes ExpG + l1(2MulG + 2ExpG), and AttKeyGenDO(S1) consumes 2l1ExpG.

(2) Computation overhead of CSP

CSP calculates Di and generates the attribute key SK2 of the attribute set S2. Suppose there are l2 attributes in the attribute set S2. The computation overhead of CSP calculating Di is l2(2MulG + 2ExpG), and that of generating the attribute key SK2 is MulG + 4ExpG + l2ExpG.

(3) Computation overhead of user (or TPA)

The user's computation overhead is mainly generated by AuthProofGen(SK, ST). At this stage, the user generates pf′ to prove that the set of attributes he possesses meets the access structure (M, ρ). Suppose there are n attributes in the attribute set owned by the user; the computation overhead incurred at this stage is nExpG + (n + 1)Pair.

6.1.2 Storage and transmission overhead

Assume that the data owner manages nDO attributes and CSP manages nCSP attributes, and let |p| denote the size of an element of Zp. The storage and transmission overhead of the algorithm is also analyzed from three aspects: the overhead generated by the data owner, by CSP, and by the user.

(1) Storage and transmission overhead of data owner

Since the data owner needs to store all attribute keys, his storage overhead is nDO elements. At the same time, he needs to transmit the sentinel and the encrypted attribute keys to CSP, which consumes a transmission overhead of nDO + 1 elements.

(2) Storage and transmission overhead of CSP

CSP needs to store the attribute keys it generates and the Di of the attribute set S2, so its storage overhead is nDO + 2·nCSP elements. Assume that n users apply for attribute keys from CSP and each user applies for nTPA attribute keys; then the transmission overhead of CSP is n·nTPA elements.

(3) Storage and transmission overhead of user (or TPA)

Let the user have natt attributes; his storage overhead is natt elements. The user needs to send pf′ to CSP, and natt attributes to CSP to apply for the attribute keys. Therefore, the transmission overhead of the user is natt + 1 elements.

6.2 Simulation

To further analyze the performance of the proposed algorithm, two laptops equipped with an Intel Core i5-4210M 2.60GHz CPU and 8GB of RAM were used as the data owner and the user respectively. A CentOS service system with a 4-core CPU and 8GB of RAM was rented from Alibaba Cloud to simulate CSP. The experimental code is based on PBC-0.5.14 (pairing-based cryptography library), modified and written with cpabe-0.11. The size of an element g of the group G is 512 bits, and the length of elements in Zp is 160 bits. An access policy of the form (S1 AND S2 AND ... AND Sn), where Si is an attribute, is used to simulate the most complex situation. Each experiment was repeated 20 times in the same environment and the results were averaged. The proposed algorithm is called MV-VCP. In the experiments, MV-VCP is compared with the partially outsourced protocol (POP) and fully outsourced protocol (FOP) [18], and with CP-ABE [23].

(1) The computation overhead of data owner at the preprocessing phase

The preparation time refers to the computation time of the data owner before uploading the data to CSP. It is seen from Fig. 3 that the computation overhead increases linearly with the number of attributes. In Fig. 3, CP-ABE almost overlaps with POP and FOP since its running time is only a few milliseconds longer. The computation overhead of MV-VCP is higher than that of POP, FOP, and CP-ABE because, in MV-VCP, the data owner not only encrypts the data according to the access structure but also generates the intermediate attribute keys before uploading the data. Moreover, since the data owner generates the keys during the preprocessing stage, subsequent users do not need to apply to the data owner when requesting attribute keys. This stage is performed only once during the entire verification control process, so it does not place a serious burden on the data owner.


Fig. 3. Data owner preprocessing time

(2) Computation overhead at the key distribution phase

Fig. 4(a) shows the computation overhead incurred by the data owner at the key distribution phase. MV-VCP does not require the user to apply for the attribute key from the data owner, so its overhead is zero. POP requires the user to apply for the attribute key from the data owner, so the key generation time is linearly related to the number of attributes, as shown in Fig. 4(a) for a single user. FOP consumes a few milliseconds more than POP and CP-ABE since the data owner needs to generate a pair of signature keys for each file in FOP. If multiple users apply for attribute keys, a large amount of computation overhead is consumed, making the data owner vulnerable to resource exhaustion. The data owner could also generate the attribute keys for all attributes in the attribute set S1 in advance; when a user applies for an attribute key, the data owner then only needs to extract the corresponding key from the previously generated keys and send it to the user, in which case the computation overhead of the data owner is also limited.

However, key distribution by the data owner is actually unreasonable. The data owner is not always online; if a user applies for an attribute key while the data owner is offline, the user has to wait until the data owner comes online. Moreover, if the key is distributed by the data owner, then when a user registers with or purchases the service of CSP, CSP needs to forward the request to the data owner, or the user needs to find the data owner and apply for the attribute key according to CSP's guidance. This process is extremely complicated.

The computation overhead of CSP distributing keys is shown in Fig. 4(b). Since keys in CP-ABE, POP and FOP are not distributed by CSP, their overhead is zero. The overhead of AttGenCSP(·) is still relatively large in Fig. 4(b): when a user applies for an attribute key, CSP calculates the attribute keys of the attribute set S2, incurring a large overhead. Therefore, the attribute keys SK2 of the attribute set S2 can be generated by CSP in advance; when a user applies for a key, CSP only needs to search for it in SK2.


Fig. 4. Key distribution time

(3) Storage and transmission overhead

The storage overhead of attribute keys in MV-VCP is slightly higher than that of POP and FOP, and the overhead of POP equals that of FOP, as shown in Fig. 5. The overhead is linearly related to the number of attributes in the access structure A. When the number of attributes is 8, the attribute keys occupy 2312KB, which still does not burden CSP, since the storage resources on CSP are ample.


Fig. 5. Storage overhead of attribute keys

The transmission overhead of MV-VCP, POP and FOP at the verification authority detection stage is shown in Fig. 6. The overhead of MV-VCP is slightly lower than that of POP and FOP. At this stage, CSP sends the verification sentinel ST to the user to perform authority detection, where the size of ST is linearly related to the number of attributes in the access structure A. POP, in contrast, sends not only the ciphertext CT but also a challenge for detecting user rights, and the overhead of FOP is slightly higher than that of POP since FOP transmits one more ciphertext of the signature key.


Fig. 6. Transmission overhead of authority

(4) Verification authority detection

Before a user challenges CSP with data integrity verification, his authority needs to be checked first. The experiment set up 10 users to challenge CSP, where each user launched only one challenge. When a user who does not satisfy the access structure A challenges the data on CSP, the computation overhead of CSP performing proof generation is shown in Fig. 7. It can be seen that the verification control in MV-VCP greatly reduces the computation overhead of CSP computing unnecessary verification proofs relative to a traditional verification algorithm, e.g., DHT-PA [13].


Fig. 7. Proof generation time of unqualified users (or TPAs)

Assume that 50% of the users in the experiment do not have the authority to verify the data integrity. The experimental results are shown in Fig. 8: the effective verification rate of MV-VCP reaches 100%, while that of DHT-PA is 50%. This is mainly because MV-VCP filters out users who have no authority to verify the data integrity on CSP, whereas DHT-PA performs no authority check on such users.


Fig. 8. Effective verification of users (or TPAs)

In summary, MV-VCP can perform verification control on users: CSP and the data owner construct the access structure together and use it to control the verification. Users can challenge the integrity of the corresponding data on CSP if and only if they satisfy the access structure. MV-VCP greatly reduces the computational burden of CSP by removing unauthorized verification.

7. Conclusion

In the process of data integrity verification in cloud storage, users without data access authority performing integrity verification add unnecessary verification overhead for the data owner. This paper proposes a verification control algorithm that mainly includes two aspects. On the one hand, the data owner and CSP jointly construct the access structure, which ensures the fairness of the integrity verification results. On the other hand, the user hides his attributes from CSP during the key distribution stage, which ensures the user's privacy. The proposed algorithm can effectively intercept users without data access permission, so that only users who meet the access policy can perform data integrity verification. In the future, we will research the verification control algorithm of multiple data owners.

References

  1. J. Brodkin, "Gartner: Seven cloud-computing security risks," Infoworld, vol. 1, no. 1, pp. 1-3, July 2, 2008.
  2. B. R. Kandukuri and A. Rakshit, "Cloud Security Issues," in Proc. of the 2009 IEEE International Conference on Services Computing, Bangalore, India: IEEE, pp. 517-520, September 21-25, 2009.
  3. Y. Li, Y. Yu, G. Min, W. Susilo, J. Ni and K.-K. R. Choo, "Fuzzy Identity-Based Data Integrity Auditing for Reliable Cloud Storage Systems," IEEE Transactions on Dependable and Secure Computing, vol. 16, no.1, pp.72-83, January-February 1, 2019. https://doi.org/10.1109/TDSC.2017.2662216
  4. S. Zawoad, R. Hasan and K. Islam, "SECProv: Trustworthy and Efficient Provenance Management in the Cloud," in Proc. of the 2018 IEEE INFOCOM, Honolulu, HI, USA: IEEE, pp. 1241-1249, April 16-19, 2018.
  5. Y. Yu, M. H. Au, G. Ateniese, X. Huang, W. Susilo, Y. Dai and G. Min, "Identity-Based Remote Data Integrity Checking With Perfect Data Privacy Preserving for Cloud Storage," IEEE Transactions on Information Forensics and Security, vol. 12, no. 4, pp. 767-778, April, 2017. https://doi.org/10.1109/TIFS.2016.2615853
  6. Y. Deswarte, J.-J. Quisquater and A. Saidane, "Remote Integrity Checking - How to Trust Files Stored on Untrusted Servers," in Proc. of the Integrity and Internal Control in Information Systems, Lausanne, Switzerland: Springer, pp. 1-11, November 13-14, 2003.
  7. G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson and D. Song, "Provable data possession at untrusted stores," in Proc. of the ACM Conference on Computer and Communications Security, Alexandria, VA, USA: ACM, pp. 598-609, October, 2007.
  8. H. Shacham and B. Waters, "Compact Proofs of Retrievability," Journal of Cryptology, vol. 26, no. 3, pp. 442-483, July, 2013. https://doi.org/10.1007/s00145-012-9129-2
  9. D. Boneh, B. Lynn, H. Shacham, "Short Signatures from the Weil Pairing," Journal of Cryptology, vol. 17, no. 4, pp. 297-319, September, 2004. https://doi.org/10.1007/s00145-004-0314-9
  10. G. Yang, J. Yu, W. Shen, Q. Su, Z. Fu and R. Hao, "Enabling Public Auditing for Shared Data in Cloud Storage Supporting Identity Privacy and Traceability," Journal of Systems and Software, vol. 113, no. C, pp. 130-139, March, 2016. https://doi.org/10.1016/j.jss.2015.11.044
  11. J. Shen, J. Shen, X. Chen, X. Huang and W. Susilo, "An Efficient Public Auditing Protocol With Novel Dynamic Structure for Cloud Data," IEEE Transactions on Information Forensics and Security, vol. 12, no. 10, pp. 2402-2415, October, 2017. https://doi.org/10.1109/TIFS.2017.2705620
  12. J. M. Rivas, J. J. Gutierrez, J. C. Palencia and M. G. Harbour, "Deadline Assignment in EDF Schedulers for Real-Time Distributed Systems," IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 10, pp. 2671-2684, October 1, 2015. https://doi.org/10.1109/TPDS.2014.2359449
  13. H. Tian, Y. Chen, C.-C. Chang, H. Jiang, Y. Huang, Y. Chen and J. Liu, "Dynamic-Hash-Table Based Public Auditing for Secure Cloud Storage," IEEE Trans. Services Computing, vol. 10, no. 5, pp. 701-714, September 1, 2017. https://doi.org/10.1109/TSC.2015.2512589
  14. T. Jiang, X. Chen and J. Ma, "Public Integrity Auditing for Shared Dynamic Cloud Data with Group User Revocation," IEEE Transactions on Computers, vol. 65, no. 8, pp. 2363-2373, August 1, 2016. https://doi.org/10.1109/TC.2015.2389955
  15. G. Xu, M. Lai, X. Feng, Q. Huang, X. Luo, L. Li and S. Li, "Verification Algorithm for the Duplicate Verification Data with Multiple Verifiers and Multiple Verification Challenges," KSII Transactions on Internet and Information Systems, vol. 15, no. 2, pp. 558-579, 2021.
  16. G. Xu, S. Han, Y. Bai, X. Feng and Y. Gan, "Data tag replacement algorithm for data integrity verification in cloud storage," Computers & Security, vol. 103, no. 3, pp.1-12, 2021.
  17. W. Shen, J. Qin, J. Yu, R. Hao and J. Hu, "Enabling Identity-Based Integrity Auditing and Data Sharing with Sensitive Information Hiding for Secure Cloud Storage," IEEE Trans. Information Forensics and Security, vol. 14, no. 2, pp. 331-346, February, 2019. https://doi.org/10.1109/tifs.2018.2850312
  18. K. Xue, W. Chen, W. Li, J. Hong and P. Hong, "Combining Data Owner-Side and Cloud-Side Access Control for Encrypted Cloud Storage," IEEE Trans. Information Forensics and Security, vol. 13, no. 8, pp. 2062-2074, August, 2018. https://doi.org/10.1109/TIFS.2018.2809679
  19. J. Idziorek and M. Tannian, "Exploiting Cloud Utility Models for Profit and Ruin," in Proc. of the IEEE CLOUD, Washington, DC: IEEE, pp. 33-40, August, 2011.
  20. N. Vlajic and A. Slopek, "Web bugs in the cloud: Feasibility study of a new form of EDoS attack," in Proc. of the GLOBECOM Workshops, Austin, TX, USA: IEEE, pp. 64-69, December 8-12, 2014.
  21. G. Ananthanarayanan, S. Agarwal, S. Kandula, A. Greenberg, I. Stoica, D. Harlan and E. Harris, "Scarlett: Coping with skewed content popularity in mapreduce clusters," in Proc. of the Eurosys, Salzburg, Austria: ACM, pp. 287-300, April, 2011.
  22. Y. Zhang, C. Xu, X. Liang, H. Li, Y. Mu and X. Zhang, "Efficient Public Verification of Data Integrity for Cloud Storage Systems from Indistinguishability Obfuscation," IEEE Transactions on Information Forensics and Security, vol. 12, no. 3, pp. 676-688, March, 2017. https://doi.org/10.1109/TIFS.2016.2631951
  23. B. Waters, "Ciphertext-Policy Attribute-Based Encryption: An Expressive, Efficient, and Provably Secure Realization," in Proc. of the Public Key Cryptography, Taormina, Italy: Springer, pp. 53-70, March 6-9, 2011.
  24. B. Li, D. Huang, Z. Wang and Y. Zhu, "Attribute-based Access Control for ICN Naming Scheme," IEEE Trans. Dependable Sec. Comput, vol. 15, no. 2, pp. 194-206, March, 2018. https://doi.org/10.1109/tdsc.2016.2550437
  25. T. V. X. Phuong, R. Ning, C. Xin and H. Wu, "Puncturable Attribute-Based Encryption for Secure Data Delivery in Internet of Things," in Proc. of the INFOCOM 2018, Honolulu, HI, USA: IEEE, pp. 1511-1519, April 16-19, 2018.
  26. J. Idziorek, M. Tannian and D. Jacobson, "Attribution of Fraudulent Resource Consumption in the Cloud," in Proc. of the IEEE CLOUD, Honolulu, HI, USA: IEEE, pp. 99-106, June 24-29, 2012.
  27. A. Sahai and B. Waters, "Fuzzy Identity-Based Encryption," EUROCRYPT, Aarhus, Denmark: Springer, pp. 457-473, May 22-26, 2005.
  28. V. Goyal, O. Pandey, A. Sahai and B. Waters, "Attribute-based encryption for fine-grained access control of encrypted data," in Proc. of the ACM Conference on Computer and Communications Security, Alexandria, VA, USA: ACM, pp. 89-98, October, 2006.
  29. J. Bethencourt, A. Sahai and B. Waters, "Ciphertext-Policy Attribute-Based Encryption," in Proc. of the IEEE Symposium on Security and Privacy, Oakland, California, USA: IEEE, pp. 321-334, May 20-23, 2007.
  30. Z. Liu and Z. Cao, "On Efficiently Transferring the Linear Secret-Sharing Scheme Matrix in Ciphertext-Policy Attribute-Based Encryption," IACR Cryptology ePrint Archive, January 2010.