IoT-Key Agreement Protocol Based on The Lowest Work-Load Versions of The Elliptic Curve Diffie-Hellman

A key agreement protocol (KAP) is a fundamental building block in any cryptosystem, since it ensures secure communication between two parties. In limited-resource environments such as the IoT, a KAP must additionally be lightweight and efficient in consuming resources. Although the Elliptic Curve Diffie-Hellman (ECDH) algorithm is often considered efficient, providing an acceptable degree of security with low resource consumption, it suffers from weakness against Man-In-The-Middle Attacks (MITMA). This paper presents two versions of the Variant Elliptic Curve Diffie-Hellman (VECDH) algorithm as a key agreement protocol. The security analysis indicates that the proposed algorithms are more robust against the MITMA and provide several additional security features. The proposed algorithms scale down the computational complexity by decreasing the number of arithmetic operations, giving them the lowest workload and making them suitable for application in restricted-resource environments.


Introduction
Secure communication between any two parties is achieved when each of them has the capability to verify the legitimacy of the other. In this context, several protocols have been established. One of them, the KAP, is a security protocol used to provide a shared session key between two communicating parties. This key is essential for establishing contact and boosting confidence in communication security [1].
An Elliptic Curve Cryptosystem (ECC) is an alternative to popular public key security models such as RSA and the Diffie-Hellman key exchange algorithm. ECC provides the same security level with a smaller key size, lower resource consumption, and faster computation [2]. ECC offers multiple security solutions, such as public key encryption and decryption algorithms; digital signature certificates via the Elliptic Curve Digital Signature Algorithm (ECDSA); and (what is in the scope of this paper) a key agreement protocol via the ECDH algorithm [3].
However, the standard ECDH algorithm suffers from a vulnerability against man-in-the-middle attacks, in which all exchanged messages can be read and modified by the impersonating attacker without alerting the legitimate users. Therefore, the EC parameters should be chosen carefully. Furthermore, researchers have suggested two solutions to make the ECDH algorithm strong enough against the mentioned attack [4]: 1. Authentication of the user's public key: validating the user's public key is required when it is static. 2. Temporary public key: both communicating sides produce new public keys for each communication session. This solution enables Perfect Forward Secrecy (PFS) and reduces the algorithm's complexity, since it does not require extra authentication computation.
Yooni and Yoo [5] proposed a new two-party key agreement protocol (EECKE-1N) as a modification of ECKE-1N [6]. This protocol combines public key authentication and ECDH key exchange. Most importantly, it reduces the number of arithmetic operations in a single key round, making the protocol usable on the lowest-cost networks. It also achieves efficient security features such as known-key security, forward secrecy, unknown key-share resilience, and key control, while retaining the security features that ECKE-1N already enjoyed.
As a different improvement idea, Kaur and Paraste [7] proposed two enhancements to ECDH. In the first, the secret key is computed as the product of the secret key's coordinates. In the second, the coordinates are exponentiated to encrypt a message, and the receiver computes the inverse to decrypt the cipher. These multiplication and exponentiation operations strengthen the algorithm, but at the same time more execution time and resources are required to accommodate its complexity.
Mehibel and Hamadouche [8] proposed a new integrated algorithm that uses ECDSA to authenticate the secret session key, depending on two random variables. The proposed algorithm resolves the weakness of a previous integrated algorithm [9] that used a single random variable. It achieves multiple security features such as mutual authentication and PFS, and, more crucially, it is more immune to the man-in-the-middle attack. The authors also claimed that the algorithm is lightweight and suitable for application in restricted-resource environments.

Ripon Patgiri [10] proposed a new protocol called "PrivateDH" to address the man-in-the-middle weakness that standard ECDH suffers from. This protocol uses the AES algorithm to encrypt the public shareable parameters of ECDH and the RSA algorithm to retrieve the public key. The performance analysis shows that PrivateDH can report a MITM attack to the receiver when the attacker breaks the public key. Although the protocol has an obvious computation overhead, it achieves relatively better communication overhead. Still, this protocol does not look efficient for restricted-resource environments such as the IoT.
Dar et al. [11] proposed an incorporated common shared key as an authentication procedure to make ECDH more secure and reliable against MITM attacks. However, the performance analysis shows that the modified algorithm consumes more memory, since it has more computation overhead than standard ECDH. Thus, the analytic results demonstrate that the proposed algorithm cannot run efficiently with limited memory and processing power.
The proposed algorithms aim to further reduce computation, making them more suitable for application in limited-resource environments such as the IoT. The algorithms add an authentication scheme that prepares for sharing a secret key between legitimate parties. At the same time, the VECDH algorithms use bidirectional authentication, meaning that the calculation of the secret key depends on the communication direction. This feature allows both parties to change their parameters, and thus the encryption key, for every new session, enabling the PFS protocol.

Preliminaries

Elliptic Curve:
An elliptic curve E over a finite field has initial parameters that both communicating sides should synchronize. These parameters are called the elliptic curve domain parameters: T = {p, a, b, G, n, h}, where p is a large prime number; a, b ∈ Fp specify the curve equation E(a, b); G(x, y) is the base point on E(a, b); n is the order of G; and h is the cofactor, i.e., h = #E(Fp)/n. The security of E over a finite field depends on the Elliptic Curve Discrete Logarithm Problem (ECDLP), for which no known sub-exponential algorithm gives a solution in polynomial time. The hardness of the ECDLP lies in retrieving the multiplier point and the multiplicand integer from a known product point [12]. Therefore, the EC parameters should be chosen carefully to make the algorithm immune to attacks on the ECDLP.
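To make the domain parameters and the arithmetic behind the ECDLP concrete, the sketch below implements elliptic-curve point addition and double-and-add scalar multiplication. The curve (y² = x³ + 2x + 3 over F₉₇) and base point are hypothetical toy values chosen for illustration; they are far too small for real security.

```python
# Toy elliptic-curve arithmetic, for illustration only. Real systems use
# standardized domain parameters T = {p, a, b, G, n, h} (e.g. secp256r1);
# the tiny curve below is a hypothetical example, not a secure choice.
p, a, b = 97, 2, 3          # curve E: y^2 = x^3 + 2x + 3 over F_97
G = (0, 10)                 # a base point on E (10^2 = 100 = 3 mod 97)

def ec_add(P, Q):
    """Add two curve points; None stands for the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                       # P + (-P)
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mul(k, P):
    """Compute k*P by double-and-add; recovering k from k*P is the ECDLP."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

Q = scalar_mul(13, G)       # public point Q = 13*G; finding 13 from Q is the ECDLP
x, y = Q
assert (y * y - (x**3 + a * x + b)) % p == 0   # Q lies on the curve
```

Scalar multiplication costs O(log k) point operations, while recovering k from Q on a properly sized curve is believed to require exponential effort, which is what the protocols below rely on.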

Elliptic Curve Diffie-Hellman (ECDH) Key exchange:
Both sides of the communication have the same domain parameters and generate different private keys. Say A generates nA as its private key and B generates nB. Then both compute their public keys PA(x, y) and PB(x, y) as the products [3]:

PA = nA · G (2)
PB = nB · G (3)

Then A can encrypt a message using a symmetric secret key S defined as follows:

S = nA · PB (4)

And B can decrypt A's encrypted message using the same symmetric secret key S, computed as follows:

S = nB · PA (5)

The proof that nA · PB = nB · PA follows from the scalar multiplications in (2) and (3): nA · PB = nA · nB · G = nB · nA · G = nB · PA. Figure 1 shows how a man-in-the-middle attack can threaten ECDH: the adversary can intercept the traffic and expose the exchanged messages without alerting the communication participants [13].
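The exchange described by equations (2)-(5) can be exercised end to end with a short sketch. The curve, base point, and private keys below are hypothetical toy values chosen only to show that both sides derive the same secret.

```python
# ECDH key agreement on a hypothetical toy curve (y^2 = x^3 + 2x + 3 over
# F_97, base point G = (0, 10)); all values are illustrative, not secure.
p, a, b = 97, 2, 3
G = (0, 10)

def ec_add(P, Q):
    """Point addition; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + a) * pow(2 * y1, -1, p) if P == Q
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

n_A, n_B = 7, 11                # private keys of A and B (toy values)
P_A = scalar_mul(n_A, G)        # (2)  A's public key P_A = n_A * G
P_B = scalar_mul(n_B, G)        # (3)  B's public key P_B = n_B * G
S_A = scalar_mul(n_A, P_B)      # (4)  A's shared secret S = n_A * P_B
S_B = scalar_mul(n_B, P_A)      # (5)  B's shared secret S = n_B * P_A
assert S_A == S_B               # n_A(n_B * G) = n_B(n_A * G)
```

Only PA and PB cross the wire; an eavesdropper who sees them still faces the ECDLP to recover nA or nB, but an active attacker can substitute his own public keys, which is exactly the MITM weakness discussed above.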

The Proposed System Design:
The description of the algorithms of VECDHs is as follows:

VECDH Version 1 algorithm:
In the first stage of VECDH1, the centralized server is responsible for registering the entities within the local network. The registration is vital for authenticating the registered entities. The server chooses a nonce private key (the global certificate) and divides it into local certificates based on Shamir's secret sharing algorithm. Thus, every plugged-in device (PC, laptop, IoT device, etc.) gains a specific local certificate, and each entity validates the communicating side by reconstructing the shared secret from its own local certificate and the other side's local certificate. This registration stage was proposed in previous work published in [14]. Figure 2 shows the VECDH1 model, whose steps are as follows: 1. Both sides compute their public keys by (2) and (3).
2. Both A and B compute the authentication parameters to authenticate each other.
Where ID1 and ID2 are the identities of the sender and destination, respectively, and T is immediate information comprising a query time and other data that change with time. 5. When A wants to send a response message, he/she computes the session secret key for the given session as in (8). Figure 2 shows the steps of VECDH1 to generate the authenticated session key. It can be proven that the computation of the session key at both sides is equal: assume A computes the key; in the same session, after B verifies A's signature, B computes the decryption key in the corresponding way. Ultimately, the keys at A and B are the same. Figure 3 shows the VECDH2 model, whose steps are as follows: 1. Both sides compute their public keys by (2) and (3). … (11). 4. Whenever one of them needs to send a message, he/she verifies the certificate of the destination and computes the session secret key as in (12), where H{ID1, ID2, …} is a secure hash function and ID1 and ID2 are the identities of the sender and destination sides.

VECDH Version 2 algorithm:
The computation of (12) depends on the communication direction: the key can be computed by A to encrypt his/her message and computed at B to form the decryption key. When B wants to send his/her own message, B computes a new key from (12) as an encryption key, whereas A computes it as a decryption key. At any time, both sides can generate new (encryption and decryption) keys as desired using (12). Notably, both sides use the same secure hash function to compute the hash value; as a result, this scheme achieves perfect forward secrecy. It can be proven that the computation of the key at both sides is equal: assume A computes the encryption key; in the same session, B computes the decryption key in the corresponding way, so the keys at A and B are the same.
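The direction-dependent behavior described above can be sketched with a keyed hash. This is an illustrative sketch only, not the paper's exact equation (12), whose symbols are defined in the original: the idea shown is that hashing the ordered identities ID1 and ID2 together with the shared secret and a per-session nonce yields a key that changes when the direction (sender/destination order) or the session changes.

```python
# Illustrative sketch of a direction-dependent session key (an assumption
# standing in for equation (12), not the paper's exact construction).
import hashlib

def session_key(shared_secret: bytes, sender_id: bytes, dest_id: bytes,
                nonce: bytes) -> bytes:
    """Hash the ordered identities with the shared secret and a session
    nonce; swapping sender and destination yields a different key."""
    return hashlib.sha256(shared_secret + sender_id + dest_id + nonce).digest()

s = b"shared-ecdh-secret"       # placeholder for the shared ECDH point S
k_ab = session_key(s, b"ID_A", b"ID_B", b"nonce1")   # A -> B encryption key
k_ba = session_key(s, b"ID_B", b"ID_A", b"nonce1")   # B -> A key differs
assert k_ab != k_ba             # direction-dependent, as in VECDH version 2
```

Because both sides apply the same deterministic hash to the same inputs, the sender's encryption key equals the receiver's decryption key for that direction, while a fresh nonce per session gives fresh keys, which is the PFS behavior claimed for the scheme.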

Resistance to the Man-in-the-Middle Attack:
The VECDH1 algorithm can avoid the man-in-the-middle attack when run in a real-time state, since the adversary needs to gain a valid local certificate from the centralized server in the earlier (registration) state to become authenticated to the remote side; without it, he/she cannot calculate the session key. If he/she forges a local certificate, the validation step (step 2) detects the forgery: the forged certificate cannot fulfill the validation, because the global certificate cannot be rebuilt from a share that was not generated by the centralized server. In VECDH2, on the other hand, resistance relies on a secret hash function and the secret parameters that are distributed during the installation stage.

2. Mutual authentication: in VECDH1, the communication participants can authenticate each other using the certification parameters obtained in the registration phase. In turn, the validation phase involves rebuilding the global certificate from the local authentication parameters and the remote side's parameters, so forged parameters can be detected.
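The registration and validation mechanism above rests on Shamir's secret sharing: only shares issued by the centralized server recombine into the global certificate. The sketch below shows the split/reconstruct idea; the field prime, threshold, and certificate values are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the registration stage: the server splits a global certificate
# into local certificates via Shamir's (t, n) secret sharing, and any t
# entities can jointly reconstruct it to authenticate each other. The prime
# and all values here are illustrative assumptions.
import random

PRIME = 2**61 - 1   # field prime (any prime larger than the secret works)

def split(secret, t, n):
    """Return n shares (i, f(i)) of a random degree-(t-1) polynomial f
    over F_PRIME with f(0) = secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, PRIME) for j, c in enumerate(coeffs)) % PRIME)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

gc = 123456789                        # global certificate (toy value)
shares = split(gc, t=2, n=5)          # five local certificates, threshold 2
assert reconstruct(shares[:2]) == gc  # any two entities recover the secret
```

A share fabricated by an adversary is a point off the server's polynomial, so interpolating with it yields a value different from the global certificate, which is exactly how the validation step exposes a forged local certificate.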

Perfect Forward Secrecy protocol (PFS):
VECDH1 and VECDH2 enable the PFS protocol, since the global certificate can be changed at each new session by the centralized server, or even each time a new device is plugged in, as a new set of local certificates is then calculated and distributed among the network's entities. Accordingly, the session key is volatile: the key used in the current session is not used in further sessions. For example, VECDH1 enables the PFS protocol because the certification parameters of one session produce one session key, while the fresh parameters of the next session produce a different key.

4. Key privacy: the attacker cannot retrieve a session secret key established by honest parties, because the underlying computations of the VECDH1 and VECDH2 algorithms depend on the intractability of the ECDLP when computing the public keys in (2) and (3).

5. Key independence: in VECDH1, the calculation of the session key is independent of previous and subsequent session keys, because the centralized server can generate a new collection of local certificates; this leads to new authentication parameters and thus a new session key, so each individual session has fresh parameters. Hence, revealed keys of a specific session, or of multiple sessions, do not help in deducing the key of the current session. The proof of this can be deduced from the one demonstrated at point 3. This feature also prevents key-compromise impersonation attacks on the legal communication parties, since fresh certification parameters are generated at every new session; in addition, this prevention is confirmed by the impossibility of deducing the private key, per point 4.

6. Hash function immunity: suppose an adversary can reveal the secret key of a specific legitimate user, gain the authentication parameters, and try to deceive the communicating parties; here the hash function's role is to preserve the secrecy of the system, relying on the collision resistance and deterministic primitives of the hash function.
Thus, the secret hash function reports a potential man-in-the-middle attack or other malicious interception. Table 1 reports a comparison of the computational effort requirements of the VECDH algorithms and other proposed algorithms. The first column refers to the count of scalar point multiplication operations; the second column refers to the number of field multiplications.

Oudah and Maolood, Iraqi Journal of Science, 2023, Vol. 64, No. 8, pp: 4198-4207

The third column refers to hash computation operations, and the last to the number of field inversion operations. These operations affect memory and processor load. As shown, the proposed VECDH1 adapts its performance to limited-resource environments and performs better than VECDH2; with respect to the remaining methods, both achieve a low workload and suitable computational effort, which are important features of the VECDH versions. From another aspect, the complexity of the proposed algorithms has been compared with [8] with respect to the execution time of the algorithms' phases (certificate generation, certificate validation, and session key generation); Table 2 depicts this comparison. The comparison illustrates that VECDH1 has the lowest complexity, which leads to the lowest workload. This characteristic makes the proposed algorithm run faster with a limited processor and memory and, most importantly, gives it the quickest response when working in a real-time system, which is what most IoT networks aim for. Table 3 shows the security capabilities of the proposed algorithms VECDH1 and VECDH2 and compares them with other research works. The comparison shows that the proposed algorithms are efficient, achieving more security features.

Conclusion
The VECDH algorithms enhance the security level of the standard algorithm by improving its immunity against various attacks, such as the man-in-the-middle attack, from which the original algorithm suffered. At the same time, the modest workload allows the algorithms to run with the lowest resource consumption, as confirmed through the execution-time evaluation. These features make the VECDH algorithms suitable for restricted-resource environments, such as the IoT, especially those running in a real-time fashion. Future work, building on VECDH1 and VECDH2, can develop a novel pseudo-random key generator as a further security layer for encrypting sensor information and captured pictures and videos, so that real-time information can be sent across a hostile network with highly secure coding.

Disclosure and conflict of interest
Conceptualization, methodology, software, validation, formal analysis, investigation, resources, data curation, writing-original draft preparation, writing-review and editing, and visualization have been implemented by the first author. Supervision and project administration have been implemented by the second author.