A Genetic Algorithm for Task Allocation Problem in the Internet of Things

In recent years, the Internet of Things (IoT) has gained remarkable attention in both academia and industry. The main goal of the IoT is to connect everyday objects with different capabilities to the Internet so that they can share resources and carry out assigned tasks. Most IoT objects are heterogeneous in terms of available energy, processing ability, memory storage, etc. Consequently, one of the most important challenges facing IoT networks is energy-efficient task allocation. An efficient task allocation protocol in an IoT network should ensure a fair and efficient distribution of resources so that all objects can collaborate dynamically with limited energy. The canonical definition of network lifetime in the IoT is the period over which objects can cooperate to carry out all assigned tasks. The main contribution of this paper is to address the problem of task allocation in the IoT as an optimization problem with a lifetime-aware model. A genetic algorithm is proposed as a task allocation protocol. For the proposed algorithm, a problem-tailored individual representation and a modified uniform crossover are designed. Further, the individual initialization and perturbation operators (crossover and mutation) are designed to remedy the infeasibility of any solution generated or reached by the proposed genetic algorithm. The results show reasonable performance for the proposed genetic-based task allocation protocol and demonstrate the necessity of designing problem-specific operators instead of adopting the canonical counterparts.


Introduction
The rapid growth of Internet of Things (IoT) technologies provides a new perspective on cooperation between components of the physical world and engineering systems. Examples include the Smart Home, Smart City, Connected Car, Connected Health (digital health/telehealth/telemedicine), servers, and sensors. These heterogeneous devices can communicate and cooperate in the IoT. Further examples extend from current IoT solutions to the Collaborative IoT, whose devices can be connected through different communication technologies, e.g., 2G, 3G, 4G, LTE, 5G, WiFi, Zigbee, Bluetooth, and BLE.
However, one of the main issues in improving network efficiency is task allocation. This key challenge has recently prompted a series of studies on task allocation in energy-efficient IoT. The main aim of energy-efficient task allocation is to enable IoT objects to cooperate over a long period of time to perform different tasks. A simple example of the task allocation problem in the IoT is shown in Figure-1.
In the literature, many protocols have been proposed for solving the task allocation problem in the IoT. Colistra et al. [1] were the first to handle the task allocation problem while improving the network lifetime, and many other studies followed that work. Recently, Khalil et al. [2] proposed an approach that prevents the untimely end of the network lifetime by guaranteeing the completion of all tasks assigned to the network while preserving the energy of battery-powered objects. Further details of these works are presented in Section 2. In what follows, we summarize the main contributions of this paper:
- The problem of task allocation in the IoT is addressed as an optimization problem with a new formulation expressing the total set of objects as one active subset and a number of inactive subsets, where only the active subset works at each round in the IoT lifetime. To the best of our knowledge, no such study has been reported in the literature.
- A single-objective genetic algorithm (GA) is developed to tackle the formulated problem.
- A modified uniform crossover is proposed to improve the performance of the adopted GA.
The rest of the paper is organized as follows. Section 2 describes the related scenarios proposed in the literature for solving the task allocation problem in IoT networks. Section 3 defines the proposed protocol. In Section 4, the traditional genetic algorithm and the proposed genetic algorithm are evaluated. Finally, concluding remarks and some future directions are given in Section 5.

Literature Review
The task allocation problem has received a wide range of studies and has been addressed in many real applications, for example in distributed and collaborative systems [3,4,5] and in wireless sensor networks [6,7,8]. However, existing methods have a limited scope in studying the task allocation problem in the IoT, where resource allocation remains an open issue. The network is heterogeneous in the capabilities of its objects, and this heterogeneity in turn complicates the assignment problem.
The work proposed in previous studies [9,10] is of limited practical use, as it focused on assumptions about finding and allocating resources without implementing a service that realizes the best configuration of optimal resource allocation in the IoT. Investigation of the characteristics of good task allocations began with the early works [1,11,12], in which the authors provided a scenario for allocating and sharing resources among all nodes in the IoT network. Their protocols aimed to maximize the network lifetime, expressed as the probable duration of the network before the expiration of the first object. Task groups and virtual objects were used: according to their protocol, an IoT is made of groups of object nodes, i.e., task groups that perform similar and interchangeable tasks, while control is delegated to one node in each task group, known as a virtual object (VO). A VO receives a signal from the central server (Central Deployment Server) and redirects it to the appropriate nodes in the task group to activate them. An IoT Device-to-Device (D2D) cooperation framework for task allocation among IoT objects was suggested later [13]. It enables direct interaction between IoT objects through proximity services based on D2D communication. The authors presented a game-theoretic approach based on the Nash Equilibrium Point (NEP) to minimize the energy terms of the objects' utility functions. The D2D object nodes are divided into clusters, with one object in each cluster designated as cluster head; the central server sends a request to the cluster head, which in turn redirects the request to the cluster nodes to perform specific tasks.
The energy-aware IoT (EnergIoT) approach was proposed in another report [14], where the authors defined a hierarchical clustering approach based on duty cycle ratios to maximize the network lifetime of battery-powered IoT devices; different duty cycle ratios are designed to balance energy consumption among object nodes. Based on the coverage-lifetime problem in wireless sensor networks (WSNs), an evolutionary algorithm was proposed [15], where a single-objective optimization problem was adopted to solve the coverage-lifetime problem as disjoint groups under both Boolean and probabilistic sensing models. A later work [16] adopted the genetic algorithm (GA) as an efficient optimization algorithm with the aim of maintaining sensor schedules of minimum rank; the sensors are scheduled into disjoint groups to design an energy-efficient wireless sensor network that can reliably cover a target area.

The Proposed Task Allocation Protocol
An IoT system can be modeled mathematically by an n × m matrix M with a set of tasks T = {T1, T2, ..., Tn} and a set of objects O = {O1, O2, ..., Om}. Rows of the IoT matrix are labeled with the tasks in T, and columns are labeled with the objects in O. Also, let S be a collection of subsets of tasks, i.e., S = {S1, S2, ..., Sm}, where each Sj ∈ S defines the set of tasks that can be performed by object Oj. The tasks are assumed to be randomly assigned to the objects in O. Any entry M(i, j) is set to 1 if Oj can perform Ti; otherwise, M(i, j) = 0.
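As a concrete illustration of this model, the sketch below builds a random n × m task-object matrix and derives each object's task set Sj from its column. The helper names (`build_iot_matrix`, `object_task_sets`) are hypothetical, not part of the authors' implementation.

```python
import random

def build_iot_matrix(n_tasks, n_objects, seed=0):
    """M(i, j) = 1 iff object O_j can perform task T_i; tasks are
    assigned to objects at random, as assumed in the model."""
    rng = random.Random(seed)
    matrix = [[0] * n_objects for _ in range(n_tasks)]
    for j in range(n_objects):
        # Each object receives a random non-empty subset of the tasks.
        for i in rng.sample(range(n_tasks), rng.randint(1, n_tasks)):
            matrix[i][j] = 1
    return matrix

def object_task_sets(matrix):
    """S_j: the set of tasks that object O_j can perform (column j of M)."""
    n_tasks, n_objects = len(matrix), len(matrix[0])
    return [{i for i in range(n_tasks) if matrix[i][j]} for j in range(n_objects)]
```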
A critical object set, C, is identified as the smallest set of objects with the ability to perform the critical task, where a critical task is defined as the task with the minimum number of objects that can perform it (refer to Figure-). In this paper, we state the task allocation problem in the IoT as an optimization problem in which the GA searches for the maximum number of object subsets such that each subset can completely perform all the tasks in T. Note that the maximum number of object subsets cannot exceed |C|.
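Under this formulation, the critical task and its supporting objects can be located directly from the matrix. The sketch below (hypothetical helper name) also makes the bound explicit: every complete subset must contain at least one object from C, so the number of disjoint complete subsets can never exceed |C|.

```python
def critical_object_set(matrix):
    """Return (index of the critical task, critical object set C).
    The critical task is the row with the fewest supporting objects;
    C is the set of objects able to perform it. Since every complete
    subset needs at least one member of C, the number of disjoint
    complete subsets is bounded above by |C|."""
    support = [{j for j, v in enumerate(row) if v} for row in matrix]
    crit = min(range(len(matrix)), key=lambda i: len(support[i]))
    return crit, support[crit]
```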

Algorithmic framework for the proposed protocol
In this section, we present the task allocation problem in the IoT as finding the maximum possible number of active subsets of the objects. The characteristic components of the proposed GA, specifically the individual initialization mechanism and the recombination and mutation operators, are designed to suit the problem. After population initialization and evaluation, the GA operates in cycles of generations, each consisting of solution selection, recombination and mutation, evaluation of the new population, and a termination test.

The first design step of any genetic algorithm is the individual representation and population initialization. Each individual I is represented as a vector I = {g1, g2, ..., gm} of genes. The locus of each gene maps to an object Oj. The allele of each gene is an integer 1 ≤ gj ≤ |C| representing the number of the subset to which the corresponding object belongs; note that the allele value cannot exceed |C|. During initialization, each subset is filled with a collection of randomly selected objects until the generated subset can completely perform all the assigned tasks. This process is then repeated to generate the next subset, as in the example chromosome depicted in Figure-.

Regarding the generation of the mating pool, binary tournament selection is used to select pairs of parents. Next, both crossover and mutation are used as the main perturbation operators. In this paper, two crossover operators are examined. The first operator is similar to the traditional uniform crossover operator, with exchange probability p = 0.5, but takes into account the condition of generating only feasible individuals. Note that the number of subsets formed in a child does not exceed the largest number of subsets in the two parents: let P1 and P2 be two parent individuals with k1 and k2 subsets, respectively; then the largest number of subsets the child can reach is max{k1, k2}.
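The initialization rule described above (fill one subset with randomly chosen objects until it covers all tasks, then open the next) can be sketched as follows. The encoding, one subset-number allele per object, follows the paper; folding leftover objects into the last complete subset is our assumption, and the helper name `init_individual` is hypothetical.

```python
import random

def init_individual(task_sets, n_tasks, rng):
    """Build a feasible chromosome: alleles[j] is the subset number of
    object O_j. Objects are taken in random order and appended to the
    current subset until it can perform all n_tasks tasks."""
    order = list(range(len(task_sets)))
    rng.shuffle(order)
    alleles = [0] * len(task_sets)
    subset_id, covered = 1, set()
    for j in order:
        alleles[j] = subset_id
        covered |= task_sets[j]
        if len(covered) == n_tasks:      # current subset is complete:
            subset_id += 1               # open the next subset
            covered = set()
    if covered:                          # leftovers cannot form a complete
        last = max(1, subset_id - 1)     # subset; merge them into the last
        for j in order:                  # complete one (our assumption)
            if alleles[j] == subset_id:
                alleles[j] = last
    return alleles
```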
Let us consider the two chromosomes shown in Figures-4 and 5 as the two parents. Crossing these two parents with the uniform crossover operator produces the child shown in Figure-. This chromosome is composed of three subsets {C1, C2, C3}, and we can easily notice several redundant objects, as well as several critical objects assigned to the same subset.

In the proposed modified uniform crossover, on the other hand, the two parents are crossed gene by gene, one complete subset after another. Every gene that carries the same subset number in both parents is collected into a common set, CS. Initially CS is empty (i.e., CS = ∅); it is then filled from the first subset of both parents onward. Genes are selected randomly from CS into the child until the child's current subset meets the feasibility condition, i.e., it can perform all tasks. After the formation of the current subset is complete, the remaining unassigned objects of the parents are pooled to generate the next subset. In this way, we reduce the possibility of placing multiple critical objects, or objects sharing many common tasks, in the same subset. Again, considering the two chromosomes shown in Figures-4 and 5 as the two parents, crossing them with the modified uniform crossover operator produces the child shown in Figure-. We can easily see that the number of redundant objects in a single subset is reduced compared to the two parents of Figures-4 and 5.

Finally, the second perturbation operator, the mutation operator, is applied to the child population. The simplest rule for the mutation operator is the exchange of two genes with probability pm: two gene loci i and j in the child chromosome are swapped in their subset values, i.e., gene i takes the subset number of gene j and vice versa.
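The swap mutation just described is straightforward to sketch. The helper name and parameter `p_m` are illustrative; in the actual protocol, a feasibility check or repair would follow the swap, since exchanging two subset numbers can leave a subset unable to cover all tasks. That step is omitted here.

```python
import random

def swap_mutation(alleles, p_m, rng):
    """With probability p_m, exchange the subset numbers of two
    randomly chosen gene loci. The multiset of alleles is preserved,
    so the number of subsets in the chromosome does not change."""
    child = list(alleles)
    if rng.random() < p_m:
        i, j = rng.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child
```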

Experimental Results and Discussion
In this section, we examine the performance of the proposed single-objective genetic algorithm for solving the task allocation problem in the IoT. The parameter settings that affect the performance of the algorithm are summarized in Table 1. While only one setting for the number of tasks (n = 4) is adopted in the single-objective evolutionary task allocation protocol proposed in [4], here we vary n over three different settings: n = {5, 10, 15}. Further, we increase the upper limit on the number of objects from m = 400 (as in [4]) up to m = 1000, with five different settings: m = {100, 250, 500, 750, 1000}. This gives a total of 3 × 5 = 15 different IoT model instances. For each IoT model, we define 10 different IoT systems with n tasks and m objects, and for each IoT system the algorithms execute 10 different runs, giving a total of 3 × 5 × 10 × 10 = 1500 runs. We compare the performance of both protocols, i.e., the Genetic Algorithm for Task Allocation (GATA) and the Modified Genetic Algorithm for Task Allocation (mGATA).
Table-2 reports the average performance of the algorithms in terms of the number of generated complete subsets (i.e., lifetime) for the IoT system model with n = 5 and m = 100; the best results are given in bold. The results in Tables 3, 4 and 5 compare the performance of the algorithms for all settings indicated in Table 1, with comparisons performed among all possible pairs; the best results in these tables are likewise given in bold. We can see the positive impact of the proposed crossover operator on extending the IoT lifetime.

Conclusions
This paper addresses the problem of task allocation in the IoT as an optimization problem. The protocol is designed to solve the problem as a single-objective optimization problem with the aim of extending the lifetime of IoT networks. The problem is solved by adopting the mechanism of a genetic algorithm.
A modified crossover operator is also proposed to improve the performance of the algorithm. The results provide reasonable evidence for the importance of designing problem-aware operators. This work can be extended by modeling the problem as a disjoint set cover problem, in which case additional performance measures can be pursued and the problem must satisfy two or more conflicting objectives. To meet this goal, a multi-objective genetic algorithm can be adopted instead of the single-objective algorithm.