Enumerative numerical solution for optimal control using treatment and vaccination for an SIS epidemic model

Optimal control problems in mathematical epidemiology are often solved by Hamiltonian methods. However, these methods require conditions on the problem to guarantee that they give global solutions. Because of the improved computational power of modern computers, numerical approximate solutions that systematically try a large number of possibilities have become practical. In this paper we give an efficient implementation of an enumerative numerical solution method for an optimal control problem, which applies to cases where standard methods cannot guarantee global optimality. We demonstrate the method on a model where vaccination and treatment are used to control the prevalence of an infectious disease. We describe the solution algorithm in detail, and verify the method with simulations. We verify that the enumerative numerical method produces solutions that are locally optimal.


I. INTRODUCTION
Within the field of epidemiology, extensive research efforts have been devoted to establishing mathematical models that accurately characterize disease dynamics, including the effects of disease controls. Numerous mathematical techniques have been developed since the groundbreaking 1926 paper by A. G. M'Kendrick [37]. The most basic and most widely used models in epidemiology are multi-compartment models such as SIS (susceptible-infected-susceptible), SIR (susceptible-infected-recovered) and similar models. The SIS model is suitable to model diseases which do not confer immunity. Some of these diseases are discussed in [1] and [28]. In the basic SIS model, individuals move between two compartments, susceptible and infected [22]. Numerous elaborations of the basic model have been developed, such as age-structured epidemic models [15], stochastic models [39], and models with vaccination [23]. Important model properties include the basic reproduction number, equilibria and stability characteristics.
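For concreteness, the basic SIS dynamics described above can be sketched with a normalized population (S + I = 1) and assumed illustrative rates; for a transmission rate above the recovery rate, the infected fraction settles at the endemic equilibrium 1 − γ/β:

```python
# Minimal sketch of the basic SIS model, dI/dt = beta*S*I - gamma*I,
# with normalized population S + I = 1. Parameter values are illustrative.
beta, gamma = 0.5, 0.2   # transmission and recovery rates (assumed values)
dt, t_end = 0.01, 200.0

I = 0.01                 # initial infected fraction
t = 0.0
while t < t_end:
    S = 1.0 - I
    I += (beta * S * I - gamma * I) * dt   # forward Euler step
    t += dt

# For beta > gamma the infection approaches the endemic level 1 - gamma/beta.
endemic = 1.0 - gamma / beta
```

With these rates the endemic level is 0.6, and the simulated trajectory converges to it well within the time horizon.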
In mathematical models with controls, finding optimal controls is an important problem with significant practical implications. Many general numerical techniques have been developed for this purpose [8]. In [10], a new methodology is suggested for solving optimal control problems with delays, using a shooting method combined with continuation on the delay. A formulation based on binary indicator functions and a direct, simultaneous adaptive collocation approach to optimal control are developed in [9], while [13] proposes a numerical solution technique for constrained optimal control problems with parameters, in which an extended penalty function is used to handle the state, control, and parameter inequality constraints. In [26], a Haar wavelets method is applied to find piecewise constant feedback controls for a finite-time linear optimal control problem of a time-varying state-delayed system. A direct method based on a hybrid of block-pulse functions and Legendre polynomials is discussed in [35], while a direct collocation method is used in [49] to solve optimal control problems numerically. In [50], sequential quadratic programming is used to solve an optimal control problem with convex control constraints. [43] studies dynamical tunneling versus fast diffusion for a non-convex Hamiltonian and finds that dynamical tunneling occurs at a significantly faster rate than classical transport, while [14] develops homogenization and non-homogenization results for certain non-convex Hamilton-Jacobi equations. An algorithm for solving non-convex state-dependent Hamilton-Jacobi partial differential equations is established in [12].
Researchers have also worked on modeling infectious diseases and on techniques to control them. In [40], R. M. Neilan and S. Lenhart introduce the theory of optimal control applied to systems of ordinary differential equations, with an application to an SEIR model, while in [33] S. Lenhart and J. T. Workman give an interesting overview of optimal control applied to biological models. Many authors have focused on optimal control of different diseases. Treatment in the SIS model under learning is considered in [34], while [19] uses treatment in the controlled SIS model, but considers different cost structures than the earlier literature. Optimal quarantine programs for controlling an epidemic spread are studied in [47], while [16] and [1] consider non-vaccine prevention in the SI and SIS models respectively. Prevention by strategic individuals in linked sub-populations is analyzed in [45], while [46] considers prevention through social distancing. Prevention and treatment in an SIS framework are considered in [17]. In [51], vaccination and treatment are used in an SIR setting and optimal paths are simulated, while a similar approach is discussed in [5] for an HIV-type disease. In [29], the fundamental role of three types of controls (personal protection, treatment, and mosquito reduction strategies) in controlling malaria is examined.
In addition to prevention and treatment strategies and the allocation of resources available to reduce these diseases, researchers have focused on developing techniques for approximating the solutions of epidemic models and evaluating which are most effective. In [52], A. Zeb et al. studied the SEIR model, employed the multistep generalized differential transform method, and compared the results with those obtained by the fourth-order Runge-Kutta method and a nonstandard finite difference method in the integer-order case. In [2], F. S. Akinboro et al. used the differential transformation method and the variational iteration method to obtain approximate solutions of the SIR model with initial conditions. O. J. Peter et al. [3] use the differential transform method to study the transmission dynamics of typhoid fever in a population, while [38] uses the differential transformation method and the variational iteration method to find approximate solutions of an Ebola model. In [20], the application of matrix nonstandard finite difference schemes to obtain numerical solutions of epidemic models is investigated, while [4] applies a Non-Standard Finite Difference (NSFD) scheme to a modified SIR epidemic model with the effect of time delay. The Laplace-Adomian Decomposition Method (LADM) is used in [21] to study a fractional-order childhood disease model, and it is shown that this method provides excellent numerical solutions for nonlinear fractional-order models compared to homotopy analysis, the homotopy perturbation method, and fourth-order Runge-Kutta. In [7], the Homotopy Analysis Method (HAM) is applied and found to be successful for finding approximate solutions of a fractional SIR model. In [6], A. J. Arenas et al. developed a nonstandard finite difference scheme with Conservation Law (NSFDCL) of predictor-corrector type for epidemic models and compared the results with Runge-Kutta type schemes. V. K. Srivastava et al. [48] compare the solutions from Euler's and RK4 methods with those obtained by the differential transform method in a study of HIV infection of CD4+ T cells.
In this paper, we will investigate the effect of vaccination and treatment on an SIS epidemic model. In this model, neither the Pontryagin theorem for local optimality nor the Arrow theorem for global optimality applies, so the usual analytical methods for finding locally or globally optimal solutions cannot be used. We develop instead a numerical algorithm that first finds the overall best control from a large class of controls, and then improves it successively until local optimality conditions are satisfied.
The paper is organized as follows. In Section 2, we recall basic analytical results on optimal control problems that give necessary and sufficient conditions for a globally optimal solution. In Section 3, we establish an SIS epidemic model under treatment and vaccination controls, and in Section 4, we formulate objective functions for the model. In Section 5, we derive necessary conditions for locally optimal vaccination and treatment controls. Section 6 describes an enumerative numerical method for finding near-optimal controls, while in Section 7, we present simulation results and conclude.
II. SOME BASIC RESULTS ON OPTIMAL CONTROL PROBLEMS

Two of the foundational results in optimal control theory are Pontryagin's maximum principle and the Arrow sufficiency theorem. Pontryagin's theorem gives conditions that a locally optimal solution must satisfy, while Arrow's theorem gives conditions under which a locally optimal solution is globally optimal. These theorems are stated below.
Theorem 2.1 (Pontryagin's maximum principle) [33]: Suppose u*(t) and x*(t) are optimal for the problem

  min_u J[x(t), u(t)] = ∫_{t_0}^{t_f} f(t, x(t), u(t)) dt    (2)

subject to

  dx/dt = g(t, x(t), u(t)),  x(t_0) = x_0,    (3)

where the functions f and g are continuously differentiable and x(t) is piecewise differentiable. Then there exists a piecewise differentiable adjoint function λ(t) such that for all controls u at each time t,

  H(t, x*(t), u*(t), λ(t)) ≤ H(t, x*(t), u(t), λ(t)),

where the Hamiltonian H is given by

  H(t, x, u, λ) = f(t, x, u) + λ(t) g(t, x, u),    (4)

and the adjoint function satisfies

  dλ/dt = −∂H/∂x (t, x*(t), u*(t), λ(t)),  λ(t_f) = 0.    (5)

V. Mbazumutima, C. Thron, L. Todjihounde, Enumerative numerical solution for optimal control using ...

Theorem 2.1 is proved in [44].
The following theorem gives sufficient conditions under which local optima are also globally optimal.
Theorem 2.2 (The Arrow theorem) [11]: For the optimal control problem (2), the conditions of the maximum principle are sufficient for the global minimization of J[x(t), u(t)] if the maximized Hamiltonian function H, defined in (4), is convex in the variable x for all t in the time interval [t_0, t_f], for the given λ.
When the conditions of Arrow's theorem are not satisfied, it may be difficult to guarantee global optimality of a locally optimal solution. In this case, numerical algorithms that widely explore the space of possible controls may be used to avoid converging prematurely to a local optimum that is not globally optimal. The strategy used in this paper is to find the best control from a large class of controls and then improve it successively until local optimality conditions are satisfied. While this strategy does not necessarily give the global optimum, the preliminary search over a large class of controls helps to prevent settling on a suboptimal solution that is only locally optimal.

III. SIS EPIDEMIC MODEL UNDER TREATMENT AND VACCINATION CONTROLS
The SIS (susceptible-infected-susceptible) model describes diseases in which an individual who recovers from infection immediately becomes susceptible again. In this basic model, each individual belongs to one of two states: susceptible or infectious. In the literature, many studies have analyzed the importance of vaccination and treatment for the spread of infectious diseases using control theory ([25]; [27]; [31]; [30]; [32]). These optimal control techniques serve to limit the spread of the infectious disease in the population of concern. In this section, we establish the controlled SIS system.
The classical SIS epidemic model under vaccination and treatment has four groups or compartments, whose populations are represented by four letters: S, the number of individuals who are healthy but susceptible to the infection; I, the number of individuals who have been contaminated and can spread the infection to susceptibles; T , the number of individuals who have undergone treatment to cure an infection; and V the number of individuals who have been vaccinated when susceptible. We also denote the total population by N , so that N = S + I + V + T . Note S, I, V, T may all vary with time, so that all are represented as functions of time.
In our model we suppose that individuals enter and leave the population (either by birth/death or immigration/emigration), but the total population remains constant. Individuals leave each group in the same proportion, but all incoming individuals are susceptible. We suppose that susceptibles are infected by direct contact with infected individuals, so that the susceptibles' rate of infection is proportional to the number of infected individuals. We suppose that susceptible individuals are vaccinated at a given rate (which may depend on time), and infected individuals are treated and become no longer infective at a different rate (also possibly time-dependent). Finally, we assume that the vaccine is not one hundred percent efficacious: vaccinated individuals who contact infected individuals may become infected at a small rate. These assumptions are represented by the following system of ordinary differential equations:

  dS/dt = µ(N − S) − βSI − u_1(t)S + γI,
  dI/dt = βSI + εβVI − (µ + γ + u_2(t))I,
  dV/dt = u_1(t)S − εβVI − µV,                     (6)
  dT/dt = u_2(t)I − µT,

with initial conditions

  S(0) = S_0 ≥ 0,  I(0) = I_0 ≥ 0,  V(0) = V_0 ≥ 0,  T(0) = T_0 ≥ 0,    (7)

and control conditions

  0 ≤ u_1(t) ≤ u_1max,  0 ≤ u_2(t) ≤ u_2max.    (8)
The parameters in (6) have the following significance: µ and γ are the population replacement rate and the recovery rate from infection respectively; β = λ/N is the disease transmission coefficient, where λ is the average number of contacts per infective per unit time that are sufficient to transmit infection; ε is the fraction of vaccinated individuals for whom the vaccine is ineffective; and u_1(t) and u_2(t) are the proportionate vaccination and treatment control levels, so that for example u_1(t)S(t) is the number of susceptible individuals vaccinated per unit time at time t.
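As a concrete illustration, the following sketch integrates one plausible reading of the controlled system, consistent with the compartment and parameter descriptions above; the numerical parameter values and the constant control levels are assumptions for illustration, not the paper's values:

```python
# Forward-Euler simulation of a controlled (S, I, V, T) SIS system.
# The right-hand side is one plausible reading of the model description;
# all numerical values are illustrative assumptions.
import numpy as np

mu, gamma = 0.02, 0.1      # replacement and recovery rates
lam, eps = 0.5, 0.1        # contact rate and vaccine failure fraction
N = 1000.0                 # constant total population
beta = lam / N             # transmission coefficient beta = lambda / N

def rhs(x, u1, u2):
    S, I, V, T = x
    dS = mu * (N - S) - beta * S * I - u1 * S + gamma * I
    dI = beta * S * I + eps * beta * V * I - (mu + gamma + u2) * I
    dV = u1 * S - mu * V - eps * beta * V * I
    dT = u2 * I - mu * T
    return np.array([dS, dI, dV, dT])

def simulate(x0, u1, u2, t_end=100.0, dt=0.01):
    x = np.array(x0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        x += rhs(x, u1, u2) * dt    # forward Euler step
    return x

x_final = simulate([900.0, 100.0, 0.0, 0.0], u1=0.05, u2=0.1)
```

A quick sanity check on any such simulation is that the compartment totals remain equal to N (the inflow µN balances the outflow µ(S + I + V + T)) and that all compartments stay non-negative.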
We may rewrite the system (6) in matrix format. Define

  x(t) ≡ (S(t), I(t), V(t), T(t))^T,

and, using the fact that µN = µ(S + I + V + T), define the matrix A(x(t)) as follows:

  A(x) = [ −βI − u_1        µ + γ                        µ              µ
           0                βS + εβV − (µ + γ + u_2)     0              0
           u_1              0                            −µ − εβI       0
           0                u_2                          0              −µ ],    (9)

where N = S_0 + I_0 + V_0 + T_0. Then system (6) can be written as

  dx/dt = A(x(t)) x(t),    (10)

with initial conditions x(0) = (S_0, I_0, V_0, T_0)^T.

Definition 3.1: [18] A Metzler matrix is a square (real) matrix in which all off-diagonal elements are non-negative: a_ij ≥ 0 for all i ≠ j.

The matrix A(x) is a Metzler matrix for x in the non-negative cone, according to Definition 3.1. It follows that the system preserves the non-negative cone:

Proposition 3.2: If x_i(0) ≥ 0 for all i, then x_i(t) ≥ 0 for all i and all t ≥ 0.
Proof: Whenever x_j = 0 and x_i ≥ 0 for i ≠ j, since A is a Metzler matrix it follows that dx_j/dt ≥ 0. Hence if x_i(0) ≥ 0 for all i, then none of the x_i(t) can change sign for t ≥ 0. We conclude that the system (6), determined by the matrix (9), preserves the invariance of the non-negative cone R^4_+.
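As a quick numerical check, the following sketch builds one plausible form of the system matrix A(x) (an assumption consistent with the model description, with assumed parameter and control values) and verifies the Metzler property on random non-negative states:

```python
# Numerical check of the Metzler property for a plausible A(x).
# Parameter and control values are illustrative assumptions.
import random

mu, gamma, beta, eps = 0.02, 0.1, 0.0005, 0.1
u1, u2 = 0.05, 0.1

def system_matrix(x):
    """One plausible form of A(x), consistent with the model description."""
    S, I, V, T = x
    return [
        [-beta * I - u1, mu + gamma, mu, mu],
        [0.0, beta * S + eps * beta * V - (mu + gamma + u2), 0.0, 0.0],
        [u1, 0.0, -mu - eps * beta * I, 0.0],
        [0.0, u2, 0.0, -mu],
    ]

def is_metzler(M):
    # All off-diagonal entries must be non-negative.
    return all(M[i][j] >= 0.0 for i in range(4) for j in range(4) if i != j)

random.seed(0)
checks = [is_metzler(system_matrix([random.uniform(0.0, 1000.0)
                                    for _ in range(4)]))
          for _ in range(100)]
```

Every off-diagonal entry of this A(x) is built from non-negative rates multiplied by non-negative state components, so the check passes for all sampled states.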

IV. OBJECTIVE FUNCTIONS FORMULATION
In this section, we establish objective functions for a nonlinear cost function and for a piecewise linear cost function.

A. Original objective function
We suppose a nonlinear cost function in order to take into account various types of costs affecting the susceptible and infected populations. The cost function for system (6) takes the following form:

  J[u_1, u_2] = ∫_0^{t_f} [ f_1(u_1(t), S(t)) + f_2(u_2(t), I(t)) ] dt + z I(t_f),    (11)

where

  f_1(u_1, S) = c_0 + (c̄_0 − c_0) 1[u_1 > 0] + c_1 u_1 S + c_2 u_1^2,    (12)

and

  f_2(u_2, I) = d_0 + (d̄_0 − d_0) 1[u_2 > 0] + d_1 I + d_2 u_2 I,    (13)

and 1[·] denotes the indicator function. The different terms in (11)-(13) are motivated as follows. The functions f_1 and f_2 represent cost rates associated with vaccination and treatment respectively, while the final additive term in (11) represents costs attributed to latent infections which remain after the treatment period is complete.
As for the vaccination cost, we expect a fixed, low-level maintenance cost rate during time periods when no active vaccination efforts are being made: this fixed cost rate is represented by the constant c_0 in (12). When vaccination efforts are underway, higher fixed costs are incurred, including salaries and facilities: this higher cost level is represented by c̄_0 in (12), where c̄_0 > c_0. We suppose that each vaccination has a fixed cost c_1, which multiplies the total vaccination rate u_1 S in (12). Finally, the quadratic term c_2 u_1^2 is included to model diminishing returns at higher levels of vaccination effort, including the cost of vaccinating harder-to-reach or less cooperative individuals.
As for the treatment cost, as with vaccination we expect a low-level maintenance cost rate even when no treatments are given, modeled by the constant d_0 term in (13), with a higher fixed cost rate d̄_0 > d_0 in effect while treatment efforts are active. There is also a cost rate per infected individual (which may include both financial cost and quality-of-life cost), expressed by the coefficient d_1 in (13). Lastly, each patient who receives treatment incurs a cost rate d_2, which multiplies the treatment rate u_2 I to give the non-fixed cost rate associated with patient care.
The values u_1max and u_2max represent maximum population penetration rates for vaccination and treatment, respectively. For example, a value u_1max = 0.05 indicates that at most 5% of the susceptible population can be vaccinated per basic time unit (typically taken as one day), while a value u_2max = 0.1 means that at most 10% of the infected population can be treated per day.
In summary, the optimization problem is to find the controls (u_1*, u_2*) that minimize the objective function:

  J[u_1*, u_2*] = min_{(u_1, u_2) ∈ U} J[u_1, u_2],    (14)

where U is the set of admissible controls such that

  0 ≤ u_1(t) ≤ u_1max  and  0 ≤ u_2(t) ≤ u_2max.    (15)

In this problem, the cost function is not continuously differentiable, and the Hamiltonian is not convex. So neither Theorem 2.1 nor Theorem 2.2 can be applied in this case.
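Under the cost structure just described, the objective can be evaluated from sampled control and state trajectories; in the following sketch the coefficient values are placeholders, and the indicator terms are written out explicitly:

```python
# Sketch of the running costs f1, f2 and objective J from the text.
# All coefficient values are illustrative placeholders.
import numpy as np

c0, c0bar, c1, c2 = 0.1, 0.5, 1.0, 0.5    # vaccination cost coefficients
d0, d0bar, d1, d2 = 0.1, 0.3, 2.0, 1.0    # treatment cost coefficients
z = 10.0                                  # terminal cost per remaining infection

def f1(u1, S):
    # maintenance cost, extra fixed cost while active, per-dose cost,
    # and a quadratic diminishing-returns term
    return c0 + (c0bar - c0) * (u1 > 0) + c1 * u1 * S + c2 * u1 ** 2

def f2(u2, I):
    # maintenance cost, extra fixed cost while active, per-infected cost,
    # and per-treatment cost
    return d0 + (d0bar - d0) * (u2 > 0) + d1 * I + d2 * u2 * I

def J(t, u1, u2, S, I):
    """Objective: trapezoidal integral of running costs plus terminal cost."""
    running = f1(u1, S) + f2(u2, I)
    integral = np.sum(0.5 * (running[:-1] + running[1:]) * np.diff(t))
    return integral + z * I[-1]
```

With zero controls and no infections, only the maintenance terms c_0 + d_0 contribute, which gives a simple consistency check on the implementation.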

B. Piecewise linear objective function
Since we are using an enumerative approach to numerical solution, it is preferable to have a finite number of possible optimal control values. In Section V, we will show that the optimal treatment level is either 0 or u_2max, given the cost function (13). However, with the cost function (12) there are infinitely many possible optimal values. For this reason, we consider a modification of the vaccination cost function f_1 which closely approximates (12), but which leads to a finite number of optimal vaccination levels (as will be shown in the next section).
The modified function is piecewise linear, with the following mathematical form:

  f̃_1(u_1, S) = c_0 + (c̄_0 − c_0) 1[u_1 > 0] + c_1 u_1 S + c̃_2 u_1 + (c_3 − c̃_2)(u_1 − u_1mid)_+,    (16)

where x_+ ≡ max(x, 0) is the ramp function, so that the vaccination-effort cost has slope c̃_2 for u_1 ≤ u_1mid and slope c_3 for u_1 > u_1mid. The constants u_1mid, c̃_2 and c_3 may be chosen to approximate the quadratic term c_2 u_1^2 in (12). Figure 1 shows various possibilities for piecewise linear approximations. Given the constant c_2, the values u_1mid = 0.5, c̃_2 = 0.5 c_2 and c_3 = c_2 produce a cost function that is an upper bound to the quadratic term c_2 u_1^2; the values u_1mid = 5/9, c̃_2 = 0.2 c_2 and c_3 = 2 c_2 produce an effective lower bound; and the values u_1mid = 0.5, c̃_2 = (√2 − 1) c_2 and c_3 = (3 − √2) c_2 give an approximation that minimizes the maximum deviation (the maximum deviation equals 0.043 c_2). Using these different functions, upper and lower bounds on the optimal cost for the original quadratic cost function (12) may be obtained.
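The minimax claim can be checked numerically; the sketch below reads c_3 as the slope of the second linear segment (an interpretive assumption) and takes c_2 = 1 and a unit control range:

```python
# Maximum deviation between the quadratic u**2 and the piecewise-linear
# approximation with the minimax constants quoted in the text.
# Reading c3 as the slope of the second segment is an assumption.
import math

def pw_linear(u, u_mid, c2t, c3):
    # slope c2t up to the kink at u_mid, then slope c3
    if u <= u_mid:
        return c2t * u
    return c2t * u_mid + c3 * (u - u_mid)

u_mid = 0.5
c2t = math.sqrt(2.0) - 1.0     # first-segment slope, (sqrt(2)-1)*c2
c3 = 3.0 - math.sqrt(2.0)      # second-segment slope, (3-sqrt(2))*c2

max_dev = max(abs(u * u - pw_linear(u, u_mid, c2t, c3))
              for u in [i / 10000.0 for i in range(10001)])
```

The computed maximum deviation is approximately 0.0429, matching the 0.043 c_2 figure quoted above, and the approximation agrees with the quadratic at the endpoints u_1 = 0 and u_1 = 1.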

V. NON-AUTONOMOUS VACCINATION AND TREATMENT CONTROL PROBLEM
A solution x(t) with controls u_1(t) and u_2(t) is locally optimal if small perturbations of the controls u_1 and u_2 during small time intervals never decrease the cost. This means that there is no way to improve the solution by making slight adjustments to the controls. Local optimality is relevant to the global problem in that a globally optimal solution must also be locally optimal; hence local optimality is a necessary condition for global optimality. In this section, we explicitly calculate the effect of local changes in the controls u_1(t) and u_2(t) on the cost function. Changes in the two controls are considered separately, because in practice they can be varied independently. This gives us a way to identify local optima, which correspond to control levels for which differential changes yield no improvement.

Figure 1. Approximation of the quadratic cost function with piecewise linear functions. The linear functions shown give upper and lower bounds to the quadratic cost function, as well as a best approximation that minimizes the maximum deviation between the quadratic cost function and the piecewise linear approximation.
In previous sections, we have considered the autonomous problem, in which the parameters β, ε, γ and λ, and also the costs c_0, c̄_0, d_0, d̄_0, c_1, c_2, d_1, d_2, are constants independent of time. In this section we consider the more general non-autonomous problem, in which all parameters may be continuous functions of time. The reader should understand that parameters and costs in this section represent time-dependent functions (e.g. β represents β(t)).
A. Necessary conditions for the optimal vaccination control with simplified cost function

First, we perturb the control u_1(s) and calculate the effect on the cost function. Given a control u_1(t), the perturbed control u_1′(s, t) differs from u_1(t) by a small amount du_1 on an interval of length δ:

  u_1′(s, t) = u_1(t) + du_1 for s ≤ t ≤ s + δ,  u_1′(s, t) = u_1(t) otherwise,    (18)

where s is a fixed value between 0 and t_f. The system evolution x′(s, t) corresponding to controls u_1′(s, t) and u_2(t) may be written

  x′(s, t) ≡ (S′(s, t), I′(s, t), V′(s, t), T′(s, t))^T,    (19)

and satisfies the equations

  dx′/dt = A(x′(s, t)) x′(s, t)  for t < s and t > s + δ,
  dx′/dt = A(x′(s, t)) x′(s, t) + S′(s, t) du_1 (−1, 0, 1, 0)^T  for s ≤ t ≤ s + δ,    (20)

where the added term reflects the additional movement of susceptibles into the vaccinated compartment produced by the perturbation.
System (20) corresponds to system (10) with perturbed control u_1′(s, t) on the interval s ≤ t ≤ s + δ. Note that x′(s, t) = x(t) for t ≤ s, since x′(s, 0) = x(0), and x′(s, t) and x(t) satisfy the same differential equation on [0, s]. In order to compute the new cost associated with the perturbed solution x′(s, t) with controls u_1′(s, t) and u_2(t), we need to solve (20) for x′(s, t). Since equations (20) have different forms on the intervals s < t < s + δ and t > s + δ, we treat these two intervals separately. First we find the solution on s ≤ t ≤ s + δ. For t > s, we define the perturbation

  ∆_1(s, t) ≡ x′(s, t) − x(t).

For simplicity, in the remaining discussion we write the functions S, S′, I, I′, V, V′, T, T′, ∆_1 without arguments (e.g. we write I′(s, t) as I′). Subtracting (10) from (20) and discarding terms of second order in the perturbation, we obtain on the interval s < t < s + δ a linear system of the form

  d∆_1/dt = B(x(t)) ∆_1 − S du_1 (1, 0, −1, 0)^T,  ∆_1(s, s) = 0,    (31)

where B(x) denotes the matrix of partial derivatives of A(x) x with respect to x. To lowest order in δ, the solution of (31) gives

  ∆_1(s, s + δ) = S(s) du_1 δ (−1, 0, 1, 0)^T + O(δ^2).    (33)

We are now ready to compute the difference between the cost functions for x′ and x, which we denote by J′ and J respectively. For simplicity, we first consider the case where c̄_0 = c_0 and d̄_0 = d_0. From (11), (13) and (16), we obtain a sequence of equations expressing J′ − J in terms of ∆_1, du_1, and the Heaviside step function H(·).
Here H denotes the Heaviside (step) function; note that the Heaviside function is the derivative of the ramp function. Taking the limit as du_1 → 0 for small δ, we obtain

  ∂J/∂u_1(s) ∝ c_1 S(s) + c̃_2 + (c_3 − c̃_2) H(u_1(s) − u_1mid) − Ψ_1(s),

where Ψ_1(s) represents the marginal reduction in future costs produced by vaccinating at time s. In this case, the local optimum conditions on the vaccination control u_1(s) depend on three cases:
(a) Ψ_1 ≤ c_1 S + c̃_2: then ∂J/∂u_1(s) ≥ 0 and J(u_1(s)) is minimized when u_1(s) = 0;
(b) c_1 S + c̃_2 < Ψ_1 ≤ c_1 S + c_3: then J(u_1(s)) is minimized when u_1(s) = u_1mid;
(c) Ψ_1 > c_1 S + c_3: then ∂J/∂u_1(s) ≤ 0 and J(u_1(s)) is minimized when u_1(s) = u_1max.

B. Necessary conditions for the optimal treatment control
To find the necessary conditions for the optimal treatment control function u_2(t), we perturb the control u_2(s) and calculate the effect on the cost function. The perturbed control u_2′(s, t), which differs from u_2(t) by a small amount du_2 on an interval of length δ, is

  u_2′(s, t) = u_2(t) + du_2 for s ≤ t ≤ s + δ,  u_2′(s, t) = u_2(t) otherwise.    (39)

For this purpose, we define

  x′(s, t) ≡ (S′(s, t), I′(s, t), V′(s, t), T′(s, t))^T,    (40)

which satisfies the equations

  dx′/dt = A(x′(s, t)) x′(s, t)  for t < s and t > s + δ,
  dx′/dt = A(x′(s, t)) x′(s, t) + I′(s, t) du_2 (0, −1, 0, 1)^T  for s ≤ t ≤ s + δ.    (41)

System (41) corresponds to system (10) with perturbed control u_2′(s, t) on the interval s ≤ t ≤ s + δ.
Note that x′(s, t) = x(t) for t ≤ s, since x′(s, 0) = x(0) and x′(s, t) and x(t) satisfy the same differential equation on [0, s]. In order to compute the new cost associated with the perturbed solution x′(s, t) with controls u_2′(s, t) and u_1(t), we need to solve (41) for x′(s, t). Since equations (41) have different forms on the intervals s < t < s + δ and t > s + δ, we solve for x′(s, t) on these two intervals separately. For s ≤ t ≤ s + δ, we define the perturbation ∆_2(s, t) ≡ x′(s, t) − x(t) and set b ≡ I(s) du_2 δ.
Following the same arguments used from (25) to (30), to first order we obtain, for s < t < s + δ,

  d∆_2/dt = B(x(t)) ∆_2 − I du_2 (0, 1, 0, −1)^T,  ∆_2(s, s) = 0,

and, to lowest order in b,

  ∆_2(s, s + δ) = b (0, −1, 0, 1)^T + O(δ b);

for t > s + δ, ∆_2(s, t) satisfies the homogeneous linearized equation d∆_2/dt = B(x(t)) ∆_2. By the same argument used for (34) and (35), taking the limit as du_2 → 0 for small δ, we obtain

  ∂J/∂u_2(s) ∝ d_2 I(s) − Ψ_2(s),

where Ψ_2(s) represents the marginal reduction in future costs produced by treating at time s. Since this expression does not depend on u_2(s), there are only two cases: if Ψ_2 ≤ d_2 I then J is minimized when u_2(s) = 0, while if Ψ_2 > d_2 I then J is minimized when u_2(s) = u_2max. These results may be summarized as follows.

Theorem 5.1: At every time s, a locally optimal control satisfies u_1(s) ∈ {0, u_1mid, u_1max} and u_2(s) ∈ {0, u_2max}.

Corollary 5.1: At every time s, a locally optimal control pair (u_1(s), u_2(s)) takes one of only six possible values.
Corollary 5.1 implies that at any given time instant, there are only six possible optimal control pairs. So, given that an optimal control is constant on a specified set of N intervals, there are 6^N possible optimal controls. This fact is the basis of the discussion in the next section.

VI. EFFICIENT NUMERICAL METHOD FOR FINDING NEAR-OPTIMAL CONTROLS

Theorem 5.1 gives conditions for local optimality, which do not necessarily imply global optimality. Many algorithms employ a process which converges to a locally optimal solution given a starting point. If the starting point is close enough to a globally optimal solution, these algorithms will converge to a global optimum. Thus, it is important to identify near-optimal solutions. One way of doing this is to find optimal solutions from a large class of controls that are representative of the different possibilities.
First, consider the class of control strategies that are constant on intervals of length T/N, where T is the total time of the system and N is a positive integer. We also consider strategies that are restricted to the optimal values specified in Theorem 5.1. Then there are 6^N strategies which meet these conditions. If N is small enough, the best solution from this class can be found by simply evaluating the cost for all 6^N strategies; this places a practical limit on the size of N. In order to increase N, we may make further assumptions. We expect the vaccination level u_1mid to occur only as the system is transitioning from no control to full control. In order to reduce computation time, we therefore consider only the two extreme vaccination levels 0 and u_1max. This leads to 4^N strategies that are constant on the N intervals. On each interval k, 0 ≤ k ≤ N − 1, there are four strategy options:

  (u_1, u_2) ∈ {(0, 0), (u_1max, 0), (0, u_2max), (u_1max, u_2max)},

which may be indexed by a ∈ {0, 1, 2, 3} respectively. Let a_{N−k−1} denote the index of the control on interval k, for k = 0, ..., N − 1. Then (a_0, ..., a_{N−1}) completely specifies the control, and we associate this control with the index

  j = Σ_{m=0}^{N−1} a_m 4^m.

Schematically, we have Figure 2. For further explanation of the numerical method, consider the flowchart in Figure 3. An enumerative algorithm for finding a near-optimal strategy from among these strategies proceeds as follows:

Algorithm 1 Calculation of optimal piecewise constant control strategy, part I.
1: N = number of intervals
2: C_best = 1E100
3: for j in 0, ..., 4^N − 1 do
4:   Generate strategy S_j = (a_0, ..., a_{N−1}) from the base-4 digits of j
5:   Compute the cost C_j of strategy S_j
6:   if C_j < C_best then
7:     C_best ← C_j
8:     S_best ← S_j
9:   end if
10: end for
11: return C_best, S_best

Algorithm 1 calculates the optimal piecewise-constant control strategy by computing the costs of all 4^N possible strategies. This is computationally expensive. It is possible to greatly reduce costs by reusing cost computations between strategies, as follows.
Suppose S_1 and S_2 are two strategies which agree on the first k intervals. Then the cost contributions from the first k intervals are the same for both strategies. If these interval costs are known for S_1, it is not necessary to recompute them for S_2: in the calculation for S_2, only the costs from the last N − k intervals must be computed. It may be shown that on average only 2 intervals' costs must be recomputed, instead of N intervals as in Algorithm 1, so the total amount of computation is reduced by a multiplicative factor of 2/N. Pseudo-code for the improved algorithm is shown in Algorithm 2.
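The prefix-reuse idea can be sketched as a depth-first enumeration: each shared prefix of intervals is integrated once, and its final state and accumulated cost are passed down to every strategy extending it. The model, cost terms, and parameter values below are illustrative stand-ins, not the paper's Table I:

```python
# Depth-first enumeration of all 4**N_INT piecewise-constant strategies,
# integrating each shared prefix only once (the cost-reuse idea above).
# Model, costs, and parameters are illustrative assumptions.
import math

MU, GAMMA, BETA, EPS = 0.02, 0.1, 0.0005, 0.1
U1MAX, U2MAX = 0.05, 0.1
C1, C2, D1, D2, Z = 1.0, 0.5, 0.02, 1.0, 10.0
N_POP = 1000.0
N_INT = 4                  # number of constant-control intervals
T_TOTAL, DT = 40.0, 0.05
OPTIONS = [(0.0, 0.0), (U1MAX, 0.0), (0.0, U2MAX), (U1MAX, U2MAX)]

def integrate_interval(x, u1, u2):
    """Euler-integrate one interval; return final state and running cost."""
    S, I, V, T = x
    cost = 0.0
    for _ in range(int(round(T_TOTAL / N_INT / DT))):
        dS = MU * (N_POP - S) - BETA * S * I - u1 * S + GAMMA * I
        dI = BETA * S * I + EPS * BETA * V * I - (MU + GAMMA + u2) * I
        dV = u1 * S - MU * V - EPS * BETA * V * I
        dT = u2 * I - MU * T
        cost += (C1 * u1 * S + C2 * u1 ** 2 + D1 * I + D2 * u2 * I) * DT
        S, I, V, T = S + dS * DT, I + dI * DT, V + dV * DT, T + dT * DT
    return (S, I, V, T), cost

def search(x0):
    best_cost, best_strat = math.inf, None
    def recurse(x, cost, prefix):
        nonlocal best_cost, best_strat
        if len(prefix) == N_INT:
            total = cost + Z * x[1]      # terminal cost z * I(tf)
            if total < best_cost:
                best_cost, best_strat = total, prefix
            return
        for opt in OPTIONS:              # each prefix integrated only once
            x2, c = integrate_interval(x, *opt)
            recurse(x2, cost + c, prefix + (opt,))
    recurse(x0, 0.0, ())
    return best_cost, best_strat

best_cost, best_strat = search((900.0, 100.0, 0.0, 0.0))
```

Because all strategies sharing a prefix branch from the same recursion node, the number of interval integrations is 4 + 16 + ... + 4^N rather than N · 4^N, which realizes the roughly 2/N reduction described above.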
After part I, we proceed to flowchart part II.a, where the intervals are divided in half to obtain 2N intervals. As shown in Figure 4, we keep the treatment controls u_{2,0}, ..., u_{2,N−1} the same, and recompute the vaccination controls u_{1,0}, ..., u_{1,2N−1} by evaluating the candidate strategies on the refined intervals and choosing the optimum. The resulting strategy has vaccination constant on 2N intervals and treatment constant on N intervals.

The next step, in flowchart part II.b, is to divide the intervals of constant treatment in half and recompute u_{2,0}, ..., u_{2,2N−1}, keeping the vaccination control u*_{1,0}, ..., u*_{1,2N−1} fixed. The resulting strategy is represented in Figure 5. At the end of part II, the resulting strategy is constant on 2N intervals of equal length.

In part III, the purpose is to improve the solution by adjusting the sizes of the active treatment and vaccination intervals. Each iteration of part III changes each active treatment or vaccination interval by at most one sub-interval: it considers all strategies that agree with the previous best strategy except at the endpoints of the active treatment or vaccination intervals. Figures 6 and 7 show how different vaccination and treatment controls are tried which differ from the previous best solution only where active intervals begin or end; the dashed lines in Figures 6 and 7 represent the previous optimal vaccination and treatment controls respectively. Part III of the flowchart in Figure 3 has pseudocode of the following form: with N_1 the number of vaccination change points and N_2 the number of treatment change points, first compute the interval costs C_0(0), ..., C_0(2N − 1) of the previous optimal strategy and set S_best = S_0; then, for j = 1, ..., 3^{N_1} 2^{N_2} − 1, compute the next candidate strategy S_new and its cost, reusing the interval costs up to the first interval at which a^(j) differs from a^(j−1), and update C_best and S_best whenever a candidate improves on them.
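The endpoint-adjustment neighborhood of part III can be sketched as follows; the run-length representation of active intervals used here is an illustrative device, not the paper's data structure:

```python
# Sketch of part III's candidate generation: controls differing from the
# current best only at the endpoints of its active intervals, with each
# endpoint shifted by at most one sub-interval.
def neighbors(control):
    """control: tuple of 0/1 activity flags, one per sub-interval.
    Yields every distinct flag sequence obtained by shifting the start or
    end of each maximal run of 1s by at most one position."""
    n = len(control)
    # locate maximal runs of 1s as (start, end) pairs, end exclusive
    runs, i = [], 0
    while i < n:
        if control[i] == 1:
            j = i
            while j < n and control[j] == 1:
                j += 1
            runs.append((i, j))
            i = j
        else:
            i += 1
    seen = {tuple(control)}
    for (a, b) in runs:
        for da in (-1, 0, 1):
            for db in (-1, 0, 1):
                na, nb = a + da, b + db
                if 0 <= na < nb <= n:
                    cand = list(control)
                    for k in range(a, b):    # clear the old run
                        cand[k] = 0
                    for k in range(na, nb):  # set the shifted run
                        cand[k] = 1
                    cand = tuple(cand)
                    if cand not in seen:
                        seen.add(cand)
                        yield cand
```

For example, a control that is active on sub-intervals 1 and 2 out of 4 has seven such neighbors, obtained by moving its start, its end, or both by one position.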

A. Numerical simulations
In this section, we use the algorithm described in Section VI to solve the control problem given by Equations (11), (12) and (16). Table I shows the baseline parameters for our simulations. In our analysis, we used the baseline configuration described in Table I and varied one cost parameter at a time. Figures 8-14 show the results for the parameters c_0, c_1, c_2, d_0, d_1, d_2 and z respectively. In each set of three sub-figures, the first two sub-figures give the optimal vaccination and treatment controls found by the algorithm. For all solutions found by the algorithm, local optimality was numerically verified. The third sub-figure shows the progress of convergence during the algorithm. In each graph, the thin solid line shows the default configuration, and the other two lines show the results for the two modified values of the parameter under study.

B. Discussion
Figures 8, 9 and 10 show that increasing c_0, c_1 or c_2 (the fixed cost of vaccination, the cost per vaccination, and the quadratic vaccination cost coefficient, respectively) reduces the active vaccination interval and increases the active treatment interval. Figure 10 shows that no matter how much the quadratic vaccination cost c_2 is increased, the vaccination time interval is reduced while the active treatment time interval increases, up to a certain maximum level less than 20. Figure 11 shows that increasing the value of d_0 increases the vaccination interval slightly, while the treatment interval decreases to zero. In Figure 12, when the value of d_1 is increased, the vaccination interval increases slightly while the treatment interval decreases. Figure 13 shows that no matter how much d_2 is increased, neither the vaccination interval nor the treatment interval changes much. Figure 14 shows that increasing the value of z increases the vaccination interval slightly while the treatment interval is reduced.
The rightmost sub-figure in each set of figures shows the rate of convergence and the costs of the solutions found by the algorithm. The algorithm takes 3 to 20 iterations to converge, and increasing any cost parameter produces an increased final cost. Typically there are large initial decreases in cost, which represent the improvement from part I followed by part II.a. Subsequent iterations of part III modify the treatment or vaccination intervals slightly, producing variable reductions in the cost. Part III sometimes brings rapid improvement followed by slower improvement: during the rapid phase, both controls are being adjusted, and the rapid phase ends when one control has reached its optimal configuration while the other control continues to adjust. In most cases, the execution time to compute each optimal control was a minute or less on an Intel(R) Core(TM) i3-2328M CPU at 2.20 GHz (2 cores, 4 logical processors) with 4 GB RAM.

CONCLUSIONS
In this paper, we introduced an SIS epidemic model under vaccination and treatment controls. We formulated an objective function and simplified it for numerical computations: the simplified objective function can be configured to give an upper bound, lower bound, or best estimate for the cost. We determined necessary conditions for the optimal treatment control and for the optimal vaccination control with the simplified cost function. We established an algorithm to optimize the strategy cost, and improved it to reduce the execution time required to find the best strategy. We also verified that the final strategy obtained by the algorithm in simulation satisfies the local optimum conditions given in Theorem 5.1. Although this does not guarantee global optimality, the fact that the algorithm tries a large diversity of strategies makes it plausible that the final strategy is indeed the global optimum. Finally, numerical simulations were presented to illustrate the performance of the algorithm.