
Let $I(t)$ denote the number of infectious individuals at time $t = 1, 2, \ldots, T$. We want to take into account the restrictive measures that affect the infection probability.

We introduce into a compartmental model a scalar parameter function for the restrictive measures: $$p \colon \{1, 2, \ldots, T\} \rightarrow [0, 1].$$ The step function $p(t)$ is constant between two consecutive measures in time. Currently, the function's values are set by an expert, in most cases after the modeling:

\begin{equation*} p(t) \in \begin{cases} [0, 0.5), & \text{if a restrictive measure limits } I(t),\\ [0.5, 1], &\text{otherwise.} \end{cases} \end{equation*} Finally, we obtain the model data $\hat I(t, p(t))$.

In the proposed model modification, we use four compartments connected in a cycle: Susceptible -> Exposed -> Infected -> Recovered -> Susceptible. Thus the model allows recurrence of the disease, which is why we call it SEIRS.
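For illustration, the cycle above can be sketched in base R with a simple forward-Euler step. The rate names ($\beta$, $\sigma$, $\gamma$, $\omega$) and the way $p(t)$ scales the transmission term are assumptions, since the post does not give the equations:

```r
# One Euler step of an assumed SEIRS system; p in [0, 1] scales transmission.
# beta, sigma, gamma, omega are illustrative rates, not taken from the post.
seirs_step <- function(state, p, beta = 0.5, sigma = 0.2,
                       gamma = 0.1, omega = 1/180, dt = 1) {
  S <- state[["S"]]; E <- state[["E"]]; I <- state[["I"]]; R <- state[["R"]]
  N <- S + E + I + R
  new_inf <- p * beta * S * I / N          # restrictive measures damp this term
  c(S = S + dt * (omega * R - new_inf),    # R -> S closes the cycle
    E = E + dt * (new_inf - sigma * E),
    I = I + dt * (sigma * E - gamma * I),
    R = R + dt * (gamma * I - omega * R))
}

# simulate the infected curve over T steps with a constant p
simulate_I <- function(p, T, state = c(S = 99999, E = 0, I = 1, R = 0)) {
  I_path <- numeric(T)
  for (t in seq_len(T)) {
    state <- seirs_step(state, p)
    I_path[t] <- state[["I"]]
  }
  I_path
}
```

With these toy rates, a smaller $p$ suppresses the infected curve, which is the qualitative behaviour the expert-chosen $p(t)$ is meant to capture.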

The figure shows the empirical data $I(t)$ (red line), the model data $\hat I(t, p(t))$ (blue dotted line), and the function $p(t)$ (green line), where the numbers indicate the function's values:

$$p(t)=\{0.29, 0.22, 0.17, 0.2, 0.24, 0.2, 0.17, 0.2, 0.25, 0.22, 0.17, 0.22\}.$$

One can see that $0<p(t)\leq 0.29<0.5$ at all times.
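For illustration, the twelve listed values can be packaged into the piecewise-constant function with base R's `stepfun`. The breakpoints below (twelve equal windows of 50 days, i.e. $T = 600$) are an assumption, since the post does not say where the measures change:

```r
p_vals <- c(0.29, 0.22, 0.17, 0.20, 0.24, 0.20,
            0.17, 0.20, 0.25, 0.22, 0.17, 0.22)

# assumed: 12 equal windows of 50 days (interior breakpoints at 50, 100, ...)
breaks <- seq(50, 550, by = 50)
p_fun  <- stepfun(breaks, p_vals, right = FALSE)

p_fun(1)    # 0.29 (first window)
p_fun(75)   # 0.22 (second window)
```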

[Figure: empirical $I(t)$ (red), model $\hat I(t, p(t))$ (blue dotted), and the step function $p(t)$ (green)]

Question: how can the values of $p(t)$ be optimized?

Edit after @Jean-ClaudeArbaut's comment

My attempt is:

  1. I used the approximation error as the optimality criterion: $$|I(t)-\hat I(t, p(t))|,$$ so a target function can be $$L(p)=\sqrt{\frac{1}{T} \sum_{t=1}^T\big(I(t)-\hat I(t, p(t))\big)^2} \rightarrow \underset{p}{\min} \\ \text{ s.t. } p(t) \in [0, 1], \\ t = 1, 2, \ldots, T.$$
  1. I took $p_1=0.29$, set the lower and upper bounds, and tried minimizing $L$ over $p_1$ for the first time window, $t=1,2, \ldots, 50$, with parameter fitting using the Levenberg-Marquardt algorithm:
    library(minpack.lm)

    ## observed infected counts for the first window (t = 1, ..., 50)
    I <- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 26, 32, 45,
           64, 88, 122, 136, 154, 165, 176, 192, 214, 232, 254,
           275, 301, 338, 360, 395, 400, 413, 440, 453, 467, 474,
           498, 515, 538, 550, 570, 598, 614, 611, 618, 619, 614,
           614, 616, 600)

    ## model infected counts for p_1 = 0.29 (currently unused by residual)
    I_hat <- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 28, 37, 52,
               74, 98, 133, 150, 163, 178, 201, 233, 261, 282, 310,
               334, 359, 382, 413, 439, 439, 460, 483, 494, 506, 495,
               509, 527, 541, 553, 567, 586, 609, 609, 602, 586, 569,
               560, 572, 572)

    ## residual function: nls.lm passes the parameter as the first argument
    residual <- function(par, observed) par - observed

    ## parameter fitting using the Levenberg-Marquardt algorithm
    fitval <- nls.lm(par = 0.29, lower = 0, upper = 1, fn = residual,
                     observed = I, control = nls.lm.control(nprint = 1))

The output is:

    It.    0, RSS = 6.85828e+06, Par. =       0.29
    It.    1, RSS = 6.8379e+06, Par. =          1
    It.    2, RSS = 4.73522e+06, Par. =          1

One can see that $p_1$ hits the upper bound already after the first step.
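The reason $p_1$ runs to the bound is that `nls.lm` passes the parameter vector as the *first* argument of `fn`, so the residual above effectively computes `par - observed` rather than comparing a model curve to the observations. A minimal sketch of the intended setup in base R, where `seirs_I` is a purely hypothetical stand-in for the SEIRS solver (the real one must return $\hat I(t, p)$ for a candidate $p$) and `optimize` performs the one-dimensional bounded minimization:

```r
set.seed(1)

# Hypothetical stand-in for the SEIRS solver: maps p to a model curve.
# The real seirs_I would integrate the SEIRS equations over the window.
seirs_I <- function(p, T = 50) p * 2000 * (1 - exp(-0.1 * seq_len(T)))

# synthetic "observations" generated with a known p = 0.22 plus noise
I_obs <- seirs_I(0.22) + rnorm(50, sd = 5)

# scalar RMSE objective L(p), as in the formulation above
L_obj <- function(p) sqrt(mean((I_obs - seirs_I(p))^2))

# one-dimensional bounded minimization with base R
fit <- optimize(L_obj, interval = c(0, 1))
fit$minimum   # recovers a value near 0.22 instead of hitting the bound
```

With the real solver one can equally keep `nls.lm` and write `residual <- function(par, observed) seirs_I(par) - observed`, so that the Levenberg-Marquardt step sees the true residuals.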

  • Does the expert change the parameter before or after the process happens? – SampleTime Aug 23 '21 at 08:56
  • After in most time, but sometimes before. – Nick Aug 23 '21 at 09:59
  • In the figure it looks like the parameter is constant for some time. If it is most of the time changed after the process you could just use some least squares fitting (maybe nonlinear, depends on the model). – SampleTime Aug 23 '21 at 12:12
  • @SampleTime, thanks for the comment. I have updated the question. Could you add some details on how to apply least squares fitting? – Nick Aug 24 '21 at 05:50
  • 1
    I would pretty much do it as you wrote in the edit, by minimizing $L$ over $p$ for each time window. You can use the $p$ from the previous window as starting point for the optimization for example. Is your question how you actually perform the minimization? Because it seems with your formulation of $L$ you already have all the required ingredients? – SampleTime Aug 24 '21 at 12:12
  • I have the optimization problem formulation, but currently I don't have an idea how to perform the minimization over the time series $I(t)$, $\hat I(t, p(t))$, $p(t)$. – Nick Aug 25 '21 at 03:27
  • 2
    It's not really "optimization over the time series". You have a function (parameters) $\to$ (mean error). It's a scalar function of parameters, and it's rather easy to optimize. That this function has to solve an ODE and compare values to observed data isn't relevant to the optimization process. See for instance this: https://www.r-bloggers.com/2013/06/learning-r-parameter-fitting-for-models-involving-differential-equations/ (btw, if you do it in R, there are builtin optimization routines). – Jean-Claude Arbaut Aug 25 '21 at 06:21
  • 2
    @Jean-ClaudeArbaut I wonder what you think of the general idea of the OP in principle. As far as I understand the post, the goal is to check the influence of the restrictive measures on the dynamics of the disease, but the measures (unless they are totally inefficient) *change* the equation coefficients, so fitting the constant coefficient ODE system doesn't seem to make much sense unless the task is to test the null-hypothesis that no restrictive measures had any effect whatsoever and the whole dynamics can be explained by "purely natural" causes. Is it really the case? – fedja Sep 11 '21 at 15:39
  • @fedja, your point of view is interesting. How would one test $H_0$ here? – Nick Sep 12 '21 at 05:44
  • @Nick Technically you cannot do the standard probabilistic interpretation here, but you can test the assumption like "the measures changed the number of infected people by less than 15%", for example. If there is no fit within that level of accuracy, you may say that (within your model, of course) that assumption gets rejected, but if there is such a fit, you have to say that we cannot really reject the possibility that $H_0$ holds. Of course, you tacitly assume here that the model is correct, the data do not have noticeable noise, etc. but overall this "sanity check" makes sense IMHO :-) – fedja Sep 13 '21 at 02:34
