MLE of the parameters of a PDF that is written as an infinite sum of terms

I suggest starting out with that algorithm and making a density function that can be tested for proper behavior by integrating over its range of definition, (0, 2*pi). You are calling it a “probability function”, but that is a term I associate with CDFs rather than with density functions (PDFs):

dL <- function(x, c=1, lambda=1, n = 1000, log=FALSE) {
    # approximate the density by truncating the infinite sum at n terms
    k <- 0:n
    r <- sum(lambda*c*(x+2*k*pi)^(-c-1)*(exp(-(x+2*k*pi)^(-c))^(lambda)))
    if (log) log(r) else r
}
vdL <- Vectorize(dL)
integrate(vdL, 0,2*pi)
#0.999841 with absolute error < 9.3e-06
LL <- function(x, c, lambda){ -sum( log( vdL(x, c, lambda))) }

(I think you were trying to pack too much into your log-likelihood function, so I decided to break the steps apart.)

When I ran that version I got a warning message from the final mle2 step that I didn’t like, and I thought it might be the case that this density function was occasionally returning negative values, so this was my final version:

dL <- function(x, c=1, lambda=1, n = 1000) {
    k <- 0:n
    # floor the truncated sum at a small positive value so log() never sees zero or a negative value
    r <- max( sum(lambda*c*(x+2*k*pi)^(-c-1)*(exp(-(x+2*k*pi)^(-c))^(lambda))), 1e-8)
    r
}
vdL <- Vectorize(dL)
integrate(vdL, 0, 2*pi)
#0.999841 with absolute error < 9.3e-06
LL <- function(x, c, lambda){ -sum( log( vdL(x, c, lambda))) }
library(bbmle)   # mle2() comes from the bbmle package
(m0 <- mle2(LL, start=list(c=0.2, lambda=1), data=list(x=x)))
#------------------------
Call:
mle2(minuslogl = LL, start = list(c = 0.2, lambda = 1), data = list(x = x))

Coefficients:
        c    lambda 
0.9009665 1.1372237 

Log-likelihood: -116.96 

(The LL values from the run with the warning and from the warning-free run were the same.)

So, again, I think you were attempting to pack too much into your definition of a log-likelihood function and got tripped up somewhere. There should have been two summations: one for the density approximation and a second for the sum of the log-likelihood contributions over the data. The numbers in those two summations would have been different, hence the error you were seeing. Unpacking the steps allowed success, at least to the extent of not throwing errors. I’m not sure what that density represents and cannot verify correctness.
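Written out (reconstructing the formulas from the code, so treat this as my reading of what is intended), the inner summation is the truncated density

$$f(x;\,c,\lambda) \;\approx\; \sum_{k=0}^{n} \lambda\, c\,(x + 2k\pi)^{-(c+1)}\, e^{-\lambda (x + 2k\pi)^{-c}}, \qquad 0 < x < 2\pi,$$

and the outer summation is the negative log-likelihood over the observations,

$$-\ell(c,\lambda) \;=\; -\sum_{i=1}^{N} \log f(x_i;\,c,\lambda).$$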

As for whether there is a better way to approximate an infinite series: the answer hinges on what is known about the rate of convergence of the partial sums, and on whether you can set up a tolerance value so that you compare successive values and stop the calculation after a smaller number of terms.
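For example, here is a rough sketch of a tolerance-based version of the density; dL_tol, tol and kmax are names I have made up, and the stopping rule assumes the terms keep shrinking as k grows (they fall off roughly like (x+2*k*pi)^(-c-1) here):

dL_tol <- function(x, c=1, lambda=1, tol=1e-10, kmax=10000) {
    total <- 0
    for (k in 0:kmax) {
        term <- lambda*c*(x+2*k*pi)^(-c-1)*(exp(-(x+2*k*pi)^(-c))^(lambda))
        total <- total + term
        # stop once the current term is negligible relative to the running sum;
        # this checks only the individual term, so it is a rough criterion when
        # convergence is slow
        if (term < tol*total) break
    }
    total
}
vdL_tol <- Vectorize(dL_tol)
# integrate(vdL_tol, 0, 2*pi)   # should again come out close to 1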

When I look at the density, it makes me wonder if it applies to some scattering process:

curve(vdL(x, c=.9, lambda=1.137), 0.00001, 2*pi)

(plot of the fitted density, vdL(x, c=.9, lambda=1.137), over (0, 2*pi))

You can examine the speed of convergence by looking at the ratios of successive terms. Here’s a function that does that for the first 10 terms at an arbitrary x:

> ratios <- function(x, c=1, lambda=1) {lambda*c*(x+2*(1:11)*pi)^(-c-1)*(exp(-(x+2*(1:10)*pi)^(-c))^(lambda))/lambda*c*(x+2*(0:10)*pi)^(-c-1)*(exp(-(x+2*(0:10)*pi)^(-c))^(lambda)) }
> ratios(0.5)
 [1] 1.015263e-02 1.017560e-04 1.376150e-05 3.712618e-06 1.392658e-06 6.351874e-07 3.299032e-07 1.880054e-07
 [9] 1.148694e-07 7.409595e-08 4.369854e-08
Warning message:
In lambda * c * (x + 2 * (1:11) * pi)^(-c - 1) * (exp(-(x + 2 *  :
  longer object length is not a multiple of shorter object length
> ratios(0.05)
 [1] 1.755301e-08 1.235632e-04 1.541082e-05 4.024074e-06 1.482741e-06 6.686497e-07 3.445688e-07 1.952358e-07
 [9] 1.187626e-07 7.634088e-08 4.443193e-08
Warning message:
In lambda * c * (x + 2 * (1:11) * pi)^(-c - 1) * (exp(-(x + 2 *  :
  longer object length is not a multiple of shorter object length
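The warnings come from mixing index vectors of length 10 and 11 inside ratios(), and because of operator precedence the expression divides only by lambda and then multiplies by the remaining factors of the k-th term rather than dividing by the whole term. A version with matching lengths and an explicit helper for the k-th term might look like the sketch below (term_k and ratios2 are names I made up, so its output will not match the numbers above):

term_k <- function(k, x, c=1, lambda=1) {
    lambda*c*(x+2*k*pi)^(-c-1)*(exp(-(x+2*k*pi)^(-c))^(lambda))
}
ratios2 <- function(x, c=1, lambda=1, K=10) {
    # ratio of each term to the one before it, for k = 1, ..., K
    term_k(1:K, x, c, lambda) / term_k(0:(K-1), x, c, lambda)
}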

That output looks like pretty rapid convergence to me, so I’m guessing that you could use only the first 20 terms and get similar results. With 20 terms the results look like:

> integrate(vdL, 0,2*pi)
0.9924498 with absolute error < 9.3e-06
> (m0 <- mle2(LL,start=list(c=0.2,lambda=1),data=list(x=x)))

Call:
mle2(minuslogl = LL, start = list(c = 0.2, lambda = 1), data = list(x = x))

Coefficients:
        c    lambda 
0.9542066 1.1098169 

Log-likelihood: -117.83 
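
For reference, the 20-term run requires rebuilding the pieces with a smaller n; I haven’t shown the exact calls above, but they would go roughly like this (dL20 is just a wrapper name I’m using for illustration):

dL20 <- function(x, c=1, lambda=1) dL(x, c, lambda, n=20)   # truncate the sum at 20 terms
vdL <- Vectorize(dL20)   # rebuild vdL, since Vectorize() captured the old dL
LL <- function(x, c, lambda){ -sum( log( vdL(x, c, lambda))) }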

Since you never attempt to interpret a log-likelihood in isolation but rather look at differences, I’m guessing that the minor difference will not affect your inferences adversely.
