Hi again
I guess this may be my last message in this thread, since I now know what I needed, which is much more than I knew before posting.
So thanks a lot for your help.
As far as I have tested, both methods have turned out to be quite stable and accurate. However, the first, solving the overdetermined system of equations, has proven faster.
The reason is that the first method finds the solution for the mapping function f, or g, using only a subset of the pixels, while the second iterates over all the values of f, or I, and finds the solution for all the pixels.
Obviously the first method still needs a final step after solving the system of equations to provide the values for all pixels, but it is still faster, even though I accelerated the iterative method as much as I could by embedding C MEX code in the Matlab program.
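In case it is useful to anybody, this is roughly how I implemented that final per-pixel step in Matlab (a vectorized sketch with my own variable names, based on the weighted-average recovery formula as I understand it from the paper):

[code]
% Z : NxP pixel values (0..255), B : 1xP log exposure times ln(dt_j)
% w : 256x1 weighting function, g : 256x1 recovered response curve
% Each pixel's log irradiance is the weighted average
%   ln(E_i) = sum_j w(Z_ij) * (g(Z_ij) - ln(dt_j)) / sum_j w(Z_ij)
W   = w(Z + 1);                            % per-sample weights (Matlab is 1-indexed)
G   = g(Z + 1);                            % g evaluated at every pixel value
lnE = sum(W .* (G - B), 2) ./ sum(W, 2);   % Nx1 vector of ln(E_i)
[/code]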
Hope I will post again soon, and maybe not to ask, but to help!
Jose
Hi again!
I had an idea regarding the iterative solution, since the author explains it in his paper, and I have worked with similar problems in the past. However, I would like to read a properly justified and more detailed explanation than the one given by the author, or than the vague idea I had.
Regarding the solution to the problem, I think steps 1 and 3 are the minimization of the objective function with respect to I and x respectively.
And the second step is a normalization condition imposed by the author. I know the superscript is strange in this case, but in the paper none is used, and I tried to write it down myself.
I will try to find a better way of typing it, but what I mean is that right after step 1, the solution found for I is normalized by its value at position 128. I could simply write a more complicated formula in step 1, dividing the same expression by the one using m = 128, but I thought the way I wrote it was clearer. Also, I do not change k again, since I consider that the iteration changes from k-1 to k in the first step.
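For reference, this is the iteration as I understand it (my own transcription of the three steps, so the exact notation may differ from the paper):

$$\text{Step 1:}\quad I_m^{(k)} = \frac{1}{|E_m|} \sum_{(i,j) \in E_m} t_i\, x_j^{(k-1)}, \qquad E_m = \{(i,j) : y_{ij} = m\}$$

$$\text{Step 2:}\quad I_m^{(k)} \leftarrow \frac{I_m^{(k)}}{I_{128}^{(k)}}, \qquad m = 0, \dots, 255$$

$$\text{Step 3:}\quad x_j^{(k)} = \frac{\sum_i w(y_{ij})\, t_i\, I^{(k)}(y_{ij})}{\sum_i w(y_{ij})\, t_i^2}$$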
Hope this helps! I had already computed the solution using Matlab before posting my doubts, and compared it with the first problem I posted.
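Roughly, the iteration can be coded in Matlab like this (a quick sketch with made-up variable names and initialization, just to illustrate the steps, not the author's code):

[code]
% Y : PxN matrix of 8-bit values, Y(i,j) = pixel j in exposure i (0..255)
% t : Px1 exposure times; w : 256x1 weighting function
I = linspace(0, 2, 256)';        % initial guess for the inverse response I()
x = ones(1, N);                  % initial guess for the irradiances x_j
for k = 1:maxIter
    TX = t * x;                  % PxN matrix with entries t_i * x_j
    for m = 0:255                % step 1: minimize O over I, averaging t_i*x_j
        idx = (Y == m);          %   over the set E_m = {(i,j) : y_ij = m}
        if any(idx(:)), I(m+1) = mean(TX(idx)); end
    end
    I = I / I(129);              % step 2: normalize so I_128 = 1 (1-indexed)
    W  = w(Y + 1);               % step 3: minimize O over x (closed form)
    It = I(Y + 1);
    x  = sum(W .* It .* t, 1) ./ sum(W .* t.^2, 1);
end
[/code]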
Both work pretty well, but I was wondering whether I could explain them as having more characteristics in common, from a theoretical point of view. That's why I posted.
Following your advice, I will explain each method separately!
Thank you again
Jose
Sorry for answering so late!
Exactly, in this case the unknowns are again the function, f() or its inverse I(), and x_j.
Thanks!
You're right, I suspected my solution was too easy!
Would then this formulation be correct?
The total time from the arrival of the 1st customer to the departure of the 2nd can be written as the following random variable:

$$T = \max(T_1, 5) + T_2$$

where $T_1$ and $T_2$ characterize the serving times of the 1st and 2nd customers, and the $\max$ accounts for the fact that the 2nd customer's service cannot start before his arrival at minute 5.

Then we would have:

$$E[T] = E[\max(T_1, 5)] + E[T_2] = \left(5 + 30\, e^{-5/30}\right) + 30 \approx 60.4 \text{ minutes}$$
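Just to double-check that expectation numerically, a quick Matlab simulation (my own sketch, assuming the 30-minute service mean and the arrival at minute 5):

[code]
mu = 30;  n = 1e6;
T1 = -mu * log(rand(n, 1));     % service time of the 1st customer
T2 = -mu * log(rand(n, 1));     % service time of the 2nd customer
T  = max(T1, 5) + T2;           % arrival of the 1st to departure of the 2nd
mean(T)                         % compare against E[max(T1,5)] + 30
[/code]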
Hope it is correct now!
Jose
I hope this can help. You can search for Queueing Theory on Google or, directly, Wikipedia:
"A Poisson process models random events (such as a customer arrival, a request for action from a web server, or the completion of the actions requested of a web server) as emanating from a memoryless process. That is, the length of the time interval from the current time to the occurrence of the next event does not depend upon the time of occurrence of the last event."
In Queueing Theory, both the customer inter-arrival time and the service time can usually be modeled using the Exponential Distribution, which satisfies the memoryless property.
In any case, an Exponential Distribution is characterized by the following probability density function:

$$f(t) = \lambda e^{-\lambda t}, \qquad t \geq 0$$

For this pdf you can calculate the mean, which is $E[T] = 1/\lambda$.
In the problem you posted, you are already told the mean service time: 30 minutes. Since the inter-arrival time is independent of the service time, you don't need the information about when the customers arrive (0 and 5 minutes), only the fact that there are 2 customers in the server's queue, waiting to be attended.
The expected time between the departure of the first customer and that of the second is equal to the mean of the exponential distribution, hence 30 minutes.
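If you want to convince yourself numerically, here is a quick Monte Carlo check in Matlab (just a sketch I wrote for this post, not part of the problem statement):

[code]
% Both customers are already waiting, so the 2nd departs T2 after the 1st.
mu = 30;                        % mean service time in minutes
n  = 1e6;                       % number of simulated realizations
T1 = -mu * log(rand(n, 1));     % exponential service time of the 1st customer
T2 = -mu * log(rand(n, 1));     % exponential service time of the 2nd customer
dep1 = T1;                      % departure time of the 1st customer
dep2 = T1 + T2;                 % departure time of the 2nd customer
mean(dep2 - dep1)               % ~30 minutes, the mean of the exponential
[/code]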
Jose
Thanks a lot again
The "Numerical Recipes" resource seems really interesting, so I will have a look at it. Furthermore, I would really like to learn some algorithm development in C/C++, so even better
But for now, I would like to "abuse" from your knowledge a little further, if you have time to answer.
The problem I would like to relate to the first one shown in this thread is the following, a similar approach to the one already explained, but by another author (Mark A. Robertson, "Dynamic Range Improvement Through Multiple Exposures", if anybody feels curious).
In this paper the author follows a statistical approach to address the same problem, but he uses slightly different notation and does not introduce logarithms.
First of all, he arrives at the following objective function to minimize:

$$O(I, \mathbf{x}) = \sum_{i,j} w_{ij} \left[ I(y_{ij}) - t_i\, x_j \right]^2$$

where $y_{ij}$ is the value of pixel $j$ in exposure $i$, $t_i$ is the exposure time of image $i$, $x_j$ is the unknown irradiance at pixel $j$, $I(\cdot)$ is the unknown inverse response function, and $w_{ij} = w(y_{ij})$ are the weights.

He introduces the constraint that I() will have the intermediate value $I_{128} = 1$.
And the solution he proposes is retrieved by using a form of Gauss-Seidel relaxation, minimizing first with respect to I, and then with respect to x.
My question is, since I am not familiar with optimization problems nor, therefore, with iterative solutions to them: is it possible to formulate the problem in a way similar to the first one I posted? Then I guess it would be possible to find the solution directly using SVD, as Debevec proposed for the first post's problem...
Thanks in advance
Jose
Great!
Thanks a lot for your explanation Ricky. I suspected that the author was simply setting the first term in the O function to zero, for 2 reasons:
- The first, obviously, is that I had his solution, albeit as Matlab code, but straightforward enough, haha.
- The second is that, for an optimization problem, there was something "missing" in the formulation. I mean, if I rewrite the objective function like this:

$$O = \sum_k w_k \left( y_k - a \right)^2$$

where, for a fixed level $m$, $y_k$ runs over the known products $t_i x_j$ of the pixels with value $m$, and $a$ is the unknown $I_m$,

then it resembles the following regular-notation optimization problem:

$$\min_{\mathbf{a}} \; (\mathbf{y} - F\mathbf{a})^T W (\mathbf{y} - F\mathbf{a})$$

where the diagonal of $W$ contains the weights, and $F$ is a matrix whose columns contain the basis functions with which we want to fit the data.

However, what I considered "missing" is the set of basis functions, so $F = \mathbf{1}$.

In this case, the solution is clearly as expected:

$$\mathbf{a} = (F^T W F)^{-1} F^T W \mathbf{y} \;\overset{F = \mathbf{1}}{\longrightarrow}\; a = \frac{\sum_k w_k y_k}{\sum_k w_k}$$
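A quick numerical check of that last step in Matlab, with made-up numbers (hypothetical data, just to illustrate):

[code]
y = [2; 3; 5];                           % made-up data values for one level m
w = [1; 4; 2];                           % made-up weights (diagonal of W)
F = ones(3, 1);                          % F = 1: no basis functions
Wm = diag(w);
a_wls  = (F' * Wm * F) \ (F' * Wm * y);  % regular weighted least-squares solution
a_mean = sum(w .* y) / sum(w);           % weighted mean: the same value
[/code]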
Is this formulation correct? I should explain it, and I would like to reuse the "development" for another optimization problem.
Regarding the smoothing equations, double thanks! I actually had no clue about that!
By the way, can you recommend any good source or reference, either on the Internet or a book, where I can learn more about this topic?
Jose
Hi everyone!
I am new to this forum, but I expect to be posting quite often! Hopefully not always with my own doubts.
Let's get into the problem. I am trying to understand an algorithm developed by P. Debevec in the paper "Recovering High Dynamic Range Radiance Maps from Photographs" (this is only for anyone who is curious).
The fact is that I have to code it, which is rather easy, since the author provides the main, and most complex, routine: the one that solves the minimization of an objective function of interest.
However, what I cannot completely understand is how he derives the parameters of the final system of equations from the objective function.
First things first: the function in the paper is the following:

$$O = \sum_{i=1}^{N} \sum_{j=1}^{P} \left\{ w(Z_{ij}) \left[ g(Z_{ij}) - \ln E_i - \ln \Delta t_j \right] \right\}^2 + \lambda \sum_{z=Z_{min}+1}^{Z_{max}-1} \left[ w(z)\, g''(z) \right]^2$$
The first term (the double sum) is, according to the author, the function to minimize, while the second term is a smoothing factor. Both are weighted to give less credit to the extreme values of z (or Z).
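(For completeness, the hat-shaped weighting function used in the paper is, as far as I remember:

$$w(z) = \begin{cases} z - Z_{min} & z \leq \tfrac{1}{2}(Z_{min} + Z_{max}) \\ Z_{max} - z & z > \tfrac{1}{2}(Z_{min} + Z_{max}) \end{cases}$$
)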
In short, z, or Z_ij, are the values of the pixels of an image, for a certain color channel. Well, maybe that much information is not important: they are simply integer values in [0, 255].
Then, i controls the number of the pixel within a photo (assuming the photos are matrices, but have been transformed to arrays by placing one row after another).
Also, j controls the number of the photo. We actually have a set of P exposures of the same photo, or P photos, all of size N pixels, stored in Z, an (N x P) matrix.
Moreover, the author introduces the ln() so as to be able to separate some unknowns, the E_i, from known values, the exposure times Δt_j.
The next relation is also known, and already used in the cost function:

$$g(Z_{ij}) = \ln E_i + \ln \Delta t_j$$

which actually was derived from the expression:

$$Z_{ij} = f(E_i\, \Delta t_j)$$

by introducing ln() and calling

$$g = \ln f^{-1}$$
So, to conclude, does anybody know how to solve the least-squares problem given by the O function? The unknowns are both g(Z_ij), or call them g_ij, and ln(E_i), or x_i.
My problem is that some unknowns carry the double index, and others only the index i. Moreover, the smoothing equations are a weird "addition", in the sense that I don't know why they are used, but they make the solution better.
As a clue, the author seems to arrive at the following solution:

$$A\mathbf{x} = \mathbf{b}$$

where the overdetermined system of equations contains N x P linear equations involving the solution to the minimization of the cost function, and 254 more equations to introduce the smoothing factor.

Note also that the next approximation is used, to work with discrete values:

$$g''(z) \approx g(z-1) - 2\,g(z) + g(z+1)$$
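For anybody attempting the same, this is roughly how I understand the system can be assembled in Matlab (a sketch with my own variable names, including the extra row that fixes the scale of the curve; the routine provided in the paper is the authoritative version):

[code]
% Z : NxP pixel values (0..255), B : 1xP log exposure times ln(dt_j)
% w : 256x1 weighting function, l : smoothing weight lambda
n = 256;
A = zeros(N*P + 254 + 1, n + N);    % NxP data rows + 254 smoothing + 1 scale
b = zeros(size(A, 1), 1);
k = 1;
for i = 1:N                         % data-fitting rows, one per (i,j):
    for j = 1:P                     %   w(Z_ij) [g(Z_ij) - ln(E_i)] = w(Z_ij) ln(dt_j)
        wij = w(Z(i,j) + 1);
        A(k, Z(i,j) + 1) =  wij;    % coefficient of g(Z_ij)
        A(k, n + i)      = -wij;    % coefficient of ln(E_i)
        b(k)             =  wij * B(j);
        k = k + 1;
    end
end
A(k, 129) = 1;  k = k + 1;          % fix the scale of the curve: g(128) = 0
for z = 1:254                       % smoothing rows: l * w(z) * g''(z) = 0
    A(k, z:z+2) = l * w(z + 1) * [1, -2, 1];
    k = k + 1;
end
v = A \ b;                          % least-squares solve (SVD also works)
g = v(1:n);  lnE = v(n+1:end);      % response curve and log irradiances
[/code]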
Thanks in advance,
Jose
PS: Hope this is not too long!
PS2: Really nice LaTeX tutorial! First post and first LaTeX lines! Nice!