That is because you are using a least-squares quadratic. I am using a collocating polynomial that is exact at every point used in the fit. Least squares minimizes the error of the fit so that the sum of the squared residuals is as small as possible. Had the collocation been successful we would have had an exact fit yielding an exact formula. We can see that is impossible even for a small list like yours. The data is just not polynomial in nature.
We will now look at least squares, but remember: you can squeeze a square peg into a round hole if you apply enough pressure, but that does not mean a square and a circle are the same.
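The distinction can be sketched numerically. Here is a minimal pure-Python illustration; the helper functions and the five-point data set are made up for demonstration, not the thread's actual 239-term prime sums:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations.
    With deg == len(xs) - 1 it becomes the collocating (exact) polynomial."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(deg + 1)]
         for i in range(deg + 1)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(deg + 1)]
    return gauss_solve(A, b)

def polyval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

xs = [1, 2, 3, 4, 5]
ys = [2, 3, 5, 7, 11]           # the first five primes

quartic = polyfit(xs, ys, 4)    # collocation: degree 4 through 5 points
quad = polyfit(xs, ys, 2)       # least squares: degree 2, residuals minimized

print([polyval(quartic, x) - y for x, y in zip(xs, ys)])  # all ~0 (exact fit)
print([polyval(quad, x) - y for x, y in zip(xs, ys)])     # visibly nonzero
```

The collocating quartic reproduces every data point; the least-squares quadratic only gets as close as a parabola can, which is the "square peg" being squeezed.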
You want a quadratic least squares through the first 239 points?
In mathematics, you don't understand things. You just get used to them.
If it ain't broke, fix it until it is.
Always satisfy the Prime Directive of getting the right answer above all else.
Yes that would be good if you could.
I am a bit perplexed however.
If I analyse your polynomial evaluations and test the information for validity, I come up with the following situation.
For the set of data that you have analysed (239 terms), the polynomial holds 100% true. Now reduce that set by one term; this new subset, call it subset b, should still hold true, since it is part of the true set a.
Now if we take subset b, we should be able to prove that it holds true for the nth term + 1, since we know that is the case. This would seem to indicate that the set is indeed polynomial, unless the terms in the original "confined" set are not truly polynomial in the first place?
Why would Excel return an R^2 of 1.0000 for the 6th-order polynomial fit of the data? Am I missing something, maybe a hidden pattern forced by any summation of data? However, if I change any term by a small amount, the R^2 value reacts very quickly.
Rgds
Does anybody have any information about this: an improved algorithm using the Sieve of Eratosthenes by Professor Helfgott (a Peruvian mathematician from France's National Centre for Scientific Research and the University of Göttingen in Germany)? See the link below.
http://www.sciencealert.com/images/articles/processed/SieveofEratosthenes_web_1024.jpg
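For readers who want to experiment, here is the textbook Sieve of Eratosthenes (a basic sketch only; Helfgott's improvement concerns the memory footprint and is not reproduced here):

```python
def sieve(limit):
    """Return all primes up to `limit` using the classic Sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross off multiples of p, starting at p*p (smaller ones already done).
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```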
Now if we take this subset b we should be able to prove that it holds true for nth term +1
It will not hold for the (n+1)th one; just try it on the smallest example, the quartic. Extrapolation, which means going beyond the data you have, is a very risky proposition. Plenty of sports bettors and stock market players have gone broke because of it.
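Concretely, take the quartic through the first five primes (a toy sketch, not the thread's 239-point fit). For any degree-4 polynomial the fifth finite difference vanishes, which forces the value the polynomial predicts at the next point:

```python
# Collocating quartic through the first five primes at x = 1..5.
# A degree-4 polynomial has zero 5th finite difference, so its value at
# x = 6 is forced by the five data points:
#   y6 = 5*y5 - 10*y4 + 10*y3 - 5*y2 + y1
ys = [2, 3, 5, 7, 11]
predicted = 5 * ys[4] - 10 * ys[3] + 10 * ys[2] - 5 * ys[1] + ys[0]
print(predicted)   # 22 -- but the sixth prime is actually 13
```

An exact fit on the data, yet badly wrong one step outside it.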
Why would Excell return a R^2 of 1.0000
Excel is not a scientific package; who knows what those fellows at Micro$oft know about math. Anyway, the real test of any fit, in my view, is the residuals.
original "confined" set are not truly polynomial in the first place?
That is the correct assumption: we can force a close fit for a small amount of data, but if we try to predict away from that data we will not be accurate, because the underlying law that governs your list is not a polynomial.
I am getting an R^2 of about .9999 for
I am getting an R^2 of 1 for
It predicts 166620 for the correct value of 166551.
Thanx!
That's not too bad for 6th order... that is an error of 0.041428752% for the 240th term!
What about the 241st term?!
Does fame and fortune beckon us?
Last edited by Gophne (2016-10-19 06:41:19)
PREDICTED PRIME, 240th TERM: 1531 vs 1511 (actual)
The primes around 1511 are 1493, 1499 (below) and 1523, 1531 (above).
EUREKA?
Last edited by Gophne (2016-10-19 07:01:00)
What about the 241st term?!
Extrapolation is always a tricky business.
I am getting a prediction of 168159, while the correct answer is 168074.
Does fame and fortune beckon us?
Ehhh, nope.
What does the equivalent PNT formula give at this point? It was shown earlier in this thread that the PNT is more inaccurate in this range.
The new predicted number is not far from the real value; in fact much closer than you would get by pure guessing!
I get 168102 for the actual next term in the series, a difference of only 57 from the predicted term, or about 0.034%. That seems a good forecast?
I am getting a bit electrified!
Last edited by Gophne (2016-10-19 07:45:44)
You are working on the accumulated sums; what does this have to do with the PNT?
The PNT would surely be the standard against which to compare any other formula for predicting prime numbers?
So it would be critical to calculate what the PNT estimate (n log n) gives at various points along the prime number line, against any competing recipe.
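As a quick check (a sketch; `log` here is the natural log, as in the PNT, and the actual 240th prime 1511 is taken from earlier in the thread):

```python
import math

n = 240
print(round(n * math.log(n)))                                # 1315
print(round(n * (math.log(n) + math.log(math.log(n)) - 1)))  # 1484
# The actual 240th prime is 1511, so the refined first-order form
# n(ln n + ln ln n - 1) lands much closer than plain n ln n.
```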
Last edited by Gophne (2016-10-19 07:50:45)
What does this formula produce at the 240th prime?
I shall give it a bash, but I think that you might be able to calculate it quicker.
Hi zetafunc;
I have never been able to find more terms of that. Do you know of any?
I found some.
There are also other formulae for the nth prime, outside of PNT. For instance, Cipolla showed in 1902 that:
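The rendered formula did not survive here; as usually stated in the literature, Cipolla's 1902 expansion of the nth prime is:

```latex
p_n = n\left( \ln n + \ln\ln n - 1 + \frac{\ln\ln n - 2}{\ln n}
      + O\!\left( \frac{(\ln\ln n)^2}{(\ln n)^2} \right) \right)
```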
and this has since been improved by other authors, e.g. Dusart.
In the formula, do I read log n − 1 or log(n − 1)? And for the big-O notation term: do I ignore that term for relatively small numbers?
Last edited by Gophne (2016-10-19 08:54:12)
Hi;
It is log(log(n)) − 1, if I remember correctly. For computation, leave out the big-O term; you can use it for error estimation.
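Dropping the big-O term, a direct evaluation looks like this (a sketch; the truncation point is my assumption, and this reading gives about 1471 at n = 240, which differs from the 1217/1224 figures reported later in the thread):

```python
import math

def cipolla_estimate(n):
    """Cipolla's 1902 expansion for the nth prime with the big-O term dropped.
    (Sketch; the exact truncation used in the thread may differ.)"""
    ln = math.log(n)
    lln = math.log(ln)
    return n * (ln + lln - 1 + (lln - 2) / ln)

print(round(cipolla_estimate(240)))   # 1471; the actual 240th prime is 1511
```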
The Cipolla formula produces 1217 and 1224 with the big-O term (if my calculations are correct). The actual value for the 241st prime is 1523.
The polynomial curve (graphic algorithm) gives roughly 1500 (vs 1523) by eyeball.
This is very encouraging.
I am going to try to formalize the algorithm a bit more.
Last edited by Gophne (2016-10-24 07:46:21)
Hi;
Remember, the Cipolla formula has a much wider range to cover; it must handle everything from 1 to infinity as best it can. The polynomial is expected to be better in the small range we fitted.