Here's how this game works. And for those of you who are wondering, I'm making this up on the fly.
I start off with a proof that has an error in it. The proof statement may be true, but there is some error within it. Whoever can spot the error first then posts their own proof. You may only post a proof once you have answered the previous one.
Keep in mind you have to find the actual mistake, not just that the proof is wrong somewhere.
Here are some simple rules:
1. If you know where the error lies, post that first, then go back and edit your post with your proof. Do this so no one beats you to the punch.
2. After revealing where the error was, you have up to 24 hours to edit your post and put in a new proof. If after 24 hours you do not post a proof, I will edit your post and put "Pass" on the bottom line. You may of course put "Pass" on the bottom line yourself if you don't want to or don't know of any proof to post.
3. If the last proof has "Pass" on the bottom line, the first person who posts up the next proof gets it. If there are multiple proofs posted after a pass, I will be deleting every post after the first.
4. If you reveal the error and post a new proof, but the error is debatable, I will cut your proof out (saving it) until the issue is resolved, putting it back if you were correct.
5. If after 3 days (72 hours) no one can find the error in the author's proof, the author will then show the error, and post a new proof or pass.
And remember, I am the final judge
*Any and all rules may be modified without further notice. Void where prohibited. All rules applicable in all states but Texas and Maine. No purchase necessary.
And without any more ado, here is the first proof:
1. Prove that all prime numbers are odd.
Proof: Let p be a prime number. Then there exists no integer greater than 1 that divides p. Specifically, 2 can't divide p. Since 2 does not divide p, p can't be even.
Therefore, p is odd.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
I foresee this being the greatest thread ever to exist.
I have to run, but I think I see the problem with your proof and it is a short issue, so I'll type it up:
Just a small note: there is an integer n > 1 that divides p, and that is p itself. This leads to my disproof: the number 2 is a prime number, because it is divisible only by one and itself, but here "itself" is 2, which contradicts the proof's claim that no prime p may be divisible by 2. 2 is obviously an even number. Therefore not all prime numbers are odd.
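If anyone wants a numerical sanity check, here is a rough Python sketch (just an illustration; the little is_prime helper is made up for this example) listing which primes up to 50 are even:

def is_prime(n):
    # trial division: n is prime if nothing from 2 up to sqrt(n) divides it
    return n > 1 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

even_primes = [p for p in range(2, 51) if is_prime(p) and p % 2 == 0]
print(even_primes)  # prints [2] - the lone even prime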
(If the problem says "prove", we can possibly disprove it instead, right? Perhaps we should just make conjectures in some cases so people don't feel like the proposal is necessarily true?)
Edit: As I said, I have to leave for a while, and therefore I don't have time to think up a proof until later tonight. So if someone else would like to continue the game as soon as possible (assuming I was correct), feel free to go ahead.
Edit 2: Also, how advanced should the material being proved be? Should we keep the level somewhere from arithmetic to precalculus? Would it be alright to post a normal proof and a challenge proof, where participants may do either or both, but the challenge proof is more difficult and/or possibly contains calculus and beyond?
Edit by Ricky - Pass
Last edited by Zhylliolom (2006-08-07 11:08:42)
Offline
(If the problem says "prove", we can possibly disprove it instead, right? Perhaps we should just make conjectures in some cases so people don't feel like the proposal is necessarily true?)
No, you must, as the title says, find the error. The idea of the thread is to be able to spot errors in proofs, not the ability to prove things.
Edit 2: Also, how advanced should the material being proved be? Should we keep the level somewhere from arithmetic to precalculus? Would it be alright to post a normal proof and a challenge proof, where participants may do either or both, but the challenge proof is more difficult and/or possibly contains calculus and beyond?
It can be anything you want, but 1 and only 1 proof.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
Oh, I read
I start off with a proof that has an error in it. The proof statement may be true, but there is some error within it. Whoever can answer it first then posts their own proof. You may only post a proof once you have answered the previous one.
as meaning post a correct proof of the problem instead of posting a new and separate proof. I get it now. Now my plan to post an incorrect proof of Fermat's Last Theorem and force people to supply a correct one is ruined!
Offline
Bad wording, I agree. Fixing the mistake now... fixed.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
No one wants to take it? Ok, try this one.
Natural number: A positive integer
Prove that every even natural number can be represented by 4a, where a is also a natural number.
Proof: Let n = 2k, where k is a natural number. Then n is an even natural number. So 4a = 2(2a). Let 2a = k, so 2(2a) = 2(k) = 2k = n.
Therefore, 4a = n for any natural number n.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
No one wants to take it? Ok, try this one.
Natural number: A positive integer
Prove that every even natural number can be represented by 4a, where a is also a natural number.
Proof: Let n = 2k, where k is a natural number. Then n is an even natural number. So 4a = 2(2a). Let 2a = k, so 2(2a) = 2(k) = 2k = n.
Therefore, 4a = n for any natural number n.
k = 1
n = 2k = 2
2 = 4a
1/2 = a
1/2 is not a natural number, therefore not every even natural number can be represented by 4a, where a is also a natural number.
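A quick Python sketch of the same idea (nothing official, just listing the even naturals up to 20 and checking which ones are honestly of the form 4a):

for n in range(2, 21, 2):
    a = n / 4
    status = "fine" if n % 4 == 0 else "a is not a natural number"
    print(n, "= 4 *", a, "->", status)  # 2, 6, 10, 14 and 18 all fail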
You can shear a sheep many times but skin him only once.
Offline
100% correct. Remember, all you have to do is find the error in my proof. You don't have to prove my conjecture incorrect, although you get 10 bonus points for doing so. (1 googol bonus points gets you a "Good job" from me).
To be a bit more exact, the error was here:
Let 2a = k. k is defined as a natural number. By setting it equal to 2a, you are forcing k to be even, and so the proof no longer applies to all even naturals.
Just a reminder All_Is_Number, you have up to 24 hours to post a new proof, or you may pass it on to someone else.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
There's bonus points!? I love points. But if we're lazy, a simple counterexample will work? I look forward to the next proof. I'll try to think of one later that I can use sometime in the future, but to me it just seems like most errors would be pretty obvious... and I like to challenge/torture people with more difficult problems. I just can't help it.
Offline
100% correct. Remember, all you have to do is find the error in my proof. You don't have to prove my conjecture incorrect, although you get 10 bonus points for doing so. (1 googol bonus points gets you a "Good job" from me).
To be a bit more exact, the error was here:
Let 2a = k. k is defined as a natural number. By setting it equal to 2a, you are forcing k to be even, and so the proof no longer applies to all even naturals.
Just a reminder All_Is_Number, you have up to 24 hours to post a new proof, or you may pass it on to someone else.
Since I've never really had any formal instruction on proofs (evidenced by this thread), I'll have to pass.
It's much easier just to find a scenario in which an incorrect proof contradicts itself.
Last edited by All_Is_Number (2006-08-08 11:10:17)
You can shear a sheep many times but skin him only once.
Offline
In an attempt to revive this thread:
Prove that if x^x^x^... = 10 (an infinite tower of exponents), then x is the tenth-root of 10.
Proof: Suppose x^x^x^... = 10. The exponent sitting on the first x is itself the infinite tower x^x^x^..., which equals 10, so x^10 = 10. Therefore x = 10^(1/10), the tenth-root of 10.
For those who believe this proof is correct, take a calculator and put in the tenth-root of 10, and raise it to the tenth-root of 10 a bunch of times. You will find it goes way past 10.
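Roughly, that calculator experiment looks like this (a small Python sketch, assuming nothing beyond what the hint above says):

x = 10 ** 0.1        # the tenth-root of 10, about 1.2589
a = x
for step in range(1, 16):
    a = a ** x       # raise the running value to the tenth-root of 10 again
    print(step, a)   # climbs to about 10 by step 10, then keeps growing fast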
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
In this line lies your error - specifically in the step where x^(x^x^...) becomes x^10. In that step, you have simply assumed that x^x^... = 10 as part of the proof, enabling you to obtain x^10 = 10, when you have no grounds to do so. I shall post a proof at some point in the near future...
Bad speling makes me [sic]
Offline
Nope.
Since we know that x^x^...^x = 10, x^(x^x^...^x) = x^10.
Maybe an easier way to see it is:
10 = x^x^...^x = y
So x^(x^x^...^x) = x^(y) = x^10
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
Bad speling makes me [sic]
Offline
My fault, bad notation.
You're right, we are assuming the existence when there is none. But the if statement in the proof allows us to assume this.
This is an example of a _______ ________.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
This is an example of a _______ ________.
I don't know what to call it - is there another mistake, or was that the one you were after?
Bad speling makes me [sic]
Offline
Vacuous statement. It's the equivalent of saying "If 1 + 1 = 3, then all powers of 2 are divisible by 3." That statement is true, but only because 1 + 1 = 3 is false.
The proof I wrote was in the most technical sense correct, but meaningless.
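One way to see that the "if" can never actually happen, at least for the x the proof produces: build the tower for x = 10^(1/10) and watch where it settles. A rough Python sketch (purely illustrative):

x = 10 ** 0.1
t = 1.0
for _ in range(200):
    t = x ** t       # build the tower x^(x^(x^...)) from the top down
print(t)             # about 1.3713, nowhere near 10
# A convergent infinite tower can never exceed e (about 2.718), so no x
# satisfies x^x^x^... = 10, and the hypothesis is vacuous.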
So yea, you got it. If you have a proof to post, go ahead and do so.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
I know what you mean - valid argument, unsound because of a false premise. Caught me out though
Anyways, I was wondering who might be able to spot the flaw in this reasoning, recently posted elsewhere:
Proof by induction:
Let P(n) be the statement "any set of n people are the same person".
Now, obviously P(1) is true, since in any set of one person, all the members of that set are the same person.
Now, assume that P(n) is true for some n = k (k an integer). Then consider the set of k+1 people. By removing one of these people, we see that the other k people are all the same person, by our assumption. Now by placing the removed person back in the set and taking another person out, we see that every person in the set is, again, the same person by our assumption. So the collection of k+1 people are all the same person.
So P(1) is true, and P(k) ⇒ P(k+1), so any set of n people are, in fact, the same person.
Bad speling makes me [sic]
Offline
You're using the fact that:
If A = B and A = C, then B = C
Without having an A, since the size of the group is only 1.
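A toy check of that gap (a Python sketch, with the helper name common_members made up for the example): the induction step needs the two k-person subsets to share a member, and for a group of 2 they don't.

def common_members(size):
    people = set(range(size))
    return (people - {0}) & (people - {1})   # remove person 0, then person 1

print(common_members(2))   # set(): the two leftover subsets share nobody
print(common_members(3))   # {2}: now there is a shared person to chain through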
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
You're using the fact that:
If A = B and A = C, then B = C
Without having an A, since the size of the group is only 1.
Indeed - the P(k) ⇒ P(k+1) statement does not work for P(1) ⇒ P(2) because of this.
Bad speling makes me [sic]
Offline
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
Just a little question: I'm curious how you can get 1/(i(i+1)) = 1/i - 1/(i+1).
By testing I see it is correct, but how do you do it, step by step?
If I'm right about the error, I'll post a proof tomorrow...
Last edited by Kurre (2006-09-18 21:13:19)
Offline
Sorry, but you are wrong. You forgot about the first common sense rule about infinite series: there are no common sense rules. I'll give out a hint tomorrow if no one gets it by then.
And to get it step by step:
1/(i(i+1)) = A/i + B/(i+1) //Now multiply both sides by i(i+1)
1 = A(i(i+1))/i + B(i(i+1))/(i+1) //cancel out the fractions
1 = A(i+1) + B(i)
Now let i = 0:
1 = A(0+1) + B(0)
A = 1
Now let i = -1
1 = A(-1 + 1) + B(-1)
1 = A(0) - B
B = -1
So we get:
1/(i(i+1)) = 1/i + (-1)/(i+1) = 1/i - 1/(i+1)
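And if you want to confirm the decomposition exactly rather than by testing a few values, a small Python sketch using exact rational arithmetic:

from fractions import Fraction

for i in range(1, 6):
    lhs = Fraction(1, i * (i + 1))
    rhs = Fraction(1, i) - Fraction(1, i + 1)
    print(i, lhs, rhs, lhs == rhs)   # prints True every time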
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline
You cannot regroup the terms as you have done, as the initial series is not absolutely convergent.
Bad speling makes me [sic]
Offline
So close... but still wrong. Everything before the first comma is correct, Dross.
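For reference, the series in question does behave itself numerically - its terms are all positive and the partial sums head straight for 1. A quick Python sketch (just an illustration):

total = 0.0
for i in range(1, 100001):
    total += 1.0 / (i * (i + 1))   # every term is positive
print(total)                       # about 0.99999, i.e. approaching 1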
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Offline