Math is fun!!
Stop the presses!
I discussed it with my roommate, and came to a revelation about something. Instruments of measurement don't truncate the last value they read; they round it. Not by design, but by the nature of the inaccuracy. So, "1.5" on a thermometer doesn't mean a value from 1.5 to 1.5999...; it means a value somewhere in the range of 1.45 to 1.55.
So, I retract pretty much all of my statements. Figuring that out removes a fundamental block in my arguments.
I think the way you round ".5" is dependent on the situation. If you care about consistency (like a programmer would), I guess I'd round up. Other likely factors are minimizing total error or equal representation of the resulting whole numbers.
I think you missed my previous post, which noted that 4.45 can round up to 5 when 5's round upward and the rounding occurs more than once in a sequence on a particular data point.
This makes the exact opposite argument you made prior about rounding down being biased.
So rounding up is biased because 56% of the data rounds upward and 44% rounds down. But rounding down is biased because 55% of the data rounds downward and 45% rounds upward.
Did you catch the joke in the numbers I used for the percentages?
For 56-44, I rounded up, but for 55-45, I rounded down!
Rounding multiple times in succession is a mistake, though. Each time you round, you knowingly give up a bit of accuracy in the data. You should only round to the place you intend to, and you do it the first time. Besides, I could take the sequence a little further, and round 4.45 into 4.5 into 5 into 10 into 0!
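Here's a quick Python sketch of that chain (the helper is mine), using the decimal module so each half-up rounding step is exact:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value, place):
    """Round `value` to the given decimal place, with halves going up."""
    return value.quantize(Decimal(place), rounding=ROUND_HALF_UP)

x = Decimal("4.45")
step1 = round_half_up(x, "0.1")      # 4.45 -> 4.5
step2 = round_half_up(step1, "1")    # 4.5  -> 5
step3 = round_half_up(step2, "1E1")  # 5    -> 10 (nearest ten)
step4 = round_half_up(step3, "1E2")  # 10   -> 0  (nearest hundred)
print(step1, step2, int(step3), int(step4))   # 4.5 5 10 0

# Rounding once, straight to the intended place:
print(round_half_up(x, "1"))   # 4
```

Each intermediate rounding drags the value further from 4.45, while rounding once to the intended place gives 4 directly.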
I think the absolute best option is to switch our entire society to base 3!
I believe 9.50000000000000000001 should be rounded to 10.
I believe 9.49999999999999999999 should be rounded to 9.
I hope we can agree on this at least.
Yep, I agree with that, of course.
I want to clarify what I consider to be "exactly .5". It has to depend on the situation. With a thermometer, even if it reads .5000000, you should round up, because the temperature could actually be .50000001.
I think I've finally got a good handle on the controversial situation:
There are situations where the decimals are exact figures, and the number of significant digits actually is the degree of accuracy available. These are absolutely the ONLY cases where I can see how ".5" is a problem, but they do exist.
Let's say I break some pencils into ten pieces each. Then, I grab 5.5 pieces. Would it be more accurate to say I grabbed 5 or 6?
From the perspective of the person who sees only the output of the rounding process, I see two main priorities in choosing a rounding method in this situation:
1. making sure that each whole number has an equal chance of "winning".
2. ensuring that the resulting value most closely resembles the original one.
(Note that I don't see "ensuring that you round up as many times as you round down" as a priority. It simply does not matter to the person receiving the data, which I think is the important perspective. In fact, if you were in base 3, then ".0" and ".1" would round down, and ".2" would round up; that's a 67/33 split! But the results most closely match the data, and all round numbers have an equal chance of winning. I think I fell into this trap of thinking earlier.)
And I see three main ways of rounding ".5":
A. Round to the nearest even number.
I hate this one because it blatantly screws up priority #1. I hate it I hate it I hate it.
B. Round randomly up and down, or switch between rounding up and down for each instance of ".5" you come across.
This seems OK; however, it can mess up priority #1. Depending on the order in which the numbers chug through the process, you may end up always rounding "1.5" and "2.5" into "2". Now "2" has received more than its fair share of the data.
C. Always round up.
This method ensures that you round up half the time and round down half the time. But that's not a priority to whoever receives the output of our rounding process! So, this factor doesn't count. However, this method can't possibly screw up #1.
".5" isn't any closer to 1 than it is to 0. It's an unfortunate by-product of using base-10! If we used an odd-numbered base, like base 3, then .1 would round down, and .2 would round up, and we'd have no problem. Anyway, rounding up in this situation (".5" in base 10) is no more inaccurate than rounding down, so it's not screwing up priority #2.
So, I like C. You could implement B in a way that priority #1 is still catered to, but it still wouldn't be any better than C, because C satisfies #1 anyway, and satisfies #2 just as much as B does.
So, round up darn it!
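For what it's worth, the three methods can be sketched in Python like this (the function names are mine, and they only handle non-negative values):

```python
import random

def round_half_even(x):
    """Method A: halves go to the nearest even integer (non-negative x)."""
    floor, frac = int(x // 1), x % 1
    if frac < 0.5:
        return floor
    if frac > 0.5:
        return floor + 1
    return floor if floor % 2 == 0 else floor + 1

def round_half_random(x, rng=random):
    """Method B: halves go up or down by coin flip."""
    floor, frac = int(x // 1), x % 1
    if frac < 0.5:
        return floor
    if frac > 0.5:
        return floor + 1
    return floor + rng.choice((0, 1))

def round_half_up(x):
    """Method C: halves always go up."""
    floor, frac = int(x // 1), x % 1
    return floor if frac < 0.5 else floor + 1

print(round_half_even(2.5), round_half_random(2.5), round_half_up(2.5))
```

Method A sends both 1.5 and 2.5 to 2, which is exactly the "even numbers win more often" complaint above; method C always sends 2.5 to 3.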
Also 0 + 1 + 2 + 3 + 4 is not equal to (10-5) + (10-6) + (10-7) + (10-8) + (10-9). (Your method)
However 1 + 2 + 3 + 4 is equal to (10-6) + (10-7) + (10-8) + (10-9). (My method)
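Spelling out the two sums in Python (each digit d below is a tenth scaled by 10, and 10 - d is the distance that digit covers when it rounds up):

```python
# ".0"-".4" counted as rounding down vs. ".5"-".9" rounding up:
down_with_zero = sum([0, 1, 2, 3, 4])                  # 10
up_from_five = sum(10 - d for d in [5, 6, 7, 8, 9])    # 15

# ".1"-".4" rounding down vs. ".6"-".9" rounding up, ".0" and ".5" excluded:
down_without_zero = sum([1, 2, 3, 4])                  # 10
up_from_six = sum(10 - d for d in [6, 7, 8, 9])        # 10

print(down_with_zero, up_from_five)     # 10 15 -- unequal
print(down_without_zero, up_from_six)   # 10 10 -- equal
```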
Could you state this differently? I don't understand it.
I think we agree on what statisticians do. But I still disagree with it. Like I stated in my example, let's say a thermometer reads only to the tenths, but you round to the closest whole number.
2 will get 1.5 to 2.5 ... that's 11 possibilities.
3 will get 2.6 to 3.4 ... that's 9 possibilities.
Each possibility is equally likely to happen. So now the even numbers receive a larger span of possibilities. Why is that preferable? After the rounding, the even numbers will be represented as more frequent than they really are.
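You can check the count with a quick Python sketch (the helper is mine, just implementing round-half-to-even on one-decimal, non-negative values):

```python
def round_half_even(x):
    """Round to the nearest integer, halves going to the even one."""
    floor, frac = int(x // 1), x % 1
    if frac < 0.5:
        return floor
    if frac > 0.5:
        return floor + 1
    return floor if floor % 2 == 0 else floor + 1

tenths = [k / 10 for k in range(15, 35)]   # 1.5, 1.6, ..., 3.4
hits_two = sum(1 for x in tenths if round_half_even(x) == 2)
hits_three = sum(1 for x in tenths if round_half_even(x) == 3)
print(hits_two, hits_three)   # 11 9
```

Both 1.5 and 2.5 land on 2, so 2 collects eleven tenths while 3 collects only nine.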
In your 50 possibilities, the problem is that the sum of the distances you are rounding up is greater than the sum of the distances you are rounding down.
How it really goes is: .00 doesn't round anywhere; the value stays the same.
.50 rounds up or down, so it's a don't care.
49 cases of .01 to .49 round down the same amount as 49 cases of .51 to .99 round up.
".00" stays the same, but it still rounds down, and it is just as likely to occur as any of the other 99 possibilities.
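A quick Python check of that claim over the hundredths .00 through .99: the 49 cases below the half cancel the 49 cases above it exactly, so only ".50" is left to argue about.

```python
# Total distance rounded down by .01-.49 vs. total distance rounded
# up by .51-.99 (each k is a hundredth scaled by 100).
down_total = sum(k for k in range(1, 50)) / 100        # .01-.49
up_total = sum(100 - k for k in range(51, 100)) / 100  # .51-.99
print(down_total, up_total)   # 12.25 12.25
```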
When applying the rounding process, in any numeric base, we want to ensure that every number receives an equal span of the possibilities. It's not the only thing you're looking for, but it's a factor. Rounding is a way of simplifying the data without skewing the actual results. Look at the span from negative infinity to infinity, and assume that our instruments are only accurate enough to measure whatever we're measuring to the tenths. Now, let's look at the numbers 2 and 3 as an example.
In the 50/50 world, you round .5 to the closest even number... so:
2 will get 1.5 to 2.5 ... that's 11 possibilities.
3 will get 2.6 to 3.4 ... that's 9 possibilities.
So, now we've given even numbers an unfair advantage! Why would statisticians do this??
Another very important thing to remember is that you're usually rounding data that was already truncated. In 99.999999999(ad infinitum)% of the cases where you have a number that ends with ".5", you're actually dealing with a value greater than .5, like .50000000001, that has already been simplified to .5. If you look at the problem as "I want the output of this rounding process to show the value closest to the data I'm given", then I can see why you'd be tempted to use a 50/50 rule, maybe by flipping a coin. But you're forgetting about those cases such as ".50000001"!
Here's an ASCII diagram of what using a 50/50 solution on .5 does to the probabilities of rounding up and down:
-=round down
+=round up
----------------+++++++++
.0 ... .5 .6 ... 1.0
By using a 50/50 rule, you're advocating rounding down 55% of the time and rounding up 45% of the time. Now, this isn't inherently bad, depending on what you want. You could still make it work so that each round number has the same probability of being rounded to. But it is no longer the case that each value goes to the round number it most closely represents.
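Tallying the diagram over the ten tenths makes the 55/45 split explicit (counting ".0" as a round-down and splitting ".5" half-and-half, as the 50/50 rule does):

```python
# .0-.4 always go down, .6-.9 always go up, and the 50/50 rule
# sends .5 each way half the time.
down = 5 + 0.5   # .0 .1 .2 .3 .4, plus half of .5
up = 4 + 0.5     # .6 .7 .8 .9, plus half of .5
print(down / 10, up / 10)   # 0.55 0.45
```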
And I know I haven't really addressed rounding in different bases, but I will sometime. Sorry!
But look at your first example, it confuses the issue...
"I think that .50 should not be rounded in either direction because 1/2 is exactly 1/2 away from the adjacent whole numbers.
0 1234 5 6789 0"
Now it IS true that .5 lands right in between the adjacent whole numbers, but the important point is that BOTH whole numbers round down. And don't look at them as whole numbers; look at them as their decimal equivalents... they're really ".0", and ".0" happens as frequently as ".1", ".2", and so on up to ".9" will. There is rarely a case where these frequencies are not equal, especially in the cases of significant digits / precision.
And something about your second example is resting uneasily with me, but I can't put my finger on it yet.
Hey I'm a newcomer. Hello. Wee!
And well, I've had it. I've simply had enough of people disagreeing about how to round .5.
I can see where if you have a range of numbers between 0.0 and 1.0 (inclusively), you may want to split it 50/50. But I simply don't see any real world examples that would impose an exact and inclusive range like that.
In cases of statistics, and significant digits, it's merely logical to round up when you see a 5. Fifty percent of the time, a random number from 0 to 9 is going to be 0 through 4, and the other fifty percent of the time it will be 5 through 9. This is an exact, unquestionable 50/50 split. Why is there any debate?
In fact, I wrote a program to generate random numbers between 000 and 999 and round them off at the tens place using the two methods in question. As I figured, the approach of choosing up/down according to the even/odd status of the parent number (when encountering a number that ends with "50") gives a bias toward rounding down.
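The program itself isn't shown, but a sketch along those lines might look like this; it enumerates all of 000-999 exhaustively instead of sampling, so the counts are exact:

```python
def round_tens_half_up(n):
    """Round n to the nearest ten, with "5" endings always going up."""
    return ((n + 5) // 10) * 10

def round_tens_half_even(n):
    """Round n to the nearest ten; "5" endings go by the evenness of the tens digit."""
    tens, ones = divmod(n, 10)
    if ones < 5:
        return tens * 10
    if ones > 5:
        return tens * 10 + 10
    return tens * 10 if tens % 2 == 0 else tens * 10 + 10

for method in (round_tens_half_up, round_tens_half_even):
    rounded = [method(n) for n in range(1000)]
    # Count "0" endings (which stay put) together with the round-downs.
    down = sum(1 for n, r in zip(range(1000), rounded) if r <= n)
    bias = sum(rounded) / 1000 - 499.5   # shift of the mean vs. the true mean
    print(method.__name__, down, 1000 - down, bias)
```

Counting values that stay or move down, half-even lands on 550 of the 1000 while leaving the mean untouched; always-up splits the count 500/500 but shifts the mean up by half.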
Does someone here think that 5 should not always be rounded up? I'd like to know what the reasoning is.
--Johnny