Hi folks, me again.
I'm having problems with Big-O and Big-Theta notation in general. I don't understand how to find the Big-O or Big-Theta notation of anything, partially because I don't see where K comes in.
To elaborate, I know that...
f(n) is O(g(n))
if
f(n) <= C * g(n) for n >= some k
But if, for example:
7n^2 + 1 is O(n^2)
my answer is:
7n^2 + n <= 8n^2
But I don't see how I got to that point. (This was the only example we had in class, and the book isn't helpful; it skips too much.)
First of all, what in the world do I do with K?
Second, why do I change +1 to +n in my example?
Lastly... why was 8 chosen?
I'm (as you can tell) quite confused on this. We didn't spend much time on it at all but he suddenly decided to make a big deal about it for the exam. I think he assumes we know it from a previous class, which isn't the case for me.
Any help on Big-O or Big-Theta would be great, truly!
Thank you for reading this,
-Miche
Ok, here's another example that's fairly straightforward...
Show
I know the answer is:
But I don't know how to get to the answer. This is one of the problems in the book where they just give you the answer in the back, so I don't know the steps to get there.
Last edited by miche (2008-07-23 03:15:06)
Then for Big Theta, the only real example I have is:
Show
And I know the answer is:
But I have no idea how the book got to that answer.
Last edited by miche (2008-07-23 03:24:41)
When the functions are polynomials, the most important thing to know is
In your first example we have
If we want to show that
To get the answer that you have we need to observe that for x>9
I prefer the first method, since it does not require one to think about the exact numbers involved.
I cannot give working for the Big Theta example since I do not believe that it is correct.
I'm sure you know most or all of this already, but I'm just going to start from the beginning. Big-O and Big-Theta notation are used to determine how fast a function grows as its input variable grows to infinity. I assume you're studying this in the context of algorithms, in which case their use is to compare how complex different algorithms become as their inputs increase in size. Keep in mind that we're only interested in large inputs; small inputs can generally be handled easily even if they begin with relatively higher levels of complexity.
With that in mind let's look at your definition. f(n) is O(g(n)) iff f(n) <= C * g(n) for all n >= k, for some positive integers C and k.
What does this definition really mean? It tells us that if the function f stays within a scalar multiple of g for any input of sufficient size, then we can say that f is Big-O of g. (The standard wording is "f is O(g)"; g acts as the upper bound.)
Let's look at an example. Say f(n) = 3n + 2 and g(n) = n. I can tell you that f(n) = O(g(n)), because f(n) <= 4 * g(n) for all n >= 2. Notice that f(n) is greater than 4 * g(n) for n = 1, but this doesn't matter because we're only interested in big values of n. In this case "big" means all values of n >= 2.
Now, the question is: How do you find a good C and k to prove that f is Big-O of g? The easiest way is to find the term in f that will grow the fastest with n. In our previous example that term is obviously 3n, since the 2 portion is constant. You can choose any C that is greater than 3; I went with 4 for simplicity's sake. You could go with 3.1, 10, or a googol. Once you choose this number you simply have to find a k such that f(n) <= C * g(n) for all n >= k. What you can do is set f(n) and C * g(n) equal to each other and use that for your k, since you know that C * g(n) will grow faster than f(n). In my example that came out to 3n + 2 = 4n -> n = 2. Again, you can go with any k greater than 2 if you want; the point is that the Big-O condition will be fulfilled.
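To make this concrete, here's a quick Python sanity check of the constants picked above (C = 4, k = 2, with f(n) = 3n + 2 and g(n) = n). It only spot-checks a finite range of n, so it's evidence rather than a proof:

```python
# Sanity check of C = 4, k = 2 for f(n) = 3n + 2 and g(n) = n.
# A finite spot check, not a proof.

def f(n):
    return 3 * n + 2

def g(n):
    return n

C, k = 4, 2

# The bound is allowed to fail below k (here f(1) = 5 > 4 = C * g(1)).
assert f(1) > C * g(1)

# For every n >= k in the tested range, the Big-O inequality holds.
assert all(f(n) <= C * g(n) for n in range(k, 10_000))
print("f(n) <= 4*g(n) holds for all tested n >= 2")
```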
The trick to determining Big-O notation is to find the term in f(n) that will grow the fastest as n increases. If you find it easier you can ignore coefficients, since those have no bearing on the rate of increase. This is an easy task if f(n) is polynomial, you simply choose the term with the largest exponent. It can get more difficult if you work with non-polynomials, but this simply comes down to familiarizing yourself with the rate of growth of a few basic functions:
... < log(n) < sqrt(n) < n < n^2 < ... < a^n < n! < n^n < ...
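If it helps, you can watch this ordering show up numerically. A small Python sketch (the choice n = 20 is arbitrary; the hierarchy is about large enough n, and some of these functions cross each other for small n):

```python
import math

# Evaluate each function in the hierarchy at n = 20 to see the ordering
# log(n) < sqrt(n) < n < n^2 < 2^n < n! numerically.
n = 20
values = [
    ("log(n)",  math.log(n)),
    ("sqrt(n)", math.sqrt(n)),
    ("n",       n),
    ("n^2",     n ** 2),
    ("2^n",     2 ** n),
    ("n!",      math.factorial(n)),
]

# Each entry is strictly smaller than the next at n = 20.
assert all(a[1] < b[1] for a, b in zip(values, values[1:]))

for name, v in values:
    print(f"{name:8s} {v:,.2f}")
```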
There are other more complicated functions, but those will almost never show up in the real world, just on tests. If you're good with limits (particularly limits to infinity) you shouldn't have any problems with this. As a quick exercise see if you can find the Big-O notations for these functions:
Something you might notice is that f(n) actually has many Big-O functions. In fact, it has infinitely many. Going back to my original example f(n) = 3n + 2, it's trivial to show that not only is f(n) = O(n), but also f(n) = O(n^2), f(n) = O(n^3), f(n) = O(n!), and so on. This is where Big-Theta comes in. Big-Theta gives us a tight bound on f(n): an upper bound and a lower bound at once. Basically, f(n) = Big-Theta(g(n)) iff f(n) = O(g(n)) AND g(n) = O(f(n)). In English, this means that f(n) and g(n) both grow within a scalar multiple of each other. Someone can correct me if I'm wrong, but I believe every function f(n) has essentially one Big-Theta counterpart (unique up to constant factors), so this gives you a tight estimate of how quickly f(n) grows with n.
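Here's the "both directions" definition run on the same example f(n) = 3n + 2 in Python. The constants below were picked by hand, and the loop only spot-checks a finite range:

```python
# Big-Theta as Big-O in both directions, for f(n) = 3n + 2 and g(n) = n.

def f(n):
    return 3 * n + 2

def g(n):
    return n

# Direction 1: f(n) = O(g(n)), with C = 4 and k = 2.
assert all(f(n) <= 4 * g(n) for n in range(2, 10_000))

# Direction 2: g(n) = O(f(n)), with C = 1 and k = 1 (n <= 3n + 2 trivially).
assert all(g(n) <= 1 * f(n) for n in range(1, 10_000))

# Both directions hold, so f(n) = Theta(n).
print("f(n) = Theta(n) (spot-checked)")
```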
Wrap it in bacon
I have another Complexity Analysis question.
If I assume that f1(n) is O(g1(n)) and f2(n) is O(g2(n)) prove
f1(n) + f2(n) is O(max(g1(n), g2(n)))
Here is what I have so far:
f1(n) <= c1*g1(n) for n >= N1
f2(n) <= c2*g2(n) for n >= N2
so f1(n) + f2(n) <= (c1*g1(n)) + (c2 * g2(n)) for n >= max(N1, N2)
From here to the finish, I am lost on where to go next.
I apologize if this is a poor question but I would appreciate any help provided.
You want to assume (without loss of generality) that g1(n) <= g2(n) for all sufficiently large n.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
I am still not sure what the next step would look like. I found some information on another website that showed a substitution of h(n) for g1(n) and g2(n), and then it went on to simplify, giving (c1+c2)h(n). But I don't understand how you get from where I show above to the substitution of h(n) for g1(n) and g2(n). Here is the excerpt:
Rule 3) If f1(x) is O(g1(x)) and f2(x) is O(g2(x)) then
f1(x)+f2(x) is O(max(|g1(x)|,|g2(x)|))
Proof:
|f1(x)| <= C1*|g1(x)| for x > k1 and
|f2(x)| <= C2*|g2(x)| for x > k2 means
|f1(x)|+|f2(x)| <= C1*|g1(x)|+C2*|g2(x)|
for x > max(k1,k2), so, with h(x) = max(|g1(x)|,|g2(x)|),
|f1(x)|+|f2(x)| <= C1*h(x) + C2*h(x) for x > max(k1,k2)
<= (C1+C2)*h(x) for x > max(k1,k2)
Set h(x) = max(g1(x), g2(x)). Now how can you relate f1 and f2 to h?
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
I think I am missing the "why" in doing those things. I feel like everything I should need, I already have; I am just not seeing or putting together the underlying concepts of why.
Or was all the previous work only to simplify the left side of the original statement, f1(n) + f2(n) is O(max(g1(n),g2(n))), and the right side still needs to be dealt with?
Set h(x) = max(g1(x), g2(x)). Now how can you relate f1 and f2 to h?
If you can conclude that f1(x) <= h(x) and f2(x) <= h(x), then f1(x) + f2(x) <= h(x) + h(x) = 2*h(x).
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
I still can't see how we can say that g1(n) = h(n) simply by setting h(n) = max(g1(n),g2(n)).
You don't need to conclude that. All you need to conclude is that g1(x) <= h(x).
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
OK
We set h(n) = max(g1(n),g2(n))
That would make h(n) equal to either g1(n) or g2(n) depending on which is the larger of the two (the result of the max function).
Based on that fact we can then say that h(n) = g1(n) and that h(n) = g2(n) because it can be either depending on which is the larger.
Which is what would allow us to substitute g1(n) and g2(n) for h(n) and get from this:
|f1(x)|+|f2(x)| <= C1*|g1(x)|+C2*|g2(x)| for x > max(k1,k2)
to this:
|f1(x)|+|f2(x)| <= C1*h(x) + C2*h(x) for x > max(k1,k2)
Is that reasoning correct? If so, how can we do that when we do not know which of g1(n) or g2(n) is larger?
Based on that fact we can then say that h(n) = g1(n) and that h(n) = g2(n) because it can be either depending on which is the larger.
This is not true. But we can say that g1(n) <= h(n) and g2(n) <= h(n). Now the rest of your proof works perfectly.
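If it helps to see the whole argument run on concrete numbers, here's a Python spot check with made-up example functions f1(n) = 5n^2 and f2(n) = 3n (so c1 = 5, c2 = 3, and the combined constant is c1 + c2 = 8). A finite check, not a proof:

```python
# Spot check of the sum rule f1 + f2 = O(max(g1, g2)) with
# hand-picked functions: f1(n) = 5n^2 = O(n^2), f2(n) = 3n = O(n).

def f1(n): return 5 * n * n
def f2(n): return 3 * n
def g1(n): return n * n
def g2(n): return n

c1, c2 = 5, 3

for n in range(1, 10_000):
    h = max(g1(n), g2(n))                  # h(n) = max(g1(n), g2(n))
    assert g1(n) <= h and g2(n) <= h       # the key inequalities
    assert f1(n) + f2(n) <= (c1 + c2) * h  # the sum-rule bound
print("f1(n) + f2(n) <= 8 * max(g1(n), g2(n)) for all tested n")
```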
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
How can I prove that
f(n) = 2^n + 6n^2 + 3n is Big-Theta(2^n)
and also that
(n+a)^b = Big-Theta(n^b)?
I can't find any steps for solving Big-Theta problems; everywhere only big-O is given, not Big-Theta. So someone please help me.
All big-O means is that it is an upper bound; big-theta means that it is both an upper and a lower bound. It should be obvious from looking at 2^n + 6n^2 + 3n that it is big-theta of 2^n. Show that there exist k and K such that
k2^n <= 2^n + 6n^2 + 3n <= K2^n
for all n >= c, for some c.
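For instance, k = 1 and K = 10 turn out to work for all n >= 1 (found by trial; any larger K works too). A quick Python spot check over a finite range:

```python
# Spot check of k * 2^n <= 2^n + 6n^2 + 3n <= K * 2^n with k = 1, K = 10.
# Constants found by trial; this checks a finite range only.

def f(n):
    return 2 ** n + 6 * n * n + 3 * n

k, K = 1, 10
assert all(k * 2 ** n <= f(n) <= K * 2 ** n for n in range(1, 200))
print("sandwich holds for 1 <= n < 200 with k = 1, K = 10")
```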
For your second one: the binomial theorem.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
"TheDude"
Your detailed post above regarding big-O notation is the clearest I have seen yet on the subject. Have you considered writing a textbook for n00bs?
Agreed with above, better than all textbooks/websites I have seen. Want more of TheDude!
I have two questions; I really need to understand these problems.
1. Use big-O notation to classify the traditional grade school algorithms for addition and multiplication. That is, if asked to add two numbers each having N digits, how many individual additions must be performed? If asked to multiply two N-digit numbers, how many individual multiplications are required?
2. Suppose f is a function that returns the result of reversing the string of symbols given as its input, and g is a function that returns the concatenation of the two strings given as its input. If x is the string hrwa, what is returned by g(f(x),x)? Explain your answer - don't just provide the result!
Show that f1(n)+f2(n)= O(max(g1(n),g2(n))) where f1(n)= O(g1(n)) and f2(n)= O(g2(n))
Dude, that was a darn good answer.