Apparently we can't move away from talking past each other.
You took my original quotes out of context. I was pointing that out. Now, you've taken that out of context as well. We could continue this, but it is a waste of time.
I never said it was impossible. What is impossible is a general procedure for size > 4. One that works for every matrix. There are always special cases where it is possible to find the exact eigenvalues and eigenvectors (for triangular matrices it is trivial, no matter what the size).
Most of the algorithms listed on the Wikipedia page were created within the last 30 years, many within the last 10.
Not good enough. Many CAS algorithms are secret.
And in what way does this somehow counter the point from which my quote was taken, which was that eigenvalue algorithms are an active field of research?
So if we can please move away from talking past each other, I would rather return to the actual problem here:
Having looked up the definitions of adjacency and incidence matrices, I note that the adjacency matrix is symmetric. If the incidence matrix is similar to it, it must be symmetric as well. Each row in the incidence matrix is a vertex, and each column is an edge, so each column has exactly two 1s. If the matrix is symmetric, each row has only two 1s as well. This means that each vertex is connected to exactly two edges. As a result, every vertex and every edge must lie in a simple loop. If the graph is connected, then it has to be simply one big loop of eight vertices connected by eight edges. There is only one such graph.
Edited to add: I see that only in problem 2 does it assume the graph is connected. This broadens the choices:
a loop of 8
a loop of 6 and a loop of 2
a loop of 5 and a loop of 3
two loops of 4
a loop of 4 and two loops of 2
two loops of 3 and a loop of 2
4 loops of 2
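For what it's worth, the seven cases can be double-checked by enumerating the partitions of 8 into loop sizes of at least 2. A quick sketch (my own addition, not part of the original argument):

```python
def loop_partitions(n, largest=None):
    """Yield partitions of n into parts >= 2 (one part per loop),
    in non-increasing order."""
    if largest is None:
        largest = n
    if n == 0:
        yield []
        return
    for part in range(min(n, largest), 1, -1):
        if n - part == 1:          # a single leftover vertex can't form a loop
            continue
        for rest in loop_partitions(n - part, part):
            yield [part] + rest

# [8], [6,2], [5,3], [4,4], [4,2,2], [3,3,2], [2,2,2,2] -- seven in all
assert len(list(loop_partitions(8))) == 7
```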
I apologize for being abrasive. I had not intended to be so. I respect the generosity and intelligence you've amply demonstrated in these forums.
But I am not at all behind the times concerning what a CAS can do. My remarks are entirely accurate. Diagonalization by finite means is not possible beyond 4x4 matrices - at least not without involving transcendental functions (which, even if you used them, also require iterative means to calculate). This is not a case of "nobody has figured out how to do it yet"; rather, it has been proven to be a fundamental limitation.
That said, there are, and have long been, iterative means of finding eigenvalues and eigenvectors for any size of matrix. But this remains an active area of research as people attempt to find new ways that work faster, either in general, or for specific classes of matrices. Most of the algorithms listed on the Wikipedia page were created within the last 30 years, many within the last 10. And there are others that are not listed.
As for size of matrices, I suspect the record is held by Google's PageRank algorithm, which finds eigenvalues for a matrix whose dimension is in the trillions now, I think (one for each page on the internet). Fortunately, this matrix is very sparse - almost all of its entries are 0 - which makes the job a lot easier.
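For flavor, here is a toy sketch of power iteration, the basic principle behind such huge eigenvector computations (only an illustration of the idea; Google's actual algorithm and matrix are of course far more elaborate):

```python
def power_iteration(mult, v, steps=100):
    """Estimate the dominant eigenvector by repeatedly applying the matrix.

    `mult` applies the matrix to a vector; for a sparse matrix this touches
    only the nonzero entries, which is what makes enormous sizes feasible.
    """
    for _ in range(steps):
        w = mult(v)
        norm = max(abs(x) for x in w)   # rescale to avoid overflow
        v = [x / norm for x in w]
    return v

# Tiny dense stand-in: the matrix [[2, 1], [1, 2]] has dominant
# eigenvalue 3 with eigenvector proportional to (1, 1)
mult = lambda v: [2*v[0] + v[1], v[0] + 2*v[1]]
v = power_iteration(mult, [1.0, 0.0])
```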
It is easy to find P. The process is called diagonalizing a matrix and is a rote procedure. My favorite thing in the world, a no brainer! But it may not exist!
???
You should share your great secrets with the world, then!
Diagonalizing, and its related problem of finding eigenvalues and eigenvectors, is a major subject of research for Numerical mathematics. For 5 dimensions or higher, it can only be done by iterative methods.
Check out the Wikipedia page "Eigenvalue Algorithm" for more information.
For 2x2 and 3x3, my favorite method is to solve the determinant equation for the eigenvalues, then exploit the Cayley-Hamilton theorem to find the eigenvectors. If you have enough independent eigenvectors to span the space, then you can use them as the columns of the matrix P (but normalize them first, so that P[sup]-1[/sup] = P[sup]T[/sup], which isn't necessary but is a lot easier).
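As a concrete sketch of that method on a 2x2 symmetric matrix of my own choosing (not one from this thread):

```python
import math

# A = [[2, 1], [1, 2]], written out entrywise
a, b, c, d = 2.0, 1.0, 1.0, 2.0

# Eigenvalues from the determinant equation:
# det(A - tI) = t^2 - (a + d) t + (ad - bc) = 0
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)
l1, l2 = (tr + disc) / 2, (tr - disc) / 2

# Cayley-Hamilton: (A - l1 I)(A - l2 I) = 0, so any nonzero column of
# (A - l2 I) is an eigenvector for l1, and vice versa.
v1 = (a - l2, c)   # eigenvector for l1
v2 = (a - l1, c)   # eigenvector for l2

# Normalize so that P has orthonormal columns (then P^-1 = P^T)
n1, n2 = math.hypot(*v1), math.hypot(*v2)
P = [[v1[0] / n1, v2[0] / n2],
     [v1[1] / n1, v2[1] / n2]]
```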
We could also go with these...
or this
But my experiments with non-quadratic-irrational numbers have not shown any particular success in finding tight patterns, nor suggested any additional avenues of investigation. Quadratic irrationals have, and continued fractions are at least suggestive of possible explanations, though my experiments so far have failed to uncover any definitive patterns.
The particular expression I found for 0.15656 does indeed look fairly arbitrary. But its simple continued fraction form shows otherwise. It is one of the simplest irrational continued fractions to try (other than the leading 6, which mostly seems to rotate the pattern so it could be left off).
eigenguy, why? The math taught in high schools here in the US and Canada (where I was before) is practically all memorization.
That is the problem. It shouldn't be, and it is because of attitudes such as this lady's that it is this way. I was fortunate enough to have a particularly good teacher in high school. He taught math as it needs to be taught. Math should be understood, not memorized.
I never worry about remembering a formula, and I seldom look one up. If I cannot remember how it goes, I back off to what I do remember and re-derive it. I can do this because I understand the math behind it, so deriving the formula is seldom hard to do. In fact, it often takes me less time to rederive it than it does to search it out on Google. I usually only look up formulas for areas of mathematics I haven't mastered yet. And when I do look them up, I generally search for a derivation so I can improve my understanding. (Okay - when the calculation is unusually nasty, I am happy to let someone else slog through it, and just look up the result. But such nasty calculations generally mean I (or they) have not found the right approach.)
I believe the intent is:
The ship can be loaded with 10 animals.
Animals available to be loaded consist of three types: Cows, calves, horses.
There are enough of each type of animal available to fill all 10 stalls with cows, or all 10 with calves, or all 10 with horses, if you desired.
Individual cows are not distinguishable from each other (so exchanging one cow for another does not constitute a different loading). The same is true of calves, and of horses.
However - and I base this only on the answer provided, not on anything made clear in the problem itself - the individual stalls are distinguishable (so having a cow in #1 and a horse in #2 is a different loading from having a horse in #1 and a cow in #2, even if all the other stalls are the same).
Based on that understanding, the type of animal in each stall is independent from the types in the other stalls, and there are exactly 3 ways to fill each stall, so the total number of loadings is 3*3*3*...*3 = 3^10.
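That count can be confirmed by brute force - each loading is just one choice of type per stall (a quick check, my own addition):

```python
from itertools import product

STALLS = 10
TYPES = ("cow", "calf", "horse")

# One loading = an assignment of a type to each distinguishable stall
loadings = list(product(TYPES, repeat=STALLS))
assert len(loadings) == 3 ** 10   # 59049
```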
(A) No. Perhaps bobbym sees something distinct about each of those 2s, but they all look the same to me. That matrix is already in Jordan normal form, and that superdiagonal 1 tells me that the eigenspace of 2 is going to be 2 dimensional (if there were a second superdiagonal 1, it would only be one dimensional).
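As a concrete check (the quoted matrix isn't visible in this thread, so I am assuming a 3x3 Jordan matrix with 2s on the diagonal and a single superdiagonal 1), the eigenspace dimension is the nullity of A - 2I:

```python
def rank(rows, eps=1e-12):
    """Gaussian elimination on a small matrix; returns the rank."""
    rows = [list(r) for r in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > eps),
                   None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

# Hypothetical matrix under discussion: Jordan form, eigenvalue 2,
# exactly ONE superdiagonal 1
A = [[2, 1, 0],
     [0, 2, 0],
     [0, 0, 2]]
AmI = [[A[i][j] - (2 if i == j else 0) for j in range(3)] for i in range(3)]
nullity = 3 - rank(AmI)   # dimension of the eigenspace of 2
assert nullity == 2
```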
These aren't reading suggestions for you, knightstar - at least not for a few more years. They are excellent books (certainly Bartle is, and I have no doubt that ShivamS is entirely correct about the others), but you are not ready for any of them. They all expect you to have mastered a number of basic concepts and terminology of mathematics before you begin, and your questions here are about those basics.
Exponentiation is a binary operation. You can think of it as a function, but it is not 1-1, so it does not have an inverse in and of itself.
By fixing one of the operands, exponentiation can define two functions, both of which are locally 1-1, and thus do have local inverses:
Power functions fix the exponent and allow the base to vary: y = x[sup]c[/sup]. The inverse of this function is the corresponding root function - the power function whose exponent is the multiplicative inverse, y = x[sup]1/c[/sup].
Exponential functions fix the base and allow the exponent to vary: y = c[sup]x[/sup]. The inverse of the function is the corresponding logarithm function.
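A quick numeric illustration of both inverse pairs (the example values are my own):

```python
import math

c = 3.0

# Power function y = x**c; its inverse is the c-th root, x**(1/c)
x = 5.0
y = x ** c
assert abs(y ** (1.0 / c) - x) < 1e-9

# Exponential function y = c**x; its inverse is the base-c logarithm
x = 2.5
y = c ** x
assert abs(math.log(y, c) - x) < 1e-9
```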
Tell that lady to first know what she is talking about. Now, if she had said "high school math is just knowing which formulas to use", I would agree with her.
I wouldn't agree with her even then.
Considering the problem, I note that it is 17(b). Further, the solution makes use of something not indicated anywhere in the problem itself, namely the "7.5" term. These two bits of information, and some peculiarity in the wording, point to problem 17(a) providing additional information we did not have for solving this problem.
bobbym is correct that the problem we saw has many other solutions. But I strongly suspect that, had we been given the information in problem 17(a), it no longer would, and the solution there would make sense.
A different approach.
Satyendra Bose: An identical chicken already crossed the road, so this one was much more likely to do the same.
Wolfgang Pauli disagrees: The chicken crossed the road because there was already an identical chicken on this side, so it could not stay here.
For question 1, we need the intersection of the two graphs y = 1/3 x^2 and y = x + c. Subtract one equation from the other to get 1/3 x^2 - x - c = 0. Since there is only one solution to this quadratic equation, the discriminant must be 0. Equate the discriminant with 0 and solve for c. If c is a constant, you are done. If c is a function of x, maximise the function by equating the derivative of that function with 0.
This starts off well, but goes astray. If we are looking for an intersection point then the same value of x gives the same y in both equations, so 1/3 x^2 = x + c, or 1/3 x^2 - x - c = 0.
At this point, you pull out your handy-dandy quadratic formula to get x = (something) +/- (something)√((-1)^2 - 4(1/3)(-c)) = (something) +/- (something)√(1 + (4/3)c), where I've not bothered with the somethings, because we actually don't care about the value of x. What we need, though, is for there to be 0 or 1 real values for x, and this occurs only when the discriminant 1 + (4/3)c <= 0. Thus bobbym's solution.
c is never a function of x, here.
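For concreteness, a numeric check of the discriminant argument (my own addition; for these two particular curves the tangency occurs at c = -3/4):

```python
# (1/3) x^2 - x - c = 0 has discriminant 1 + (4/3) c.
# Tangency (one repeated root) when the discriminant vanishes:
c = -3 / 4                     # solves 1 + (4/3) c = 0

# The repeated root is x = -b / (2a) = 3/2; line and parabola meet there
x = 1 / (2 * (1 / 3))
assert abs((1 / 3) * x ** 2 - (x + c)) < 1e-12

# Any larger c makes the discriminant positive: two intersection points
assert 1 + (4 / 3) * (c + 0.1) > 0
```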
As for question 2, one approach (whether it was bobbym's or not, I don't know) is to consider the curves xy = k. These are a family of non-overlapping hyperbolas with the axes as asymptotes. As the value of k increases, the hyperbolas move farther away from the origin. So consider the intersection of the ellipse with the hyperbola xy = k for a particular k. For small enough k, this intersects the ellipse in 2 points in quadrant I (and 2 more in quadrant III, but they are mirror images, and give nothing new). As we increase k, the hyperbola moves outward, and the intersection points move closer together. Increase k far enough, and the intersection points meet, leaving a single point of tangency. This is the maximum value of k for which the hyperbola intersects the ellipse. Increase k any more and there is no intersection: that k cannot be reached on the ellipse.
So the maximum value k of xy occurs when the hyperbola xy = k is tangent to the ellipse. As with question 1, substitute y = k/x into the ellipse equation, solve for x^2, and find the discriminant. The value of k that makes the discriminant = 0 is the maximum value of xy on the ellipse.
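Since the ellipse itself isn't quoted in the thread, here is the same computation sketched for a stand-in ellipse x^2/4 + y^2 = 1: substituting y = k/x gives u^2/4 - u + k^2 = 0 with u = x^2, whose discriminant 1 - k^2 vanishes at k = 1.

```python
import math

k = 1.0   # tangency value from the discriminant of u^2/4 - u + k^2 = 0

# Brute-force check: maximum of xy over points (2 cos t, sin t) of the
# stand-in ellipse x^2/4 + y^2 = 1
best = max(2 * math.cos(t) * math.sin(t)
           for t in (i * 2 * math.pi / 100000 for i in range(100000)))
assert abs(best - k) < 1e-6
```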
I haven't tried some of the others, but it looks like maybe low values in the continued fraction expansion are best, or maybe near-constant values, or maybe both.
So much for those ideas.
[0;2,1,1,2,1,1...] = 0.387426
looks very similar to
[0;6,2,1,1,2,1,1, ...] = 0.156558,
but rotated. I also experimented with shifting the pattern, but these did much worse. Apparently a pattern of High, Low is better than Low, High:
[0;1,2,1,1,2,1,1, ...] = 0.720759 opened the middle up
and
[0;1,1,2,1,1,2, ...] = 0.581139 opens it considerably. It only starts to close back up just when the simulation ends.
Based on my ideas that I quoted above, I next tried
[0;2,1,1,1,2,1,1,1,...] = 0.379796
This was significantly looser than [0;2,1,1,2,1,1...], and
[0;3,1,1,3,1,1,...] = 0.280776
leaves large gaps in the center before it closes back up.
As noted already, [0;2,2,2,...] = √2 - 1 = 0.41421356 does well, but not as well as the golden ratio [0;1,1,1,...] = 0.61803399.
But [0;3,3,3,...] and higher numbers all tend to be very open patterns.
I think it is likely that the continued fraction patterns can help you predict good values, and it appears that maybe 0.156558 is 2nd best after phi. But there is more going on here, or else [0;2,1,1,1,2,1,1,1,...] would do better yet, but it doesn't.
They are horrible for computing, but they have some interesting properties. A continued fraction is an expression of the form a[sub]0[/sub] + 1/(a[sub]1[/sub] + 1/(a[sub]2[/sub] + 1/(a[sub]3[/sub] + ... + 1/a[sub]n[/sub]))).
This is denoted by [a[sub]0[/sub]; a[sub]1[/sub], a[sub]2[/sub], a[sub]3[/sub], ..., a[sub]n[/sub]] (note the leading term is separated by a ";", the remainder by "," - a favored convention these days, but one I find foolish). The extension to infinite continued fractions is obvious. Canonically, a[sub]0[/sub] is an integer, and all the other a[sub]i[/sub] are positive integers. Under that condition, every rational number has exactly two continued fraction expansions, both of them finite (one of which ends in 1, while the other has a[sub]n[/sub] > 1). Every irrational number has exactly 1 canonical continued fraction, which is infinite. Quadratic numbers (solutions of quadratic equations with rational coefficients) have, eventually, a repeating pattern in their expansions. All other irrational numbers do not repeat, though some still follow a discernible pattern: e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...], continuing in the same fashion.
You can allow the a[sub]i[/sub] to be other than integer, in which case you don't get unique expansions anymore, but which also allows you to express some useful relationships that are undefined if you demand positive integers only. For example, [a[sub]0[/sub]; a[sub]1[/sub], a[sub]2[/sub], a[sub]3[/sub], ...] = [a[sub]0[/sub]; a[sub]1[/sub], a[sub]2[/sub], a[sub]3[/sub], ..., a[sub]n-1[/sub], [a[sub]n[/sub]; a[sub]n+1[/sub], ...]]
To find the continued fraction expansion of a number x: let x[sub]0[/sub] = x, a[sub]0[/sub] = floor(x), the greatest integer <= x. Then for all n > 0, x[sub]n[/sub] = 1/(x[sub]n-1[/sub] - a[sub]n-1[/sub]) and a[sub]n[/sub] = floor(x[sub]n[/sub]). Then x = [a[sub]0[/sub]; a[sub]1[/sub], a[sub]2[/sub], ...].
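That procedure translates directly into code; a floating-point sketch (so only the first several terms come out reliably before rounding error takes over):

```python
import math

def continued_fraction(x, terms=10):
    """Expand x as [a0; a1, a2, ...] via the floor-and-reciprocate rule."""
    cf = []
    for _ in range(terms):
        a = math.floor(x)
        cf.append(a)
        frac = x - a
        if frac < 1e-12:       # rational (or precision exhausted): stop
            break
        x = 1.0 / frac
    return cf

assert continued_fraction(math.sqrt(2), 6) == [1, 2, 2, 2, 2, 2]
assert continued_fraction((1 + math.sqrt(5)) / 2, 5) == [1, 1, 1, 1, 1]
```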
To show the usefulness of continued fractions, consider our mystery number x = 0.15656. Following the procedure gives x = [0; 6, 2, 1, 1, 2, 1, 1, ...] (as a rational number, its continued fraction will terminate, but the spreadsheet I used to calculate this went screwy at this point, as the error grew out of control). Looking at this, I decided to examine the quadratic number whose continued fraction follows that pattern of repeating 2, 1, 1 forever. Let y = [2; 1, 1, 2, 1, 1, ...], then x = [0; 6, y], and y = [2; 1, 1, y]. That latter equation is y = 2 + 1/(1 + 1/(1 + 1/y)), which clears to 2y^2 - 4y - 3 = 0.
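The algebra here can be checked numerically (my own verification of the values discussed in this thread):

```python
import math

# y = [2; 1, 1, y] unwinds to y = 2 + 1/(1 + 1/(1 + 1/y)),
# which clears to 2y^2 - 4y - 3 = 0; take the positive root.
y = (4 + math.sqrt(16 + 24)) / 4        # = 1 + sqrt(10)/2

# y really is a fixed point of one period of the continued fraction
assert abs(y - (2 + 1 / (1 + 1 / (1 + 1 / y)))) < 1e-12

# Then x = [0; 6, y] = 1/(6 + 1/y), which simplifies to (16 - sqrt(10))/82
x = 1 / (6 + 1 / y)
assert abs(x - (16 - math.sqrt(10)) / 82) < 1e-12
assert abs(x - 0.156558) < 1e-6
```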
Q.1) Thermodynamic temperature is temperature on a scale with 0 = absolute zero, most commonly (indeed, almost exclusively) in kelvins. There are other thermodynamic temperature scales, though. The second best known is degrees Rankine, whose unit of temperature matches that of the Fahrenheit scale.
Q.3) ShivamS is incorrect. Temperature is never a pure number. The answer is 273.16 Kelvins.
When doing physics, watch those units. If you do, they will actually steer you in the right direction even when you try to go wrong. If you ignore them, they *will* shipwreck you even in subjects that you think you know well.
The continued fraction for phi is [1; 1, 1, 1, ...]
The continued fraction for 0.15656 approximates [0; 6, 2, 1, 1, 2, 1, 1, ...] (I didn't expand it all the way out, so I don't know exactly where it breaks off the pattern.)
If we assume that this is the "real" value, it comes out to be (16 - √10) / 82 = 0.156558...
√2 has continued fraction [1; 2, 2, 2, ...]
I haven't tried some of the others, but it looks like maybe low values in the continued fraction expansion are best, or maybe near-constant values, or maybe both.
The thing about math is there really isn't any easy problems.
It was so tempting to post "2 + 2 = ?" in response to this.
But alas, if someone were to do that to me, I would have returned with a considerable discussion upon the meaning of "number", "addition", "2", and perhaps even "=". So I guess I have to acquiesce to the point.
One thing that I always found helpful is that when I learned something, I always thought about how I would try to teach it to someone else - in particular, why it is the way it ought to be. This forced me to look at it in different ways, and often resulted in my developing a deeper understanding of the subject. (This was a learning exercise only, though! When I started teaching classes - part of any graduate program - I quickly learned that much of how I thought things ought to be taught was definitely NOT how they should be taught. I pity now those who endured my first couple of years.)
Math is all of math. For physics, you need calculus, differential equations, real analysis, linear algebra. That's about it.
NOT EVEN CLOSE! bobbym is right. Physics goes deep, deep into many varied fields of mathematics. I was led into differential geometry to study both cosmology and particle physics problems. Indeed, significant parts of differential geometry were developed exactly to support the theory of relativity. And my dissertation was developed to support certain aspects of string theory.
Differential geometry requires topology and abstract algebra (arising out of linear algebra, but extending to some very esoteric groups). The abstract algebra in particular has a very strong impact on the physics. Essentially, the reason we have only a finite number of elementary particles is abstract algebra. Also, perhaps the strongest motivation for classifying the finite simple groups was physicists' need to see if there were other groups out there that could be used to explain current observations, which would then have resulted in different physical theories.
I know. I feel the same way about Bartle's "Elements of Real Analysis", which is the book that caused me to start following mathematics as a pursuit in and of itself. It doesn't build a model of the real numbers as Oakley & Allendoerfer apparently does (judging from comments you've made about it in other threads), but the teacher of that class took us through the whole process - from cardinal numbers to Dedekind cuts - before we even cracked the textbook, so I never felt the lack. I still feel that Bartle has the best, most instructional set of exercises of any textbook I have ever read.
They tried that in the '60s and early '70s. It was referred to as "the new math", and it was a dismal failure. Why? One reason was that most of the elementary school teachers never adequately understood what it was they were teaching, so they were unable to communicate it to their students. I learned the basic ideas of set theory in grade school (I was fortunate, I guess, in having some teachers who understood that much), but I was in high school before it became more than a rather boring and pointless thing we talked about a little each year. (I will admit that it helped in high school that I already knew the terminology when we actually started to make use of it.) And it wasn't until college that I came to know its true usefulness and power.
ShivamS - slow down a bit. Knightstar needs to learn how to walk before he can run.
knightstar - there are two types of terminology in mathematics:
1. Terms that refer to specific mathematical objects, such as "one", "two", "addition", etc. These must have very precise definitions.
2. Terms that are used to describe mathematical elements or processes, such as "operand", "addend", "input", "output", etc. These tend to be more loose in definition, like words in ordinary language. How they are defined may differ somewhat from one person to another.
How I would define your terms - aiming at your current mathematical level:
Function - a process that starts with an ordered collection of values (the "inputs") and produces a single value from them (the "output"). It is critical here that if the same collection of inputs is used again, then the same output will be produced. If the function takes only one input value, then it is called a "function of one variable". If it requires an ordered pair of input values, it is called a "function of two variables", etc.
Operation = Function. The only difference between the processes that we call Operations and those we call Functions is how we think of them. We think of operations as ways of combining objects (particularly numbers) to produce other objects. We think of Functions as "machines" that spit out new objects when other objects are fed into them. In particular, Functions are thought of as being objects themselves. However, in truth, they are really the same thing: A unary operation is a function of one variable. A binary operation is a function of two variables, etc.
Operand = one of the inputs to an operation.
Relation = a logical function. That is, one whose output value is either "true" or "false". For example, "=" is a relation of two variables. "Is an integer" is a relation on only one variable.
value = the thing that a variable represents. This may or may not be numeric: in the expression x[sup]2[/sup] - 1, the values of x can be assumed to be numeric. But in the expression
, the values of A and B are apparently sets, not numbers.
Quantity = a real numeric value (not just a number, but a member of the real numbers - no imaginary or non-real complex numbers allowed).
So perhaps it is just a byproduct of my app's algorithm.
Well, of course it's a product of the algorithm. That doesn't mean it isn't interesting. It appears to be a fundamental property of the algorithm itself, not just an artifact of computing limitations (though I could be wrong about that).
I would still like to know what exactly you meant when you wrote that phi was farthest from fractions. And an outline of the algorithm.